\section{Introduction} Evolution algebras were introduced by Tian in his book \cite{T1} (see also \cite{TV}) to model the self-reproduction process in non-Mendelian genetics. As shown in \cite{T1}, the theory of evolution algebras is connected to many areas of mathematics, such as graph theory, group theory, stochastic processes and mathematical physics, among others. Since their introduction, evolution algebras have attracted the attention of several researchers, who were eager to investigate them from an algebraic point of view; see \cite{BCS}-\cite{Imo} and references therein. Here, we focus our attention on the evolution algebras whose square has dimension one; we classify them using the theory of inner product spaces and quadratic forms. We begin by introducing a few key ideas: any commutative algebra $A$ (not necessarily finite dimensional) over a field $K$ with $\dim(A^2)=1$ admits an inner product $\esc{\cdot,\cdot}$ such that the product in $A$ is given by $xy=\esc{x,y}a$, for some fixed element $a\in A$ (unique up to scalar multiples). From here, we obtain three (mutually exclusive) possibilities for $a$: \begin{enumerate} \item\label{one} $a\in\mathrm{Ann}(A)$, which gives $A^3 = (A^2)^2 = 0$. \item $a\notin \mathrm{Ann}(A)$ and $\esc{a,a} = 0$, which implies $(A^2)^2 = 0$ but $A^3\ne 0$. \item $a\notin\mathrm{Ann}(A)$ and $\esc{a,a}\ne 0$, which yields $A^3 \ne 0$ and $(A^2)^2\ne 0$. \end{enumerate} Here $\mathrm{Ann}(A) = \{x \in A \, |\, xA = 0\}$. Thus, the algebras we will be dealing with come in three different flavours given by the trichotomy above. In any case, choosing a complementary subspace $W$ of $\mathrm{Ann}(A)$ we obtain a decomposition $A = \mathrm{Ann}(A)\oplus W$ such that $\esc{\cdot, \cdot}\vert_W$ is nondegenerate. It is worth mentioning that we have considerable freedom in the choice of $W$, so depending on the flavour of our algebra we will require $W$ to satisfy certain conditions. If $A$ is an evolution algebra of flavour \eqref{one}, we will show (in Theorem \ref{Iso1}) that the isomorphism class of $A$ is completely determined by the pair $(\dim(A),[W])$, where $[W]$ denotes the isometry type of $W$. More precisely, if $A = \mathrm{Ann}(A)\oplus W_A$ and $B = \mathrm{Ann}(B) \oplus W_B$ are as in \eqref{one}, then we will prove that $A \cong B$ if and only if $\dim(A) = \dim(B)$, and $W_A$ and $W_B$ are isometric. Similar results for flavours (2) and (3) will also be established. To do so, we will make use of the theory of inner products and/or quadratic forms. The paper is organized as follows: in Section 2, we introduce the required background. Section 3 begins by noticing that our study of the evolution algebras $A$ with $\dim (A^2) = 1$ must be divided into two cases, according to whether $(A^2)^2 \neq 0$ or $(A^2)^2 = 0$; these are treated in \S 3.1 and \S 3.2, respectively.
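To make the trichotomy concrete, the following minimal computational sketch (ours, and purely illustrative) classifies the flavour of an evolution algebra $A = \mathbb{R}^n$ with natural basis $e_1, \ldots, e_n$ and products $e_i^2 = c_i a$ for a fixed vector $a$ and weights $c_i$, so that $\esc{x,y} = \sum_i c_i x_i y_i$.
\begin{verbatim}
# A sketch (assumed setup): evolution algebra A = R^n with natural basis
# e_1,...,e_n, products e_i^2 = c_i * a, hence <x,y> = sum_i c_i x_i y_i.
def flavour(c, a, tol=1e-12):
    # a lies in Ann(A) iff <a, e_i> = c_i * a_i = 0 for every i
    in_ann = all(abs(ci * ai) <= tol for ci, ai in zip(c, a))
    norm_a = sum(ci * ai * ai for ci, ai in zip(c, a))  # <a, a>
    if in_ann:
        return 1                       # A^3 = (A^2)^2 = 0
    return 2 if abs(norm_a) <= tol else 3

print(flavour([0, 1, 1], [1, 0, 0]))   # -> 1: a in Ann(A)
print(flavour([1, -1, 0], [1, 1, 0]))  # -> 2: <a,a> = 0, a not in Ann(A)
print(flavour([1, 1, 1], [1, 0, 0]))   # -> 3: <a,a> = 1
\end{verbatim}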
\section{Preliminaries} Throughout the paper, $V$ denotes a vector space over a field $K$. An {\bf inner product space} is a pair $(V, \esc{\cdot, \cdot})$, where $\esc{\cdot, \cdot}: V \times V \to K$ is a symmetric bilinear form. Two inner product spaces $(V,\esc{\cdot, \cdot})$ and $(V',\esc{\cdot, \cdot}')$ are said to be {\bf isometric} or {\bf equivalent} if there is a (vector space) isomorphism $f: V \to V'$ such that $\esc{f(x),f(y)}'= \esc{x,y}$ for all $x, y \in V$. \smallskip A map $q: V\to K$ satisfying that \begin{enumerate} \item[\rm (1)] $q(\lambda v) = \lambda^2 q(v), \quad \forall \, \, \lambda \in K, \, \, v \in V$, \item[\rm (2)] the map $\langle \cdot, \cdot \rangle_q: V \times V \to K$ given by $(x, y) \mapsto q(x + y) - q(x) - q(y)$, and called the {\bf polar form} of $q$, is bilinear, \end{enumerate} is called a {\bf quadratic form} on $V$, and $(V, q)$ is said to be a {\bf quadratic space}. If the characteristic of $K$ is different from 2, the notions of inner product space and quadratic space are equivalent: the polar form of a quadratic form $q$ is now defined by $\langle x, y \rangle_q = \frac{1}{2}\left(q(x + y) - q(x) - q(y)\right)$, and satisfies $\langle x, x\rangle_q = q(x)$. Conversely, if $(V,\esc{\cdot, \cdot})$ is an inner product space, then $q(x):= \esc{x, x}$, for all $x \in V$, defines a quadratic form whose polar form is precisely $\esc{\cdot, \cdot}$. In such a case, we may write $(V, q)$ to refer to the quadratic space $(V, \langle \cdot, \cdot \rangle)$, and vice versa. \smallskip The {\bf radical} of an inner product space $(V, \langle \cdot, \cdot \rangle)$ is the subspace of $V$ given by \[ V^\bot = \{x \in V \, |\, \esc{x,V} = 0\}, \] and $(V, \esc{\cdot, \cdot})$ is said to be {\bf nondegenerate} if $V^\bot = 0$; a {\bf subspace} $W$ of $V$ is called {\bf nondegenerate} if $(W, \esc{\cdot, \cdot}|_W)$ is nondegenerate. Recall that $V$ can be written as \begin{equation} \label{decomV} V = V^\bot \oplus W, \end{equation} for $W$ a nondegenerate subspace of $V$. If $V$ has finite dimension, then the matrix $M_B$ of $\langle \cdot, \cdot \rangle$ with respect to a basis $B$ of $V$ is called the {\bf Gram matrix} of $\esc{\cdot, \cdot}$ with respect to $B$. If $M_B$ and $M_{B'}$ are Gram matrices with respect to bases $B$ and $B'$ of $V$, then $M_B$ and $M_{B'}$ have the same rank; the {\bf rank} of $(V, \esc{\cdot, \cdot})$ is defined as the rank of a Gram matrix $M$ of $\esc{\cdot, \cdot}$, and $(V, \esc{\cdot, \cdot})$ is nondegenerate if and only if $M$ is nonsingular. The {\bf discriminant} of $(V, \esc{\cdot, \cdot})$ is defined as zero if $(V, \esc{\cdot, \cdot})$ is degenerate, and as the coset of $\det(M)$ in the factor group $K^*/{(K^*)}^2$ otherwise. The discriminants of two nondegenerate equivalent inner product spaces coincide. If the characteristic of $K$ is not 2, then we can consider the associated quadratic form $q$ of $(V, \esc{\cdot, \cdot})$ and define its rank. Similarly, one can define the discriminant of $q$ provided that $(V, \esc{\cdot, \cdot})$ is nondegenerate. If $V$ has finite dimension $n$, then a real quadratic form $q: V \to \mathbb R$ of rank $r$ can be expressed as \[ q(x_1, \ldots, x_n) = x_1^2 + \ldots + x_p^2- x_{p+1}^2 - \ldots - x_r^2, \] with respect to a suitable basis of $V$; the {\bf signature} of $q$ is defined as $(p, \, r - p)$. Recall that (finite dimensional) inner product spaces over algebraically closed fields are classified (up to congruence) by their rank; over the reals they are classified according to their rank and signature (see \cite[Theorem 6.8]{BAI}); while over finite fields of odd characteristic their rank and discriminant constitute a complete set of invariants (see \cite[Theorem 6.9]{BAI}). Over other types of fields many different invariants are available; for instance, over a local field these are the dimension, the discriminant and the so-called Hasse invariant (see \cite[p. 39]{Serre}). Lastly, diagonalizable inner product spaces over quadratically closed fields are classified by their rank.
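For illustration, the following sketch (ours; numpy is assumed to be available) computes the rank and signature of a real inner product space from a Gram matrix, using the fact that, by Sylvester's law of inertia, these invariants can be read off from the signs of the eigenvalues of any Gram matrix.
\begin{verbatim}
# Sketch: rank and signature of a real symmetric Gram matrix M.
import numpy as np

def rank_and_signature(M, tol=1e-9):
    eig = np.linalg.eigvalsh(M)   # real eigenvalues of the symmetric M
    p = int(np.sum(eig > tol))    # number of positive squares
    m = int(np.sum(eig < -tol))   # number of negative squares
    return p + m, (p, m)          # rank r and signature (p, r - p)

M = np.diag([1.0, 1.0, -1.0, 0.0])  # q = x_1^2 + x_2^2 - x_3^2
print(rank_and_signature(M))        # -> (3, (2, 1))
\end{verbatim}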
\section{The first dichotomy} An {\bf algebra} $A$ over $K$ is a vector space over $K$ endowed with a bilinear map $A \times A \to A$ written as $(a, b) \mapsto ab$, and called the {\bf product} of $A$. We say that $A$ is an {\bf evolution algebra} if there exists a basis $\{a_i\}_{i \in I}$ of the underlying vector space of $A$, called a {\bf natural basis}, such that $a_ia_j = 0$ for all $i \neq j$. \begin{remark} \label{product} Let $A$ be an algebra such that $\dim (A^2) = 1$. Then we can find $0 \neq a \in A$ such that $A^2 = Ka$, and the product of any two elements $x, y \in A$ is given by $xy = \lambda_{xy} a$, where $\lambda_{xy} \in K$ depends linearly on both $x$ and $y$. In other words, the map $\langle \cdot, \cdot \rangle: A \times A \to K$ given by $\langle x, y \rangle = \lambda_{xy}$, for all $x, y \in A$, is an inner product on $A$. Clearly, \begin{equation} \label{productA} xy = \langle x, y \rangle a, \mbox{ for all } \, \, x, y \in A. \end{equation} In this case, notice that \eqref{decomV} becomes \begin{equation} \label{decomA} A = \mathrm{Ann}(A) \oplus W; \end{equation} in other words, $\mathrm{Ann}(A) = A^\bot$. \end{remark} At this point, it is worth mentioning that if $A$ is an evolution algebra, then \eqref{productA} does not depend on the generator $a$ of $A^2$. We prove this fact in a more general way using pointed vector spaces. \begin{lemma} \label{pointed} Let $(S, s)$ be a pointed vector space, where $0 \neq s \in S$, and $(U, \langle \cdot, \cdot \rangle)$ a nondegenerate inner product space. Then the direct sum $A_s := S \oplus U$ becomes an algebra with 1-dimensional square under the product \[ (s_1 + u_1) (s_2 + u_2) = \langle u_1, u_2 \rangle s, \] for all $s_1, s_2 \in S$ and $u_1, u_2 \in U$. Moreover, if $s'$ is another nonzero element of $S$, then $A_s$ and $A_{s'}$ are isomorphic. \end{lemma} \begin{proof} It is straightforward to check that $A_s$ is an algebra such that $\dim(A^2_s) = 1$. For $(S, s')$ another pointed vector space, take $\theta: S \to S$ a bijective linear map such that $\theta(s) = s'$. The map $f:A_s \to A_{s'}$ given by $f(s + u) = \theta(s) + u$, for all $s \in S$ and $u \in U$, is the desired isomorphism. \end{proof} \begin{proposition} \label{dicho} Let $A$ be a commutative algebra with $\dim(A^2) = 1$. Then either $(A^2)^2 = 0$ or there is a unique nonzero idempotent in $A$. \end{proposition} \begin{proof} From $\dim(A^2) = 1$ we can find $0 \neq a \in A$ such that $A^2 = Ka$. If $(A^2)^2 \neq 0$, then $0 \neq a^2 = \lambda a$, for some $0 \neq \lambda \in K$. We claim that $e := \lambda^{-1}a$ is the unique nonzero idempotent of $A$; in fact: \[ e^2 = \lambda^{-2}a^2 = \lambda^{-2}(\lambda a) = \lambda^{-1}a = e. \] Clearly, $A^2 = Ke$. If $0 \neq u \in A$ is an idempotent, then $u = u^2 \in A^2 = Ke$, and so $u = \gamma e$ for some $0 \neq \gamma \in K$. But then \[ \gamma e = u = u^2 = \gamma^2 e, \] which implies that $\gamma = 1$, and so $u = e$. \end{proof}
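The idempotent construction in Proposition \ref{dicho} is easy to verify numerically; the following sketch (ours) does so for the product $xy = \langle x, y \rangle a$ on $\mathbb{R}^3$ with the standard dot product and a hypothetical choice of $a$.
\begin{verbatim}
# Sketch: with x*y = <x,y> a and <a,a> = lambda != 0, the element
# e = a/lambda is idempotent (Proposition dicho).
import numpy as np
a = np.array([2.0, 0.0, 0.0])
prod = lambda x, y: np.dot(x, y) * a
lam = np.dot(a, a)                 # a^2 = lam * a, here lam = 4
e = a / lam
print(np.allclose(prod(e, e), e))  # -> True
\end{verbatim}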
\subsection{Case $(A^2)^2 \neq 0$} We first study the evolution algebras $A$ whose square is 1-dimensional and which satisfy the condition $(A^2)^2 \neq 0$. \smallskip The following result follows from Remark \ref{product} and Proposition \ref{dicho}. \begin{proposition} \label{inner} Let $A$ be a commutative algebra such that $\dim(A^2) = 1$ and $(A^2)^2 \neq 0$. Then there exists a unique inner product $\langle \cdot, \cdot \rangle: A \times A \to K$ such that the product of $A$ is given by \[ xy = \langle x, y \rangle e, \, \, \forall \, \, x, y \in A, \] where $e$ is the unique nonzero idempotent of $A$. Moreover, $\langle e, e \rangle = 1$. \end{proposition} \begin{definition} The inner product defined in Proposition \ref{inner} is called the {\bf canonical inner product} of $A$, and $(A, \langle \cdot, \cdot \rangle)$ the {\bf canonical inner product space}. \end{definition} \begin{proposition} Let $A$ be an evolution algebra such that $\dim(A^2) = 1$ and $(A^2)^2 \neq 0$. Then the canonical inner product of $A$ is diagonalizable, that is, there exists an orthogonal basis of $A$. \end{proposition} \begin{proof} Suppose that $\{a_i\}_{i\in I}$ is a natural basis of $A$. Then, for all $i \neq j$ we have \[ 0 = a_ia_j = \langle a_i, a_j \rangle e, \] which implies $\langle a_i, a_j \rangle = 0$. \end{proof} \begin{conclu} \label{conclu1} {\rm We have proved that the product of an evolution algebra $A$ satisfying $(A^2)^2\ne 0$ and $\dim(A^2) = 1$ is completely determined by a (unique) inner product $\langle \cdot, \cdot \rangle: A \times A \to K$ (diagonalizable with respect to a natural basis of $A$) and a (unique) nonzero idempotent $e$ of $A$ such that $\langle e, e \rangle = 1$. To be more precise, $xy = \langle x, y \rangle e$, for all $x, y \in A$. A kind of converse holds: if $(V, \langle \cdot, \cdot \rangle)$ is an inner product space with an orthogonal basis and a vector $v$ of norm one, then we can endow $V$ with an evolution algebra structure via the product $xy = \langle x, y \rangle v$, for all $x, y \in V$. Moreover, $\dim(V^2) = 1$.} \end{conclu} We can reformulate our problem using categories: \begin{itemize} \item ${\mathcal A}_K$ denotes the category whose objects are the evolution $K$-algebras $A$ satisfying that $(A^2)^2 \ne 0$ and $\dim(A^2) = 1$, and whose morphisms are the algebra homomorphisms; notice that ${\mathcal A}_K$ is a full subcategory of the category of all $K$-algebras; \item ${\mathcal B}_K$ denotes the category whose objects are triples of the form $\big(V, \langle \cdot, \cdot \rangle, v \big)$, where $(V, \langle \cdot, \cdot \rangle)$ is a (diagonalizable) inner product space, and $v \in V$ a norm-one vector; a morphism $f: \big(V, \langle \cdot, \cdot \rangle, v) \to \big(V', \langle \cdot, \cdot \rangle', v'\big)$ in ${\mathcal B}_K$ is a linear map $f: V \to V'$ such that $\langle x, y \rangle f(v) = \langle f(x), f(y)\rangle'v'$, for all $x, y\in V$; \item ${\mathcal A}_K^0$ stands for the full subcategory of ${\mathcal A}_K$ consisting of finite dimensional evolution algebras, while ${\mathcal B}_K^0$ denotes the full subcategory of ${\mathcal B}_K$ of triples $\big(V, \langle \cdot, \cdot \rangle, v \big)$, where $V$ is finite dimensional. \end{itemize} We begin by characterizing the isomorphisms in ${\mathcal B}_K$: \begin{theorem} \label{caractIso} A morphism $f: \big(V, \langle \cdot, \cdot \rangle, v \big) \to \big(V', \langle \cdot, \cdot \rangle', v'\big)$ in ${\mathcal B}_K$ is an isomorphism if and only if $f$ is an isometry and $f(v) = v'$. \end{theorem} \begin{proof} Suppose that $f$ is an isomorphism. Then for $\lambda := \langle f(v), f(v) \rangle' \in K$, we have that \[ f(v) = \langle f(v), f(v)\rangle' v' = \lambda v', \] since $f$ is a morphism in ${\mathcal B}_K$ and $\langle v, v \rangle = 1$.
From here we obtain that \[ \lambda v' = \langle \lambda v', \lambda v' \rangle' v' = \lambda^2 v', \] which implies that $\lambda = 1$. Thus $f(v) = v'$, and then the morphism condition $\langle x, y \rangle f(v) = \langle f(x), f(y)\rangle' v'$ gives $\langle x, y \rangle = \langle f(x), f(y)\rangle'$ for all $x, y \in V$, that is, $f$ is an isometry. The converse clearly holds. \end{proof} \begin{theorem} The categories ${\mathcal A}_K$ and ${\mathcal B}_K$ are isomorphic. \end{theorem} \begin{proof} The functors $F: {\mathcal A}_K \to {\mathcal B}_K$ and $G: {\mathcal B}_K \to {\mathcal A}_K$ mapping an evolution algebra $A$ to the triple $\big(A, \langle \cdot, \cdot \rangle, e\big)$ (where $e$ is the unique nonzero idempotent of $A$, and $\langle \cdot, \cdot \rangle$ is its canonical inner product), and a triple $\big(V, \langle \cdot, \cdot \rangle, v\big)$ to the evolution algebra $V$ whose product is $xy := \langle x, y \rangle v$, for all $x, y \in V$, respectively, are well-defined by Proposition \ref{inner} and Conclusion \ref{conclu1}. It is straightforward to check that $FG = 1_{{\mathcal B}_K}$ and $GF = 1_{{\mathcal A}_K}$. \end{proof} A very important consequence of this theorem is the following: \begin{corollary} The problem of classifying (up to isomorphism) the evolution algebras in ${\mathcal A}_K$ is equivalent to classifying (up to isomorphism) the triples $\big(V, \langle \cdot, \cdot \rangle, v\big)$ in ${\mathcal B}_K$. \end{corollary} Keeping in mind that two objects in ${\mathcal B}_K$ are isomorphic precisely when there is an isometry between them matching the distinguished vectors, what we are indeed doing here is transitioning from an algebraic classification problem to a geometric one. \begin{remark} \label{new} If $K$ is quadratically closed and $(V, \esc{\cdot, \cdot})$ is finite dimensional, diagonalizable and nondegenerate, then there exists a basis $B$ of $V$ such that the Gram matrix $M_B$ is the identity. As a consequence, any two diagonalizable, nondegenerate inner product spaces of the same dimension are isometric. Moreover, if $(V_1,\esc{\cdot, \cdot}_1)$ and $(V_2,\esc{\cdot, \cdot}_2)$ are diagonalizable and such that $\dim(V_1) = \dim(V_2)$ and $\dim(V^\bot_1) = \dim(V^\bot_2)$, then $(V_1,\esc{\cdot, \cdot}_1)$ and $(V_2,\esc{\cdot, \cdot}_2)$ are isometric. \end{remark} Using Witt's (Isometry) Extension Theorem, we can provide a way to construct isomorphic objects in ${\mathcal B}_K^0$. Before doing so, we remind the reader of a basic property of vector spaces that is very useful for our purposes. \begin{remark} \label{complemento} Let $V$ be a vector space and $U$ a subspace of $V$. If $v \in V$ is such that $v \notin U$, then there exists a subspace $U'$ of $V$ such that $v \in U'$ and $V = U \oplus U'$. \end{remark} \begin{lemma}\label{admunsen} Let $K$ be a field of characteristic different from two, or perfect of characteristic two, and let $(V, q, v) \in {\mathcal B}_K^0$. Suppose that $V = V^\bot \oplus V'$ for $V'$ a subspace of $V$ containing $v$. If $w \in V'$ satisfies that $q(w) = 1$, then $(V, q, v)$ and $(V, q, w)$ are isomorphic in ${\mathcal B}_K^0$. \end{lemma} \begin{proof} Suppose first that the characteristic of $K$ is not two. Then the linear map from $Kv$ onto $Kw$ mapping $v$ onto $w$ is an isometry, which can be extended to an isometry $\theta'$ of $V'$, by Witt's (Isometry) Extension Theorem. It is straightforward to check that the map $\theta: V \to V$ given by $\theta(t + x) = t + \theta'(x)$, for all $t \in V^\bot$ and $x \in V'$, is an isometry of $V$ mapping $v$ to $w$; the result follows from Theorem \ref{caractIso}. Assume now that $K$ is perfect of characteristic two.
In this case, $V$ can be written as orthogonal direct sums $V = Kv \oplus U' = Kw \oplus W'$ such that $V^\bot = U'^\bot = W'^\bot$. Thus, $U' = V^\bot \oplus V''$ and $W'= V^\bot \oplus W''$, which imply that $V = Kv \oplus V^\bot \oplus V'' = Kw \oplus V^\bot \oplus W''$. Thus $(V'', \esc{\cdot, \cdot}|_{V''})$ and $(W'', \esc{\cdot, \cdot}|_{W''})$ are nondegenerate and have the same dimension, and so they are isometric by Remark \ref{new}. If $\theta'': V'' \to W''$ is an isometry, then we can easily construct an isometry $\theta: V \to V$ such that $\theta|_{V''} = \theta''$ and $\theta(v) = w$; the result follows again from Theorem \ref{caractIso}. \end{proof} An immediate consequence of Lemma \ref{admunsen} in terms of isomorphisms of evolution algebras follows: \begin{theorem} \label{uncaso} Let $K$ be a field of characteristic different from two or perfect of characteristic two. Two evolution algebras $A$ and $B$ in ${\mathcal A}_K$ are isomorphic if and only if their canonical inner product spaces are isometric. Moreover, if $K$ is algebraically closed, or of characteristic two and perfect, then $A$ and $B$ are isomorphic if and only if the ranks of their canonical inner products coincide; if $K = \mathbb R$, then $A$ is isomorphic to $B$ if and only if the ranks and signatures of their canonical inner products coincide. \end{theorem} We close this first case with some concrete examples: \begin{example} In dimension 4, Theorem \ref{uncaso} tells us that there are four different isomorphism classes in $\mathcal{A}_\mathbb C$, which correspond to the triples $(\mathbb C^4, q_i, e_1)$, where $e_1 = (1, 0, 0, 0)$, and $q_1, q_2, q_3, q_4$ are as follows: \begin{align*} q_1(x_1, x_2, x_3, x_4) & = x_1^2, \\ q_2(x_1, x_2, x_3, x_4) & = x_1^2 + x_2^2, \\ q_3(x_1, x_2, x_3, x_4) & = x_1^2 + x_2^2 + x_3^2, \\ q_4(x_1, x_2, x_3, x_4) & = x_1^2 + x_2^2 + x_3^2 + x_4^2, \end{align*} with respect to the canonical basis of $\mathbb C^4$. \end{example} \begin{example} In dimension 3, Theorem \ref{uncaso} reveals that we have six different isomorphism classes in $\mathcal{A}_\mathbb R$ corresponding to the triples $(\mathbb R^3, q_i, e_1)$, where $e_1 = (1, 0, 0)$ and $q_1, \ldots, q_6$ are displayed in the table below, where the coordinates are with respect to the canonical basis of $\mathbb R^3$. \begin{center} \begin{tabular}{|c|c|c|} \hline $q_i$ & rank & signature\cr \hline $x_1^2$ & $1$ & $(1,0)$ \cr $x_1^2 + x_2^2$ & $2$ & $(2,0)$ \cr $x_1^2 - x_2^2$ & $2$ & $(1,1)$ \cr $x_1^2 + x_2^2 + x_3^2$ & $3$ & $(3,0)$ \cr $x_1^2 + x_2^2 - x_3^2$ & $3$ & $(2,1)$ \cr $x_1^2 - x_2^2 - x_3^2$ & $3$ & $(1,2)$ \cr \hline \end{tabular} \end{center} \end{example} \begin{example} Let $\mathbf{F}_4$ be the field of four elements, which is perfect. In dimension 3, Theorem \ref{uncaso} tells us that there are three distinct classes in $\mathcal{A}_{\mathbf{F}_4}$, which correspond to the inner products (on the $\mathbf{F}_4$-vector space $\mathbf{F}^3_4$) whose matrices are $\mathrm{Id}$, and the diagonal matrices $\mathrm{diag}(1, 1, 0)$ and $\mathrm{diag}(1, 0, 0)$. \end{example} \begin{remark} Recall that quadratic forms on a finite dimensional vector space over a finite field of odd characteristic are classified (up to congruence) by their rank and discriminant. Thus $(V_1,q_1,v_1) \cong (V_2, q_2, v_2)$ in ${\mathcal B}_K^0$ if and only if $\dim(V_1) = \dim(V_2)$, $\dim(V_1^\bot) = \dim(V_2^\bot)$ and the discriminant of ${q_1}\vert_{V'_1}$ coincides with that of ${q_2}\vert_{V'_2}$, where $v_i\in V'_i$ and $V'_i$ is a complement of $V_i^\bot$, for $i = 1, 2$.
\end{remark} \begin{example} Let $K = \mathbf{F}_3(i) = \{0, \pm 1, \pm i, \pm(1 + i), \pm(1 - i)\}$, where $i^2 = -1$, be the field of nine elements. Notice that $K$ can be seen as an extension of the field of three elements $\mathbf{F}_3$ by adjoining an element of square $-1$. We have that $(K^\times)^2 = \{ \pm 1, \pm i\}$ is the cyclic group of order $4$, and the quotient group $K^\times/(K^\times)^2$ is the cyclic group of order $2$. We can then write $K^\times/(K^\times)^2 = \{[1],[\omega]\}$, where $\omega = 1 + i$ and $[\cdot]$ denotes the corresponding equivalence class. From here we obtain that the discriminant of a quadratic form over $K$ is either $[1]$ or $[\omega]$. In particular, for $V = K^n$ and $q: V \to K$ nondegenerate, we have two possibilities: either $q$ is congruent to $x_1^2 +x_2^2+ \cdots + x_n^2$ or to $\omega x_1^2 + x_2^2 + \cdots + x_n^2$. A natural question arises: how many isomorphism classes of $3$-dimensional evolution $K$-algebras $A$ with $\dim(A^2)=1$ and $(A^2)^2\ne 0$ are there? We obtain the following three types: \begin{enumerate} \item If $\hbox{Ann}(A)=0$, then $A\cong K^3$ with product \begin{align*} (x,y,z)(x',y',z') & = (xx'+yy'+zz')(1,0,0), \mbox{ or } \\ (x,y,z)(x',y',z') & = (\omega xx'+yy'+zz')(1,0,0). \end{align*} \item If $\dim(\hbox{Ann}(A))=1$, then $A\cong K\times K^2$ with product \begin{align*} (x,y,z)(x',y',z') &= (yy'+zz')(0,1,0), \mbox{ or } \\ (x,y,z)(x',y',z') &= (\omega yy'+zz')(0,1,0). \end{align*} \item If $\dim(\hbox{Ann}(A))=2$, then $A\cong K^2\times K$ with product \begin{align*} (x,y,z)(x',y',z') & =zz'(0,0,1), \mbox{ or } \\ (x,y,z)(x',y',z') &= \omega zz' (0,0,1). \end{align*} \end{enumerate} \end{example}
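The computation of $(K^\times)^2$ in the example above is immediate to verify by machine; in the following pure-Python sketch (ours) we encode $K = \mathbf{F}_3(i)$ as pairs $(a, b) \leftrightarrow a + bi$ with arithmetic modulo 3.
\begin{verbatim}
# Sketch: squares in F_9 = F_3(i); elements are pairs (a, b) = a + b*i.
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3,
                    (x[0]*y[1] + x[1]*y[0]) % 3)
units = [(a, b) for a in range(3) for b in range(3) if (a, b) != (0, 0)]
squares = {mul(x, x) for x in units}
print(sorted(squares))    # the four squares {1, -1, i, -i}
print((1, 1) in squares)  # -> False: omega = 1 + i is a non-square
\end{verbatim}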
\subsection{Case $(A^2)^2 = 0$} We treat now the remaining case: evolution algebras $A$ such that $\dim (A^2) = 1$ and $(A^2)^2 = 0$. \smallskip \begin{lemma} \label{lemmaanndecomp} Let $A$ be an evolution $K$-algebra such that $\dim(A^2) = 1$ and $(A^2)^2 = 0$. Then $A = \mathrm{Ann}(A) \oplus W$, where $(W, \esc{\cdot, \cdot}\vert_W)$ is nondegenerate and has an orthogonal basis $\{w_i\}_{i \in I}$ of nonisotropic vectors. Moreover, if $K$ is quadratically closed, then $\{w_i\}_{i \in I}$ can be taken to be an orthonormal basis. \end{lemma} \begin{proof} By \eqref{decomA} we have $A = \mathrm{Ann}(A) \oplus W$, for $W$ a subspace of $A$ such that $(W, \esc{\cdot, \cdot}\vert_W)$ is nondegenerate. Take $\{e_i\}_{i \in J}$ a natural basis of $A$, and express each $e_i$ as $e_i = r_i + w_i$, for $r_i \in \mathrm{Ann}(A)$ and $w_i \in W$. Let $J_W = \{i \in J \mid w_i \neq 0\}$. We claim that $\{w_i\}_{i \in J_W}$ is a basis of $W$; in fact, it clearly spans $W$, and for $i \neq j$ we have that \[ 0 = e_ie_j = (r_i + w_i)(r_j + w_j) = r_ir_j + r_iw_j + w_ir_j + w_iw_j = w_iw_j, \] since $r_i, r_j \in \mathrm{Ann}(A)$. This shows that the $w_i$'s are pairwise orthogonal. Now if $w_i \neq 0$ and $w^2_i = 0$ for some $i$, then $\langle w_i, w_i \rangle = 0$, and so $\langle w_i, W \rangle = 0$, which implies that $w_i \in W^\bot \cap W = 0$, a contradiction. Thus $\langle w_i, w_j \rangle = 0$ if $i \neq j$, and $\langle w_i, w_i \rangle \neq 0$ (provided that $w_i \neq 0$), which implies that the $w_i$'s are linearly independent, and so $\{w_i\}_{i \in J_W}$ is a basis of $W$. To finish, notice that if $K$ is quadratically closed, then the set $\Big\{\frac1{\sqrt{\langle w_i, w_i\rangle}} w_i \Big\}_{i \in J_W}$ is an orthonormal basis of $W$. This finishes the proof. \end{proof} \smallskip Let $A$ be an evolution algebra such that $\dim (A^2) = 1$ and $(A^2)^2 = 0$. We proceed as in Remark \ref{product}, and we write the product in $A$ as \[ xy = \langle x, y \rangle a, \, \mbox{ for all } \, x, y \in A, \] where $a \in A$ satisfies $A^2 = Ka$. Notice that $(A^2)^2 = 0$ implies $a^2 = 0$, and so $\langle a, a \rangle = 0$, which says that $a$ is isotropic. In what follows, we distinguish two subcases, depending on whether $a \in \mathrm{Ann}(A)$ or $a \notin \mathrm{Ann}(A)$. \subsubsection{{\bf Subcase:} $a \in \mathrm{Ann}(A)$, or equivalently, $A^3 = 0$} It turns out that $A$ is associative (and commutative) in this case. \begin{remark} Let $A$ be an evolution algebra as in Lemma \ref{lemmaanndecomp}. If $A^3 = 0$, then the basis $\{w_i\}$ of $W$ is such that $w^2_i \in \mathrm{Ann}(A)$ for all $i$. \end{remark} \begin{theorem} \label{Iso1} Let $A$ and $B$ be evolution algebras satisfying that $\dim(A^2) = \dim(B^2) = 1$, $(A^2)^2 = A^3 = 0$ and $(B^2)^2 = B^3 = 0$. Write $A = \mathrm{Ann}(A) \oplus W_A$ and $B = \mathrm{Ann}(B) \oplus W_B$ as in \eqref{decomA}. Then $A$ and $B$ are isomorphic if and only if $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$, and the spaces $W_A$ and $W_B$ are isometric. \end{theorem} \begin{proof} Suppose first that $\theta: (W_A, \langle \cdot, \cdot \rangle \vert_{W_A}) \to (W_B, \langle \cdot, \cdot \rangle \vert_{W_B})$ is an isometry and that $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$. For a linear isomorphism $\xi: \mathrm{Ann}(A) \to \mathrm{Ann}(B)$, one can easily check that the map $F: A \to B$ given by $F(y + z) = \xi(y) + \theta(z)$, for all $y \in \mathrm{Ann}(A)$ and $z \in W_A$, is the desired isomorphism. Conversely, suppose that $F: A \to B$ is an isomorphism. Then $B = \hbox{Ann}(B)\oplus F(W_A)$, and since any two nondegenerate complements of $\mathrm{Ann}(B)$ are isometric, we may assume that $W_B = F(W_A)$. If $a \in A$ is such that $A^2 = Ka$, then Lemma \ref{pointed} allows us to choose $b = F(a)$ as the generator of $B^2$. It is clear that $F\vert_{\mathrm{Ann}(A)}$ is a linear isomorphism, and so $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$. It remains to show that $\theta = F\vert_{W_A}$ is an isometry; in fact, for $z_1, z_2 \in W_A$ we have that \[ z_1z_2 = \langle z_1, z_2 \rangle a, \] which implies that \[ \langle z_1, z_2 \rangle b = F(z_1z_2) = F(z_1)F(z_2) = \theta(z_1)\theta(z_2) = \langle \theta(z_1), \theta (z_2) \rangle b. \] Thus $\langle z_1, z_2 \rangle = \langle \theta(z_1), \theta (z_2) \rangle$, proving that $\theta$ is an isometry, as desired. \end{proof} \begin{example} Using Theorem \ref{Iso1} we can determine the 3-dimensional real evolution algebras $A$ such that $\dim(A^2) = 1$ and $(A^2)^2 = A^3 = 0$. In fact, for $d = \dim(\mathrm{Ann}(A))$, we have the following cases: \begin{itemize} \item $d = 2$: We have that $A \cong \mathbb R^3$ with $\dim(W) = 1$. There are two nonisomorphic algebras with products given by \[ (x_1, x_2, x_3)(y_1, y_2, y_3) = (\pm x_3y_3, 0, 0). \] \item $d = 1$: We have that $A \cong \mathbb R^3$ with $\dim(W) = 2$. There are three nonisomorphic algebras with products given by \begin{align*} (x_1, x_2, x_3)(y_1, y_2, y_3) &= (x_2y_2 + x_3y_3, 0, 0); \\ (x_1, x_2, x_3)(y_1, y_2, y_3) &= (x_2y_2 - x_3y_3, 0, 0); \\ (x_1, x_2, x_3)(y_1, y_2, y_3) &= (-x_2y_2 - x_3y_3, 0, 0). \end{align*} \item $d = 0$: This case cannot occur, since $A^3 = 0$ forces $0 \neq a \in \mathrm{Ann}(A)$, and hence $d \geq 1$.
\end{itemize} \end{example} We can improve Theorem \ref{Iso1}, provided that the field $K$ is quadratically closed. \begin{theorem} \label{A3=0} Let $K$ be a quadratically closed field. Then the isomorphism class of an evolution algebra $A$ with $\dim(A^2) = 1$ and $(A^2)^2 = A^3 = 0$ is completely determined by $\dim (A)$ and $\dim \big(\mathrm{Ann}(A)\big)$. \end{theorem} \begin{proof} Let $B$ be an evolution algebra with $\dim(B^2) = 1$ and $(B^2)^2 = B^3 = 0$. If $\dim(B) = \dim(A)$ and $\dim \big(\mathrm{Ann}(B)\big) = \dim \big(\mathrm{Ann}(A)\big)$, then $\dim(W_A) = \dim(W_B)$ by \eqref{decomA}. Thus, $W_A$ and $W_B$ are isometric by Lemma \ref{lemmaanndecomp}. \noindent The converse follows from Theorem \ref{Iso1} and \eqref{decomA}. \end{proof} \begin{example} Over $\mathbb C$, in dimension 4, Theorem \ref{A3=0} tells us that there are three different isomorphism classes of evolution algebras $A$ with $\dim(A^2) = 1$ and $(A^2)^2 = A^3 = 0$, corresponding to $\dim(\mathrm{Ann}(A)) = 1, 2, 3$, respectively. Their quadratic forms are given by \begin{align*} q_1(x_1, x_2, x_3, x_4) & = x_2^2 + x_3^2 + x_4^2, \\ q_2(x_1, x_2, x_3, x_4) & = x_3^2 + x_4^2, \\ q_3(x_1, x_2, x_3, x_4) & = x_4^2, \end{align*} with respect to the canonical basis of $\mathbb C^4$. \end{example} \begin{example} Let us now classify the $3$-dimensional evolution algebras $A$ with $\dim(A^2) = 1$ and $(A^2)^2 = A^3 = 0$ over the field $K$ of nine elements. In this case the existence of a nonzero annihilator is compulsory, and we obtain two types: \begin{enumerate} \item If $\dim(\hbox{Ann}(A))=1$, then $A\cong K\times K^2$ with product \begin{align*} (x,y,z)(x',y',z') & =(yy'+zz')(1,0,0), \mbox{ or } \\ (x,y,z)(x',y',z') & =(\omega yy'+zz')(1,0,0). \end{align*} \item If $\dim(\hbox{Ann}(A))=2$, then $A\cong K^2\times K$ with product \begin{align*}(x,y,z)(x',y',z') &= zz'(1,0,0), \mbox{ or } \\ (x,y,z)(x',y',z') &= \omega zz' (1,0,0). \end{align*} \end{enumerate} \end{example} \subsubsection{{\bf Subcase:} $A^3 \neq 0$} Suppose that $A = \mathrm{Ann}(A) \oplus W$ is as in \eqref{decomA}. If $a = x + w$, for $x \in \mathrm{Ann}(A)$ and $w \in W$, then $w$ is also isotropic; in fact, since $x \in \mathrm{Ann}(A) = A^\bot$: \begin{equation} \label{isotropico} 0 = \langle a, a \rangle = \langle x, x \rangle + 2 \langle x, w \rangle + \langle w, w \rangle = \langle w, w \rangle. \end{equation} The proof of the next result is very similar to the proof of Theorem \ref{Iso1}. We provide a sketch of it and leave the details to the reader. \begin{theorem} \label{Iso2} Let $A$ and $B$ be evolution algebras such that $\dim(A^2) = \dim(B^2) = 1$, $(A^2)^2 = 0$, $(B^2)^2 = 0$ and such that both $A^3$ and $B^3$ are nonzero. Let $a \in A$ and $b \in B$ be such that $A^2 = Ka$ and $B^2 = Kb$. Suppose that $A = \mathrm{Ann}(A) \oplus W_A$ and $B = \mathrm{Ann}(B) \oplus W_B$ as in \eqref{decomA}, and let $a = x + w$, $b = x' + w'$, where $x \in \mathrm{Ann}(A)$, $x' \in \mathrm{Ann}(B)$, $w \in W_A$ and $w' \in W_B$. Then $A$ and $B$ are isomorphic if and only if $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$, and there exists an isometry $\theta: W_A \to W_B$ such that $\theta(w) = w'$.
\end{theorem} \begin{proof} If $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$ and $\theta: W_A \to W_B$ is an isometry such that $\theta(w) = w'$, then choose a linear isomorphism $\xi: \mathrm{Ann}(A) \to \mathrm{Ann}(B)$ such that $\xi(x) = x'$ and proceed as in the proof of Theorem \ref{Iso1}. For the converse, if $F: A \to B$ is an isomorphism, reason as in the proof of Theorem \ref{Iso1} by noticing that $F(x) = x'$ and $F(w) = w'$. \end{proof} \begin{example} \label{F4ejemplo2} Let $K$ be the field of $4$ elements: $K = \hbox{\bf F}_4 = \{0, 1, \alpha, \beta\}$, where $\alpha + \beta = 1$, $\alpha^2 = \beta$, $\beta^2 = \alpha$ and $\alpha\beta = 1$. Since $K$ is perfect, any orthogonalizable nondegenerate inner product admits an orthonormal basis. This implies that (up to isometry) there is only one nondegenerate orthogonalizable inner product. Therefore, we can consider the inner product $\esc{\cdot, \cdot}: K^2 \times K^2 \to K$ defined by $\esc{(x, y), (x', y')} = xx' + yy'$, for all $x, x', y, y' \in K$. Let us begin by computing the group $\mathcal{O}(K^2, \esc{\cdot,\cdot})$ of isometries of $(K^2, \esc{\cdot,\cdot})$. This can be easily done by identifying linear maps $K^2 \to K^2$ with their matrices with respect to the canonical basis. In fact, a given linear map is an isometry if and only if its matrix $M$ satisfies $M^tM = 1$; a direct computation shows that every such $M$ is symmetric, so the condition reads $MM^t = M^2 = 1$. Proceeding in this way we obtain that \[ \mathcal{O}(K^2, \esc{\cdot,\cdot}) = \left \{ \left(\begin{array}{@{}cc@{}} 1 & 0 \\ 0 & 1 \end{array} \right), \, \left(\begin{array}{@{}cc@{}} 0 & 1 \\ 1 & 0 \end{array} \right), \, \left(\begin{array}{@{}cc@{}} \alpha & \beta \\ \beta & \alpha \end{array} \right), \, \left(\begin{array}{@{}cc@{}} \beta & \alpha \\ \alpha & \beta \end{array} \right) \right \}, \] which is isomorphic to the Klein group $\hbox{\bf F}_2 \times \hbox{\bf F}_2$. Next, notice that the nonzero isotropic vectors form the set \[ K^\times(1, 1) = \big \{(1, 1), (\alpha, \alpha), (\beta, \beta) \big \}, \] on which the group $\mathcal{O}(K^2, \esc{\cdot,\cdot})$ acts naturally with three singleton orbits, namely $\{(1, 1)\}$, $\{(\alpha, \alpha)\}$ and $\{(\beta, \beta)\}$. We are now in a position to determine all the 3-dimensional evolution algebras $A$ such that $\dim(A^2) = 1$, $(A^2)^2 = 0$, $A^3 \neq 0$ and $\dim(\mathrm{Ann}(A)) = 1$. Let $A$ be one of these evolution algebras, and write $A = \mathrm{Ann}(A) \oplus W$ by \eqref{decomA}. Then $\dim(W) = 2$, and we can identify $(W, \langle \cdot, \cdot \rangle\vert_W)$ with $K^2$ endowed with the inner product whose Gram matrix with respect to the canonical basis is the identity. Lemma \ref{pointed} allows us to choose the generator of $A^2$ of the form $a = (1, \lambda, \mu)$, where $(\lambda, \mu) \in K^\times(1, 1)$. In total, there are three possibilities for $(\lambda, \mu)$, which induce non-isomorphic evolution algebras. Theorem \ref{Iso2} allows us to conclude that (up to isomorphism) there are three evolution algebras, with products: \begin{align*} (x, y, z)(x', y',z') &= (yy' + zz')(1, 1, 1), \\ (x, y, z)(x', y',z') &= (yy' + zz')(1, \alpha, \alpha), \\ (x, y, z)(x', y', z') &= (yy' + zz')(1, \beta, \beta). \end{align*} \end{example}
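The isometry-group computation in Example \ref{F4ejemplo2} can be reproduced by brute force. In the sketch below (ours), $\mathbf{F}_4$ is encoded with labels $0, 1, 2, 3$ for $0, 1, \alpha, \beta$; addition is then bitwise XOR, and multiplication is tabulated from $\alpha^2 = \beta$, $\beta^2 = \alpha$ and $\alpha\beta = 1$.
\begin{verbatim}
# Sketch: enumerate the isometries of (F_4^2, <.,.>) with Gram matrix Id.
import itertools
add = lambda x, y: x ^ y                           # addition in F_4
MUL = [[0,0,0,0], [0,1,2,3], [0,2,3,1], [0,3,1,2]] # multiplication table
mul = lambda x, y: MUL[x][y]

def is_isometry(m00, m01, m10, m11):
    # columns of M are the images of e_1, e_2; require M^t M = Id
    return (add(mul(m00, m00), mul(m10, m10)) == 1 and
            add(mul(m01, m01), mul(m11, m11)) == 1 and
            add(mul(m00, m01), mul(m10, m11)) == 0)

group = [M for M in itertools.product(range(4), repeat=4) if is_isometry(*M)]
print(len(group))   # -> 4: the Klein group listed above
\end{verbatim}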
\begin{example} The $3$-dimensional evolution algebras $A$ with $\dim(A^2)=1$ such that $(A^2)^2=0$ but $A^3\ne 0$ over the field $K$ of nine elements are of two types: \begin{enumerate} \item If $\hbox{Ann}(A)=0$, then $A\cong K^3$ with product \begin{align*} (x,y,z)(x',y',z') &=(xx'+yy'+zz')(0,1,i), \mbox{ or } \\ (x,y,z)(x',y',z') &=(\omega xx'+yy'+zz')(0,1,i). \end{align*} \item If $\dim(\hbox{Ann}(A))=1$, then $A\cong K\times K^2$ with product \[ (x,y,z)(x',y',z')=(yy'+zz')(0,1,i). \] \end{enumerate} Notice that in (2), inner products of the form \[ \langle (x, y, z), (x', y', z') \rangle = \omega yy' + zz', \] cannot be considered, since they have no nonzero isotropic vectors; this is also checked computationally in the sketch at the end of this subsection. \end{example} \begin{remark} Let $K$ be a field of characteristic two, and $\mathcal{C}_{nd}$ the class of $n$-dimensional evolution algebras $A$ satisfying that $\dim(A^2) = 1$, $(A^2)^2 = 0$, $A^3 \neq 0$, and $\dim(\mathrm{Ann}(A)) = d$. Notice that Example \ref{F4ejemplo2} shows us that the isomorphism classes of $\mathcal{C}_{nd}$ are in one-to-one correspondence with the orbits of the group $\mathcal{O}(K^{n-d}, \esc{\cdot,\cdot})$ (of isometries of $K^{n - d}$) on the set of isotropic vectors. \end{remark} As expected, we can improve Theorem \ref{Iso2} by requiring the field to be quadratically closed. Before doing so, we need to prove a few results, the first of which is a reformulation of the famous result known as {\it Witt's Cancellation Theorem}. \begin{proposition} \label{WCT2.0} Let $V_1$ and $V_2$ be vector spaces over the same field of characteristic not two, $q_i$ a nondegenerate quadratic form on $V_i$, and $U_i$ a nondegenerate subspace of $V_i$, for $i = 1, 2$. If the spaces $(V_1, q_1)$ and $(V_2, q_2)$ are isometric, and there is an isometry $U_1 \to U_2$, then there is an isometry $U_1^\bot \to U_2^\bot$. \end{proposition} \begin{proof} Suppose that $f: V_1 \to V_2$ and $g: U_1 \to U_2$ are isometries. Then the map $gf^{-1}: f(U_1)\to U_2$ is also an isometry. Witt's Cancellation Theorem tells us that there is an isometry between $f(U_1)^\bot = f(U_1^\bot)$ and $U_2^\bot$, say $h$. To finish, notice that the composition $h f\vert_{U_1^\bot}$ is the desired isometry. \end{proof} \begin{lemma} \label{Isometria} Let $(W_1, q_1)$ and $(W_2, q_2)$ be nondegenerate isometric spaces over a field $K$ of characteristic not two. If $w_1 \in W_1$ and $w_2 \in W_2$ are nonzero isotropic vectors, then there exists an isometry $f: W_1 \to W_2$ such that $f(w_1) = w_2$. \end{lemma} \begin{proof} The result trivially holds in dimension 1. Suppose now that both $W_1$ and $W_2$ have dimension $\geq 2$. Write $\esc{\cdot,\cdot}_i$ to denote the polar form of $q_i$, for $i = 1, 2$. There exists $w'_i \in W_i$ such that $(w_i, w'_i)$ is a hyperbolic pair in $W_i$, for $i = 1, 2$. It is straightforward to check that the linear map $\xi: Kw_1\oplus Kw_1'\to Kw_2\oplus Kw_2'$ such that $\xi(w_1) = w_2$ and $\xi(w_1') = w_2'$ is an isometry. If $\hbox{dim}(W_i) = 2$, then we are done. Otherwise, Proposition \ref{WCT2.0} gives an isometry $\eta: (Kw_1\oplus Kw_1')^\bot \to (Kw_2\oplus Kw_2')^\bot$. Now, $W_i = (Kw_i \oplus Kw_i')\oplus (Kw_i\oplus Kw_i')^\bot$, for $i = 1, 2$, and we can easily construct an isometry $f: W_1 \to W_2$ such that $f(w_1) = w_2$, $f(w'_1) = w'_2$ and $f\vert_{(Kw_1\oplus Kw_1')^\bot} = \eta$, finishing the proof. \end{proof} \begin{theorem} \label{A3not0} Let $K$ be a quadratically closed field of characteristic not two.
The isomorphism class of an evolution algebra $A$ with $\dim(A^2) = 1$, $(A^2)^2 = 0$ and $A^3 \neq 0$ is completely determined by $\dim (A)$ and $\dim \big(\mathrm{Ann}(A)\big)$. \end{theorem} \begin{proof} Let $B$ be an evolution algebra such that $\dim(B^2) = 1$, $(B^2)^2 = 0$ and $B^3 \neq 0$. We express $A$ and $B$ as in \eqref{decomA}: \begin{equation} \label{ABDec} A = \mathrm{Ann}(A) \oplus W_A, \quad B = \mathrm{Ann}(B) \oplus W_B. \end{equation} We write $a = x + w$, $b = x' + w'$, where $x \in \mathrm{Ann}(A)$, $x' \in \mathrm{Ann}(B)$, and $w \in W_A$, $w' \in W_B$, and $a$ (respectively, $b$) generates $A^2$ (respectively, $B^2$). Notice that $w$ and $w'$ are both isotropic by \eqref{isotropico}. Suppose first that $\dim(A) = \dim(B)$ and $\dim(\mathrm{Ann}(A)) = \dim(\mathrm{Ann}(B))$. Then $\dim(W_A) = \dim(W_B)$, and Lemma \ref{lemmaanndecomp} applies to show that the nondegenerate spaces $W_A$ and $W_B$ are isometric. Lemma \ref{Isometria} and Theorem \ref{Iso2} then tell us that $A$ and $B$ are isomorphic. The converse follows from \eqref{ABDec} and Theorem \ref{Iso2}. \end{proof}
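As a computational footnote (ours) to the nine-element-field example above, the following pure-Python sketch confirms that the form $\omega y^2 + z^2$ over $\mathbf{F}_9 = \mathbf{F}_3(i)$ has no nonzero isotropic vectors, while $y^2 + z^2$ does.
\begin{verbatim}
# Sketch: isotropic vectors of w*y^2 + z^2 over F_9; pairs (a,b) = a + b*i.
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3,
                    (x[0]*y[1] + x[1]*y[0]) % 3)
add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
F9 = [(a, b) for a in range(3) for b in range(3)]
def isotropic(w):
    return [(y, z) for y in F9 for z in F9 if (y, z) != ((0, 0), (0, 0))
            and add(mul(w, mul(y, y)), mul(z, z)) == (0, 0)]
print(isotropic((1, 1)))       # w = omega = 1 + i: -> [] (none)
print(len(isotropic((1, 0))))  # w = 1: y^2 + z^2 has nonzero zeros
\end{verbatim}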
\section{Introduction} The most studied example of quantum field theory in curved spacetime is probably the theory of a scalar field in de Sitter space. Indeed the model is simple enough to be solved analytically and therefore the properties of the field can be studied in detail. In particular the thermal properties of the vacuum state, related to the phenomenon of particle creation, have been considered \cite{Gibbons:1977mu}. Moreover, the quantum dynamics of the field is of special importance for inflationary cosmological models where de Sitter space describes a universe in exponential expansion \cite{Mukhanov:2005sc}. De Sitter space has also attracted new interest in connection with the conjecture of the dS/CFT correspondence proposed almost a decade ago by Strominger \cite{Strominger:2001pn}. In this letter we present a new quantization scheme for a massive scalar field in de Sitter spacetime based on the general boundary formulation (GBF) of quantum field theory. The results presented here are mainly intended as a contribution to the GBF program, and indeed represent the first study of QFT in a curved spacetime within the GBF. Furthermore, the quantization scheme we introduce provides a new framework to analyze the results obtained or proposed so far in the literature. In a series of papers \cite{Oe:timelike,Oe:GBQFT,Oe:KGtl,CoOe:spsmatrix,CoOe:smatrixgbf,CoOe:smatrix2d} it has been shown that the GBF provides a viable description of the dynamics of quantized fields. Not only can this new formulation recover known results of standard QFT, but, more interestingly, the GBF can handle situations where the methods of standard QFT fail. In QFT, dynamics is described in terms of the evolution of initial data from an initial Cauchy surface to a final Cauchy surface. Therefore, this standard picture involves a spacetime region bounded by the disjoint union of two Cauchy surfaces. In the GBF evolution acquires a more general description: the boundary of the spacetime region where dynamics takes place can have arbitrary form and is not required to reduce to the disjoint union of two Cauchy surfaces. The main novelty of the GBF resides in associating Hilbert spaces of states to arbitrary hypersurfaces in spacetime, and amplitudes to spacetime regions and states living on their boundaries. For a region $M$ of spacetime, the amplitude is a map from the Hilbert space ${\cal H}_{\partial M}$ associated with the boundary $\partial M$ of the region to the complex numbers. The formal expression of the amplitude $\rho$, for a state $\psi \in {\cal H}_{\partial M}$, is given in terms of the Feynman path integral combined with the Schr\"odinger representation for quantum states, \begin{equation} \rho_{M}(\psi)= \int \xD \varphi \, \psi(\varphi) \int_{\phi|_{\partial M}=\varphi} \xD\phi\, e^{\im S_{M}(\phi)}, \label{eq:rho} \end{equation} where the outer integral is over all field configurations $\varphi$ on $\partial M$, and the inner integral is over all field configurations $\phi$ in the spacetime region $M$ matching $\varphi$ on the boundary. Finally, a physical interpretation can be given to such amplitudes and an appropriate notion of probability can be extracted from them \cite{Oe:GBQFT,Oe:KGtl}. So far this formalism has been applied only to flat-spacetime QFT. There the standard $S$-matrix for an interacting scalar field has been shown to be equivalent to the one derived for free asymptotic quantum states at spatial infinity.
The notion of spatial asymptotic state comes from the geometry considered: in particular, in Minkowski spacetime states were defined on a hypercylinder, namely the boundary of a three-ball in space extended over all of time, and then the radius of the ball was sent to infinity. The structure of this work follows that of \cite{CoOe:smatrixgbf}. So, in the following we will evaluate the $S$-matrix for coherent states on spacelike hypersurfaces of constant conformal de Sitter time. Then, the asymptotic amplitude will be derived for coherent states defined on an analogue of the hypercylinder in de Sitter space. Finally, by constructing an isomorphism between the respective state spaces, we prove the equivalence of these two types of amplitude. We consider de Sitter spacetime with the metric \begin{equation} \mathrm{d} s^2 = \frac{1}{H^2t^2} \left( \mathrm{d} t^2 - \mathrm{d} \underline{x}^2 \right), \label{eq:dSmetric} \end{equation} where $H$ is the Hubble constant, the conformal time $t$ takes values in the interval $]0,\infty[$ and $\underline{x} \in {\mathbb R}^3$ are coordinates on the equal time hypersurfaces. Such coordinates cover only half of de Sitter spacetime. The remaining half can be included by simply extending the domain of $t$ to $]-\infty,\infty[$, \cite{BiDa:qfcs}. We start with the derivation of the standard transition amplitude by computing the amplitude (\ref{eq:rho}) for a spacetime region $M$ bounded by two equal time hypersurfaces, $\Sigma_1$ and $\Sigma_2$, defined respectively by the values $t_1$ and $t_2$ of the conformal time $t$: $M = [t_1,t_2] \times \mathbb{R}^3$. The state space associated with the boundary $\partial M = \Sigma_1 \cup \Sigma_2$ is then the tensor product ${\cal H}_1 \otimes {\cal H}_2^*$ of the Hilbert spaces defined on the hypersurfaces $\Sigma_1$ and $\Sigma_2$. Following \cite{CoOe:smatrixgbf} we introduce coherent states; their wave function at time $t$ is parametrized by a complex function $\xi$ on momentum space, and in the interaction picture their form is \begin{multline} \psi_{t,\xi} (\varphi) = K_{t, \xi} \\ \times \exp \left( \int \frac{\mathrm{d} ^3 x \, \mathrm{d}^3 k}{(2 \pi)^3} \, \xi(\underline{k}) \, \frac{e^{\im \underline{k} \cdot \underline{x}}}{H_{\nu}(k t) t^{3/2}} \, \varphi(\underline{x}) \right) \psi_{t,0}(\varphi), \end{multline} where $K_{t, \xi}$ is a normalization factor, $\psi_{t,0}(\varphi)$ is the vacuum wave function derived in \cite{Co:vacuum}, $H_{\nu}$ is the Hankel function of order $\nu = \sqrt{\frac{9}{4} - \frac{M^2}{H^2}}$, $M$ denotes the mass of the scalar field, and we assume $\nu$ to be real. Consider first the free theory. The amplitude (\ref{eq:rho}), denoted by the subscript $0$, associated with the tensor product of coherent states $\psi_{t_1,\xi_1} \otimes \overline{\psi_{t_2,\xi_2}}$ is independent of the times $t_1$ and $t_2$ and can be written as \begin{multline} \rho_{M,0}(\psi_{\xi_1} \otimes \overline{\psi_{ \xi_2}}) = \exp \left( \frac{\pi H^2}{4} \int \frac{\mathrm{d}^3 k}{(2 \pi)^3} \, \right. \\ \times \left. \left( \overline{\xi_{2}(\underline{k})} \xi_{1}(\underline{k}) - \frac{1}{2}|\xi_{1}(\underline{k})|^2 - \frac{1}{2} |\xi_{2}(\underline{k})|^2 \right) \right). \label{eq:freeampl} \end{multline} Because of the independence of the initial and final times, the above expression represents the $S$-matrix describing the transition from the coherent state defined by $\xi_1$ (in the asymptotic past) to the coherent state defined by $\xi_2$ (in the asymptotic future).
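For concreteness, the following small numerical sketch (ours; scipy is assumed, the parameter values are hypothetical, and we take the Hankel function of the first kind for definiteness) evaluates the order $\nu$ and the mode functions $t^{3/2} H_{\nu}(k t)$ entering the coherent states; $\nu$ is real for $M < 3H/2$.
\begin{verbatim}
# Sketch: de Sitter mode functions u_k(t) = t^(3/2) H_nu(k t),
# with nu = sqrt(9/4 - M^2/H^2).
import numpy as np
from scipy.special import hankel1
H, M = 1.0, 0.5                        # hypothetical values
nu = np.sqrt(9.0 / 4.0 - (M / H) ** 2)
u = lambda k, t: t ** 1.5 * hankel1(nu, k * t)
print(nu, u(1.0, 2.0))                 # order nu and a sample mode value
\end{verbatim}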
As an intermediate step in the computation of the $S$-matrix for the general interacting theory, we consider the interaction of the scalar field with a source field $\mu$ of the form $\int \mathrm{d}^4 x \sqrt{-g} \, \phi(x) \, \mu(x)$, and we assume that $\mu$ vanishes outside the spacetime region considered here, namely $\mu |_{t \notin ]t_1,t_2[} = 0$. Adding such an interaction term to the free action yields for the amplitude (\ref{eq:rho}), denoted by the subscript $\mu$, the result \begin{multline} \rho_{M,\mu}(\psi_{\xi_1} \otimes \overline{\psi_{\xi_2}}) = \\ \rho_{M,0}(\psi_{ \xi_1} \otimes \overline{\psi_{\xi_2}}) \exp \left( \int \mathrm{d} ^4 x \sqrt{-g} \mu(x) \hat{\xi}(x) \right) \\ \times \exp \left(\frac{\im}{2} \int \mathrm{d} ^4 x \sqrt{-g} \, \mu(x) \, \gamma(x) \right), \label{eq:srcampl} \end{multline} where $g$ is the determinant of the metric (\ref{eq:dSmetric}), and $\hat{\xi}$ is the complex solution of the Klein-Gordon equation determined by the initial and final coherent states, \begin{multline} \hat{\xi}(x) = \frac{\im \pi H^2}{4} \int \frac{ \mathrm{d}^3 k}{(2 \pi)^3} \left(\xi_1(\underline{k}) \, e^{\im \underline{k} \cdot \underline{x}} \, t^{3/2} \, \overline{H_{\nu}(k t)} \right. \\ \left. + \, \overline{\xi_2(\underline{k})} \, e^{-\im \underline{k} \cdot \underline{x}}\, t^{3/2} \, H_{\nu}(k t) \right). \label{eq:hatxi} \end{multline} The function $\gamma$ in the last exponential of (\ref{eq:srcampl}) is the solution of the inhomogeneous Klein-Gordon equation, \begin{equation} (\Box + M^2) \gamma(x) = \mu(x), \label{eq:inKG} \end{equation} with the following boundary conditions, \begin{multline} \gamma(t, \underline{x}) |_{t<t_1} = t^{3/2} \, H_{\nu}(k t) \, \frac{\im \pi H^2}{4 } \\ \times \int_{t_1}^{t_2} \mathrm{d} t' \sqrt{-g'} (t')^{3/2} \overline{H_{\nu}(k t')} \mu(t',\underline{x}), \label{eq:Fbc1} \end{multline} for early times $t$ before the source is switched on, and \begin{multline} \gamma(t, \underline{x}) |_{t>t_2} = t^{3/2} \, \overline{H_{\nu}(k t)} \, \frac{\im \pi H^2}{4 } \\ \times \int_{t_1}^{t_2} \mathrm{d} t' \sqrt{-g'} (t')^{3/2} H_{\nu}(k t') \mu(t',\underline{x}),\label{eq:Fbc2} \end{multline} for late times $t$ after the source is switched off. In the above expressions $g'$ is the determinant of the metric (\ref{eq:dSmetric}) expressed in the coordinates $(t', \underline{x})$, and the Bessel functions are to be understood as operators via the mode decomposition of the source field. It is convenient to write $\gamma$ in the form \begin{equation} \gamma(x) = \int \mathrm{d}^4 x' \sqrt{-g'} \, G(x,x') \, \mu(x'), \label{eq:gamma} \end{equation} where $G$ is the Green function, namely the solution of the equation $(\Box + M^2)G(x,x') = (-g)^{-1/2} \delta^4(x-x')$, given by \begin{multline} G(x,x')= \frac{H^2}{ 16 \pi} \left( \frac{1}{4} - \nu^2 \right) \sec(\nu \pi)\\ \times F \left( \frac{3}{2}-\nu , \frac{3}{2} + \nu ; 2 ; \frac{1+p- \im 0}{2}\right), \label{eq:propagator} \end{multline} where $F$ is the hypergeometric function and $p= \frac{t^2+t'^2-|\underline{x}- \underline{x}'|^2}{2 t't}$. The above expression coincides with the Feynman propagator in de Sitter space computed in \cite{Schomblond:1976xc,Bunch:1978yq}, and we can therefore interpret the conditions (\ref{eq:Fbc1},\ref{eq:Fbc2}) as the Feynman boundary conditions. The form (\ref{eq:srcampl}) of the $S$-matrix in the presence of a source field is similar to the one obtained by the path integral in the holomorphic representation \cite{FaSl:gaugeqft}.
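The hypergeometric representation (\ref{eq:propagator}) is straightforward to evaluate numerically. The sketch below (ours; mpmath is assumed, and the parameter values are hypothetical) implements the $-\im 0$ prescription as a small negative imaginary shift of the argument; $r$ stands for the spatial separation $|\underline{x} - \underline{x}'|$.
\begin{verbatim}
# Sketch: numerical evaluation of the de Sitter Feynman propagator.
import mpmath as mp

def G(t, tp, r, H=1.0, nu=mp.sqrt(2), eps=1e-8):
    p = (t**2 + tp**2 - r**2) / (2 * t * tp)
    pref = H**2 / (16 * mp.pi) * (mp.mpf(1)/4 - nu**2) * mp.sec(nu * mp.pi)
    return pref * mp.hyp2f1(1.5 - nu, 1.5 + nu, 2, (1 + p - 1j * eps) / 2)

print(G(1.0, 2.0, 0.5))   # a sample (complex) value of G(x, x')
\end{verbatim}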
Finally we use functional methods to express a general interaction in terms of the source interaction, \begin{multline} \int \mathrm{d}^4 x \, \sqrt{-g} \, V(x, \phi(x)) = \\ \int \mathrm{d}^4 x \, V \left( x, \frac{\partial}{\partial \mu(x)}\right) \int \mathrm{d}^4 y \sqrt{-g} \, \phi(y) \mu(y) \bigg|_{\mu =0}. \label{eq:genint} \end{multline} Assuming that the interaction vanishes for $t$ outside the interval $]t_1,t_2[$, the amplitude (\ref{eq:rho}), now denoted by the subscript $V$, takes the form \begin{multline} \rho_{M,V}(\psi_{\xi_1} \otimes \overline{\psi_{\xi_2}}) = \exp \left( \im \int \mathrm{d}^4 x \, V \left( x, - \im \frac{\partial}{\partial \mu(x)}\right) \right) \\ \times \rho_{M, \mu}(\psi_{\xi_1} \otimes \overline{\psi_{\xi_2}})\bigg|_{\mu =0}. \label{eq:genampl} \end{multline} This expression is independent of $t_1$ and $t_2$, and consequently the restriction on $V$ introduced above can be removed. Moreover, the limit of asymptotic times is trivial and (\ref{eq:genampl}) is then interpreted as the $S$-matrix for the general interacting theory. The second geometry we are interested in is conveniently described in terms of spherical coordinates, in which the metric (\ref{eq:dSmetric}) takes the form \begin{equation} \mathrm{d} s^2 = \frac{1}{H^2 t^2} \left( \mathrm{d} t^2 - \mathrm{d} r^2 - r^2 \mathrm{d} \Omega^2 \right), \label{eq:dSmetric2} \end{equation} where $\mathrm{d} \Omega^2$ is the metric on a unit sphere. We will now compute the amplitude (\ref{eq:rho}) associated with the spacetime region $M$ bounded by the hypersurface of constant radius, $r =R$. Hence, $M$ has one connected boundary that we call the \textit{hypercylinder}, in analogy with the hypercylinder introduced in \cite{Oe:KGtl}. We proceed as before by considering coherent states defined in the Hilbert space ${\cal H}_R$ associated with the hypercylinder, with wave functions in the interaction picture given by \begin{multline} \psi_{R,\eta} (\varphi) = K_{R, \eta} \exp \left( \int \mathrm{d} t \, \mathrm{d} \Omega \, \mathrm{d} k \sum_{l,m} \right. \\ \left. \times \eta_{l,m}(k)\frac{t^{-1/2} Z_{\nu}(k t) Y_l^{-m}(\Omega)}{h_l(k R)} \varphi(t, \Omega) \right) \psi_{R,0}(\varphi), \end{multline} where $\eta$ is the complex function on momentum space that parametrizes the coherent state, $\psi_{R,0}$ the vacuum wave function on the hypercylinder of radius $R$ and $K_{R, \eta}$ a normalization factor. Here $Z_{\nu}$ denotes the Bessel function of the first or second kind, $Y_l^m$ the spherical harmonic and $h_l$ the spherical Bessel function of the third kind. The free amplitude for such a state reads \begin{multline} \rho_{M,0}(\psi_{ \eta} ) = \exp \left( - \frac{H^2}{4} \int \mathrm{d} k \sum_{l,m} k^2 \right. \\ \times \left[ |\eta_{l,m}(k)|^2 - \eta_{l,m}(k) \, \eta_{l,-m}(k) \right] \Bigg), \label{eq:freeamplhyp} \end{multline} and is independent of the radius $R$. As before, we now look at the interaction with a source field $\mu$. Requiring this field to be confined to the interior of the hypercylinder, the amplitude for the coherent state $\psi_{\eta}$ turns out to be \begin{multline} \rho_{M,\mu}(\psi_{ \eta} ) = \rho_{M,0}(\psi_{ \eta} ) \exp \left( \int \mathrm{d} ^4 x \sqrt{-g} \mu(x) \hat{\eta}(x) \right) \\ \times \exp \left(\frac{\im}{2} \int \mathrm{d} ^4 x \sqrt{-g} \, \gamma(x) \, \mu(x) \right).
\label{eq:srcamplhyp} \end{multline} Here $\hat{\eta}$ is the complex solution of the Klein-Gordon equation given by \begin{multline} \hat{\eta}(x) = \im H^2 \int \mathrm{d} k \, k \sum_{l,m} \, t^{3/2} Z_{\nu}(k t) \\ \times Y_l^m(\Omega) \, j_l(kr) \, \eta_{l,m}(k), \label{eq:hateta} \end{multline} where $j_l$ is the spherical Bessel function of the first kind. The function $\gamma$ in the last line of (\ref{eq:srcamplhyp}) satisfies the inhomogeneous Klein-Gordon equation (\ref{eq:inKG}), and can therefore be written via the Green function as in (\ref{eq:gamma}). The Green function $G$, defined on the hypercylinder, turns out to be the same Green function that appears in (\ref{eq:gamma}), namely the Feynman propagator (\ref{eq:propagator}). The boundary condition satisfied by $\gamma$ can then be interpreted as the Feynman boundary condition on the hypercylinder, valid for large radius $r$ outside the source field, \begin{multline} \gamma(t,r, \Omega)\big|_{r>R} = k \,\im \, h_l(k r) \\ \times \int_0^R \mathrm{d} r' \, (r')^2 \sqrt{-g'} \,t^2 H^2 \, j_l(k r') \mu(t,r', \Omega). \end{multline} Here $g'$ denotes the determinant of the metric (\ref{eq:dSmetric2}) in the coordinates $(t,r', \Omega)$, and the Bessel functions are to be understood as operators acting on the mode expansion of $\mu$. Again, we notice that no dependence on the radius $R$ is present in the amplitude (\ref{eq:srcamplhyp}). To conclude, we apply the same functional techniques as before, expressing the general interaction as in (\ref{eq:genint}). Assuming that the interaction now vanishes outside the hypercylinder, we can write the amplitude for the general interacting theory as \begin{multline} \rho_{M,V}(\psi_{\eta}) = \exp \left( \im \int \mathrm{d}^4 x \, V \left( x, - \im \frac{\partial}{\partial \mu(x)}\right) \right) \\ \times \rho_{M, \mu}(\psi_{\eta})\bigg|_{\mu =0}. \label{eq:genamplhyp} \end{multline} Since $R$ does not appear, the cutoff on the interaction can be dropped. As the limit $R \rightarrow \infty$ is trivial, we interpret (\ref{eq:genamplhyp}) as the asymptotic amplitude of the general interacting theory for the coherent state $\psi_{\eta}$. Having computed the asymptotic amplitudes in the two geometries considered here, we now want to analyze their relation. To this aim we adopt an approach analogous to that used in \cite{CoOe:smatrixgbf}. We focus our attention on the expression of the amplitudes for the source interaction in both settings, i.e. (\ref{eq:srcampl}) and (\ref{eq:srcamplhyp}). Considering the same source field in the two cases, namely a source bounded in space and in time, we notice that the last terms of the amplitudes coincide, because the same propagator (\ref{eq:propagator}) appears in the functions $\gamma$. We turn to the second terms in (\ref{eq:srcampl}) and (\ref{eq:srcamplhyp}): they coincide if and only if the complex solutions of the Klein-Gordon equation, $\hat{\xi}$ and $\hat{\eta}$, coincide. It turns out that this equality, namely $\hat{\xi} = \hat{\eta}$, defines an isomorphism between the state spaces of the two theories, i.e. the Hilbert space ${\cal H}_1 \otimes {\cal H}_2^*$ associated with the boundary of the spacetime region $M = [t_1,t_2] \times \mathbb{R}^3$ and the Hilbert space ${\cal H}_R$ associated with the hypercylinder. Hence, under the isomorphism we have: $\psi_{\xi_1} \otimes \overline{\psi_{\xi_2}} \cong \psi_{\eta}$.
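At the level of the spatial dependence, the matching of the plane-wave modes in (\ref{eq:hatxi}) with the spherical modes in (\ref{eq:hateta}) rests on the standard partial-wave (Rayleigh) expansion $e^{\im \underline{k} \cdot \underline{x}} = \sum_l \im^l (2l+1) j_l(kr) P_l(\cos\theta)$. A quick numerical check of this identity (ours; scipy is assumed, and the truncation order is arbitrary):
\begin{verbatim}
# Sketch: verify e^{i k.x} = sum_l i^l (2l+1) j_l(kr) P_l(cos theta).
import numpy as np
from scipy.special import spherical_jn, eval_legendre
kr, costh, lmax = 3.0, 0.4, 40
series = sum((1j ** l) * (2 * l + 1) * spherical_jn(l, kr)
             * eval_legendre(l, costh) for l in range(lmax + 1))
print(series, np.exp(1j * kr * costh))   # the two values agree
\end{verbatim}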
We are left with the first terms appearing in (\ref{eq:srcampl}) and (\ref{eq:srcamplhyp}), namely the free amplitudes in the two settings, given by (\ref{eq:freeampl}) and (\ref{eq:freeamplhyp}). It is not difficult to show that these free amplitudes are equal under the isomorphism. For example, expressing (\ref{eq:freeamplhyp}) in terms of the function $\hat{\eta}$, we replace $\hat{\eta}$ by the expression (\ref{eq:hatxi}) of $\hat{\xi}$ and obtain (\ref{eq:freeampl}): \begin{equation} \rho_{M,0}(\psi_{\eta})\big|_{\hat{\eta} = \hat{\xi}} = \rho_{M,0}(\psi_{\xi_1} \otimes \overline{\psi_{ \xi_2}}). \end{equation} We can then conclude that, under the isomorphism, the asymptotic amplitudes of the general interacting theory, interpreted as $S$-matrices, are equivalent. Such equivalence offers the possibility of studying scattering processes in de Sitter space from a new perspective. Indeed the amplitude for a transition from an in-state with $m$ particles to an out-state with $n$ particles can be mapped to the amplitude for an $(m+n)$-particle state defined on the hypercylinder, and the physical probabilities extracted from the $S$-matrices of the two descriptions are the same. We recover here results analogous to those previously obtained in Minkowski \cite{CoOe:smatrixgbf} and Euclidean spacetime \cite{CoOe:smatrix2d}, and the conclusions discussed there can be exported, mutatis mutandis, to de Sitter space. \begin{acknowledgments} I am grateful to Robert Oeckl for helpful discussions and comments on an earlier draft of this letter. This work was supported in part by CONACyT grant 49093. \end{acknowledgments} \bibliographystyle{amsordx}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Fifth generation (5G) communication networks will likely adopt \ac{mm-wave} and massive \ac{MIMO} technologies, thanks to a number of favorable properties. In particular, operating at carrier frequencies beyond 30 GHz, with large available bandwidths, \ac{mm-wave} can provide extremely high data rates to users through dense spatial multiplexing by using a large number of antennas \cite{Zhouyue,Rappaport}. While these properties are desirable for 5G services, \ac{mm-wave} communications also face a number of challenges. Among these, the severe path loss at high carrier frequencies stands out. The resulting loss in \ac{SNR} must be compensated through sophisticated beamforming at the transmitter and/or receiver side, leading to highly directional links \cite{Wang,Hur,Tsang}. However, beamforming requires knowledge of the propagation channel. Significant progress has been made in \ac{mm-wave} channel estimation, by exploiting sparsity and related compressed sensing tools, such as \ac{DCS-SOMP} \cite{Duarte}, \ac{CoSOMP} \cite{Duarte2}, and \ac{GCS} \cite{Bolcskei}. In particular, since at \ac{mm-wave} frequencies only the \ac{LOS} path and a few dominant multi-path components contribute to the received power, \ac{mm-wave} channels are sparse in the angular domain \cite{BspaceSayeed,widebandbrady}. This is because at \ac{mm-wave} frequencies, the received power of diffuse scattering and multiple-bounce specular reflections is much lower than that of the \ac{LOS} path and single-bounce specular reflections \cite{Martinez-Ingles,Vaughan,mmMAGIC}. Different \ac{CS} methods for \ac{mm-wave} channel estimation are proposed in \cite{Marzi,LeeJ,AlkhateebA,ChoiJ,AlkhateebC,HanY,LeeJ2,Ramasamy,BerrakiD}. In \cite{Marzi}, a method for the estimation of \ac{AOA}, \ac{AOD}, and channel gains is proposed based on compressive beacons on the downlink. A method for the continuous estimation of \ac{mm-wave} channel parameters is proposed in \cite{Ramasamy}, while \cite{LeeJ2} applies CS tools with refinement in the angular domain. In \cite{LeeJ}, a CS method is proposed based on redundant dictionary matrices. A two-stage algorithm with one-time feedback that is robust to noise is used in \cite{HanY}. In \cite{AlkhateebA}, an adaptive \ac{CS} method is proposed based on a hierarchical multi-resolution codebook design for the estimation of single-path and multi-path \ac{mm-wave} channels. In \cite{ChoiJ}, a beam selection procedure for the multiuser \ac{mm-wave} \ac{MIMO} channels with analog beamformers is proposed. In \cite{AlkhateebC}, a \ac{CS} approach with reduced training overhead is considered. Finally, \ac{CS} tools are used in \cite{BerrakiD} for the sparse estimation of power angle profiles of the \ac{mm-wave} channels and compared with codebook designs in terms of overhead reduction. However, in all the aforementioned papers, a narrow-band \ac{mm-wave} channel model is used. When the bandwidth becomes larger, one needs to consider the effect of the delays of different paths in the \ac{mm-wave} channel model, i.e., the wide-band \ac{mm-wave} channel model. Channel estimation provides information about the \ac{AOA}/\ac{AOD} and thus about the relative location of the transmitter and receiver. In addition, location information can serve as a proxy for channel information to perform beamforming: when the location of the user is known, the \ac{BS} can steer its transmission to the user, either directly or through a reflected path.
This leads to synergies between localization and communication. The use of 5G technologies to obtain position and orientation was previously explored in \cite{sanchis2002novel,DenSaya,vari2014mmwaves} for mm-wave and in \cite{hu2014esprit,Dardari,savic2015fingerprinting} for massive MIMO. The early work \cite{sanchis2002novel} considered estimation and tracking of \ac{AOA} through beam-switching. User localization was treated in \cite{DenSaya}, formulated as a hypothesis testing problem, limiting the spatial resolution. A different approach was taken in \cite{vari2014mmwaves}, where meter-level positioning accuracy was obtained by measuring received signal strength levels. A location-aided beamforming method was proposed in \cite{NGarcia} to speed up initial access between nodes. In the massive \ac{MIMO} case, \cite{hu2014esprit} considered the estimation of angles, \cite{NGarcia2} proposed a direct localization method by jointly processing the observations at the distributed massive \ac{MIMO} \ac{BS}s, while \cite{Dardari} treated the joint estimation of delay, \ac{AOD}, and \ac{AOA} in \ac{LOS} conditions and evaluated the impact of errors in delays and phase shifters, and \cite{Arash} derived sufficient conditions for a nonsingular \ac{FIM} of delay, \ac{AOD}, \ac{AOA}, and channel coefficients. A hybrid \ac{TDOA}, \ac{AOA}, and \ac{AOD} localization was proposed in \cite{linhyb} using linearization. In \cite{savic2015fingerprinting}, positioning was solved using a Gaussian process regressor, operating on a vector of received signal strengths through fingerprinting. While the latter approach is able to exploit \ac{NLOS} propagation, it does not directly harness the geometry of the environment. Complementary to the use of \ac{mm-wave} frequencies, approaches for localization using \ac{cm-wave} signals have recently been proposed as well. The combination of \ac{TDOA}s and \ac{AOA}s using an extended Kalman filter (EKF) was presented in \cite{DBLP:journals/twc/KoivistoCWHTLKV17,DBLP:journals/corr/KoivistoHCKLV16}, where the \ac{MS} has a single antenna, while the \ac{BS} employs an antenna array. This method assumes \ac{LOS} propagation thanks to the high density of access nodes and provides sub-meter accuracy even for moving devices. In this paper, we show that \ac{mm-wave} and large \ac{MIMO} are enabling technologies for accurate positioning and device orientation estimation with only one \ac{BS}, even when the \ac{LOS} path is blocked. Limited scattering and high directivity are unique characteristics of the \ac{mm-wave} channel and large \ac{MIMO} systems, respectively. We derive fundamental bounds on the position and orientation estimation accuracy, for \ac{LOS}\footnote{\ac{LOS} is defined as the condition where the \ac{LOS} path exists and there are no scatterers.}, \ac{NLOS}\footnote{\ac{NLOS} is defined as the condition where there are scatterers and the LOS path is not blocked.}, and \ac{OLOS}\footnote{\ac{OLOS} refers to the condition where the LOS path is blocked and only the signals from the scatterers are received.} conditions. These bounds indicate that the information from the \ac{NLOS} links helps to estimate the location and orientation of the \ac{MS}. We also propose a novel three-stage position and orientation estimation technique, which is able to attain the bounds at moderate to high \ac{SNR}. The first stage of the technique harnesses the sparsity of the \ac{mm-wave} channel in the \ac{AOA} and \ac{AOD} domain \cite{BspaceSayeed,widebandbrady}.
Moreover, the sparsity support does not vary significantly with frequency, allowing us to use \ac{DCS-SOMP} across different carriers. The delay can then be estimated on a per-path basis. As \ac{DCS-SOMP} limits the \ac{AOA} and \ac{AOD} to a predefined grid, we propose a refinement stage, based on the \ac{SAGE} algorithm. Finally, in the last stage, we employ a least-squares approach with \ac{EXIP} to recover position and orientation \cite{Stoicapp,Swindlehurstt}. \begin{figure} \psfrag{x}{\small $x$} \psfrag{y}{\small $y$} \psfrag{d0}{\small $d_{0}$} \psfrag{dk1}{\small $d_{k,1}$} \psfrag{dk2}{\small $d_{k,2}$} \psfrag{dk}{\small $d_{k}=d_{k,1}+d_{k,2}$} \psfrag{sk}{\hspace{-1mm} $\mathbf{s}_{k}$} \psfrag{tt0}{\small \hspace{-1mm} $\theta_{\mathrm{Tx},0}$} \psfrag{tt1}{\small \hspace{-4mm} $\theta_{\mathrm{Tx},k}$} \psfrag{tt1b}{\small \hspace{-8mm} $\pi-(\theta_{\mathrm{Rx},k}+\alpha)$} \psfrag{rr0}{\small \hspace{-8mm} $\pi-\theta_{\mathrm{Rx},0}$} \psfrag{rr1}{\small \hspace{-8mm} $\pi-\theta_{\mathrm{Rx},k}$} \psfrag{rr1b}{\small \hspace{-8mm} $\pi-(\theta_{\mathrm{Rx},k}+\alpha)$} \psfrag{alphab}{\small \hspace{-4mm} $\alpha$} \psfrag{q}{ \hspace{-1mm} $\mathbf{q}$} \psfrag{qk}{ \hspace{-1mm} $\widetilde{\mathbf{q}}_{k}$} \psfrag{p}{ \hspace{-4mm} $\mathbf{p}$} \psfrag{BS}[][c]{BS} \psfrag{VBS}[][c]{virtual BS} \psfrag{MS}[][c]{MS} \centering \includegraphics[width=0.9\columnwidth]{VBST2.eps} \caption{Two-dimensional illustration of the \ac{LOS} (blue link) and \ac{NLOS} (red link) based positioning problem. The \ac{BS} location $\mathbf{q}$ and \ac{BS} orientation are known, but arbitrary. The location of the \ac{MS} $\mathbf{p}$, scatterer $\mathbf{s}_{k}$, rotation angle $\alpha$, \ac{AOA}s $\{\theta_{\textup{Rx},k}\}$, \ac{AOD}s $\{\theta_{\textup{Tx},k}\}$, the channels between \ac{BS}, \ac{MS}, and scatterers, and the distance between the antenna centers are unknown.} \label{NLOS_Link} \end{figure} \section{System Model}\label{SEC:Formulation} We consider a \ac{MIMO} system with a \ac{BS} equipped with $N_{t}$ antennas and a \ac{MS} equipped with $N_{r}$ antennas operating at a carrier frequency $f_c$ (corresponding to wavelength $\lambda_c$) and bandwidth $B$. Locations of the \ac{MS} and \ac{BS} are denoted by $\mathbf{p}=[p_{x}, p_{y}]^{\mathrm{T}}\in\mathbb{R}^{2}$ and $\mathbf{q}=[q_{x}, q_{y}]^{\mathrm{T}}\in\mathbb{R}^{2}$, with $\alpha\in[0, 2\pi)$ denoting the rotation angle of the \ac{MS}'s antenna array. The value of $\mathbf{q}$ is assumed to be known, while $\mathbf{p}$ and $\alpha$ are unknown. \subsection{Transmitter Model} We consider the transmission of \ac{OFDM} signals as in \cite{khateeb3}, where a \ac{BS} with a hybrid analog/digital precoder communicates with a single \ac{MS}. At the \ac{BS}, $G$ signals are transmitted sequentially, where the $g$-th transmission comprises $M_{t}$ simultaneously transmitted symbols $\mathbf{x}^{(g)}[n]=[x_{1}[n],\ldots,x_{M_{t}}[n]]^{\mathrm{T}} \in \mathbb{C}^{M_{t}}$ for each subcarrier $n=0,\ldots,N-1$. The symbols are first precoded and then transformed to the time domain using an $N$-point \acf{IFFT}. A \acf{CP} of length $T_{\mathrm{CP}}=DT_{s}$ is added before applying the \ac{RF} precoding, where $D$ is the length of the \ac{CP} in symbols. Here, $T_{s}=1/B$ denotes the sampling period and $T_{\mathrm{CP}}$ is assumed to exceed the delay spread of the channel. The transmitted signal over subcarrier $n$ at time $g$ can be expressed as $\mathbf{F}^{(g)}[n]\mathbf{x}^{(g)}[n]$.
The beamforming matrix $\mathbf{F}[n] \in \mathbb{C}^{N_{t}\times M_{t}}$ is defined as $\mathbf{F}[n]=\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[n]$, where $\mathbf{F}_{\mathrm{RF}}$ is implemented using analog phase shifters with entries of the form $e^{j\phi_{m,n}}$, where $\{\phi_{m,n}\}$ are given phases, and $\mathbf{F}_{\mathrm{BB}}[n]$ is the digital beamformer; overall they satisfy a total power constraint $\Vert\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[n]\Vert_{\mathrm{F}}=1$. Considering the sparsity of \ac{mm-wave} channels, one usually needs far fewer beams $M_{t}$ than antenna elements $N_{t}$, i.e., $M_{t} \ll N_{t}$. Also, the presence of $\mathbf{F}[n]$ in the proposed model allows the extension of the system model to multi-user \ac{mm-wave} downlink systems with a limited feedback channel from the \ac{MS}s to the \ac{BS}. Our work does not assume any specific beamformer. We will provide general expressions that permit the study of the impact on performance and optimization of different choices of beamformers $\mathbf{F}^{(g)}[n]$ and signals $\mathbf{x}^{(g)}[n]$, although this is beyond the scope of the paper. Our approach is also compatible with beam reference signal (initial access) procedures, and it could be complemented with a Bayesian recursive tracker with user-specific precoding. \subsection{Channel Model} Fig.~\ref{NLOS_Link} shows the position-related parameters of the channel. These parameters include $\theta_{\mathrm{Rx},k}$, $\theta_{\mathrm{Tx},k}$, and $d_{k}=c\tau_{k}$, denoting the \ac{AOA}, \ac{AOD}, and the path length (with \ac{TOA} $\tau_{k}$ and the speed of light $c$) of the $k$-th path ($k=0$ for the \ac{LOS} path and $k>0$ for the \ac{NLOS} paths). For each NLOS path, there is a scatterer with unknown location $\mathbf{s}_{k}$, for which we define $d_{k,1}=\Vert\mathbf{s}_{k}-\mathbf{q}\Vert_{2}$ and $d_{k,2}=\Vert\mathbf{p}-\mathbf{s}_{k}\Vert_{2}$. We now introduce the channel model, under a frequency-dependent array response \cite{widebandbrady}, suitable for wideband communication (with fractional bandwidth $B/f_c$ up to $50\%$). Assuming $K+1$ paths and a channel that remains constant during the transmission of $G$ symbols, the $N_r \times N_t$ channel matrix associated with subcarrier $n$ is expressed as \begin{equation}\label{Channel1} \mathbf{H}[n]=\mathbf{A}_{\mathrm{Rx}}[n]\mathbf{\Gamma}[n]\mathbf{A}^{\mathrm{H}}_{\mathrm{Tx}}[n], \end{equation} for response vectors \begin{align} \mathbf{A}_{\mathrm{Tx}}[n]& =[\mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},0}),\ldots,\mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},K})], \\ \mathbf{A}_{\mathrm{Rx}}[n]& =[\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},0}),\ldots,\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},K})], \end{align} and \begin{align} \mathbf{\Gamma}[n]=\sqrt{N_{t}N_{r}}\mathrm{diag}\left\{ \frac{h_{0}}{\sqrt{\rho_0}}e^{-j2\pi n\tau_{0}/(NT_{s})},\ldots,\frac{h_{K}}{\sqrt{\rho_K}}e^{-j2\pi n\tau_{K}/(NT_{s})}\right\}, \end{align} for path loss $\rho_k$ and complex channel gain $h_k$, respectively, of the $k$-th path. For later use, we introduce $\tilde{h}_{k}=\sqrt{(N_{t}N_{r})/\rho_{k}}h_{k}$ and $\gamma_{n}(h_{k},\tau_{k})=\tilde{h}_{k}e^{-j2\pi n\tau_{k}/(NT_{s})}$. The structure of the frequency-dependent antenna steering and response vectors $\mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},k})\in \mathbb{C}^{N_t}$ and $\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},k})\in \mathbb{C}^{N_r}$ depends on the specific array structure.
For the case of a \ac{ULA}, which will be the example studied in this paper, we recall that (the response vector $\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},k})$ is obtained similarly) \begin{align}\label{steringvector1} & \mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},k}) =\\ &\frac{1}{\sqrt{N_{t}}}[ e^{-j \frac{N_{t}-1}{2}\frac{2\pi}{\lambda_{n}}d\sin(\theta_{\mathrm{Tx},k})},\ldots,e^{j\frac{N_{t}-1}{2}\frac{2\pi}{\lambda_{n}}d\sin(\theta_{\mathrm{Tx},k})} ]^{\mathrm{T}}, \nonumber \end{align} where $\lambda_{n} = c/(n/(NT_s)+f_{c})$ is the signal wavelength at the $n$-th subcarrier and $d$ denotes the distance between the antenna elements (we will use $d=\lambda_{c}/2$). We note that when $B\ll f_c$, $\lambda_{n} \approx \lambda_c$, and $\eqref{steringvector1}$ reverts to the standard narrow-band model. \subsection{Received Signal Model} The received signal for subcarrier $n$ and transmission $g$, after \ac{CP} removal and \acf{FFT}, can be expressed as \begin{equation} \mathbf{y}^{(g)}[n]=\mathbf{H}[n]\mathbf{F}^{(g)}[n]\mathbf{x}^{(g)}[n]+\mathbf{n}^{(g)}[n],\label{Receivedb1} \end{equation} where $\mathbf{n}^{(g)}[n]\in\mathbb{C}^{N_r}$ is a Gaussian noise vector with zero mean and variance $N_{0}/2$ per real dimension. Our goal is now to estimate the position $\mathbf{p}$ and orientation $\alpha$ of the \ac{MS} from $\{\mathbf{y}^{(g)}[n]\}_{\forall n, g}$. We will first derive a fundamental lower bound on the estimation uncertainty and then propose a novel practical estimator. \section{Position and Orientation Estimation: Fundamental Bounds}\label{SEC:FundamentalBound} In this section, we derive the \ac{FIM} and the \acf{CRB} for the estimation problem of position and orientation of the \ac{MS} for \ac{LOS}, \ac{NLOS}, and \ac{OLOS}. To simplify the notation and without loss of generality, we consider the case of $G=1$, i.e., only 1 OFDM symbol is transmitted. \subsection{FIM Derivation for Channel Parameters} Let $\boldsymbol{\eta}\in\mathbb{R}^{5(K+1)}$ be the vector consisting of the unknown channel parameters \begin{equation}\label{Parameters1} \boldsymbol{\eta}=\begin{bmatrix}\boldsymbol{\eta}^{\mathrm{T}}_{0},\ldots,\boldsymbol{\eta}^{\mathrm{T}}_{K}\end{bmatrix}^{\mathrm{T}}, \end{equation} in which $\boldsymbol{\eta}_{k}$ consists of the unknown channel parameters (delay, \ac{AOD}, \ac{AOA}, and channel coefficients) for the $k$-th path \begin{equation}\label{Parameters2} \boldsymbol{\eta}_{k}=\begin{bmatrix} \tau_{k},\boldsymbol{\theta}_{k}^{\mathrm{T}},\tilde{\mathbf{h}}_{k}^{\mathrm{T}} \end{bmatrix}^{\mathrm{T}}, \end{equation} where $\tilde{\mathbf{h}}_{k}=[\tilde{h}_{\mathrm{R},k},\tilde{h}_{\mathrm{I},k}]^{\mathrm{T}}$ contains the real and imaginary parts defined as $\tilde{h}_{\mathrm{R},k}$ and $\tilde{h}_{\mathrm{I},k}$, respectively, and $\boldsymbol{\theta}_{k}=\begin{bmatrix}\theta_{\mathrm{Tx},k},\theta_{\mathrm{Rx},k}\end{bmatrix}^{\mathrm{T}}$. 
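To fix notation before deriving the bounds, the following minimal sketch (Python/NumPy; the function names and example values are ours, not from any reference implementation) assembles the wideband channel matrices of \eqref{Channel1} from the per-path parameters collected in $\boldsymbol{\eta}$:
\begin{verbatim}
import numpy as np

def ula_response(N, theta, lam_n, d):
    """Frequency-dependent ULA response at wavelength lam_n, cf. (steringvector1)."""
    idx = np.arange(N) - (N - 1) / 2                 # symmetric element indices
    return np.exp(2j * np.pi * (d / lam_n) * idx * np.sin(theta)) / np.sqrt(N)

def channel_matrix(n, N, Ts, fc, Nt, Nr, paths, d):
    """H[n] = A_Rx[n] Gamma[n] A_Tx[n]^H; paths = [(h_tilde, tau, th_tx, th_rx), ...],
    where h_tilde = sqrt(Nt*Nr/rho_k)*h_k already absorbs the path loss."""
    c = 299792458.0
    lam_n = c / (n / (N * Ts) + fc)                  # wavelength at subcarrier n
    H = np.zeros((Nr, Nt), dtype=complex)
    for h_tilde, tau, th_tx, th_rx in paths:
        gamma_n = h_tilde * np.exp(-2j * np.pi * n * tau / (N * Ts))
        H += gamma_n * np.outer(ula_response(Nr, th_rx, lam_n, d),
                                ula_response(Nt, th_tx, lam_n, d).conj())
    return H

# Hypothetical example: 60 GHz carrier, B = 100 MHz, a single LOS path.
H0 = channel_matrix(n=0, N=20, Ts=1e-8, fc=60e9, Nt=65, Nr=65,
                    paths=[(1.0, 10e-9, 0.2, 0.5)], d=299792458.0 / 60e9 / 2)
\end{verbatim}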
Defining $\hat{\boldsymbol{\eta}}$ as an unbiased estimator of $\boldsymbol{\eta}$, the mean squared error (MSE) is bounded as \cite{Kay} \begin{equation}\label{Parameters3} \mathbb{E}_{\mathbf{y}\vert\boldsymbol{\eta}}\left[(\hat{\boldsymbol{\eta}}-\boldsymbol{\eta})(\hat{\boldsymbol{\eta}}-\boldsymbol{\eta})^{\mathrm{T}}\right]\succeq\mathbf{J}^{-1}_{\boldsymbol{\eta}}, \end{equation} in which $\mathbb{E}_{\mathbf{y}\vert\boldsymbol{\eta}}[.]$ denotes the expectation parameterized by the unknown parameters $\boldsymbol{\eta}$, and $\mathbf{J}_{\boldsymbol{\eta}}$ is the $5(K+1)\times 5(K+1)$ FIM defined as \begin{equation}\label{Parameters4} \mathbf{J}_{\boldsymbol{\eta}}\triangleq\mathbb{E}_{\mathbf{y}\vert\boldsymbol{\eta}}\left[-\frac{\partial^{2} \ln f(\mathbf{y}\vert\boldsymbol{\eta})}{\partial\boldsymbol{\eta}\partial\boldsymbol{\eta}^{T}}\right], \end{equation} where $f(\mathbf{y}\vert\boldsymbol{\eta})$ is the likelihood function of the random vector $\mathbf{y}$ conditioned on $\boldsymbol{\eta}$. More specifically, $f(\mathbf{y}\vert\boldsymbol{\eta})$ can be written as \cite{Poor} \begin{equation}\label{Parameters5b} \!\!f(\mathbf{y}\vert\boldsymbol{\eta})\!\propto\! \exp\!\left\{\!\frac{2}{N_{0}}\!\!\sum^{N-1}_{n=0}\Re\{\boldsymbol{\mu}^{\mathrm{H}}[n]\mathbf{y}[n]\}\!-\!\frac{1}{N_{0}}\!\!\sum^{N-1}_{n=0}\Vert\boldsymbol{\mu}[n]\Vert_{2}^{2}\!\right\}\!, \end{equation} where $\boldsymbol{\mu}[n]\triangleq\mathbf{H}[n]\mathbf{F}[n]\mathbf{x}[n]$ and $\propto$ denotes equality up to irrelevant constants. The \ac{FIM} in \eqref{Parameters4} can be structured as \begin{equation}\label{Parameters6w} \mathbf{J}_{\boldsymbol{\eta}}=\begin{bmatrix} \mathbf{\Psi}(\boldsymbol{\eta}_{0},\boldsymbol{\eta}_{0})&\ldots&\mathbf{\Psi}(\boldsymbol{\eta}_{0},\boldsymbol{\eta}_{K})\\ \vdots&\ddots&\vdots\\ \mathbf{\Psi}(\boldsymbol{\eta}_{K},\boldsymbol{\eta}_{0})&\ldots&\mathbf{\Psi}(\boldsymbol{\eta}_{K},\boldsymbol{\eta}_{K}) \end{bmatrix}, \end{equation} in which $\mathbf{\Psi}(\mathbf{x}_{r},\mathbf{x}_{s})$ is defined as \begin{equation}\label{Parameters6ww} \mathbf{\Psi}(\mathbf{x}_{r},\mathbf{x}_{s})\triangleq\mathbb{E}_{\mathbf{y}\vert\boldsymbol{\eta}}\left[-\frac{\partial^{2} \ln f(\mathbf{y}\vert\boldsymbol{\eta})}{\partial \mathbf{x}_{r}\partial \mathbf{x}^{\mathrm{T}}_{s}}\right]. \end{equation} The $5 \times 5$ matrix $\mathbf{\Psi}(\boldsymbol{\eta}_{r},\boldsymbol{\eta}_{s})$ is structured as \begin{equation}\label{Parameters8w} \mathbf{\Psi}(\boldsymbol{\eta}_{r},\boldsymbol{\eta}_{s})=\begin{bmatrix} \Psi(\tau_{r},\tau_{s})&\mathbf{\Psi}(\tau_{r},\boldsymbol{\theta}_{s})&\mathbf{\Psi}(\tau_{r},\tilde{\mathbf{h}}_{s})\\ \mathbf{\Psi}(\boldsymbol{\theta}_{r},\tau_{s})&\mathbf{\Psi}(\boldsymbol{\theta}_{r},\boldsymbol{\theta}_{s})&\mathbf{\Psi}(\boldsymbol{\theta}_{r},\tilde{\mathbf{h}}_{s})\\ \mathbf{\Psi}(\tilde{\mathbf{h}}_{r},\tau_{s})&\mathbf{\Psi}(\tilde{\mathbf{h}}_{r},\boldsymbol{\theta}_{s})&\mathbf{\Psi}(\tilde{\mathbf{h}}_{r},\tilde{\mathbf{h}}_{s}) \end{bmatrix}. \end{equation} The entries of $\mathbf{\Psi}(\boldsymbol{\eta}_{r},\boldsymbol{\eta}_{s})$ are derived in Appendix \ref{elements}.
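Since every entry of $\mathbf{J}_{\boldsymbol{\eta}}$ derives from the noise-free mean $\boldsymbol{\mu}[n]=\mathbf{H}[n]\mathbf{F}[n]\mathbf{x}[n]$, the closed-form expressions of Appendix \ref{elements} can be cross-checked numerically with the standard Gaussian-FIM identity $\mathbf{J}_{\boldsymbol{\eta}}=\frac{2}{N_{0}}\sum_{n}\Re\left\{\left(\partial\boldsymbol{\mu}[n]/\partial\boldsymbol{\eta}\right)^{\mathrm{H}}\left(\partial\boldsymbol{\mu}[n]/\partial\boldsymbol{\eta}\right)\right\}$. A minimal sketch (NumPy; it assumes a user-supplied function stacking all $\boldsymbol{\mu}[n]$ into one vector, and uses finite differences where the closed-form derivatives would be used in practice):
\begin{verbatim}
import numpy as np

def fim_numeric(mu_of_eta, eta, N0, eps=1e-7):
    """Numerical FIM for y[n] = mu[n](eta) + noise (variance N0/2 per real dim).
    mu_of_eta: real parameter vector eta -> stacked complex mean over subcarriers."""
    D = np.zeros((mu_of_eta(eta).size, eta.size), dtype=complex)  # d mu / d eta
    for i in range(eta.size):
        step = np.zeros_like(eta)
        step[i] = eps
        D[:, i] = (mu_of_eta(eta + step) - mu_of_eta(eta - step)) / (2 * eps)
    return (2.0 / N0) * np.real(D.conj().T @ D)
\end{verbatim}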
\subsection{FIM for Position and Orientation}\label{Trans_Convert} We determine the FIM in the position space through a transformation of variables from $\boldsymbol{\eta}$ to $\tilde{\boldsymbol{\eta}}=\begin{bmatrix}\tilde{\boldsymbol{\eta}}^{\mathrm{T}}_{0},\ldots,\tilde{\boldsymbol{\eta}}^{\mathrm{T}}_{K}\end{bmatrix}^{\mathrm{T}}$, where $\tilde{\boldsymbol{\eta}}_{k}=\begin{bmatrix}\mathbf{s}^{\mathrm{T}}_{k},\tilde{\mathbf{h}}^{\mathrm{T}}_{k}\end{bmatrix}^{\mathrm{T}}$ for $k > 0$ and $\tilde{\boldsymbol{\eta}}_{0}=\begin{bmatrix}\mathbf{p}^{\mathrm{T}},\alpha,\tilde{\mathbf{h}}^{\mathrm{T}}_{0}\end{bmatrix}^{\mathrm{T}}$. If the \ac{LOS} path is blocked (i.e., \ac{OLOS}), we note that we must consider $\boldsymbol{\eta}_{\mathrm{olos}}=[\boldsymbol{\eta}^{\mathrm{T}}_{1},\ldots,\boldsymbol{\eta}^{\mathrm{T}}_{K}]^{\mathrm{T}}$ and $\tilde{\boldsymbol{\eta}}_{\mathrm{olos}}=[\mathbf{p}^{\mathrm{T}},\alpha,\tilde{\boldsymbol{\eta}}^{\mathrm{T}}_{1},\ldots,\tilde{\boldsymbol{\eta}}^{\mathrm{T}}_{K}]^{\mathrm{T}}$. The \ac{FIM} of $\tilde{\boldsymbol{\eta}}$ is obtained by means of the $(4K+5)\times 5(K+1)$ transformation matrix $\mathbf{T}$ as \begin{equation}\label{TFIM1} \mathbf{J}_{\tilde{\boldsymbol{\eta}}}=\mathbf{T}\mathbf{J}_{\boldsymbol{\eta}}\mathbf{T}^{\mathrm{T}}, \end{equation} where \begin{equation}\label{TFIM2} \mathbf{T}\triangleq\frac{\partial\boldsymbol{\eta}^{\mathrm{T}}}{\partial\tilde{\boldsymbol{\eta}}}. \end{equation} The entries of $\mathbf{T}$ can be obtained from the relations between the parameters in $\boldsymbol{\eta}$ and $\tilde{\boldsymbol{\eta}}$, dictated by the geometry of the problem shown in Fig.~\ref{NLOS_Link}: \begin{align} \tau_{0} & = \Vert\mathbf{p}-\mathbf{q}\Vert_{2}/c, \label{TFIM2xy1}\\ \tau_{k}& = \Vert\mathbf{q}-\mathbf{s}_{k}\Vert_{2}/c+\Vert\mathbf{p}-\mathbf{s}_{k}\Vert_{2}/c,\: k>0 \label{TFIM2xy1b}\\ \theta_{\mathrm{Tx},0} & = \arccos((p_{x}-q_{x})/\Vert\mathbf{p}-\mathbf{q}\Vert_{2}),\label{TFIM2xy2}\\ \theta_{\mathrm{Tx},k} & = \arccos((s_{k,x}-q_{x})/\Vert\mathbf{s}_{k}-\mathbf{q}\Vert_{2}),\: k>0\label{TFIM2xy3}\\ \theta_{\mathrm{Rx},k} & = \pi -\arccos((p_{x}-s_{k,x})/\Vert\mathbf{p}-\mathbf{s}_{k}\Vert_{2})-\alpha ,\: k>0\label{TFIM2xy4}\\ \theta_{\mathrm{Rx},0} & =\pi+\arccos((p_{x}-q_{x})/\Vert\mathbf{p}-\mathbf{q}\Vert_{2})-\alpha.\label{TFIM2xy5} \end{align} Consequently, we obtain \begin{equation}\label{TFIM3} \mathbf{T}=\begin{bmatrix} \mathbf{T}_{0,0}&\ldots&\mathbf{T}_{K,0}\\ \vdots&\ddots&\vdots\\ \mathbf{T}_{0,K}&\ldots&\mathbf{T}_{K,K} \end{bmatrix}, \end{equation} in which $\mathbf{T}_{k,k'}$ is defined as \begin{equation}\label{TFIM4} \mathbf{T}_{k,k'}\triangleq\frac{\partial\boldsymbol{\eta}^{\mathrm{T}}_{k}}{\partial\tilde{\boldsymbol{\eta}}_{k'}}.
\end{equation} For $k'\neq 0$, $\mathbf{T}_{k,k'}$ is obtained as \begin{equation}\label{TFIM5} \mathbf{T}_{k,k'}=\begin{bmatrix} \partial\tau_{k}/\partial\mathbf{s}_{k'}&\partial\boldsymbol{\theta}^{\mathrm{T}}_{k}/ \partial\mathbf{s}_{k'}&\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/ \partial\mathbf{s}_{k'}\\ \partial\tau_{k}/\partial\tilde{\mathbf{h}}_{k'}&\partial\boldsymbol{\theta}^{\mathrm{T}}_{k}/ \partial\tilde{\mathbf{h}}_{k'}&\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/\partial\tilde{\mathbf{h}}_{k'} \end{bmatrix}, \end{equation} and $\mathbf{T}_{k,0}$ is obtained as \begin{equation}\label{TFIM6} \mathbf{T}_{k,0}=\begin{bmatrix} \partial\tau_{k}/\partial\mathbf{p}&\partial\boldsymbol{\theta}^{\mathrm{T}}_{k}/ \partial\mathbf{p}&\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/ \partial\mathbf{p}\\ \partial\tau_{k}/ \partial\alpha&\partial\boldsymbol{\theta}^{\mathrm{T}}_{k}/ \partial\alpha&\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/ \partial\alpha\\ \partial\tau_{k}/\partial\tilde{\mathbf{h}}_{0}&\partial\boldsymbol{\theta}^{\mathrm{T}}_{k}/ \partial\tilde{\mathbf{h}}_{0}&\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/\partial\tilde{\mathbf{h}}_{0} \end{bmatrix}, \end{equation} where \begin{align*} \partial\tau_{0}/\partial\mathbf{p} & =\frac{1}{c}\begin{bmatrix}\cos(\theta_{\mathrm{Tx},0}),\sin(\theta_{\mathrm{Tx},0})\end{bmatrix}^{\mathrm{T}} ,\\ \partial\theta_{\mathrm{Tx},0}/\partial\mathbf{p} & =\frac{1}{\Vert\mathbf{p}-\mathbf{q}\Vert_{2}}\begin{bmatrix}-\sin(\theta_{\mathrm{Tx},0}), \cos(\theta_{\mathrm{Tx},0})\end{bmatrix}^{\mathrm{T}},\\ \partial\theta_{\mathrm{Rx},0}/\partial\mathbf{p} & =\frac{1}{\Vert\mathbf{p}-\mathbf{q}\Vert_{2}}\begin{bmatrix}-\sin(\theta_{\mathrm{Tx},0}), \cos(\theta_{\mathrm{Tx},0})\end{bmatrix}^{\mathrm{T}},\\ \partial\theta_{\mathrm{Rx},k}/\partial\alpha & =-1, \: k\ge 0 \end{align*} \begin{align*} \partial\tau_{k}/\partial\mathbf{p} & = \frac{1}{c}\begin{bmatrix}\cos(\pi-\theta_{\mathrm{Rx},k}), -\sin(\pi-\theta_{\mathrm{Rx},k})\end{bmatrix}^{\mathrm{T}},\:k> 0\\ \partial\tau_{k}/\partial\mathbf{s}_{k} & = \frac{1}{c}\begin{bmatrix}\cos(\theta_{\mathrm{Tx},k})+\cos(\theta_{\mathrm{Rx},k}), \sin(\theta_{\mathrm{Tx},k})+\sin(\theta_{\mathrm{Rx},k})\end{bmatrix}^{\mathrm{T}},\:k> 0\\ \partial\theta_{\mathrm{Tx},k}/\partial\mathbf{s}_{k}& =\frac{1}{\Vert\mathbf{s}_{k}-\mathbf{q}\Vert_{2}}\begin{bmatrix}-\sin(\theta_{\mathrm{Tx},k}),\cos(\theta_{\mathrm{Tx},k})\end{bmatrix}^{\mathrm{T}},\:k> 0\\ \partial\theta_{\mathrm{Rx},k}/\partial\mathbf{p}& = \frac{1}{\Vert\mathbf{p}-\mathbf{s}_{k}\Vert_{2}}\begin{bmatrix}\sin(\pi-\theta_{\mathrm{Rx},k}), \cos(\pi-\theta_{\mathrm{Rx},k})\end{bmatrix}^{\mathrm{T}},\: k > 0\\ \partial\theta_{\mathrm{Rx},k}/\partial\mathbf{s}_{k}& =-\frac{1}{\Vert\mathbf{p}-\mathbf{s}_{k}\Vert_{2}}\begin{bmatrix}\sin(\pi-\theta_{\mathrm{Rx},k}), \cos(\pi-\theta_{\mathrm{Rx},k})\end{bmatrix}^{\mathrm{T}},\:k> 0 \end{align*} and $\partial\tilde{\mathbf{h}}^{\mathrm{T}}_{k}/\partial\tilde{\mathbf{h}}_{k}=\mathbf{I}_{2}$ for $k \ge 0$. The remaining entries of $\mathbf{T}$ are zero.
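The forward mapping \eqref{TFIM2xy1}--\eqref{TFIM2xy5} is straightforward to implement; the sketch below (NumPy; the helper name is ours) returns the position-induced channel parameters for the LOS path and the one-bounce NLOS paths, and can be used, for instance, to validate the Jacobian entries above by finite differences:
\begin{verbatim}
import numpy as np

def channel_params(p, q, scatterers, alpha, c=0.299792458):  # c in m/ns -> tau in ns
    """(tau_k, theta_Tx_k, theta_Rx_k) per path; k = 0 is LOS, cf. (TFIM2xy1)-(TFIM2xy5)."""
    d0 = np.linalg.norm(p - q)
    th_tx0 = np.arccos((p[0] - q[0]) / d0)
    params = [(d0 / c, th_tx0, np.pi + th_tx0 - alpha)]       # LOS path
    for s in scatterers:                                      # one-bounce NLOS paths
        d1, d2 = np.linalg.norm(s - q), np.linalg.norm(p - s)
        th_tx = np.arccos((s[0] - q[0]) / d1)
        th_rx = np.pi - np.arccos((p[0] - s[0]) / d2) - alpha
        params.append(((d1 + d2) / c, th_tx, th_rx))
    return params
\end{verbatim}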
\subsection{Bounds on Position and Orientation Estimation Error} The \ac{PEB} is obtained by inverting $\mathbf{J}_{\tilde{\boldsymbol{\eta}}}$, summing the first two diagonal entries of the inverse, and taking the square root: \begin{equation}\label{PEBexpr} \mathrm{PEB}=\sqrt{\mathrm{tr}\left\{[\mathbf{J}^{-1}_{\tilde{\boldsymbol{\eta}}}]_{1:2,1:2}\right\}}, \end{equation} and the \ac{REB} is obtained as: \begin{equation}\label{REBexpr} \mathrm{REB}=\sqrt{[\mathbf{J}^{-1}_{\tilde{\boldsymbol{\eta}}}]_{3,3}}, \end{equation} where the operations $[.]_{1:2,1:2}$ and $[.]_{3,3}$ denote the selection of the first $2\times 2$ sub-matrix and the third diagonal entry of $\mathbf{J}^{-1}_{\tilde{\boldsymbol{\eta}}}$, respectively. \subsection{The Effect of Multi-Path Components on Position and Orientation Estimation Error} In this subsection, we discuss the effect of adding \ac{MPCs} for localization under different conditions. As the number of antennas in the MS increases, the scalar product between steering vectors corresponding to different receive directions tends to vanish, i.e., $\vert\mathbf{a}^{\mathrm{H}}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},r})\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},s})\vert\ll 1$ for $\theta_{\mathrm{Rx},r} \neq \theta_{\mathrm{Rx},s}$. Also, increasing the number of antenna elements in the transmitter results in narrower beams and the spatial correlation between different beams is reduced. Moreover, as the system bandwidth increases, the different \ac{MPCs} coming from different scatterers can be more easily resolved. In other words, the MPCs can be considered to be orthogonal \cite{LeitingerJSAC2015,WitrisalSPM2016}. Consequently, large $N_{t}$, $N_{r}$, and bandwidth lead to very small multipath cross-correlation terms in the \ac{FIM} \cite{DBLP:journals/corr/Abu-ShabanZASW17}. Ignoring those terms, the approximate expression for the \ac{EFIM} of position and rotation angle $\mathbf{J}_{e}(\mathbf{p},\alpha)$ with large $N_{t}$, $N_{r}$, and bandwidth is\footnote{In computing \eqref{ans_com1}, we used the fact that the last two rows of $\mathbf{T}_{k,0}$ are zero for $k\neq 0$.} \begin{equation}\label{ans_com1} \mathbf{J}_{e}(\mathbf{p},\alpha)\approx\tilde{\mathbf{T}}_{0,0}\mathbf{\Lambda}_{e,0}\tilde{\mathbf{T}}^{\mathrm{T}}_{0,0}+ \sum_{k=1}^{K}\left[\mathbf{\Upsilon}_{e,k}\right]_{1:3,1:3}, \end{equation} where \begin{equation}\label{ans_com2} \mathbf{\Upsilon}_{e,k}=\mathbf{T}_{k,0}\mathbf{\Psi}(\boldsymbol{\eta}_{k},\boldsymbol{\eta}_{k})\mathbf{T}^{\mathrm{T}}_{k,0}-\mathbf{T}_{k,0}\mathbf{\Psi}(\boldsymbol{\eta}_{k},\boldsymbol{\eta}_{k})\mathbf{T}^{\mathrm{T}}_{k,k}\left(\mathbf{T}_{k,k}\mathbf{\Psi}(\boldsymbol{\eta}_{k},\boldsymbol{\eta}_{k})\mathbf{T}^{\mathrm{T}}_{k,k}\right)^{-1} \mathbf{T}_{k,k}\mathbf{\Psi}(\boldsymbol{\eta}_{k},\boldsymbol{\eta}_{k})\mathbf{T}^{\mathrm{T}}_{k,0}, \end{equation} in which $\tilde{\mathbf{T}}_{0,0}$ is the $3\times 3$ sub-matrix in the transformation matrix $\mathbf{T}_{k,0}$ for $k=0$ in \eqref{TFIM6} containing the derivatives with respect to $\mathbf{p}$ and $\alpha$, $[.]_{1:3,1:3}$ denotes the selection of the first $3\times 3$ sub-matrix, and $\mathbf{\Lambda}_{e,0}$ denotes the EFIM of the delay, AOD, and AOA from LOS, i.e., $\{\tau_{0},\theta_{\mathrm{Tx},0},\theta_{\mathrm{Rx},0}\}$. From simulations, it is observed that the exact and approximate FIM lead to nearly identical PEBs, under the mentioned conditions.
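Given any (exact or approximate) FIM of the transformed parameters, the bounds \eqref{PEBexpr} and \eqref{REBexpr} reduce to a few lines of linear algebra; a minimal sketch (NumPy):
\begin{verbatim}
import numpy as np

def peb_reb(J):
    """PEB and REB from the FIM of eta_tilde: the position occupies the first
    two entries and the rotation angle the third, cf. (PEBexpr) and (REBexpr)."""
    Jinv = np.linalg.inv(J)
    peb = np.sqrt(np.trace(Jinv[:2, :2]).real)
    reb = np.sqrt(Jinv[2, 2].real)
    return peb, reb
\end{verbatim}
For large $K$ it is numerically preferable to avoid the full inversion and obtain the relevant $3\times 3$ block through a Schur complement, which is precisely the \ac{EFIM} construction discussed above.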
Because the \ac{MPCs} are approximately orthogonal, greedy techniques from compressed sensing, which extract one path after another, are a natural tool for such scenarios. In the LOS case, \eqref{ans_com1} only contains the term corresponding to $k=0$, i.e., the first term. When MPCs are present, the terms corresponding to $k\geq 1$ appear, i.e., the second summand in \eqref{ans_com1}, which contains terms that are added and others that are subtracted (because the scatterer location is an additional parameter that has to be estimated for each MPC \cite[eq. (3.59)]{LeitingerPhD2016}). The additive terms imply that the presence of \ac{MPCs} helps the localization of the MS, as they add information to the EFIM. In general, the \ac{MPCs} make a positive contribution to the FIM, and hence reduce the CRB, as shown in \cite{LeitingerJSAC2015,WitrisalSPM2016}. Only when the MPCs heavily overlap in the angular and time domains, especially with the LOS path, do the negative terms become dominant, in which case the presence of MPCs degrades the localization of the MS. \section{Position and Orientation Estimation: Estimator in Beamspace} Next, we propose the use of a beamspace channel transformation in order to estimate the channel parameters in \eqref{Receivedb1}. The considered beamspace representation of the channel reduces the complexity by exploiting the sparsity of the \ac{mm-wave} MIMO channel. If the fractional bandwidth and the number of antennas do not violate the condition of small array dispersion \cite{widebandbrady}, there exists a common sparse support across all subcarriers. Consequently, the \ac{DCS-SOMP} method from \cite{Duarte2} can be applied for the estimation of \ac{AOA}, \ac{AOD}, and \ac{TOA}. As the estimates of \ac{AOA} and \ac{AOD} are limited to lie on a grid defined by the transformation, we apply a refinement of the estimates of all parameters using the \ac{SAGE} algorithm. Finally, we invoke the \ac{EXIP} to solve for the position $\mathbf{p}$ and orientation $\alpha$. \subsection{Beamspace Channel Representation} We introduce the $N_t \times N_t$ transformation matrix, uniformly sampling the virtual spatial angles \cite{BspaceSayeedx} \begin{align*} \mathbf{U}_{\mathrm{Tx}}&\triangleq\left[\mathbf{u}_{\mathrm{Tx}}({-(N_{t}-1)/2}),\ldots,\mathbf{u}_{\mathrm{Tx}}({(N_{t}-1)/2})\right],\\ \mathbf{u}_{\mathrm{Tx}}(p)&\triangleq \begin{bmatrix} e^{-j2\pi \frac{N_{t}-1}{2} \frac{p}{{N_{t}}}},\ldots,e^{j2\pi \frac{N_{t}-1}{2} \frac{p}{N_{t}}} \end{bmatrix}^{\mathrm{T}}, \end{align*} where we assume $N_t$ to be odd. Similarly, we define the $N_r \times N_r$ matrix $\mathbf{U}_{\mathrm{Rx}}$. Both $\mathbf{U}_{\mathrm{Tx}}$ and $\mathbf{U}_{\mathrm{Rx}}$ are unitary matrices. The partial virtual representation of the channel with respect to the angular domain can be written as \begin{align} \check{\mathbf{H}}[n]&=\mathbf{U}_{\mathrm{Rx}}^{\mathrm{H}}\mathbf{H}[n]\mathbf{U}_{\mathrm{Tx}}\label{BWTransceiver1a}\\ & = \sum_{k=0}^{K}\gamma_{n}(h_{k},\tau_{k})\mathbf{U}_{\mathrm{Rx}}^{\mathrm{H}}\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},k})\mathbf{a}^{\mathrm{H}}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},k})\mathbf{U}_{\mathrm{Tx}}.
\end{align} It is readily verified that \cite{widebandbrady} \begin{align} [\check{\mathbf{H}}[n]]_{i,i'}&=\sum_{k=0}^{K}\gamma_{n}(h_{k},\tau_{k})\chi_{r}\big(\frac{d}{\lambda_{n}}\sin(\theta_{\mathrm{Rx},k})-\frac{i}{N_{r}}\big)\chi_{{t}}\big(\frac{d}{\lambda_{n}}\sin(\theta_{\mathrm{Tx},k})-\frac{i'}{N_{t}}\big),\label{BWTransceiver1b} \end{align} for $-(N_r-1)/2 \le i \le (N_r-1)/2$ and $-(N_t-1)/2 \le i' \le (N_t-1)/2$. We have introduced \begin{align} \chi_t(\phi) & = \frac{\sin(\pi N_{t}\phi)}{\sqrt{N_{t}}\sin(\pi\phi)},\\ \chi_r(\phi) & = \frac{\sin(\pi N_{r}\phi)}{\sqrt{N_{r}}\sin(\pi\phi)}. \end{align} From \eqref{BWTransceiver1b}, it is observed that $\check{\mathbf{H}}[n]$ is approximately sparse, since `strong' components are only present in the directions of $\{\theta_{\mathrm{Tx},k}\}$ and $\{\theta_{\mathrm{Rx},k}\}$. Stacking the observation $\mathbf{y}^{(g)}[n]$ from \eqref{Receivedb1}, we obtain \begin{equation}\label{BWTransceiver2x} \check{\mathbf{y}}[n]=\mathbf{\Omega}[n]\check{\mathbf{h}}[n]+\check{\mathbf{n}}[n], \end{equation} where \begin{align} \mathbf{\Omega}[n]&=\begin{bmatrix} \mathbf{\Omega}^{(1)}[n]\\\vdots\\\mathbf{\Omega}^{(G)}[n] \end{bmatrix},\\ \mathbf{\Omega}^{(g)}[n]&=(\mathbf{Z}^{(g)}_{\mathrm{Tx}}[n])^{\mathrm{T}}\otimes\mathbf{U}_{\mathrm{Rx}},\\ \mathbf{Z}^{(g)}_{\mathrm{Tx}}[n]&=\mathbf{U}^{\mathrm{H}}_{\mathrm{Tx}}\mathbf{F}^{(g)}[n]\mathbf{x}^{(g)}[n],\\ \check{\mathbf{h}}[n] &= \mathrm{vec}(\check{\mathbf{H}}[n]). \end{align} Hence, since $\check{\mathbf{h}}[n]$ is an approximately sparse vector, we can interpret solving \eqref{BWTransceiver2x} for $\check{\mathbf{h}}[n]$ as a \ac{CS} problem, allowing us to utilize tools from that domain. In principle, the columns of $\mathbf{U}_{\mathrm{Tx}}$ and $\mathbf{U}_{\mathrm{Rx}}$ associated with the non-zero entries of the sparse vector $\check{\mathbf{h}}[n]$ provide coarse estimates of the AOA/AOD, while the entries in $\check{\mathbf{h}}[n]$ are estimates of $\gamma_{n}(h_{k},\tau_{k})$ (including the effect of the functions $\chi_t(\cdot)$ and $\chi_r(\cdot)$). The latter values can then be used to estimate $\tau_{k}$ for each path. Since the vectors $\check{\mathbf{h}}[n]\in\mathbb{C}^{N_{r}N_{t}\times 1}$, for $n=0,\ldots,N-1$, corresponding to the sensing matrix $\mathbf{\Omega}[n]$ in \eqref{BWTransceiver2x} are approximately jointly $(K+1)$-sparse, i.e., the support of $\check{\mathbf{h}}[n]$ does not vary significantly from subcarrier to subcarrier, we can use specialized techniques, such as \ac{DCS-SOMP}, for estimating all $\check{\mathbf{h}}[n]$ jointly in an efficient manner. Based on the above discussion, we propose to use the following approach: \begin{enumerate} \item Coarse estimation of AOA/AOD using a modified \ac{DCS-SOMP} algorithm. \item Fine estimation using the SAGE algorithm, initialized by the coarse estimates. \item Estimation of the position and orientation. \end{enumerate} \subsubsection*{Remark} The above sparse representation is not unique. Another representation could rely on a sparse vector of length $N_{t}\times N_{r}\times N$, where each entry would then correspond to an AOA/AOD/TOA triplet. However, the complexity of such an approach would be significantly higher, since $N$ is generally a large number. \subsection{Step 1: Coarse Estimation of Channel Parameters using DCS-SOMP}\label{EstRef} The first stage of the algorithm involves calling the DCS-SOMP algorithm, providing estimates of the number of paths, the AOA/AOD, and estimates of $\check{\mathbf{h}}[n]$.
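To illustrate the beamspace sparsity that Step 1 exploits, the following sketch (NumPy; the values are hypothetical) evaluates the transmit-side kernel $\chi_{t}$ of \eqref{BWTransceiver1b} for a single path and locates the dominant virtual beam:
\begin{verbatim}
import numpy as np

def chi(phi, N):
    """Kernel sin(pi*N*phi)/(sqrt(N)*sin(pi*phi)), with its limit at integer phi."""
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    out = np.empty_like(phi)
    sing = np.isclose(np.sin(np.pi * phi), 0.0)
    out[~sing] = np.sin(np.pi * N * phi[~sing]) / (np.sqrt(N) * np.sin(np.pi * phi[~sing]))
    out[sing] = np.sqrt(N) * np.cos(np.pi * N * phi[sing]) / np.cos(np.pi * phi[sing])
    return out

Nt, theta = 65, 0.3                                  # antennas, AOD in rad
i = np.arange(Nt) - (Nt - 1) / 2                     # virtual beam indices
mag = np.abs(chi(0.5 * np.sin(theta) - i / Nt, Nt))  # transmit factor, d = lambda/2
print(int(i[np.argmax(mag)]))                        # peak near Nt*sin(theta)/2
\end{verbatim}
Most of the energy falls into a few adjacent beams, which is what makes $\check{\mathbf{h}}[n]$ approximately sparse.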
For the sake of completeness, the steps of DCS-SOMP can be found in Algorithm \ref{algor0_det}. We note that the algorithm is rank-blind as it does not assume knowledge of the number of paths (i.e., $K+1$) \cite{Davies}. Since $K+1$ is unknown, we compare the change of the residual fitting error $\sum_{n=0}^{N-1}\Vert\mathbf{r}_{t-1}[n]-\mathbf{r}_{t-2}[n]\Vert^{2}_{2}$ at each iteration $t$ with a threshold $\delta$. The value for $\delta$ is obtained using a procedure similar to that in \cite{Marzi}: \begin{equation}\label{reqproof3} \delta=N_0\gamma^{-1}\left(N,\Gamma(N)(1-{P}_{\mathrm{fa}})^{{1}/({N_{r}N_{t}})}\right), \end{equation} in which $\gamma^{-1}\left(N,x\right)$ denotes the inverse of the incomplete gamma function, $\Gamma(N)$ is the gamma function, and ${P}_{\mathrm{fa}}$ is the false alarm probability. \begin{algorithm}[h!] \caption{Modified DCS-SOMP\label{algor0_det}} \textbf{Input:} Received signals $\check{\mathbf{y}}[n]$, sensing matrix $\mathbf{\Omega}[n]$, and the threshold $\delta$.\\ \textbf{Output:} Estimates of $K$, ${\theta}_{\mathrm{Tx},k}$, ${\theta}_{\mathrm{Rx},k}$, $\check{\mathbf{h}}[n]$, $n=0,\ldots,N-1.$ \begin{algorithmic}[1] \STATE For $n=0,\ldots,N-1$, the residual vectors are set to $\mathbf{r}_{-1}[n]=\mathbf{0}$ and $\mathbf{r}_{0}[n]=\check{\mathbf{y}}[n]$, the orthogonalized coefficient vector $\hat{\boldsymbol{\beta}}_{n}=\mathbf{0}$, $\mathcal{K}_{0}$ is chosen to be an empty set, and iteration index $t=1$. $\boldsymbol{\omega}_{m}[n]$ is the $m$-th column of measurement matrix $\mathbf{\Omega}[n]$. \WHILE{ $\sum_{n=0}^{N-1}\Vert\mathbf{r}_{t-1}[n]-\mathbf{r}_{t-2}[n]\Vert^{2}_{2}>\delta$} \STATE Find AOA/AOD pair \begin{align} \tilde{n}_{t}& =\underset{m=1,\ldots,N_{r}N_{t}} {\mathrm{argmax}} \:\sum_{n=0}^{N-1}\frac{\vert \boldsymbol{\omega}_{m}^{\mathrm{H}}[n]\mathbf{r}_{t-1}[n]\vert}{\Vert\boldsymbol{\omega}_{m}[n]\Vert_{2}},\label{tinex1ee}\\ n_{\mathrm{Tx},t}&=\lceil \tilde{n}_{t}/N_{r}\rceil, ~~ n_{\mathrm{Rx},t}=\mathrm{mod}(\tilde{n}_{t}-1,N_{r})+1,\\ \hat{\theta}^{(0)}_{\mathrm{Tx},t} & =\arcsin\left((\lambda_{c}/d)(n_{\mathrm{Tx},t}-{(N_{t}-1)/2}-1)/N_{t}\right),\label{tinex1}\\ \hat{\theta}^{(0)}_{\mathrm{Rx},t}& =\arcsin\left((\lambda_{c}/d)(n_{\mathrm{Rx},t}-{(N_{r}-1)/2}-1)/N_{r}\right).\label{rinex1} \end{align} \STATE Update AOA/AOD set of indices $\mathcal{K}_{t}=\mathcal{K}_{t-1}\cup\{\tilde{n}_{t}\}$. \STATE Orthogonalize the selected basis vector: \begin{equation}\label{BWTransceiver2zfh} \boldsymbol{\rho}_{t}[n]=\boldsymbol{\omega}_{\tilde{n}_{t}}[n]-\sum_{\tilde{t}=1}^{t-1} \frac{\boldsymbol{\omega}^{\mathrm{H}}_{\tilde{n}_{t}}[n]\boldsymbol{\rho}_{\tilde{t}}[n]}{\Vert\boldsymbol{\rho}_{\tilde{t}}[n]\Vert_{2}^{2}}\boldsymbol{\rho}_{\tilde{t}}[n]. \end{equation} \STATE Update the residual vector $\mathbf{r}_{t}[n]$ by subtracting the effect of chosen columns from $\mathbf{r}_{t-1}[n]$: $\mathbf{r}_{t}[n]=\mathbf{r}_{t-1}[n]-\hat{\beta}_{n}(t)\boldsymbol{\rho}_{t}[n]$, where \begin{equation}\label{BWTransceiver2zdyb} \hat{\beta}_{n}(t)=\frac{\boldsymbol{\rho}^{\mathrm{H}}_{t}[n]\mathbf{r}_{t-1}[n]}{\Vert\boldsymbol{\rho}_{t}[n]\Vert^{2}_{2}}. \end{equation} \STATE $t=t+1$.
\ENDWHILE \STATE Perform QR factorization of the mutilated basis $\mathbf{\Omega}_{\mathcal{K}_{t}}[n]=[\boldsymbol{\omega}_{\tilde{n}_{1}}[n],\ldots,\boldsymbol{\omega}_{\tilde{n}_{\hat{K}+1}}[n]]=\mathbf{\Upsilon}[n]\mathbf{R}[n]$, where $\mathbf{\Upsilon}[n]=[\boldsymbol{\rho}_{1}[n],\ldots,\boldsymbol{\rho}_{\hat{K}+1}[n]]$ and $\mathbf{R}[n]$ is an upper triangular matrix. Since $\mathbf{\Omega}_{\mathcal{K}_{t}}[n]\hat{\check{\mathbf{h}}}[n]=\mathbf{\Upsilon}[n]\mathbf{R}[n]\hat{\check{\mathbf{h}}}[n]=\mathbf{\Upsilon}[n]\hat{\boldsymbol{\beta}}_{n}$, we obtain \begin{equation}\label{BWTransceiver2z} \hat{\check{\mathbf{h}}}[n]=\mathbf{R}^{-1}[n]\hat{\boldsymbol{\beta}}_{n}. \end{equation} \end{algorithmic} \end{algorithm} For each path $k = 0,\ldots, \hat{K}$, we can now write \begin{equation}\label{BWTransceiver2tz0} \hat{\check{\mathbf{h}}}^{(k)}=\tilde{h}_{k}\mathbf{A}(\tau_{k})\mathbf{z}^{(k)}+\mathbf{v}^{(k)}, \end{equation} where $\hat{\check{\mathbf{h}}}^{(k)}=[\hat{\check{h}}^{(k)}[0],\ldots,\hat{\check{h}}^{(k)}[N-1]]^{\mathrm{T}}$ in which $\hat{\check{h}}^{(k)}[n]$ is the entry on subcarrier $n$, related to the $k$-th path found in Algorithm \ref{algor0_det}, $\mathbf{A}(\tau_{k})=\mathrm{diag}\{1,\ldots,e^{-j2\pi (N-1)\tau_{k}/(NT_{s})}\}$, $\mathbf{v}^{(k)}$ is the $N\times 1$ noise vector, and $\mathbf{z}^{(k)}$ has entries \begin{equation}\label{Refine2} z_{n}(k)\triangleq\mathbf{u}^{\mathrm{H}}_{\mathrm{Rx}}(\frac{n_{\mathrm{Rx},k}-{(N_{r}-1)/2}-1}{N_{r}})\mathbf{a}_{\mathrm{Rx},n}(\hat{\theta}^{(0)}_{\mathrm{Rx},k})\mathbf{a}^{\mathrm{H}}_{\mathrm{Tx},n}(\hat{\theta}^{(0)}_{\mathrm{Tx},k})\mathbf{u}_{\mathrm{Tx}}(\frac{n_{\mathrm{Tx},k}-{(N_{t}-1)/2}-1}{N_{t}}). \end{equation} For the purpose of coarse estimation, we ignore the dependence on $n$ in \eqref{Refine2}, leading to the simple model \begin{align} \hat{\check{\mathbf{h}}}^{(k)}=\tilde{h}_{k}{z}^{(k)}\mathbf{a}(\tau_k)+\mathbf{v}^{(k)}, \end{align} where $\mathbf{a}(\tau_k) = [1,\ldots,e^{-j2\pi (N-1)\tau_{k}/(NT_{s})}]^{\mathrm{T}}$ and ${z}^{(k)}$ is as in \eqref{Refine2}, but considering only $\lambda_c$ instead of $\lambda_n$. From this model, we can recover $\tau_{k}$ and $\tilde{h}_{k}$ by solving a \ac{LS} problem \begin{equation}\label{BWTransceiver2tzsf} [\hat{\tau}^{(0)}_{k},\hat{\tilde{h}}^{(0)}_{k}]=\underset{\tau_{k},\tilde{h}_{k}}{\mathrm{argmin}}\:\:\Vert\hat{\check{\mathbf{h}}}^{(k)}-\tilde{h}_{k}{z}^{(k)}\mathbf{a}(\tau_k)\Vert_{2}^{2}. \end{equation} Solving for ${\tilde{h}}_{k}$ yields \begin{align}\label{BWTransceiver2tzsfzxe2} \hat{\tilde{h}}^{(0)}_{k}= \frac{\mathbf{a}^{\mathrm{H}}(\tau_k)\hat{\check{\mathbf{h}}}^{(k)}}{{z}^{(k)}N}. \end{align} Substituting \eqref{BWTransceiver2tzsfzxe2} into \eqref{BWTransceiver2tzsf} and expanding the square allows us to solve for ${\tau}_{k}$: \begin{align}\label{BWTransceiver2tzsfzxe3} \hat{\tau}^{(0)}_{k}=\underset{\tau_{k}}{\mathrm{argmax}}\:\: |\mathbf{a}^{\mathrm{H}}(\tau_k)\hat{\check{\mathbf{h}}}^{(k)}|^2. \end{align} \subsection{Step 2: Fine Estimation of Channel Parameters using SAGE} Channel parameter estimates are refined in an iterative procedure, which is initialized by the estimates from step 1. In principle, we can perform an iterative ascent algorithm directly on the log-likelihood function associated with the model \eqref{BWTransceiver2x}. However, this requires a multi-dimensional search and leads to computationally complex solutions.
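Before turning to that refinement, note that the Step-1 delay search \eqref{BWTransceiver2tzsfzxe3} is a cheap one-dimensional problem, since $\mathbf{a}(\tau)$ is a pure phase ramp across subcarriers. A minimal sketch (NumPy; the grid resolution is our choice):
\begin{verbatim}
import numpy as np

def coarse_toa(h_hat, Ts, grid=4096):
    """tau_hat = argmax_tau |a(tau)^H h_hat|^2 over a dense delay grid; then the
    gain via (BWTransceiver2tzsfzxe2), up to the known scalar z^(k)."""
    N = h_hat.size
    n = np.arange(N)
    taus = np.linspace(0.0, N * Ts, grid, endpoint=False)
    corr = np.exp(2j * np.pi * np.outer(taus, n) / (N * Ts)) @ h_hat  # a(tau)^H h_hat
    tau_hat = taus[np.argmax(np.abs(corr) ** 2)]
    g_hat = np.exp(2j * np.pi * n * tau_hat / (N * Ts)) @ h_hat / N   # = z^(k) h_tilde_k
    return tau_hat, g_hat
\end{verbatim}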
A more practical approach than such a direct multi-dimensional ascent is the \ac{SAGE} algorithm, which treats the incomplete data space in \eqref{BWTransceiver2x} as the superposition of $\hat{K}+1$ complete data spaces $\check{\mathbf{y}}_{k}[n]$: \begin{equation}\label{Refine1} \check{\mathbf{y}}[n]=\sum_{k=0}^{\hat{K}}\underbrace{\mathbf{\Omega}[n]\check{\mathbf{h}}_{k}[n]+\check{\mathbf{n}}_{k}[n]}_{\check{\mathbf{y}}_{k}[n]}, \end{equation} where $\check{\mathbf{h}}_{k}[n]$ denotes the vectorized form of $\check{\mathbf{H}}_{k}[n]=\mathbf{U}_{\mathrm{Rx}}^{\mathrm{H}}\mathbf{H}_{k}[n]\mathbf{U}_{\mathrm{Tx}}$ with $\mathbf{H}_{k}[n]$ being the corresponding term for the $k$-th path in the channel frequency response $\mathbf{H}[n]$ in \eqref{Channel1}. Writing \eqref{Refine1} for all subcarriers results in: \begin{equation}\label{Refine1x} \check{\mathbf{y}}=\sum_{k=0}^{\hat{K}}\underbrace{\check{\mathbf{\Omega}}\check{\mathbf{h}}_{k} +\check{\mathbf{n}}_{k}}_{\check{\mathbf{y}}_{k}}, \end{equation} where \begin{align*} \check{\mathbf{\Omega}}&=\mathrm{diag}\left\{\mathbf{\Omega}[0],\ldots,\mathbf{\Omega}[N-1]\right\},\\ \check{\mathbf{y}}&=\left[\check{\mathbf{y}}^{\mathrm{T}}[0],\ldots,\check{\mathbf{y}}^{\mathrm{T}}[N-1]\right]^{\mathrm{T}},\\ \check{\mathbf{h}}_{k}&=\left[\check{\mathbf{h}}^{\mathrm{T}}_{k}[0],\ldots,\check{\mathbf{h}}^{\mathrm{T}}_{k}[N-1]\right]^{\mathrm{T}},\\ \check{\mathbf{n}}_{k}&=\left[\check{\mathbf{n}}_{k}^{\mathrm{T}}[0],\ldots,\check{\mathbf{n}}_{k}^{\mathrm{T}}[N-1]\right]^{\mathrm{T}}. \end{align*} In the $(m+1)$-th iteration, where $m$ is the iteration index, the expectation and maximization steps are performed as described below. For the initialization of the iterative procedure, we use the AOA/AOD, TOA, and channel coefficients from the detection phase, using $\hat{\theta}^{(0)}_{\mathrm{Tx},k}$ and $\hat{\theta}^{(0)}_{\mathrm{Rx},k}$ obtained from \eqref{tinex1} and \eqref{rinex1}, respectively, $\hat{\tau}^{(0)}_{k}$ computed from \eqref{BWTransceiver2tzsfzxe3}, and the corresponding coefficient obtained from \eqref{BWTransceiver2tzsfzxe2}. \subsubsection*{Expectation step} We compute the conditional expectation of the log-likelihood function of the hidden data space $\check{\mathbf{y}}_{k}$, given the previous estimate $\hat{\boldsymbol{\eta}}^{(m)}$ and the incomplete data space $\check{\mathbf{y}}$: \begin{equation}\label{ExpectationS1} Q(\boldsymbol{\eta}_{k}\vert\hat{\boldsymbol{\eta}}^{(m)})\triangleq \mathbb{E}\left[\ln f(\check{\mathbf{y}}_{k}\vert\boldsymbol{\eta}_{k},\{\hat{\boldsymbol{\eta}}^{(m)}_{l}\}_{l\neq k})\vert\check{\mathbf{y}},\hat{\boldsymbol{\eta}}^{(m)}\right]. \end{equation} For $k=0,\ldots,\hat{K}$, we obtain \begin{equation}\label{ExpectationS2} Q(\boldsymbol{\eta}_{k}\vert\hat{\boldsymbol{\eta}}^{(m)})\propto -\Vert\hat{\mathbf{z}}^{(m)}_{k}-\check{\boldsymbol{\mu}}(\boldsymbol{\eta}_{k})\Vert_{2}^{2}, \end{equation} where $\check{\boldsymbol{\mu}}(\boldsymbol{\eta}_{k})=\check{\mathbf{\Omega}}\check{\mathbf{h}}_{k}$, and \begin{equation}\label{ExpectationS3} \hat{\mathbf{z}}^{(m)}_{k}=\check{\mathbf{y}}-\sum_{l\neq k, l=0}^{\hat{K}}\check{\boldsymbol{\mu}}(\hat{\boldsymbol{\eta}}^{(m)}_{l}). \end{equation} \subsubsection*{Maximization step} The goal is to find $\boldsymbol{\eta}_{k}$ such that \eqref{ExpectationS2} is maximized. In other words, we have \begin{equation}\label{MaximizationS1} \hat{\boldsymbol{\eta}}^{(m+1)}_{k}=\underset{\boldsymbol{\eta}_{k}}{\mathrm{argmax}}\:\:Q(\boldsymbol{\eta}_{k}\vert\hat{\boldsymbol{\eta}}^{(m)}).
\end{equation} Solving \eqref{MaximizationS1} directly for $\boldsymbol{\eta}_{k}$ is analytically complex because it is hard to compute the gradient and Hessian with respect to $\boldsymbol{\eta}_{k}$. Instead, we update the parameters $\hat{\theta}^{(m+1)}_{\mathrm{Tx},k}$, $\hat{\theta}^{(m+1)}_{\mathrm{Rx},k}$, $\hat{\tau}^{(m+1)}_{k}$, and $\hat{\tilde{h}}^{(m+1)}_{k}$ sequentially using Gauss-Seidel-type iterations \cite{Ortega}. \subsection{Step 3: Conversion to Position and Rotation Angle Estimates} As a final step, based on the refined estimates of AOA/AOD/TOA from step 2, we now show how the position and orientation of the \ac{MS} are recovered. Four scenarios are considered: LOS, NLOS, OLOS, and unknown condition. \begin{itemize} \item \textbf{LOS:} When $\hat{K}=0$ (i.e., a single path is detected) and we are in the LOS condition, the expressions \eqref{TFIM2xy1}, \eqref{TFIM2xy2}, and \eqref{TFIM2xy5} describe a mapping $\boldsymbol{\eta} = \boldsymbol{f}_{\mathrm{los}}(\tilde{\boldsymbol{\eta}})$. The classical invariance principle of estimation theory is invoked to prove the equivalence of minimizing the \ac{ML} criterion in terms of either $\boldsymbol{\eta}_{0}$ or $\tilde{\boldsymbol{\eta}}_{0}$ \cite{Zacks}. Consequently, the estimated values of $\hat{\mathbf{p}}$ and $\hat{\alpha}$ are obtained directly from \begin{align}\label{EXIP2} \hat{\mathbf{p}}&=\mathbf{q}+c\hat{\tau}_{0}[\cos(\hat{\theta}_{\mathrm{Tx},0}),\sin(\hat{\theta}_{\mathrm{Tx},0})]^{\mathrm{T}},\\ \hat{\alpha}&=\pi+\hat{\theta}_{\mathrm{Tx},0}-\hat{\theta}_{\mathrm{Rx},0}. \end{align} \item \textbf{NLOS:} For the case with $\hat{K}$ scatterers and a \ac{LOS} path, the \ac{EXIP} can be used, as \eqref{TFIM2xy1}--\eqref{TFIM2xy5} describe a mapping $\boldsymbol{\eta} = \boldsymbol{f}_{\mathrm{nlos}}(\tilde{\boldsymbol{\eta}})$. Consequently, the estimate $\hat{\tilde{\boldsymbol{\eta}}}$, obtained as \begin{equation}\label{EXIP4} \hat{\tilde{\boldsymbol{\eta}}}=\underset{\tilde{\boldsymbol{\eta}}}{\mathrm{argmin}}\: \underbrace{\left(\hat{\boldsymbol{\eta}}-\boldsymbol{f}_{\mathrm{nlos}}(\tilde{\boldsymbol{\eta}})\right)^{\mathrm{T}}\mathbf{J}_{\hat{\boldsymbol{\eta}}}\left(\hat{\boldsymbol{\eta}}-\boldsymbol{f}_{\mathrm{nlos}}(\tilde{\boldsymbol{\eta}})\right)}_{v_{\mathrm{nlos}}(\tilde{\boldsymbol{\eta}})}, \end{equation} is asymptotically (w.r.t.~$G\times N$) equivalent to the ML estimate of the transformed parameter $\tilde{\boldsymbol{\eta}}$ \cite{Stoicapp,Swindlehurstt}. Note that $\mathbf{J}_{\boldsymbol{\eta}}$ could be replaced by the identity matrix, leading also to a meaningful estimator of $\tilde{\boldsymbol{\eta}}$, although probably with a slightly larger \ac{RMSE}. The \ac{LMA} can be used to solve \eqref{EXIP4} \cite{Levenberg,Marquardt}, initialized as follows: we first estimate $\hat{\mathbf{p}}$ and $\hat{\alpha}$ from the \ac{LOS} path (i.e., the path with the smallest delay). Then, for the first-order reflections, $\hat{\mathbf{s}}_{k}$ can be obtained as the intersection of the following two lines: $\tan(\pi-(\hat{\theta}_{\mathrm{Rx},k}+\hat{\alpha}))=(\hat{p}_{y}-s_{k,y})/(\hat{p}_{x}-s_{k,x})$ and $\tan(\hat{\theta}_{\mathrm{Tx},k})=(s_{k,y}-q_{y})/(s_{k,x}-q_{x})$. \item \textbf{OLOS:} For the case with $\hat{K}$ scatterers and no LOS path, the \ac{EXIP} could be used, as \eqref{TFIM2xy1b}, \eqref{TFIM2xy3}, and \eqref{TFIM2xy4} describe a mapping $\boldsymbol{\eta}_{\mathrm{olos}} = \boldsymbol{f}_{\mathrm{olos}}(\tilde{\boldsymbol{\eta}}_{\mathrm{olos}})$.
Consequently, the estimate $\hat{\tilde{\boldsymbol{\eta}}}_{\mathrm{olos}}$, obtained as \begin{equation}\label{EXIP4b} \hat{\tilde{\boldsymbol{\eta}}}_{\mathrm{olos}}=\underset{\tilde{\boldsymbol{\eta}}_{\mathrm{olos}}}{\mathrm{argmin}}\: \underbrace{\left(\hat{\boldsymbol{\eta}}_{\mathrm{olos}}-\boldsymbol{f}_{\mathrm{olos}}(\tilde{\boldsymbol{\eta}}_{\mathrm{olos}})\right)^{\mathrm{T}}\mathbf{J}_{\hat{\boldsymbol{\eta}}_{\mathrm{olos}}}\left(\hat{\boldsymbol{\eta}}_{\mathrm{olos}}-\boldsymbol{f}_{\mathrm{olos}}(\tilde{\boldsymbol{\eta}}_{\mathrm{olos}})\right)}_{v_{\mathrm{olos}}(\tilde{\boldsymbol{\eta}}_{\mathrm{olos}})}, \end{equation} is asymptotically equivalent to the ML estimate of the transformed parameter $\tilde{\boldsymbol{\eta}}_{\mathrm{olos}}$, where $\mathbf{J}_{\hat{\boldsymbol{\eta}}_{\mathrm{olos}}}$ denotes the \ac{FIM} of $\boldsymbol{\eta}_{\mathrm{olos}}$. The estimated parameters from the \ac{NLOS} links could be used to initialize $\tilde{\boldsymbol{\eta}}_{\mathrm{olos}}$ for the application of the \ac{LMA}. The process is slightly more involved than under NLOS. We consider different trial values of $\alpha$, with a resolution $\Delta \alpha$ over a range $[-\alpha_m,+\alpha_m]$ of possible rotation values. For each trial value $\hat{\alpha}_{\mathrm{trial}}$, we can find a corresponding estimate of $\mathbf{p}$, for instance, by solving a set of linear equations for two paths: \begin{equation}\label{EXIP5b} \mathbf{p}=\mathbf{q}+d_{k,1}\begin{bmatrix} \cos(\hat{\theta}_{\mathrm{Tx},k})\\\sin(\hat{\theta}_{\mathrm{Tx},k}) \end{bmatrix}+(c\hat{\tau}_{k}-d_{k,1})\begin{bmatrix} \cos(\hat{\theta}_{\mathrm{Rx},k}+\hat{\alpha}_{\mathrm{trial}})\\-\sin(\hat{\theta}_{\mathrm{Rx},k}+\hat{\alpha}_{\mathrm{trial}}) \end{bmatrix},\: k \in \{ k_1,k_2 \} \end{equation} where $d_{k,1}$ was introduced in Fig.~\ref{NLOS_Link}. After solving \eqref{EXIP5b} for $[\mathbf{p},d_{k_{1},1},d_{k_{2},1}]$, it is straightforward to determine the scatterer locations (as was done in the NLOS case). For each trial value $\hat{\alpha}_{\mathrm{trial}}$, we can then apply the \ac{LMA} to \eqref{EXIP4b} to obtain $\hat{\tilde{\boldsymbol{\eta}}}_{\mathrm{olos}}$. The solution $\hat{\tilde{\boldsymbol{\eta}}}_{\mathrm{olos}}$ with the smallest $v_{\mathrm{olos}}(\tilde{\boldsymbol{\eta}}_{\mathrm{olos}})$ (with respect to all trial values $\hat{\alpha}_{\mathrm{trial}}$) is then retained. Clearly, there is a performance/complexity trade-off based on the choice of $\Delta \alpha$. It is readily seen that to obtain estimates of all parameters, at least three scatterers are needed, since then we have 9 available estimated parameters (1 AOA, 1 AOD, 1 TOA per path) and 9 unknowns (6 scalars for the scatterer locations $\mathbf{s}_k$, 3 scalars for $\mathbf{p}$ and $\alpha$). \item \textbf{Unknown:} For the case where the receiver does not know whether it operates in \ac{NLOS} or \ac{OLOS}, it could apply the techniques above under \ac{NLOS} and under \ac{OLOS} separately. This will give two solutions with different costs (measured in terms of \eqref{EXIP4} and \eqref{EXIP4b}). The solution with the lowest cost can then be retained. \end{itemize} The complexity analysis for each step of the aforementioned algorithm is presented in Appendix \ref{comprep}. \section{Simulation Results} In this section, we present simulation results showing the values of the bounds and the performance of the proposed estimators for different system parameters.
\subsection{Simulation Setup} We consider a scenario representative of indoor localization in a small conference room, with a maximum distance between the MS and the BS of 4 m \cite{Maltsevx}. We set $f_c = 60 \:\mathrm{GHz}$, $B=100 \:\mathrm{MHz}$, $c=0.299792\:\mathrm{m}/\mathrm{ns}$, and $N=20$. A geometry-based statistical path loss model is used with path length $d_{k}$, and the number of reflectors in each path is set to one, i.e., each \ac{NLOS} path is assumed to contain a single reflector \cite{geomted}. The path loss $\rho_{k}$ between \ac{BS} and \ac{MS} for the $k$-th path is computed based on geometry statistics \cite{Qian1,Qian2}. We set \begin{equation}\label{pathgeom1} 1/\rho_{k}=\sigma^{2}_{0}\mathbb{P}_{0}(d_{k,2})\xi^{2}(d_{k})\left(\frac{\lambda_{c}}{4\pi d_{k}}\right)^{2}, \end{equation} where $\sigma^{2}_{0}$ is the reflection loss, $\mathbb{P}_{0}(d_{k,2})=(\gamma_{r}d_{k,2})^{2}e^{-\gamma_{r}d_{k,2}}$ denotes the Poisson distribution of environment geometry with density $\gamma_{r}$ (set to $1/7$ \cite{geomted}), $\xi^{2}(d_{k})$ denotes the atmospheric attenuation over distance $d_{k}$, and the last term is the free space path loss over distance $d_{k}$. For the \ac{LOS} link, we obtain \begin{equation}\label{pathgeom2} 1/\rho_{0}=\xi^{2}(d_{0})\left(\frac{\lambda_{c}}{4\pi d_{0}}\right)^{2}. \end{equation} The average reflection loss for the first-order reflection $\sigma^{2}_{0}$ is set to $-10$ dB with an \ac{RMS} deviation of $4$ dB \cite{newref5gsix}, and the atmospheric attenuation over distance $d_{k}$ is set to $16$ dB/km \cite{Rappaport}. The numbers of transmit and receive antennas are set to $N_{t}=65$ and $N_{r}=65$, respectively. The number of simultaneous beams is $M_{t}=1$, and the number of sequentially transmitted signals is $G=32$, unless otherwise stated. The \ac{BS} is located at $\mathbf{q}\:[\mathrm{m}]=[0, 0]^{\mathrm{T}}$ and the \ac{MS} is located at $\mathbf{p}\:[\mathrm{m}]=[4, 0]^{\mathrm{T}}$ with the rotation angle $\alpha=0.1\:\mathrm{rad}$. The elements of the analog beamformers are generated as random values uniformly distributed on the unit circle. The sequences $\tilde{\mathbf{x}}^{(g)}[n]=\mathbf{F}^{(g)}_{\mathrm{BB}}[n]\mathbf{x}^{(g)}[n]$ are obtained as complex exponential terms $e^{j\phi_{g,n}}$ with uniform random phases in $[0, 2\pi)$ along different subcarriers, indexed by $n$, and sequentially transmitted symbols, indexed by $g$. The values of $\sqrt{\mathrm{CRB}(\tau_{k})}$, $\sqrt{\mathrm{CRB}(\theta_{\mathrm{Rx},k})}$, and $\sqrt{\mathrm{CRB}(\theta_{\mathrm{Tx},k})}$ are defined similarly to the PEB and REB in \eqref{PEBexpr} and \eqref{REBexpr}, that is, by inverting the \ac{FIM} $\mathbf{J}_{\tilde{\boldsymbol{\eta}}}$ from \eqref{TFIM1}, choosing the corresponding diagonal entries, and taking the square root. Finally, the received \acf{SNR} is defined as \begin{align} \mathrm{SNR}\triangleq \frac{\mathbb{E}[\Vert \mathrm{diag}\{\mathbf{\Omega}[0],\ldots \mathbf{\Omega}[N-1]\} \mathrm{vec}\{\check{\mathbf{h}}[0],\ldots,\check{\mathbf{h}}[N-1]\}\Vert^{2}_{2}]}{\mathbb{E}[\Vert \mathrm{vec}\{\check{\mathbf{n}}[0],\ldots,\check{\mathbf{n}}[N-1]\} \Vert^{2}_{2}]}, \end{align} in which $\mathrm{diag}\{\cdot \}$ creates a block diagonal matrix from its arguments and $\mathrm{vec}\{ \cdot\}$ creates a tall column vector from its arguments. The \ac{RMSE} of the estimation algorithm was assessed over $1000$ Monte Carlo realizations.
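For concreteness, the path-loss model \eqref{pathgeom1}--\eqref{pathgeom2} used in this setup can be evaluated as follows (NumPy; the defaults encode the values stated above and are otherwise our choice):
\begin{verbatim}
import numpy as np

def inv_rho(d_k, d_k2=None, lam_c=0.005, sigma0_sq=0.1, gamma_r=1/7, atten_db_km=16.0):
    """Returns 1/rho_k: LOS if d_k2 is None, one-bounce NLOS otherwise.
    sigma0_sq = -10 dB average reflection loss; atten_db_km = atmospheric loss."""
    xi_sq = 10.0 ** (-atten_db_km * (d_k / 1000.0) / 10.0)   # xi^2(d_k)
    fspl = (lam_c / (4.0 * np.pi * d_k)) ** 2                # free-space term
    if d_k2 is None:
        return xi_sq * fspl                                  # LOS, (pathgeom2)
    P0 = (gamma_r * d_k2) ** 2 * np.exp(-gamma_r * d_k2)     # geometry statistic
    return sigma0_sq * P0 * xi_sq * fspl                     # NLOS, (pathgeom1)
\end{verbatim}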
The false alarm probability was set to ${P}_{\mathrm{fa}}=10^{-3}$ to determine the threshold $\delta$. \subsection{Results and Discussion} \label{results} \subsubsection*{Performance versus the number of sequential beams} Fig. \ref{Geffect} shows the \ac{CDF} of the PEB and the $\mathrm{RMSE}(\hat{\mathbf{p}})$ as a function of the number of beams for LOS conditions. The MS can be anywhere in a rectangle with vertices at the coordinates (in meters): $(2,0)$, $(4,0)$, $(2,0.3)$, and $(4,0.3)$. The signal is scaled so that the total transmit power is kept constant. By increasing the number of beams $G$, the probability of covering the target location in the specified area with a certain accuracy increases. In other words, due to the ergodicity of the process, the localization accuracy obtained with randomly selected sequential beams converges to a constant value for a sufficiently large number of beams $G$. The reason is that for a larger number of beams the bound decreases, thanks to the better spatial coverage. But this effect vanishes when the number of beams is sufficient to cover the area where the \ac{MS} may be located, and then increasing the number of beams only translates into an increased complexity. In principle, the 3 dB beam width for the ULA is approximately $2/N_{t}$, and thus shrinks as the number of transmit antennas $N_{t}$ increases. Consequently, the number of beams $G$ required to cover the target location in the specified area with the same probability increases. Similarly, by reducing the number of transmit antennas $N_{t}$, the number of beams $G$ required to cover the area decreases. However, the localization accuracy improves for a larger number of transmit antennas $N_{t}$, at the cost of transmitting more beams $G$ for the same coverage. It is observed that for the aforementioned system parameters, $G\geq 20$ randomly selected beams provide approximately the same localization accuracy at $\mathrm{CDF}=0.9$. Note that fewer beams would be needed under a well-chosen deterministic strategy. The same behavior has been observed in NLOS conditions. \begin{figure} \centering \psfrag{G}[c][]{\footnotesize number of beams} \psfrag{PEB}[c][]{\scriptsize PEB} \psfrag{RMSE}[c][]{\scriptsize \qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$} \psfrag{PEB,G=24}[c][]{\scriptsize \qquad PEB, $24$ beams} \psfrag{RMSE, G=4}[c][]{\scriptsize \:\:\:\qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$, $4$ beams} \psfrag{RMSE, G=8}[c][]{\scriptsize \:\:\:\qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$, $8$ beams} \psfrag{RMSE, G=20}[c][]{\scriptsize \qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$, $20$ beams} \psfrag{RMSE, G=24}[c][]{\scriptsize \qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$, $24$ beams} \psfrag{RMSE, G=26}[c][]{\scriptsize \qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$, $26$ beams} \psfrag{CDF}[c][]{\footnotesize CDF} \psfrag{Localization error [m]}[c][]{\footnotesize Localization error [m]} \includegraphics[width=0.7\columnwidth]{PEBCDFvsG10.eps} \caption{The effect of increasing the number of beams on (top) PEB and $\mathrm{RMSE}(\hat{\mathbf{p}})$ at $\mathrm{CDF}=0.9$ and (bottom) CDF plots for LOS conditions.} \label{Geffect} \end{figure} \subsubsection*{Performance in LOS} Fig.~\ref{ParamvsIter} shows the evolution of the \ac{RMSE} of TOA and AOA/AOD in the LOS conditions. The Cram\'{e}r-Rao bounds are shown by the red lines with the corresponding markers.
It is observed that after a few iterations of Algorithm 2 the \ac{RMSE} of TOA and AOA/AOD converges to the corresponding bounds, even for $\mathrm{SNR}=-20\:\mathrm{dB}, -10\:\mathrm{dB}, 0\:\mathrm{dB}$. The performance of the \ac{RMSE} of the estimation algorithm with respect to different values of the received SNR is shown in Fig.~\ref{ParamvsSNR}--\ref{PREBvsSNR_losss}. It is observed that beyond $\mathrm{SNR}\approx -20$ dB the \ac{RMSE} of the TOA, AOA/AOD, rotation angle, and position converges to the corresponding bounds (red dashed lines). Moreover, the proposed algorithm performs well even for very low values of the received SNR, which is the typical situation in \ac{mm-wave} systems before beamforming. \begin{figure} \centering \psfrag{snr2=-20dB}[c][]{\footnotesize \qquad\qquad$\mathrm{SNR}=-20$ dB} \psfrag{snr3=-10dB}[c][]{\footnotesize \qquad\qquad$\mathrm{SNR}=-10$ dB} \psfrag{snr3=0dB}[c][]{\footnotesize \qquad\qquad$\mathrm{SNR}=0$ dB} \psfrag{TOA0 at x}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{TOA}_{0}$ at $-20$ dB} \psfrag{TOA0 at y}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{TOA}_{0}$ at $-10$ dB} \psfrag{TOA0 at z}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{TOA}_{0}$ at $0$ dB} \psfrag{AOA0 at x}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOA}_{0}$ at $-20$ dB} \psfrag{AOA0 at y}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOA}_{0}$ at $-10$ dB} \psfrag{AOA0 at z}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOA}_{0}$ at $0$ dB} \psfrag{AOD0 at x}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOD}_{0}$ at $-20$ dB} \psfrag{AOD0 at y}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOD}_{0}$ at $-10$ dB} \psfrag{AOD0 at z}[c][]{\footnotesize \qquad\qquad$\:\:\mathrm{AOD}_{0}$ at $0$ dB} \psfrag{RMSETOA1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\tau}_{0})$ [ns]} \psfrag{RMSEAOA1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},0})$ [rad]} \psfrag{RMSEAOD1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},0})$ [rad]} \psfrag{iteration index}[c][]{\footnotesize iteration index} \includegraphics[width=0.58\columnwidth]{RMSE_LOS_IterFfU.eps} \caption{The evolution of \ac{RMSE} of TOA and AOA/AOD for the LOS for $\mathrm{SNR}=-20\:\mathrm{dB}, -10\:\mathrm{dB}, 0\:\mathrm{dB}$.
The red lines with the same markers show the bounds for the same value of \ac{SNR} corresponding to the \ac{RMSE} of TOA and AOA/AOD.} \label{ParamvsIter} \end{figure} \begin{figure} \centering \psfrag{RMSE0}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\tau}_{0})$ [ns]} \psfrag{RMSE1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},0})$ [rad]} \psfrag{RMSE2}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},0})$ [rad]} \psfrag{SNR}[c][]{\footnotesize SNR (in dB)} \psfrag{RMSEx1}[c][]{\tiny \qquad$\:\:\mathrm{RMSE}(\hat{\tau}_{0})$} \psfrag{RMSEx2}[c][]{\tiny \qquad$\:\:\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},0})$} \psfrag{RMSEx3}[c][]{\tiny \qquad$\:\:\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},0})$} \psfrag{RMSEy1}[c][]{\tiny \qquad$\:\:\sqrt{\mathrm{CRB}(\tau_{0})}$} \psfrag{RMSEy2}[c][]{\tiny \qquad$\:\:\sqrt{\mathrm{CRB}(\theta_{\mathrm{Rx},0})}$} \psfrag{RMSEy3}[c][]{\tiny \qquad$\:\:\sqrt{\mathrm{CRB}(\theta_{\mathrm{Tx},0})}$} \psfrag{iteration index}[c][]{\small iteration index} \includegraphics[width=0.58\columnwidth]{ParamvsSNRFff.eps} \caption{RMSE in dB scale plotted against received SNR for TOA and AOA/AOD in the LOS conditions. The red lines show the corresponding bounds.} \label{ParamvsSNR} \end{figure} \begin{figure} \centering \psfrag{SNR}[c][]{\footnotesize SNR (in dB)} \psfrag{REBb}[c][]{\footnotesize \qquad REB} \psfrag{PEBb}[c][]{\footnotesize \qquad PEB} \psfrag{RMSExx}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\alpha})$ [rad]} \psfrag{RMSEyy}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\mathbf{p}})$ [m]} \psfrag{RMSEx}[c][]{\footnotesize \qquad$\mathrm{RMSE}(\hat{\alpha})$} \psfrag{RMSEy}[c][]{\footnotesize \qquad$\mathrm{RMSE}(\hat{\mathbf{p}})$} \includegraphics[width=0.57\columnwidth]{PREB_LOSFf.eps} \caption{RMSE in dB scale plotted against received SNR for rotation angle (top) and position (bottom) in the \ac{LOS}. The red lines show the corresponding bounds.} \label{PREBvsSNR_losss} \end{figure} \subsubsection*{Performance in NLOS} Fig.~\ref{ParamvsIterNLOS} shows the evolution of the \ac{RMSE} of TOA and AOA/AOD for $1000$ Monte Carlo realizations in the presence of a scatterer located at $\mathbf{s}_{k}\:[\mathrm{m}]=[1.5, 0.4]^{\mathrm{T}}$. It can be observed that the \ac{RMSE} of the TOA and the AOA/AOD obtained with the proposed algorithm, for the parameters of both the LOS and the reflected signals, converges to the theoretical bounds also in this case, even at very low received SNR. At $\mathrm{SNR}\approx -5\:\mathrm{dB}$ the TOA, AOA/AOD, rotation angle, and position approach the corresponding bounds. \begin{figure} \centering \psfrag{snr2=-5dB}[c][]{\footnotesize $\qquad\mathrm{SNR}=-5$ dB} \psfrag{snr3=0dB}[c][]{\footnotesize $\qquad\mathrm{SNR}=0$ dB} \psfrag{RMSETOA1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\tau}_{0})$ [ns]} \psfrag{RMSETOA2}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\tau}_{1})$ [ns]} \psfrag{RMSEAOA1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},0})$ [rad]} \psfrag{RMSEAOA2}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},1})$ [rad]} \psfrag{RMSEAOD1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},0})$ [rad]} \psfrag{RMSEAOD2}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},1})$ [rad]} \psfrag{iteration index}[c][]{\footnotesize iteration index} \includegraphics[width=0.9\columnwidth]{RMSE_NLOS_IterFfU.eps} \caption{The evolution of \ac{RMSE} of TOA and AOA/AOD for the LOS (left column) and the NLOS (right column) paths at $\mathrm{SNR}=-5\:\mathrm{dB}, 0\:\mathrm{dB}$.
The red lines with the same markers show the bounds.} \label{ParamvsIterNLOS} \end{figure} \begin{figure} \centering \psfrag{RMSE0}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\tau}_{k})$ [ns]} \psfrag{RMSE1}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},k})$ [rad]} \psfrag{RMSE2}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},k})$ [rad]} \psfrag{SNR}[c][]{\footnotesize SNR (in dB)} \psfrag{RMSEx1}[c][]{\footnotesize \:$\qquad\mathrm{RMSE}(\hat{\tau}_{0})$} \psfrag{RMSEy1}[c][]{\footnotesize \:$\qquad\mathrm{RMSE}(\hat{\tau}_{1})$} \psfrag{RMSEx2}[c][]{\footnotesize \:\:$\qquad\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},0})$} \psfrag{RMSEy2}[c][]{\footnotesize \:\:$\qquad\mathrm{RMSE}(\hat{\theta}_{\mathrm{Rx},1})$} \psfrag{RMSEx3}[c][]{\footnotesize \:\:$\qquad\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},0})$} \psfrag{RMSEy3}[c][]{\footnotesize \:\:$\qquad\mathrm{RMSE}(\hat{\theta}_{\mathrm{Tx},1})$} \psfrag{TOA0}[c][]{\footnotesize \:$\qquad\sqrt{\mathrm{CRB}(\tau_{0})}$} \psfrag{TOA1}[c][]{\footnotesize \:$\qquad\sqrt{\mathrm{CRB}(\tau_{1})}$} \psfrag{AOA0}[c][]{\footnotesize \:\:$\qquad\sqrt{\mathrm{CRB}(\theta_{\mathrm{Rx},0})}$} \psfrag{AOA1}[c][]{\footnotesize \:\:$\qquad\sqrt{\mathrm{CRB}(\theta_{\mathrm{Rx},1})}$} \psfrag{AOD0}[c][]{\footnotesize \:\:$\qquad\sqrt{\mathrm{CRB}(\theta_{\mathrm{Tx},0})}$} \psfrag{AOD1}[c][]{\footnotesize \:\:$\qquad\sqrt{\mathrm{CRB}(\theta_{\mathrm{Tx},1})}$} \includegraphics[width=0.7\columnwidth]{ParamvsSNRnlosFf.eps} \caption{RMSE in dB scale for the \ac{NLOS} plotted against received SNR for TOA and AOA/AOD in the presence of a scatterer located at $\mathbf{s}_{k}\:[\mathrm{m}]=[1.5, 0.4]^{\mathrm{T}}$. The red lines show the corresponding bounds.} \label{ParamvsSNRnlos} \end{figure} \begin{figure} \centering \psfrag{SNR}[c][]{\footnotesize SNR (in dB)} \psfrag{REBb}[c][]{\footnotesize REB} \psfrag{PEBb}[c][]{\footnotesize PEB} \psfrag{RMSExx}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\alpha})$ [rad]} \psfrag{RMSEyy}[c][]{\footnotesize $\mathrm{RMSE}(\hat{\mathbf{p}})$ [m]} \psfrag{RMSEx}[c][]{\footnotesize $\qquad\mathrm{RMSE}(\hat{\alpha})$} \psfrag{RMSEy}[c][]{\footnotesize $\qquad\mathrm{RMSE}(\hat{\mathbf{p}})$} \includegraphics[width=0.57\columnwidth]{PREB_NLOSFfz.eps} \caption{RMSE in dB scale for the \ac{NLOS} plotted against received SNR for rotation angle (top) and position (bottom) in the presence of a scatterer located at $\mathbf{s}_{k}\:[\mathrm{m}]=[1.5, 0.4]^{\mathrm{T}}$. 
The red lines show the corresponding bounds.} \label{PREBvsSNR} \end{figure} \begin{figure} \centering \psfrag{SNR}[c][]{\small SNR (in dB)} \psfrag{REBb}[c][]{\scriptsize REB} \psfrag{PEBb}[c][]{\scriptsize PEB} \psfrag{RMSEa}[c][]{\small $\mathrm{RMSE}(\hat{\alpha})$ [rad]} \psfrag{RMSEp}[c][]{\small $\mathrm{RMSE}(\hat{\mathbf{p}})$ [m]} \psfrag{RMSEy}[c][]{\scriptsize $\qquad\qquad\qquad\qquad\mathrm{RMSE}(\hat{\alpha}),\Delta\alpha\:[\mathrm{rad}]=0.01$} \psfrag{RMSEyy}[c][]{\scriptsize $\qquad\qquad\qquad\qquad\mathrm{RMSE}(\hat{\alpha}),\Delta\alpha\:[\mathrm{rad}]=0.05$} \psfrag{RMSEx}[c][]{\scriptsize $\qquad\qquad\qquad\qquad\mathrm{RMSE}(\hat{\mathbf{p}}),\Delta\alpha\:[\mathrm{rad}]=0.01$} \psfrag{RMSExx}[c][]{\scriptsize $\qquad\qquad\qquad\qquad\mathrm{RMSE}(\hat{\mathbf{p}}),\Delta\alpha\:[\mathrm{rad}]=0.05$} \includegraphics[width=0.57\columnwidth]{PREB_OLOSfzz.eps} \caption{RMSE in dB scale plotted against received SNR for rotation angle (top) and position (bottom) in the \ac{OLOS} with three scatterers located at $\mathbf{s}_{k}\:[\mathrm{m}]=[1.5, 0.4+0.5(k-1)]^{\mathrm{T}}$ for $k=1, 2, 3$ and $\Delta\alpha\:[\mathrm{rad}]=\{0.01,0.05\}$. The red lines show the corresponding bounds.} \label{PREBvsSNRolos} \end{figure} \subsubsection*{Performance in OLOS} Finally, the performance in the \ac{OLOS} case for three scatterers located at $\mathbf{s}_{k}\:[\mathrm{m}]=[1.5, 0.4+0.5(k-1)]^{\mathrm{T}}$ for $k=1, 2, 3$ is investigated in this section, using two different initializations of the rotation angle: one with grid resolution $\Delta\alpha\:[\mathrm{rad}]=0.01$ and one with $\Delta\alpha\:[\mathrm{rad}]=0.05$. For both, we set $\alpha_{m}\:[\mathrm{rad}]=0.5$. Fig.~\ref{PREBvsSNRolos} shows the performance of the \ac{RMSE} with respect to the received \ac{SNR} for position and rotation angle estimation. The proposed estimation method approaches the bound even for the initialization with the resolution $\Delta\alpha\:[\mathrm{rad}]=0.05$. However, the performance of the estimation algorithm is dependent on the resolution of the grid of points $\Delta\alpha$. In particular, a finer grid for the rotation angle leads to better initial estimates and thus a lower final RMSE. For $\mathrm{SNR}\approx -10\:\mathrm{dB}$ the \ac{RMSE} of position and rotation angle approach the corresponding bounds. We note that, for a fixed SNR, the RMSE values in the OLOS case are significantly higher than in the NLOS case. \subsubsection*{Unknown Conditions} To analyze the application of the algorithm when the propagation conditions are unknown, we consider the case where there are three scatterers and the LOS path is blocked, that is, the OLOS condition. Starting with the wrong assumption that the path with the shortest delay is the LOS path (i.e., the NLOS condition) leads to very large values of the cost function \eqref{EXIP4} compared to the actual value of the cost function \eqref{EXIP4b}. The results are summarized in Table~\ref{tab:assumptions} for the average value of the ratio ${\Delta v}\triangleq v_{\mathrm{nlos}}(\hat{\tilde{\boldsymbol{\eta}}})/v_{\mathrm{olos}}(\hat{\tilde{\boldsymbol{\eta}}}_{\mathrm{olos}})$ between the cost functions under the wrong and true assumptions. The values in Table \ref{tab:assumptions} are obtained by averaging $100$ realizations, with a grid resolution of $\Delta\alpha\:[\mathrm{rad}]=0.05$. The slight difference in the ratio for different values of SNR is due to the limited number of trials.
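The resulting model-selection rule is simple enough to state as a short sketch (our own notation): run both conversions and retain the hypothesis with the smaller cost, which in the experiments below separates the two cases by a factor of about five.
\begin{verbatim}
def select_condition(cost_nlos, cost_olos):
    """Pick the propagation hypothesis from the final costs.

    cost_nlos -- v_nlos at the NLOS-based solution (eq. EXIP4)
    cost_olos -- v_olos at the OLOS-based solution (eq. EXIP4b)
    """
    if cost_nlos < cost_olos:
        return 'NLOS', cost_nlos
    return 'OLOS', cost_olos
\end{verbatim}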
\begin{table}[!t] \centering \caption{Unknown conditions} \label{tab:assumptions} \begin{tabular}{|c|c|c|c|c|} \hline SNR (in dB) & $-20$ & $-10$ & $0$ & $10$ \\\hline ${\Delta v}$& 5.5&5.2&5&5.3\\\hline \end{tabular} \end{table} It is clear that using the wrong assumption about the path with the shortest delay leads to much larger values of the cost function, i.e., the mean value of the ratio ${\Delta v}$ between the cost functions under the wrong and true assumptions is on the order of $5$. The main reason for the increase of the cost function under the wrong assumption about the shortest path is that the estimate of the \ac{MS} rotation angle obtained from the AOA and AOD of this path is heavily erroneous. When the shortest path is assumed to be the LOS path but is really a reflection, there is a clear mismatch between the geometry of the propagation and the model equations, since there is a scatterer that breaks the direct relation between AOA and AOD that exists for the LOS. This mismatch causes a large error in the initial position that is propagated to the final solution. Therefore, observing the ratio of the cost functions, we can identify that the path with the shortest delay is related to a scatterer and that the LOS path does not exist, that is to say, the OLOS condition is correctly recognized. \subsubsection*{Comparison of LOS versus NLOS Performance} Fig. \ref{CDFdiff} compares the performance of the positioning algorithm in LOS and NLOS for $\mathrm{SNR}= -5$ dB and $G=20$. The MS can be anywhere in the same rectangle described at the beginning of Sec.~\ref{results}. The scatterers are located at coordinates (in meters) $\mathbf{s}_{1}=(1.5,0.4)$ and $\mathbf{s}_{2}=(1.5,0.6)$. The accuracy and robustness of the localization algorithm are improved by adding the scatterers compared to the case when only the LOS is used. Moreover, the performance in the OLOS is much worse than in LOS or NLOS due to the severe effect of path loss, as shown already by comparing Figs. \ref{PREBvsSNR_losss} and \ref{PREBvsSNR} with Fig. \ref{PREBvsSNRolos}. \begin{figure} \centering \psfrag{CDF}[c][]{\small CDF} \psfrag{Localization error [m]}[c][]{\small Localization error [m]} \psfrag{NLOS with 1 scatterer}[c][]{\footnotesize \qquad NLOS with 1 scatterer} \psfrag{NLOS with 2 scatterers}[c][]{\footnotesize \qquad NLOS with 2 scatterers} \psfrag{LOS}[c][]{\footnotesize \qquad LOS} \includegraphics[width=0.57\columnwidth]{CDFvsError3b.eps} \caption{CDF of the localization error in LOS and NLOS with one and two scatterers for $\mathrm{SNR}=-5$ dB and $G=20$.} \label{CDFdiff} \end{figure} \section{Conclusion}\label{SEC:Conclusion} We have studied the determination of a receiver's position and orientation using a single transmitter in a MIMO system. Our study includes \ac{LOS}, as well as \ac{NLOS} and \ac{OLOS} conditions, shedding light on the potential of locating a receiver even when the \ac{LOS} is blocked. We have derived fundamental performance bounds on the estimation uncertainty for the delay, angle of arrival, angle of departure, and channel gain of each path, as well as the user position and orientation angle. We have also proposed a novel three-stage algorithm for the estimation of the user position and orientation angle. This algorithm determines coarse estimates of the channel parameters by exploiting the sparsity of the \ac{mm-wave} channel in beamspace, followed by an iterative refinement, and finally a conversion to position and orientation.
Through simulation studies, we demonstrate the efficiency of the proposed algorithm, and show that even in \ac{OLOS} conditions, it is possible to estimate the user's position and orientation angle, by exploiting the information coming from the multipath, though at a significant performance penalty. \appendices \section{Elements in \eqref{Parameters8w} }\label{elements} Replacing $\mathbf{y}[n]$ from (\ref{Receivedb1}) in (\ref{Parameters5b}), using (\ref{Parameters6ww}), and considering $\mathbb{E}_{\mathbf{y}\vert\boldsymbol{\eta}}[\mathbf{n}[n]]=\mathbf{0}$, we obtain \begin{equation}\label{appa0} \Psi(x_{r},x_{s})=\frac{2}{N_{0}}\sum_{n=0}^{N-1}\Re\left\{\frac{\partial\boldsymbol{\mu}^{\mathrm{H}}[n]}{\partial x_{r}}\frac{\partial\boldsymbol{\mu}[n]}{\partial x_{s}}\right\}. \end{equation} The elements of the \ac{FIM} are obtained based on \eqref{appa0}. The entry associated with the $n$-th subcarrier is denoted as $\Psi_n(x_{r},x_{s})$, and given by (for $\left\lbrace\tau_{r}, \tau_{s}\right\rbrace$ and $\left\lbrace\boldsymbol{\theta}_{r}, \boldsymbol{\theta}_{s}\right\rbrace$) \begin{IEEEeqnarray}{rCl} \Psi_{n}(\tau_{r},\tau_{s})& = &\frac{2}{N_{0}}\Re\{\tilde{h}^{*}_{r}\tilde{h}_{s} A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(2)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\},\label{Parameters9w}\\ \Psi_{n}(\tau_{r},\theta_{\mathrm{Tx},s})& = &\frac{2}{N_{0}}\Re\{j\tilde{h}^{*}_{r}\tilde{h}_{s}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(1)}_{\mathbf{D}_{\mathrm{Tx},s},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\},\label{Parameters10w}\\ \Psi_{n}(\tau_{r},\theta_{\mathrm{Rx},s})& = &\frac{2}{N_{0}}\Re\{j\tilde{h}^{*}_{r}\tilde{h}_{s}A_{\mathbf{D}_{\mathrm{Rx,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(1)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\},\label{Parameters11w}\\ \Psi_{n}(\theta_{\mathrm{Tx},r},\theta_{\mathrm{Tx},s})& = &\frac{2}{N_{0}}\Re\{\tilde{h}^{*}_{r}\tilde{h}_{s}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A_{\mathbf{Dd}_{\mathrm{Tx}},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\}, \label{Parameters12w}\\ \Psi_{n}(\theta_{\mathrm{Tx},r},\theta_{\mathrm{Rx},s})& = &\frac{2}{N_{0}}\Re\{\tilde{h}^{*}_{r}\tilde{h}_{s}A_{\mathbf{D}_{\mathrm{Rx,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathbf{D}_{\mathrm{Tx},r},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\}, \label{Parameters13w}\\ \Psi_{n}(\theta_{\mathrm{Rx},r},\theta_{\mathrm{Rx},s})& = &\frac{2}{N_{0}}\Re\{\tilde{h}^{*}_{r}\tilde{h}_{s}A_{\mathbf{D}_{\mathrm{Rx,r,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\}.\label{Parameters14w} \end{IEEEeqnarray} The following notations are introduced: \begin{align} A^{(k)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})&\triangleq\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}^{\mathrm{H}}(\theta_{\mathrm{Tx,s}})\mathbf{A}_{k,n}(\tau_{r},\tau_{s})\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}(\theta_{\mathrm{Tx,r}}),\label{Parameters15w}\\ 
A^{(l)}_{\mathbf{D}_{\mathrm{Tx},s},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})&\triangleq\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}^{\mathrm{H}}(\theta_{\mathrm{Tx,s}})\mathbf{A}_{l,n}(\tau_{r},\tau_{s})\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}(\theta_{\mathrm{Tx,r}}),\label{Parameters16w}\\ A^{(l)}_{\mathbf{D}_{\mathrm{Tx},r},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})&\triangleq\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}^{\mathrm{H}}(\theta_{\mathrm{Tx,s}})\mathbf{A}_{l,n}(\tau_{r},\tau_{s})\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}(\theta_{\mathrm{Tx,r}}),\label{Parameters17w}\\ A_{\mathbf{Dd}_{\mathrm{Tx}},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})&\triangleq\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}^{\mathrm{H}}(\theta_{\mathrm{Tx,s}})\mathbf{A}_{0,n}(\tau_{r},\tau_{s})\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}(\theta_{\mathrm{Tx,r}}),\label{Parameters18w} \end{align} where $l \in \{0,1\}$, and $\mathbf{A}_{k,n}(\tau_{r},\tau_{s}), k \in \{0,1,2 \}$, is given by \vspace{-2mm} \begin{equation}\label{Parameters19w} \mathbf{A}_{k,n}(\tau_{r},\tau_{s})\triangleq (2\pi n/(NT_{s}))^{k}~\mathbf{x}[n]\mathbf{x}^{\mathrm{H}}[n]e^{-j2\pi n(\tau_{r}-\tau_{s})/(NT_{s})}. \end{equation} The vectors $\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}(\theta_{\mathrm{Tx},r})$ and $\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}(\theta_{\mathrm{Tx},r})$ are given by $\mathbf{a}_{\mathrm{Tx},\mathbf{F},n}(\theta_{\mathrm{Tx},r}) = \mathbf{F}^{\mathrm{H}}[n]\mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},r})$ and $\mathbf{a}_{\mathbf{D}_{\mathrm{Tx}},\mathbf{F},n}(\theta_{\mathrm{Tx},r})=\mathbf{F}^{\mathrm{H}}[n]\mathbf{D}_{\mathrm{Tx},r}[n]\mathbf{a}_{\mathrm{Tx},n}(\theta_{\mathrm{Tx},r})$. The matrix $\mathbf{D}_{\mathrm{Tx},r}[n]$ is defined as \begin{equation}\label{Parameters19ww} \mathbf{D}_{\mathrm{Tx},r}[n]\triangleq j\frac{2\pi}{\lambda_{n}}d\cos(\theta_{\mathrm{Tx},r})\mathrm{diag}\{0,\ldots,N_{\mathrm{t}}-1\}. \end{equation} The scalars $A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})$, $A_{\mathbf{D}_{\mathrm{Rx,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})$, and $A_{\mathbf{D}_{\mathrm{Rx,r,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})$ are defined as \begin{align} A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})&\triangleq\mathbf{a}^{\mathrm{H}}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},r})\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},s}), \label{Parameters20w}\\ A_{\mathbf{D}_{\mathrm{Rx,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})&\triangleq\mathbf{a}^{\mathrm{H}}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},r})\mathbf{D}_{\mathrm{Rx},s}[n]\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},s}), \label{Parameters21w}\\ A_{\mathbf{D}_{\mathrm{Rx,r,s}},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})&\triangleq\mathbf{a}^{\mathrm{H}}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},r})\mathbf{D}^{\mathrm{H}}_{\mathrm{Rx},r}[n]\mathbf{D}_{\mathrm{Rx},s}[n]\mathbf{a}_{\mathrm{Rx},n}(\theta_{\mathrm{Rx},s}), \label{Parameters22w} \end{align} where $\mathbf{D}_{\mathrm{Rx},r}[n]$ has the same expression as \eqref{Parameters19ww} by replacing the subscript $\mathrm{Tx}$ by $\mathrm{Rx}$ and $N_{t}$ by $N_{r}$. 
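To make the notation concrete, a minimal Python sketch (our own; the uniform-linear-array steering-vector convention is an assumption) that assembles the building blocks \eqref{Parameters19w} and \eqref{Parameters19ww} is:
\begin{verbatim}
import numpy as np

def steering(theta, n_ant, d_over_lam):
    """Assumed ULA steering vector a(theta), spacing d/lambda_n."""
    k = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * d_over_lam * k * np.sin(theta))

def A_k_n(x_n, k, n, N, Ts, tau_r, tau_s):
    """Matrix A_{k,n}(tau_r, tau_s) of eq. (Parameters19w)."""
    w = 2 * np.pi * n / (N * Ts)
    return ((w ** k) * np.outer(x_n, x_n.conj())
            * np.exp(-1j * w * (tau_r - tau_s)))

def D_tx(theta, n_ant, d_over_lam):
    """Derivative matrix D_{Tx,r}[n] of eq. (Parameters19ww)."""
    return (1j * 2 * np.pi * d_over_lam * np.cos(theta)
            * np.diag(np.arange(n_ant)))
\end{verbatim}
Scalar quantities such as $A^{(k)}_{\mathrm{Tx},\mathbf{F},n}$ then follow as quadratic forms of the (beamformed) steering vectors with these matrices.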
The terms including channel coefficients are summarized as: \begin{multline}\label{appa1} \mathbf{\Psi}_{n}(\tau_{r},\tilde{\mathbf{h}}_{s})= \frac{2}{N_{0}}[\Re\{j\tilde{h}^{*}_{r}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(1)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\},\\\Re\{-\tilde{h}^{*}_{r}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(1)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\}], \end{multline} \begin{multline}\label{appa2} \mathbf{\Psi}_{n}(\theta_{\mathrm{Tx},r},\tilde{\mathbf{h}}_{s})= \frac{2}{N_{0}}[\Re\{\tilde{h}^{*}_{r}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathbf{D}_{\mathrm{Tx},r},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\},\\\Re\{j\tilde{h}^{*}_{r}A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathbf{D}_{\mathrm{Tx},r},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx,s}},\theta_{\mathrm{Tx,r}})\}], \end{multline} \begin{multline}\label{appa3} \mathbf{\Psi}_{n}(\theta_{\mathrm{Rx},r},\tilde{\mathbf{h}}_{s})= -\frac{2}{N_{0}}[\Re\{\tilde{h}^{*}_{r}A_{\mathbf{D}_{\mathrm{Rx},r},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\},\\\Re\{j\tilde{h}^{*}_{r}A_{\mathbf{D}_{\mathrm{Rx},r},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\}], \end{multline} \begin{multline}\label{appa4} \Psi_{n}(\Re\{\tilde{h}_{r}\},\Re\{\tilde{h}_{s}\})=\Psi_{n}(\Im\{\tilde{h}_{r}\},\Im\{\tilde{h}_{s}\})=\\ \frac{2}{N_{0}}\Re\{A_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\}, \end{multline} \begin{multline}\label{appa5} \Psi_{n}(\Re\{\tilde{h}_{r}\},\Im\{\tilde{h}_{s}\})=-\Psi_{n}(\Im\{\tilde{h}_{r}\},\Re\{\tilde{h}_{s}\})=\\ \frac{2}{N_{0}}\Re\{jA_{\mathrm{Rx},n}(\theta_{\mathrm{Rx,r}},\theta_{\mathrm{Rx,s}})A^{(0)}_{\mathrm{Tx},\mathbf{F},n}(\tau_{r},\tau_{s},\theta_{\mathrm{Tx},s},\theta_{\mathrm{Tx},r})\}. \end{multline} \section{Complexity Analysis}\label{comprep} We analyze the complexity of different stages of the proposed algorithm.\begin{itemize}\item Coarse Estimation: The complexity in performing \eqref{tinex1ee} is on the order of $O(N^{2}_{r}N^{2}_{t}GN_{\mathrm{sub}})$ where $N_{\mathrm{sub}}$ denotes the few subcarriers sufficient to detect the dominant path. The QR factorization of the mutilated basis $\mathbf{\Omega}_{\mathcal{K}_{t}}[n]$ approximately requires $O(GN_{r}\hat{K}^{2})$ operations for each subcarrier, and matrix inversion to obtain the channel coefficients in \eqref{BWTransceiver2z} approximately takes $O(N\hat{K}^{3})$ operations for all the subcarriers. The complexity in computing \eqref{BWTransceiver2tzsfzxe3} is on the order of $O(ND_{o}\hat{K})$ where $D_{o}$ denotes the number of delay grid points, and \eqref{BWTransceiver2tzsfzxe2} requires $O(N\hat{K})$ operations. Consequently, the maximum complexity from coarse estimation of the channel parameters is dominated by the term $\hat{K}\times O(N^{2}_{r}N^{2}_{t}GN_{\mathrm{sub}})$. 
\item Fine Estimation: In the refinement phase, the complexity is mainly affected by Gauss-Seidel-type iterations with first and second order derivatives of a vector $\mathbf{a}(x)$ of length $L_{\mathrm{x}}$ with respect to a variable $x$ that can be delay, AOA, and AOD. These operations lead to a complexity on the order of $O(L^{2}_{\mathrm{x}}N)$ for each path. Given the subsequent path refinement, the maximum complexity of fine estimation is on the order of $O(\hat{K}^{2})\times O(L^{2}_{\mathrm{x}}N)$.\item Conversion to Position and Orientation: The conversion to position and orientation in the LOS case is easy to implement since it involves only some basic operations. For the NLOS and OLOS scenarios, the LMA algorithm is applied. It is not considered the complexity driver, since it combines the advantages of gradient-descent and Gauss-Newton methods. The LMA algorithm can be effectively applied by implementing delayed gratification, which leads to higher success rate and fewer Jacobian evaluations. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} One of the most intriguing and mysterious objects in the Universe is the black hole. This area of research has come to the forefront ever since black holes were observationally identified using gravitational wave astronomy. The study of black holes is now considered mainstream and astrophysically relevant. The thermodynamical properties of black holes, the singularities inside them, etc., are yet to be understood fully. The black hole information loss problem and its resolution are still sought-after mysteries. One of the intriguing aspects of a black hole is the volume of its interior. This question has been addressed by various authors \cite{MP,Gibbons,CR} using different approaches. Maulik Parikh \cite{MP} discusses the definition of volume by constructing an invariant slice of the spacetime inside the black hole horizon. Gibbons et al.\ \cite{Gibbons} have discussed the thermodynamical volume, $V_{th}$, inside a black hole in the presence of a varying cosmological constant $\Lambda$. $V_{th}$ is defined as the variable conjugate to $\Lambda$ appearing in the first law of thermodynamics for black holes, i.e., $dE = TdS + \Omega dJ+\Phi dQ+V_{th}d\Lambda$, where $E$ is the gravitational enthalpy of the spacetime. Christodoulou and Rovelli \cite{CR} provided a somewhat different definition of the black hole volume, in which the volume grows indefinitely as a function of the advanced time. In this article, we focus on the approach due to Christodoulou and Rovelli \cite{CR}. They find the volume inside the black hole via a variational approach: among the spacelike hypersurfaces extending from the event horizon to the singularity, they identify the extremal curve, or maximal hypersurface, that yields the maximum volume in the interior of the black hole. They set up a Lagrangian framework to solve this problem. A similar method is adopted in the papers \cite{BZ,BJ,Ong,YCO,CL,XY,MZ,SSR}. In this article, we work on two aspects concerning the volume of black holes. The first part deals with the evolution of the interior volume of the trapped region while the black hole is in the process of formation. The apparent horizon is the boundary of the trapped region. As the trapped region evolves, the maximal hypersurface evolves and so does the maximum interior volume. This evolution of the volume is not just due to the expanding trapped region, but also due to the evolution of the maximal hypersurface. We set up a variational problem where we write down a Lagrangian for the spacelike curve between the apparent horizon and the singularity that gives the maximal volume. We then obtain the equations for this maximal hypersurface from the Euler-Lagrange equations for the obtained Lagrangian. We first solve the problem for the simple case of 2+1 dimensions, where the underlying equations and analysis are simpler. The features of 2+1 dimensional gravity do not generalize in a straightforward way to a general dimension $D$. We then carry out the analysis for the case of $D$-dimensional spherically symmetric dust evolution that leads to the formation of a black hole. It is proved in \cite{CR} that the volume generated by the maximal hypersurface has its maximum contribution from a certain region, located at a radius that we call $R_{ss}$ in this article (the subscript stands for `steady state'). This region provides an excellent approximation for the interior volume of the black hole.
For a Schwarzschild black hole with mass $M$, the event horizon is at $R=2M$, while $R_{ss}=3M/2$. This steady state radius $R_{ss}$ lies in the interior of the black hole. This region of the maximal hypersurface was first discovered by Reinhart \cite{Reinhart} in 1973. Similar points of interest in the interior of a black hole have been found, in the examples described below, for various other black holes like the BTZ, Kerr and Kerr-AdS black holes. The volume of the interior of a black hole is shown to be dependent on this radius $R_{ss}$. The volume of a Schwarzschild black hole is shown to be equal to $V=3\sqrt{3}M^2v$, where $v$ is the advanced time. The special feature of the surface is that its normal is divergence free, i.e., the trace of the extrinsic curvature tensor vanishes. This unique point has played a pivotal role in the approximation of the volume of a black hole in a more general setting. For instance, the asymptotic volume of a BTZ black hole crucially depends on the point $R=\sqrt{M/2|\Lambda|}$ in \cite{MZ}, with $M$ being the ADM mass of the BTZ black hole and $\Lambda$ the cosmological constant. Even in the presence of rotation, the asymptotic formula for the interior volume of the BTZ black hole crucially depends on the point $R=\sqrt{M/2|\Lambda|}$, as shown in \cite{SSR}. In the estimation of the volume of the Kerr family of black holes \cite{BJ,XY}, the asymptotic value of the volume depends on such special points. When one explores the interesting aspects of black hole interiors, one tends to think of the interior as a somewhat trivial region, with the only interesting feature being the spacetime singularity (and the inner horizon in the case of rotating or charged black holes). The presence of $R_{ss}$ between the event horizon and the singularity therefore reveals yet another interesting region in the interior of the black hole. These points have not been explored in full generality. In this article, we explore the features of this special point in the spacetime, which plays a crucial role in the interior region. We point out the relation of this special point of the maximal hypersurface with the Kodama vector: we show that it corresponds to the location where the maximal hypersurface is tangential to the Kodama vector. We study the emergence and evolution of this crucial piece of the hypersurface during the evolution and formation of black holes. The study is exhaustive in that we explore the evolutionary aspects in $D$ dimensions with and without a cosmological constant. We arrive at interesting results for the various cases discussed. We develop a formula for tracking the evolution of $R_{ss}$, from which we can deduce its location. In section II we discuss the evolution of the maximal hypersurface for the homogeneous dust model in the 2+1 dimensional and $D$-dimensional cases. In section III, we use the extrinsic curvature method to estimate the steady state radius in the 2+1 dimensional case. In the subsections of section III, we discuss the vacuum case, the static black hole case and the cosmological case in 2+1 dimensions. In section IV, we discuss the extrinsic curvature method to locate the steady state radius in $D$ dimensions. In the subsections of section IV, we discuss the vacuum scenario, the $D$-dimensional Schwarzschild black hole case, the $D$-dimensional Schwarzschild deSitter and anti-deSitter cases and the cosmological case. In section V we present our conclusions.
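Before proceeding, the Schwarzschild values quoted above can be checked with a short symbolic computation (our own illustrative sketch): inside the horizon ($R<2M$), the maximal slice of \cite{CR} picks out the radius maximizing $W(R)=R^{4}\left(2M/R-1\right)$, whose square root is the relevant volume density.
\begin{verbatim}
import sympy as sp

R, M = sp.symbols('R M', positive=True)
W = R**4 * (2*M/R - 1)            # volume density squared, 2M > R
R_ss = sp.solve(sp.diff(W, R), R)
print(R_ss)                        # [3*M/2], the Reinhart radius
print(sp.sqrt(W.subs(R, sp.Rational(3, 2)*M)))  # 3*sqrt(3)*M**2/4
\end{verbatim}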
\section{Evolution of the maximal hypersurface for homogeneous dust collapse model} In this section we track the evolution of the interior volume of the BTZ black hole with zero angular momentum. We study a dynamical situation corresponding to the formation of the BTZ black hole, so that we can estimate the evolution of the maximal volume in the interior. The parameter $J$ is kept zero in order to obtain analytically tractable expressions. We moreover consider the simplest case of a homogeneous evolution of dust. The metric for a circularly symmetric evolving inhomogeneous dust model is given below \cite{rossandmann, sashideep}. \begin{multline}\label{metric1} ds^2=-dt^2+\frac{(\cos(\sqrt{|\Lambda| }t)+B'\sin(\sqrt{|\Lambda| }t))^2dr^2}{|\Lambda| r^2+|\Lambda| B^2-2\kappa\int_{0}^{r}{\rho _i(s) sds}+1}\\ +(r\cos(\sqrt{|\Lambda| }t)+B\sin(\sqrt{|\Lambda| }t))^2d\phi^2 \end{multline} Here $t$ is the comoving time of the dust cloud and $\Lambda$ is the cosmological constant. As the cosmological constant $\Lambda=-1/l^2$ is negative, the background space is AdS. The coordinate $r$ is the comoving radius fixed to a dust particle, the distribution being circularly symmetric. The metric is expressed in terms of two functions $B(r)$ and $\rho_i(r)$. Here $R(r,t) = r\cos(\sqrt{|\Lambda| }t)+B\sin(\sqrt{|\Lambda| }t)$ is the area radius, defined geometrically using the Killing vector $\partial/\partial \phi$ such that the perimeter of the comoving shell $r$ is $2 \pi R$. $B(r)$ determines the initial velocity of the dust cloud and $\rho_i(r)$ the initial density. As the cloud evolves, the density evolves, and the area radius of each shell decreases in the black hole formation regime. In the present context, we focus on a homogeneous dust interior, since this model offers simpler equations without losing the essential features. We choose the boundary of the homogeneous dust to be at a comoving coordinate $r=r_0$. Outside this comoving radius $r_0$, we assume vacuum. We further choose $B(r)=0$ and $\rho_i=|\Lambda|/\kappa$, so that the metric (\ref{metric1}) takes its simplest form, given by \begin{equation} ds^2=-dt^2+\cos^2(\sqrt{|\Lambda| }t)dr^2+r^2\cos^2(\sqrt{|\Lambda| }t)d\phi^2 \label{nmetric1} \end{equation} or \begin{equation} ds^2=-dt^2+ R'^2dr^2+ R^2d\phi^2 \label{matrix} \end{equation} where $R = R(t,r) = r \cos(\sqrt{|\Lambda| }t)$ and $R' = \partial R/\partial r = \cos(\sqrt{|\Lambda| }t)$ is the derivative of $R$ w.r.t. $r$. We define $F(r)$ by the expression $F(r)=2\kappa\int_{0}^{r}{\rho _i(s) sds}$. This is the Misner-Sharp mass; for the parameters considered in this article, it is equal to $|\Lambda|r^2$. We note that for the model under consideration, if the upper limit is taken at the boundary $r_0$, the ADM mass is $M= \kappa \rho_i r_0^2-1=|\Lambda|r_0^2-1$. It is easily shown that the metric (\ref{metric1}) can be smoothly matched at the hypersurface $r=r_0$, by equating the first and second fundamental forms, to the exterior BTZ metric given by \begin{equation} ds^2 = -(|\Lambda| R^2 - M) dT^2 + \frac{1}{(|\Lambda| R^2 - M)}dR^2 + R^2 d\phi^2 \label{exterior} \end{equation} where $T$ is the time coordinate corresponding to the Killing time. It is more convenient to switch from the Schwarzschild-like coordinate system $(T,R,\phi)$ to Eddington-Finkelstein-like coordinates $(v,R,\phi)$ to avoid the coordinate singularity at the event horizon.
The Eddington-Finkelstein coordinates are defined as \begin{equation}\label{EFC} v = t + \int^R{\frac{dR}{N^2(R)}} \end{equation} where the lapse function is $N^2(R)=|\Lambda| R^2 - M$. The metric (\ref{exterior}) can be written as \begin{equation} ds^2 = -(|\Lambda| R^2 - M) dv^2 + 2dvdR + R^2 d\phi^2 \label{exteriorv} \end{equation} Let $\lambda$ be the parameter on the hypersurface $\Sigma$; the induced metric on $\Sigma$ is then \begin{equation}\label{inducemetric8} ds^2_{\Sigma} = [-(|\Lambda|R^2-M)\dot v^2 + 2\dot v\dot R]d{\lambda^2} + R^2d\phi^2 \end{equation} with $\dot{v}=dv/d\lambda$ and $\dot{R}=dR/d\lambda$ in the above equation. Similarly, the metric induced from (\ref{nmetric1}), in terms of the parameter $\lambda$, is \begin{equation}\label{eqn8} ds^2=[-\dot{t}^2+\dot{r}^2 \cos^2(\sqrt{|\Lambda|} t)]d\lambda^2 +r^2\cos^2(\sqrt{|\Lambda| }t)d\phi^2 \end{equation} Here $\dot{t}$ and $\dot{r}$ are derivatives w.r.t. the parameter $\lambda$. To find the volume in the interior of the trapped region, we track the position of the apparent horizon given in \cite{rossandmann, sashideep}. For the general metric (\ref{metric1}), the expansion parameter of outgoing null geodesics vanishes when \cite{sashideep} \begin{equation} \frac{2\kappa\int_{0}^{r}{\rho _i(s) sds}-1}{|\Lambda| R^2}=1 \label{trappingcondition1} \end{equation} For the specific case considered here, the trapped region satisfies \begin{equation} \frac{|\Lambda| r^2-1}{|\Lambda| r^2\cos^2(\sqrt{|\Lambda|} t)}>1 \label{trappingcondition2} \end{equation} with equality on the marginally trapped surfaces. This implies that the curve of the apparent horizon is given by \begin{equation} r_a^2=\frac{1}{|\Lambda| \sin^2(\sqrt{|\Lambda|} t)} \label{apparentcurve} \end{equation} We consider the collapsing regime, where $t$ goes from $0$ to $\pi/(2\sqrt{|\Lambda|})$. At time $t=\pi/(2\sqrt{|\Lambda|})$, we have the formation of the singularity, since the area radius $R$ of each shell of label $r\leq r_0$ becomes zero. We note that in the time interval considered during the collapsing phase, the apparent horizon $r_a$ is a decreasing function of time. This implies that the shell $r_0$ becomes trapped first, and then the smaller values of $r$ get trapped. This also implies that, since there is no more mass collapsing beyond the shell $r_0$, the apparent horizon is also an isolated horizon and hence becomes an event horizon. We note that the evolving apparent horizon at a comoving $r<r_0$ is an inner horizon \cite{KS}. The physical radius of this horizon is given by $R_h=\sqrt{M/|\Lambda|}$, where $M=|\Lambda| r_0^2-1$. During the course of the evolution, we therefore have two distinct regions inside the BTZ black hole: (A) the region from the event horizon $R=R_h$ to the area radius of the outer shell $R_0=r_0 \cos (\sqrt{|\Lambda|} t)$, and (B) the interior region of the dust cloud. We note that region (A) is vacuum, whereas (B) is evolving dust that eventually becomes singular. The plan is to track the evolution of the maximal volume taking into account both regions (A) and (B). The maximal surface is then the union of its pieces in the two regions.
The volume is thus expressed as \begin{equation}\label{vsigma10} \begin{split} V_{\Sigma}[\gamma] =- \int_{\lambda_h}^{\lambda_0} d\lambda \int_{S^1} d\phi \sqrt{R^2\big(-(|\Lambda| R^2 - M)\dot{v}^2 + 2\dot{v}\dot{R}\big)}\\- \int_{\lambda_0}^{\lambda_f} d\lambda \int_{S^1} d\phi \sqrt{r^2 \cos^2(\sqrt{|\Lambda|} t)\big(-\dot{t}^2+\dot{r}^2 \cos^2(\sqrt{|\Lambda|} t)\big)} \end{split} \end{equation} The minus sign arises because the parameter $\lambda$ is chosen to be monotonically decreasing with increasing radius. In the first term, $R(\lambda_h)=\sqrt{M/|\Lambda|}$, which is the event horizon. The above expression is obtained by connecting the hypersurface across the boundary $r=r_0$. The first term in the above equation is the volume inside the BTZ black hole from the event horizon to the area radius $R_0 = r_0 \cos(\sqrt{|\Lambda|}t)$ of the outer shell of the collapsing cloud; therefore $R(\lambda_0)=r_0 \cos(\sqrt{|\Lambda|}t)$. The problem of maximizing this first term, i.e., the volume in the interior of the vacuum part of the BTZ black hole, is solved completely in \cite{MZ}. We now focus on the evolution of the maximal surface in (B) as a function of the comoving time $t$. We follow the same Lagrangian procedure outlined in \cite{CR} for the metric given by eq.(\ref{eqn8}). We define a spacelike surface with topology $R \times S^1$. (We note that though this topology of the trapped region is generic, in the context of our cosmological dust model the evolving trapped region is an annular region extending from the event horizon to the marginally trapped radius $R_a$, since the outermost shell $r_0$ gets trapped first, followed by the inner shells.) We now address the following variational problem. We have the boundary values $t(\lambda_0)=t$, $r(\lambda_0)=r_0$ and $t(\lambda_f)=(1/\sqrt{|\Lambda|})\sin^{-1}\big(1/(\sqrt{|\Lambda|} r_a)\big)$, $r(\lambda_f)=r_a$, where $\lambda_f$ ends at the marginally trapped region given by eq.(\ref{apparentcurve}). We note here that, as far as the comoving coordinate chart is concerned, the locations of the apparent horizon and the event horizon do not appear as coordinate singularities in the metric. So the Euler-Lagrange equations remain the same irrespective of the boundary limits we impose. This is one of the main advantages of using comoving coordinates to describe dust evolution. We have to maximize the following functional after integrating over the angle variable \begin{equation}\label{vsigma11} \begin{split} V_{\Sigma} & = -2\pi \int_{\lambda_0}^{\lambda_f} d\lambda \sqrt{r^2 \cos^2(\sqrt{|\Lambda|} t)\big(-\dot{t}^2+ \dot{r}^2 \cos^2(\sqrt{|\Lambda|} t)\big)} \end{split} \end{equation} We may choose the parameter to be $r$, since this coordinate is monotonic in the domain under consideration. The above functional then has only one function $t(r)$ that needs to be determined. \begin{equation}\label{vsigma12} \begin{split} V_{\Sigma} & = - 2\pi \int_{r_0}^{0} dr \sqrt{r^2 \cos^2(\sqrt{|\Lambda|} t)\big(-\dot{t}^2+\cos^2(\sqrt{|\Lambda|} t)\big)} \end{split} \end{equation} where $\dot{r}=1$ and $\dot t =dt/dr$ is the derivative of $t$ w.r.t. $r$. This can be viewed as an extremization problem, and our goal is to find the equations of motion for the Lagrangian.
\begin{equation}\label{lagrangian13} L = L(t, \dot{t},r) = \sqrt{r^2 \cos^2(\sqrt{|\Lambda|} t)\big(-\dot{t}^2+\cos^2(\sqrt{|\Lambda|} t)\big)} \end{equation} The Euler-Lagrange equation is \begin{equation}\label{lagrangeeq15} \frac{d}{dr}\bigg(\frac{\partial L}{\partial \dot{t}}\bigg) - \frac{\partial L}{\partial t} = 0 \end{equation} For the Lagrangian (\ref{lagrangian13}), eq.(\ref{lagrangeeq15}) gives \begin{multline}\label{tdoubledot16} \ddot{t} - \bigg(\frac{\sec^2{(\sqrt{|\Lambda|}t)}}{r}\bigg)\dot{t}^3 + 3\sqrt{|\Lambda|}[\tan(\sqrt{|\Lambda|}t)]\dot t^2 + \frac{\dot{t}}{r}\\ - \sqrt{|\Lambda|}\sin{(2\sqrt{|\Lambda|}t)} = 0 \end{multline} Eq.(\ref{tdoubledot16}) is a non-linear differential equation with variable coefficients, where $r$ is the independent variable and $t(r)$ the dependent variable. The solution of eq.(\ref{tdoubledot16}) yields the comoving time as a function of $r$, i.e., $t(r)$, which describes the maximal hypersurface. \subsection{D-dimensions case} A convenient coordinate system for describing evolving pressureless matter is a comoving one. The metric that describes the evolving dust scenario in $D(= n+2)$ dimensions can be written as \begin{equation}\label{metric17} ds^2 = -dt^2 + R'^2 dr^2 + R^2(t,r) d{\Omega}^2_{n} \end{equation} Here the coordinates are $t,r, \theta_1, \theta_2,...,\theta_n$, and $d\Omega^2_{n}$ is the metric induced on the $n$-dimensional sphere at a constant ``area'' radius $R$. To define the volume, let $\lambda$ be the induced parameter on the hypersurface $\Sigma$; the induced metric is then \begin{equation}\label{inducedmetric18} ds^2_{\Sigma} = ( -\dot t^2 + \dot r^2R'^2) d\lambda^2 + R^2(t,r) d{\Omega}^2_{n} \end{equation} The volume in $D(= n+2)$ dimensions is defined as \begin{equation}\label{volume19} \begin{split} V^{(D)}_{\Sigma} = \int d\lambda d\Omega_{n} R^n\sqrt{-\dot t^2 + \dot r^2R'^2}\\ = \frac{2\pi^{\frac{n+1}{2}}}{\Gamma (\frac{n+1}{2})}\int d\lambda R^n\sqrt{-\dot t^2 + \dot r^2R'^2} \end{split} \end{equation} Hence, the Lagrangian is defined as \begin{equation}\label{lagrangian20} L(t,\dot t, \lambda ) = R^n\sqrt{-\dot t^2 + \dot r^2R'^2} \end{equation} Now let the parameter be $\lambda = r$, so that $\dot{r}= 1$; substituting $R{(t,r)} = r a(t)$ and $R'(t,r) = a(t)$ in eq.(\ref{lagrangian20}), we get \begin{equation}\label{lagrangian21} L(t,\dot{t}, r) = [ra(t)]^n\sqrt{-\dot{t}^2 + [a(t)]^2} \end{equation} Our next goal is to find the equation of motion for the above Lagrangian; from eq.(\ref{lagrangeeq15}) we get \begin{multline}\label{tdoubledot22} \ddot{t} - \bigg(\frac{n}{r[a(t)]^2}\bigg){\dot{t}}^3 - \bigg(\frac{(n+2)a'(t)}{a(t)}\bigg)\dot t^2 + \bigg(\frac{n}{r}\bigg)\dot{t}\\ + (n+1)a(t)a'(t) = 0 \end{multline} where $\dot t = dt/dr$ and $a'(t) = da(t)/dt$. This is the differential equation in $D$ dimensions, and we can easily recover the 2+1 dimensional differential equation, i.e., eq.(\ref{tdoubledot16}), by putting $n = 1$ and $a(t) = \cos(\sqrt{|\Lambda|}t)$ in eq.(\ref{tdoubledot22}). \\ For estimating the maximal hypersurface in the trapped region, we can choose the appropriate boundary values. We note that the condition for marginal trapping for the model under consideration is given by $r_a=-1/\dot{a}$ \cite{KS}, where $\dot{a}$ is the derivative of the scale factor w.r.t. time $t$. In a collapsing scenario $\dot{a}$ is negative, thereby giving a positive value for $r_a$.
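Since eq.(\ref{tdoubledot16}) has no obvious closed-form solution, the maximal hypersurface $t(r)$ can be obtained numerically. A minimal Python sketch (our own; the value of $|\Lambda|$ and the boundary data are purely illustrative, and in practice a shooting loop over $\dot t(r_0)$ would be used to meet the boundary condition at the apparent horizon) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

LAM = 1.0               # illustrative value of |Lambda|
SL = np.sqrt(LAM)

def rhs(r, y):
    """y = [t, dt/dr]; right-hand side of eq. (tdoubledot16)."""
    t, tp = y
    tpp = (tp**3 / (r * np.cos(SL * t)**2)
           - 3 * SL * np.tan(SL * t) * tp**2
           - tp / r
           + SL * np.sin(2 * SL * t))
    return [tp, tpp]

# integrate inward from the outer dust shell r0
r0, t0, tp0 = 1.5, 0.3, 0.0      # illustrative boundary data
sol = solve_ivp(rhs, (r0, 0.05), [t0, tp0], max_step=1e-3)
t_of_r = sol.y[0]                # comoving time on the surface
\end{verbatim}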
\section{Extrinsic curvature method for estimating steady state radius in 2+1 dimensions} We now track the existence of the `steady state' hypersurface that is used to estimate the interior volume of black holes. Its location and evolution, correlated with the evolution of the event horizon, provide important clues towards estimating the volume of the evolving black hole. As mentioned in the introduction, special points on such a maximal hypersurface contribute significantly towards estimating the interior volume of the black hole. We evaluate the divergence of the normal to the $R(t,r)=\mathrm{const.}$ surface. To find the normal to the surface we differentiate $R(t,r)=\mathrm{const.}$ and get \begin{equation}\label{constR17} \dot{R}dt + R'dr = 0 \end{equation} The covariant components of the normal vector are $n_{\alpha}=(n_{t},n_{r})$, where $n_{t}= \dot{R}$ and $n_{r}= R'$. We now show that the Kodama vector is tangential to the maximal hypersurface at these steady state points. The trace of the extrinsic curvature vanishes at all points on the maximal hypersurface by definition \cite{Reinhart,FE}. The Kodama vector in a spherically symmetric spacetime is defined as in \cite{HK,VF}. We consider the 2-metric in the coordinate chart $(t,r)$, given by $-dt^2+R'^2dr^2$. The two dimensional volume form in $(t,r)$ coordinates is $\omega=R'dt\wedge dr$. Using the standard definition of the Kodama vector, we evaluate its components to be $K^t=1$ and $K^r=-\dot{R}/R'$. Evaluating the dot product, we find that $K^{\alpha}n_{\alpha}=0$. This shows that the Kodama vector is tangential to the maximal hypersurface at the steady state points. We note that the above result is independent of whether the Kodama vector is spacelike or timelike. In fact, for the cases that were discovered in \cite{CR,BJ,Ong,YCO,CL,XY}, the Kodama vector is spacelike and is tangential to the maximal hypersurface at the steady state radius. Another interesting observation is that at the steady state radius both the normal vector to the hypersurface and the Kodama vector have vanishing divergence. We now analyze the $2+1$ dimensional scenario separately, owing to the non-trivial nature of gravity in $2+1$ dimensions. We start with the area radius $R(t,r)$ of a comoving shell as a function of the comoving time and the shell label $r$. The normalized contravariant components of the normal vector are $N^{\alpha}=(N^{t},N^{r}) = \bigg(\frac{-\dot{R}}{\sqrt{\dot{R}^2-1}} , \frac{1}{R'\sqrt{\dot{R}^2-1}}\bigg)$. The condition for the vanishing trace of the extrinsic curvature, which implies that the normal vector is divergence free, is written as \begin{equation}\label{nalpha18} N^{\alpha}_{;\alpha} = \frac{1}{\sqrt{-g}} {\frac{\partial}{\partial x^{\alpha}}({\sqrt{-g} N^{\alpha})}} = 0 \end{equation} where $\sqrt{-g} = R'R$ is obtained from the determinant of the metric (\ref{matrix}).
Eq.(\ref{nalpha18}) can be written as \begin{equation} \frac{1}{R'R} \bigg[{\frac{\partial}{\partial t}({R'R N^{t})}} + {\frac{\partial}{\partial r}({R'R N^{r})}}\bigg] = 0 \end{equation} or \begin{equation}\label{partialnalpha20} -\frac{\partial}{\partial t}\bigg(\frac{\dot{R} R' R}{\sqrt{\dot{R}^2-1}}\bigg) + {\frac{\partial}{\partial r}\bigg(\frac{R}{\sqrt{\dot{R}^2-1}}\bigg)} = 0 \end{equation} Simplifying eq.(\ref{partialnalpha20}) gives \begin{equation}\label{rprime21} (\dot{R}^2-1)^2 + \frac{R\dot{R}^3\dot{R}'}{R'} - R \ddot{R} = 0 \end{equation} $\dot{R}^2$ can be written as \begin{equation}\label{rdotsqure} \dot{R}^2 = f(r) + \Lambda R^2 + F(r) \end{equation} We take $f(r) = 0$. Differentiating eq.(\ref{rdotsqure}) w.r.t. $t$ and $r$, we get the expressions for $\ddot{R}$ and $\dot{R}'$ as \begin{equation}\label{rdoubledot} \ddot{R} = \Lambda R \end{equation} and \begin{equation}\label{rdotprime} \dot{R}' = \frac{\Lambda R R'}{\dot{R}} + \frac{F'(r)}{2\dot{R}} \end{equation} Substituting the values of $\dot{R}^2, \ddot{R}$ and $\dot{R}'$ from eqs.(\ref{rdotsqure}), (\ref{rdoubledot}) and (\ref{rdotprime}) into eq.(\ref{rprime21}), we get \begin{equation}\label{fprime25} \frac{RF'(r)}{2R'}[\Lambda R^2+F(r)]+[{\Lambda}R^2 + F(r) -1][2\Lambda R^2 + F(r) - 1] = 0 \end{equation} We now obtain a general formula connecting the comoving energy density at a point with the area radius, the Misner-Sharp mass and the cosmological constant. Let $\epsilon$ be the energy density of the collapsing dust, defined as $\epsilon = \frac{ F'(r)}{2 R R'}$. Therefore, from eq.(\ref{fprime25}), we can write $\epsilon$ as \begin{equation} \epsilon = \frac{ F'(r)}{2 R R'} = -\frac{[{\Lambda}R^2 + F(r) -1][2\Lambda R^2 + F(r) - 1]}{R^2 [\Lambda R^2 + F(r)]} \end{equation} This is the condition for the $R=\mathrm{const.}$ hypersurface to have a vanishing divergence of the normal, i.e., a vanishing trace of the extrinsic curvature. Though this formula is true in general, we now examine a few simple cases to gain an understanding of this special point on the hypersurface. \subsubsection{\textbf{Vacuum Case}} Here $F(r)=0$; this situation represents zero matter content in the entire spacetime. We see that \begin{equation}\label{eqroot33} [{\Lambda}R^2 -1][2\Lambda R^2 - 1] = 0 \end{equation} We have two values of the area radius where the condition is met: $R_{CH}=1/\sqrt{\Lambda}$ and $R_{ss}=1/\sqrt{2\Lambda}=R_{CH}/\sqrt{2}.$\\ The former root is the deSitter cosmological horizon, which is a null horizon. The normal to the horizon is also the null generator and has vanishing divergence. We look for timelike normals (timelike normals to $R=\mathrm{const.}$ surfaces occur in the interior of a black hole) and spacelike normals, so this root is not our answer. The other root is the answer, and we observe that it lies between $R=0$ and the cosmological horizon. This feature is also observed in the situations to come. We note that for a negative cosmological constant this case does not yield any solutions. \subsubsection{\textbf{Static Black hole case }} Suppose the Misner-Sharp mass is a constant; this is the situation of a black hole or a naked singularity where all the mass has already collapsed to a point. We can use the formula derived above to examine the situation of a black hole using the comoving chart rather than static/stationary coordinates. So we take $F(r)=\mathrm{const.}=M+1$, where $M$ is the ADM mass; automatically $F'(r)=0$.
From eq.(\ref{fprime25}), we get \begin{equation}\label{eqroot34} [{\Lambda}R^2 + M ][2\Lambda R^2 + M] = 0 \end{equation} There are again two roots of eq.(\ref{eqroot34}): \begin{equation} R =R_{CH} = \sqrt{\frac{-M}{\Lambda}} \quad \text{and} \quad R = R_{ss} =\sqrt{\frac{-M}{2\Lambda}} \end{equation} Here $R_{ss} = \frac{1}{\sqrt{2}}R_{CH}$. We have two subcases. For a positive cosmological constant, we require the mass function at the naked singularity (which is a conical singularity) to be less than 1, so that $M$ is negative. In this case we have the cosmological horizon at $R_{CH}$ and, as can be seen in the above equation, $R_{ss}=R_{CH}/\sqrt{2}$. The fact that $M$ has to be negative is observed in \cite{rossandmann,sashideep}. This scenario is not present in four and higher dimensions, where the ADM mass is positive definite. Here $R_{ss} < R_{CH}$, which shows that the surface does not lie beyond the cosmological horizon. The second subcase involves a negative cosmological constant. Here, in contrast, we require $M$ to be positive for the roots to be real, i.e., $F>1$ for a black hole event horizon to exist. The total mass that collapses has to be greater than 1 so that an event horizon forms \cite{rossandmann,sashideep}. We then have \begin{equation} R =R_{e} = \sqrt{\frac{M}{|\Lambda|}} \quad \text{and} \quad R = R_{ss} =\sqrt{\frac{M}{2|\Lambda|}} \end{equation} As is evident from the above equations, $R_{ss}$ occurs in the interior of the BTZ black hole with zero angular momentum. The radius is $R_{ss}=R_{e}/\sqrt{2}$. This is similar to the situation of the four dimensional black holes observed in the work of \cite{CR,BZ,XY,CL}. \subsubsection{\textbf{Cosmological Case}} We look at cosmological solutions in 2+1 dimensions with and without a cosmological constant. The mass function for homogeneous collapsing dust in 2+1 dimensions is defined as $F(r) = g r^2$, and the area radius of the comoving shells is $R = R(t,r) = r a(t)$. Substituting these in eq.(\ref{fprime25}), we get \begin{equation}\label{eqroot29} 2(\Lambda a^2 +g )^2r^4 - (3\Lambda a^2 + 2g) r^2 + 1 = 0 \end{equation} \textbf{1. When $\Lambda=0$}: Setting $\Lambda=0$ in the above equation, we obtain $g^2r^4+(gr^2-1)^2=0$. As can be seen readily, this is a sum of squares and is never zero. So we do not have an $R_{ss}$ for real values of the area radius. To analyze the case with a non-zero cosmological constant, we evaluate the roots of eq.(\ref{eqroot29}). This gives the steady state radius $r_{ss}$ as \begin{equation}\label{rss30} r_{ss} = \bigg(\frac{(3\Lambda a^2 + 2g) \pm \sqrt{(3\Lambda a^2 + 2g)^2-8(\Lambda a^2 +g)^2}}{4(\Lambda a^2 + g )^2}\bigg)^{\frac{1}{2}} \end{equation} \textbf{2. When $\Lambda >0$, i.e., de-Sitter space:} Here $r_{ss}$ is real only when the term inside the square root is positive, i.e., \begin{equation} \label{lambda31} \begin{split} (3\Lambda a^2 + 2g)^2-8(\Lambda a^2 +g)^2 >0 \\ \Rightarrow \Lambda^2 a^4 - 4g\Lambda a^2 - 4g^2 > 0 \\ \Rightarrow a^2 > \frac{2g(\sqrt{2}+1)}{\Lambda} \end{split} \end{equation} This inequality implies that $r_{ss}$ is possible during the evolution only if the scale parameter exceeds the critical value given above.
It is a striking fact that this special point emerges during the course of the evolution only once the scale parameter crosses a certain threshold value. \\ \textbf{3. When $\Lambda <0$, i.e., Anti-deSitter space:} Replacing $\Lambda$ with $-\Lambda$ in eq.(\ref{rss30}) we get \begin{equation}\label{rss30'} r_{ss} = \bigg(\frac{(-3|\Lambda| a^2 + 2g) \pm \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}}{4(-|\Lambda| a^2 + g )^2}\bigg)^{\frac{1}{2}} \end{equation} Let us define $A = -3|\Lambda| a^2 + 2g$ and $B= \sqrt{8}\,(-|\Lambda| a^2 +g)$; then $A^2-B^2= (-3|\Lambda| a^2 + 2g)^2 - 8(-|\Lambda| a^2 +g)^2 = \Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2$. Two situations arise, depending on the sign of $A$.\\ \textbf{(i)} Suppose $A$ is negative, i.e., $A<0$; then eq.(\ref{rss30'}) can be written as \begin{equation}\label{eqn41'} r_{ss} = \bigg(\frac{-|(-3|\Lambda| a^2 + 2g)| \pm \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}}{4(-|\Lambda| a^2 + g )^2}\bigg)^{\frac{1}{2}} \end{equation} We now check the existence of the above steady state radius $r_{ss}$ as follows: \begin{equation}\label{eqn42'} \begin{split} (-3|\Lambda| a^2 + 2g)^2 > (-3|\Lambda| a^2 + 2g)^2 - 8(-|\Lambda| a^2 +g)^2\\ \Rightarrow (-3|\Lambda| a^2 + 2g)^2 >\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2\\ \Rightarrow |(-3|\Lambda| a^2 + 2g)| > \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}\\ \Rightarrow -|(-3|\Lambda| a^2 + 2g)| < - \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}\\ \Rightarrow -|(-3|\Lambda| a^2 + 2g)| + \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2} <0 \end{split} \end{equation} and likewise $ -|(-3|\Lambda| a^2 + 2g)| - \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2} <0$. Hence both choices of sign in eq.(\ref{eqn41'}) give $-|(-3|\Lambda| a^2 + 2g)| \pm \sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}<0$, which means $r_{ss}$ is imaginary and no steady state radius exists.\\ \textbf{(ii)} Suppose $A$ is positive, i.e., $A=(-3|\Lambda| a^2 + 2g)>0$. This gives the condition $a^2<2g/3|\Lambda|$ on the scale parameter $a(t)$, and using it inside the square root term $\sqrt{\Lambda^2 a^4 + 4|\Lambda| g a^2 - 4g^2}$ we again get an imaginary value. Hence in both scenarios there is no solution for $r_{ss}$, and steady state points do not exist in 2+1 dimensional AdS cosmological spacetimes. \section{Extrinsic curvature method in D-dimensions in the presence of cosmological constant $(\Lambda)$} The general metric for a $D (= n+2)$-dimensional spherically symmetric spacetime, given in \cite{Rakesh}, is \begin{equation}\label{metric36} ds^2 = -e^{\mu(t,r)} dt^2 + e^{\lambda(t,r)} dr^2 + R^2(t,r) d{\Omega}^2_{n} \end{equation} where $d{\Omega}^2_{n} = d{\theta}^2_{1} + \sin^2{\theta}_{1}(d{\theta}^2_{2} +\sin^2{\theta}_{2} (d{\theta}^2_{3} + \ldots + \sin^2{\theta}_{n-1}d{\theta}^2_{n}))$ is the metric on the unit $n$-dimensional sphere, $t$ is the time coordinate and $r$ is the comoving radial coordinate. It is shown in \cite{Rakesh} that the $g_{tt}$ and $g_{rr}$ components of the metric can be written as $g_{tt} = -1$ and $g_{rr} = \frac{R'^2}{1+f(r)}$. For marginally bound shells of dust we can take $f(r) = 0$, and the metric then has the form \begin{equation}\label{metric37} ds^2 = -dt^2 + R'^2 dr^2 + R^2(t,r) d{\Omega}^2_{n} \end{equation} The area radius $R(t,r)$ of the comoving shells is constant on the maximal hypersurface, i.e., $R = R(t,r) = \mathrm{constant}$, so that
\begin{equation}\label{metric38} \dot{R}dt + R'dr = 0 \end{equation} The covariant components of the normal vector are $n_{\alpha}=(n_{t},n_{r})$, with $n_{t}= \dot{R}$ and $n_{r}= R'$. The corresponding normalized contravariant components are $N^{\alpha}=(N^{t},N^{r}) = \bigg(\frac{-\dot{R}}{\sqrt{\dot{R}^2-1}} , \frac{1}{R'\sqrt{\dot{R}^2-1}}\bigg)$. The condition for the vanishing trace of the extrinsic curvature is \begin{equation}\label{eqn39} N^{\alpha}_{;\alpha} = \frac{1}{\sqrt{-g}} {\frac{\partial}{\partial x^{\alpha}}({\sqrt{-g} N^{\alpha})}} = 0 \end{equation} where $\sqrt{-g} = R'R^n {\Theta}_{n}$ is obtained from the determinant of the metric (\ref{metric37}), with ${\Theta}_{n}$ collecting the product of all the angular factors of the determinant. Eq.(\ref{eqn39}) can be written as \begin{equation}\label{eqn40} \frac{1}{R'R^n {\Theta}_{n}} \bigg[{\frac{\partial}{\partial t}({R'R^n {\Theta}_{n} N^{t})}} + {\frac{\partial}{\partial r}({R'R^n {\Theta}_{n} N^{r})}}\bigg] = 0 \end{equation} or \begin{equation}\label{eqn41} -\frac{\partial}{\partial t}\bigg(\frac{\dot{R} R' R^n}{\sqrt{\dot{R}^2-1}}\bigg) + {\frac{\partial}{\partial r}\bigg(\frac{R^n}{\sqrt{\dot{R}^2-1}}\bigg)} = 0 \end{equation} The solution of eq.(\ref{eqn41}) gives \begin{equation}\label{eqn42} n R^{n-1}(\dot{R}^2-1)^2 + \frac{R^n\dot{R}^3\dot{R}'}{R'} - R^n \ddot{R} = 0 \end{equation} Here $\dot{R}^2$ can be written as \begin{equation}\label{eqn43} \dot{R}^2 = f(r) + \frac{2\Lambda R^2}{n(n+1)} + \frac{F(r)}{R^{n-1}} \end{equation} and we take $f(r) = 0$ for marginally bound shells of dust. Differentiating eq.(\ref{eqn43}) w.r.t. $t$ and $r$, we get the expressions for $\ddot{R}$ and $\dot{R}'$ as \begin{equation}\label{eqn44} \ddot{R} = \frac{2\Lambda R}{n(n+1)} - \frac{(n-1)F(r)}{2 R^{n}} \end{equation} and \begin{equation}\label{eqn45} \dot{R}' = \frac{2\Lambda R R'}{n(n+1)\dot{R}} + \frac{F'}{2R^{n-1}\dot{R}} - \frac{(n-1) F R'}{2 R^n\dot{R}} \end{equation} Substituting $\dot{R}^2$, $\ddot{R}$ and $\dot{R}'$ from eqs.(\ref{eqn43}), (\ref{eqn44}) and (\ref{eqn45}) into eq.(\ref{eqn42}) and simplifying, we get \begin{multline}\label{eqn46} \frac{R F'(r)}{2R'}\bigg(\frac{2\Lambda R^2}{n(n+1)}+ \frac{F(r)}{R^{n-1}}\bigg)+\\ \bigg(\frac{2\Lambda R^2}{n(n+1)} + \frac{F(r)}{R^{n-1}}-1\bigg) \bigg[\frac{2\Lambda}{n}R^{n+1} + \frac{(n+1)}{2}F(r) - nR^{n-1} \bigg]\\ = 0 \end{multline} We will see below that the above formula simplifies to an expression involving coordinate invariants, since the term containing $F'$ is related to the energy density. The formula is therefore an interesting relation between the principal values of the energy-momentum tensor, the cosmological constant and the Misner-Sharp mass. In a general setting the Misner-Sharp mass is a monotonically increasing function of the comoving radius $r$. We take a nonzero mass function, $F(r)\neq 0$ (and $F'(r)\neq 0$). Let $\epsilon$ be the energy density of the collapsing dust, defined by $\frac{n F'(r)}{2 R^n R'} = \epsilon$.
Therefore, from eq.(\ref{eqn46}), we can write $\epsilon$ as \begin{multline}\label{eqn47} \epsilon = \frac{n F'(r)}{2 R^n R'} =\\ \frac{n\bigg(1 - \frac{2\Lambda R^2}{n(n+1)} - \frac{F(r)}{R^{n-1}}\bigg) \bigg[\frac{2\Lambda}{n}R^{n+1} + \frac{(n+1)}{2}F(r) - nR^{n-1}\bigg] }{R^{n+1} \bigg(\frac{2\Lambda R^2}{n(n+1)} + \frac{F(r)}{R^{n-1}}\bigg)} \end{multline} \subsubsection{\textbf{Vacuum scenario}} When there is no matter in the spacetime we have $F(r)=0$, and the spacetime is Minkowski, deSitter or Anti-deSitter. In this case the formula above gives \begin{multline}\label{Fequalszero} \bigg(\frac{2\Lambda R^2}{n(n+1)} -1\bigg) \bigg[\frac{2\Lambda}{n}R^{n+1} - nR^{n-1} \bigg] = 0 \end{multline} Setting each factor to zero, we solve for the roots: \begin{equation}\label{eqn56} \frac{2\Lambda R^2}{n(n+1)} -1 = 0 \end{equation} and \begin{equation}\label{eqn57} \frac{2\Lambda}{n} R^{n+1} - nR^{n-1}=0 \end{equation} From eqs.(\ref{eqn56}) and (\ref{eqn57}) we get \begin{equation}\label{eqn58} R_{CH} = \sqrt{\frac{n(n+1)}{2\Lambda}}\ and \ R_{ss} = \sqrt{\frac{n^2}{2\Lambda}} \end{equation} or \begin{equation}\label{eqn59} R_{ss} = \sqrt{\frac{n}{n+1}}R_{CH} \end{equation} Since $n<n+1$, we have $R_{ss}<R_{CH}$, which means the steady state radius occurs before the cosmological horizon. The cosmological horizon is the boundary of the anti-trapped region, so $R_{ss}$ occurs in the part of spacetime accessible from the interior of the cosmological horizon. We also note that this surface is possible only in deSitter (dS) space and not in Anti-deSitter (AdS) space. \subsubsection{\textbf{D-dimensional Schwarzschild black hole }} Now let $\Lambda = 0$ and $F'(r) = 0$. From eq.(\ref{eqn46}) we get \begin{equation}\label{eqn48} \bigg(\frac{F(r)}{R^{n-1}}-1\bigg)\bigg[\frac{n+1}{2}F(r) - nR^{n-1}\bigg] = 0 \end{equation} or \begin{equation}\label{eqn49} R = R_e =[F(r)]^{\frac{1}{n-1}}\ and \ R = R_{ss} = \bigg(\frac{n+1}{2n}F(r)\bigg)^{\frac{1}{n-1}} \end{equation} From eq.(\ref{eqn49}), we can write \begin{equation}\label{eqn51} R_{ss} = \bigg(\frac{n+1}{2n}\bigg)^{\frac{1}{n-1}}R_e \end{equation} For the Schwarzschild case $n = 2$ (i.e., four dimensions) the mass function is $F(r) = 2M$; eq.(\ref{eqn49}) then gives $R_e = 2M$, corresponding to the event horizon, and $R_{ss} = \frac{3}{2}M$, the steady state radius corresponding to the maximal hypersurface identified in \cite{CR}. From eq.(\ref{eqn51}), $R_{ss} = \frac{3}{4}R_e < R_e$, so $R_{ss}$ lies inside the event horizon of the black hole. Since $2n>n+1$ for $n = 2,3,\ldots$, the steady state radius always lies inside the event horizon for all $n>1$. For the case $n=1$ there is no black hole in the absence of a negative cosmological constant.\\ The mass of the dust cloud is given in \cite{Rakesh} as \begin{equation}\label{eqn64'} M=\int_{0}^{r}\epsilon(0,r)dV \end{equation} where $dV$ is the volume element of a spherical shell lying between $r$ and $r+dr$ in $n+1$ space dimensions.
The volume element $dV$ is \begin{equation}\label{eqn65'} dV = \frac{2\pi^{\frac{n+1}{2}}}{\Gamma (\frac{n+1}{2})} r^n dr \end{equation} From eqs.(\ref{eqn64'}) and (\ref{eqn65'}) we get the mass of the dust cloud $M$ as \begin{equation}\label{eqn66'} M = \frac{2\pi^{\frac{n+1}{2}}}{\Gamma (\frac{n+1}{2})} \int_{0}^{r}\epsilon(0,r) r^n dr \end{equation} The mass function $F(r)$ for this distribution of dust is \begin{equation}\label{eqn67'} F(r) = \frac{2k}{n}\int_{0}^{r}\epsilon(0,r)r^n dr \end{equation} Therefore, from eqs.(\ref{eqn66'}) and (\ref{eqn67'}), the relation between the mass function $F(r)$ (i.e., the Misner-Sharp mass) and the mass of the dust cloud $M$ in $n+1$ spatial dimensions is \begin{equation}\label{eqn50} F(r) = \frac{2k}{n}\int_{0}^{r}\epsilon(0,r)r^n dr = \frac{2k}{n}\frac{\Gamma(\frac{n+1}{2})M}{2 \pi^{\frac{n+1}{2}}} \end{equation} where $k = \frac{8\pi G}{c^4}$ is Einstein's gravitational constant and $G$ is Newton's gravitational constant. \subsubsection{\textbf{D-dimensional Schwarzschild deSitter/Anti-deSitter spacetime scenario}} If the mass function is constant, $F(r)=\mathrm{const.}$, then its change vanishes, $F'(r) = 0$, and the event horizon is static. Let the cosmological constant $\Lambda \neq 0$; then from eq.(\ref{eqn46}) we get \begin{multline}\label{eqn52} \bigg(\frac{2\Lambda R^2}{n(n+1)} + \frac{F(r)}{R^{n-1}} -1\bigg) \Bigg[\frac{2\Lambda R^{n+1} }{n} + \frac{(n+1)F(r)}{2} - nR^{n-1} \Bigg]\\ = 0 \end{multline} or \begin{equation}\label{eqn53} P(R = R_e) =\frac{2\Lambda R^2_{e}}{n(n+1)} + \frac{F}{R^{n-1}_{e}} -1 = 0 \end{equation} and \begin{equation}\label{eqn54} P(R = R_{ss}) = \frac{2\Lambda R^2_{ss}}{n(n+1)} + \frac{F}{2R^{n-1}_{ss}} - \frac{n}{n+1} =0 \end{equation} Both eqs.(\ref{eqn53}) and (\ref{eqn54}) are polynomials of degree $n+1$ in $R$, and analytic roots are in general difficult to find. We instead obtain good insight into the relative locations of the event horizons and the $R_{ss}$ by plotting the polynomials explicitly. We observe that $R_{ss}$ usually lies in the interior of a black hole. In the presence of a positive cosmological constant, we get another root for $R_{ss}$, at a radius smaller than the cosmological horizon. These facts are illustrated in the following graphs. \begin{figure*} \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{EventHor_AdS.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{SteadyStateRadius_AdS.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRss3d.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRss4d.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRss5d.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRss6d.pdf} } \caption{\label{fig:1} Graph (a) shows the locations of all the event horizons corresponding to the polynomial $P(R = R_e)$ of eq.(\ref{eqn53}), and graph (b) shows the locations of all the steady state radii corresponding to the polynomial $P(R=R_{ss})$ of eq.(\ref{eqn54}) for D = 3, 4, 5 and 6 dimensions in AdS spacetime. Graph (c) shows that both $R_e$ and $R_{ss}$ in 3 dimensions overlap at zero, and graphs (d), (e) and (f) show that the steady state radius $R_{ss}$ always lies inside the event horizon $R_e$.
Here we take the cosmological constant $\Lambda = -0.05$ and mass function $F(r) = 1$ for all the graphs.} \end{figure*} \begin{figure*} \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{EventHor_dS.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{SteadyStateRadius_dS.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRssRch3d_dS_space.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRssRch4d_dS_space.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRssRch5d_dS_space.pdf} } \subfigure[]{ \includegraphics[width=0.31\textwidth,clip]{ReRssRch6d_dS_space.pdf} } \caption{\label{fig:2} Graph (a) shows the locations of all the event horizons $R_e$ and cosmological horizons $R_{CH}$ corresponding to the polynomial $P(R = R_e)$ of eq.(\ref{eqn53}). Graph (b) shows the locations of the inner and outer steady state radii $R_{ss}$ corresponding to the polynomial $P(R=R_{ss})$ of eq.(\ref{eqn54}) for D = 3, 4, 5 and 6 dimensions in deSitter (dS) spacetime. Graph (c) shows that all three of $R_e$, $R_{ss}$ and $R_{CH}$ overlap at zero, and graphs (d), (e) and (f) show that the inner $R_{ss}$ always lies inside $R_e$ while the outer $R_{ss}$ lies between $R_e$ and $R_{CH}$. Here we take the cosmological constant $\Lambda = 0.05$ and mass function $F(r) = 1$ for all the graphs.} \end{figure*} \subsubsection{\textbf{D-dimensional Cosmological scenario}} We now explore the presence and evolution of the steady state radius for cosmological solutions in D dimensions. We can think of the cosmological solutions as being valid for the entire universe, or as describing the homogeneous interior of an evolving star. If we assume the latter, we need to define an outer comoving shell $r_0$ beyond which there is vacuum. The mass function for homogeneous dust in $D(= n+2)$ dimensions is $F(r) = \frac{2g}{n(n+1)}r^{n+1}$, where $g=k\epsilon(0,r) >0$, and the area radius is $R = R(t,r) = r a(t)$. The dynamics of the various cases is given in the Appendix. Substituting these values into equation (\ref{eqn46}), we get \begin{widetext} \begin{equation}\label{eqn60} \bigg(\frac{4(\Lambda a^2 + ga^{1-n})^2}{n^3 (n+1)}\bigg) r^{4} - \bigg(\frac{2\Lambda a^2(2n+1) + g(3n+1)a^{1-n}}{n^2(n+1)}\bigg) r^{2} + 1 = 0 \end{equation} \end{widetext} \textbf{Case (1).} In the general case (when $\Lambda \neq 0$), the solution of equation (\ref{eqn60}) gives the steady state radius $r_{ss}$ as \begin{widetext} \begin{equation}\label{eqn61} r_{ss} = \frac{\sqrt{n}}{2}\Bigg(\frac{[2\Lambda a^2(2n+1) + g(3n+1)a^{1-n}] \pm \sqrt{[2\Lambda a^2(2n+1) + g(3n+1)a^{1-n}]^2-16n(n+1)(\Lambda a^2 + ga^{1-n})^2}}{2(\Lambda a^2 + ga^{1-n})^2}\Bigg)^{1/2} \end{equation} \end{widetext} \textbf{Case (2). When $\Lambda = 0$, i.e., without cosmological constant:} From eq.(\ref{eqn61}) we get \begin{widetext} \begin{equation}\label{eqn71} r_{ss} = \frac{\sqrt{n}}{2}\Bigg(\frac{(3n+1) \pm \sqrt{(3n+1)^2 - 16n(n+1)}}{2ga^{1-n}}\Bigg)^{1/2} = \begin{cases} 0 & \text{if } n = 0 \text{, i.e., 2 dimensions}\\ \text{imaginary} & \text{if } n\geq 1 \text{, i.e., 3 and higher dimensions} \end{cases} \end{equation} \end{widetext} Two possible situations arise from eq.(\ref{eqn61}) for the existence of the steady state radius, which we discuss as follows.\\ \textbf{(a).
When $\Lambda <0$, i.e., Anti-deSitter spacetime:} Replacing the cosmological constant $\Lambda$ with $-\Lambda$ in eq.(\ref{eqn61}) we get the steady state radius $r_{ss}$ as \begin{widetext} \begin{equation}\label{eqn61'} r_{ss} = \frac{\sqrt{n}}{2}\Bigg(\frac{[-2|\Lambda| a^2(2n+1) + g(3n+1)a^{1-n}] \pm \sqrt{[-2|\Lambda| a^2(2n+1) + g(3n+1)a^{1-n}]^2-16n(n+1)(-|\Lambda| a^2 + ga^{1-n})^2}}{2(-|\Lambda| a^2 + ga^{1-n})^2}\Bigg)^{1/2} \end{equation} \end{widetext} Now let us define $x =[-2|\Lambda| a^2(2n+1) + g(3n+1)a^{1-n}]$, $y = 4\sqrt{n(n+1)}(-|\Lambda| a^2 + ga^{1-n})$ and $z = \sqrt{2}(-|\Lambda| a^2 + ga^{1-n})$. Two possible situations arise, depending on the sign of $x$.\\ \textbf{(i)} Suppose $x$ is negative, i.e., $x<0$; then eq.(\ref{eqn61'}) gives the steady state radius $r_{ss}$ as \begin{equation}\label{eqn71'} r_{ss} = \frac{\sqrt{n}}{2}\bigg(\frac{-|x| \pm \sqrt{x^2-y^2}}{z^2}\bigg)^{1/2} \end{equation} We check the existence of the above steady state radius $r_{ss}$ as follows: \begin{equation}\label{eqn72'} \begin{split} x^2 > x^2 - y^2\\ \Rightarrow |x| > \sqrt{x^2-y^2}\\ \Rightarrow -|x| < - \sqrt{x^2-y^2}\\ \Rightarrow -|x| + \sqrt{x^2-y^2} <0\\ \text{and also}\ -|x| - \sqrt{x^2-y^2} <0 \end{split} \end{equation} Eq.(\ref{eqn72'}) shows that $-|x|\pm \sqrt{x^2-y^2}<0$ for both signs, which means $r_{ss}$ is imaginary and hence does not exist. \\ \textbf{(ii)} Suppose $x$ is positive, i.e., $x>0$; then eq.(\ref{eqn61'}) gives the steady state radius as \begin{equation}\label{eqn73'} r_{ss} = \frac{\sqrt{n}}{2}\bigg(\frac{x \pm \sqrt{x^2-y^2}}{z^2}\bigg)^{1/2} \end{equation} Now, $x>0$ yields the condition $a^{n+1}<g(3n+1)/[2|\Lambda| (2n+1)]$ on the scale parameter $a(t)$, and using this condition one finds, for any value of $n$ (i.e., $n=1,2,3,4,\ldots$), that the term $\sqrt{x^2-y^2}$ in eq.(\ref{eqn73'}) becomes imaginary. Hence $r_{ss}$ does not exist for positive $x$ either. These two cases prove that the steady state radius $r_{ss}$ does not exist in D-dimensional cosmological AdS spacetime.\\ \textbf{(b). When $\Lambda >0$, i.e., deSitter spacetime:} From eq.(\ref{eqn61}), the steady state radius $r_{ss}$ is real only when the term inside the square root is positive, i.e., \begin{multline}\label{eqn63} [2\Lambda a^2(2n+1) + g(3n+1)a^{1-n}]^2\\ -16n(n+1)(\Lambda a^2 + ga^{1-n})^2 >0 \end{multline} The inequality in eq.(\ref{eqn63}) gives conditions on the scale parameter $a(t)$: \begin{equation}\label{eqn66} \begin{split} a^{n+1} > -\frac{g}{2\Lambda}\Bigg[\frac{4\sqrt{n(n+1)}+(3n+1)}{(2n+1)+2\sqrt{n(n+1)}}\Bigg]\\ \ and \ a^{n+1} > \frac{g}{2\Lambda}\Bigg[\frac{4\sqrt{n(n+1)}-(3n+1)}{(2n+1)-2\sqrt{n(n+1)}}\Bigg] \end{split} \end{equation} From the inequalities in eq.(\ref{eqn66}), the general condition on the scale parameter $a(t)$ in deSitter spacetime is \begin{equation}\label{eqn67} a^{n+1}> \frac{g}{2\Lambda}\Bigg[\frac{4\sqrt{n(n+1)}-(3n+1)}{(2n+1)-2\sqrt{n(n+1)}}\Bigg] \end{equation} where $4\sqrt{n(n+1)} > 3n+1$ and $2n+1 > 2\sqrt{n(n+1)}$, which shows that the term in the square bracket of eq.(\ref{eqn67}) is positive. Hence the bound on the scale parameter is positive, and $r_{ss}$ exists in the D-dimensional deSitter cosmological solution once $a(t)$ exceeds this critical value. \section{Conclusions} In this work we address a few aspects concerning the maximal hypersurfaces of a black hole in a dynamically evolving scenario. We considered the spherically symmetric collapse of dust, generalized to D dimensions, since the model is analytically tractable.
We have carried out the analysis separately for 2+1 dimensions and grouped the other dimensions together. This is because gravity in 2+1 dimensions, owing to its topological nature, is fundamentally different from the other cases, while the dimensions $D>3$ are qualitatively similar to each other. For the evolving setting, we chose the Lemaitre-Tolman-Bondi model generalized to D dimensions, since it is simple in terms of analytical expressions while capturing the core essence of the problem. We obtain the equation for the maximal hypersurface using the variational technique developed in \cite{CR}. We set up a Lagrangian whose Euler-Lagrange equation yields the maximal hypersurface in an evolving setting. By choosing appropriate boundary values for the solutions one can arrive at the maximal volume inside a trapped region that is in the process of evolving. The same procedure is generalized to D dimensions. We present the equations for a sub-class of the Lemaitre-Tolman-Bondi model, homogeneous dust evolution, where the expressions simplify greatly. We analyze an interesting region of the maximal hypersurfaces, which we denote the ``steady state radius'' ($R_{ss}$). The reason for this nomenclature is the role these points play in the estimation of the maximal volume inside a black hole. First identified by Reinhart \cite{Reinhart} in 1973, such steady state points have been found in various other black holes. In this article, we explored the existence and evolution of these points during the course of formation of black holes. We have identified an interesting property of these points in relation to the maximal hypersurfaces: they are located where the Kodama vector becomes tangential to the maximal hypersurface. The geometrical meaning and consequences of this observation are left for future considerations. The Kodama vector works as a substitute for a timelike Killing vector in scenarios that do not have one, and, when it is timelike, it has been used to define surface gravity in a dynamically evolving setting. In this article, we find another role for the Kodama vector, viz., it pinpoints the steady state regions of a maximal hypersurface. We note that inside black holes the Kodama vector is spacelike. We develop a formula for the location of $R_{ss}$ in terms of coordinate invariants such as the area radius, the cosmological constant, the principal values of the energy-momentum tensor and the Misner-Sharp mass. Using the formula one can locate the steady state radius in various situations. We have explicitly evaluated the location of $R_{ss}$ for the vacuum case and for the black hole case with and without a cosmological constant. We have presented our analysis and compared $R_{ss}$ with the positions of the event horizon and the cosmological horizon. We showed that in the black hole scenario $R_{ss}$ is located within the event horizon. If there is a positive cosmological constant, we showed that $R_{ss}$ lies at an area radius smaller than the cosmological horizon. For an evolving situation, we used the collapse of homogeneous dust, which can be viewed as a cosmological solution or as the collapse of a star with a homogeneous distribution of dust. We showed that for the Oppenheimer-Snyder scenario and for collapse with a negative cosmological constant, there is no real solution for $R_{ss}$, and therefore $R_{ss}$ does not exist.
For dust evolution in the presence of a positive cosmological constant, we showed that $R_{ss}$ exists provided the evolving scale factor crosses a certain critical value. The analysis raises many questions that are left for future considerations. Does the relation between the Kodama vector and the maximal hypersurface continue to hold in non-spherically-symmetric situations, like the Kerr family of solutions? A timelike Kodama vector has been used in dynamical situations to define various quantities with a thermodynamic interpretation, such as the surface gravity. Does a spacelike Kodama vector also have a geometric interpretation? The Lagrangian formulation of the maximal hypersurface in Kerr/Kerr-AdS/Kerr-Newman/Kerr-deSitter spacetimes has not been carried out, though there are many interesting papers estimating the volume of the interior of the Kerr family of black holes \cite{BJ,XY}; the Lagrangian formulation for the Kerr family is a work in progress. At $R_{ss}$ both the normal vector and its tangent have zero divergence. Is there a special geometric meaning associated with $R_{ss}$ owing to this property? These questions are left open. \begin{acknowledgments} We would like to thank our institute, BITS Pilani Hyderabad campus, for providing the infrastructure required to carry out this research work. Suraj Maurya would further like to thank the Government of India for providing the CSIR (NET-JRF) fellowship supporting this research work. \end{acknowledgments} \section*{Appendix} \subsection{Solution of scale parameter $a(t)$ for homogeneous dust evolution } For homogeneous dust the area radius $R(t,r)$ and the mass function $F(r)$ are \begin{equation}\label{homogeneous23} R(t,r) = r a(t) \ and \ F(r) = \frac{2g}{n(n+1)}r^{n+1} \end{equation} and \begin{equation}\label{rdotsquare24} \dot R^2 = \frac{2\Lambda}{n(n+1)}R^2 + \frac{2g}{n(n+1)}\frac{r^{n+1}}{R^{n-1}} \end{equation} From eqs.(\ref{homogeneous23}) and (\ref{rdotsquare24}) we get \begin{equation}\label{eqn25} [\dot{a}(t)]^2 = \frac{2\Lambda}{n(n+1)}[a(t)]^2 + \frac{2g}{n(n+1)[a(t)]^{n-1}} \end{equation} The solution of eq.(\ref{eqn25}) gives the scale parameter $a(t)$ in the different regimes set by the sign of the cosmological constant $\Lambda$.
\subsubsection{\textbf{For zero cosmological constant $(\Lambda = 0)$.}} For zero cosmological constant, eq.(\ref{eqn25}) becomes \begin{equation}\label{eqn26} [\dot{a}(t)]^2 = \frac{2g}{n(n+1)[a(t)]^{n-1}} \Rightarrow \frac{da(t)}{dt} =\pm \frac{1}{a^{\frac{n-1}{2}}}\sqrt{\frac{2g}{n(n+1)}} \end{equation} The solution of eq.(\ref{eqn26}) with initial scale parameter $a(0) = 1$ gives \begin{equation} a(t) = \Bigg(1 + \sqrt{\frac{g(n+1)}{2n}}t\Bigg)^{\frac{2}{n+1}} \ and \ \Bigg(1 - \sqrt{\frac{g(n+1)}{2n}}t\Bigg)^{\frac{2}{n+1}} \end{equation} \subsubsection{\textbf{For deSitter (dS) spacetime $(\Lambda > 0)$.}} For a positive cosmological constant the scale parameter obeys \begin{multline}\label{lambdapositive28} [\dot{a}(t)]^2 = \frac{2\Lambda}{n(n+1)}[a(t)]^2 + \frac{2g}{n(n+1)[a(t)]^{n-1}}\\ \Rightarrow \frac{da(t)}{dt} =\pm \sqrt{\frac{2\Lambda}{n(n+1)}[a(t)]^2 + \frac{2g}{n(n+1)[a(t)]^{n-1}}} \end{multline} The solutions of eq.(\ref{lambdapositive28}) with initial condition $a(0) = 1$ are \begin{equation} \begin{split} a(t) = \Bigg[\sqrt{\frac{g}{\Lambda}}\sinh\Bigg(\sqrt{\frac{\Lambda (n+1)}{2n}}t + \mathrm{arcsinh}\sqrt{\frac{\Lambda}{g}}\Bigg)\Bigg]^{\frac{2}{n+1}}\\ \ and \ \Bigg[-\sqrt{\frac{g}{\Lambda}}\sinh\Bigg(\sqrt{\frac{\Lambda (n+1)}{2n}}t - \mathrm{arcsinh}\sqrt{\frac{\Lambda}{g}}\Bigg)\Bigg]^{\frac{2}{n+1}} \end{split} \end{equation} \subsubsection{\textbf{For Anti-deSitter (AdS) spacetime $(\Lambda < 0)$.}} For a negative cosmological constant the scale parameter obeys \begin{multline}\label{lambdanegative30} [\dot{a}(t)]^2 = -\frac{2|\Lambda|}{n(n+1)}[a(t)]^2 + \frac{2g}{n(n+1)[a(t)]^{n-1}}\\ \Rightarrow \frac{da(t)}{dt} =\pm \sqrt{-\frac{2|\Lambda|}{n(n+1)}[a(t)]^2 + \frac{2g}{n(n+1)[a(t)]^{n-1}}} \end{multline} The solutions of eq.(\ref{lambdanegative30}) with initial condition $a(0) = 1$ are \begin{equation} \begin{split} a(t) = \Bigg[\sqrt{\frac{g}{|\Lambda|}}\sin\Bigg(\sqrt{\frac{|\Lambda| (n+1)}{2n}}t + \arcsin\sqrt{\frac{|\Lambda|}{g}}\Bigg)\Bigg]^{\frac{2}{n+1}}\\ \ and \ \Bigg[-\sqrt{\frac{g}{|\Lambda|}}\sin\Bigg(\sqrt{\frac{|\Lambda| (n+1)}{2n}}t - \arcsin\sqrt{\frac{|\Lambda|}{g}}\Bigg)\Bigg]^{\frac{2}{n+1}} \end{split} \end{equation}\\
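These closed forms can be cross-checked by integrating eq.(\ref{eqn25}) numerically. Below is a minimal sketch of ours (not part of the original text; the parameter values $n=2$, $g=1$, $\Lambda=0.05$ are arbitrary choices) comparing the expanding deSitter branch against a direct integration:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n, g, Lam = 2, 1.0, 0.05          # illustrative values (assumed)

def rhs(t, y):                    # expanding branch of eq.(eqn25)
    a = y[0]
    return [np.sqrt(2.0*Lam*a**2/(n*(n+1)) + 2.0*g/(n*(n+1)*a**(n-1)))]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], dense_output=True, rtol=1e-10)

def a_exact(t):                   # closed-form deSitter solution, a(0) = 1
    w = np.sqrt(Lam*(n+1)/(2.0*n))
    return (np.sqrt(g/Lam)*np.sinh(w*t
            + np.arcsinh(np.sqrt(Lam/g))))**(2.0/(n+1))

t = 3.0
print(sol.sol(t)[0], a_exact(t))  # the two values agree to solver accuracy
\end{verbatim}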
\section*{Abstract} {\bf We investigate the entanglement dynamics in a free-fermion chain initially prepared in a Fermi sea and subjected to localized losses (dissipative impurity). We derive a formula describing the dynamics of the entanglement entropies in the hydrodynamic limit of long times and large intervals. The result depends only on the absorption coefficient of the effective delta potential describing the impurity in the hydrodynamic limit. Genuine dissipation-induced entanglement is certified by the linear growth of the logarithmic negativity. Finally, in the quantum Zeno regime at strong dissipation the entanglement growth is arrested (Zeno entanglement death). } \section{Introduction} \label{sec:intro} Common experience suggests that the interaction between a quantum system and its environment, and the ensuing dissipation, is detrimental for quantum entanglement. In recent years this view was challenged, as it was realized that dissipation can be a resource to engineer quantum states~\cite{lin2013}, for quantum computation~\cite{verstraete-2009}, or to stabilize exotic states of matter, such as topological order~\cite{diehl-2011}. These results, together with the interest in Noisy-Intermediate-Scale-Quantum (NISQ) devices~\cite{preskill2018quantum}, call for a thorough understanding of the interplay between entanglement and dissipation in open quantum systems. A major obstacle is that it is a challenging task to encapsulate the system-environment interaction within a theoretical framework. Within the so-called Markovian approximation, the Lindblad equation provides a powerful framework to address open quantum systems~\cite{petruccione}. Interestingly, for some models it is possible to obtain exact solutions of the Lindblad equation~\cite{prosen-2008,prosen-2011,prosen-2014,prosen-2015, znidaric-2010,znidaric-2011,medvedyeva-2016,buca-2020,bastianello-2020,essler-2020,ziolkowska-2020}, for instance, in noninteracting systems with {\it linear} dissipators~\cite{prosen-2008}. Perturbative field-theoretical approaches are also available~\cite{sieberer-2016}. A promising direction is to extend the hydrodynamic framework to integrable systems subjected to dissipation~\cite{bouchoule-2020,bastianello-2020,Friedman_2020,deleeuw-2021,denardis2021}. This is motivated by the tremendous success of Generalized Hydrodynamics (GHD) for integrable systems~\cite{bertini-2016,olalla-2016}. In some simple free-fermion setups it has been shown that it is possible to use a hydrodynamic approach to describe the entanglement dynamics~\cite{alba-2021,maity-2020}. This generalizes a well-known quasiparticle picture for the entanglement spreading in integrable systems~\cite{calabrese-2005,fagotti-2008,alba-2017,alba-2018,alba2017quench,alba2017renyi,alba2018entanglement,alba2021generalizedhydrodynamic}. \begin{figure}[t] \begin{center} \includegraphics[width=1\textwidth]{cartoon} \caption{ Dissipation-induced entanglement growth. (a) and (b) A free-fermion chain is prepared in a Fermi sea and subjected to fermionic losses acting at the center of the chain at $x=0$. Here $\gamma^-$ is the loss rate. We are interested in the entanglement entropy $S$ of a subregion of length $\ell$. We consider two partitions. In the first one (side partition) $A$ is placed next to the dissipative impurity (see (a)). In the second one (centered partition) a subregion $A'$ is centered around the impurity (see (b)). (c) Mechanism for entanglement generation.
A fermion reaching the origin can be absorbed, reflected, or transmitted. The reflected and transmitted fermions are entangled. } \end{center} \label{fig0:cartoon} \end{figure} Dissipative impurities provide a minimal theoretical laboratory to study the effects of dissipation in quantum many-body systems. They are the focus of rapidly-growing interest, both theoretical~\cite{dolgirev-2020,jin-2020,maimbourg-2020, froml-2019,tonielli-2019,froml-2020,krapivsky-2019,krapivsky-2020,rosso-2020,vernier-2020,alba2021noninteracting,chaudhari2021zeno,muller-2021} and experimental~\cite{gericke-2008,brazhnyi-2009,zezyulin-2012,barontini-2013,patil-2015,labouvie-2016}. The interplay between entanglement and thermodynamic entropy in the presence of dissipative impurities has not been explored much; one aim of this paper is to begin such an investigation. We focus on noninteracting fermions with localized fermion losses. The chain is initially prepared in a Fermi sea, and then undergoes Lindblad dynamics. To monitor the entanglement dynamics we consider the entanglement entropies~\cite{amico2008entanglement,calabrese2009entanglemententropy,eisert2010colloquium,laflorencie2016quantum} (both von Neumann and R\'enyi entropies), and the fermionic logarithmic negativity~\cite{eisert1999a,lee2000partial,vidal2002computable,plenio2005logarithmic,ruggiero2016negativity,ruggiero2016entanglement,wichterich2009scaling,marcovitch2009critical,calabrese2012entanglement,calabrese2013entanglement, coser2014entanglement,eisler2014entanglement,eisler2015partial,blondeau2016universal,shapourian2017many,shapourian2017partial,shapourian2019entanglement}. The setup is depicted in Fig.~\ref{fig0:cartoon}. An infinite chain is prepared in a Fermi sea with generic Fermi level $k_F$. The dissipation acts at the origin, removing fermions incoherently at a rate $\gamma^-$. To quantify the entanglement shared between different subregions we consider the bipartitions of the chain shown in Fig.~\ref{fig0:cartoon} (a) and (b). In (a) (side bipartition) a subsystem $A$ of length $\ell$ is placed next to the impurity, whereas in (b) (centered partition) a subsystem $A'$ of the same length is centered around the origin. Here we focus on the hydrodynamic limit of large $\ell$ and long times, with their ratio fixed. A crucial observation is that in the hydrodynamic limit of large distances from the dissipation source and long times, the dissipation acts as an effective delta potential (dissipative impurity) with imaginary strength. The associated reflection and transmission amplitudes can be derived analytically~\cite{alba2021noninteracting}. The presence of loss dissipation is reflected in a nonzero absorption coefficient. Due to the nonunitary dynamics, entanglement and thermodynamic correlations are deeply intertwined. The origin of entanglement is understood as follows. The mechanism is depicted in Fig.~\ref{fig0:cartoon} (c). The effective delta potential at the origin gives rise to a superposition of the transmitted and the reflected fermion, which form an entangled pair. The propagation of entangled pairs generates entanglement between different spatial regions of the system. More precisely, regions that share entangled pairs get entangled. A similar mechanism is responsible for entanglement production in free-fermion chains with a defect~\cite{eisler2012entanglement,collura2013entanglement,gruber2020time,gamayun-2020,gamayun2021nonequilibrium}. Together with quantum entanglement, thermodynamic correlation is produced during the dynamics.
Although the initial state is homogeneous, dissipation gives rise to a nontrivial density profile. This is accompanied by the creation of thermodynamic entropy. Here we show that the entanglement entropies cannot distinguish between these two types of correlation. Indeed, quite generically the entanglement entropies grow linearly with time. This linear growth of the von Neumann entropy in open quantum systems has been observed already, for instance, in~\cite{ptaszy2019entropy}. One of our main results is that in the hydrodynamic limit the entanglement entropy of $A$ (see Fig.~\ref{fig0:cartoon} (a)) is described by \begin{equation} \label{eq:ent-hydro-intro} S=\frac{\ell}{2}\int_{-k_F}^{k_F}\frac{dk}{2\pi} H_1(1-|a|^2)\min(|v_k|t/\ell,1). \end{equation} We provide similar results for $A'$. In~\eqref{eq:ent-hydro-intro} we defined $H_1(x):=-x\ln(x)-(1-x)\ln(1-x)$, and $v_k$ is the fermion group velocity. Crucially, $|a|^2$ is the absorption coefficient, which is nonzero because of the losses. For lattice systems a maximum velocity $v_\mathrm{max}$ exists and~\eqref{eq:ent-hydro-intro} predicts a linear growth at short times $v_\mathrm{max}t/\ell<1$, followed by a volume-law scaling at long times. We provide similar results for the R\'enyi entropies and the moments of fermionic correlation functions. Formula~\eqref{eq:ent-hydro-intro} is similar to that describing the entanglement dynamics in a free-fermion chain with a bond defect~\cite{eisler2012entanglement}. The main difference is that in the unitary case the growth of the entropy depends only on the transmission coefficient of the defect. We should stress that although we present results only for the two geometries in Fig.~\ref{fig0:cartoon}, it should be possible to generalize Eq.~\eqref{eq:ent-hydro-intro} to arbitrary bipartitions. Again, the linear growth in~\eqref{eq:ent-hydro-intro} does not reflect genuine entanglement production, which can be diagnosed by the logarithmic negativity. For instance, we show that the logarithmic negativity grows linearly with time for subsystem $A$, whereas it does not increase for $A'$. This supports the mechanism outlined above. For the bipartition in Fig.~\ref{fig0:cartoon} (a) entanglement is due to the shared pairs formed by the transmitted and the reflected fermions. On the other hand, for the bipartition in Fig.~\ref{fig0:cartoon} (b) these pairs are never shared between $A'$ and its complement. The manuscript is organized as follows. In section~\ref{sec:model} we introduce the model and the setup. In particular, we review the formula for the fermionic correlators in the hydrodynamic limit, which are the main ingredients to compute the entanglement entropies and the negativity. Entanglement-related quantities are introduced in section~\ref{sec:obs}. In section~\ref{sec:hydro-ent} we present our main results. We first discuss the formula describing arbitrary functions of the moments of the fermionic correlators in the hydrodynamic limit. In section~\ref{sec:mn} we specialize to the moments of the fermionic correlators. In section~\ref{sec:ent-theo} we discuss the hydrodynamic behavior of the entanglement entropies. In section~\ref{sec:ss-entropy} we focus on the stationary value of the entanglement entropy, discussing its dependence on the dissipation strength. In section~\ref{sec:num} we present numerical benchmarks. We focus on the moments of the fermionic correlators in section~\ref{sec:num-a}, and on the entanglement entropies in section~\ref{sec:num-b}.
We discuss some future directions in section~\ref{sec:concl}. In Appendix~\ref{sec:app} we report the derivation of the main result of section~\ref{sec:hydro-ent}. \section{Localized losses in a Fermi sea} \label{sec:model} Here we consider the infinite free-fermion chain defined by the Hamiltonian \begin{equation} \label{eq:ham} H=\sum_{x=-\infty}^\infty(c_x^\dagger c_{x+1}+c^\dagger_{x+1}c_x)\, , \end{equation} where $c_x^\dagger,c_x$ are creation and annihilation operators at site $x$. The fermionic operators obey standard canonical anticommutation relations. To diagonalize~\eqref{eq:ham} one defines a Fourier transform with respect to $x$, introducing the fermionic operators $b_k$ in momentum space as \begin{equation} b_k:=\sum_{x=-\infty}^\infty e^{-ikx}c_x,\quad c_x=\int_{-\pi}^\pi \frac{dk}{2\pi} e^{i k x}b_k\,. \end{equation} Eq.~\eqref{eq:ham} is rewritten in terms of $b_k$ as \begin{equation} \label{eq:ham-k} H=\int_{-\pi}^\pi\frac{dk}{2\pi} \varepsilon_k b^\dagger_k b_k\, ,\quad \varepsilon_k:=2\cos(k)\, . \end{equation} Eq.~\eqref{eq:ham-k} is diagonal, and it conserves the particle number. Let us consider a generic fermion density $n_f=k_F/\pi$, with $k_F$ the Fermi momentum. The ground state of~\eqref{eq:ham} is obtained by filling the single-particle states with quasimomenta $k\in[-k_F,k_F]$. The state with $n_f=1$ ($k_F=\pi$) in which all the quasimomenta are occupied is a product state, and it has trivial correlations. For intermediate filling $0<k_F<\pi$ the ground state of~\eqref{eq:ham} is critical, with power-law correlations. From the single-particle dispersion in~\eqref{eq:ham-k} we define the group velocity $v_k$ of the fermions as \begin{equation} \label{eq:v-k} v_k:=\frac{d\varepsilon_k}{dk}=-2\sin(k)\, . \end{equation} Here we consider the out-of-equilibrium dynamics under the Hamiltonian~\eqref{eq:ham} and localized loss processes at the center of the chain. These are treated in the formalism of quantum master equations~\cite{petruccione}. The time-evolved density matrix $\rho_t$ of the system is described by \begin{equation} \label{eq:lind} \frac{d\rho_t}{dt}=-i[H,\rho_t]+L^{-}\rho_t L^{-\, \dagger}-\frac{1}{2}\{L^{-\,\dagger} L^{-},\rho_t\}\, . \end{equation} Here, the so-called Lindblad jump operator $L^{-}$ is defined as $L^-=\sqrt{\gamma^-}c_0$ (see Fig.~\ref{fig0:cartoon} for a pictorial definition), with $\gamma^-$ the loss rate. Eq.~\eqref{eq:lind} describes incoherent absorption of fermions at the center of the chain. Entanglement properties of the system can be extracted from the fermionic two-point correlation functions, i.e., the {\it covariance matrix} \begin{equation} G_{x,y}(t):=\mathrm{Tr}(c^\dagger_x c_y\rho(t))\, . \end{equation} The dynamics of $G_{x,y}$ is obtained as (we drop the dependence on the coordinates $x,y$ to lighten the notation) \begin{equation} \label{eq:G} G(t)= e^{t\Lambda}G(0)e^{t\Lambda^\dagger}, \end{equation} where $G(0)$ is the matrix containing the initial correlations. The matrix $\Lambda$ is defined as \begin{equation} \Lambda=ih-\frac{\Gamma^-}{2}, \end{equation} where $h_{x,y}=\delta_{|x-y|,1}$ is the Hamiltonian contribution, while $\Gamma^-_{x,y}=\gamma^-\delta_{x,0}\delta_{x,y}$ encodes the localized dissipative effects. The covariance matrix $G_{x,y}$ is the solution of the linear system of equations \begin{equation} \label{eq:one} \frac{d G_{x,y}}{dt}=i(G_{x+1,y}+G_{x-1,y}-G_{x,y+1}-G_{x,y-1}) -\frac{\gamma^-}{2}(\delta_{x,0}G_{x,y}+\delta_{y,0}G_{x,y}).
\end{equation} Here we are interested in the hydrodynamic limit of large distances from the origin and long times, i.e., $x,y,t\to\infty$ with the ratios $x/t,y/t$ fixed. In this limit it can be shown that the dissipation is effectively described by a delta potential. The strength of the potential is imaginary, which is a consequence of nonunitarity. Several properties of the system can be derived by studying the scattering problem of a quantum particle with an imaginary delta potential~\cite{burke-2020}. For several initial states, both homogeneous as well as inhomogeneous ones, the dynamics of $G_{x,y}$ can be described solely in terms of the initial fermionic occupations and the reflection and transmission coefficients of the emergent delta potential~\cite{alba2021noninteracting}. Here we are interested in the situation in which the initial state of the dynamics is a Fermi sea with arbitrary Fermi momentum $k_F$. \subsection{Hydrodynamic limit of the covariance matrix} \label{sec:cov-hydro} In the hydrodynamic limit the solution of~\eqref{eq:one} with the Fermi sea as initial condition is obtained as~\cite{alba2021noninteracting} (see also~\cite{froml-2020}) \begin{equation} \label{eq:f-corr} G_{x,y}(t)=\int_{-k_F}^{k_F}\frac{dk}{2\pi}(e^{ikx}+\chi_{x}(t)r(k)e^{i|kx|}) (e^{-iky}+\chi_{y}(t)r(k)e^{-i|ky|}). \end{equation} Notice the absolute values in the second terms in the brackets. Moreover, one should observe that the contributions associated with the two coordinates $x,y$ factorize. This factorization is crucial~\cite{krapivsky-2019} to obtain the exact solution of~\eqref{eq:one}. In~\eqref{eq:f-corr} $r(k)$ is the momentum-dependent reflection amplitude of the effective delta potential describing the dissipation source at the origin. The analytic expressions for $r(k)$ and for the associated transmission amplitude $\tau(k)$ are~\cite{alba2021noninteracting} \begin{equation} \label{eq:delta-coeff} r(k):=-\frac{\gamma^-}{2}\frac{1}{\frac{\gamma^-}{2}+|v_k|},\quad \tau(k):=\frac{|v_k|}{\frac{\gamma^-}{2}+|v_k|}, \end{equation} where $v_k$ is the fermion group velocity defined in~\eqref{eq:v-k}. Notice that~\eqref{eq:delta-coeff} coincides with the reflection and transmission amplitudes for a quantum particle scattering with a delta potential with imaginary strength $-i\gamma^-/2$, after redefining~\cite{burke-2020} $v_k\sim k$. Crucially, since the dynamics is nonunitary one has that \begin{equation} \label{eq:abs} |a|^2:=1-|r|^2-|\tau|^2=\frac{\gamma^- |v_k|}{(\frac{\gamma^-}{2}+|v_k|)^2}>0, \end{equation} where we defined the absorption coefficient $|a|^2$, which is the probability that a fermion with quasimomentum $k$ is removed at the origin. The time dependence of the correlator in~\eqref{eq:f-corr} is encoded in the function $\chi_x$, which is defined as \begin{equation} \label{eq:constr} \chi_{x}:=\Theta(|v_k|t-|x|). \end{equation} At $t=0$, from~\eqref{eq:f-corr} one recovers the initial correlation of the Fermi sea as \begin{equation} \label{eq:fs-corr} G_{x,y}(0)=\frac{\sin(k_F(x-y))}{\pi(x-y)}. \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{density} \caption{Dynamics of the fermionic density $n_{x,t}$ in the presence of localized losses. Results are for the initial Fermi sea with $k_F=\pi/2$ and for loss rates $\gamma^-=10$ and $\gamma^-=0.5$ (continuous and dotted lines, respectively). The oscillations are an artifact of the approximations and vanish in the hydrodynamic limit $x,t\to\infty$ with their ratio fixed.
Notice that in the hydrodynamic limit the density develops a discontinuity at the origin. } \label{fig1aa:density} \end{center} \end{figure} To get an idea of the effect of the dissipation, it is instructive to consider the dynamics of the local fermionic density $n_{x,t}$ \begin{equation} n_{x,t}=G_{x,x}. \end{equation} This is discussed in Fig.~\ref{fig1aa:density}. We plot $n_{x,t}$ versus the scaling variable $x/(2t)$, showing results for $\gamma^-=0.5$ and $\gamma^-=10$. The results are obtained by using~\eqref{eq:f-corr}. We focus on the effects of the localized losses on the initial Fermi sea with $k_F=\pi/2$. As expected, the Fermi sea gets depleted with time and a nontrivial density profile forms around the origin. For $|x/(2t)|>1$ the effect of the dissipation is not present and one has the initial density $1/2$. The density profile exhibits a discontinuity at the origin. This reflects the presence of an effective delta potential at the origin. Finally, the oscillations present in Fig.~\ref{fig1aa:density} are an artifact of the approximations employed to derive~\eqref{eq:f-corr}, and vanish in the hydrodynamic limit. In the strong dissipation limit $\gamma^-\to\infty$ the evolution of the density freezes. This is a manifestation of the quantum Zeno effect~\cite{degasperis-1974,misra-1977,facchi-2002}. In the following sections we show that the fermionic dynamics shown in Fig.~\ref{fig1aa:density} is accompanied by a robust linear entanglement growth with time. \section{Entanglement entropies and logarithmic negativity: Definitions} \label{sec:obs} In order to understand how the presence of localized losses affects the entanglement content of the system, here we focus on several quantum-information-related quantities, such as the entanglement entropies and the logarithmic negativity. To introduce them, let us consider a bipartition of the system as $A\cup \bar A$ (see, for instance, Fig.~\ref{fig0:cartoon} (a) and (b)). By tracing over the degrees of freedom of $\bar A$, which is the complement of $A$, one obtains the reduced density matrix $\rho_A=\mathrm{Tr}_{\bar A}\rho$, where $\rho$ is the full-system density matrix. The R\'enyi entropies are defined as~\cite{amico2008entanglement,eisert2010colloquium,calabrese2009entanglemententropy,laflorencie2016quantum} \begin{equation} \label{eq:renyi-rho} S^{(n)}:=\frac{1}{1-n}\ln\mathrm{Tr}(\rho^n_A),\quad\mathrm{with}\, n\in\mathbb{R}. \end{equation} In the limit $n\to1$ one obtains the von Neumann entropy as \begin{equation} \label{eq:vn} S=-\mathrm{Tr}\rho_A\ln(\rho_A). \end{equation} Both R\'enyi and von Neumann entropies are good entanglement measures provided that the full system is in a pure state. However, in the presence of dissipation the full system is in a mixed state, which introduces some ``classical'' correlation between $A$ and $\bar A$. This spurious, i.e., non-quantum, correlation affects both the R\'enyi entropies and the entanglement entropy. In these situations the so-called logarithmic negativity~\cite{lee2000partial,vidal2002computable} can be used to quantify the amount of genuine entanglement between $A$ and the rest. The logarithmic negativity ${\cal E}$ is defined from the partially-transposed density matrix $\rho^{T}$.
This is defined from $\rho$ by taking the matrix transposition with respect to the degrees of freedom of $\bar A$ as \begin{equation} \langle e_i,\bar e_j|\rho^T| e_k,\bar e_l\rangle= \langle e_i,\bar e_l|\rho|e_k,\bar e_j\rangle, \end{equation} with $e_i,\bar e_j$ two bases for $A$ and its complement, respectively. Unlike $\rho$, $\rho^T$ is not positive definite, and its negative eigenvalues quantify the amount of entanglement. The logarithmic negativity is defined as \begin{equation} \label{eq:neg-def0} {\cal E}=\ln(\mathrm{Tr}|\rho^T|). \end{equation} Here we focus on free-fermion models. For free-fermion and free-boson models both the R\'enyi entropies and the von Neumann entropy of a region $A$ are calculable from the two-point correlation function $G_{x,y}$ restricted to $A$, i.e., with $x,y\in A$. Specifically, the R\'enyi entropies are obtained as~\cite{peschel2009reduced} \begin{equation} \label{eq:renyi-def} S^{(n)}=\frac{1}{1-n}\mathrm{Tr}\ln\Big[G^n+(1-G)^n\Big]. \end{equation} In the limit $n\to1$, one recovers the von Neumann entropy as \begin{equation} \label{eq:vn-def} S=-\mathrm{Tr}\big(G\ln(G)+(1-G)\ln(1-G)\big). \end{equation} The logarithmic negativity ${\cal E}$ can be calculated efficiently from the two-point function only for free bosons~\cite{audenaert2002entanglement}. For free fermions the partial transpose $\rho^T$ is not a gaussian operator, although it can be written as a sum of two gaussian operators~\cite{eisler2015partial} as \begin{equation} \label{eq:opm} \rho^T=e^{-i\pi/4} O_++e^{i\pi/4}O_-, \end{equation} where $O_\pm$ are gaussian operators. Very recently, an alternative definition of negativity has been put forward~\cite{shapourian2017many,shapourian2017partial,shapourian2019entanglement}. We dub this alternative negativity the {\it fermionic} negativity. Its definition reads \begin{equation} \label{eq:e-f} {\cal E}:=\ln\mathrm{Tr}\sqrt{O_+O_-}. \end{equation} Here we use the same symbol ${\cal E}$ for the fermionic negativity and for the standard one in~\eqref{eq:neg-def0}, because in the following we will only use the fermionic one. In contrast with~\eqref{eq:neg-def0}, since the product $O_+O_-$ is a gaussian operator, the fermionic negativity~\eqref{eq:e-f} can be computed efficiently in terms of fermionic two-point functions. Specifically, let us rewrite the full-system correlation matrix $G$ as \begin{equation} G=\left(\begin{array}{cc} G_{AA} & G_{A\bar A}\\ G_{\bar A A} & G_{\bar A\bar A} \end{array}\right) \end{equation} Here $G_{WZ}$, with $W,Z=A,\bar A$, is obtained from the full-system $G_{x,y}$ by restricting to $x\in W$ and $y\in Z$. One now defines the matrices $G^\pm$ as \begin{equation} G^\pm=\left(\begin{array}{cc} -G_{AA} & \pm i G_{A\bar A}\\ \pm i G_{\bar A A} & G_{\bar A\bar A} \end{array} \right) \end{equation} We then define the matrix $C^T$ as \begin{equation} C^T=\frac{1}{2}\mathbb{I}-P^{-1}(G^++G^-),\quad\mathrm{with}\, P=\mathbb{I}-G^+G^-. \end{equation} From the eigenvalues $\xi_i$ of $C^T$ and $\lambda_i$ of $G$ we can define the fermionic negativity ${\cal E}$ as~\cite{shapourian2017partial} \begin{equation} \label{eq:neg-def} {\cal E}=\sum_i\Big[\ln[\xi_i^{1/2}+(1-\xi_i)^{1/2}]+\frac{1}{2}\ln[\lambda_i^2+(1-\lambda_i)^2]\Big]. \end{equation} It has been shown in Ref.~\cite{shapourian2019entanglement} that under reasonable assumptions the fermionic negativity is a good entanglement measure for mixed states.
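For finite chains, all of the above can be evaluated directly from the covariance matrix. The following is a minimal sketch of such exact numerics (our own illustration, not the authors' code; the chain size, filling, loss rate and subsystem are arbitrary choices): it builds the Fermi-sea correlations~\eqref{eq:fs-corr}, evolves them with~\eqref{eq:G}, and evaluates the von Neumann entropy~\eqref{eq:vn-def}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, eigvalsh

L, kF, gm, t = 101, np.pi/3, 1.0, 10.0   # illustrative parameters (assumed)
x = np.arange(L) - L//2                  # sites, impurity at x = 0

# Fermi-sea covariance matrix, eq.(eq:fs-corr)
d = x[:, None] - x[None, :]
G0 = np.where(d == 0, kF/np.pi,
              np.sin(kF*d)/(np.pi*np.where(d == 0, 1, d)))

# Lambda = i h - Gamma/2, with h the hopping matrix, Gamma = gm at x = 0
h = np.diag(np.ones(L-1), 1) + np.diag(np.ones(L-1), -1)
Gam = np.zeros((L, L))
Gam[L//2, L//2] = gm
Lam = 1j*h - Gam/2

# eq.(eq:G): G(t) = exp(t Lambda) G(0) exp(t Lambda^dagger)
E = expm(t*Lam)
G = E @ G0 @ E.conj().T

# von Neumann entropy of A = [1, ell] next to the impurity, eq.(eq:vn-def)
ell = 30
GA = G[L//2+1:L//2+1+ell, L//2+1:L//2+1+ell]
z = eigvalsh(GA)
z = z[(z > 1e-12) & (z < 1 - 1e-12)]     # endpoints contribute zero
S = -np.sum(z*np.log(z) + (1-z)*np.log(1-z))
print(S)
\end{verbatim}
The fermionic negativity~\eqref{eq:neg-def} can be computed from the same $G$ by assembling $G^\pm$ and $C^T$ as above.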
\section{Hydrodynamic description of entanglement entropies} \label{sec:hydro-ent} We now discuss the out-of-equilibrium dynamics of the entanglement entropies in the hydrodynamic limit. Before that, we provide a more general result, which allows us to obtain the hydrodynamic behavior of the trace of a generic function of the fermionic correlator (cf.~\eqref{eq:f-corr}). Let us consider the bipartitions in Fig.~\ref{fig0:cartoon} (a) and (b). In Fig.~\ref{fig0:cartoon} (a) subsystem $A$ is the interval $[0,\ell]$, i.e., on the right of the dissipative impurity. In Fig.~\ref{fig0:cartoon} (b) we consider subsystem $A'=[-\ell/2,\ell/2]$ centered around the impurity. Let us consider a generic function ${\mathcal F}(z)$, and let us focus on the quantity $\mathrm{Tr}{\mathcal F}(G_X)$, with $X=A,A'$. In the hydrodynamic limit $t,\ell\to\infty$, with their ratio fixed, one can show that \begin{multline} \label{eq:F} \mathrm{Tr}{\mathcal F}(G_X)=\ell\int_{-k_F}^{k_F}\frac{dk}{2\pi} \Big[\Big(1-\frac{1}{2 z_X}\min(z_X|v_k|t/\ell,1)\Big){\mathcal F}(1) \\+\frac{1}{2 z_X}{\mathcal F}(1-z_X|a(k)|^2) \min(z_X|v_k|t/\ell,1)\Big],\quad z_{A(A')}=1(2). \end{multline} Here $v_k$ is the fermion group velocity in~\eqref{eq:v-k}, and $|a(k)|^2$ is the absorption coefficient of the emergent delta potential (cf.~\eqref{eq:abs}) at the origin. Eq.~\eqref{eq:F} depends only on $1-|a|^2$, i.e., the probability of the fermions not to be absorbed at the origin. Also, the only dependence on time is via the factor $\min(z_X|v_k|t/\ell,1)$, which encodes the fact that $A$ and $A'$ are finite, and information propagates from the origin at a finite velocity $v_k$. The factor $z_X$ in~\eqref{eq:F} accounts for the different geometries in Fig.~\ref{fig0:cartoon} (a) and (b), and it has a simple interpretation. For instance, in the argument of the second term in~\eqref{eq:F}, $z_X$ takes into account that for the bipartition in Fig.~\ref{fig0:cartoon} (b) the number of absorbed fermions is twice that for the bipartition in Fig.~\ref{fig0:cartoon} (a), because the impurity is at the center of $A'$. Moreover, in $\min(z_X |v_k|t/\ell,1)$, $z_X$ reflects that for $A'$ the distance between the impurity and the edge of $A'$ is $\ell/2$ instead of $\ell$. For generic ${\mathcal F}(z)$, Eq.~\eqref{eq:F} predicts a linear behavior with time for $t\le \ell/(z_X v_\mathrm{max})$, with $v_\mathrm{max}$ the maximum velocity in the system. This is followed by an asymptotic saturation at $t\to\infty$ to a volume law $\propto\ell$. Finally, for $\gamma^-=0$ one recovers the unitary case, and from~\eqref{eq:F} one obtains that \begin{equation} \label{eq:nodiss} \mathrm{Tr}{\mathcal F}(G_X)=\ell\int_{-k_F}^{k_F}\frac{dk}{2\pi} {\mathcal F}(1). \end{equation} Eq.~\eqref{eq:nodiss} means that in the absence of dissipation there is no dynamics, and for any ${\mathcal F}$ one has a constant contribution that is proportional to $\ell$. The fact that there is no dependence on $z_X$ and on the geometry reflects translation invariance. The derivation of~\eqref{eq:F} is reported in Appendix~\ref{sec:app}; it relies on the multidimensional stationary phase approximation~\cite{wong}, and on the assumption that ${\mathcal F}(z)$ admits a Taylor expansion around $z=0$. We should also stress that although we discuss only the two geometries in Fig.~\ref{fig0:cartoon} (a) and (b), it should be possible to generalize~\eqref{eq:F} to arbitrary bipartitions or multipartitions.
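Eq.~\eqref{eq:F} is straightforward to evaluate numerically for any ${\mathcal F}$. Below is a minimal sketch (our own illustration; the parameter values are arbitrary choices) with $|a(k)|^2$ from~\eqref{eq:abs} and $|v_k|=2|\sin k|$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

kF, gm = np.pi/3, 1.0                  # illustrative parameters (assumed)

def absorb(k):
    # absorption coefficient |a(k)|^2 of eq.(eq:abs)
    v = 2.0*np.abs(np.sin(k))
    return gm*v/(gm/2.0 + v)**2

def H1(z):
    # von Neumann function; endpoints contribute zero
    return 0.0 if z <= 0.0 or z >= 1.0 else -z*np.log(z) - (1.0-z)*np.log(1.0-z)

def trF_per_ell(Fz, s, zX=1):
    # eq.(eq:F) divided by ell; s = t/ell, zX = 1 for A, zX = 2 for A'
    def integrand(k):
        m = min(zX*2.0*np.abs(np.sin(k))*s, 1.0)
        return ((1.0 - m/(2.0*zX))*Fz(1.0)
                + Fz(1.0 - zX*absorb(k))*m/(2.0*zX))/(2.0*np.pi)
    return quad(integrand, -kF, kF)[0]

# example: entropy density S/ell of the side partition at t/ell = 0.4
print(trF_per_ell(H1, 0.4))
\end{verbatim}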
In the following, by considering different functions ${\mathcal F}(z)$ we provide exact results for the moments of the correlation matrix and the entanglement entropies in the hydrodynamic limit.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{traces_theo}
\caption{ Moments of the fermionic correlator $M_n=\mathrm{Tr}(G^n)$, where $G$ is the fermionic correlation function restricted to subsystem $A$ (see Fig.~\ref{fig0:cartoon}). We plot the rescaled moments $M_n/\ell$, with $\ell$ the length of $A$, versus $v_\mathrm{max} t/\ell$, $v_\mathrm{max}$ being the maximum velocity. Lines are analytic results in the hydrodynamic limit $\ell,t\to\infty$ with their ratio fixed. We only show results for $\gamma^-=1$ and $k_F=\pi/3$. Note the linear behavior for $t\le \ell/v_\mathrm{max}$ followed by a saturation at $t\to\infty$. }
\label{fig1:traces-theo}
\end{center}
\end{figure}
\subsection{Moments of the correlation matrix}
\label{sec:mn}
Here we study the hydrodynamic limit of the moments $M_n$ of the fermionic correlation matrix. These are defined as
\begin{equation}
M_n=\mathrm{Tr}(G^n),
\end{equation}
where the correlation matrix $G$ is restricted to subsystem $A$ or $A'$ (see Fig.~\ref{fig0:cartoon}). The behavior of $M_n$ in the hydrodynamic limit is readily obtained from~\eqref{eq:F} by choosing ${\mathcal F}(z)=z^n$. One obtains that
\begin{multline}
\label{eq:Mn}
M_n=\ell\int_{-k_F}^{k_F}\frac{dk}{2\pi}
\Big[\Big(1-\frac{1}{2 z_X}\min(z_X|v_k|t/\ell,1)\Big)
\\+\frac{1}{2 z_X}(1-z_X|a(k)|^2)^n
\min(z_X|v_k|t/\ell,1)\Big],\quad z_{A(A')}=1(2).
\end{multline}
The structure is the same as in~\eqref{eq:F}: the $M_n$ exhibit the same qualitative behavior, with a linear decrease at short times $t\le\ell/(z_X v_\mathrm{max})$, followed by an asymptotic saturation at $t\to\infty$. Several remarks are in order. First, at $t=0$ one has, for any $n$, $M_n=\ell k_F/\pi$, which is the initial number of fermions in the subsystem. For $t\to\infty$ one has that the number of fermions $M_1$ in the subsystem is
\begin{equation}
M_1 \xrightarrow{t\to\infty}\ell\int_{-k_F}^{k_F}\frac{dk}{2\pi}\Big(1-\frac{|a|^2}{2}\Big).
\end{equation}
This means that $M_1\propto\ell$ for $t\to\infty$, despite the presence of dissipation. In the strong dissipation limit $\gamma^-\to\infty$ one has that $|a|^2\to0$, and $M_1\to\ell k_F/\pi$, i.e., the initial fermion number. This is a manifestation of the quantum Zeno effect. In the limit $\gamma^-\to\infty$ the dynamics of the system is arrested and the number of fermions absorbed at the origin vanishes. Finally, it is interesting to consider $M_n$ in the limit $n\to\infty$. One can readily check that $|1-z_X|a(k)|^2|<1$ at nonzero dissipation, which implies that the second term in~\eqref{eq:Mn} vanishes as $n\to\infty$ and only the first term survives. In particular, in the limit $t\to\infty$, from~\eqref{eq:Mn} one obtains that $M_\infty=(\ell k_F/\pi)(1-1/(2z_X))$. For $z_X=1$ (i.e., for the partition in Fig.~\ref{fig0:cartoon} (a)) one has $M_\infty=\ell k_F/(2\pi)$, which is half of the initial number of fermions. In Fig.~\ref{fig1:traces-theo} we show the predictions of~\eqref{eq:Mn} for $M_n$. We consider the case with $k_F=\pi/3$ and we restrict ourselves to $\gamma^-=1$. We provide results only for the bipartition in Fig.~\ref{fig0:cartoon} (a). The generic behavior outlined above is clearly visible in the figure.
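In terms of the quadrature routine sketched in section~\ref{sec:hydro-ent}, the moments simply amount to the choice ${\mathcal F}(z)=z^n$; for instance (reusing \texttt{trace\_F} and \texttt{numpy} from that sketch, again with a placeholder $|a(k)|^2$):

\begin{verbatim}
# Moments M_n/ell from Eq. (Mn): pass F(z) = z**n to trace_F.
for n in (1, 2, 4):
    Mn = trace_F(lambda z, n=n: z**n, t_over_ell=10.0, kf=np.pi/3,
                 abs2=lambda k: 0.4, vel=np.sin, zX=1)
    print(n, Mn)
# As n and t grow, M_n/ell should approach (kf/pi)*(1 - 1/(2*zX)),
# i.e. kf/(2*pi) for zX = 1.
\end{verbatim}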
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{ent_theo}
\caption{ Entanglement entropies $S^{(n)}$ of a subsystem $A$ placed next to the dissipation source (see Fig.~\ref{fig0:cartoon} (a)). The different lines are analytic predictions in the hydrodynamic limit for different values of $n$. We plot $S^{(n)}/\ell$ versus $v_\mathrm{max}t/\ell$, with $\ell$ the size of $A$ and $v_\mathrm{max}$ the maximum velocity. We only show results for $\gamma^-=1$ and $k_F=\pi$. }
\label{fig2:ent-theo}
\end{center}
\end{figure}
\subsection{Entanglement entropies}
\label{sec:ent-theo}
The hydrodynamic limit of the entanglement entropies, both the von Neumann and the R\'enyi entropies, is obtained from~\eqref{eq:F} by choosing
\begin{equation}
\label{eq:Hn}
{\mathcal F}(z)=H_n(z)=\frac{1}{1-n}\ln[z^n+(1-z)^n].
\end{equation}
In the limit $n\to1$ one recovers the von Neumann entropy by choosing $H_1(z)=-z\ln(z)-(1-z)\ln(1-z)$. After using~\eqref{eq:Hn} in~\eqref{eq:F}, and after observing that ${\mathcal F}(1)=0$ for any $n$, one obtains that
\begin{equation}
\label{eq:ent-hydro}
S^{(n)}=\frac{\ell}{2 z_X}\int_{-k_F}^{k_F}\frac{dk}{2\pi}
H_n(1-z_X|a|^2)\min(z_X|v_k|t/\ell,1).
\end{equation}
First, for $\gamma^-=0$, i.e., in the absence of dissipation, one has that $S^{(n)}=0$ for any $n$. This is consistent with the fact that for a Fermi sea the entanglement entropies exhibit the typical Conformal Field Theory (CFT) logarithmic scaling~\cite{calabrese2009entanglemententropy}
\begin{equation}
\label{eq:cft-scaling}
S^{(n)}= \frac{c}{6}\Big(1+\frac{1}{n}\Big)\ln(\ell)+c_n,
\end{equation}
where $c=1$ is the central charge of the model and $c_n$ are nonuniversal constants. The scaling~\eqref{eq:cft-scaling} cannot be captured by~\eqref{eq:ent-hydro}, which describes the leading volume-law behavior $S^{(n)}\propto\ell$. In the strong dissipation limit $\gamma^-\to\infty$, the $S^{(n)}$ vanish for any $n$, reflecting the Zeno effect. Away from the limits $\gamma^-=0$ and $\gamma^-\to\infty$, the entanglement entropies increase linearly at short times $t\le \ell/(z_X v_\mathrm{max})$, and saturate to a volume-law scaling $S^{(n)}\propto\ell$ at asymptotically long times. It is interesting to consider the limit $n\to\infty$, which gives the so-called single-copy entanglement. From~\eqref{eq:ent-hydro} it is clear that only the first term inside the logarithm in~\eqref{eq:Hn} counts, and one obtains that
\begin{equation}
\label{eq:s-copy}
S^{(\infty)}=-\frac{\ell}{2 z_X}\int_{-k_F}^{k_F}\frac{dk}{2\pi}
\ln(1-z_X|a|^2)\min(z_X|v_k|t/\ell,1).
\end{equation}
It is now crucial to remark that Eq.~\eqref{eq:ent-hydro} gives the same qualitative behavior for the entanglement entropies of $A$ and $A'$ (see Fig.~\ref{fig0:cartoon} (a) and (b)). This is surprising at first because no production of entanglement is expected for the centered bipartition in Fig.~\ref{fig0:cartoon} (b). The reason is that the reflected and the transmitted fermions, which form the entangled pairs, are never shared between $A'$ and its complement. The linear growth in this case should be attributed to the formation of a nontrivial density profile around the origin, which reflects the creation of thermodynamic entropy. The entanglement entropies are not {\it bona fide} entanglement measures for mixed states because they are sensitive to this thermodynamic contribution. We anticipate that, in contrast, the logarithmic negativity is sensitive only to the genuine quantum correlations (see section~\ref{sec:num-b}).
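Numerically, Eq.~\eqref{eq:ent-hydro} is again a one-line application of the quadrature sketch of section~\ref{sec:hydro-ent}, now with ${\mathcal F}=H_n$ (illustrative values and a placeholder $|a(k)|^2$, as before; \texttt{trace\_F} and \texttt{numpy} are those of the earlier sketch):

\begin{verbatim}
# Entropy densities S^(n)/ell from Eq. (ent-hydro): pass H_n to trace_F.
def H(z, n):
    # Eq. (Hn); n = 1 gives the von Neumann kernel, and H(1, n) = 0.
    if n == 1:
        return -z*np.log(z) - (1 - z)*np.log(1 - z) if 0 < z < 1 else 0.0
    return np.log(z**n + (1 - z)**n)/(1 - n)

S1 = trace_F(lambda z: H(z, 1), t_over_ell=0.5, kf=np.pi/3,
             abs2=lambda k: 0.4, vel=np.sin, zX=1)
S2 = trace_F(lambda z: H(z, 2), t_over_ell=0.5, kf=np.pi/3,
             abs2=lambda k: 0.4, vel=np.sin, zX=1)
\end{verbatim}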
In Fig.~\ref{fig2:ent-theo} we report analytic predictions for the dynamics of the entanglement entropies obtained from~\eqref{eq:ent-hydro}. We plot the rescaled entropies $S^{(n)}_X/\ell$ versus $v_\mathrm{max}t/\ell$ for several values of $n$. We consider only the bipartition in Fig.~\ref{fig0:cartoon} (a), i.e., we choose $X=A$ in~\eqref{eq:ent-hydro}. Furthermore, we show data for $k_F=\pi$ and $\gamma^-=1$. The qualitative behavior discussed above is clearly visible.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{ent_max}
\caption{ Steady-state entropy in the free fermion chain with localized losses. We show results for the bipartition in Fig.~\ref{fig0:cartoon} (a). We plot $S^{(\mathrm{steady})}/\ell$ versus the loss rate $\gamma^-$. The different lines in the main figure are for initial states with different Fermi momentum $k_F$. Note that the steady-state entropy has a maximum at $\gamma^-\approx 1$. For $\gamma^-\to\infty$ the steady-state entropy vanishes as $S^{(\mathrm{steady})}/\ell\propto\ln(\gamma^-)/\gamma^-$, as shown in the inset. }
\label{fig3:ent-max}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{ent_max_p}
\caption{ Same as in Fig.~\ref{fig3:ent-max} for the centered partition in Fig.~\ref{fig0:cartoon} (b). Notice the presence of two maxima, in contrast with Fig.~\ref{fig3:ent-max}. }
\label{fig3a:ent-max}
\end{center}
\end{figure}
\subsection{Zeno death of entanglement entropy}
\label{sec:ss-entropy}
It is interesting to investigate the steady-state value of the entanglement entropy as a function of the dissipation rate $\gamma^-$. The steady-state entanglement entropy $S^{(\mathrm{steady})}$ is obtained from~\eqref{eq:ent-hydro} as
\begin{equation}
\label{eq:s-steady}
S^{(\mathrm{steady})}=\frac{\ell}{2z_X}\int_{-k_F}^{k_F}\frac{dk}{2\pi}
H_1(1-z_X|a|^2).
\end{equation}
In Fig.~\ref{fig3:ent-max} we plot $S^{(\mathrm{steady})}/\ell$ versus $\gamma^-$. The results are for $X=A$ (see Fig.~\ref{fig0:cartoon} (a)). In the main plot, the different curves correspond to different values of $k_F$. Notice that the entanglement entropy increases upon increasing $k_F$. This is expected because the entanglement entropy is proportional to the number of fermions that scatter with the impurity at the origin. Interestingly, the data exhibit a maximum in the region $\gamma^-\in[1.5,2]$. In the strong dissipation limit $\gamma^-\to\infty$ the entanglement entropy vanishes. This is a consequence of the quantum Zeno effect. The decay is as $S^{(\mathrm{steady})} \propto \ln(\gamma^-)/\gamma^-$ (see the inset of Fig.~\ref{fig3:ent-max}). Finally, it is interesting to compare with the result for the centered partition in Fig.~\ref{fig0:cartoon} (b), which is shown in Fig.~\ref{fig3a:ent-max}. A richer structure is observed. Indeed, the steady-state entropy exhibits two maxima, one at ``weak'' dissipation for $\gamma^-\approx 0.5$ and one in the ``strong'' dissipation regime for $\gamma^-\approx 10$. Notice also that the steady-state entropy is generically smaller than in Fig.~\ref{fig3:ent-max}.
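Scans such as those in Figs.~\ref{fig3:ent-max} and \ref{fig3a:ent-max} amount to evaluating Eq.~\eqref{eq:s-steady} on a grid of $\gamma^-$. A minimal sketch of ours follows; note that the absorption profile used here is a stand-in with the correct weak- and strong-dissipation limits, {\it not} Eq.~\eqref{eq:abs}.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def s_steady(gamma, kf, abs2, zX=1):
    """Steady-state entropy density S/ell, Eq. (s-steady)."""
    def H1(z):
        return (-z*np.log(z) - (1 - z)*np.log(1 - z)) if 0 < z < 1 else 0.0
    val, _ = quad(lambda k: H1(1 - zX*abs2(k, gamma)), -kf, kf)
    return val/(4*np.pi*zX)

# Stand-in |a(k; gamma)|^2: vanishes both for gamma -> 0 and, reflecting
# the Zeno effect, for gamma -> infinity.
abs2 = lambda k, g: g*np.abs(np.sin(k))/(1 + g)**2

gammas = np.logspace(-2, 2, 60)
s = np.array([s_steady(g, np.pi/3, abs2) for g in gammas])
print(gammas[np.argmax(s)])   # location of the entropy maximum
\end{verbatim}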
\section{Numerical benchmarks}
\label{sec:num}
We now provide numerical benchmarks for the results derived in section~\ref{sec:hydro-ent}. We discuss the moments $M_n$ (cf.~\eqref{eq:Mn}) in section~\ref{sec:num-a}. In section~\ref{sec:num-b} we focus on the entanglement entropies. Importantly, we discuss the interplay between entanglement and thermodynamic correlations by comparing the evolution of the von Neumann entropy and that of the logarithmic negativity for the two bipartitions in Fig.~\ref{fig0:cartoon} (a) and (b).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{traces_check}
\caption{ Moments of the fermionic correlator $M_n=\mathrm{Tr}(G^n)$ restricted to subsystem $A$ (bipartition in Fig.~\ref{fig0:cartoon} (a)). We show the rescaled moments $M_n/\ell$, with $\ell$ the size of $A$, plotted versus $v_\mathrm{max} t/\ell$. Here $v_\mathrm{max}$ is the maximum velocity. All the results are for $\gamma^-=1$. The two panels are for $n=1$ and $n=2$. Different lines denote different subsystem sizes $\ell$. The dashed-dotted line is the analytic result in the hydrodynamic limit. Sizeable finite-time and finite-size corrections are present. In the inset of (b) we show the deviation from the hydrodynamic result at $t=0$, $\delta M_2=M_2^{\mathrm{hydro}}-M_2$, as a function of $\ell$. Notice the logarithmic scale on the $x$-axis. }
\label{fig4:traces-check}
\end{center}
\end{figure}
\subsection{Moments of fermionic correlators}
\label{sec:num-a}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{traces_cent}
\caption{ Same as in Fig.~\ref{fig4:traces-check} for the interval $A'$ (centered partition in Fig.~\ref{fig0:cartoon} (b)). }
\label{fig5:traces-check}
\end{center}
\end{figure}
Our numerical results for $M_n$ are discussed in Fig.~\ref{fig4:traces-check}. In panels (a) and (b) we plot $M_1$ and $M_2$, respectively. We focus on subsystem $A$ (see Fig.~\ref{fig0:cartoon} (a)). The numerical data in the figure are obtained by using~\eqref{eq:G}. We consider the situation in which the system is initially prepared in a Fermi sea with $k_F=\pi/3$. Notice that $M_1$ is the number of fermions in subsystem $A$. In the absence of dissipation $M_1=\ell k_F/\pi$ at any time. As a consequence of the fermion losses the number of particles decreases with time. In the figure we report results for several values of $\ell$. Clearly, $M_1$ exhibits the qualitative behavior discussed in Fig.~\ref{fig1:traces-theo}. At short times $t\le \ell/v_{\mathrm{max}}$, $M_1$ decreases linearly, whereas for $t\to\infty$ it saturates. However, the data for finite $\ell$ exhibit sizeable deviations from the hydrodynamic-limit result, which is reported as a dashed-dotted line in Fig.~\ref{fig4:traces-check}. These deviations are expected, since the analytic result~\eqref{eq:Mn} holds only in the hydrodynamic limit $t,\ell\to\infty$ with their ratio fixed. Indeed, upon increasing $\ell$ the data approach~\eqref{eq:Mn}. Importantly, the fact that the initial state is a Fermi sea gives rise to logarithmic corrections. This will also happen for the entanglement entropies, as we will discuss in section~\ref{sec:num-b}. These corrections are visible for $M_2$ (see the inset in Fig.~\ref{fig4:traces-check} (b)). In the figure we plot the deviation $\delta M_2$ from the hydrodynamic result, which is defined as
\begin{equation}
\delta M_2:= M_2^\mathrm{hydro}-M_2.
\end{equation}
We consider the initial deviation at $t=0$. At $t=0$ one expects that in the limit $\ell\to\infty$, $M_2=\ell k_F/\pi$. The results in the inset of Fig.~\ref{fig4:traces-check} (b) suggest the logarithmic behavior
\begin{equation}
\label{eq:fig-1}
\delta M_2=a_2\ln(\ell)+\dots,
\end{equation}
with the dots denoting subleading terms, and $a_2$ a constant. The dashed-dotted line in the inset of Fig.~\ref{fig4:traces-check} (b) is obtained by fitting the constant $a_2$ in~\eqref{eq:fig-1}. The fit gives $a_2\approx 0.101$. To our knowledge there is no analytic determination of the constant $a_2$, although it should be possible by using standard techniques for free-fermion systems.
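Operationally, extracting $a_2$ is an elementary one-parameter linear regression in $\ln(\ell)$; schematically (with synthetic placeholder data in place of the measured deviations):

\begin{verbatim}
import numpy as np

# Fit delta M_2 = a2*ln(ell) + b, cf. Eq. (fig-1); the data are placeholders.
ells = np.array([10, 20, 40, 80, 160])
dM2 = 0.101*np.log(ells) + 0.05     # stand-in for the measured deviations
a2, b = np.polyfit(np.log(ells), dM2, 1)
print(a2)                           # -> 0.101 for the synthetic data above
\end{verbatim}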
Moreover, although the data in Fig.~\ref{fig4:traces-check} (b) suggest that such logarithmic terms survive at finite time, it is not clear a priori whether the constant $a_2$ remains the same. Finally, we should remark that the same logarithmic terms should be present for the centered partition in Fig.~\ref{fig0:cartoon} (b). Indeed, for $t=0$ the system is translationally invariant, and the moments $M_n$ do not depend on the position of the subsystem. We discuss numerical results for $M_2$ for the centered partition (Fig.~\ref{fig0:cartoon} (b)) in Fig.~\ref{fig5:traces-check}. As is clear from the figure, the qualitative behavior is the same as for the side bipartition (see Fig.~\ref{fig4:traces-check} (b)). Similar finite-size effects as in Fig.~\ref{fig4:traces-check} (b) are present. Upon approaching the hydrodynamic limit $t,\ell\to\infty$, the deviations from the hydrodynamic result (red continuous line in the figure) vanish.
\subsection{Entanglement entropies and logarithmic negativity}
\label{sec:num-b}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{ent}
\caption{ Entanglement entropy $S$ in the fermionic chain subjected to localized losses. We consider subsystem $A$ (bipartition in Fig.~\ref{fig0:cartoon} (a)). The figure shows the entropy density $S/\ell$ plotted versus $v_\mathrm{max}t/\ell$, with $\ell$ the size of $A$ and $v_\mathrm{max}$ the maximum velocity. All the results are for fixed loss rate $\gamma^-=1$ and $k_F=\pi/3$. We show results for several values of $\ell$, and the analytic result in the hydrodynamic limit (red continuous line in the figure). }
\label{fig6:ent}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{ent_v3}
\caption{ Same data as in Fig.~\ref{fig6:ent}, plotting $(S-1/3\ln(\ell))/\ell$, where $1/3\ln(\ell)$ is the initial entanglement entropy. On the $x$-axis $v_\mathrm{max}$ is the maximum velocity and $\ell$ is the size of $A$. Inset: The entanglement entropy at $t=0$ plotted versus $\ell$. Note the logarithmic scale on the $x$-axis. The dashed-dotted line is a fit to the CFT prediction $1/3\ln(\ell)+a$, with $a$ a fitting constant. }
\label{fig7:ent-subtr}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{renyi}
\caption{ Out-of-equilibrium dynamics of the R\'enyi entropy $S^{(2)}$ of subsystem $A$ (bipartition in Fig.~\ref{fig0:cartoon} (a)). We plot the subtracted entropy $(S^{(2)}-1/4\ln(\ell))/\ell$, with $1/4\ln(\ell)$ the Fermi sea entropy at $t=0$. Data are for $\gamma^-=1$ and $k_F=\pi/3$. On the $x$-axis $v_\mathrm{max}$ is the maximum velocity and $\ell$ is the size of $A$. In the inset we show the entropy at $t=0$ plotted versus $\ell$ to highlight the logarithmic increase. }
\label{fig8:renyi}
\end{center}
\end{figure}
Let us now discuss the out-of-equilibrium dynamics of the entanglement entropies.
We first focus on the entanglement entropy for subsystem $A$ next to the dissipation source (as in Fig.~\ref{fig0:cartoon} (a)). Our data are reported in Fig.~\ref{fig6:ent}. We restrict ourselves to fixed $\gamma^-=1$, plotting the entropy density $S/\ell$ versus the rescaled time $v_\mathrm{max}t/\ell$. We show data for $\ell\in[10,160]$. We also report the analytic result in the hydrodynamic limit (cf.~\eqref{eq:ent-hydro}). Clearly, the numerical data exhibit the expected linear growth for $t\le \ell/v_\mathrm{max}$, followed by a saturation at long times. Still, sizeable deviations from the analytic result in the hydrodynamic limit~\eqref{eq:ent-hydro} are visible, as expected at finite $\ell$ and finite time $t$. Upon approaching the hydrodynamic limit, however, the deviations from~\eqref{eq:ent-hydro} decrease. An important remark is that, since the initial Fermi sea is a critical state, one should expect nontrivial finite-size corrections to the linear entanglement entropy growth. For instance, at $t=0$ the entanglement entropies grow logarithmically with $\ell$ as in~\eqref{eq:cft-scaling}. In Fig.~\ref{fig7:ent-subtr} we subtract the CFT contribution by plotting $S-1/3\ln(\ell)$. The data are the same as in Fig.~\ref{fig6:ent}. As is clear from the figure, the subtracted data exhibit a better agreement with the hydrodynamic result. We perform a similar analysis for the R\'enyi entropies. In Fig.~\ref{fig8:renyi} we show numerical data for the second R\'enyi entropy $S^{(2)}$ plotted versus $v_\mathrm{max}t/\ell$. We only consider the bipartition in Fig.~\ref{fig0:cartoon} (a). The data are for $\gamma^-=1$ and the initial Fermi sea with $k_F=\pi/3$. As for the von Neumann entropy, we subtract the CFT contribution (cf.~\eqref{eq:cft-scaling} with $n=2$) that is present at $t=0$. In the figure we only show data for $v_{\mathrm{max}}t/\ell\lesssim 1$. The agreement with the analytic result in the hydrodynamic limit~\eqref{eq:ent-hydro} is satisfactory.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.9\textwidth]{neg_ent}
\caption{ Comparison between the logarithmic negativity ${\cal E}$ and the entanglement entropy $S$ for the two bipartitions in Fig.~\ref{fig0:cartoon} (a) and (b). Panels (a) and (b) show ${\cal E}/\ell$ and $S/\ell$ plotted versus $v_\mathrm{max}t/\ell$, respectively. Here $\ell$ is the size of $A$ and $A'$ (see Fig.~\ref{fig0:cartoon}) and $v_\mathrm{max}$ the maximum velocity. As is clear from (a), the negativity ${\cal E}$ of $A$ exhibits a linear increase with time, whereas that of $A'$ depends mildly on time. In contrast, the entanglement entropy of both $A$ and $A'$ increases linearly with time (see (b)). }
\label{fig9:neg}
\end{center}
\end{figure}
Finally, it is crucial to compare the dynamics of the entanglement entropy with that of the logarithmic negativity (see section~\ref{sec:obs}). As stressed in section~\ref{sec:obs}, the entanglement entropies are not proper entanglement measures in the presence of dissipation because the full system is in a mixed state. On the other hand, the fermionic negativity ${\cal E}$ (cf.~\eqref{eq:neg-def}) should be sensitive only to genuine quantum correlations. As anticipated in the introduction, genuine entanglement and statistical correlations are deeply intertwined, but it is possible to distinguish them by comparing the behavior of the von Neumann entropy and of the logarithmic negativity for the two bipartitions in Fig.~\ref{fig0:cartoon} (a) and (b).
Specifically, subsystem $A$ (see Fig.~\ref{fig0:cartoon} (a)) is entangled with its complement because the reflected and the transmitted fermions, which form entangled pairs, are shared between them. Oppositely, this is not the case for $A'$ because the transmitted and the reflected fermions are never shared. This scenario implies that the entanglement entropies of both $A$ and $A'$ exhibit a linear growth with time. On the other hand, only the logarithmic negativity of $A$ is expected to grow with time. This is demonstrated in Fig.~\ref{fig9:neg} (a) and (b). In Fig.~\ref{fig9:neg} (a) we plot the rescaled negativity ${\cal E}/\ell$ versus the rescaled time $v_\mathrm{max}t/\ell$, whereas in Fig.~\ref{fig9:neg} (b) we show the rescaled entanglement entropy. The data are for fixed $\gamma^-=1$ and $k_F=\pi/3$, and subsystem size $\ell=160$. In both panels we show results for the subsystems $A$ (see Fig.~\ref{fig0:cartoon} (a)) and $A'$ (see Fig.~\ref{fig0:cartoon} (b)). It is clear from the figure that both the negativity and the von Neumann entropy of $A$ increase linearly with time. For the von Neumann entropy we report the expected slope of the linear growth in the hydrodynamic limit (dashed-dotted line in Fig.~\ref{fig9:neg} (b)), which is in perfect agreement with the finite-size numerical results. Notice that at asymptotically long times the von Neumann entropy saturates (not shown in the figure), as already discussed in the previous sections. This saturation happens for the logarithmic negativity as well, as expected from the quasiparticle picture discussed above. This is shown explicitly in the inset in Fig.~\ref{fig9:neg} (a) for subsystem $A$ of length $\ell=20$. As in the main plot, we show ${\cal E}/\ell$ versus $v_\mathrm{max}t/\ell$. Let us now discuss the entanglement growth for the bipartition in Fig.~\ref{fig0:cartoon} (b). The negativity (see Fig.~\ref{fig9:neg} (a)) does not grow with time but remains almost constant, showing a small decreasing trend at long times. In contrast, the entanglement entropy exhibits a linear growth (see Fig.~\ref{fig9:neg} (b)), which, again, does not reflect entanglement production. The slope of the linear growth (dashed-dotted line) is in agreement with~\eqref{eq:ent-hydro}.
\section{Conclusions}
\label{sec:concl}
We investigated the interplay between entanglement and statistical correlations in a uniform Fermi sea subjected to localized losses. We focused on the hydrodynamic limit of long times and large subsystems, with their ratio fixed. In this regime the dynamics of the entanglement entropies can be understood analytically. We showed that the logarithmic negativity correctly diagnoses the production of genuine quantum entanglement, whereas the entanglement entropies are sensitive to both quantum and classical correlations. Let us now illustrate some interesting directions for future research. First, our results hold for the Fermi sea as initial state. It should be possible to generalize them to other situations, such as finite-temperature states, or inhomogeneous initial states, for instance the domain-wall state. One should expect the linear entanglement growth to persist. An interesting possibility is to consider the out-of-equilibrium dynamics starting from product states. In that case, even in the absence of losses, the entanglement entropy grows linearly with time due to the propagation of entangled pairs of quasiparticles. It would be interesting to understand how this scenario is modified by localized losses.
An interesting direction is to try to generalize the hydrodynamic framework to the logarithmic negativity, for which it should be possible to obtain a formula similar to~\eqref{eq:ent-hydro}. Interestingly, our results suggest that local dissipation generically induces robust entanglement production. An important direction is to try to check this scenario for other types of local dissipation. An interesting candidate is incoherent hopping~\cite{eisler2011crossover}. Unlike loss dissipation, for incoherent hopping the Liouvillian describing the dynamics of the density matrix is not quadratic. It would be interesting to understand whether the hydrodynamic approach outlined here still applies, at least in the weak dissipation limit. Finally, an interesting direction would be to understand the interplay between entanglement, local dissipation, and criticality~\cite{rossini2021coherent}.
\section{\label{sec:introduction}Introduction}
Imagine attempting to swim in a pool of viscous honey: it is not hard to anticipate that, because of the high fluid viscosity, the usual swimming strategy of imparting momentum to the surrounding fluid will not be effective. The world microorganisms inhabit is physically analogous to this situation \cite{purcell77}. As a result, microorganisms have evolved strategies which exploit the only physical force available to them -- namely fluid drag -- to propel themselves or generate net fluid transport. The success of these propulsion strategies is vital in many biological processes, including bacterial infection, spermatozoa locomotion and reproduction, and ciliary transport \cite{braybook}. Advances in micro- and nano-manufacturing technology have also allowed scientists to take inspiration from these locomotion strategies and design micropumps \cite{leoni08} and microswimmers \cite{dreyfus}. The fundamental physics of small-scale locomotion in simple (Newtonian) fluids is well understood \cite{brennen,childress,lauga2}. In contrast, and although most biological fluids are non-Newtonian, many basic questions remain unanswered regarding the mechanics of motility in complex fluids. Since they usually include biopolymers, most biological fluids of interest display rheological properties common to both fluids (they flow and dissipate energy) and solids (they can store energy). Examples include the airway mucus, which acts as a renewable and transportable barrier along the airways of the lungs to guard against inhaled particulates and toxic substances \cite{samet}, as well as cervical mucus, which is important for the survival and transport of sperm cells \cite{yudin}. The influence of viscoelasticity of the fluid on cell locomotion has been experimentally quantified by a number of studies \cite{shukla, katz1, katz2, katz3, rikmenspoel, ishijima, suarez}, including the change in the waveform, structure, and swimming path of spermatozoa in both synthetic polymer solutions and biological mucus \cite{fauci3}. Gastropod mucus is another common non-Newtonian biofluid, which is useful for adhesive locomotion, and its physical and rheological properties have been measured \cite{denny,ewoldt07,ewoldt08}. On the modeling side, different constitutive models have been employed to study locomotion in complex fluids (see the short review in Ref.~\cite{lauga1}). Among these models, the Oldroyd-B constitutive equation is the most popular, both because of its simplicity and because it can be derived exactly from kinetic theory by modeling the fluid as a dilute solution of elastic (polymeric) dumbbells \cite{oldroyd, bird1, bird2, johnson}. Recent quantitative studies have suggested that microorganisms swimming by propagating waves along their flagella have a smaller propulsion speed in a polymeric fluid than in a Newtonian fluid \cite{lauga1, fu}. Likewise, a smaller net flow is generated by the ciliary transport of a polymeric fluid than of a Newtonian fluid. Specifically, Lauga \cite{lauga1} considered the problem with a prescribed beating pattern along the flagellum, while Fu and Powers \cite{fu} prescribed the internal force distribution instead; both studies suggest that viscoelasticity tends to decrease the propulsion speed.
In a Newtonian fluid, Purcell's scallop theorem states that swimming and pumping in the absence of inertia can only be achieved by motions or body deformations which are not identical under a time-reversal symmetry (so-called ``non-reciprocal'' motion) \cite{purcell77}. This of course poses an interesting challenge in designing artificial swimmers and pumps in simple fluids, which has recently been addressed theoretically and experimentally (see the review in Ref.~\cite{lauga2}). The question we are addressing in this paper is the extent to which the scallop theorem holds in complex fluids. Because polymeric fluids display nonlinear rheological properties such as shear-dependence or normal-stress differences \cite{bird1, bird2}, in general reciprocal motions are effective in polymeric fluids \cite{lauga3}. New propulsion and transport methods can therefore be designed on small scales to specifically take advantage of the intrinsic nonlinearities of the fluid. The goal of this paper is to study such a system in the context of fluid pumping with a simplified geometrical setup where the pumping performance can be characterized analytically. For simple flow geometries, it is not obvious a priori whether a simple oscillatory forcing of a nonlinear fluid leads to a net (rectified) flow. For example, for all Oldroyd-like fluids, a sinusoidally-forced Couette flow leads to zero time-averaged flow \cite{bird1}. In previous work \cite{norman}, we considered a biologically-inspired geometric example of a semi-infinite flapper performing reciprocal (sinusoidal) motion in a viscoelastic (Oldroyd-B) fluid in the absence of inertia. We showed explicitly that the reciprocal motion generates a net force on the flapper occurring at second order in the flapping amplitude, and disappearing in the Newtonian limit as dictated by the scallop theorem. However, there was no time-averaged flow accompanying the net force generation at second order \cite{norman}. Here, we report on the discovery of a net fluid flow produced by the reciprocal flapping motion in an Oldroyd-B fluid. The net flow transport is seen to occur at fourth order in the flapping amplitude, and is due to normal-stress differences. The dependence of the pumping performance on the actuation and material parameters is characterized analytically, and the optimal pumping rate is determined numerically. Through this example, we therefore demonstrate explicitly the breakdown of the scallop theorem in complex fluids in the context of fluid pumping, and suggest the possibility of exploiting intrinsic viscoelastic properties of the medium for fluid transport on small scales. The geometric setup in this paper is motivated by the motion of cilia-like biological appendages. Cilia are short flagella beating collaboratively to produce locomotion or fluid transport \cite{gibbons,brennen}. For example, cilia cover the outer surface of microorganisms such as \textit{Paramecium} for self-propulsion. They are also present along our respiratory tracts to sweep up dirt and mucus, and along the oviduct to transport the ova. Our setup is also relevant to the rigid projections attached to the flagellum of \textit{Ochromonas}, known as mastigonemes, which protrude from the flagellum into the fluid \cite{brennen}. As waves propagate along the flagellum, the mastigonemes are flapped back-and-forth passively through the fluid, a process which can lead to a change in the direction of propulsion of the organism \cite{jahn, holwill, brennen2}.
Our study is related to the phenomenon known as steady (or ``acoustic'') streaming in the inertial realm \cite{faraday, rayleigh, schlichting, riley, chang, james, rosenblat, bohme, bagchi, frater, frater2, goldstein, chang2, chang3}, which has a history of almost two centuries after being first discovered by Faraday \cite{faraday}. Under oscillatory boundary conditions, as in the presence of an acoustic wave or the periodic actuation of a solid body in a fluid, migration of fluid particles occurs in an apparently purely oscillating flow, manifesting the presence of nonlinear inertial terms in the equation of motion. This phenomenon occurs in both Newtonian and non-Newtonian fluids \cite{chang, james, rosenblat, bohme, bagchi, frater, frater2, goldstein, chang2, chang3}. In particular, it was found that the elasticity of a polymeric fluid can lead to a reversal of the net flow direction \cite{chang, james, rosenblat, bohme}. As expected from the scallop theorem, no steady streaming phenomenon can occur in a Newtonian fluid in the absence of inertia. However, as will be shown in this paper, the nonlinear rheological properties of viscoelastic fluids alone can lead to steady streaming. In other words, we consider here a steady streaming motion arising purely from the viscoelastic effects of the fluid, ignoring any influence of inertia. Recently, polymeric solutions have been shown to be useful in constructing microfluidic devices such as flux stabilizers, flip-flops and rectifiers \cite{groisman1, groisman2}. By exploiting the nonlinear rheological properties of the fluid and geometrical asymmetries in the micro-channel, microfluidic memory and control have been demonstrated without the use of external electronics and interfaces, opening the possibility of more complex integrated microfluidic circuits and other medical applications \cite{groisman1}. In the setup we study here, we do not introduce any geometrical asymmetries and exploit solely the non-Newtonian rheological properties of the polymeric fluid for microscopic fluid transport. The structure of the paper is the following. In \S\ref{sec:formulation}, the flapping problem is formulated with the geometrical setup, governing equations, nondimensionalization and the boundary conditions. In \S\ref{sec:analysis}, we present the asymptotic calculations up to the fourth order (in flapping amplitude), where the time-averaged flow is obtained. We then characterize analytically the net flow in terms of the streamline pattern, directionality and vorticity distribution (\S\ref{sec:results}). Next, we study the optimization of the flow with respect to the Deborah number (\S\ref{sec:optimization}). Our results are finally discussed in \S\ref{sec:discussion}.
\section{\label{sec:formulation}Formulation}
\subsection{Geometrical setup}
In this paper, we consider a semi-infinite two-dimensional plane flapping sinusoidally with small amplitude in a viscoelastic fluid. The average position of the flapper is situated perpendicularly to a flat wall with its hinge point fixed in space (see Fig.~\ref{fig:setup}). The flapper is therefore able to perform reciprocal motion with only one degree of freedom by flapping back-and-forth. Such a setup is directly relevant to the unsteady motion of cilia-like biological appendages (see \S\ref{sec:introduction}). It is convenient to adopt a planar polar coordinate system for this geometrical setup.
The instantaneous position of the flapper is described by the azimuthal angle $\theta(t) = \pi/2 + \epsilon \Theta(t)$, where $\Theta(t)$ is an order one oscillatory function of time and $\epsilon$ is a parameter characterizing the amplitude of the flapping motion. The polar vectors $\bf{e}_r(\theta)$ and $\bf{e}_\theta(\theta)$ are functions of the azimuthal angle, and the velocity field $\u$ is expressed as $\u = u_r \e_r + u_\theta \e_\theta$. In this work, we derive the velocity field in the domain ($0 \leq \theta \leq \pi/2$) in the asymptotic limit of small flapping amplitude, {\it i.e.} $\epsilon \ll 1$; the time-averaged flow in the domain ($\pi/2 \leq \theta \leq \pi$) can then be deduced by symmetry.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{fig1.eps}
\end{center}
\caption{\label{fig:setup}Geometrical setup and notations for the flapping calculation. A semi-infinite plane flaps sinusoidally with small amplitude $\epsilon$ around an average position at right angle with an infinite wall.}
\end{figure}
\subsection{Governing equations}
We assume the flow to be incompressible and the Reynolds number of the fluid motion to be small, {\it i.e.} we neglect any inertial effects. Denoting the pressure field as $p$ and the deviatoric stress tensor as $\btau$, the continuity equation and Cauchy's equation of motion are respectively
\begin{align}
\boldsymbol{\nabla} \cdot \u &=0,\\
\boldsymbol{\nabla}p&=\boldsymbol{\nabla} \cdot \btau.
\end{align}
We also require constitutive equations, which relate stresses and kinematics of the flow, to close the system of equations. For polymeric fluids, the deviatoric stress may be decomposed into two components, $\btau = \btau^s+\btau^p$, where $\btau^s$ is the Newtonian contribution from the solvent and $\btau^p$ is the polymeric stress contribution. For the Newtonian contribution, the constitutive equation is simply given by $\btau^s = \eta_s \gamd$, where $\eta_s$ is the solvent contribution to the viscosity and $\gamd = \boldsymbol{\nabla} \u + {}^t \boldsymbol{\nabla}\u$. For the polymeric contribution, many models have been proposed to relate the polymeric stress to the kinematics of the flow \cite{oldroyd, bird1, bird2, johnson}. We consider here the classical Oldroyd-B model, where the polymeric stress, $\btau^p$, satisfies the upper-convected Maxwell equation
\begin{align}\label{eq:upperMaxwell}
\btau^p + \lambda \stackrel{\triangledown}{\btau^p} = \eta_p \gamd,
\end{align}
where $\eta_p$ is the polymer contribution to the viscosity and $\lambda$ is the polymeric relaxation time. The upper-convected derivative for a tensor $\bf{A}$ is defined as
\begin{align}
\stackrel{\triangledown}{\bf{A}} = \frac{\p \bf{A}}{\p t} +\u \cdot \boldsymbol{\nabla} \bf{A} -\left({}^t \boldsymbol{\nabla} \u \cdot \bf{A}+\bf{A} \cdot \boldsymbol{\nabla} \u \right),
\end{align}
and represents the rate of change of $\bf{A}$ in a frame translating and deforming with the fluid. From Eq.~(\ref{eq:upperMaxwell}), we can obtain the Oldroyd-B constitutive equation for the total stress, $\btau$, as given by
\begin{align}
\btau + \lambda_1 \dbtau = \eta \left( \gamd + \lambda_2 \dgamd \right),
\end{align}
where $\eta=\eta_s+\eta_p$, $\lambda_1=\lambda$, and $\lambda_2 = \eta_s \lambda / \eta$. Here, $\lambda_1$ and $\lambda_2$ are the relaxation and retardation times of the fluid, respectively. The relaxation time is the typical decay rate of stress when the fluid is at rest, and the retardation time measures the decay rate of residual rate of strain when the fluid is stress-free \cite{bird1, bird2}. It can be noted that $\lambda_2 < \lambda_1$, and both are zero for a Newtonian fluid.
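Before proceeding, it is instructive to see the nonlinearity embedded in the upper-convected derivative at work in the simplest flow. The short symbolic sketch below (ours, purely illustrative) solves the steady upper-convected Maxwell equation in simple shear, $\u=(\dot\gamma y, 0)$, and recovers the shear stress $\eta_p\dot\gamma$ together with a nonzero first normal-stress difference $N_1=2\eta_p\lambda\dot\gamma^2$; such normal-stress differences are precisely the nonlinear ingredient exploited for pumping in this paper.

\begin{verbatim}
import sympy as sp

gdot, lam, eta_p = sp.symbols('gammadot lambda_ eta_p', positive=True)
txx, txy, tyy = sp.symbols('tau_xx tau_xy tau_yy')

# Simple shear u = (gdot*y, 0); with L_ij = d u_i / d x_j, the
# upper-convected derivative of a steady homogeneous stress reduces
# to -(L*tau + tau*L^T).
L = sp.Matrix([[0, gdot], [0, 0]])
tau = sp.Matrix([[txx, txy], [txy, tyy]])
gam = L + L.T                                  # rate-of-strain tensor

eqs = tau - lam*(L*tau + tau*L.T) - eta_p*gam
sol = sp.solve([eqs[0, 0], eqs[0, 1], eqs[1, 1]], [txx, txy, tyy])
print(sol[txy])                                # -> eta_p*gammadot
print(sp.simplify(sol[txx] - sol[tyy]))        # -> 2*eta_p*lambda_*gammadot**2
\end{verbatim}

Adding the Newtonian solvent stress $\eta_s\dot\gamma$ leaves the normal-stress difference unchanged, so that in the notation above $N_1 = 2\eta(\lambda_1-\lambda_2)\dot\gamma^2$.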
\subsection{Nondimensionalization}
Periodic flapping motion with angular frequency $\omega$ is considered in this paper. Therefore, we nondimensionalize shear rates by $\omega$ and stresses by $\eta \omega$. Lengths are nondimensionalized by some arbitrary length scale along the flapper. Under these nondimensionalizations, the dimensionless equations are given by
\begin{subequations}\label{eq:govern}
\begin{align}
\boldsymbol{\nabla} \cdot \u &=0,\\
\boldsymbol{\nabla}p&=\boldsymbol{\nabla} \cdot \btau, \label{eq:mecheq}\\
\btau + \De_1 \dbtau &= \gamd + \De_2 \dgamd,
\end{align}
\end{subequations}
where $\De_1=\lambda_1 \omega$ and $\De_2 = \lambda_2 \omega$ are the two Deborah numbers; for convenience, we keep the same symbols for the dimensionless variables.
\subsection{Boundary conditions}
The boundary conditions in this problem are simply stated: on the flat wall ($\theta = 0$), we have the no-slip and no-penetration boundary conditions. In vector notation, we have therefore
\begin{align}\label{eq:bc1}
\u (r, \theta = 0) = \bf{0}
\end{align}
along the flat wall. Along the flapper, we also have the no-slip condition, $u_r (r, \theta=\pi/2+\epsilon \Theta(t))=0$. The other boundary condition imposed on the fluid along the flapper is given by the rotation of the flapper, $u_\theta (r, \theta=\pi/2+\epsilon \Theta(t))=r \Omega(t)$, where $\Omega(t)=\epsilon \dot{\Theta}$. In vector notation, we then have
\begin{align}\label{eq:bc2}
\u(r, \theta = \pi/2+\epsilon \Theta (t)) = r \Omega(t) \e_\theta.
\end{align}
\section{\label{sec:analysis}Analysis}
Noting that a two-dimensional setup is considered, the continuity equation, $\boldsymbol{\nabla} \cdot \u =0$, is satisfied by introducing the streamfunction $\Psi (r, \theta)$ such that $u_r=\left(\p \Psi / \p \theta \right)/r$ and $u_\theta = - \p \Psi / \p r$. The instantaneous position of the flapper is described by the function $\theta = \pi/2+\epsilon \Theta(t)$, and we consider here a simple reciprocal flapping motion with $\Theta(t) = \cos t$. Since small amplitude flapping motion ($\epsilon \ll 1$) is considered, we will perform the calculations perturbatively in the flapping amplitude and seek perturbation expansions of the form
\begin{align}\label{eq:perturb}
\{\u, \Psi, \btau, p, \boldsymbol{\sigma} \} & = \epsilon \{\u_1, \Psi_1, \btau_1, p_1, \boldsymbol{\sigma}_1 \} + \epsilon^2 \{\u_2, \Psi_2, \btau_2, p_2, \boldsymbol{\sigma}_2 \} + \ldots,
\end{align}
where $\boldsymbol{\sigma} = -p \bf{1} +\btau$ is the total stress tensor and all the variables in Eq.~(\ref{eq:perturb}) are defined in the time-averaged domain $0 \leq \theta \leq \pi/2$. Since a domain-perturbation expansion is performed, careful attention has to be paid to the distinction between the instantaneous and average geometries.
Recall that the polar vectors $\e_r(\theta(t))$ and $\e_\theta (\theta(t))$ are functions of the azimuthal angle, which oscillates in time. To distinguish the average geometry, we denote $\langle \textbf{t} \rangle = \e_r (\pi/2)$ and $\langle \textbf{n} \rangle = \e_\theta (\pi/2)$ as the average directions along and perpendicular to the flapper axis (see Fig.~\ref{fig:setup}). In this paper, $\langle \dots \rangle$ denotes time-averaging. In addition, we employ Fourier notation to facilitate the calculations. In Fourier notation, the actuation becomes $\Theta = {\rm Re} \{e^{i t} \}$ and $\dot{\Theta} = {\rm Re} \{i e^{i t} \}$. Because of the quadratic nonlinearities arising from the boundary conditions and the constitutive modeling, the velocity field can be Fourier decomposed into the anticipated form
\begin{subequations}
\begin{align}
\u_1 &= {\rm Re} \{\tilde{\u}_1 e^{it} \},\label{eq:fourieru1}\\
\u_2 &= {\rm Re} \{\tilde{\u}^{(0)}_2+\tilde{\u}^{(2)}_2 e^{2it} \}, \\
\u_3 &= {\rm Re} \{\tilde{\u}^{(1)}_3 e^{it}+\tilde{\u}^{(3)}_3 e^{3it} \},\\
\u_4 &= {\rm Re} \{\tilde{\u}^{(0)}_4+\tilde{\u}^{(2)}_4 e^{2it}+\tilde{\u}^{(4)}_4 e^{4it} \},
\end{align}
\end{subequations}
with similar decompositions and notation for all other vector and scalar fields. We now proceed to analyze Eq.~(\ref{eq:govern}) order by order, up to the fourth order, where the time-averaged fluid flow occurs. The boundary conditions, Eqs.~(\ref{eq:bc1}) and (\ref{eq:bc2}), are also expanded order by order about the average flapper position using Taylor expansions.
\subsection{First-order solution}
\subsubsection{Governing equation}
The first-order Oldroyd-B relation is given by
\begin{align}\label{eq:constitutive1}
\btau_1+\De_1 \frac{\p \btau_1}{\p t} = \gamd_1+\De_2 \frac{\p \gamd_1}{\p t},
\end{align}
which in Fourier space becomes
\begin{align}
\tilde\btau_1 = \frac{1+i\De_2}{1+i\De_1}\tilde\gamd_1. \label{eq:constitutiveF1}
\end{align}
Taking the curl of Eq.~(\ref{eq:mecheq}), we note that $\boldsymbol{\nabla} \times \boldsymbol{\nabla} \cdot \btau =0$. Therefore, we take the divergence and then the curl of Eq.~(\ref{eq:constitutiveF1}) to eliminate the stress and obtain the equation for the first-order streamfunction
\begin{align}
\boldsymbol{\nabla}^4 \tilde\Psi_1&=0.
\end{align}
\subsubsection{Boundary conditions}
At $\theta = \pi/2$, the boundary condition at this order is given by
\begin{align}
\u_1 = r \dot{\Theta} \langle \n \rangle \label{eq:order1bc},
\end{align}
which becomes
\begin{align}
\tilde\u_1 &= i r \langle \n \rangle,
\end{align}
upon Fourier transformation. We also have the no-slip and no-penetration boundary conditions at $\theta=0$.
\subsubsection{Solution}
The solution satisfying the above equation and boundary conditions is given by
\begin{subequations}
\begin{align}
\tilde\Psi_1&=\frac{i r^2}{4}\left( \cos2\theta -1\right), \\
\tilde{u}_{1r}&=-\frac{i r}{2}\sin2\theta, \\
\tilde{u}_{1\theta}&=\frac{i r}{2}\left( 1-\cos2\theta \right).
\end{align}
\end{subequations}
\subsection{Second-order solution}
\subsubsection{Governing equation}
The second-order Oldroyd-B relation is given by
\begin{align}\label{eq:constitutive2}
&\left( 1+\De_1 \frac{\p }{\p t}\right) \btau_2 -\left( 1+\De_2 \frac{\p }{\p t}\right) \gamd_2 \notag \\
&= \ \De_2 \left[\u_1 \cdot \boldsymbol{\nabla} \gamd_1- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \gamd_1+\gamd_1 \cdot \boldsymbol{\nabla} \u_1 \right) \right] \notag \\
& - \De_1 \left[\u_1 \cdot \boldsymbol{\nabla} \btau_1- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \btau_1+\btau_1 \cdot \boldsymbol{\nabla} \u_1 \right) \right].
\end{align}
Fourier transforming Eq.~(\ref{eq:constitutive2}) and using Eq.~(\ref{eq:constitutiveF1}), we obtain the two harmonics as
\begin{align}\label{eq:2ndorder2}
&\left(1+2i \De_1\right) \tilde\btau^{(2)}_2-\left( 1+2i\De_2\right) \tilde\gamd^{(2)}_2 \notag \\
&= \frac{1}{2} \frac{\De_2-\De_1}{1+i \De_1}\left[\tilde\u_1 \cdot \boldsymbol{\nabla} \tilde\gamd_1- \left(^{t}\boldsymbol{\nabla} \tilde\u_1 \cdot \tilde\gamd_1 +\tilde\gamd_1 \cdot \boldsymbol{\nabla} \tilde\u_1 \right) \right],
\end{align}
and
\begin{align}\label{eq:2ndorder0}
&\tilde\btau^{(0)}_2-\tilde\gamd^{(0)}_2 \notag \\
&= \frac{1}{2} \frac{\De_2-\De_1}{1+i \De_1} \left[\tilde\u_1^{*} \cdot \boldsymbol{\nabla} \tilde\gamd_1- \left(^{t}\boldsymbol{\nabla} \tilde\u_1^{*} \cdot \tilde\gamd_1 +\tilde\gamd_1 \cdot \boldsymbol{\nabla} \tilde\u_1^{*} \right) \right],
\end{align}
where the starred variables denote complex conjugates in this paper. Finally, taking the divergence and then the curl of both Eq.~(\ref{eq:2ndorder2}) and Eq.~(\ref{eq:2ndorder0}), and using the knowledge of the first-order solution, we obtain the equations for the second-order streamfunctions as simply
\begin{subequations}
\begin{align}
\boldsymbol{\nabla}^4 \tilde\Psi_2^{(2)} =0, \\
\boldsymbol{\nabla}^4 \tilde\Psi_2^{(0)} =0.
\end{align}
\end{subequations}
\subsubsection{Boundary conditions}
The boundary condition at this order is given by
\begin{align}
\u_2 = - \Theta \frac{\p \u_1}{\p \theta} -r \Theta \dot{\Theta} \langle \t \rangle,
\end{align}
when evaluated at $\theta = \pi/2$. In Fourier notation and with the first-order solution, the boundary conditions for the second-order average flow and the second harmonic read
\begin{subequations}
\begin{align}
\tilde\u_2^{(0)} &=0, \\
\tilde\u_2^{(2)} &= -\frac{ir}{2} \langle \t \rangle.
\end{align}
\end{subequations}
In addition, the no-slip and no-penetration boundary conditions are imposed at $\theta=0$.
\subsubsection{Solution}
The solution satisfying the second-order equation and the boundary conditions is given by
\begin{subequations}
\begin{align}
\tilde\Psi_2^{(0)}&=0,\\
\tilde\Psi_2^{(2)} &= \frac{ir^2}{4} \left(\frac{1}{2} \sin 2\theta-\frac{\pi}{4} \cos2\theta-\theta+\frac{\pi}{4} \right),\\
\tilde{u}^{(2)}_{2r} &= \frac{ir}{4} \left( \cos2\theta+\frac{\pi}{2} \sin2\theta -1\right),\\
\tilde{u}^{(2)}_{2\theta} &= -\frac{ir}{4} \left( \sin2\theta-\frac{\pi}{2} \cos2\theta-2\theta+\frac{\pi}{2}\right).
\end{align}
\end{subequations}
As anticipated, there is no time-averaged flow at second order, and we proceed with the calculation at higher orders.
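These low-order solutions can be verified symbolically; the following sympy sketch (ours, for verification only) checks that $\tilde\Psi_1$ and $\tilde\Psi_2^{(2)}$ satisfy the biharmonic equation in polar coordinates, and that the first-order boundary conditions are met.

\begin{verbatim}
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

def lap(f):
    # Polar Laplacian of a scalar field f(r, theta).
    return sp.diff(f, r, 2) + sp.diff(f, r)/r + sp.diff(f, th, 2)/r**2

psi1 = sp.I*r**2/4*(sp.cos(2*th) - 1)
psi2 = sp.I*r**2/4*(sp.sin(2*th)/2 - sp.pi*sp.cos(2*th)/4 - th + sp.pi/4)

print(sp.simplify(lap(lap(psi1))), sp.simplify(lap(lap(psi2))))  # -> 0, 0

# First-order velocities and boundary values at theta = pi/2 and theta = 0.
u1r, u1t = sp.diff(psi1, th)/r, -sp.diff(psi1, r)
print(sp.simplify(u1r.subs(th, sp.pi/2)))   # -> 0   (no-slip along flapper)
print(sp.simplify(u1t.subs(th, sp.pi/2)))   # -> I*r (rotation of the flapper)
print(u1r.subs(th, 0), u1t.subs(th, 0))     # -> 0, 0 (conditions on the wall)
\end{verbatim}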
\subsection{Third-order solution}
\subsubsection{Governing equation}
The third-order Oldroyd-B relation is given by
\begin{align}\label{eq:constitutive3}
&\left( 1+\De_1 \frac{\p }{\p t}\right) \btau_3 -\left( 1+\De_2 \frac{\p }{\p t}\right) \gamd_3 \notag\\
&= \De_2 \left[\u_1 \cdot \boldsymbol{\nabla} \gamd_2- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \gamd_2+\gamd_2 \cdot \boldsymbol{\nabla} \u_1 \right) \right] \notag \\
& - \De_1 \left[\u_2 \cdot \boldsymbol{\nabla} \btau_1- \left(^{t} \boldsymbol{\nabla} \u_2 \cdot \btau_1+\btau_1 \cdot \boldsymbol{\nabla} \u_2 \right) \right] \notag \\
& + \De_2 \left[\u_2 \cdot \boldsymbol{\nabla} \gamd_1- \left(^{t} \boldsymbol{\nabla} \u_2 \cdot \gamd_1+\gamd_1 \cdot \boldsymbol{\nabla} \u_2 \right) \right] \notag \\
& - \De_1 \left[\u_1 \cdot \boldsymbol{\nabla} \btau_2- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \btau_2+\btau_2 \cdot \boldsymbol{\nabla} \u_1 \right) \right] .
\end{align}
Aiming at obtaining the average flow at $O(\epsilon^4)$, we only need to calculate the first harmonic at $O(\epsilon^3)$, since the third harmonic only enters the oscillatory part at $O(\epsilon^4)$ (see the fourth-order calculations for details). Therefore, upon Fourier transform, we obtain the first harmonic component of Eq.~(\ref{eq:constitutive3}) as
\begin{align} \label{eq:constitutiveF3}
&\left(1+i \De_1\right) \tilde\btau^{(1)}_3-\left( 1+i\De_2\right) \tilde\gamd^{(1)}_3 \notag \\
&= \frac{1}{2} \frac{\De_2-\De_1}{1+2i \De_1}\left[\tilde\u_1^{*} \cdot \boldsymbol{\nabla} \tilde\gamd_2^{(2)}- \left(^{t}\boldsymbol{\nabla} \tilde\u_1^{*} \cdot \tilde\gamd_2^{(2)} +\tilde\gamd_2^{(2)} \cdot \boldsymbol{\nabla} \tilde\u_1^{*} \right) \right] \notag \\
&+ \frac{1}{2} \frac{\De_2-\De_1}{1-i \De_1}\left[\tilde\u_2^{(2)} \cdot \boldsymbol{\nabla} \tilde\gamd_1^{*}- \left(^{t}\boldsymbol{\nabla} \tilde\u_2^{(2)} \cdot \tilde\gamd_1^{*} +\tilde\gamd_1^{*} \cdot \boldsymbol{\nabla} \tilde\u_2^{(2)} \right) \right] ,
\end{align}
where we have used the constitutive relations given by Eqs.~(\ref{eq:constitutiveF1}) and (\ref{eq:2ndorder2}). Taking the divergence and then the curl of Eq.~(\ref{eq:constitutiveF3}), we obtain the equation for the first harmonic of the third-order streamfunction
\begin{align}
\boldsymbol{\nabla}^4 \tilde\Psi^{(1)}_3 = \frac{3i \De_1 (\De_1-\De_2)\, \cos2\theta}{r^2(1-i\De_1)(1+2i\De_1)(1+i\De_2)}\cdot
\end{align}
\subsubsection{Boundary condition}
The boundary condition at this order is given by
\begin{align}
\u_3 = -\Theta \frac{\p \u_2}{\p \theta} - \frac{1}{2} \Theta^2 \frac{\p^2 \u_1}{\p \theta^2}-\frac{1}{2} r \dot{\Theta} \Theta^2 \langle \n \rangle,
\end{align}
where the right-hand side is evaluated at $\theta = \pi/2$. The boundary condition for the first harmonic component, in Fourier space, is then given by
\begin{align}
\tilde\u_3^{(1)}=\frac{ir\pi}{8} \langle \t \rangle -\frac{ir}{4} \langle \n \rangle.
\end{align}
At $\theta=0$, we also have the no-slip and no-penetration boundary conditions.
\subsubsection{Solution}
The solution at this order has the form
\begin{widetext}
\begin{subequations}
\begin{align}
\tilde\Psi_3^{(1)} = \ &\frac{r^{2}}{2} \Biggl[ \left(\frac{\pi^{2}}{8}\alpha+\frac{\pi^{2}-4}{32}i\right)\cos2\theta -\frac{\pi}{4}\left(\alpha+\frac{i}{4}\right)\sin2\theta +\frac{\pi}{2}\left(\alpha+\frac{i}{4}\right)\theta +\left(\frac{4-\pi^{2}}{32}i -\frac{\pi^{2}}{8}\alpha\right) \notag \\
&+\alpha\theta\sin2\theta \Biggr],\\
\tilde{u}^{(1)}_{3r} = \ &\frac{r}{2} \Biggl[ \left(\frac{4-\pi^{2}}{16}i -\frac{\pi^{2}}{4}\alpha\right)\sin2\theta-\frac{\pi}{2}\left(\alpha+\frac{i}{4}\right)\cos2\theta+\frac{\pi}{2}\left(\alpha+\frac{i}{4}\right) +\alpha(2\theta \cos2\theta+\sin2\theta)\Biggr], \\
\tilde{u}^{(1)}_{3\theta}= \ &-r \Biggl[ \left(\frac{\pi^{2}}{8}\alpha+\frac{\pi^{2}-4}{32}i\right)\cos2\theta -\frac{\pi}{4}\left(\alpha+\frac{i}{4}\right)\sin2\theta +\frac{\pi}{2}\left(\alpha+\frac{i}{4}\right)\theta +\left(\frac{4-\pi^{2}}{32}i -\frac{\pi^{2}}{8}\alpha\right) \notag \\
&+\alpha\theta\sin2\theta \Biggr],
\end{align}
\end{subequations}
\end{widetext}
where we have defined the constant
\begin{align}
\alpha=\frac{3i\De_1(\De_2-\De_1)}{8(1-i\De_1)(1+2i\De_1)(1+i\De_2)}\cdot
\end{align}
\subsection{Fourth-order solution}
\subsubsection{Governing equation}
Finally, the fourth-order Oldroyd-B relation is given by
\begin{align}\label{eq:constitutive4}
&\left( 1+\De_1 \frac{\p }{\p t}\right) \btau_4 -\left( 1+\De_2 \frac{\p }{\p t}\right) \gamd_4 \notag\\
&= \De_2 \left[\u_1 \cdot \boldsymbol{\nabla} \gamd_3- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \gamd_3+\gamd_3 \cdot \boldsymbol{\nabla} \u_1 \right) \right] \notag \\
& - \De_1 \left[\u_1 \cdot \boldsymbol{\nabla} \btau_3- \left(^{t} \boldsymbol{\nabla} \u_1 \cdot \btau_3+\btau_3 \cdot \boldsymbol{\nabla} \u_1 \right) \right] \notag \\
& + \De_2 \left[\u_2 \cdot \boldsymbol{\nabla} \gamd_2- \left(^{t} \boldsymbol{\nabla} \u_2 \cdot \gamd_2+\gamd_2 \cdot \boldsymbol{\nabla} \u_2 \right) \right] \notag \\
& - \De_1 \left[\u_2 \cdot \boldsymbol{\nabla} \btau_2- \left(^{t} \boldsymbol{\nabla} \u_2 \cdot \btau_2+\btau_2 \cdot \boldsymbol{\nabla} \u_2 \right) \right] \notag \\
& + \De_2 \left[\u_3 \cdot \boldsymbol{\nabla} \gamd_1- \left(^{t} \boldsymbol{\nabla} \u_3 \cdot \gamd_1+\gamd_1 \cdot \boldsymbol{\nabla} \u_3 \right) \right] \notag \\
& - \De_1 \left[\u_3 \cdot \boldsymbol{\nabla} \btau_1- \left(^{t} \boldsymbol{\nabla} \u_3 \cdot \btau_1+\btau_1 \cdot \boldsymbol{\nabla} \u_3 \right) \right] .
\end{align}
Since we wish to characterize the time-averaged flow, $\tilde\u_4^{(0)}$, we calculate the time average of Eq.~\eqref{eq:constitutive4} and obtain
\begin{align} \label{eq:constitutiveF4}
&\tilde\btau^{(0)}_4-\tilde\gamd^{(0)}_4 \notag \\
&= \frac{1}{2} \De_2 \left[ \tilde\u^{*}_1\cdot \boldsymbol{\nabla} \tilde\gamd^{(1)}_3 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{*}_1 \cdot \tilde\gamd^{(1)}_3+\tilde\gamd^{(1)}_3 \cdot \boldsymbol{\nabla} \tilde\u^{*}_1 \right) \right] \notag\\
&-\frac{1}{2} \De_1 \left[ \tilde\u^{*}_1\cdot \boldsymbol{\nabla} \tilde\btau^{(1)}_3 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{*}_1 \cdot \tilde\btau^{(1)}_3+\tilde\btau^{(1)}_3 \cdot \boldsymbol{\nabla} \tilde\u^{*}_1 \right) \right] \notag\\
&+\frac{1}{2} \De_2 \left[ \tilde\u^{(2)*}_2\cdot \boldsymbol{\nabla} \tilde\gamd^{(2)}_2 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{(2)*}_2 \cdot \tilde\gamd^{(2)}_2+\tilde\gamd^{(2)}_2 \cdot \boldsymbol{\nabla} \tilde\u^{(2)*}_2 \right) \right] \notag\\
&-\frac{1}{2} \De_1 \left[ \tilde\u^{(2)*}_2\cdot \boldsymbol{\nabla} \tilde\btau^{(2)}_2 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{(2)*}_2 \cdot \tilde\btau^{(2)}_2+\tilde\btau^{(2)}_2 \cdot \boldsymbol{\nabla} \tilde\u^{(2)*}_2 \right) \right] \notag\\
&+\frac{1}{2} \De_2 \left[ \tilde\u^{(1)*}_3\cdot \boldsymbol{\nabla} \tilde\gamd_1 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{(1)*}_3 \cdot \tilde\gamd_1+\tilde\gamd_1 \cdot \boldsymbol{\nabla} \tilde\u^{(1)*}_3 \right) \right] \notag\\
&-\frac{1}{2} \De_1 \left[ \tilde\u^{(1)*}_3\cdot \boldsymbol{\nabla} \tilde\btau_1 - \left( {}^t \boldsymbol{\nabla} \tilde\u^{(1)*}_3 \cdot \tilde\btau_1+\tilde\btau_1 \cdot \boldsymbol{\nabla} \tilde\u^{(1)*}_3 \right) \right].
\end{align}
As done previously, we take the divergence and then the curl of Eq.~(\ref{eq:constitutiveF4}), and invoke the lower-order constitutive relations Eqs.~(\ref{eq:constitutiveF1}),~(\ref{eq:2ndorder2}) and (\ref{eq:constitutiveF3}), to obtain the equation for the streamfunction of the average flow
\begin{align}
\boldsymbol{\nabla}^4 \tilde\Psi^{(0)}_4 = \beta \frac{A_4 \sin 2\theta+ B_4 \cos 2\theta + C_4 \sin 4\theta }{r^2},
\end{align}
where
\begin{align}
\beta &= \frac{\De_2-\De_1}{2(1+\De_1^2)(2\De_1-i)},\\
A_4 &=8 \alpha-\De_1+8 i \alpha \De_1+4 i \De_1^2+16 \alpha \De_1^2,\\
B_4 &= 2 \pi \left(1+i\De_1+2\De_1^2\right)\left(\alpha+\alpha^{*}\right),\\
C_4 &= -2 \bigl( 8\alpha+8i\alpha\De_1+3i\De_1^2+16\alpha\De_1^2 \notag \\
& \ \ \ +4\alpha^{*}+4i\De_1\alpha^{*}+8\De_1^2 \alpha^{*} \bigr).
\end{align}
\subsubsection{Boundary conditions}
At $\theta = \pi/2$, the boundary condition at this order is written as
\begin{align}
\u_4 = -\Theta \frac{\p \u_3}{\p \theta}-\frac{1}{2} \Theta^2 \frac{\p^2 \u_2}{\p \theta^2}-\frac{1}{6} \Theta^3 \frac{\p^3 \u_1}{\p \theta^3}+\frac{1}{6} r \dot{\Theta} \Theta^3 \langle \t \rangle,
\end{align}
which we then Fourier-transform to obtain the boundary condition for $\tilde\u^{(0)}_4$.
In addition, since we are only interested in the time-averaged flow, \textit{i.e.}, real part of the solution ${\rm Re}\{\tilde\u^{(0)}_4\}$, the boundary condition at $\theta=\pi/2$ can be simplified as \begin{align} {\rm Re}\{\tilde\u^{(0)}_4\} = \frac{r\left(8-\pi^2\right)}{8} {\rm Re}\{\alpha\} \langle \t \rangle, \end{align} where \begin{align} {\rm Re}\{\alpha\} = \frac{-3\De_1(\De_1-\De_2)(\De_1+\De_2+2\De^2_1\De_2)}{8(1+\De^2_1)(1+4\De^2_1)(1+\De^2_2)}\cdot \end{align} Finally, as usual, we have the no-slip and no-penetration boundary conditions at $\theta=0$. \subsubsection{Solution} Solving the inhomogeneous biharmonic equation with the boundary conditions above, we obtain our main result, namely the analytical formula for the time-averaged flow as \begin{widetext} \begin{subequations}\label{eq:order4sol} \begin{align} {\rm Re}\left\{\tilde{\Psi}^{(0)}_{4}\right\} =& \frac{r^2 \De_1 \left(\De_1-\De_2\right)\left(2 \De_2 \De_1^2+\De_1+\De_2\right)}{512 \left(\De_1^2+1\right){}^2 \left(4 \De_1^2+1\right) \left(\De_2^2+1\right)} \notag\\ &\Bigl[32 \pi -3 \pi ^3+8 \pi \De_1 \De_2-3 \pi ^3 \De_1 \De_2+24 \pi \De_1^2+4 \theta \bigl(-20+3 \pi ^2 \notag \\ &-12 \De_1^2-8 \De_1 \De_2+3 \pi ^2 \De_1 \De_2 \bigr) +\sin 4\theta \left(-4-12 \De_1^2+8 \De_1 \De_2\right) \notag \\ &+\cos 2\theta \left(-32 \pi +3 \pi ^3+48 \theta -24 \pi \De_1^2+48 \theta \De_1^2-8 \pi \De_1 \De_2+3 \pi ^3 \De_1 \De_2\right) \notag\\ &+\sin 2\theta \left(24-6 \pi ^2+24 \De_1^2-24 \pi \theta \De_1^2-6 \pi ^2 \De_1 \De_2+24 \pi \theta \De_1 \De_2\right) \Bigr],\\ {\rm Re}\left\{\tilde{u}^{(0)}_{4r}\right\} =& \frac{r \De_1 \left(\De_2-\De_1\right) \left(2 \De_2 \De_1^2+\De_1+\De_2\right)}{256 \left(\De_1^2+1\right){}^2 \left(4 \De_1^2+1\right) \left(\De_2^2+1\right)} \notag\\ & \Bigl[ 40-6 \pi ^2+24 \De_1^2+16 \De_1 \De_2-6 \pi ^2 \De_1 \De_2+\cos 4\theta \left(8+24 \De_1^2-16 \De_1 \De_2\right)\notag\\ &+\sin 2\theta \left(-32 \pi +3 \pi ^3+48 \theta -12 \pi \De_1^2+48 \theta \De_1^2-20 \pi \De_1 \De_2+3 \pi ^3 \De_1 \De_2\right)\notag\\ &+\cos 2\theta \left(-48+6 \pi ^2-48 \De_1^2+24 \pi \theta \De_1^2+6 \pi ^2 \De_1 \De_2-24 \pi \theta \De_1 \De_2\right) \Bigr],\\ {\rm Re}\left\{\tilde{u}^{(0)}_{4\theta}\right\} =& \frac{r \De_1 \left(\De_2-\De_1\right) \left(2 \De_2 \De_1^2+\De_1+\De_2\right)}{256 \left(\De_1^2+1\right){}^2 \left(4 \De_1^2+1\right) \left(\De_2^2+1\right)} \notag\\ &\Bigl[ 32 \pi -3 \pi ^3+8 \pi \De_1 \De_2-3 \pi ^3 \De_1 \De_2+24 \pi \De_1^2+4 \theta \bigl(-20+3 \pi ^2 \notag \\ &-12 \De_1^2-8 \De_1 \De_2+3 \pi ^2 \De_1 \De_2\bigr)+\sin 4\theta \left(-4-12 \De_1^2+8 \De_1 \De_2\right) \notag \\ &+\cos 2\theta \left(-32 \pi +3 \pi ^3+48 \theta -24 \pi \De_1^2+48 \theta \De_1^2-8 \pi \De_1 \De_2+3 \pi ^3 \De_1 \De_2\right) \notag\\ &+\sin 2\theta \left(24-6 \pi ^2+24 \De_1^2-24 \pi \theta \De_1^2-6 \pi ^2 \De_1 \De_2+24 \pi \theta \De_1 \De_2\right)\Bigr]. \end{align} \end{subequations} \end{widetext} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2.eps} \caption{\label{fig:vorticity}Time-averaged vorticity, $\langle \omega_4\rangle$, as a function of polar angle $\theta$. 
(a): Fixed Deborah number ($\De =100$) and $\eta_s / \eta =$ 0.1 (blue solid line), 0.01 (red dashed line) and 0.001 (black dotted line); (b): Fixed relative viscosity ($\eta_s/\eta = 0.1$) and $\De$ = 1 (blue solid line), 10 (red dashed line), and 100 (black dotted line).} \end{figure} \section{\label{sec:results}Characterization of the time-averaged flow} In the analysis above, we have computed the flow field perturbatively up to order $O(\epsilon^4)$ and found that a nonzero time-averaged flow occurs at that order, as described by Eq.~(\ref{eq:order4sol}). Hereafter, for convenience, we rewrite the two Deborah numbers as $\De_1=\De$ and $\De_2=\De \zeta$, where $\zeta = \eta_s /\eta$ is the ratio of the solvent viscosity to the total viscosity of the fluid. The creation of a net flow by the tethered flapping motion demonstrates explicitly that Purcell's scallop theorem breaks down in a viscoelastic fluid. This suggests that reciprocal flapping-like motion can be exploited for pumping polymeric fluids in simple geometries even in the absence of inertia -- a situation which is impossible in Newtonian fluids. In the following sections, we explore the properties of this time-averaged flow and its dependence on both the actuation frequency and the material properties of the fluid. \subsection{Streamline and vorticity pattern} With the streamfunction explicitly calculated, we can easily compute the flow streamlines, as well as the flow vorticity, given by $\langle \omega_4 \rangle = - \boldsymbol{\nabla}^2 \langle \Psi_4 \rangle$, or \begin{align} \langle \omega_4 \rangle =& \frac{\De^3 (1-\zeta ) \left(1+\zeta +2 \De^2 \zeta \right)}{128 \left(1+\De^2\right)^2 \left(1+4 \De^2\right) \left(1+\De^2 \zeta ^2\right)} \notag \\ &\Bigl[ -32 \pi -24 \De^2 \pi +3 \pi ^3-8 \De^2 \pi \zeta +3 \De^2 \pi ^3 \zeta \notag\\ &-4 \left(-20-12 \De^2+3 \pi ^2-8 \De^2 \zeta +3 \De^2 \pi ^2 \zeta \right) \theta \notag \\ & +\left(24 \De^2 \pi -24 \De^2 \pi \zeta \right) \cos2 \theta +\left(48+48 \De^2\right) \sin2 \theta \notag\\ &+\left(-12-36 \De^2+24 \De^2 \zeta \right) \sin4\theta \Bigr]. \end{align} We see that the vorticity is only a function of the polar angle $\theta$ between the wall and the flapper. The vorticity is plotted as a function of the angle $\theta$ in Fig.~\ref{fig:vorticity} for different relative viscosities (Fig.~\ref{fig:vorticity}a) and different Deborah numbers (Fig.~\ref{fig:vorticity}b). The locations where the vorticity changes its sign are apparently invariant and occur around $\theta \approx 3\pi/16$ (from negative to positive) and $\theta \approx 3\pi/8$ (from positive to negative). The streamline pattern and vorticity distribution are also qualitatively similar for different Deborah numbers and relative viscosities, as illustrated in Fig.~\ref{fig:streamline} for different Deborah numbers ($\De=1$ and $\De=100$) at a fixed relative viscosity of $0.1$. It can be noted that, keeping the relative viscosity fixed, increasing the Deborah number leads to more inclined streamlines (greater vertical velocity components) near the flat wall ($\theta=0$). \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{fig3.eps} \caption{\label{fig:streamline} Streamline and vorticity pattern for $\eta_s / \eta = 0.1$; (a): $\De=1$; (b): $\De =100$.
The grayscale map displays the value of the vorticity, with legend shown on the right of each plot.} \end{figure*} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig4.eps} \caption{\label{fig:direction}Net flow velocity along the average flapper position, $\langle u_{4r}\rangle/r$, as a function of the polar angle $\theta$. (a): fixed Deborah number ($\De=100$) and $\eta_s / \eta =$ 0.1 (blue solid line), 0.01 (red dashed line) and 0.001 (black dotted line); (b) fixed relative viscosity ($\eta_s/\eta =$ 0.1) and De = 1 (blue solid line), 10 (red dashed line), and 100 (black dotted line).} \end{figure} \subsection{Directionality of the flow} As shown from the arrows in the streamline pattern in Fig.~\ref{fig:streamline}, the flapping motion draws the polymeric fluid towards the hinge point at an acute angle, and pumps the fluid away from the hinge point along both the flat wall and the average flapper position. To illustrate this directionality further, we plot in Fig.~\ref{fig:direction} the radial velocity per unit radius against the polar angle for different relative viscosities (Fig.~\ref{fig:direction}a) and Deborah numbers (Fig.~\ref{fig:direction}b). Again, the locations where the radial velocity changes its sign are apparently invariant under the change of relative viscosity or Deborah number, and occur around $\theta \approx \pi/4$ (positive to negative) and $\theta \approx 7\pi/16$ (negative to positive). \section{\label{sec:optimization}Optimization} Having identified the basic flow patterns generated by the flapping motion, we now turn to a possible optimization of the pumping performance. Specifically, we address the question: what is the optimal Deborah number at which the largest flow can be generated? Since different optimality criteria can be defined, we consider here three different ``optimality measures'' for the net flow, and show they all generate essentially the same conclusion. \subsubsection{Flow along the boundary} Since the flapping motion pumps the fluid away from the hinge point along the average flapper position, one natural measure of the pumping performance is the magnitude of flow along the average flapper position ($\theta=\pi/2$). Note that the velocity field is directly proportional to the radius, and recall that the velocity is only radial along the average flapper position as required by symmetry. Consequently, the dependence of the intrinsic flow strength upon the Deborah number can be characterized by the ratio between the radial velocity along the average flapper position and the radial distance, \begin{align}\label{eq:optimB} U_b \left(\De, \zeta \right)=& \ \frac{\langle u_{4r} \rangle (r, \theta=\pi/2)}{r} \notag\\ =& \ \frac{3 \De^3 \left(\pi ^2-8\right) (1-\zeta ) \left(1+\zeta +2 \De^2 \zeta \right)}{64 \left(1+5 \De^2+4 \De^4\right) \left(1+\De^2 \zeta ^2\right)}, \end{align} which is plotted for different relative viscosities in Fig.~\ref{fig:optim}a. {From Eq.~(\ref{eq:optimB}), we see that for small values of $\De$, $U_b \sim \De^3$, whereas for large values of $\De$, $U_b\sim1/\De$, and therefore an optimal Deborah number is expected to exist.} This is confirmed in Fig.~\ref{fig:optim}a, where we see that for each value of the relative viscosity, there is an optimal value of the Deborah number where the flow along the boundary is maximal. For small relative viscosities, we note the presence of two local peaks (in contrast, only one exists for $\eta_s/ \eta = 10^{-1}$). 
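Since Eq.~(\ref{eq:optimB}) is in closed form, this peak structure can be checked directly by a numerical scan. The short Python sketch below is our own illustration (the function name and scan ranges are not part of the original analysis): it evaluates $U_b$ on a logarithmic grid in $\De$ and reports the interior local maxima for several relative viscosities, reproducing a single peak for $\zeta=10^{-1}$ and $\zeta=0$ and two peaks for small nonzero $\zeta$, with the outer peak drifting outward as the relative viscosity is decreased.
\begin{verbatim}
import numpy as np

def U_b(De, zeta):
    # Reduced flow velocity along the average flapper
    # position, Eq. (optimB) of the text.
    num = (3.0 * De**3 * (np.pi**2 - 8.0) * (1.0 - zeta)
           * (1.0 + zeta + 2.0 * De**2 * zeta))
    den = (64.0 * (1.0 + 5.0 * De**2 + 4.0 * De**4)
           * (1.0 + De**2 * zeta**2))
    return num / den

De = np.logspace(-2, 6, 200001)   # logarithmic scan in De
for zeta in (1e-1, 1e-2, 1e-3, 0.0):
    u = U_b(De, zeta)
    # interior local maxima: points above both neighbors
    pk = np.where((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]))[0] + 1
    print(f"zeta = {zeta:7.1e}: peaks near De =", np.round(De[pk], 2))
\end{verbatim}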
Physically, by decreasing the relative viscosity, we are varying the retardation time of the fluid, while keeping the relaxation time fixed. The position of the second peak shifts commensurately when the relative viscosity is varied by orders of magnitude, while the position of the first peak is unchanged. When the relative viscosity is set to zero (zero retardation time, which is a singular limit), we see in Fig.~\ref{fig:optim}a a single peak at essentially the same Deborah number as the first peak. From these observations, we deduce that the two local optimal Deborah numbers arise from two different properties of the fluid, respectively relaxation and retardation. For small relative viscosities, the smaller local optimal Deborah number can be attributed to relaxation, while the larger one can be attributed to retardation and disappears in the singular (and unphysical) limit of zero retardation. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{fig5.eps} \caption{\label{fig:optim}Dependence of pumping performance on the Deborah number, for two different pumping measures. (a): Reduced flow velocity along the average flapper position, $U_b$; (b): Reduced kinetic energy, $E$. For both cases: $\eta_s/\eta=0.1$ (left solid line, blue); $\eta_s/\eta=0.01$ (red dot-dashed line); $\eta_s/\eta=0.001$ (orange dotted line); $\eta_s/\eta=0.0001$ (right solid line, green); $\eta_s/\eta=0$ (black dashed line).} \end{figure*} \subsubsection{Kinetic energy} Another possible optimization measure is related to the total kinetic energy of the average flow. Since the velocity field is directly proportional to the radius, it takes the general form $\langle u_{4r} \rangle = r f(\theta)$ and $\langle u_{4\theta} \rangle = r g(\theta)$, where the functions $f(\theta)$ and $g(\theta)$ can be found from Eq.~(\ref{eq:order4sol}). Therefore, the dependence of the total kinetic energy of the average flow upon the Deborah number can be characterized by a reduced energy given by the integral over the polar angle \begin{align} E(\De,\zeta) =\int^{\pi/2}_0 \left( \left[f(\theta) \right]^2 +\left[g(\theta) \right]^2 \right) d\theta, \end{align} and is given analytically by \begin{align}\label{eq:optimE} E(\De,\zeta)=& \frac{\De^6 \pi (-1+\zeta )^2 \left(1+\zeta +2 \De^2 \zeta \right)^2 }{98304 \left(1+\De^2\right)^4 \left(1+4 \De^2\right)^2 \left(1+\De^2 \zeta ^2\right)^2} \notag \\ &\Bigl[-2074-4028 \De^2-2058 \De^4+1054 \pi ^2 \notag\\ &+858 \De^2 \pi ^2-144 \De^4 \pi ^2-174 \pi ^4 -45 \De^2 \pi ^4 \notag\\ &+36 \De^4 \pi ^4+9 \pi ^6-120 \De^2 \zeta +88 \De^4 \zeta \notag\\ &+1250 \De^2 \pi ^2 \zeta +1146 \De^4 \pi ^2 \zeta -303 \De^2 \pi ^4 \zeta \notag\\ &-117 \De^4 \pi ^4 \zeta +18 \De^2 \pi ^6 \zeta -104 \De^4 \zeta ^2 \notag\\ &+52 \De^4 \pi ^2 \zeta ^2-93 \De^4 \pi ^4 \zeta ^2 +9 \De^4 \pi ^6 \zeta ^2 \Bigr]. \end{align} {With a fixed relative viscosity, for small values of $\De$, we have $E \sim \De^6$, whereas $E \sim 1/\De^2$ for large values of $\De$, so an optimal $\De$ should exist.} The function $E(\De,\zeta)$ is plotted for different values of the relative viscosity in Fig.~\ref{fig:optim}b, and similarly to the previous section we see indeed the existence of an optimal value of $\De$ for each $\zeta$. \subsubsection{Enstrophy} Finally, we also consider the dependence of the enstrophy of the flow upon the Deborah number.
The total enstrophy of the flow is proportional to the integral \begin{align} \mathcal{E}(\De, \zeta) = \int^{\pi/2}_{0} \langle \omega_{4} \rangle^2 d\theta, \end{align} which can be analytically calculated to be \begin{align} \mathcal{E} =& \frac{\De^6 \pi (-1+\zeta )^2 \left(1+\zeta +2 \De^2 \zeta \right)^2 }{98304 \left(1+\De^2\right)^4 \left(1+4 \De^2\right)^2 \left(1+\De^2 \zeta ^2\right)^2} \notag \\ & \Bigl[-1800-14256 \De^2-12744 \De^4+616 \pi ^2 \notag \\ &+2424 \De^2 \pi ^2 +1440 \De^4 \pi ^2-120 \pi ^4-72 \De^2 \pi ^4 \notag \\ &+9 \pi ^6 +10656 \De^2 \zeta +11232 \De^4 \zeta -1192 \De^2 \pi ^2 \zeta \notag \\ &-456 \De^4 \pi ^2 \zeta -168 \De^2 \pi ^4 \zeta -72 \De^4 \pi ^4 \zeta \notag \\ &+18 \De^2 \pi ^6 \zeta -288 \De^4 \zeta ^2-368 \De^4 \pi ^2 \zeta ^2 \notag \\ &-48 \De^4 \pi ^4 \zeta ^2 +9 \De^4 \pi ^6 \zeta ^2\Bigr]. \end{align} The variation of $\mathcal{E}$ with $\De$ turns out to be very similar to that for $E$ (Eq.~\ref{eq:optimE}, shown in Fig.~\ref{fig:optim}b), and is not reproduced here. \subsubsection{Optimal Deborah number} We next compute numerically the optimal Deborah number, maximizing the pumping measures in both Eqs.~(\ref{eq:optimB}) and (\ref{eq:optimE}), as a function of the relative viscosity. The results are displayed in Fig.~\ref{fig:loglogoptim}. The optimality conditions turn out to agree quantitatively for both pumping measures, and correspond to an inverse linear relationship between $\De_{\text{opt}}$ and $\zeta$. This scaling is confirmed by an asymptotic analysis of the exact analytical formula for $\De_{\text{opt}}$, found by setting the partial derivative of Eq.~(\ref{eq:optimB}) with respect to $\De$ to zero, which shows that indeed $\De_{\text{opt}} \sim 1/\zeta$ for small values of $\zeta$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{fig6.eps} \caption{\label{fig:loglogoptim}Optimal Deborah number, $\De_{\text{opt}}$, as a function of relative viscosity, $\eta_s/\eta$, obtained by optimizing (a) the flow along the average position of the flapper $\theta = \pi/2$ (blue solid line) and (b) the flow kinetic energy (red dots).} \end{figure} \section{\label{sec:discussion}Discussion} In this paper, we have considered what is arguably the simplest geometrical setup to demonstrate that net fluid pumping can be obtained from the purely sinusoidal forcing of a viscoelastic fluid. The fluid was modeled as an Oldroyd-B fluid both for simplicity and because of the physical relevance of the model. The main result we obtained is the time-averaged flow, described by Eq.~(\ref{eq:order4sol}), generated by the reciprocal flapping motion. In accordance with the scallop theorem, setting the Deborah number to zero in Eq.~(\ref{eq:order4sol}) leads to no flow, but a net flow occurs for all nonzero values of $\De$. Our calculations allow us to demonstrate explicitly the breaking of the scallop theorem in the context of fluid pumping, and suggest the possibility of taking advantage of the intrinsic nonlinearities of complex fluids for their transport. Physically, such a flow is driven by normal-stress differences that arise in the fluid due to the stretching of the polymeric microstructures by the background flow. The calculation was done asymptotically for small-amplitude flapping, and the net flow occurs at fourth order. As in the classic work by Moffatt \cite{moffatt}, our results should be understood as similarity solutions which are valid close enough to the fixed hinge point that inertial effects are negligible.
The advantage of such a theoretical treatment is that it allows us to obtain the entire flow field analytically, in particular the spatial structure of the flow, and the dependence of the net pumping on the actuation parameters (the flapping frequency) and the material properties of the fluid (relaxation time and viscosities). Taking advantage of these analytical results, we have been able to analytically optimize the pumping performance, and derive the optimal Deborah number as a function of the ratio of solvent to total viscosity. Although we have considered here the simplest geometrical and dynamical setup possible, the results motivate future work which will focus on the flapping of three-dimensional finite-size appendages in polymeric fluids. We now turn to the relevance of our results to biological transport. In Newtonian fluids, only the non-reciprocal component of the motion of cilia -- {\it i.e.} the difference between their effective and recovery strokes -- affects fluid transport \cite{brennen, blake}. In contrast, we show in this paper that the back-and-forth component of cilia motion, which is reciprocal, does influence transport in the case of viscoelastic biological fluids. The effect is expected to be crucial since the typical Deborah number in ciliary transport is large, and elastic effects of the fluid are therefore likely to be significant. For example, from rheological measurements \cite{hwang, gilboa, lai}, we know the relaxation time of respiratory mucus ranges between $\lambda \approx 30 - 100$ s, and that of the cervical mucus present in the female reproductive tract ranges from $\lambda \approx 1-100$ s \cite{six,hwang}. In addition, cilia typically oscillate at frequencies of $f=\omega/2\pi \approx 5 - 50$ Hz \cite{brennen}, and therefore, ciliary transport of mucus occurs at large (or very large) Deborah numbers, $\De=\lambda \omega \sim 10$ to $10^4$. Furthermore, the results of our paper should be contrasted with previous work. It was shown in Ref.~\cite{lauga1} that the presence of polymeric stresses leads to a decrease of the speed at which a fluid is pumped by a waving sheet -- in that case the complex fluid therefore led to a degradation of the transport performance. In contrast, we demonstrate in the current paper a mode of actuation which is rendered effective by the presence of polymeric stresses -- the complex fluid therefore leads in this case to an improvement of the transport performance. For a general actuation gait, it is therefore not known a priori whether the presence of a complex fluid will lead to a degradation or an improvement of the pumping performance, and whether or not a general classification depending on the type of actuation gait can be derived remains a question to be addressed in the future. \section*{Acknowledgments} Funding by the National Science Foundation (grant CBET-0746285 to EL) is gratefully acknowledged.
\section{Introduction} There is extensive experimental evidence, e.g.\ in Refs.~\cite{Adams:2005dq,Adare:2008ab}, that a hot and strongly interacting medium is formed in high-energy heavy-ion collisions. It is expected that color screening of the quark-antiquark binding potential within such a medium would render quarkonium formation less likely~\cite{Matsui:1986dk}, an effect colloquially referred to as ``melting'' of the bound state. Since quarkonium states with stronger binding energies would melt at higher temperatures than the more weakly bound states, measuring the yields of different quarkonium states may reveal a sequential suppression pattern, thus serving as a thermometer for the medium~\cite{Mocsy:2007jz}. Although the suppression of charmonia was anticipated as a key signature of the quark-gluon plasma formation~\cite{Matsui:1986dk}, later studies revealed that several concurrent effects, such as recombination of uncorrelated $c\bar{c}$ pairs in the medium, cold nuclear matter (CNM) effects and feed-down from higher-mass states, complicate the interpretation of the results~\cite{Grandchamp:2004tn,Vogt:2012fba,Vertesi:2015xba}. Bottomonia, on the other hand, are less affected by recombination at top RHIC energies, and CNM effects are also expected to be moderate at mid-rapidity. Thus the measurements of the \Ups{}($n$S) states provide a cleaner probe of the strongly interacting medium. Recent results from STAR~\cite{Adamczyk:2013poh} show that in central Au+Au collisions at $\sqsn=200$ GeV, the \Ups{}(1S+2S+3S) production is suppressed to an extent that cannot be explained by CNM effects only. In the same study the yields of the excited states are consistent with a complete suppression. Since the energy density in central U+U collisions is estimated to be about 20\% higher on average than that in central Au+Au collisions at $\sqsn=200$ GeV~\cite{Kikola:2011zz}, studies of quarkonium production in U+U collisions can provide further tests of the sequential suppression hypothesis. The recent installation of the Muon Telescope Detector (MTD)~\cite{Ruan:2009ug}, on the other hand, has opened the door for precision measurements of quarkonia at STAR via the dimuon decay channel. \section{First measurement of \Ups{} production in U+U collisions} The STAR detector complex~\cite{Ackermann:2002ad} provides clean identification of electrons at mid-rapidity ($|\eta|<1$) over the full azimuthal angle. STAR collected data in U+U collisions at $\sqsn=193$ GeV triggered by high-energy electrons (or positrons) with an integrated luminosity of 263.4 $\mu b^{-1}$. The analysis of this data, briefly recapitulated here, is described in detail in Ref.~\cite{Adamczyk:2016dzv}. The tracks of the electron candidates are reconstructed in the Time Projection Chamber (TPC), and then matched with energy deposits in the Barrel Electromagnetic Calorimeter (BEMC). Electron identification is carried out in the TPC based on fractional energy loss, and in the BEMC based on the shape of the electromagnetic shower. Electron candidates are required to have an energy-to-momentum ratio close to unity. Identified electron and positron candidates are then paired in order to reconstruct the invariant mass of the \Ups{} candidates. The combinatorial background is then reconstructed using like-sign electron (positron) pairs. Since Drell-Yan processes and open $b\bar{b}$ production yield correlated background in the signal region, the combinatorial background cannot be simply subtracted to get the signal.
Instead, a simultaneous fit is applied to the unlike-sign and like-sign invariant mass data. The former is constructed as a sum of the three \Ups($n$S) peaks, the correlated background and the combinatorial background. A priori knowledge of the signal and background shapes was obtained from simulations~\cite{Adamczyk:2013poh,Adamczyk:2016dzv} in order to reduce the number of free parameters in the fit. Figure~\ref{fig:uumass} shows the invariant mass distributions together with the fitted functions on the combinatorial and correlated backgrounds as well as the signal peaks. \begin{figure}[h!] \includegraphics[width=\columnwidth]{UpsInvMass_0.eps} \caption{\label{fig:uumass}Invariant mass distribution of \Ups candidates and the combinatorial background in the 0-60\% centrality class of U+U collisions at $\sqsn=193$ GeV~\cite{Adamczyk:2016dzv}.} \end{figure} We use the nuclear modification factor to quantify suppression, defined as $\Raa=\frac{\sigma^{inel}_{pp}}{\sigma^{inel}_{AA}}\frac{1}{\langle\Ncoll\rangle}\frac{d\sigma^{AA}_\Ups / dy}{d\sigma^{pp}_\Ups / dy}$, where $\sigma^{inel}_{AA(pp)}$ is the total inelastic cross-section of the U+U ($p$+$p$) collisions, and $d\sigma^{AA(pp)}_\Ups /dy$ denotes the \Ups production cross-section in U+U ($p$+$p$) collisions. Figure~\ref{fig:raa_npart} shows the nuclear modification factor of \Ups{}(1S+2S+3S) and \Ups{}(1S), respectively, for U+U collisions at $\sqsn=193$ GeV and Au+Au collisions at $\sqsn=200$ GeV. \begin{figure}[h!] \includegraphics[width=\columnwidth]{raa_npart.eps} \caption{\label{fig:raa_npart}\Raa of (a) \Ups{}(1S+2S+3S) and (b) \Ups{}(1S) as a function of \Npart in U+U and Au+Au collisions from STAR (solid circles and squares, respectively)~\cite{Adamczyk:2013poh,Adamczyk:2016dzv}, compared to PHENIX~\cite{Adare:2014hje} (crosses) and CMS~\cite{Khachatryan:2010zg} data (diamonds) as well as several theoretical calculations~\cite{Emerick:2011xu,Strickland:2011aa,Liu:2010ej}.} \end{figure} The new U+U data consolidate observations made previously in Au+Au data: considering 0-10\% centrality Au+Au and U+U points together, both the \Ups{}(1S+2S+3S) and \Ups{}(1S) points are significantly suppressed, similar in extent to data from the LHC~\cite{Khachatryan:2010zg}, but this suppression is not complete~\cite{Adamczyk:2016dzv}. Theory comparisons~\cite{Emerick:2011xu,Strickland:2011aa,Liu:2010ej} show that the data favor scenarios in which the $q\bar{q}$ pairs are strongly bound over weakly bound ones. Figure~\ref{fig:raa_binding} shows the nuclear modification factor of \Ups states measured in U+U collisions at $\sqsn=193$ GeV and Au+Au collisions at $\sqsn=200$ GeV, compared to high-\pT{} \Jpsi in Au+Au collisions at $\sqsn=200$ GeV~\cite{Adamczyk:2012ey}. The emerging picture supports sequential melting of the quarkonium states with different binding energies. While the U+U data show a non-significant presence of the excited states, they are still consistent with the upper limit established previously in the Au+Au measurement. \begin{figure}[h!] \includegraphics[width=\columnwidth]{raa_binding.eps} \caption{\label{fig:raa_binding}\Raa versus binding energy for \Ups(1S) and \Ups{}(2S+3S) in U+U collisions at $\sqsn=193$ GeV~\cite{Adamczyk:2016dzv}, compared to the \Ups and high-\pT \Jpsi results in Au+Au collisions at $\sqsn=200$ GeV~\cite{Adamczyk:2013poh,Adamczyk:2012ey}.
The 95\% upper confidence bound is shown for \Ups{}(2S+3S) measured in Au+Au collisions.} \end{figure} \section{Production of \Ups{} in Au+Au collisions with the MTD} Measurements in the dielectron channel suffer from the electron bremsstrahlung tail, which causes the higher-mass states to contribute to the peak region of the lower-mass states. The advantage of $\Ups\rightarrow\mu^+\mu^-$ measurements is that muons are less affected by bremsstrahlung, making it less challenging to separate the different \Ups{} states with comparable statistics. The Muon Telescope Detector (MTD), completed in 2014, is designed for precision measurements of quarkonia production via the dimuon channel. It is installed outside the solenoidal magnet of STAR, covering 45\% of the azimuthal angle within the pseudorapidity range $|\eta|<0.5$. Its Multi-gap Resistive Plate Chamber (MRPC) technology provides the means to trigger on muons and to identify them. Figure~\ref{fig:mtdmass} shows the invariant mass spectrum of \Ups candidates reconstructed via the dimuon channel. \begin{figure}[h!] \includegraphics[width=\columnwidth]{Run14_Ups_Signal.eps} \caption{\label{fig:mtdmass} Invariant mass distribution of $\Upsilon\rightarrow\mu^+\mu^-$ candidates and combinatorial background in the 0-80\% centrality class of Au+Au collisions at $\sqsn=200$ GeV.} \end{figure} Here, as in the dielectron channel, the raw yields of the \Ups states are obtained from a simultaneous fit to the like-sign and unlike-sign muon pairs. The individual \Ups{}($n$S) states are modelled with Gaussian distributions, each centered at the PDG value of the given state. Figure~\ref{fig:mtdratio} shows the excited-to-ground state ratio \Ups{}(2S+3S)/\Ups{}(1S) via the dimuon channel in Au+Au collisions at $\sqsn=200$ GeV. \begin{figure}[h!] \includegraphics[width=\columnwidth]{Run14_Upsilon_Ratio_AtStar.eps} \caption{\label{fig:mtdratio}Excited-to-ground state ratio of the \Ups mesons via the dimuon channel, compared to CMS measurements~\cite{Chatrchyan:2012lxa}.} \end{figure} The result is compared to worldwide data in p+p collisions, as well as measurements from CMS in Pb+Pb collisions at $\sqsn=2.76$ TeV~\cite{Chatrchyan:2012lxa}. The result suggests that the excited states may not be as strongly suppressed at RHIC as at the LHC, although the errors are large. \section{Summary and outlook} Measurements of \Ups{}(1S+2S+3S) and \Ups{}(1S) production in U+U collisions at $\sqsn=193$ GeV consolidate the trends observed in Au+Au collisions and extend them toward higher \Npart values. We observe a significant suppression of the \Ups(1S+2S+3S){} signal in heavy-ion collisions at RHIC top energies. The suppression of the \Ups{}(1S) state in central heavy-ion collisions is also confirmed by the new U+U data. New preliminary measurements of \Ups production via the dimuon decay channel show an indication that the excited \Ups{}(2S+3S) states are not completely suppressed in Au+Au collisions at $\sqsn=200$ GeV, and hint that the dissociation of the excited \Ups{}(2S+3S) states is less prominent at RHIC than at LHC energies. There are ongoing analyses of the full 2014--2016 dataset of Au+Au collisions in both the dimuon and dielectron channels. Once completed, these will provide more precise information about the extent of excited-state suppression. In parallel, the analysis of p+A data recorded in 2015 will confirm or exclude substantial CNM effects at mid-rapidity.
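As a simple numerical illustration of the nuclear modification factor defined in the previous section, the Python sketch below assembles $\Raa$ from its ingredients; all input numbers are placeholders chosen for illustration only, not the measured STAR values.
\begin{verbatim}
def nuclear_modification_factor(dsig_AA_dy, dsig_pp_dy, n_coll,
                                sig_inel_pp, sig_inel_AA):
    # R_AA = (sigma^inel_pp / sigma^inel_AA) * (1 / <N_coll>)
    #        * (dsigma^AA/dy) / (dsigma^pp/dy)
    return (sig_inel_pp / sig_inel_AA) / n_coll * (dsig_AA_dy / dsig_pp_dy)

# Placeholder inputs (hypothetical, for illustration only):
raa = nuclear_modification_factor(dsig_AA_dy=2.5e6,  # nb
                                  dsig_pp_dy=30.0,   # nb
                                  n_coll=1000.0,     # Glauber-model estimate
                                  sig_inel_pp=42.0,  # mb
                                  sig_inel_AA=7.0e3) # mb
print(f"R_AA = {raa:.2f}")  # values below unity indicate suppression
\end{verbatim}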
This work has been supported by the Hungarian NKFIH/OTKA NK 106119 and K 120660 grants, the J\'anos Bolyai scholarship of the Hungarian Academy of Sciences, as well as the grant 13-20841S of the Czech Science Foundation (GA\v{C}R).
\section{\label{sec:Introduction} Introduction} { As the temperature is increased, strongly interacting matter undergoes a transition to a state with different properties than the vacuum at zero temperature. Deconfinement of gluons and quarks, restoration of chiral symmetry and screening of color charges are the key properties of this thermal medium (for recent reviews see \mbox{e.g.}~\cite{Bazavov:2015rfa, Ding:2015ona, Petreczky:2012rq}). The expectation value of the Polyakov loop is a sensitive probe of the screening properties of the medium. In SU(N) gauge theories the Polyakov loop is an order parameter for deconfinement. At the transition temperature, both the bare and the renormalized Polyakov loop exhibit a discontinuity and their fluctuations diverge. Hence, the bare Polyakov loop is used to study the deconfinement phase transition in SU(N) gauge theories, in particular the bare Polyakov loop susceptibility is used to define the phase transition temperature (see e.g. Ref. \cite{Boyd:1996bx}). To what extent it is a sensitive probe of deconfinement in QCD with light dynamical quarks is not quite clear in view of the crossover nature of the transition \cite{Aoki:2006we}. In particular, it is not clear if it is possible to define a crossover temperature from the bare Polyakov loop, since it is a continuous quantity in the crossover region. In recent years the deconfinement transition in QCD with light dynamical quarks has been studied in terms of fluctuations and correlations of conserved charges, which indicate the appearance of quark degrees of freedom just above the chiral transition temperature \cite{Bazavov:2013dta, Bazavov:2014yba, Bellwied:2013cta, Mukherjee:2015mxc}. After proper renormalization the expectation value of the renormalized Polyakov loop is related to the free energy, $F_Q$, of a static quark \cite{McLerran:1981pb,Kaczmarek:2002mc} \begin{equation} L^{\rm{ren}}=\exp(-F_Q^{\rm{ren}}/T). \end{equation} The renormalized Polyakov loop, or equivalently the free energy of a static charge $F_Q$ has been studied in SU(N) gauge theories in a wide temperature interval \cite{Kaczmarek:2002mc, Digal:2003jc, Kaczmarek:2004gv, Gupta:2007ax, Mykkanen:2012ri}. Comparisons of the lattice results with weak-coupling calculations have also been performed up to next-to-leading order (NLO)~\cite{Burnier:2009bk} and up to next-to-next-to-leading order (NNLO)~\cite{Berwein:2015ayt}. The renormalized Polyakov loop has been computed in QCD with dynamical quarks for various quark flavor content and quark masses~\cite{Petreczky:2004pz, Kaczmarek:2005ui, Cheng:2007jq, Bazavov:2009zn, Cheng:2009zi, Borsanyi:2010bp, Bazavov:2011nk, Bazavov:2013yv, Borsanyi:2015yka}. Continuum extrapolated results with physical quark masses exist for staggered fermion formulations \cite{Borsanyi:2010bp, Bazavov:2013yv, Borsanyi:2015yka}. For large quark masses continuum results are also available for overlap and Wilson fermion formulations \cite{Borsanyi:2015waa, Borsanyi:2015zva}. Unfortunately, none of the above studies extend to sufficiently high temperature to make contact with weak-coupling calculations. The relation of the Polyakov loop to the nature of the QCD crossover remains unclear. For large quark masses the deconfinement crossover defined in terms of the Polyakov loop and the chiral crossover defined in terms of the chiral condensate happen at about the same temperature \cite{Karsch:2000kv, Petreczky:2004pz, Kaczmarek:2005ui}. 
In the crossover region, both the Polyakov loop and the chiral condensate change rapidly and their fluctuations become large. For physical values of the quark masses the situation may be different. In Refs. \cite{Aoki:2006br, Aoki:2009sc} it was found that the deconfinement crossover defined in terms of the renormalized Polyakov loop happens at temperatures significantly higher than the chiral crossover temperature defined as the maximum of the chiral susceptibility. The study of ratios of fluctuations of the imaginary and real parts of the Polyakov loop in Ref. \cite{Lo:2013hla} suggested that the deconfinement and chiral crossover happen at about the same temperature. However, as this study used an ad-hoc renormalization prescription, lacked a continuum extrapolation and provided no information on the cutoff effects in full QCD, the implications of this result are not conclusive. In this paper we will study the free energy of a static quark in a broad temperature region extending to $5.8$ GeV. We will also reexamine the behavior of $F_Q$ in the transition region; in particular, we will calculate the entropy of a static quark, $S_Q=-\partial F_Q/\partial T$, and discuss its relation to the deconfinement transition temperature. We will show that the deconfinement transition temperature, defined at the peak of $S_Q$, is actually consistent with the chiral transition temperature. The rest of the paper is organized as follows. In Sec. II we discuss our lattice setup. In Sec. III we discuss the renormalization of the Polyakov loop using the static quark antiquark energy at zero temperature. In Sec. IV our results on the entropy of a static quark will be presented. Sec. V will show how to extend the lattice calculations of the static quark free energy to higher temperatures. In Sec. VI we will discuss the calculation of the renormalized Polyakov loop and its susceptibility using the gradient flow. The free energy of a static quark in the high temperature region will be compared to the weak-coupling results in Sec. VII. Finally, Sec. VIII contains our conclusions. Some technical details of the calculations will be given in the appendices. } \section{\label{sec:Setup} Lattice QCD Setup} { \begin{figure} \includegraphics[width=8cm]{FQbeta.eps} \caption{\label{fig: bare Fq} The bare free energy of a static quark $ f_Q^{\rm bare}=F_Q^{\rm{bare}}/T= -\log L^{\rm{bare}} $ as a function of the gauge coupling $\beta$ for different $N_{\tau}$ values.} \end{figure} We perform calculations of the bare Polyakov loop at nonzero temperature on $N_{\sigma}^3 \times N_{\tau}$ lattices with $N_\tau=4,~6,~8,~10$ and $12$, and an aspect ratio of $N_{\sigma}/N_{\tau}=4$ using the highly improved staggered quark (HISQ) action~\cite{Follana:2006rc}. The gauge configurations have been generated by the HotQCD Collaboration~\cite{Bazavov:2011nk,Bazavov:2014pvz}, in the course of studies of quark number susceptibilities at high temperatures \cite{Ding:2015fca, Bazavov:2013uja} as well as in a previous study of the renormalized Polyakov loop with the HISQ action \cite{Bazavov:2013yv}. We required additional gauge configurations and generated these using the SuperMUC and C2PAP computers at Leibniz Rechenzentrum (LRZ) in Garching. Additional gauge configurations have been generated for $N_{\tau}=4,~6$ and $8$ to calculate the Polyakov loop at very high temperatures.
Further gauge configurations have been generated for $N_{\tau}=10$ and $12$ to reduce uncertainties of the free energy at low temperatures and achieve sufficient resolution of the peak of $ S_Q $. The gauge configurations have been generated in the range of gauge coupling $\beta=5.90-9.67$ with $ \beta=10/g_0^2$ using the rational hybrid Monte-Carlo (RHMC) algorithm and the MILC~code. Details on the HISQ action implementation in the MILC~code can be found in~\cite{Bazavov:2010ru}. The lattice spacing $a$ has been fixed by the $r_1$ scale and we use the parametrization of $r_1/a$ given in Ref. \cite{Bazavov:2014pvz}. Using this parametrization we find that the above $\beta$ range corresponds to a temperature range of $116\ {\rm MeV} < T < 5814\ {\rm MeV}$. The Polyakov loop has been calculated after each molecular dynamic time unit (TU). For temperatures $T<407$ MeV the accumulated statistics corresponds to $30-60$ thousand TUs. At higher temperatures in many cases far fewer gauge configurations are available. The details on the collected statistics are given in Appendix A. The Polyakov loop on the lattice is defined as \begin{equation} P({\bf x})=\frac{1}{3} {\rm Tr}\, \prod_{x_0=0}^{N_{\tau}-1} U_0({\bf x},x_0), \label{defP} \end{equation} where $U_{\mu}(x=({\bf x},x_0))$ are the lattice link variables. The bare expectation value of the Polyakov loop will be denoted by $L^{\rm{bare}}$ in what follows, $L^{\rm{bare}}= \langle P \rangle$. Since the expectation value of the Polyakov loop is independent of ${\bf x}$ we average the Polyakov loop over the entire spatial volume. Our results for the bare Polyakov loop are summarized in Fig. \ref{fig: bare Fq} in terms of the scaled bare static quark free energy $f_Q^{\mathrm{bare}}= -\log L^{\rm{bare}} $ as a function of the gauge coupling $ \beta $. As one may see from the figure, $f_Q^{\rm{bare}}$ decreases for increasing $\beta$ and for decreasing $N_{\tau}$. The continuum limit at fixed temperature would be reached by varying $N_{\tau}$ and $\beta$ simultaneously in the limit $N_{\tau} \to \infty$, following lines going from the lower left corner in the direction of the upper right corner. Since $f_Q^{\rm{bare}}$ diverges as one proceeds along these lines, the continuum limit of $f_Q^{\rm{bare}}$ is not defined. Thus, we must subtract this divergence before taking the continuum limit. We will discuss this in the next section. } \section{\label{sec:renL} Renormalization of the Polyakov loop and the continuum extrapolation} { The Polyakov loop needs multiplicative renormalization \cite{Polyakov:1980ca}. This means that the free energy of a static quark $F_Q$ needs an additive renormalization. The additive renormalization of $F_Q$ is related to the additive renormalization of the energy of a static quark antiquark ($Q \bar Q$) pair at zero temperature. The static quark antiquark free energy $F_{Q\bar Q}(r,T)$ agrees with the static quark antiquark energy at zero temperature at short distances once a finite additive term due to trivial color factors is included \cite{Kaczmarek:2002mc}. On the other hand, $F_{Q\bar Q}(r \rightarrow \infty,T)=2 F_Q(T)$ \cite{McLerran:1981pb, Kaczmarek:2002mc}. Therefore, the renormalization constant of $F_Q$, which we denote by $C_Q$, is half of the renormalization constant of the static energy at zero temperature.
To determine the normalization constant $C_Q$ we require that the static $Q \bar Q$ energy at zero temperature at a distance $r=r_0$ is equal to $0.954/r_0$ \cite{Bazavov:2011nk}. This normalization condition is equivalent to normalizing the static energy to $0.2065/r_1$ \cite{Bazavov:2014pvz} at a distance $r=r_1$. Normalizing the static energy at $r_1$, \mbox{i.e.}~at shorter distances, has the advantage of reducing the statistical errors at large $\beta$, while the normalization at distance $r_0$ is more suitable for coarser lattices, \mbox{i.e.}~smaller values of $\beta$. Using the lattice results on the static $Q \bar Q$ energy from Ref. \cite{Bazavov:2011nk} and normalizing them to $0.954/r_0$ for $\beta \le 6.488$ we determine $r_0 C_Q$. Then using the results on the static $Q \bar Q$ energy at higher $\beta$ from Refs. \cite{Bazavov:2011nk,Bazavov:2014pvz} and normalizing those to $0.2065/r_1$ we determine $r_1 C_Q$. Finally, using $r_1/a$ and $r_0/a$ from Refs. \cite{Bazavov:2011nk, Bazavov:2014pvz} we calculate the values of the normalization constant in lattice units $a C_Q(\beta)=c_Q(\beta)$, which are shown in Fig. \ref{fig:stdzren} and tabulated in Appendix~\ref{appendix:lattice_details}. Note that since $C_Q$ has a $1/a$ divergence, $c_Q$ is finite and is a slowly varying function of $\beta$. Once the cutoff dependence is rephrased in terms of the lattice spacing $a(\beta)$, we may write $C_Q = b/a + c + \mathcal{O}(a^2)$. The divergence $b/a$ cancels against the divergence of the bare free energy. The constant $c$ is a scheme dependent constant, which depends on the distances $r_0$ or $r_1$, but is independent of the lattice spacing. Since the leading higher order corrections are suppressed by $\alpha_s a^2$ for the HISQ/Tree action, the derivative with respect to $a$ of these corrections vanishes in the continuum limit. We note that, since $T=1/(aN_{\tau})$, at fixed $N_{\tau}$ the dependence of $c_Q$ on $a$ translates into a dependence on the temperature. \begin{figure} \includegraphics[width=8cm]{QQproccQ.eps} \caption{\label{fig:stdzren} Renormalization constant $ c_Q(\beta) $ from the $ Q\bar Q $ renormalization procedure. Interpolations are shown as $ 1\sigma $ bands and data points are explained in the text. The inset shows the derivative $ \frac{\partial c_Q}{\partial \beta} $. Optimal and relaxed refer to different spline interpolations with $n_k=4$ or $n_k=5$ knots, respectively. } \end{figure} \begin{figure*} \includegraphics[width=8cm]{loQQfQscaling.eps} \includegraphics[width=8cm]{hiQQfQscaling.eps} \caption{ The static quark free energy at various temperatures as a function of $N_{\tau}$. ``CL'' marks the continuum limit ($N_\tau \to \infty$). Results for each temperature are shifted by a constant for better visibility. The $1/N_{\tau}^2$ continuum extrapolations are shown as bands with filled pattern. The continuum extrapolations with a $1/N_{\tau}^4$ term included are shown as solid filled bands. The width of the band shows the statistical uncertainty of the fits. The left panel shows the results in the low temperature region, while the right panel shows the results in the high temperature region. } \label{fig:fNt} \end{figure*} \begin{figure} \includegraphics[width=8cm]{QQpointwiseFQ.eps} \caption{ Different continuum extrapolations for the static quark free energy $F_Q$. We show extrapolations with the coefficient $P_4$ of the $1/N_{\tau}^4$ term set to zero as well as for nonzero values of $P_4$.
} \label{fig:fcont} \end{figure} \begin{figure} \includegraphics[width=8cm]{FQ_low_summary.eps} \caption{ The continuum results for the free energy of a static quark compared to previous calculations \cite{Borsanyi:2010bp,Bazavov:2013yv}. Also shown as a solid black line is the hadron resonance gas calculation of $F_Q$ from Ref. \cite{Bazavov:2013yv}.} \label{fig:fcomp} \end{figure} Now for the renormalized free energy in temperature units we can write \begin{equation} f_Q^{\mathrm{ren}}(T(\beta,N_{\tau}),N_{\tau}) = f_Q^{\rm{bare}}(\beta,N_\tau) + N_\tau c_Q(\beta). \label{eq: fqren} \end{equation} The renormalized free energy depends on $\beta$ through the chain rule for $T(\beta,N_\tau)$. We use $T$ as argument instead of $\beta$, since the continuum limit of $f_Q^{\rm ren}(T(\beta,N_\tau),N_{\tau})$ can be taken at fixed temperature. Hereafter, we usually omit the superscript ``ren'' when referring to renormalized quantities, but keep the superscript ``bare'' for the bare quantities. Here and in what follows we denote by $f_Q$ the scaled renormalized free energy of a static quark, $f_Q=F_Q/T$. In order to determine $f_Q^{\rm{bare}}$ and $c_Q$ as a function of $\beta$ and/or as a function of the temperature, we interpolate the lattice results on $c_Q(\beta)$ and $f_Q^{\rm{bare}}(\beta,N_\tau)$ independently in $\beta$. First, we discuss the interpolation procedure for $c_Q$. To obtain $c_Q$ as a function of $\beta$ we use smooth splines and polynomial interpolations. The errors on the interpolations have been estimated using the bootstrap method. We varied the number of knots of the splines as well as the value of the smoothing parameter in order to estimate the systematic errors. In the case of polynomial fits we consider polynomials of different degree. The interpolation of $c_Q$ is also shown in Fig. \ref{fig:stdzren}. In the inset of the figure we show the derivative of $c_Q$ with respect to $\beta$ in order to highlight the spread in different interpolations. The differences between the different interpolations are most visible in the $\beta$ dependence of the derivative of $c_Q$ that is needed for the evaluation of the entropy of a static charge to be discussed in the next section. Next, we discuss the interpolations of the free energy as well as the continuum extrapolations. At finite cutoff, the temperature $T$ is related to $N_\tau$ and the lattice spacing $a$ through $a N_{\tau} =1/T$; trading $a$ for $\beta$ we can also write $\beta = \beta(T,N_\tau)$. Consequently, the limit $a \to 0$ at fixed temperature is tantamount to the limit $N_{\tau} \to \infty$. The power law dependence of cutoff effects on $a$ or $1/N_{\tau}$, respectively, is determined by the leading discretization errors of the lattice simulations ($\mathcal{O}(\alpha_s a^2,a^4)$ for the HISQ action). We will use two approaches to perform the continuum extrapolation, which we will call local and global extrapolations. In the first approach, which we will call a local fit, we perform the interpolation of the lattice results for $f_Q^{\rm bare}$ as a function of $\beta$ for each $N_{\tau}$ separately. Using the value of $c_Q$ determined above we then calculate the renormalized free energy $f_Q(T(\beta,N_\tau),N_{\tau})$ for each $N_{\tau}$ and perform continuum extrapolations. In the second approach, which we will call a global fit, we simultaneously fit the temperature and $N_{\tau}$ dependence of $f_Q^{\mathrm{bare}}(\beta,N_\tau) + N_\tau c_Q(\beta)$.
Setting $N_{\tau} \rightarrow \infty$ in the resulting fit we obtain the continuum extrapolated results for the renormalized free energy. We will discuss these two approaches in the following subsections in more detail. \subsection{\label{Local fits}Local interpolations and extrapolations} To perform the interpolation of $f_Q^{\rm bare}(\beta,N_{\tau})$ we split the $\beta$ range into overlapping low $\beta$ and high $\beta$ intervals which roughly correspond to temperatures $T<200$ MeV and $T>200$ MeV, respectively. In these intervals for each $N_{\tau}$ we perform interpolations in $\beta$ using smoothing splines as well as polynomial fits. We find that in the low $\beta$ range it is sufficient to use splines with $5-7$ knots, while in the high $\beta$ range we use splines with $8-19$ knots depending on the value of $N_{\tau}$. The statistical errors of the interpolations are estimated using the bootstrap method. To estimate possible systematic errors in the interpolation we also performed polynomial fits of the lattice data for $f_Q^{\rm bare}(\beta,N_{\tau})$ in the above intervals. We find that the interpolations obtained with polynomials and splines agree well within the estimated statistical errors not only for $f_Q^{\rm bare}(\beta,N_{\tau})$ but also for its derivative. Therefore, there are no additional systematic errors in our analysis. The details of the interpolations and fits are presented in Appendix \ref{appendix:fits}. Having the interpolation for $f_Q^{\rm bare}(\beta,N_{\tau})$ and the interpolation for $c_Q$ we calculate the renormalized free energy for each $N_{\tau}$. We then perform a $1/N_{\tau}^2$ extrapolation for $f_Q$ to obtain the continuum limit for each value of the temperature. In Fig. \ref{fig:fNt} we show the $N_{\tau}$ dependence of $f_Q$ together with $1/N_{\tau}^2$ and $1/N_{\tau}^4$ extrapolations. As one can see from the figure, cutoff effects are fairly small for $T>200$ MeV and $1/N_\tau^2$ scaling holds including the $N_{\tau}=6$ data. Note that we do not consider the $N_{\tau}=4$ results partly because they are available only for $T>200$ MeV and partly because they are outside the scaling window. At lower temperatures cutoff effects are larger and the $N_{\tau}=6$ data are not in the scaling regime. Therefore, we have to consider fits with a $1/N_{\tau}^4$ term included, or use $1/N_{\tau}^2$ fits for $N_{\tau} \geq 8$ only. The continuum results obtained with the above extrapolations are shown in Fig. \ref{fig:fcont}. \subsection{\label{Global fits}Global fits and extrapolations} In the previous subsection we have seen that the temperature dependence can be described by polynomials in the low and high $\beta$ ranges once $\beta$ has been reexpressed in terms of $T$. Furthermore, the $N_{\tau}$ dependence of the lattice results is well described by a function $P_0+P_2/N_{\tau}^2+P_4/N_{\tau}^4$. Therefore, we performed fits to the $N_{\tau}=6,~8,~10$ and $12$ data on $f_Q(T(\beta,N_{\tau}),N_{\tau})$ using the following form \begin{equation} P_0(T)+\frac{P_2(T)}{N_{\tau}^2}+\frac{P_4(T)}{N_{\tau}^4}. \label{eq:global fit} \end{equation} Here $P_i,~i=0,2,4$ are polynomials in the temperature $T$. As we did for local interpolations, we split the temperature range into overlapping low and high temperature intervals and performed the global fits in both intervals separately. These intervals roughly correspond to $T<200$ MeV and $T>200$ MeV.
The low temperature fits extend only down to the lowest temperatures where bare free energies are available for $N_{\tau}=12$, which is slightly above $120$ MeV. The high temperature fits extend only up to the highest temperature where $c_Q$ is available for $N_{\tau}=12$, which is slightly below $410$ MeV. We used fits with and without the $1/N_{\tau}^4$ term, as well as including and excluding the $N_{\tau}=6$ data. We find that within estimated statistical errors all the fits agree both for $f_Q(T(\beta,N_\tau),N_{\tau})$ and its derivatives. The account of these fits is given in Appendix \ref{appendix:fits}. For the continuum result we use the fit which does not include the $N_{\tau}=6$ data and has fixed $P_4=0$. We consider this fit as our continuum limit after setting $N_{\tau}=\infty$, which corresponds to setting $P_2=0$ in the resultant fit function. This is shown in Fig. \ref{fig:fcont}, where we see that local and global continuum extrapolations for $f_Q$ agree very well. \subsection{Comparison with previous calculations} Now let us compare the above continuum results with the previously published results that use the same renormalization scheme with improved staggered quark actions. Namely we compare our results with the continuum results obtained with the stout action \cite{Borsanyi:2010bp} as well as with the HISQ action \cite{Bazavov:2013yv}. This comparison is shown in Fig. \ref{fig:fcomp}. We see that our results agree with the previously published results within errors; however, the central values for $F_Q$ in our analysis are slightly smaller for $T<130$ MeV due to the different way the continuum extrapolation is performed. The previous estimate of the continuum limit for $T \leq 135$ MeV had been performed by averaging $N_{\tau}=10$ and $N_{\tau}=8$ data \cite{Bazavov:2013yv}, whereas our analysis includes new $N_{\tau}=12$ ensembles at low temperatures that made a controlled continuum extrapolation possible. For $T>180$ MeV the central value of $F_Q$ in our analysis is somewhat larger. This is due to the updated values of the renormalization coefficients $c_Q$. The previous HISQ calculations relied on the zero temperature static quark antiquark energies obtained in Ref. \cite{Bazavov:2011nk}, which have larger statistical uncertainty and use fewer $\beta$ values. The current analysis of $c_Q$ is based on the analysis of the zero temperature static quark antiquark energies from Ref. \cite{Bazavov:2014pvz}, which has higher statistics and uses more $\beta$ values. The main new element in our analysis is that it extends to significantly higher temperatures. Finally, we compare our results with the prediction of the hadron resonance gas (HRG) calculation for $F_Q$ \cite{Bazavov:2013yv}, which includes the contribution of all static-light mesons and all the static-light baryons (see also Ref. \cite{Megias:2012kb}). Since the HRG value of $F_Q$ is only defined up to a temperature independent constant, this constant needs to be fixed. We do so by matching the HRG value of $F_Q$ to the lattice results at the lowest temperature. The comparison is shown in Fig. \ref{fig:fcomp}. We see that the HRG description works only for temperatures $T<140$ MeV, which is in agreement with the previous analysis \cite{Bazavov:2013yv}.
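To make the continuum extrapolations described above concrete, the following minimal Python sketch performs the simplest variant, an error-weighted fit of $f_Q$ at fixed temperature to the form $P_0+P_2/N_{\tau}^2$, whose intercept $P_0$ is the continuum value. The input numbers are synthetic stand-ins, not the actual lattice data.
\begin{verbatim}
import numpy as np

# Synthetic stand-in data at one fixed temperature:
# f_Q on N_tau = 6, 8, 10, 12 lattices, with errors.
ntau = np.array([6.0, 8.0, 10.0, 12.0])
f_q  = np.array([0.520, 0.488, 0.473, 0.465])
err  = np.array([0.004, 0.004, 0.005, 0.006])

# Fit f_Q = P0 + P2/N_tau^2, i.e. a line in x = 1/N_tau^2;
# weights 1/err give an error-weighted least-squares fit.
x = 1.0 / ntau**2
(p2, p0), cov = np.polyfit(x, f_q, deg=1, w=1.0 / err, cov=True)

print(f"continuum value P0 = {p0:.4f} +/- {np.sqrt(cov[1, 1]):.4f}")
print(f"cutoff coefficient P2 = {p2:.3f}")
\end{verbatim}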
} \section{\label{sec:entropy} Entropy of a static quark} { \begin{figure*} \includegraphics[width=8cm]{SQ_nt6.eps} \includegraphics[width=8cm]{SQ_nt8.eps} \includegraphics[width=8cm]{SQ_nt10.eps} \includegraphics[width=8cm]{SQ_nt12.eps} \caption{\label{fig:SQ_allnt} The entropy of a static quark calculated on $N_{\tau}=6$, $8$, $10$ and $12$ lattices. Shown are the results obtained from local and global fits. The vertical band corresponds to the chiral transition temperature from~\cite{Bazavov:2011nk}. The solid black lines show the entropy in the hadron resonance gas model~\cite{Bazavov:2013yv}. } \end{figure*} \begin{figure} \includegraphics[width=8cm]{comp_old.eps} \caption{\label{fig:sq_old_comp} The comparison of $S_Q$ in the continuum limit with previous calculations obtained on $N_{\tau}=4$ lattices~\cite{Kaczmarek:2005gi, Petreczky:2004pz}. The temperature axis has been rescaled for each lattice calculation by a corresponding lattice result for $T_c$, namely $T_S=153\,{\rm MeV}$ for our result, $T_\chi=193\,{\rm MeV}$ and $T_\chi=200\,{\rm MeV}$ for the $N_f=3$ and $N_f=2$ results, respectively, and $T_L=270\,{\rm MeV}$ for the quenched case ($N_f=0$). } \end{figure} \begin{figure} \includegraphics[width=8cm]{hightemp_SQ.eps} \caption{\label{fig:SQcomp} The entropy of a static quark in the high temperature region. The lines correspond to leading order weak-coupling calculations for the scales $\mu=2 \pi T$ and $\mu=\pi T$. } \end{figure} While the free energy of a static quark encodes the screening properties of the hot QCD medium, its temperature dependence is relatively featureless. The change in the screening properties of the medium can be seen more clearly in terms of the entropy of a static quark \begin{equation} S_Q(T)= -\frac{\partial F_Q(T)}{\partial T}. \end{equation} Note that the equality holds also if the temperature derivative is taken at changing volume, since the pressure exerted by a static quark is zero. The entropy was discussed recently in connection with the strongly coupled nature of the quark gluon plasma \cite{Kharzeev:2014pha, Hashimoto:2014fha}. The entropy of a static quark in SU(3) gauge theory diverges at the phase transition temperature and was considered in Refs. \cite{Petreczky:2005bd, Kaczmarek:2005gi}. The entropy was also calculated for 2 and 3 flavor QCD with larger than physical quark masses \cite{Kaczmarek:2005gi,Petreczky:2004pz}. It has a peak at the crossover temperature, i.e., it corresponds to the inflection point of $F_Q$. Therefore, calculating $S_Q$ for the physical quark masses is of interest, since $S_Q$ could be used to define a deconfinement transition temperature. Based on the interpolation of $f_Q$ and $c_Q$ described in the previous section it is straightforward to estimate $S_Q$. We write \begin{equation} -S_Q= f_Q^{\rm bare} + T\frac{\partial \beta}{\partial T} \frac{\partial f_Q^{\rm bare}}{\partial \beta}+ N_{\tau} \left( c_Q + T\frac{\partial \beta}{\partial T} \frac{\partial c_Q}{\partial \beta} \right). \end{equation} Here, the derivative $\partial \beta/\partial T$ is related to the nonperturbative beta function $R_\beta$ through $R_\beta=T( \partial \beta/\partial T)$, determined in Ref. \cite{Bazavov:2014pvz}. The entropy can also be calculated using the global fits for $f_Q(T(\beta,N_\tau),N_{\tau})$ discussed in the previous section. The numerical results for the entropy of a static quark are shown in Fig. \ref{fig:SQ_allnt} for $N_{\tau}=6$, $8$, $10$ and $12$ with local as well as global fits. These fits have been discussed in Secs.
We see that with increasing temperature $S_Q$ increases, reaching a maximum at some temperature, and then decreases again. Therefore, it makes sense to discuss the behavior of the entropy at low temperatures, in the peak region and at high temperatures separately. Since $S_Q$ for $N_{\tau}=6$ is not in the $a^2$ scaling regime in the peak region and below, no $a^2$ scaling fit is shown for $N_{\tau}=6$. At low temperatures we expect $S_Q$ to be described by the HRG model of Ref. \cite{Bazavov:2013yv}, discussed in the previous section. The predictions of this model for $S_Q$ are shown as black lines in Fig. \ref{fig:SQ_allnt}. For low temperatures, $T<130$ MeV, our lattice results for $S_Q$ overlap with the HRG curve. As the temperature increases we see very clear deviations from the HRG result; namely, the entropy $S_Q$ calculated on the lattice is significantly larger than the HRG prediction. As mentioned above, the entropy has a peak at some temperature. The position of the maximum in $S_Q$ turns out to be up to $3$ MeV below the chiral crossover temperature at finite cutoff, $T_\chi(N_{\tau})$ \cite{Bazavov:2011nk}, which is shown as a vertical line in the figure for each $N_{\tau}$ separately. The bands indicate the uncertainty in $T_\chi(N_{\tau})$. The values of $T_\chi(N_{\tau})$ are obtained from the $O(2)$ scaling fits of the chiral susceptibilities~\cite{Bazavov:2011nk}. If the maximum in the entropy of a static quark is used to define a deconfinement crossover temperature, one could say that the deconfinement and chiral crossovers happen at about the same temperature. We extrapolate to the continuum with different local and global fits, either including a $P_4/N_{\tau}^4$ term (\mbox{cf.} Sec. \ref{Global fits}) and $N_{\tau}=6$ data or excluding both. The position of the peak scatters in the range $150.5$ MeV $\leq T \leq$ $157$ MeV, depending on the details of the fits, which are discussed in Appendix \ref{appendix:fits}. We consider the local fit excluding $P_4$ and $N_{\tau}=6$ as our final result and find the maximum of $S_Q$ at $T_S=153^{+6.5}_{-5}\ {\rm MeV}$. We estimate the systematic uncertainty of $T_S$ as $^{+4}_{-2.5}\ {\rm MeV}$ from the spread of the fits, which is smaller than the statistical errors that we quote. The deconfinement transition temperature was defined as the inflection point of the renormalized Polyakov loop in Refs. \cite{Aoki:2006we,Aoki:2009sc} and values of $T_L=171(3)(4)$ MeV\footnote{ We adjusted for the change in the value of the kaon decay constant that was used to set the scale in Ref. \cite{Aoki:2006we} to the most recent value.} and $T_L=170(4)(3)$ MeV have been found, respectively. These values are significantly larger than the chiral transition temperature. The most likely reason for this is that the inflection point of the renormalized Polyakov loop depends on the renormalization condition and could be different from the inflection point of $F_Q$. The inflection point of the renormalized Polyakov loop can be obtained from the equation \begin{align} 0 = & \frac{1}{L^{\rm ren}} \frac{\partial^2 L^{\rm ren}}{\partial T^2} = \left(\frac{\partial f_Q}{\partial T}\right)^2 - \left(\frac{\partial^2 f_Q}{\partial T^2}\right) \nonumber \\ = & \frac{1}{T}\left( \frac{(f_Q+S_Q)^2 -2(f_Q+S_Q)}{T} + \left(\frac{\partial S_Q}{\partial T}\right) \right), \label{eq:scheme dependence} \end{align} whereas the inflection point of the free energy $F_Q$ is obtained from $0=\partial S_Q/\partial T$.
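The second equality in Eq. (\ref{eq:scheme dependence}) follows from a short computation using only $L^{\rm ren}=\exp(-f_Q)$, the dimensionless free energy $f_Q=F_Q/T$ and $S_Q=-\partial F_Q/\partial T$, namely
\begin{align*}
\frac{\partial f_Q}{\partial T} = -\frac{f_Q+S_Q}{T}, \qquad
\frac{\partial^2 f_Q}{\partial T^2} = -\frac{1}{T}\frac{\partial S_Q}{\partial T} + \frac{2(f_Q+S_Q)}{T^2},
\end{align*}
which, inserted into $(\partial f_Q/\partial T)^2-\partial^2 f_Q/\partial T^2$, give the stated right-hand side. Numerically, the inflection point of $L^{\rm ren}$ can be located directly from an interpolation of $f_Q(T)$; the following minimal sketch, with placeholder grids (in GeV) and a hypothetical bracketing interval, solves $(\partial_T f_Q)^2=\partial^2_T f_Q$:
\begin{verbatim}
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def polyakov_inflection(T_grid, fq_grid, bracket=(0.16, 0.24)):
    # root of (df_Q/dT)^2 - d^2f_Q/dT^2, i.e. d^2 L^ren/dT^2 = 0
    fq = CubicSpline(T_grid, fq_grid)
    return brentq(lambda T: fq(T, 1)**2 - fq(T, 2), *bracket)
\end{verbatim}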
In other words, the two inflection points of the Polyakov loop and the free energy would agree if and only if $f_Q+S_Q=0$ or $2$. This would be the case if the weak-coupling relation $S_Q \simeq -f_Q$ were valid close to the crossover point. Instead, in support of the findings in Refs. \cite{Aoki:2006we, Aoki:2009sc}, we find the inflection point of the renormalized Polyakov loop significantly above the chiral transition temperature, between $180$ and $200$ MeV for each of $N_{\tau}=12$, $10$, $8$ and $6$. For $N_{\tau}=12$ the quoted error underestimates the systematic uncertainties in this range (\mbox{cf.} Appendix \ref{appendix:fits}). \mbox{Equation} (\ref{eq:scheme dependence}) shows that the inflection point of $L^{\rm ren}=\exp{(-f_Q)}$ depends on the term $c$ of $c_Q$ (\mbox{cf. Sec.}~\ref{sec:renL}) through $f_Q$ and $f_Q^2$. For $F_Q$, a change in the renormalization condition does not affect the inflection point in the continuum limit, which, in fact, does not depend on $c$. We also compare our continuum results for $S_Q$ with previous calculations obtained at much larger quark masses on $N_{\tau}=4$ lattices \cite{Kaczmarek:2005gi, Petreczky:2004pz}. This comparison is shown in Fig. \ref{fig:sq_old_comp}. The temperature axis in the figure has been rescaled by the corresponding transition temperatures. We see that the peak in the entropy is much reduced compared to the previous calculations; its height is smaller by about a factor of two. Both larger quark masses and fewer quark flavors correspond to physical settings in between QCD with 2+1 flavors at physical quark masses and pure gauge theory. In pure gauge theory $S_Q$ would diverge as the temperature approaches the deconfinement phase transition from above. We further remark that Fig.~\ref{fig:SQ_allnt} clearly shows that the height of the peak decreases with increasing $N_{\tau}$. Therefore, one would generally expect to see a higher peak in $S_Q$ at finite cutoff than in the continuum limit. Hence, the much reduced height of the peak is no surprise. Finally, let us discuss the behavior of $S_Q$ in the high temperature region. For $T>220$ MeV we have sufficiently accurate data for all lattice spacings. We have performed several continuum extrapolations based on global and local fits. These are shown in Fig. \ref{fig:SQcomp}. We can see from the figure that the different continuum extrapolations have overlapping error bands. In particular, the $N_\tau=6$ data are consistent with the $1/N_{\tau}^2$ scaling behavior. The uncertainty grows significantly, however, as we approach $T=400$ MeV, due to the fact that the renormalization constants for the $N_{\tau}=12$ data are available only up to that temperature. In the next section we will discuss how to extend the results to higher temperatures. In Fig. \ref{fig:SQcomp} we also show the results of weak-coupling calculations at leading order with a one-loop running coupling for two different renormalization scales. As one can see from the figure, the LO result for $S_Q$ is not very different from the lattice results; however, the scale dependence is quite large. Furthermore, higher-order corrections are also important. Therefore, for a meaningful comparison of the lattice and the weak-coupling results it is necessary to extend the calculations to higher temperatures and to higher orders in the perturbative expansion. This will be discussed in Sec. \ref{sec:weak}.
} \section{\label{sec:high} Polyakov loop at high temperatures} { So far, the highest temperature at which we could study the Polyakov loop, or equivalently $F_Q$, has been limited by the knowledge of $c_Q$, which is determined from the zero temperature static $Q\bar Q$ energy. Below we will discuss a method to work around this limitation, which we call the direct renormalization scheme. The idea of the direct renormalization scheme is to determine $c_Q$ by comparing the free energy $f_Q$ calculated at the same temperature but different $N_\tau$ \cite{Gupta:2007ax}. \mbox{Equation}~(\ref{eq: fqren}) can be applied to obtain $c_Q(\beta)$ once $f_Q(T(\beta,N_{\tau}),N_{\tau})$ and $f_Q^{\rm{bare}}(\beta,N_\tau)$ are known. If there were no cutoff effects in $f_Q(T(\beta,N_\tau),N_{\tau})$ after renormalization, $c_Q(\beta)$ at some value of $\beta$ would read \begin{align} c_Q(\beta) =&\ \frac{1}{N_{\tau}} \big[ N_{\tau}^{\rm{ref}} c_Q(\beta^{\rm{ref}}) + \nonumber\\ &\ f_Q^{\rm{bare}}(\beta^{\rm{ref}},N_{\tau}^{\rm{ref}})- f_Q^{\rm{bare}}(\beta,N_{\tau}) \big], \end{align} where $N_\tau^{\rm ref}$ and $\beta^{\rm ref}$ correspond to a reference point at which $c_Q$ is known. \begin{figure}[b] \includegraphics[width=7cm]{residualcutoffeffects.eps} \caption{\label{fig:Delta} Extrapolation of residual cutoff effects in $ f_Q^{\rm{ren}}(T(\beta,N_{\tau}),N_\tau^{\rm{ref}})$. The vertical lines indicate the start of the extrapolated $ \Delta_{N_\tau,N_\tau^{\rm{ref}}}(T) $ for each pair $( N_\tau,N_\tau^{\rm{ref}} )$. } \end{figure} \begin{figure} \includegraphics[width=7cm]{flowchart.eps} \caption{\label{fig:flowchart} The flow chart sketches the different steps of the direct renormalization procedure. For each step the temperature $T(\beta,N_\tau)$ is limited by the corresponding $\beta \leq \beta_{\max}$. } \end{figure} Next, we study the cutoff dependence of $f_Q$. It is convenient to do so by considering the following difference \begin{align} \Delta_{N_\tau,N_\tau^{\rm{ref}}}(T) =& f_Q(T(\beta,N_{\tau}),N_{\tau}) \nonumber \\ -& f_Q(T(\beta^{\rm ref},N_{\tau}^{\rm ref}),N_\tau^{\rm{ref}}). \label{eq: residual cutoff effects} \end{align} In Fig. \ref{fig:Delta} we show $\Delta_{N_\tau,N_\tau^{\rm{ref}}}(T)$ as a function of the temperature for different combinations of $N_\tau$ and $N_\tau^{\rm ref}$. At low temperatures, $T<250$ MeV, this quantity shows a strong temperature dependence. However, for $T>250$ MeV the temperature dependence of $\Delta_{N_\tau,N_\tau^{\rm{ref}}}(T)$ is rather mild, and one may approximate it by a constant. Therefore, we assume that $\Delta_{N_\tau,N_\tau^{\rm{ref}}}(T)$ remains constant at temperatures above those for which lattice data for $c_Q$ are available. If the predictions for $c_Q$ from all possible pairs $(N_\tau,N_{\tau}^{\rm{ref}})$ are consistent within uncertainties, one may conclude in retrospect that this assumption was justified. We estimate the central value of the constant as the average of the minimum and the maximum of the one sigma band of $\Delta_{N_\tau,N_\tau^{\rm{ref}}}(T)$ for $ T>250 $ MeV, and its uncertainty as half of their difference. This estimate is shown in Fig. \ref{fig:Delta}.
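This constant extrapolation can be summarized by the following minimal sketch, where \texttt{delta} and \texttt{err} are placeholder arrays for the central values and $1\sigma$ errors of $\Delta_{N_\tau,N_\tau^{\rm ref}}(T)$ on a temperature grid \texttt{T} (in GeV); the half-width convention for the uncertainty is our assumption:
\begin{verbatim}
import numpy as np

def delta_av(T, delta, err, Tmin=0.250):
    # constant estimate above Tmin: central value = midpoint of
    # the min/max of the 1-sigma band, uncertainty = half-width
    T, delta, err = map(np.asarray, (T, delta, err))
    m = T > Tmin
    lo = np.min(delta[m] - err[m])
    hi = np.max(delta[m] + err[m])
    return 0.5*(hi + lo), 0.5*(hi - lo)
\end{verbatim}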
Using the constant $\Delta_{N_\tau,N_\tau^{\rm{ref}}}^{\rm av}$ determined this way, together with the corresponding error, we can provide an estimate for $c_Q$ that should be free of cutoff effects: \begin{align} c_Q(\beta) =&\ \frac{1}{N_{\tau}} \big[ N_{\tau}^{\rm{ref}} c_Q^{Q \bar Q}(\beta^{\rm{ref}}) + \Delta_{N_\tau,N_\tau^{\rm{ref}}}^{\rm av} + \nonumber\\ &\ f_Q^{\rm{bare}}(\beta^{\rm{ref}},N_{\tau}^{\rm{ref}})- f_Q^{\rm{bare}}(\beta,N_{\tau}) \big] . \label{eq: direct renormalization} \end{align} \begin{figure}[b] \includegraphics[width=8cm]{comparehicQ.eps} \caption{\label{fig: comparehizren} Comparison between the renormalization constant $ c_Q(\beta) $ from the direct renormalization and $ Q \bar Q $ procedures. Symbols and data are explained in the text. } \end{figure} We use all possible pairs $( N_\tau,N_\tau^{\rm{ref}} )$ and compute $ c_Q^{\rm{direct}}(\beta,N_\tau,N_\tau^{\rm{ref}}) $ via \mbox{Eq.}~(\ref{eq: direct renormalization}) from $ c_Q^{Q \bar Q}(\beta) $ for all possible temperatures. We can calculate $c_Q$ with the direct renormalization procedure only up to $\beta=8.57$ if we use the $N_{\tau}=8$ results for the bare Polyakov loops ($T(\beta=8.57,N_{\tau}=8)=1155$ MeV), or up to $\beta=8.85$ if we use the $N_{\tau}=12$ results for the bare Polyakov loop ($T(\beta=8.85,N_{\tau}=12)=974$ MeV). To extend the beta range even further, we use a two-step procedure for the direct renormalization. First, we compute $ c_Q^{\rm direct} $ up to $ \beta=8.85 $ from $c_Q^{Q \bar Q}$ in the first iteration. Next, we add the new values of the renormalization constant to the bare free energies up to $ T(\beta=8.85,N_{\tau}=4)=2922 $ MeV. Finally, we compute $ c_Q^{\rm direct} $ up to $ \beta=9.67 $ from $ c_Q^{\rm direct} $ in a second iteration and add the new values of the renormalization constant to the bare free energies up to $ T(\beta=9.67,N_{\tau}=4)=5814 $ MeV. We sketch the procedure in the flow chart in Fig.~\ref{fig:flowchart}. In order to test the robustness and predictive power of the direct renormalization, we omit $ c_Q^{Q \bar Q}(\beta) $ for $ \beta>7.373 $ and calculate $c_Q^{\rm direct}$ using the above procedure. After excluding $c_Q^{Q \bar Q}(7.596)$ and $ c_Q^{Q \bar Q}(7.825) $ from the input, we compare the predictions for $ c_Q^{\rm{direct}}(7.596,N_\tau,N_\tau^{\rm{ref}}) $ and $ c_Q^{\rm{direct}}(7.825,N_\tau,N_\tau^{\rm{ref}}) $ with the known values of $ c_Q^{Q \bar Q}(\beta) $. We show this comparison for a few selected $ \beta $ values and pairs ($ N_\tau,N_\tau^{\rm{ref}} $) in \mbox{Fig.}~\ref{fig: comparehizren}. Black bursts represent $ c_Q^{Q \bar Q}(\beta) $ data from zero temperature lattices. Results $ c_Q^{\rm{direct}}(\beta,N_\tau,N_\tau^{\rm{ref}}) $ inferred from coarser \mbox{resp.} finer lattices ($ N_\tau>N_\tau^{\rm{ref}} $ \mbox{resp.} $ N_\tau<N_\tau^{\rm{ref}} $) are displaced to the right \mbox{resp.} left of $ c_Q^{Q \bar Q}(\beta) $. The shape and color of the symbols encode $ N_\tau^{\rm{ref}} $ and $ N_\tau $. As one can see from the figure, the direct renormalization method correctly reproduces the values of the renormalization constant obtained in the $Q\bar Q$ procedure. Since no trends in $ c_Q^{\rm{direct}}(\beta,N_\tau,N_\tau^{\rm{ref}}) $ depending on either $ N_\tau $ or $ N_\tau^{\rm{ref}} $ are observed, we conclude that no residual cutoff effects are present.
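The core of the procedure is the following one-line update, sketched here with a placeholder interpolation \texttt{fq\_bare(beta, ntau)} of the bare free energies:
\begin{verbatim}
def cQ_direct(beta, ntau, beta_ref, ntau_ref,
              cQ_ref, delta_av, fq_bare):
    # Eq. (direct renormalization): infer c_Q(beta) from a
    # reference point (beta_ref, ntau_ref) where c_Q is known;
    # feeding the newly obtained values back in as references
    # extends the accessible beta range (two-step procedure).
    return (ntau_ref*cQ_ref + delta_av
            + fq_bare(beta_ref, ntau_ref)
            - fq_bare(beta, ntau)) / ntau
\end{verbatim}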
We average over all possible pairs $( N_\tau,N_\tau^{\rm{ref}} )$ that reproduce one of the $ \beta $ values of an underlying Polyakov loop within $ \pm 0.01 $, take the mean of the errors as the statistical error and the standard deviation as an estimate of the systematic error (at most 25\% of the statistical error). We add these errors in quadrature and show the $ 1\sigma $ bands of $ c_Q^{\rm{direct}}(\beta) $ in the figure. We show with dark blue lines (for $ \beta \leq 7.373 $) that the input values $ c_Q^{Q \bar Q}(\beta) $ are reproduced. Hence, the consistency between the two renormalization schemes is evident. We show with cyan lines (for $ \beta > 7.373 $) that the predictions of the direct renormalization procedure are consistent with $ c_Q^{Q \bar Q}(\beta) $ outside of the input $ \beta $ range. Therefore, we confirm that our approach for the direct renormalization procedure has predictive power outside of the input $ \beta $ range and that our extrapolation assuming constant cutoff effects in \mbox{Fig.}~\ref{fig:Delta} is justified. \begin{figure}[t] \includegraphics[width=7cm]{directcQ.eps} \caption{\label{fig: directzren} Renormalization constant $ c_Q(\beta) $ from the direct renormalization and $ Q \bar Q $ procedures. Interpolations are shown as $ 1\sigma $ bands and the data points are explained in the text. The inset shows the derivative $\frac{\partial c_Q}{\partial \beta}$. } \end{figure} \begin{figure*} \includegraphics[width=8cm]{ntvardirsplinefQ.eps} \includegraphics[width=8cm]{ntvardirsplineSQ.eps} \caption{\label{fig:high_FQ_SQ} The free energy (left) and the entropy (right) of a static quark in the high temperature region. The bands show the results of interpolation with the corresponding uncertainty. For comparison, the free energy for SU(3) Yang-Mills theory on $ N_\tau=4 $ lattices is included~\cite{Gupta:2007ax}. The symbols on the right correspond to $S_Q$ calculated from finite differences. } \end{figure*} Having determined the renormalization constant in the extended range of $\beta$ (\mbox{cf.} \mbox{Fig.}~\ref{fig: directzren}), it is straightforward to calculate the free energy $f_Q$ at considerably higher temperatures. Namely, our calculations with $N_{\tau}=12$ now extend to $T=900$ MeV, while for $N_{\tau}=6$ and $N_{\tau}=8$ we can reach temperatures of about $3800$ MeV and $2900$ MeV, respectively. The results of our calculations at high temperatures ($T>400$ MeV) are shown in Fig. \ref{fig:high_FQ_SQ} for different $N_{\tau}$. In the figure we also show the local interpolation of the data as bands. One can see that the cutoff dependence of the data is rather mild, i.e., the bands corresponding to different $N_{\tau}$ are largely overlapping, including the $N_{\tau}=4$ results. In other words, even for our coarsest lattice the cutoff effects are very small in this high temperature region. This will be important for the comparison with the weak-coupling calculations discussed in Sec. \ref{sec:weak}, since this comparison can be performed using the $N_{\tau}=4$ results that extend up to temperatures as high as $5814$ MeV. We also note that the free energy becomes negative for $T>500$ MeV, as expected from the weak-coupling calculations. Another interesting feature of $f_Q$ is that it has a minimum at temperatures of about $1500$ MeV, corresponding to a maximum of the renormalized Polyakov loop.
This feature was observed in the SU(3) gauge theory, where the renormalized Polyakov loop has a maximum at temperatures of about $12T_d$, with $T_d$ being the deconfinement phase transition temperature \cite{Gupta:2007ax}. These SU(3) Yang-Mills theory results, which have been included in \mbox{Fig.}~\ref{fig:high_FQ_SQ}, yield significantly smaller $|f_Q|$ than our results with 2+1 flavors. The difference is most pronounced in the vicinity of the minimum of $f_Q$. From the interpolations of $f_Q$ it is straightforward to calculate the entropy of a static quark. This is shown in Fig. \ref{fig:high_FQ_SQ} (right panel). Furthermore, since for $T>400$ MeV the free energy varies smoothly with the temperature, it is possible to calculate $S_Q$ without any interpolation: we can estimate $S_Q$ by approximating the temperature derivative of $F_Q$ by finite differences of the lattice data on $F_Q$ at two neighboring temperature values. The entropy estimated from the finite differences is also shown in Fig. \ref{fig:high_FQ_SQ}, and it agrees very well with the results obtained from interpolations. For $T>900$ MeV we have $S_Q \simeq -f_Q$, as expected in the weak-coupling picture. We also note that the entropy at high temperatures is higher than in the SU(3) gauge theory. } \section{\label{sec:flow} Renormalization with gradient flow} { \begin{figure} \includegraphics[width=8cm]{nt12ftvarflowfq.eps} \caption{ The free energy of a static quark calculated on $N_{\tau}=12$ lattices for different flow times. } \label{fig:nt12flow} \end{figure} \begin{figure} \includegraphics[width=8cm]{LO.eps} \caption{ The free energy of a static quark at leading order calculated for different flow times. The values of the free energy have been shifted by $300$ MeV (see text). } \label{fig:LO_fdep} \end{figure} The gradient flow was introduced as a tool to remove short distance divergences in lattice observables \cite{Luscher:2010iy, Luscher:2011bx}. It is defined by the differential equation \cite{Luscher:2010iy} \begin{equation} \frac{d V_{\mu}(x,t)}{d t}=-g_0^2 \partial_{x,\mu} S[V] V_{\mu}(x,t), \label{flow} \end{equation} where $S[V]$ is the lattice gauge action and $g_0^2=10/\beta$ is the bare lattice gauge coupling. The flowed link variable $V_{\mu}(x,t)$ has the initial value given by the original link variable, $V_{\mu}(x,t=0)=U_{\mu}(x)$. Here we use the same notation for $\partial_{x,\mu} S[V]$ as in Ref. \cite{Luscher:2010iy}. The gradient flow has been extensively used at zero temperature for scale setting (see, e.g., Refs. \cite{Borsanyi:2012zs,Bazavov:2015yea}) as well as at nonzero temperature for the calculations of the equation of state \cite{Asakawa:2013laa}. In Ref. \cite{Petreczky:2015yta} it was proposed to use the gradient flow to calculate the renormalized Polyakov loops. It was shown there that, up to a temperature-independent constant, the free energy of a static quark calculated using the gradient flow agrees with the free energy obtained in the conventional ($Q\bar Q$) scheme in the continuum limit up to temperatures $T=400$ MeV, provided that the flow time $f=\sqrt{8 t}$ satisfies the condition: \begin{equation} a \ll f \ll 1/T,~~{\rm or}~~~ 1 \ll f T N_{\tau} \ll N_{\tau}. \label{eq:scaling_cond} \end{equation} The gradient flow method also enabled the calculation of the free energy of static charges in higher representations and confirmed the expected Casimir scaling in the high temperature region \cite{Petreczky:2015yta}.
Here we would like to extend these studies to higher temperatures and also analyze the fluctuations of the Polyakov loop. \subsection{Renormalized Polyakov loop from gradient flow} We follow the procedure outlined in Ref. \cite{Petreczky:2015yta} and calculate the Polyakov loop at nonzero flow time by replacing the link variables $U_{\mu}(x)$ in Eq. (\ref{defP}) by $V_{\mu}(x,t)$. We use the tree-level Symanzik gauge action in Eq. (\ref{flow}). We calculated the Polyakov loop for the same flow times as in Ref. \cite{Petreczky:2015yta}, namely, $f=\sqrt{8 t}=f_0,~3/4 f_0, 1/2 f_0, 1/4 f_0$ and $1/8 f_0$, with $f_0=0.2129$ fm. See Ref. \cite{Petreczky:2015yta} for further details. In Fig. \ref{fig:nt12flow} we show our numerical results for $N_{\tau}=12$ shifted by a constant such that the results obtained at different flow times agree with the continuum result for $F_Q$ obtained in the previous section at $T=600$ MeV. The bands shown in the figure correspond to the interpolation of the lattice data. One can see from the figure that the temperature dependence of $F_Q$ obtained with $f=f_0,~3/4 f_0, 1/2 f_0$ is very similar to the temperature dependence of the free energy obtained using the direct renormalization procedure for $T<500$ MeV. With a suitable constant shift all these results can be made to agree with each other in this temperature region. For higher temperatures, however, the temperature dependence of $F_Q$ obtained with these values of the flow time is not captured correctly. With a smaller flow time, namely $f=f_0/4$, the temperature dependence of $F_Q$ obtained using the direct renormalization method is reproduced. However, decreasing the flow time even further, to $f_0/8$, leads to a completely different temperature dependence. Thus, for $T>500$ MeV the results are very sensitive to the choice of the flow time, i.e., the scaling window is very narrow. We also performed the calculations for $N_{\tau}=6,~8$ and $10$. The corresponding results are similar to the ones shown in Fig. \ref{fig:nt12flow}, but the flow time dependence is even stronger. This stronger flow time dependence is expected (cf. Eq. (\ref{eq:scaling_cond})). \begin{figure*} \includegraphics[width=8cm]{chi_f3.eps} \includegraphics[width=8cm]{chi_nt12.eps} \caption{\label{fig:chi} The Polyakov loop susceptibility obtained using the gradient flow for $f=3f_0$ and different $N_{\tau}$ (left) and for $N_{\tau}=12$ and $f=f_0$, $2f_0$ and $3 f_0$ (right).} \end{figure*} To understand the flow time dependence of the free energy of a static quark shown in Fig. \ref{fig:nt12flow}, it is useful to analyze the leading order result for the Polyakov loop obtained at nonzero flow time \cite{Datta:2015bzm}. In terms of the free energy the leading order result reads \begin{equation} F_Q^f(T)=C_F \alpha_s \frac{\sqrt{\pi}}{f}- C_F \alpha_s \frac{m_D}{2} \tilde \Phi(m_D f/2), \label{LO_f} \end{equation} where $\displaystyle \tilde \Phi(z)=e^{z^2} \frac{2}{\sqrt{\pi}} \int_z^{\infty} d x e^{-x^2}$. Here and in what follows we use the label $f$ on the free energy to denote the free energy obtained with the gradient flow. For sufficiently small flow time this result approaches the well-known leading order result for $F_Q$ (up to a temperature-independent constant $\sim 1/f$), since $\tilde \Phi(z=0)=1$. The question then arises as to which values of the flow time can be considered sufficiently small. To address this, in Fig. \ref{fig:LO_fdep} we show the leading order result given by Eq. (\ref{LO_f}), omitting the constant term $\sim 1/f$.
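Equation (\ref{LO_f}) is easy to evaluate numerically, since $\tilde\Phi(z)=e^{z^2}{\rm erfc}(z)$ is the scaled complementary error function. The following minimal sketch assumes, as ingredients not fixed by Eq. (\ref{LO_f}) itself, the standard LO Debye mass $m_D^2=g^2T^2(N_c/3+N_f/6)$ and a one-loop running coupling with $\Lambda_{\overline{MS}}=315$ MeV (the value used in Sec. \ref{sec:weak}); it drops the $T$-independent $\sim 1/f$ term, as in Fig. \ref{fig:LO_fdep}:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z^2)*erfc(z)

CF, Nc, Nf, LAM = 4.0/3.0, 3, 3, 0.315   # Lambda_MSbar in GeV

def alpha_s(mu):                  # one-loop running coupling
    b0 = 11.0 - 2.0*Nf/3.0
    return 4.0*np.pi/(b0*np.log(mu**2/LAM**2))

def FQ_flow_LO(T, f, xi=2.0*np.pi):
    # second term of Eq. (LO_f) at the scale mu = xi*T;
    # T in GeV, f in 1/GeV (f0 = 0.2129 fm ~ 1.079/GeV)
    a = alpha_s(xi*T)
    mD = T*np.sqrt(4.0*np.pi*a*(Nc/3.0 + Nf/6.0))
    return -0.5*CF*a*mD*erfcx(0.5*mD*f)
\end{verbatim}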
Furthermore, we shifted $F_Q^f$ by $300$ MeV to facilitate the comparison with the lattice results. We see a similar trend in the flow time dependence of the leading order result for $F_Q^f(T)$: as the flow time increases, the temperature dependence becomes milder. For $T<400$ MeV, $f=f_0/4$ can be considered sufficiently small. However, at higher temperatures we must have $f< f_0/8$. On the other hand, as we have seen above, the value of $f=f_0/8$ is too small for $N_{\tau}=12$ lattices to remove the lattice artifacts. This suggests that one has to use lattices with temporal extent $N_{\tau}>12$ to obtain the correct temperature dependence of the Polyakov loop for $T>400$ MeV. One could also try to follow a different philosophy and fix the flow time such that $f \cdot T={\rm const}$, as was done in Ref. \cite{Datta:2015bzm}. In this case the term proportional to $1/f$ would contribute to the temperature dependence of $F_Q^f$ and thus to the entropy $S_Q^f=-\partial F_Q^f/\partial T$. The additional contribution to the entropy just amounts to a constant shift compared to the entropy of a static charge defined in the conventional way, i.e., the temperature dependence of the entropy would be the same as before. By matching the entropy obtained from the gradient flow to the entropy of a static quark obtained in the conventional scheme, one could in principle obtain results for the entropy at higher temperatures. We tried to implement this scheme; however, the resulting errors turn out to be too large to obtain reliable results for the entropy of a static charge at high temperatures. \begin{figure*} \includegraphics[width=5.8cm]{ratA_f0.eps} \includegraphics[width=5.8cm]{ratA_f1.eps} \includegraphics[width=5.8cm]{ratA_nt8.eps} \caption{\label{fig:ratA} The ratio of the susceptibilities $R_A$ shown as a function of the temperature for zero flow time (left), flow time $f=f_0$ (middle) and for different flow times but for $N_{\tau}=8$ (right).} \end{figure*} \begin{figure*} \includegraphics[width=5.8cm]{ratT_f0.eps} \includegraphics[width=5.8cm]{ratT_f1.eps} \includegraphics[width=5.8cm]{ratT_f3.eps} \caption{\label{fig:ratT} The ratio of the susceptibilities $R_T$ shown as a function of the temperature for zero flow time (left), flow time $f=f_0$ (middle) and for flow time $f=3 f_0$ (right). } \end{figure*} \subsection{Fluctuations of the Polyakov loop} The Polyakov loop susceptibility, defined as \begin{equation} \chi=(V T^3) \left(\langle |P|^2 \rangle - \langle |P| \rangle^2 \right), \label{susc} \end{equation} is often used to study the deconfinement transition in SU(N) gauge theories and for the determination of the transition temperature. It has a sharp peak at the pseudocritical temperature. It is not clear, however, how to renormalize this quantity. Attempts to renormalize it using the square of the renormalization factor of the Polyakov loop have been proposed \cite{Lo:2013etb,Lo:2013hla}. However, apart from being ad hoc, this procedure does not remove all the UV divergences in the susceptibility, as can be seen from the comparison of lattice data obtained for different $N_{\tau}$ \cite{Lo:2013hla}. In Ref. \cite{Datta:2015bzm} the gradient flow was used in the calculation of the Polyakov loop susceptibilities in SU(3) gauge theory. The gradient flow effectively renormalizes the susceptibility and thus no cutoff dependence can be seen \cite{Datta:2015bzm}, but the value of the Polyakov loop susceptibility depends on the choice of the flow time.
The peak position is, however, independent of the flow time and is equal to the phase transition temperature \cite{Datta:2015bzm}. We also used the gradient flow to study the Polyakov loop susceptibility in 2+1 flavor QCD. Our results for flow time $f=3 f_0$ and different $N_{\tau}$ are shown in Fig. \ref{fig:chi} (left panel). The Polyakov loop susceptibility obtained for $f=3 f_0$ shows a peak around $T \simeq 200$ MeV, i.e., at a significantly higher temperature than the peak position $T_S$ in $S_Q$ (\mbox{e.g.}~\mbox{$T_S(N_\tau=12)=157(6)\,{\rm MeV}$}). The $N_{\tau}$ dependence of the Polyakov loop susceptibility is rather mild and does not show a clear tendency. Next, we examine the dependence of the Polyakov loop susceptibility on the flow time. In Fig. \ref{fig:chi} (right panel) we show the flow time dependence of $\chi$ for $N_{\tau}=12$, for which the flow time dependence is expected to be the mildest. We see that the Polyakov loop susceptibility strongly depends on the choice of the flow time. The peak position shifts to larger values as the flow time is decreased from $3 f_0$ to $f_0$. This behavior of the Polyakov loop susceptibility in 2+1 flavor QCD can be understood as follows. Unlike in SU(N) gauge theory, the Polyakov loop is not related to the singular behavior of the free energy in the transition region. The fluctuations of the Polyakov loop are therefore not affected by the critical behavior in the transition region and thus are not enhanced in a significant way. The value of $\chi$ is determined by the regular terms and thus depends on the renormalization procedure, i.e., the choice of the flow time. In addition to the Polyakov loop susceptibility defined by Eq. (\ref{susc}), which corresponds to the fluctuation in the absolute value of the Polyakov loop, one can consider separately the fluctuations of the real and imaginary parts of the Polyakov loop \begin{equation} \chi_L=(V T^3) \left( \langle ({\rm Re} P)^2 \rangle-\langle P\rangle^2 \right),~~\chi_T = (V T^3) \langle ({\rm Im} P)^2 \rangle, \end{equation} which, following Refs. \cite{Lo:2013etb, Lo:2013hla}, we will call the longitudinal and transverse susceptibilities. In the above equations we used the fact that $\langle P \rangle=\langle {\rm Re} P \rangle$ and $\langle {\rm Im} P \rangle=0$. We have calculated $\chi_L$ and $\chi_T$ using the gradient flow. We find that $\chi_L$ behaves as $\chi$, i.e., it has the same flow time dependence, and for $f=3 f_0$ it shows a broad peak in the temperature region $T=(180-200)$ MeV. We also find a significant flow time dependence for $\chi_T$. However, $\chi_T$ has a peak at temperatures around $160$ MeV, i.e., close to the chiral transition temperature. One may speculate that, by increasing the flow time further, the peak position of $\chi_L$ will move closer to the chiral transition temperature, because the large flow time will enhance the infrared fluctuations in the real part of the Polyakov loop. However, we did not pursue this in the present study. In Refs. \cite{Lo:2013etb,Lo:2013hla} the ratios of the Polyakov loop susceptibilities $R_A=\chi/\chi_L$ and $R_T=\chi_T/\chi_L$ have been studied. It has been argued there that these ratios are sensitive probes of deconfinement and are independent of the cutoff. Therefore, we will study these ratios in more detail. First, let us consider the ratio $R_A$. It is shown in Fig. \ref{fig:ratA} as a function of the temperature for various flow times and lattice spacings.
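For reference, all of these quantities follow directly from an ensemble of (flowed) Polyakov loop measurements; the following minimal sketch assumes a complex array \texttt{P} of per-configuration values, the spatial volume \texttt{V} and the temperature \texttt{T} in consistent units:
\begin{verbatim}
import numpy as np

def susceptibilities(P, V, T):
    # chi   = V T^3 ( <|P|^2> - <|P|>^2 )       Eq. (susc)
    # chi_L = V T^3 ( <(Re P)^2> - <Re P>^2 )   uses <P> = <Re P>
    # chi_T = V T^3   <(Im P)^2>                uses <Im P> = 0
    VT3 = V*T**3
    chi   = VT3*(np.mean(np.abs(P)**2) - np.mean(np.abs(P))**2)
    chi_L = VT3*(np.mean(P.real**2) - np.mean(P.real)**2)
    chi_T = VT3*np.mean(P.imag**2)
    return chi, chi_L, chi_T, chi/chi_L, chi_T/chi_L  # ..., R_A, R_T
\end{verbatim}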
For zero flow time, our results for $R_A$ are in qualitative agreement with the results of Ref. \cite{Lo:2013hla}. The ratio $R_A$ exhibits a crossover behavior for temperatures $T=(150-200)$ MeV. However, we see a very strong cutoff ($N_{\tau}$) dependence of this ratio. While for $N_{\tau}=8$ the crossover happens at temperatures close to the chiral transition temperature, for larger $N_{\tau}$ it happens at significantly higher temperatures. For flow time $f=f_0$ we do not see any significant cutoff dependence in $R_A$, i.e., this value of the flow time is sufficiently large to get rid of the cutoff effects and obtain a renormalized quantity for $R_A$ (cf. the middle panel of Fig. \ref{fig:ratA}). Since the cutoff effects are quite small already for $f=f_0$, it is sufficient to study the flow time dependence of our results for the $N_{\tau}=8$ lattice data, which is also shown in Fig. \ref{fig:ratA}. One can see from the figure that, as the flow time increases, the value of $R_A$ at low temperatures increases and the step-function-like behavior of $R_A$ gradually disappears. For flow times $f=2 f_0$ and $f=3 f_0$ the ratio $R_A$ smoothly approaches one from below as the temperature increases and shows no sign of an inflection point. Note that there is no significant flow time dependence of $R_A$ for $f \ge 2 f_0$. The flow time dependence for other $N_{\tau}$ is similar. Now let us examine the temperature dependence of $R_T$. In Fig. \ref{fig:ratT} we show our results for $R_T$ for three different flow times: $f=0, f_0$ and $3 f_0$. For zero flow time we see a sizable cutoff dependence in $R_T$, and our results are qualitatively similar to those of Ref. \cite{Lo:2013hla}. For flow time $f=f_0$ the large cutoff dependence is removed and we see a crossover-like behavior around temperatures of about $160$ MeV. For $f=3 f_0$ we have a very similar picture, and again we see a crossover behavior around temperatures of about $160$ MeV. However, the value of $R_T$ is somewhat reduced at low temperatures. In summary, we find that the ratios $R_A$ and $R_T$ are strongly cutoff dependent, contrary to the conjecture of Refs. \cite{Lo:2013etb, Lo:2013hla} stating their cutoff independence. Evaluating these ratios with the gradient flow removes the cutoff dependence. However, $R_A$ obtained from the gradient flow is not sensitive to deconfinement. On the other hand, $R_T$ obtained from the gradient flow is sensitive to deconfinement: it shows a crossover behavior close to the chiral crossover temperature. Furthermore, $R_T$ is not very sensitive to the choice of the flow time, and therefore it can be considered a sensitive probe of deconfinement. } \section{\label{sec:weak} Comparison with the weak-coupling calculations} { \begin{figure} \includegraphics[width=8cm]{SQ_Nf3.eps} \caption{\label{fig:weak} The lattice results obtained with $N_{\tau}=4$ compared to the weak-coupling calculations.} \end{figure} In this section we discuss the comparison of our lattice results with the weak-coupling calculations. The free energy of a static quark has been calculated to next-to-next-to-leading order (NNLO) \cite{Berwein:2015ayt}. It is important to calculate the free energy to this order to reduce the large scale dependence of the weak-coupling result. We will use the $N_{\tau}=4$ results for this comparison, as these extend up to the rather high temperature of $5814$ MeV and the lattice artifacts are small; see the discussion in Sec. \ref{sec:high}. As was pointed out in Ref.
\cite{Berwein:2015ayt}, the comparison of the lattice results and the weak-coupling calculations is complicated by the fact that the two calculations are performed in different schemes. The weak-coupling calculations are performed in the $\overline{MS}$ scheme, while in the lattice calculations the scheme is fixed by prescribing the value of the zero temperature static $Q \bar Q$ energy at some distance. The two schemes can be related by a constant (temperature-independent) shift in $F_Q$ that can be calculated. This, however, introduces additional uncertainty in the comparison. The most straightforward way to perform the comparison of the lattice and the weak-coupling results is to consider the entropy \cite{Berwein:2015ayt}. Such a comparison has been performed in SU(3) gauge theory, i.e., for $N_f=0$, in a temperature range extending up to $24T_d$, with $T_d \simeq 300$ MeV being the deconfinement phase transition temperature \cite{Berwein:2015ayt}. It was found that the lattice data lie in between the leading order (LO) and the NNLO results, and at the highest temperature the NNLO and the lattice results agree within the uncertainties. In Fig. \ref{fig:weak} we show the comparison of the LO and NNLO weak-coupling results with the $N_{\tau}=4$ results for $S_Q$. We used the one-loop running coupling constant in the weak-coupling calculations and the value $\Lambda_{\overline{MS}}=315$ MeV obtained from the static energy at zero temperature \cite{Bazavov:2014soa}. This value is compatible with the earlier determination from the static energy in Ref. \cite{Bazavov:2012ka}. The bands shown in Fig. \ref{fig:weak} correspond to scale variations between $\mu=\pi T$ and $\mu=4 \pi T$. At the highest temperature the lattice results and the NNLO results agree within the estimated uncertainties. At lower temperatures, $T<1500$ MeV, the lattice results are closer to the LO weak-coupling results. For $T<1000$ MeV the NNLO result for $S_Q$ can turn negative for some choices of the renormalization scale. This is clearly unphysical behavior, indicating that the higher-order corrections are too large. The situation is quite different from the case of quark number susceptibilities, where the weak-coupling prediction seems to work for $T>300$ MeV \cite{Bazavov:2013uja,Ding:2015fca}. This is due to the fact that quark number susceptibilities are dominated by the contribution of the non-static Matsubara modes, while for the free energy of a static quark the dominant contribution comes from the static sector \cite{Berwein:2015ayt}. Overall, the agreement of the weak-coupling and the lattice results for $S_Q$ is similar to the case of the SU(3) gauge theory. As previously noted in Sec.~\ref{sec:high}, the value of $S_Q$ at high temperatures in QCD is larger than in the SU(3) gauge theory. This increase is well explained by the weak-coupling calculations. } \section{Conclusions} { In summary, we have calculated the free energy of a static quark in 2+1 flavor QCD at physical quark masses using several lattice spacings and in a large temperature range. We have presented continuum results for this quantity at much higher temperatures than previously available. We also calculated the entropy of a static quark and showed that it is a useful quantity for studying deconfinement in 2+1 flavor QCD. Namely, we showed that it has a peak at a temperature around the chiral transition temperature, indicating that the deconfinement and chiral transitions happen at similar temperatures.
The entropy of a static quark is also useful for comparing lattice and weak-coupling results at high temperatures. Since the cutoff effects are very small at high temperatures, we could perform this comparison using the $N_{\tau}=4$ lattice results, which extend up to temperatures as high as $5814$ MeV. At the highest temperatures we see agreement between the lattice and the NNLO weak-coupling results within the estimated uncertainties, but at lower temperatures the higher-order corrections become large and the weak-coupling expansion may not be reliable. We also studied the fluctuations of the Polyakov loop using the gradient flow. We showed that the Polyakov loop susceptibilities can be renormalized using the gradient flow and that the transverse Polyakov loop susceptibility may be a sensitive probe of deconfinement. } \section*{Acknowledgments} This work was supported by the U.S. Department of Energy under Contract No. DE-SC0012704. We acknowledge the support by the DFG Cluster of Excellence ``Origin and Structure of the Universe'' (Universe cluster). The calculations have been carried out on the Blue Gene/L computer of the New York Center for Computational Sciences at BNL, at NERSC, on the computing facilities of the Computational Center for Particle and Astrophysics (C2PAP) and on the SuperMUC cluster of the Leibniz Supercomputing Centre (LRZ). Usage of C2PAP and SuperMUC took place under the three Universe cluster grants ``Static Quark Correlators in lattice QCD at nonzero temperature'' for 2014, 2015 and 2016 (project ID pr83pu) and the LRZ grant ``Properties of QCD at finite temperature'' for 2015 (project ID pr48le). N.~Brambilla, A.~Vairo and J. H.~Weber acknowledge the support by the Universe cluster for the seed project ``Simulating the Hot Universe'', by the Bundesministerium f\"{u}r Bildung und Forschung (BMBF) under grant ``Verbundprojekt 05P2015 - ALICE at High Rate (BMBF-FSP 202) GEM-TPC Upgrade and Field theory based investigations of ALICE physics'' under grant No. 05P15WOCA1 and by the Kompetenznetzwerk f\"{u}r Wissenschaftliches H\"{o}chstleistungsrechnen in Bayern (KONWIHR) for the Multicore-Software-Initiative with the project ``Production of gauge configurations at zero and nonzero temperature'' (KONWIHR-IV). H.-P. Schadler was funded by the FWF DK W1203 ``Hadrons in Vacuum, Nuclei and Stars''.
\section{Introduction} \setcounter{equation}{0} In this paper, we study a family of spheres with constant mean curvature (CMC) in the Riemannian Heisenberg group $H^1$. We introduce in $H^1$ two real parameters that can be used to deform $H^1$ to the sub-Riemannian Heisenberg group, on the one hand, and to the Euclidean space, on the other hand. Even though we are not able to prove that these CMC spheres are in fact isoperimetric sets, we obtain several partial results in this direction. Our motivation comes from the sub-Riemannian Heisenberg group, where it is conjectured that the solution of the isoperimetric problem is obtained by rotating a Carnot-Carath\'eodory geodesic around the center of the group, see \cite{Pan1}. This set is known as Pansu's sphere. The conjecture has been proved only under additional regularity ($C^2$-regularity, convexity) or symmetry assumptions; see \cite{CDST,FLM,M3,MR,R,RR}. Given a real parameter $\tau\in\mathbb R$, let $\h = \mathrm{span}\{X,Y,T\}$ be the three-dimensional real Lie algebra spanned by three elements $X,Y,T$ satisfying the relations $[X,Y] = -2\tau T$ and $[X,T] = [Y,T] = 0$. When $\tau\neq 0$, this is the Heisenberg Lie algebra and we denote by $H^1$ the corresponding Lie group. We will omit reference to the parameter $\tau\neq0$ in our notation. In suitable coordinates, we can identify $H^1$ with $\mathbb C\times\mathbb R$ and assume that $X,Y,T$ are left-invariant vector fields in $H^1$ of the form \begin{equation}\label{XYT} X = \frac 1 \epsilon\Big( \frac{\partial}{\partial x}+\sigma y \frac{\partial}{\partial t}\Big), \quad Y = \frac 1 \epsilon\Big( \frac{\partial}{\partial y} - \sigma x\frac{\partial}{\partial t}\Big), \quad\textrm{and}\quad T = \epsilon^2 \frac{\partial}{\partial t}, \end{equation} where $(z,t)\in\mathbb C\times\mathbb R$ and $z=x+iy$. The real parameters $\varepsilon>0$ and $\sigma\neq 0$ are such that \begin{equation}\label{tau-sigma} \tau \varepsilon^4= \sigma . \end{equation} Let $\langle \cdot,\cdot\rangle$ be the scalar product on $\h$ making $X,Y,T$ orthonormal, which is extended to a left-invariant Riemannian metric $g=\langle \cdot,\cdot\rangle$ in $H^1$. The Riemannian volume of $H^1$ induced by this metric coincides with the Lebesgue measure $\mathcal L^3$ on $\mathbb C\times\mathbb R$ and, in fact, it turns out to be independent of $\varepsilon$ and $\sigma$ (and hence of $\tau$). When $\epsilon =1$ and $\sigma\to0$, the Riemannian manifold $(H^1,g)$ converges to the Euclidean space. When $\sigma\neq 0$ and $\varepsilon\to0^+$, then $H^1$ endowed with the distance function induced by the rescaled metric $\varepsilon^{-2} \langle \cdot,\cdot\rangle$ converges to the sub-Riemannian Heisenberg group. The boundary of an isoperimetric region is a surface with constant mean curvature. In this paper, we study a family of CMC spheres $\Sigma_R\subset H^1$, with $R>0$, that foliate $H^1_* = H^1\setminus \{0\}$, where $0$ is the neutral element of $H^1$. Each sphere $\Sigma_R$ is centered at $0$ and can be described by an explicit formula that was first obtained by Tomter \cite{T}; see Theorem \ref{TOM} below. We conjecture that, within its volume class and up to left translations, the sphere $\Sigma_R$ is the unique solution of the isoperimetric problem in $H^1$. When $\epsilon = 1$ and $\sigma\to0$, the spheres $\Sigma_R$ converge to the standard sphere of the Euclidean space.
When $\sigma \neq 0$ is fixed and $\epsilon\to0^+$, the spheres $\Sigma_R$ converge to Pansu's sphere. In Section \ref{TRE}, we study some preliminary properties of $\Sigma_R$, its second fundamental form and principal curvatures. A central object in this setting is the left-invariant $1$-form $\vartheta \in \Gamma(T^* H^1)$ defined by \begin{equation} \label{1.1.1} \vartheta(V) = \langle V,T\rangle\quad \text{for any $V\in\Gamma( TH^1)$.} \end{equation} The kernel of $\vartheta$ is the horizontal distribution. Let $N$ be the north pole of $\Sigma_R$ and $S=-N$ its south pole. In $\Sigma_R^* = \Sigma_R\setminus \{\pm N\}$ there is an orthonormal frame of vector fields $X_1,X_2 \in \Gamma(T \Sigma_R^*)$ such that $\vartheta(X_1)=0$, i.e., $X_1$ is a linear combination of $X$ and $Y$. In Theorem \ref{3.1}, we compute the second fundamental form of $\Sigma_R$ in this frame. We show that the principal directions of $\Sigma_R$ are given by a rotation of the frame $X_1,X_2$ by a \emph{constant} angle depending on the mean curvature of $\Sigma_R$. In Section \ref{S4}, we link in a continuous fashion the foliation property of Pansu's sphere with the foliation by meridians of the round sphere in Euclidean space. The foliation $H^1_* = \bigcup_{R>0} \Sigma_R$ determines a unit vector field $\mathcal{N}\in \Gamma(TH^1_*)$ such that $\mathcal{N}(p)\perp T_p\Sigma_R$ for any $p\in\Sigma_R$ and $R>0$. The covariant derivative $\nabla\!_\mathcal{N}\enn$, where $\nabla$ denotes the Levi-Civita connection induced by the metric $g$, measures how far the integral lines of $\mathcal{N}$ are from being geodesics of $H^1$ (i.e., how far the CMC spheres $\Sigma_R$ are from being metric spheres). In space forms, we would have $\nabla\!_\mathcal{N}\enn=0$, identically. Instead, in $H^1$ the normalized vector field \[ \mathcal M(z,t) = \mathrm{sgn}(t) \frac{\nabla\!_\mathcal{N}\enn}{|\nabla\!_\mathcal{N}\enn|}, \qquad (z,t) \in \Sigma_R^*, \] is well-defined and smooth outside the center of $H^1$. In Theorem \ref{4.1}, we prove that for any $R>0$ we have \[ \nabla_\mathcal M^{\Sigma_R} \mathcal M = 0 \quad \textrm{on } \Sigma_R^*, \] where $\nabla^{\Sigma_R}$ denotes the restriction of $\nabla$ to $\Sigma_R$. This means that the integral lines of $\mathcal M$ are Riemannian geodesics of $\Sigma_R$. In the coordinates associated with the frame \eqref{XYT}, when $\epsilon =1$ and $\tau=\sigma \to 0$, the integral lines of $\mathcal M$ converge to the meridians of the Euclidean sphere. When $ \sigma\neq 0$ is fixed and $\epsilon\to0^+$, the vector field $\mathcal M$, properly normalized, converges to the line flow of the geodesic foliation of Pansu's sphere; see Remark \ref{PAN}. In Section \ref{k_0=0}, we give a proof of a known result that is announced in \cite[Theorem 6]{A5} in the setting of three-dimensional homogeneous spaces (see also \cite{MP}). Namely, we show that any topological sphere with constant mean curvature in $H^1$ is isometric to a CMC sphere $\Sigma_R$. The proof follows the scheme of the fundamental paper \cite{AR}. The surface $\Sigma_R$ is not totally umbilical and, for large enough $R>0$, it even has negative Gauss curvature near the equator; see Remark \ref{UMB}. As a matter of fact, the distance from umbilicality is measured by a linear operator built up on the $1$-form $\vartheta $.
We can restrict the tensor product $\vartheta\otimes\vartheta$ to any surface $\Sigma$ in $H^1$ with constant mean curvature $H$ and then define, at any point $p\in \Sigma$, a symmetric linear operator $k \in \mathrm{Hom}(T_p\Sigma; T_p\Sigma)$ by setting \begin{equation*} \label{kappa} k = h + \frac{2\tau^2}{\sqrt{H^2+\tau^2}} q_H\circ (\vartheta\otimes\vartheta)\circ q_H^{-1}, \end{equation*} where $h$ is the shape operator of $\Sigma$ and $q_H$ is a rotation of each tangent plane $T_p\Sigma$ by an angle that depends only on $H$; see formula \eqref{alpha_H}. In Theorem \ref{5.5}, we prove that for \emph{any} topological sphere $\Sigma\subset H^1 $ with constant mean curvature $H$, the linear operator $k$ on $\Sigma$ satisfies the equation $k_0=0$. This follows from the Codazzi equations using Hopf's argument on holomorphic quadratic differentials; see \cite{H}. The fact that $\Sigma$ is a left translation of $\Sigma_R$ now follows from the analysis of the \emph{Gauss extension} of the topological sphere; see Theorem \ref{5.9}. It would be interesting to link the results of Section \ref{k_0=0} with the mass-transportation approach recently developed in \cite{BKS}. In Section \ref{SEI}, we prove a stability result for the spheres $\Sigma_R$. Let $E_R\subset H^1 $ be the region bounded by $\Sigma_R$ and let $\Sigma\subset H^1$ be the boundary of a smooth open set $E\subset H^1$, $\Sigma = \partial E$, such that $\mathcal L^3(E) = \mathcal L^{3}(E_R)$. Denoting by $\mathcal{A}(\Sigma)$ the Riemannian area of $\Sigma$, we conjecture that \begin{equation} \label{isop} \mathcal{A}(\Sigma)- \mathcal{A}(\Sigma_R)\geq 0. \end{equation} We also conjecture that a set $E$ is isoperimetric (i.e., equality holds in \eqref{isop}) if and only if it is a left translation of $E_R$. We stress that, if isoperimetric sets were known to be topological spheres, this statement would follow from Theorem \ref{5.9}. It is well known that isoperimetric sets are stable for perturbations fixing the volume: in other words, the second variation of the area is nonnegative. On the other hand, using Jacobi fields arising from right-invariant vector fields of $H^1$, it is possible to show that the spheres $\Sigma_R$ are stable with respect to variations supported in suitable hemispheres, see Section \ref{SEI}. In the case of the northern and southern hemispheres, we can prove a stronger form of stability. Namely, using the coordinates associated with the frame \eqref{XYT}, for $R>0$ and $0< \delta< R$ we consider the cylinder \[ C_{\delta,R} = \big\{ (z,t) \in H^1 : |z|<R, t >f(R-\delta; R)\big\}, \] where $f(\cdot;R)$ is the profile function of $\Sigma_R$; see \eqref{fuf}. Assume that the closure of $E\Delta E_R=(E_R\setminus E) \cup (E\setminus E_R)$ is a compact subset of $C_{\delta,R}$. In Theorem \ref{thm:quant}, we prove that there exists a constant $C_{R\tau\epsilon}>0$ such that the following quantitative isoperimetric inequality holds: \begin{equation} \label{quado} \mathcal{A}(\Sigma ) -\mathcal{A}(\Sigma_R) \geq \sqrt{\delta} C_{R\tau\epsilon} \mathcal L^3(E\Delta E_R)^2. \end{equation} The proof relies on a sub-calibration argument. This provides further evidence for the conjecture that isoperimetric sets are precisely the left translations of $E_R$. When $\epsilon =1$ and $\sigma\to0$, inequality \eqref{quado} becomes a restricted form of the quantitative isoperimetric inequality in \cite{FMP}.
For fixed $\sigma\neq0$ and $\epsilon \to 0^+$, the rescaled area $\epsilon\mathcal{A}$ converges to the sub-Riemannian Heisenberg perimeter and $\epsilon C_{R\tau \epsilon}$ converges to a positive constant, see Remark \ref{6.2}. Thus inequality \eqref{quado} reduces to the isoperimetric inequality proved in \cite{FLM}. \section{Foliation of $H^1_*$ by concentric stationary spheres} \label{DUE} \setcounter{equation}{0} In this section, we compute the rotationally symmetric compact surfaces in $H^1$ that are area-stationary under a volume constraint. We show that, for any $R>0$, there exists one such sphere $\Sigma_R$ centered at $0$. We will also show that $H^1_*=H^1 \setminus\{0\}$ is foliated by the family of these spheres, i.e., \begin{equation}\label{FOL} H^1 _* = \bigcup_{R>0} \Sigma_R. \end{equation} Each $\Sigma_R$ is given by an explicit formula that is due to Tomter, see \cite{T}. We work in the coordinates associated with the frame \eqref{XYT}, where the parameters $\epsilon>0$ and $ \sigma \in\mathbb R$ are related by \eqref{tau-sigma}. For any point $(z,t) \in H^1$, we set $r = |z| = \sqrt{x^2+y^2}$. \begin{theorem}\label{TOM} For any $R>0$ there exists a unique compact smooth embedded surface $\Sigma_R\subset \H^1$ that is area stationary under a volume constraint and such that \[ \Sigma_R = \{(z,t) \in \H^1: |t| =f(|z|;R) \} \] for a function $f(\cdot;R)\in C^\infty([0,R))$ continuous at $r=R$ with $f(R)=0$. Namely, for any $0\leq r\leq R$ the function is given by \begin{equation} \label{fuf} f(r;R) = \frac{\varepsilon^2}{2\tau} \big[ \omega(R)^2 \arctan(p(r;R)) + \omega(r)^2 p(r;R)\big], \end{equation} where \begin{equation*} \label{omegap} \omega(r) = \sqrt{1+\tau^2 \varepsilon^ 2 r^2} \quad \text{and} \quad p(r;R) = \tau\varepsilon \frac{\sqrt{R^2-r^2}}{\omega(r)}. \end{equation*} \end{theorem} \begin{proof} Let $ D_R = \{z\in\mathbb C : |z|<R\}$ and, for a nonnegative radial function $f \in C^\infty(D_R)$, consider the graph $ \Sigma = \{(z,f(z))\in\H^1: z\in D_R\}. $ A frame of tangent vector fields $V_1,V_2\in\Gamma(T\Sigma)$ is given by \begin{equation} \label{WIWA} \begin{split} V_1 = \varepsilon X + \varepsilon^{-2}(f_x-\sigma y) T \quad \text{and} \quad V_2 = \varepsilon Y +\varepsilon^{-2} (f_y+ \sigma x) T . \end{split} \end{equation} Let $ g_\Sigma = \langle\cdot,\cdot\rangle$ be the restriction of the metric $g$ of $H^1$ to $\Sigma$. Using the entries of $g_\Sigma$ in the frame $V_1,V_2$, we compute the determinant \begin{equation} \label{AER1} \begin{split} \det( g_\Sigma) & = \varepsilon^{4}+ \varepsilon^{-2} \big\{ |\nabla f| ^ 2 + \sigma^2 |z|^2 + 2\sigma (xf_y-yf_x) \big\}, \end{split} \end{equation} where $\nabla f=(f_x,f_y)$ is the standard gradient of $f$ and $|\nabla f|$ is its length. We clearly have $xf_y - yf_x =0$ by the radial symmetry of $f$. Therefore, the area of $\Sigma$ is given by \begin{equation} \label{AER2} A (f) = \mathcal{A} (\Sigma) = \int_{D_R} \sqrt{\det( g_\Sigma)}\ dz = \frac1\varepsilon \int_{D_R} \sqrt { \varepsilon^{6}+ |\nabla f| ^ 2 + \sigma^2 |z|^2}\ dz , \end{equation} where $dz$ is the Lebesgue measure in the $xy$-plane. Thus, if $\Sigma$ is area stationary under a volume constraint, then for any test function $\varphi\in C^\infty_c(D_R)$ that is radially symmetric and with vanishing mean (i.e., $\int _{D_R} \varphi\, dz = 0$) we have \[ 0 = \left.
\frac{d}{ds} A(f+s\varphi)\right|_{s=0} = -\frac 1 \varepsilon \int_{D_R} \varphi \,\mathrm{div}\Big( \frac{\nabla f }{\sqrt { \varepsilon^{6}+ |\nabla f| ^ 2 + \sigma^2 |z|^2 }}\Big) \ dz , \] where $\mathrm{div}$ denotes the standard divergence in the $xy$-plane. It follows that there exists a constant $H\in \mathbb R$, which will turn out to be the mean curvature of $\Sigma_R$, such that \begin{equation}\label{CURV} - \frac{1}{\varepsilon} \mathrm{div}\Big( \frac{\nabla f }{\sqrt { \varepsilon^{6}+ |\nabla f| ^ 2 + \sigma^2 |z|^2 }}\Big) = 2H. \end{equation} With an abuse of notation we let $f(|z|) = f(z)$. Using the radial variable $r=|z|$ and the short notation \[ g(r) = \frac{f_r} {r\sqrt{\varepsilon^6 + {f_r}^2 + \sigma^2 r^2}}, \] the above equation reads as follows: \[\frac{1}{r}\frac{d}{dr} \big( r^2 g(r)\big)=\frac 1 r \big( r^2 g_r(r) + 2r g(r) \big) = - 2\varepsilon H. \] Integrating, there exists a constant $K\in \mathbb R$ such that $r^2 g = -\varepsilon H r^2 + K$. Since $g$ is bounded at $r=0$, it must be $K=0$ and thus $g =- \varepsilon H$, and we get \[ \frac{f_r} {r\sqrt{\varepsilon^6 + {f_ r}^2+ \sigma^2 r^2 }} = -\varepsilon H . \] From this equation, we see that $f_r$ has a constant sign. Since $\Sigma_R$ is compact, it follows that $H\neq 0$. Since $f\geq 0$ we have $f_r<0$ and therefore $H>0$. The surface $\Sigma_R$ is smooth at the ``equator'' (i.e., where $|z|=R$ and $t=0$) and thus we have $f_r(R) =-\infty$. As we will see later, this is implied by the relation \begin{equation}\label{eHR=1} \epsilon HR=1, \end{equation} which will be assumed throughout the paper. Integrating the above equation we find \begin{equation}\label{f_r} f_r(r) = - \varepsilon^4 Hr \sqrt{\frac{1+\tau^2\varepsilon^2 r^ 2}{1-\varepsilon^2 H^2 r^2}} = -\varepsilon^ 3 r \sqrt{\frac{1+\tau^2 \varepsilon^2 r^ 2}{R^2 - r^2}} ,\quad 0\leq r<R. \end{equation} Integrating this expression on the interval $[r,R]$ and using $f(R)=0$, we finally find \begin{equation} \label{palix} f(r;R) = \varepsilon^ 3 \int_r^ R \sqrt{ \frac{1+\tau^2 \varepsilon^2 s^2}{R^2-s^2}} \, s ds. \end{equation} After some computations, we obtain the explicit formula \[ f(r;R) = \frac{\varepsilon^2}{2\tau} \Big[ \omega(R)^2 \arctan\Big( \tau\varepsilon \frac{\sqrt{R^2-r^2}}{\omega(r)}\Big) +\tau\varepsilon \omega (r) \sqrt{R^2-r^2} \Big],\quad 0\leq r\leq R, \] with $\omega(r) = \sqrt{1+\tau^2 \varepsilon^2 r^2}$. This is formula \eqref{fuf}. \end{proof} \begin{remark} The function $f(\cdot;R) =f(\cdot;R;\tau;\epsilon)$ depends also on the parameters $\tau$ and $\epsilon$, which are omitted in our notation. With $\epsilon =1$, we find \[ \lim_{\tau\to0} f(r;R;\tau;1) =\sqrt{R^2-r^2}. \] When $\tau \to0$, the spheres $\Sigma_R$ converge to Euclidean spheres with radius $R>0$ in three-dimensional space. With $\tau =\sigma/\epsilon ^4$ as in \eqref{tau-sigma}, we find the asymptotic \[ \begin{split} \lim_{\epsilon \to0} f(r;R;\sigma/\epsilon^4;\epsilon) & =\frac{\sigma}{2}\Big[R^2\arctan\Big(\frac{\sqrt{R^2-r^2}}{r}\Big) +r \sqrt{R^2-r^2} \Big] \\ & =\frac{\sigma}{2}\Big[R^2\arccos\Big(\frac{r}{R}\Big) +r \sqrt{R^2-r^2} \Big] , \end{split} \] which gives the profile function of Pansu's sphere, the conjectured solution to the sub-Riemannian Heisenberg isoperimetric problem (see, e.g.,~\cite{MR} or \cite{M3}), with $R=1$ and $\sigma = 2$. \end{remark}
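The identities in the proof and in the remark above can be double-checked symbolically. The following sketch (a verification aid, not part of the argument) uses \texttt{sympy} to confirm \eqref{f_r} and the Euclidean limit:
\begin{verbatim}
import sympy as sp

r, R, tau, eps = sp.symbols('r R tau epsilon', positive=True)
omega = lambda x: sp.sqrt(1 + tau**2*eps**2*x**2)
p = tau*eps*sp.sqrt(R**2 - r**2)/omega(r)

# profile function f(r;R), formula (fuf)
f = eps**2/(2*tau)*(omega(R)**2*sp.atan(p) + omega(r)**2*p)

# check (f_r): df/dr = -eps^3 r sqrt((1+tau^2 eps^2 r^2)/(R^2-r^2))
f_r = -eps**3*r*sp.sqrt((1 + tau**2*eps**2*r**2)/(R**2 - r**2))
print(sp.simplify(sp.diff(f, r) - f_r))   # expected output: 0

# Euclidean limit: eps = 1, tau -> 0
print(sp.limit(f.subs(eps, 1), tau, 0))   # expected: sqrt(R**2 - r**2)
\end{verbatim}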
The first order derivative is given by \begin{equation} \label{f_R} f_R(r;R) = \tau \epsilon^4 R \Big[ \arctan\big( p(r;R) \big) +\frac{1}{ p(r;R)}\Big]= \frac{\sigma R}{p(r;R) \ell(p(r;R))}, \end{equation} where $\ell:[0,\infty)\to\mathbb R$ is the function defined as \begin{equation} \label{ell} \ell(p) = \frac{1}{1+ p\arctan(p)}. \end{equation} The geometric meaning of $\ell$ will be clear in formula \eqref{NR}. \end{remark} We now establish the foliation property \eqref{FOL}. \begin{proposition}\label{fol} For any nonzero $(z,t) \in \H^1$ there exists a unique $R>0$ such that $(z,t) \in \Sigma_R$. \end{proposition} \begin{proof} Without loss of generality we can assume that $t\geq 0$. After an integration by parts in \eqref{palix}, we obtain the formula \[ f(r;R) = \varepsilon^ 3\Bigg\{ \sqrt{R^2-r^2} \omega(r) +\int_r^R \sqrt{R^2-s^2} \omega_r(s) ds\Bigg\},\quad 0\leq r\leq R. \] Since $\omega_r(r)>0$ for $r>0$, we deduce that the function $R\mapsto f(r;R)$ is strictly increasing for $R\geq r$. Moreover, we have \[ \lim_{R\to\infty} f(r;R) = \infty, \] and hence for any $r\geq 0$ there exists a unique $R\geq r$ such that $f(r;R) =t$. \end{proof} \begin{remark} By Proposition \ref{fol}, we can define the function $R:\H^1 \to[0,\infty)$ by letting $R(0)=0$ and $R(z,t) =R$ if and only if $(z,t) \in \Sigma_R$ for $R>0$. The function $R(z,t)$, in fact, depends on $r=|z|$ and thus we may consider $R(z,t)=R(r,t)$ as a function of $r$ and $t$. This function is implicitly defined by the equation $ |t| = f(r; R(r,t))$. Differentiating this equation, we find the derivatives of $R$, i.e., \begin{equation}\label{R_t} R_r = -\frac{f_r}{f_R} \quad \text{and} \quad R_t = \frac{\mathrm{sgn}(t)}{f_R}, \end{equation} where $f_R$ is given by \eqref{f_R}. \end{remark} \section{Second fundamental form of $\Sigma_R$} \label{TRE} \setcounter{equation}{0} In this section, we compute the second fundamental form of the spheres $\Sigma_R$. In fact, we will see that $H =1/(\varepsilon R)$ is the mean curvature of $\Sigma_R$, as already clear from \eqref{CURV} and \eqref{eHR=1}. Let $N= (0, f(0;R)) \in\Sigma_R$ be the north pole of $\Sigma_R$ and let $S = -N = (0,- f(0;R))$ be its south pole. In $\Sigma_R^*=\Sigma_R\setminus \{\pm N\}$ there is a frame of tangent vector fields $X_1,X_2 \in \Gamma( T \Sigma_R^*)$ such that \begin{equation} \label{pipo} |X_1| = |X_2| = 1,\quad \langle X_1,X_2\rangle = 0,\quad \vartheta(X_1)=0, \end{equation} where $\vartheta$ is the left-invariant $1$-form introduced in \eqref{1.1.1}. Explicit expressions for $X_1$ and $X_2$ are given in formula \eqref{xixo} below. This frame is unique up to the sign $\pm X_1$ and $\pm X_2$. Here and in the rest of the paper, we denote by $\mathcal{N}$ the exterior unit normal to the spheres $\Sigma_R$. The second fundamental form $h$ of $\Sigma_R$ with respect to the frame $X_1,X_2$ is given by \[ h=(h_{ij})_{i,j=1,2},\qquad h_{ij} = \langle \nabla_{X_i} \mathcal N,X_j\rangle,\quad i,j=1,2, \] where $\nabla $ denotes the Levi-Civita connection of $H^1$ endowed with the left-invariant metric $g$. The linear connection $\nabla$ is represented by the linear mapping $\h\times\h\mapsto \h$, $(V,W) \mapsto \nabla _{V} W$. 
Using the fact that the connection is torsion free and metric, it can be seen that $\nabla$ is characterized by the following relations: \begin{equation}\label{FR} \begin{split} &\nabla_XX =\nabla_Y Y=\nabla_TT=0, \\ & \nabla _Y X = \tau T \quad \textrm{and}\quad \nabla _X Y =- \tau T, \\ & \nabla _T X = \nabla _XT = \tau Y, \\ & \nabla _T Y = \nabla _YT = -\tau X. \end{split} \end{equation} Here and in the rest of the paper, we use the coordinates associated with the frame \eqref{XYT}. For $(z,t) \in H^1$, we set $r = |z|$ and use the short notation \begin{equation} \label{RHO} \varrho = \tau \epsilon r. \end{equation} \begin{theorem} \label{3.1} For any $R>0$, the second fundamental form $h$ of $\Sigma_R$ with respect to the frame $X_1,X_2$ in \eqref{pipo} at the point $(z,t) \in \Sigma_R$ is given by \begin{equation} \label{ACCA} h = \frac{1}{1+\varrho^2} \left( \begin{array}{cc} H ( 1+2\varrho^2) & \tau \varrho^2 \\ \tau \varrho^2 & H \end{array} \right), \end{equation} where $R=1/H\varepsilon$ and $H$ is the mean curvature of $\Sigma_R$. The principal curvatures of $\Sigma_R$ are given by \begin{equation}\label{kappa_12} \begin{split} \kappa_1 & = H + \frac{\varrho^2}{1+\varrho^2} \sqrt{H^2 + \tau^2} , \\ \kappa_2 & = H - \frac{\varrho^2}{1+\varrho^2} \sqrt{H^2 + \tau^2}. \end{split} \end{equation} Outside the north and south poles, principal directions are given by \begin{equation} \label{K_12} \begin{split} K_1 & = \cos\beta X_1+\sin\beta X_2, \\ K_2 & = -\sin\beta X_1+\cos\beta X_2, \end{split} \end{equation} where $\beta =\beta_H\in (-\pi/4,\pi/4)$ is the angle \begin{equation}\label{beta_H} \beta_H =\arctan \Bigg(\frac{\tau }{H+\sqrt{ H^2+\tau^2 }}\Bigg). \end{equation} \end{theorem} \begin{proof} Let $a,b:\Sigma_R^*\to\mathbb R$ and $c,p:\Sigma_R\to\mathbb R$ be the following functions depending on the radial coordinate $r=|z|$: \begin{equation} \label{abcp} \begin{split} a & =a(r;R) = \frac{\omega(r)}{r\omega(R)}, \quad b =b(r;R) = \pm \frac{\sqrt{R^2-r^2} } {r R \omega(R)}, \\ c &=c(r;R) =\frac{r\omega(R) } { R \omega(r)}, \quad p=p(r;R) = \pm \tau \epsilon \frac{\sqrt{R^2-r^2} } { \omega(r)}. \end{split} \end{equation} In fact, $b$ and $p$ also depend on the sign of $t$. Namely, in $b$ and $p$ we choose the sign $+$ in the northern hemisphere, that is for $t\geq0$, while we choose the sign $-$ in the southern hemisphere, where $t\leq0$. Our computations are in the case $t\geq0$. The vector fields \begin{equation} \label{xixo} \begin{split} X_1 & = - a \big(( y-xp) X - (x+yp) Y\big), \\ X_2 & =- b \big( (x+yp) X+( y-xp) Y\big) + c T \end{split} \end{equation} form an orthonormal frame for $T\Sigma_R^*$ satisfying \eqref{pipo}. This frame can be computed starting from \eqref{WIWA}. The outer unit normal to $\Sigma_R$ is given by \begin{equation} \label{ENNE} \begin{split} \mathcal N =\frac{1}{R} \Big\{ (x+y p) X +(y-xp) Y +\frac{p}{\tau\epsilon}T\Big\}. \end{split} \end{equation} Notice that this formula is well defined also at the poles. We compute the entries $h_{11} $ and $h_{12}$. 
Using $X_1 R=0$, we find \begin{equation} \label{M1} \begin{split} \nabla_{X_1} \mathcal N= \frac 1 R \Big\{ & X_1(x+yp) X + X_1(y-xp) Y + X_1\Big(\frac{p}{\tau\epsilon}\Big) T \\ & +(x+yp) \nabla_{X_1} X + (y-xp) \nabla_{X_1} Y +\frac{p}{\tau\epsilon} \nabla_{X_1} T \Big\}, \end{split} \end{equation} where, by the fundamental relations \eqref{FR}, \begin{equation}\label{M2} \begin{split} &\nabla_{X_1} X =\tau a(x+yp) T,\quad \\ & \nabla_{X_1} Y =\tau a(y-xp) T, \\ & \nabla_{X_1} T =-\tau a\big[ ( y-xp) Y+(x+yp) X\big]. \end{split} \end{equation} Using the formulas \[ X_1 x = -\frac{a}{\epsilon} ( y-xp )\quad\text{and} \quad X_1 y =\frac{a}{\epsilon} (x+yp), \] we find the derivatives \begin{equation}\label{M3} \begin{split} X_1(x+yp) & = \frac{a}{\epsilon}\big(2xp +y(p^2-1)\big)+yX_1 p, \\ X_1(y-xp ) & = \frac{a}{\epsilon}\big(2yp +x(1-p^2 )\big)-x X_1 p. \end{split} \end{equation} Inserting \eqref{M3} and \eqref{M2} into \eqref{M1}, we obtain \begin{equation} \label{M4} \begin{split} \nabla_{X_1} \mathcal N= \frac 1 R \Big\{ \Big[ -\frac{a}{\epsilon} (y-xp) +y X_1p\Big] X & + \Big[ \frac{a}{\epsilon} (x+yp) -x X_1p\Big]Y \\ & + \Big[ \frac{X_1 p}{\tau\epsilon} +\tau r^ 2 a (p^2+1)\Big] T \Big\}. \end{split} \end{equation} From this formula we get \[ h_{11} =\langle \nabla_{X_1} \mathcal N ,X_1\rangle =\frac{r^2 a}{R\epsilon} \big\{ a (p^2+1)-\epsilon X_1p\big\}, \] where $p^2+1 = \omega(R)^2/\omega(r)^2$ and $X_1p$ can be computed starting from \begin{equation} \label{p_r} p_r(r;R) = -\tau\epsilon r\frac{\omega(R)^2}{\sqrt{R^2-r^2} \omega(r)^3}. \end{equation} Namely, also using the formula for $a$ and $p$ in \eqref{abcp}, we have \[ X_1 p =\frac{r a}{\epsilon} p p_r = -\tau^2\epsilon r \frac{\omega(R)}{ \omega(r)^3}. \] With \eqref{eHR=1} and \eqref{RHO}, we finally find \[ h_{11} =\frac{1}{R\epsilon}\Big( 1+ \frac{\tau^2\epsilon^2 r^2}{\omega(r)^2}\Big) = H \Big( 1+\frac{\varrho^2}{1+\varrho^2}\Big). \] >From \eqref{M4} we also deduce \[ h_{12}=\langle \nabla_{X_1} \mathcal N ,X_2\rangle =-\frac{b}{R}r^2pX_1p+\frac{c}{R}\Big\{\frac{X_1p}{\tau\varepsilon}+\tau r^2a(1+p^2)\Big\}, \] and using the formula for $X_1p$ and the formulas in \eqref{abcp} we obtain \[ h_{12} =\frac{\tau\varrho^2}{1+\varrho^2} . \] To compute the entry $h_{22}$, we start from \begin{equation} \label{eq:N1} \begin{split} \nabla_{X_2}\mathcal N =\frac{1}{R}\Big\{& X_2(x+yp)X+X_2(y-xp)Y+\frac{X_2(p)}{\tau \varepsilon}T\\ &\quad +(x+yp)\nabla_{X_2}X+(y-xp)\nabla_{X_2}Y +\frac{p}{\tau\varepsilon}\nabla_{X_2}T \Big\}, \end{split} \end{equation} where, by \eqref{FR} we have \begin{equation} \label{eq:N2} \begin{split} \nabla_{X_2}X&=-\tau b(y-xp )T+\tau c Y,\\ \nabla_{X_2}Y&=\tau b(x+yp)T-\tau c X,\\ \nabla_{X_2}T&=-\tau b(x+yp)Y+\tau b(y-xp ) X. \end{split} \end{equation} Since $X_2x=-b(x+yp )/\varepsilon$ and $X_2y=-b(y-xp )/\varepsilon$, we get \begin{equation} \label{eq:N3}\begin{split} X_2(x+yp)&=-\frac{b}{\varepsilon}\big(2yp+x(1-p^2)\big)-yX_2p,\\ X_2(y-xp)&=\frac{b}{\varepsilon}\big(2xp+y(p^2-1)\big)+xX_2p. 
\end{split} \end{equation} Inserting \eqref{eq:N2} and \eqref{eq:N3} into \eqref{eq:N1} we obtain \begin{equation*} \label{eq:N4} \begin{split} \nabla_{X_2}\mathcal N =\frac{1}{R}\Big\{ -&\Big[\frac{b}{\varepsilon}(x+yp)+yX_2p+\tau c(y-xp)\Big]X\\ +&\Big[-\frac{b}{\varepsilon}( y-xp )+xX_2p+\tau c (x+yp)\Big]Y -\frac{X_2p}{\tau\varepsilon}T \Big\}, \end{split} \end{equation*} and thus \[ h_{22}=\langle \nabla_{X_2}\mathcal N, X_2\rangle = \frac{br^2}{\varepsilon R}\big\{b(1+p^2)+\varepsilon pX_2p\big\}-\frac{c X_2p}{\tau\varepsilon R}. \] Now $X_2p$ can be computed by using \eqref{p_r} and the formulas \eqref{abcp}, and we obtain \[ X_2p=-\frac{\tau r\omega(R)}{R\omega(r)^3}. \]By \eqref{eHR=1} and \eqref{RHO} we then conclude that \[ h_{22} =\frac{H}{1+\varrho^2}. \] The principal curvatures $\kappa_1,\kappa_2$ of $\Sigma_R$ are the solutions to the system \[ \begin{split} \left\{ \begin{array}{l} \kappa_1+\kappa_2 =\mathrm{tr}(h) =2H \\ \displaystyle \kappa_1\kappa_2 =\det(h) = \frac{H^2(1+2\varrho^2) -\tau^2\varrho^4}{(1+\varrho^2)^2}. \end{array} \right. \end{split} \] They are given explicitly by the formulas \eqref{kappa_12}. Now let $K_1,K_2$ be tangent vectors as in \eqref{K_12}. We identify $h$ with the shape operator $h\in \mathrm{Hom}( T_p\Sigma_R; T_p\Sigma_R)$, $h(K) = \nabla_K\mathcal{N}$, at any point $p\in \Sigma_R$ and $K\in T_p\Sigma_R$. When $\varrho\neq0$ (i.e., outside the north and south poles), the system of equations \[ h(K_1) =\kappa_1 K_1\quad \text{and}\quad h(K_2) =\kappa_2 K_2 \] is satisfied if and only if the angle $\beta=\beta_H $ is chosen as in \eqref{beta_H}. The argument of $\arctan$ in \eqref{beta_H} is in the interval $(-1,1)$ and thus $\beta_H \in (-\pi/4,\pi/4)$. \end{proof} \begin{remark} \label{UMB} When $2H^2 <(\sqrt{5}-1)\tau^2$, the set of points $(z,t) \in \Sigma_R$ such that \[ \varrho^2 > \frac{H}{\sqrt{H^2+\tau^2}-H} \] is nonempty. The inequality above is equivalent to $\kappa_2<0$ at the point $(z,t) \in\Sigma_R$. This means that, for large enough $R$, points in $\Sigma_R$ near the equator have strictly negative Gauss curvature. \end{remark} \begin{remark} The convergence of the Riemannian second fundamental form towards its sub-Riemannian counterpart is studied in \cite{CPT}, in the setting of Carnot groups. \end{remark} \section{Geodesic foliation of $\Sigma_R$} \label{S4} \setcounter{equation}{0} We prove that each CMC sphere $\Sigma_R$ is foliated by a family of geodesics of $\Sigma_R$ joining the north to the south pole. In fact, we show that the foliation is governed by the normal $\mathcal{N}$ to the foliation $H^1_*=\bigcup_{R>0}\Sigma_R$. In the sub-Riemannian limit, we recover the foliation property of the Pansu's sphere. In the Euclidean limit, we find the foliation of the round sphere with meridians. We need two preliminary lemmas. We define a function $R: H^1\to [0,\infty)$ by letting $R(0)=0$ and $R(z,t) = R$ if and only if $(z,t)\in\Sigma_ R$. In fact, $R(z,t)$ depends on $r=|z|$ and $t$. The function $p$ in \eqref{abcp} is of the form $p=p(r,R(r,t))$. Now, we compute the derivative of these functions in the normal direction $\mathcal N$. \begin{lemma} The derivative along $\mathcal N$ of the functions $R$ and $p$ are, respectively, \begin{equation}\label{NR} \mathcal N R =\frac{\ell(p)}{\epsilon}, \end{equation} and \begin{equation}\label{Np} \mathcal N p = \epsilon \tau^2 \frac{ R^2 \omega(r)^2 \ell(p)-r^2 \omega(R)^2 }{R\omega(r)^4p} , \end{equation} where $\ell(p) = (1+p\arctan p) ^{-1} $, as in \eqref{ell}. 
\end{lemma} \begin{proof} We start from the following expression for the unit normal (in the coordinates $(x,y,t)$): \begin{equation*} \label{ENNEradiale} \mathcal N = \frac 1 R \Big\{ \frac r \epsilon \partial _r +\frac p\epsilon (y\partial_ x-x\partial_y) +\mathrm{sgn}(t) \epsilon ^2 \omega(r) \sqrt{R^2-r^2} \partial_t\Big\}. \end{equation*} We just consider the case $t\geq 0$. Using \eqref{R_t}, we obtain \[ \begin{split} \mathcal N R & = \frac 1 R \Big\{ \frac r \epsilon R_r +\epsilon ^ 2 \omega(r) \sqrt{R^2-r^2} R_t\Big\} =\frac {1 }{R f_R} \Big\{\epsilon ^ 2 \omega(r) \sqrt{R^2-r^2} - \frac r \epsilon {f_r} \Big\}. \end{split} \] Inserting into this formula the expression in \eqref{f_r} for $f_r$ we get \[ \mathcal N R = \frac{\varepsilon^2 R \omega(r)}{f_R \sqrt{R^2-r^2}}, \] and using formula \eqref{f_R} for $f_R$, namely, \[ f_R =\tau \epsilon ^ 4 R\Big[ \arctan (p)+\frac 1 p \Big] =\frac{\tau \epsilon ^ 4 R}{p \ell(p)}, \] we obtain formula \eqref{NR}. To compute the derivatives of $p$ in $r$ and $t$, we have to consider $p=p(r;R)$ and $R = R(r,t)$. Using the formula in \eqref{abcp} for $p$ and the expression \eqref{R_t} for $R_r$ yields \[ p_r =-\frac{\tau\epsilon r \omega(R)^2}{\omega(r)^3 \sqrt{R^2-r^2}},\quad p_R =\frac{\tau\epsilon R}{\omega(r)\sqrt{R^2-r^2}},\quad R_r = -\frac{f_r}{f_R} = \frac{\epsilon^3 r\omega(r)} {\sqrt{R^2-r^2} f_R}, \] and thus \[ \begin{split} \frac{\partial}{\partial r} p(r,R(r,t)) & = p_r (r,R(r,t))+p_R (r,R(r,t))R_r(r,t) \\& = \frac{\tau\epsilon r }{\omega(r) ^ 3 \sqrt{R^2-r^2} }\big[ \omega(r)^2 \ell(p) -\omega(R)^2\big]. \end{split} \] Similarly, we compute \[ \frac{\partial}{\partial t} p(r;R(r,t)) = p_R(r;R(r,t)) R_t(r,t)= \frac{\tau \ell(p)}{\epsilon ^2\omega(r)^2}. \] The derivative of $p$ along $\mathcal N$ is thus as in \eqref{Np}, when $t\geq 0$. The case $t<0$ is analogous. \end{proof} In the next lemma, we compute the covariant derivative $\nabla\!\! _{\mathcal N}\mathcal N $. The resulting vector field in $H^1_*$ is tangent to each CMC sphere $\Sigma_R$, for any $R>0$. \begin{lemma} At any point in $(z,t)\in H^1_*$ we have \begin{equation} \label{nablaENNE} \nabla\!\! _{\mathcal N}\mathcal N (z,t)= \mathcal N\Big(\frac p R\Big) \Big[ (y+x\Phi)X-(x-y\Phi ) Y+\frac{1}{\tau\epsilon} T\Big], \end{equation} where $\Phi=\Phi(r;R)$ is the function defined as \[ \Phi = - \frac{\omega(r) ^2 p }{\tau^2\epsilon^2 r^2}, \] and the derivative $\mathcal N(p/R)$ is given by \[ \mathcal N\Big(\frac p R\Big) = - \frac{\epsilon \tau^2 r^2 \big( \omega(R)^2 -\ell(p) \omega(r)^2\big)}{R^2 \omega(r)^4 p}, \] with $\ell$ as in \eqref{ell}. \end{lemma} \begin{proof} Starting from formula \eqref{ENNE} for $\mathcal N$, we find that \begin{equation}\label{G0} \begin{split} \nabla\!\! _{\mathcal N}\mathcal N & =\mathcal N\Big(\frac{x+yp}{R}\Big) X+\mathcal N\Big(\frac{y-xp }{R}\Big)Y +\mathcal N\Big(\frac{p}{\tau \epsilon R}\Big)T \\ &+\frac 1 R \Big((x+yp) \nabla\!\! _{\mathcal N} X+ (y-xp) \nabla\!\! _{\mathcal N} Y+ \frac{p}{\tau\epsilon} \nabla\!\! _{\mathcal N} T\Big), \end{split} \end{equation} where, by the fundamental relations \eqref{FR}, we have \begin{equation}\label{G1} (x+yp) \nabla\!\! _{\mathcal N} X+ (y-xp) \nabla\!\! _{\mathcal N} Y+ \frac{p}{\tau\epsilon} \nabla\!\! _{\mathcal N} T=\frac{2p}{\epsilon R} \Big(-(y-xp)X +(x+yp)Y\Big). 
\end{equation} >From the elementary formulas \[ \mathcal N x = \frac{1}{R\epsilon} (x+yp) \quad \text{and} \quad \mathcal N y =\frac{1}{R\epsilon} (y-xp), \] we find \begin{equation}\label{G2} \begin{split} \mathcal N (x+yp) & = \frac{1}{\epsilon R} \big(x(1-p^2) +2yp \big) +y\mathcal N p, \\ \mathcal N (y-xp) & = \frac{1}{\epsilon R} \big(y(1-p^2) -2xp \big) -x\mathcal N p. \end{split} \end{equation} Inserting \eqref{G1} and \eqref{G2} into \eqref{G0} we obtain the following expression \begin{equation}\label{G3} \begin{split} \nabla\!\! _{\mathcal N}\mathcal N =\frac{1}{R^2} \Big[ &\Big\{ x ( \epsilon^{-1} (1+p^2) -\mathcal NR) + y (R\mathcal N p -p\mathcal N R) \Big\} X \\ +& \Big\{ y ( \epsilon^{-1} (1+p^2) -\mathcal NR) -x (R\mathcal N p -p\mathcal N R) \Big\} Y \\ +&\frac{1}{\tau\epsilon} (R\mathcal N p -p\mathcal N R) T\Big] . \end{split} \end{equation} >From \eqref{NR} and \eqref{Np} we compute \[ R\mathcal N p -p\mathcal N R = - \frac{\epsilon \tau^2 r^ 2}{\omega(r)^4 p}\big[ \omega(R)^2 -\ell(p) \omega(r)^2\big]. \] Inserting this formula into \eqref{G3} and using $1+p^2 =\omega(R)^2/\omega(r)^2$ yields the claim. \end{proof} Let $\mathcal N\in \Gamma(T H^1 _*)$ be the exterior unit normal to the family of CMC spheres $\Sigma_R$ centered at $0\in H^ 1$. The vector field $\nabla \! _{\mathcal N}\mathcal N$ is tangent to $\Sigma_R$ for any $R>0$, and for $(z,t)\in\Sigma_R$ we have \begin{center} $ \nabla\! _{\mathcal N}\mathcal N(z,t) =0\quad$ if and only if $\quad z=0\,$ or $\, t=0$. \end{center} However, it can be checked that the normalized vector field \[ \mathcal M (z,t) = \mathrm{sgn}(t) \frac{\nabla\! _{\mathcal N}\mathcal N } {|\nabla\! _{\mathcal N}\mathcal N |}\in \Gamma( T\Sigma_R^*) \] is smoothly defined also at points $(z,t)\in \Sigma_R$ at the equator, where $t=0$. We denote by $\nabla^{\Sigma_R}$ the restriction of the Levi-Civita connection $\nabla$ to $\Sigma_R$. \begin{theorem} \label{4.1} Let $\Sigma _R \subset H^1$ be the CMC sphere with mean curvature $H>0$. Then the vector field $\nabla\! _{\mathcal M}\mathcal M $ is smoothly defined on $\Sigma_R$ and for any $(z,t)\in \Sigma_R$ we have \begin{equation} \label{espo} \nabla\!_{\mathcal M}\mathcal M (z,t) = - \frac{H}{ \omega(r)^ 2}\mathcal N. \end{equation} In particular, $\nabla^{\Sigma_R} _{\mathcal M}\mathcal M =0$ and the integral curves of $\mathcal M$ are Riemannian geodesics of $\Sigma_R$ joining the north pole $N$ to the south pole $S$. \end{theorem} \begin{proof} From \eqref{nablaENNE} we obtain the following formula for $\mathcal M$: \begin{equation}\label{C0} \mathcal M = (x\lambda-y\mu) X + (y\lambda+x \mu) Y -\frac{\mu}{\tau\epsilon} T, \end{equation} where $\lambda,\mu:\Sigma_R^* \to \mathbb R$ are the functions \begin{equation} \label{C} \lambda = \lambda(r)=\pm \frac{\sqrt{R^2-r^2}}{rR}\quad \text{and}\quad \mu = \mu(r)=\frac{\tau\epsilon r }{R \omega(r)}, \end{equation} with $r=|z|$ and $R=1/(\epsilon H)$. The functions $\lambda$ and $\mu$ are radially symmetric in $z$. In defining $\lambda$ we choose the sign $+$, when $t\geq 0$, and the sign $-$, when $t<0$. 
In the coordinates $(x,y,t)$, the vector field $\mathcal M$ has the following expression \begin{equation}\label{star} \mathcal M =\frac 1 \epsilon \Big( \lambda r \partial _r+\mu (x\partial_y-y\partial_x) - \mu \frac{\epsilon^2\omega(r)^2}{\tau} \partial _t\Big), \end {equation} where $r\partial _r =x\partial_x+y\partial_y$, and so we have \begin{equation}\label{phil} \begin{split} \nabla\!_{\mathcal M}\mathcal M = & (x\lambda-y\mu) \nabla\!_{\mathcal M} X + (y\lambda+x\mu) \nabla\!_{\mathcal M} Y-\frac{\mu}{\tau\epsilon} \nabla\!_{\mathcal M} T \\ & + \mathcal M (x\lambda-y\mu) X + \mathcal M (y\lambda+x\mu) Y - \mathcal M \Big( \frac{\mu}{\tau\epsilon} \Big) T. \end{split} \end{equation} Using \eqref{star}, we compute \begin{equation}\label{1} \mathcal M x = \frac1\epsilon (x\lambda-y\mu) \quad \text{and}\quad \mathcal M y = \frac1\epsilon (y\lambda+x\mu), \end{equation} and so we find \begin{equation}\label{2} \begin{split} \mathcal M (x\lambda-y\mu) &= \frac 1 \epsilon (x\lambda-y\mu)\lambda + x\mathcal M \lambda -\frac1\epsilon (y\lambda+x\mu)\mu -y\mathcal M \mu, \\ \mathcal M (y\lambda+x\mu) &= \frac 1 \epsilon (y\lambda+x\mu)\lambda + y\mathcal M \lambda +\frac1\epsilon (x\lambda-y\mu)\mu +x\mathcal M \mu. \end{split} \end{equation} Now, inserting \eqref{1} and \eqref{2} into \eqref{phil}, we get \begin{equation*} \label{3} \begin{split} \nabla\!_{\mathcal M}\mathcal M = & \Big( \frac x \epsilon (\lambda^2+\mu^2)+x\mathcal M \lambda-y\mathcal M\mu\Big) X \\ +& \Big( \frac y \epsilon (\lambda^2+\mu^2) +y\mathcal M \lambda +x\mathcal M\mu\Big)Y -\frac{1}{\tau\epsilon} \mathcal M \mu T. \end{split} \end{equation*} The next computations are for the case $t\geq 0$. Again from \eqref{star}, we get \begin{equation}\label{B} \mathcal M \lambda = \frac{\lambda r}{\epsilon} \partial_r\lambda = - \frac{R\lambda}{\epsilon r\sqrt{R^2-r^2}}, \quad \text{and}\quad \mathcal M \mu = \frac{\lambda r}{\epsilon} \partial _r \mu= \frac{\tau r \lambda}{R \omega(r)^3}. \end{equation} From \eqref{C} and \eqref{B} we have \[ \frac{1}{\epsilon} (\lambda^2+\mu^2)+\mathcal M\lambda = -\frac{1}{\epsilon R^2 \omega(r)^2}, \] and so we finally obtain \begin{equation} \label{NUM1} \nabla\!_{\mathcal M}\mathcal M =(x\Lambda-yM) X +(y\Lambda +xM) Y -\frac{M}{\tau\epsilon} T, \end{equation} where we have set \begin{equation} \label{NUM2} \Lambda = -\frac{1}{\epsilon R^2 \omega(r)^2},\qquad M =\tau \frac{\sqrt{R^2-r^2}}{R^2\omega(r)^3}. \end{equation} Comparing with \eqref{ENNE}, we deduce that \[ \nabla\!_{\mathcal M}\mathcal M = - \frac{1}{\epsilon R\omega(r)^ 2}\mathcal N. \] The claim $\nabla^{\Sigma_R} _{\mathcal M}\mathcal M =0$ easily follows from the last formula. \end{proof} \begin{figure}[h!] \includegraphics[scale=0.58]{SigmaR_ok_interm.pdf} \caption{The plotted curve is an integral curve of the vector field $\mathcal M$ for $R=2$, $\varepsilon=0.5$, and $\sigma=0.5$.} \end{figure} \begin{remark} We compute the pointwise limit of $\mathcal M$ in \eqref{C0} when $\sigma\to0$, for $t\geq 0$. In the southern hemisphere the situation is analogous. By \eqref{star}, the vector field $\mathcal M$ is given by \[ \begin{split} \mathcal M = \frac {1}{ \epsilon R} \Big( \frac{\sqrt{R^2-r^2}}{r} (x \partial_ x +y \partial_y ) + {\frac{\sigma r}{\sqrt{\epsilon^6+ \sigma^2 r ^2}}} ( x \partial_y -y \partial_x ) - r \sqrt{\epsilon^6+ \sigma ^2r ^2} {\partial_t}\Big) . 
\end{split} \] With $\epsilon=1$ we have \[ \widehat{\mathcal M} =\lim_{\sigma \to 0 } \mathcal M= \frac{\sqrt{R^2-r^2}}{rR} (x \partial_ x +y \partial_y ) -\frac r R {\partial_t}. \] Clearly, the vector field $\widehat {\mathcal M}$ is tangent to the round sphere of radius $R>0$ in the three-dimensional Euclidean space and its integral lines turn out to be the meridians from the north to the south pole. \end{remark} \begin{remark}\label{PAN} We study the limit of $\epsilon {\mathcal M}$ when $\epsilon \to 0$, in the northern hemisphere. The frame of left-invariant vector fields $\bar X = \epsilon X$, $\bar Y = \epsilon Y$ and $\bar T = \epsilon^{-2} T$ is independent of $\epsilon$. Moreover, the linear connection $\nabla$ restricted to the horizontal distribution spanned by $\bar X$ and $\bar Y$ is independent of the parameter $\epsilon$. Indeed, from the fundamental relations \eqref{FR} and from \eqref{tau-sigma} we find \[ \begin{split} & \nabla_{\bar X}\bar X = \nabla_{\bar Y}\bar Y = 0, \\ & \nabla_{\bar X} \bar Y = -\sigma \bar T\quad \textrm{and}\quad \nabla_{\bar Y} \bar X = \sigma \bar T. \end{split} \] Now, it turns out that \[ \begin{split} \bar{\mathcal M} = \lim_{\epsilon \to 0} \epsilon \mathcal M & = \frac 1 R \Big[ \Big(x\frac{\sqrt{R^2-r^2}}{r}-y\Big) \partial _ x +\Big(y\frac{\sqrt{R^2-r^2}}{r} +x\Big) \partial _y - \sigma r^2\partial_t\Big] \\ & = (x\bar\lambda -y \bar \mu) \bar X + (y\bar \lambda+x\bar \mu) \bar Y, \end{split} \] where \[ \bar \lambda = \lambda = \frac{\sqrt{R^2-r^2}}{rR}, \qquad \bar\mu = \frac 1 R. \] The vector field $\bar {\mathcal M}$ is horizontal and tangent to the Pansu's sphere. We denote by $J$ the complex structure $J(\bar X) = \bar Y$ and $J(\bar Y) =- \bar X$. A computation similar to the one in the proof of Theorem \ref{4.1} shows that \begin{equation} \label{check} \nabla\! _{\bar{\mathcal M}} \bar {\mathcal M} = \frac 2 R J (\bar { \mathcal M}). \end{equation} This is the equation for Carnot-Carath\'eodory geodesics in $H^1$ for the sub-Riemannian metric making $\bar X$ and $\bar Y$ orthonormal, see \cite[Proposition 3.1]{RR}. Thus, we reached the following conclusion. The integral curves of $\mathcal M$ are Riemannian geodesics of $\Sigma_R$ and converge to the integral curves of $\bar{\mathcal M}$. These curves foliate the Pansu's sphere and are Carnot-Carath\'eodory geodesics (not only of the Pansu's sphere but also) of $H^1$. Using \eqref{check} we can pass to the limit as $\epsilon \to 0$ in equation \eqref{espo}, properly scaled. An inspection of the right hand side in \eqref{NUM1} shows that the right hand side of \eqref{espo} is asymptotic to $\epsilon ^4$. In fact, starting from \eqref{NUM2} we get \begin{equation} \label{pipox} - \lim_{\epsilon\to0} \frac{H }{ \epsilon^{4}\omega (r)^2} \mathcal N = \frac{1}{R\sigma^2 r^2} \big[- ( x\bar \mu +y\bar \lambda ) \bar X +( x\bar\lambda -y\bar\mu ) \bar Y \big] = \frac{1}{R\sigma^2 r^2} J(\bar{\mathcal M}). \end{equation} From \eqref{espo}, \eqref{check}, and \eqref{pipox} we deduce that \[ \lim_{\epsilon\to0} \epsilon^{-4} \nabla \!_{\mathcal M}\mathcal M = \frac{1}{2\sigma^2 r^2} \nabla\! _{\bar{\mathcal M}} \bar {\mathcal M}. \] \end{remark} \section{Topological CMC spheres are left translations of $\Sigma_R$} \label{k_0=0} \setcounter{equation}{0} In this section, we prove that any topological sphere in $H^1$ having constant mean curvature is congruent to a sphere $\Sigma_R$ for some $R>0$. This result was announced, in wider generality, in \cite{A5}. 
As in \cite{AR}, our proof relies on the identification of a holomorphic quadratic differential for CMC surfaces in $H^1$. For an oriented surface $\Sigma$ in $H^1$ with unit normal vector $\mathcal{N}$, we denote by $h\in \mathrm{Hom}(T_p\Sigma;T_p\Sigma)$ the shape operator $h(W) = \nabla_W\mathcal{N}$, at any point $p\in\Sigma$. The $1$-form $\vartheta$ in $H^1$, defined by $\vartheta(W) = \langle W,T\rangle$ for $W\in \Gamma(T H^1)$, can be restricted to the tangent bundle $T\Sigma$. The tensor product $\vartheta\otimes\vartheta \in \mathrm{Hom}(T_p\Sigma; T_p\Sigma)$ is defined, as a linear operator, by the formula \[ (\vartheta\otimes\vartheta) (W) = \vartheta(W) (\vartheta(X_1) X_1+\vartheta(X_2) X_2),\qquad W\in \Gamma( T\Sigma), \] where $X_1,X_2$ is any (local) orthonormal frame of $T\Sigma$. Finally, for any $H\in\mathbb R$ with $H\neq0$, let $\alpha_H\in(-\pi/4,\pi/4)$ be the angle \begin{equation} \label{alpha_H} \alpha_H= \frac 12 \arctan\Big(\frac{\tau}{H}\Big), \end{equation} and let $q_H \in\mathrm{Hom}(T _p\Sigma; T_p \Sigma)$ be the (counterclockwise) rotation by the angle $\alpha_H$ of each tangent plane $T_p\Sigma$ with $p\in\Sigma$. \begin{definition} Let $\Sigma$ be an (immersed) surface in $H^1$ with constant mean curvature $H\neq 0$. At any point $p\in\Sigma$, we define the linear operator $k\in \mathrm{Hom}(T_p\Sigma; T_p\Sigma)$ by \begin{equation} \label{ABR} k = h + \frac{2\tau^2}{\sqrt{H^2+\tau^2}} q_H \circ (\vartheta\otimes\vartheta)\circ q_H^{-1}. \end{equation} \end{definition} The operator $k$ is symmetric, i.e., $\langle k(V),W\rangle = \langle V,k(W)\rangle$. The trace-free part of $k$ is $ k _ 0 = k -\frac 12 \mathrm{tr}(k)\mathrm{Id}$. In fact, we have \begin{equation} \label{b_0} k_ 0 = h_0 + \frac{2\tau^2}{\sqrt{H^2+\tau^2}} q_H \circ (\vartheta\otimes\vartheta)_0\circ q_H^{-1}. \end{equation} Formula \eqref{ABR} is analogous to the formula for the quadratic holomorphic differential discovered in \cite{AR}. In the following, we identify the linear operators $h,k,\vartheta\otimes\vartheta$ with the corresponding bilinear forms $(V,W)\mapsto h(V,W) = \langle h(V),W\rangle$, and so on. The structure of $k$ in \eqref{ABR} can be established in the following way. Let $\Sigma_R$ be the CMC sphere with $R = 1/ \varepsilon H$. From the formula \eqref{ACCA}, we deduce that, in the frame $X_1,X_2$ in \eqref{pipo}, the trace-free shape operator at the point $(z,t) \in\Sigma_R$ is given by \begin{equation*} \label{hst} h_0 = \frac{\varrho^2 }{1+\varrho^2} \left( \begin{array}{cc} H & \tau \\ \tau & - H \end{array} \right), \end{equation*} where $\varrho = \tau \varepsilon|z|$. On the other hand, from \eqref{xixo} and \eqref{abcp}, we get \[ \vartheta(X_1)=0 \quad \textrm{ and } \quad \vartheta(X_2) =\frac{\varrho \sqrt{\tau^ 2+ H^ 2 } }{\tau\sqrt{ 1+\varrho^2}}, \] and we therefore obtain the following formula for the trace-free tensor $(\vartheta \otimes\vartheta)_0$ in the frame $X_1,X_2$: \[ (\vartheta\otimes\vartheta) _0 = \displaystyle - \frac{(\tau^2+ H^2)}{2 \tau^2 } \frac{\varrho^2 }{ 1+\varrho^2 } \left( \begin{array}{cc} 1& 0 \\ 0 & -1 \end{array} \right). \] Now, in the unknowns $c\in\mathbb R$ and $q$ (that is a rotation by an angle $\beta$), the system of equations $h_0 + c q (\vartheta\otimes\vartheta)_0 q ^{-1}=0$ holds independently of $\varrho$ if and only if $c= 2\tau^2/\sqrt{H^2+\tau^2}$ and $\beta$ is the angle in \eqref{alpha_H}. 
We record this fact in the next: \begin{proposition} \label{k_0=0_for_S_R} The linear operator $k$ on the sphere $\Sigma_R$ with mean curvature $H$, at the point $(z,t) \in \Sigma_R$, is given by \[ k = \Big( H + \frac{\varrho^2}{1+\varrho^2} \sqrt{\tau^2 + H^2} \Big) \mathrm{Id}. \] In particular, $\Sigma_R$ has vanishing $k_0$ (i.e., $k_0=0$). \end{proposition} In Theorem \ref{5.5}, we prove that \emph{any} topological sphere in $H^1$ with constant mean curvature has vanishing $k_0$. We need to work in a conformal frame of tangent vector fields to the surface. Let $z=x_1 + i x_2$ be the complex variable. Let $D\subset\mathbb C$ be an open set and, for a given map $F\in C^\infty(D;H^1)$, consider the immersed surface $\Sigma=F(D)\subset\H^1$. The parametrization $F$ is conformal if there exists a positive function $E\in C^\infty(D)$ such that, at any point in $D$, the vector fields $ V_1=F_* \frac {\partial}{ \partial x_1}$ and $V_2=F_* \frac{\partial}{\partial x_2}$ satisfy: \begin{equation} \label{eq:conf} | V_1|^2=| V_2|^2=E,\quad \lambda V_1,V_2\rangle=0. \end{equation} We call $V_1,V_2$ a conformal frame for $\Sigma$ and we denote by $\mathcal{N} $ the normal vector field to $\Sigma$ such that triple $V_1,V_2,\mathcal{N}$ forms a positively oriented frame, i.e., \begin{equation} \label{PO} \mathcal{N} = \frac{1}{E} V_1\wedge V_2. \end{equation} The second fundamental form of $\Sigma$ in the frame $V_1,V_2$ is denoted by \begin{equation} \label{eq:2ff} h= (h_{ij})_{i,j=1,2} =\begin{pmatrix} L&M\\ M&N \end{pmatrix},\quad h_{ij}=\lambda \nabla_i\mathcal{N} ,V_j\rangle, \end{equation} where $\nabla_i=\nabla_{V_i}$ for $i=1,2$. This notation differs from \eqref{ACCA}, where the fixed frame is $X_1,X_2,\mathcal{N}$. Finally, the {\em mean curvature} of $\Sigma$ is \begin{equation} \label{eq:H} H=\frac{L+N}{2E}=\frac{h_{11}+h_{22}}{2E}. \end{equation} By Hopf's technique on holomorphic quadratic differentials, the validity of the equation $k_0=0$ follows from the Codazzi's equations, which involve curvature terms. An interesting relation between the $1$-form $\vartheta$ and the Riemann curvature operator, defined as \[ R(U,V)W = \nabla_U\nabla_V W - \nabla_V\nabla_U W-\nabla_{[U,V]} W \] for any $U,V,W \in \Gamma(T H^1)$, is described in the following: \begin{lemma} \label{lem:van11} Let $V_1,V_2$ be a conformal frame of an immersed surface $\Sigma$ in $H^1$ with conformal factor $E$ and unit normal $\mathcal{N}$. Then, we have \begin{align} \label{eq:c2a} \lambda R(V_2,V_1)\mathcal{N} ,V_2\rangle= 4\tau^2 E\theta(V_1)\theta(\mathcal{N}). \end{align} \end{lemma} \begin{proof} We use the notation \begin{equation} \label{viv} \begin{split} V_i & =V_i^XX+V_i^YY+V_i^TT,\qquad i=1,2, \\ \mathcal{N} & =\mathcal{N}^XX+\mathcal{N}^YY+\mathcal{N}^TT. 
\end{split} \end{equation} From the fundamental relations \eqref{FR}, we obtain: \[ \begin{array}{lllr} \lambda R(V_2,V_1)\mathcal{N} ,V_2\rangle & = & V_2^XV_1^Y\mathcal{N}^YV_2^X\cdot(-3\tau^2)&(1)\\ && + V_2^XV_1^Y\mathcal{N}^XV_2^Y\cdot(3\tau^2)&(2)\\ && + V_2^XV_1^T\mathcal{N}^TV_2^X\cdot(\tau^2)&(3)\\ && + V_2^XV_1^T\mathcal{N}^XV_2^T\cdot(-\tau^2)&(4)\\ && + V_2^YV_1^X\mathcal{N}^XV_2^Y\cdot(-3\tau^2)&(5)\\ && + V_2^YV_1^X\mathcal{N}^YV_2^X\cdot(3\tau^2)&(6)\\ && + V_2^YV_1^T\mathcal{N}^TV_2^Y\cdot(\tau^2)&(7)\\ && + V_2^YV_1^T\mathcal{N}^YV_2^T\cdot(-\tau^2)&(8)\\ && + V_2^TV_1^X\mathcal{N}^XV_2^T\cdot(\tau^2)&(9)\\ && + V_2^TV_1^X\mathcal{N}^TV_2^X\cdot(-\tau^2)&(10)\\ && + V_2^TV_1^Y\mathcal{N}^YV_2^T\cdot(\tau^2)&(11)\\ && + V_2^TV_1^Y\mathcal{N}^TV_2^Y\cdot(-\tau^2).&(12) \end{array} \] Now, we have $(9)+(10)+(11)+(12)=0$. In fact: \[\begin{split} (9)+(11)&=\tau^2V_2^TV_2^T(V_1^X\mathcal{N} ^X+V_1^Y\mathcal{N} ^Y)=-\tau^ 2 V_2^TV_2^TV_1^T\mathcal{N} ^T,\\ (10)+(12)&=-\tau^2V_2^T\mathcal{N} ^T(V_1^XV_2^X+V_1^YV_2^Y)=\tau^2V_2^T\mathcal{N} ^TV_1^TV_2^T, \end{split} \] where we used $\lambda V_1,\mathcal{N} \rangle=\lambda V_1,V_2\rangle=0$ to deduce $V_1^X\mathcal{N} ^X+V_1^Y\mathcal{N} ^Y=-V_1^T\mathcal{N} ^T$ and $V_1^XV_2^X+V_1^YV_2^Y=-V_1^TV_2^T$. Moreover, we have $(3)+(4)+(7)+(8)=\tau^2 E V_1^T\mathcal{N} ^T$. Indeed, \[ \begin{split} (3)+(7)&=\tau^2 V_1^T\mathcal{N} ^T(V_2^XV_2^X+V_2^YV_2^Y)=\tau^2V_1^T\mathcal{N} ^T(E-V_2^TV_2^T),\\ (4)+(8)&=-\tau^2V_1^TV_2^T(V_2^X\mathcal{N} ^X+V_2^Y\mathcal{N} ^Y)=\tau^2V_1^TV_2^TV_2^T\mathcal{N} ^T, \end{split} \] where we used $\lambda V_2,V_2\rangle=E$ and $\lambda V_2,\mathcal{N} \rangle=0$ to deduce $V_2^XV_2^X+V_2^YV_2^Y=E-V_2^TV_2^T$ and $V_2^X\mathcal{N} ^X+V_2^Y\mathcal{N} ^Y=-V_2^T\mathcal{N} ^T$. Indeed, \[ \begin{split} (1)+(5)&=-3\tau^2(V_2^XV_1^Y\mathcal{N} ^YV_2^X+V_2^YV_1^X\mathcal{N} ^XV_2^Y)\\ &=3\tau^2[V_1^T\mathcal{N} ^T(V_2^XV_2^X+V_2^YV_2^Y)+V_2^XV_1^X\mathcal{N} ^XV_2^X+V_2^YV_1^Y\mathcal{N} ^YV_2^Y]\\ &=3\tau^2[V_1^T\mathcal{N} ^T(E-V_2^TV_2^T)+V_2^XV_1^X\mathcal{N} ^XV_2^X+V_2^YV_1^Y\mathcal{N} ^YV_2^Y]\\ (2)+(6)&=3\tau^2[V_2^XV_1^Y\mathcal{N} ^XV_2^Y+V_2^YV_1^X\mathcal{N} ^YV_2^X]\\ &=-3\tau^2[V_1^TV_2^T(V_2^X\mathcal{N} ^X+V_2^Y\mathcal{N} ^Y)+V_2^XV_1^X\mathcal{N} ^XV_2^X+V_2^YV_1^Y\mathcal{N} ^YV_2^Y]\\ &=-3\tau^2[-V_1^TV_2^T\mathcal{N} ^TV_2^T+V_2^XV_1^X\mathcal{N} ^XV_2^X+V_2^YV_1^Y\mathcal{N} ^YV_2^Y], \end{split} \] where we used $\lambda V_1,\mathcal{N} \rangle=\lambda V_1,V_2\rangle=0$ to deduce $V_1^Y\mathcal{N} ^Y=-V_1^X\mathcal{N} ^X-V_1^T\mathcal{N} ^T$, $V_1^YV_2^Y=-V_1^XV_2^X-V_1^TV_2^T$ and $V_1^XV_2^X=-V_1^XV_2^X-V_1^TV_2^T$. Equation \eqref{eq:c2a} follows. \end{proof} For an immersed surface with conformal frame $V_1,V_2$, we use the notation $V_iE=E_i$, $V_iH=H_i$, $V_iN=N_i$, $V_iM=M_i$, $V_iL=L_i$, $i=1,2$. \begin{theorem}[Codazzi's Equations] Let $\Sigma=F(D)$ be an immersed surface in $\H^1$ with conformal frame $V_1,V_2$, conformal factor $E$ and unit normal $\mathcal{N}$. Then, we have \begin{align} \label{eq:coda1} H_1&=\frac{1}{E}\Big\{ \frac{L_1-N_1}{2}+M_2-4\tau^2E\theta(V_1)\theta(\mathcal{N})\Big\},\\ \label{eq:coda2} H_2&=\frac{1}{E}\Big\{ \frac{N_2-L_2}{2}+M_1-4\tau^2E\theta(V_2)\theta(\mathcal{N})\Big\}, \end{align} where $L,M,N, H$ are as in \eqref{eq:2ff} and \eqref{eq:H}. 
\end{theorem} \begin{proof} We start from the following well-known formulas for the derivatives of the mean curvature: \begin{align} \label{eq:cod1} H_1&=\frac{1}{E}\Big\{ \frac{L_1-N_1}{2}+M_2+\lambda R(V_1,V_2)\mathcal{N} , V_2\rangle \Big\},\\ \label{eq:cod2} H_2&=\frac{1}{E}\Big\{ \frac{N_2-L_2}{2}+M_1+\lambda R(V_2,V_1)\mathcal{N} , V_1\rangle \Big\}. \end{align} Our claims \eqref{eq:coda1} and \eqref{eq:coda2} follow from these formulas and Lemma \ref{lem:van11}. For the reader's convenience, we give a short sketch of the proof of \eqref{eq:cod1}, see e.g.~\cite{Kling} for the flat case. For any $i,j,k=1,2$, we have \begin{equation}\label{eq:c1} V_k h_{ij}-V_i h_{kj} = \lambda R(V_k,V_i)\mathcal{N} ,V_j\rangle+\lambda\nabla_i\mathcal{N} ,\nabla_kV_j\rangle-\lambda\nabla_k\mathcal{N} ,\nabla_iV_j\rangle. \end{equation} Setting $i=j=2$ and $k=1$ in \eqref{eq:c1}, and using \eqref{eq:H} we obtain \begin{equation} \label{eq:c12}\begin{split} V_1(2EH) &=L_1+M_2+\lambda R(V_1,V_2)\mathcal{N} ,V_2\rangle +\lambda\nabla_2\mathcal{N} ,\nabla_1V_2\rangle -\lambda\nabla_1\mathcal{N} ,\nabla_2V_2\rangle. \end{split} \end{equation} Using the expression of $\nabla_i\mathcal{N}$ in the conformal frame, we find \begin{equation} \label{eq:c13} \lambda\nabla_2\mathcal{N} ,\nabla_1V_2\rangle -\lambda\nabla_1\mathcal{N} ,\nabla_2V_2\rangle =HE_1, \end{equation} and from \eqref{eq:c12} and \eqref{eq:c13} we deduce that \begin{equation} \label{eq:c14} H_1=\frac{1}{2E}\{L_1-E_1H+M_2+\lambda R(V_1,V_2)\mathcal{N} ,V_2\rangle\}. \end{equation} From \eqref{eq:H}, we have the further equation \[ L_1-E_1H =\frac{L_1-N_1}{2}+EH_1, \] that, inserted into \eqref{eq:c14}, gives claim \eqref{eq:cod1}. \end{proof} Now we switch to the complex variable $z = x_1+ i x_2 \in D$ and define the complex vector fields \[ \begin{split} Z = \frac 12 (V_1-i V_2) = F_* \Big( \frac{\partial }{\partial z}\Big), \\ \bar Z = \frac 12 (V_1+i V_2) = F_* \Big( \frac{\partial }{\partial \bar z}\Big). \end{split} \] Equations \eqref{eq:coda1}-\eqref{eq:coda2} can be transformed into one single equation: \begin{equation} \label{COMPO} E(ZH) = \bar Z\Big(\frac{L-N}{2} - i M\Big) - 4\tau^2 E\vartheta(\mathcal{N})\vartheta(Z). \end{equation} Now consider the trace-free part of $b=k-h$, i.e., \begin{equation*} \label{stio} b_0 = \frac{2\tau^2}{\sqrt{H^2+\tau^2}} q_H \circ (\vartheta\otimes\vartheta)_0\circ q_H^{-1} \end{equation*} The entries of $b_0$ as a quadratic form in the conformal frame $V_1,V_2$, with $\vartheta_i = \vartheta(V_i)$ and $ c_H =\frac{2\tau^2}{H^2+\tau^2}$, are given by \begin{equation} \label{poppo} \begin{split} A & =b_0(V_1,V_1) = c_H\Big( H\frac{\theta_1^2-\theta_2^2}{2}-\tau\theta_1\theta_2\Big), \\ B & = b_0(V_1,V_2) = c_H\Big(H\theta_1\theta_2+\tau\frac{\theta_1^2-\theta_2^2}{2}\Big). \end{split} \end{equation} These entries can be computed starting from $q_H (\vartheta\otimes\vartheta)_0 q_H^{-1} = q_H^2 (\vartheta\otimes\vartheta)_0$, where $q_H^2$ is the rotation by the angle $2\alpha_H$ that, by \eqref{alpha_H}, satisfies $\cos(2\alpha_H)=H/\sqrt{H^2+\tau^2}$ and $\sin(2\alpha_H)=\tau/\sqrt{H^2+\tau^2}$. \begin{lemma} \label{lem:van12} Let $\Sigma $ be an immersed surface in $H^1$ with constant mean curvature $H$ and unit normal $\mathcal{N}$ such that $V_1,V_2,\mathcal{N}$ is positively oriented. Then, on $\Sigma$ we have \begin{equation} \label{stix} \bar Z (A-iB) = - 4 \tau^2 E \vartheta(\mathcal{N})\vartheta(Z). 
\end{equation} \end{lemma} \begin{proof} The complex equation \eqref{stix} is equivalent to the system of real equations \begin{equation} \label{eq:c3a} \begin{split} A_1+ B_2 &= -4\tau^2 E \vartheta(\mathcal{N}) \vartheta(V_1), \\ A_2-B_1&= 4\tau^2 E \vartheta(\mathcal{N}) \vartheta(V_2), \end{split} \end{equation} where $A_i = V_i A$ and $B_i = V_i B$, $i=1,2$. We check the first equation in \eqref{eq:c3a}. Since $H$ is constant, we have \[ A_1 + B_2 = c_H H\Big\{V_1\Big(\frac{\theta_1^2-\theta_2^2}{2}\Big)+V_2(\theta_1\theta_2)\Big\} +\tau c_H \Big\{V_2\Big(\frac{\theta_1^2-\theta_2^2}{2}\Big)-V_1(\theta_1\theta_2)\Big\}, \] where \[ \begin{split} V_1\Big(\frac{\theta_1^2-\theta_2^2}{2}\Big)+V_2(\theta_1\theta_2) & = \theta_1 (V_1\theta_1+V_2\theta_2)+\theta_2 (V_2\theta_1-V_1\theta_2), \\V_2\Big(\frac{\theta_1^2-\theta_2^2}{2}\Big) - V_1(\theta_1\theta_2) & = \theta_1 (V_2\theta_1-V_1\theta_2)-\theta_2 (V_1\theta_1+V_2\theta_2). \end{split} \] For $i,j=1,2$, we have \begin{equation} \label{eq:Vtheta} V_i\theta_j= \lambda\nabla_iT,V_j\rangle+\lambda T,\nabla_iV_j\rangle, \end{equation} where, with the notation \eqref{viv} and by the fundamental relations \eqref{FR}, \begin{equation} \label{VALX} \lambda \nabla_iT, V_j\rangle = \lambda \tau V_i^XY-\tau V_i^YX, V_j\rangle = \tau V_i^XV_j^Y-\tau V_i^YV_j^X. \end{equation} From \eqref{eq:Vtheta}, \eqref{VALX}, \eqref{PO}, and \[ \nabla_2 V_1 - \nabla_1 V_2 =[V_2,V_1] = \Big[F_* \frac{\partial}{\partial x_2},F_*\frac{\partial}{\partial x_1}\Big]= F_*\Big[\frac{\partial}{\partial x_2},\frac{\partial}{\partial x_1}\Big]=0 , \] we deduce \begin{equation} \begin{split} \label{eq:a2} V_2\theta_1-V_1\theta_2&=2\tau(V_1^YV_2^X-V_1^XV_2^Y) + \lambda T,\nabla_2V_1-\nabla_1V_2\rangle=-2\tau E\theta(\mathcal{N}). \end{split} \end{equation} By the definition \eqref{eq:H} and \eqref{eq:conf}, we have \[ \nabla_1V_1 + \nabla_2V_2 =\lambda\nabla_1V_1 + \nabla_2V_2 ,\mathcal{N} \rangle\mathcal{N} =-2EH\mathcal{N}, \] and thus, again from \eqref{eq:Vtheta} and \eqref{VALX}, we obtain \begin{equation} \label{eq:a1} V_1\theta_1+V_2\theta_2 =\theta(\nabla_1V_1+\nabla_2V_2)= -2EH\vartheta(\mathcal{N}). \end{equation} From \eqref{eq:a1} and \eqref{eq:a2} we deduce that \begin{equation} \label{eq:aa} \begin{split} V_1\Big(\frac{\theta_1^2-\theta_2^2}{2}\Big)+V_2(\theta_1\theta_2) & = -2E\theta(\mathcal{N})[H\theta(V_1)+\tau\theta(V_2)], \\ V_2\Big(\frac{\theta_1^2-\theta_2^2}{2} \Big)- V_1(\theta_1\theta_2) & =- 2E\theta(\mathcal{N})[\tau \theta(V_1)-H\theta(V_2)], \end{split} \end{equation} and finally \[ \begin{split} A_1+B_2&= -2 c_H (H^2+\tau^2) E\theta(\mathcal{N})\theta(V_1) = -4\tau^2 E\vartheta(\mathcal{N})\vartheta(V_1). \end{split} \] In order to prove the second equation in \eqref{eq:c3a}, notice that \[ B_1-A_2=c_HH\Big\{V_1(\theta_1\theta_2)-V_2\Big(\frac{\theta_1^2-\theta_2^2}{2} \Big)\Big\} +c_H\tau\Big\{V_2(\theta_1\theta_2)+V_1\Big(\frac{\theta_1^2-\theta_2^2}{2} \Big)\Big\}. \] By \eqref{eq:aa} we hence obtain \[\begin{split} B_1-A_2&=c_HH\Big\{2E\theta(\mathcal{N})[\tau \theta(V_1)-H\theta(V_2)]\Big\} -c_H\tau\Big\{2E\theta(\mathcal{N})[H\theta(V_1)+\tau\theta(V_2)]\Big\}\\ &=-2c_H(H^2+\tau^2)E\theta(\mathcal N)\theta(V_1)= -4\tau^2 E\vartheta(\mathcal{N})\vartheta(V_2). \end{split} \] \end{proof} Let $\Sigma$ be an immersed surface in $H^1$ defined in terms of a conformal parametrization $F\in C^\infty(D;H^1)$. 
Let $f\in C^\infty(D;\mathbb C)$ be the function of the complex variable $z\in D$ given by \begin{equation} \label{poi} f(z) = \frac{L-N}{2} - i M + A-i B, \end{equation} where $L,M,M,A,B$ are defined as in \eqref{eq:2ff} and \eqref{poppo} via the conformal frame $V_1,V_2$ and are evaluated at the point $F(z)$. \begin{proposition} \label{prop:van1} If $\Sigma$ has constant mean curvature $H$ then the function $f$ in \eqref{poi} is holomorphic in $D$. \end{proposition} \begin{proof} From \eqref{COMPO} with $ZH=0$ and \eqref{stix}, we obtain the equation on $\Sigma = F(D)$ \[ \bar Z\Big(\frac{L-N}{2} - i M + A -iB\Big) = 0, \] that is equivalent to $\partial _{\bar z} f =0$ in $D$. \end{proof} Now, by a standard argument of Hopf, see \cite{H} Chapter VI, for topological spheres the function $f$ is identically zero. By Liouville's theorem, this follows from the estimate \[ |f(z)| \leq \frac{C}{|z|^4},\quad z\in\mathbb C, \] that can be obtained expressing the second fundamental forms in two different charts without the north and south pole, respectively. We skip the details of the proof of the next: \begin{theorem} \label{5.5} A topological sphere $\Sigma$ immersed in $H^1$ with constant mean curvature has vanishing $k_0$. \end{theorem} \begin{comment} \begin{proof} Let $N$ and $S$ be a north and south poles of $\Sigma$. There are conformal parameterizations $F:\mathbb C\to \Sigma\setminus\{N\} $ and $\widehat F: \mathbb C\to \Sigma\setminus\{S\} $ such that $\widehat F(z) = F(1/z) $ for nonzero $z\in\mathbb C$. At the point $p = F(z)$ we have \begin{equation} \label{viola} \begin{split} \frac 12 ( V_1-i V_2) & = Z(p) = F_*(z) \Big(\frac{\partial}{\partial z}\Big) = -\frac {1}{z^2}\widehat F_*(1/z) \Big(\frac{\partial}{\partial z}\Big) = -\frac {1}{z^2} \widehat Z(p) \\ &= -\frac {1}{2 z^2}( \widehat V_1-i \widehat V_2). \end{split} \end{equation} Let $L,M,N,A,B$ be the functions on $\mathbb C$ defined above starting from $F$ and let $\widehat L$,$\widehat M$,$\widehat N$,$\widehat A$,$\widehat B$ be the corresponding functions defined starting from $\widehat F$. By \eqref{viola}, there exists a constant $C_1>0$ such that, for any point $p=F(z)$ with $|z|\geq 1$, \begin{equation} \label{vieri} |L(z)| = |\langle \nabla _{V_1} \mathcal N(p), V_1(p)\rangle | \leq \frac{C_1}{|z|^4} (|\widehat L(1/z)|+|\widehat M(1/z)| +|\widehat N(1/z)|), \end{equation} and analogous estimates hold for $M,N,A,B$. Let $f,\widehat f:\mathbb C\to\mathbb C$ be the holomorphic functions defined as in \eqref{poi} starting from $F$ and $\widehat F$, respectively. By \eqref{vieri}, there exists a constant $C_2>0$ such that for all nonzero $z\in\mathbb C$ there holds \[ |f(z)| \leq \frac{C_2}{|z|^4}. \] Being holomorphic and bounded in $\mathbb C$, the function $f$ is constant and thus identically zero. The entries of $k_0$ in the conformal frame are therefore identically zero on $\Sigma$. It follows that $k_0=0$. \end{proof} \end{comment} In the rest of this section, we show how to deduce from the equation $k_0=0$ that any topological sphere is congruent to a sphere $\Sigma_R$. Differently from \cite{AR}, we do not use the fact that the isometry group of $H^1$ is four-dimensional. Let $\mathfrak h$ be the Lie algebra of $H^1$ and let $\langle\cdot,\cdot\rangle$ be the scalar product making $X,Y,T$ orthonormal. We denote by $S^2 = \{\nu\in\mathfrak h: |\nu| = \sqrt{\langle\nu,\nu\rangle}=1\}$ the unit sphere in $\mathfrak h$. 
For any $p\in H^1$, let $\tau^p: H^1\to H^1$ be the left-translation $\tau^p (q) = p^{-1}\cdot q$ by the inverse of $p$, where $\cdot$ is the group law of $H^1$, and denote by $\tau_*^p\in \mathrm{Hom}(T_p H^1; \mathfrak h)$ its differential. For any point $(p,\nu) \in H^1\times S^2$ there is a unique $\mathcal{N} \in T_p H^1$ such that $\nu = \tau^p_*\mathcal{N} $ and we define $T^\nu_p H^1 = \{ W \in T_p H^1 : \langle W,\mathcal{N}\rangle = 0 \}$. Depending on the point $(p,\nu)$ and on the parameters $H,\tau \in\mathbb R$, with $H^2+\tau^2\neq 0$, below we define the linear operator $\L_{H}\in \mathrm{Hom}( T_p^\nu H^1; T_\nu S^2)$. The definition is motivated by the proof of Proposition \ref{badpete}. For any $W\in T_p^\nu M$, we let \[ \L_{H} W = \tau_*^p\Big(H W -\frac{2\tau^2}{\sqrt{H^2+\tau^2}} q_{H} ( \vartheta\otimes\vartheta )_0 q_H^{-1} W\Big) + (\nabla_W\tau_*^p)(\mathcal{N}), \] where $\nabla_W\tau_*^p\in \mathrm{Hom}(T_p H^1; \mathfrak h)$ is the covariant derivative of $\tau_*^p$ in the direction $W$ and the trace-free operator $ ( \vartheta\otimes\vartheta )_0 \in\mathrm{Hom}( T^\nu_p H^1; T^\nu_p H^1)$ is \[ ( \vartheta\otimes\vartheta )_0 = \vartheta\otimes\vartheta -\frac 1 2 \mathrm{tr}(\vartheta\otimes\vartheta )\mathrm{Id}. \] The operator $q_{H}\in\mathrm{Hom}(T^\nu_p H^1 ; T^\nu_p H^1)$ is the rotation by the angle $\alpha_H$ in \eqref{alpha_H}. The operator $ \L_{H } $ is well-defined, i.e., $\L_{H } W \in\mathfrak h$ and $\langle \L_{H } W,\nu\rangle =0$ for any $W \in T_p^\nu H^1$. This can be checked using the identity $|\mathcal{N}|=1$ and working with the formula \[ (\nabla _ W\tau_*^p)(\mathcal{N}) = \sum_{i=1}^3 \langle \mathcal{N}, \nabla_W Y_i\rangle Y_i(0), \] where $Y_1,Y_2,Y_3$ is any frame of orthonormal left-invariant vector fields. Finally, for any point $(p,\nu) \in H^1\times S^2$, define \[ \mathcal E_{H }(p,\nu) = \big\{ (W, \mathcal L_{H} W) : W\in T_p^\nu H^1\big\}\subset T_{p} H^1\times T_{ \nu } S^2 . \] Then $(p,\nu)\mapsto \mathcal E_{H }(p,\nu)$ is a distribution of two-dimensional planes in $H^1\times S^2$. The distribution $\mathcal E_{H }$ origins from CMC surfaces with mean curvature $H$ and vanishing $k_0$. Let $\Sigma$ be a smooth oriented surface immersed in $H^1$ given by a parameterization $F\in C^\infty(D; H^1)$ where $D\subset\mathbb C$ is an open set. We denote by $\mathcal{N}(F(z)) \in T_p H^1$, with $p=F(z)$, the unit normal of $\Sigma$ at the point $z \in D$. The normal section is given by the mapping $G: D\to S^2$ defined by $G(z) = \tau_*^{F(z)} \mathcal{N}(F(z))$, and we can define the Gauss section $\Phi:D \to H^1\times S^2$ letting $\Phi(z) = (F(z), G(z))$. Then $\overline{\Sigma} = \Phi(D)$ is a two-dimensional immersed surface in $H^1\times S^2$, called the \emph{Gauss extension} of $\Sigma$. \begin{proposition}\label{badpete} Let $\Sigma$ be an oriented surface immersed in $H^1$ with constant mean curvature $H$ and vanishing $k_0$. Then the Gauss extension $\overline{\Sigma}$ is an integral surface of the distribution $\mathcal E_{H}$ in $H^1\times S^2$. \end{proposition} \begin{proof} Let $\mathcal{N}$ be the unit normal to $\Sigma$. For any tangent section $W\in \Gamma( T\Sigma)$, we have \[ \begin{split} W ( \tau^F_* ( \mathcal{N} ) ) & = \tau^F_* (\nabla_W \mathcal{N} ) + (\nabla_W \tau_*^F)(\mathcal{N}) \\ & = \tau^F_* ( h(W)) + (\nabla_W \tau_*^F)(\mathcal{N}), \end{split} \] where $h(W) = \nabla_W\mathcal{N}$ is the shape operator. 
Therefore, the set of all sections of the tangent bundle of $\overline{\Sigma}$ is \[ \Gamma( T\overline{\Sigma}) = \Big\{\Big( W, \tau ^F_*( h(W) )+(\nabla_W \tau_*^F) (\mathcal{N}) \Big) : W \in \Gamma(T\Sigma) \Big\}. \] The equation $ k_0=0$ is equivalent to $ h = H\mathrm{Id} -b_0$ where, by \eqref{b_0}, \[ b_0 =\frac{2 \tau^2}{\sqrt{H^2+\tau^2}} q_{H } \Big(\vartheta\otimes\vartheta - \frac{\mathrm{tr}(\vartheta\otimes\vartheta)}{2} \mathrm{Id}\Big)q_{H } ^{-1}, \] and thus the sections of $\overline{\Sigma}$ are of the form \[ (W,\mathcal L_{H} W) \in \Gamma(T\overline{\Sigma})\quad \textrm{with}\quad W \in \Gamma(T\Sigma). \] This concludes the proof. \end{proof} \begin{theorem}\label{5.9} Let $\Sigma$ be a topological sphere in $ H^1$ with constant mean curvature $H$. Then there exist a left translation $\iota $ and $R>0$ such that $\iota(\Sigma) = \Sigma_R$. \end{theorem} \begin{proof} Let $H>0$ be the mean curvature of $\Sigma$, let $R=1/ H\varepsilon$, and recall that the sphere $\Sigma_R$ has mean curvature $H$. Let $T^\Sigma(p) \in T_p\Sigma$ be the orthogonal projection of the vertical vector field $T$ onto $T_p\Sigma$. Since $\Sigma$ is a topological sphere, there exists a point $p\in\Sigma$ such that $T^\Sigma(p) = 0$. This implies that either $T=\mathcal{N}$ or $T=-\mathcal{N}$ at the point $p$, where $\mathcal{N}$ is the outer normal to $\Sigma$ at $p$. Assume that $T = \mathcal{N}$. Let $\iota $ be the left translation such that $\iota(p) = N$, where $N$ is the north pole of $\Sigma_R$. At the point $N$ the vector $T $ is the outer normal to $\Sigma_R$. Since $\iota_* T = T$ (this holds for any isometry), we deduce that $\Sigma_R$ and $\iota(\Sigma)$ are two surfaces such that: \begin{itemize} \item [i)] They have both constant mean curvature $H$. \item [ii)] They have both vanishing $k_0$, by Proposition \ref{k_0=0_for_S_R} and Theorem \ref{5.5}. \item [iii)] $N\in \Sigma_R\cap \iota(\Sigma)$ with the same (outer) normal at $N$. \end{itemize} Let $M_1=\overline{\Sigma}_R$ and $M_2=\overline{\iota(\Sigma)}$ be the Gauss extensions of $\Sigma_R$ and $\iota(\Sigma)$, respectively. Let $\nu =\tau^N_*\mathcal{N} \in S^2$. From i), ii) and Proposition \ref{badpete} it follows that $M_1$ and $M_2$ are both integral surfaces of the distribution $\mathcal E_{H}$. From iii), it follows that $(N,\nu)\in M_1\cap M_2$. Being the two surfaces complete, this implies that $M_1=M_2$ and thus $\Sigma_R = \iota(\Sigma)$. \end{proof} \section{Quantitative stability of $\Sigma_R$ in vertical cylinders} \setcounter{equation}{0} \label{SEI} In this section, we prove a quantitative isoperimetric inequality for the CMC spheres $\Sigma_R$ with respect to compact perturbations in vertical cylinders, see Theorem \ref{thm:quant}. This is a strong form of stability of $\Sigma_R$ in the northern and southern hemispheres. A CMC surface $\Sigma$ in $H^1$ with normal $\mathcal{N}$ is stable in an open region $A\subset\Sigma$ if for any function $g \in C^\infty_c(A)$ with $\int_\Sigma g d\mathcal{A}=0$, where $\mathcal{A}$ is the Riemannian area measure of $\Sigma$, we have \[ \mathcal S(g) = \int_\Sigma \big\{|\nabla g |^2 - \big(|h|^2+\mathrm{Ric}(\mathcal{N})\big)g^2\big\} d\mathcal{A} \geq 0. \] The functional $\mathcal S(g)$ is the second variation, with fixed volume, of the area of $\Sigma$ with respect to the infinitesimal deformation of $\Sigma$ in the direction $g\mathcal{N}$. 
Above, $|\nabla g|$ is the length of the tangential gradient of $g$, $|h|^2$ is the squared norm of the second fundamental form of $\Sigma$ and $\mathrm{Ric}(\mathcal{N})$ is the Ricci curvature of $H^1$ in the direction $\mathcal{N}$. The Jacobi operator associated with the second variation functional $\mathcal S$ is \begin{equation*} \label{Jacobi} \mathcal L g = \Delta g + (|h|^2 +\mathrm{Ric}(\mathcal{N}))g , \end{equation*} where $\Delta $ is the Laplace-Beltrami operator of $\Sigma$. As a consequence of Theorem 1 in \cite{FCS}, if there exists a strictly positive solution $g\in C^\infty(A)$ to equation $\mathcal L g =0$ on $A$, then $\Sigma$ is stable in $A$ (even without the restriction $\int_A gd\mathcal A=0$). Now consider in $H^1$ the right-invariant vector fields \begin{equation*}\label{XYTright} \widehat X = \frac 1 \epsilon\Big( \frac{\partial}{\partial x}-\sigma y \frac{\partial}{\partial t}\Big), \quad \widehat Y = \frac 1 \epsilon\Big( \frac{\partial}{\partial y} + \sigma x\frac{\partial}{\partial t}\Big), \quad\textrm{and}\quad \widehat T = \epsilon^2 \frac{\partial}{\partial t}. \end{equation*} These are generators of left-translations in $H^1$, and the functions \[ g_{\widehat X} = \langle \widehat X,\mathcal{N}\rangle,\quad g_{\widehat Y} = \langle \widehat Y,\mathcal{N}\rangle,\quad g_{\widehat T} = \langle \widehat T,\mathcal{N}\rangle \] are solutions to $\mathcal L g=0$. By the previous discussion, the CMC sphere $\Sigma_R$ is stable in the hemispheres \[ \begin{split} A _{ {\widehat X}} & = \big\{ (z,t) \in \Sigma_R : g_{\widehat X}>0\big\}, \\ A _{ {\widehat Y}} & = \big\{ (z,t) \in \Sigma_R : g_{\widehat Y}>0\big\}, \\ A _{ {\widehat T}} & = \big\{ (z,t) \in \Sigma_R : g_{\widehat T}>0\big\}. \end{split} \] In particular, $\Sigma_R$ is stable in the northern hemisphere $A _{ {\widehat T}} = \{ (z,t) \in \Sigma_R : t>0\}$. In fact, we believe that the whole $\Sigma_R$ is stable. Actually, this would follow from the isoperimetric property for $\Sigma_R$. The proof of the stability of $\Sigma_R$ requires a deeper analysis and it is not yet clear. However, in the case of the northern (or southern) hemisphere we can prove a strong form of stability in terms of a quantitative isoperimetric inequality. Some stability results in various sub-Riemannian settings have been recently obtained in \cite{Mo,HR1,HR2}. For $R>0$, let $E_R\subset H^1$ be the open domain bounded by the CMC sphere $\Sigma_R$, \[ E_R=\{(z,t)\in\H^1 : |t|<f(|z|;R),\ |z|<R\}, \] where $f(\cdot;R)$ is the profile function of $\Sigma_R$ in \eqref{fuf}. For $0 \leq \delta<R$, we define the half-cylinder \[ C_{R,\delta}=\{(z,t)\in\H^1 : |z|<R\text{ and }t> t_{R,\delta}\}, \] where $t_{R,\delta}=f(r_{R,\delta};R)$ and $r_{R,\delta}=R-\delta$. In the following, we use the short notation \begin{equation} \begin{split} \label{eq:k} k_{R\varepsilon\tau} & = {\varepsilon^3\omega(R)\sqrt{R}}, \\ C_{R\varepsilon\tau} & = \frac{1}{4\pi\varepsilon R^3(Rk_{R\varepsilon\tau} +f(0; R))}, \\ D_{R\varepsilon\tau} & = \frac{1}{12\varepsilon\pi^2 R^5(4Rk_{R\varepsilon\tau}^2+ f(0;R)^2)} . \end{split} \end{equation} We denote by $\mathcal{A}$ the Riemannian surface-area measure in $H^1$. \begin{theorem} \label{thm:quant} Let $R>0$, $0\leq \delta<R$, $\epsilon>0$, and $\tau \in\mathbb R$ be as in \eqref{tau-sigma}. Let $E\subset \H^1$ be a smooth open set such that $\mathcal L^3(E) =\mathcal L^3( E_R)$ and $\Sigma = \partial E$. 
\begin{itemize} \item [(i)] If $E\Delta E_R \subset\subset C_{R,\delta}$ with $0<\delta<R$ then we have \begin{equation} \label{TP2} \mathcal{A}(\Sigma ) -\mathcal{A}( \Sigma_R) \geq \sqrt{\delta} C_{R\varepsilon\tau} \mathcal L^{3}(E\Delta E_R)^2. \end{equation} \item [(ii)] If $E\Delta E_R \subset\subset C_{R,0}$ then we have \begin{equation} \label{TP} \mathcal{A}(\Sigma) - \mathcal{A}(\Sigma_R) \geq D_{R\varepsilon\tau} \mathcal L^{3}(E\Delta E_R)^3. \end{equation} \end{itemize} \end{theorem} \begin{remark}\label{6.2} When $\Sigma\subset H^1$ is a $t$-graph, $\Sigma = \{ (z,f(z)) \in H^1: z\in D\}$ for some $f\in C^1(D)$, from \eqref{AER1} and \eqref{AER2} we see that the Riemannian area of $\Sigma $ is \[ \mathcal A(\Sigma) = \frac 1 \epsilon \int_{D } \sqrt {\epsilon ^6+ |\nabla f| ^ 2 + \sigma^2 |z|^2+2\sigma(xf_y-y f_x)}\ dz , \] and so \[ \lim_{\epsilon\to 0} \epsilon \mathcal A(\Sigma) = \int_{D } \sqrt { |\nabla f| ^ 2 + \sigma^2 |z|^2+2\sigma(xf_y-y f_x)}\ dz. \] The integral in the right-hand side is the sub-Riemannian area of $\Sigma$. On the other hand, the constants $C_{R\epsilon\tau}$ and $D_{R\epsilon\tau}$ in \eqref{eq:k} are also asymptotic to $1/\epsilon$. Thus, multiplied by $\epsilon$, inequalities \eqref{TP2} and \eqref{TP} pass to the sub-Riemannian limit, see \cite{FLM}. \end{remark} The proof of Theorem \ref{thm:quant} is based on the foliation of the cylinder $C_{R,\delta}$ by a family of CMC surfaces with quantitative estimates on the mean curvature. \begin{theorem} \label{thm:folsub} For any $R>0$ and $0\leq\delta<R$, there exists a continuous function $u: C_{R,\delta}\to\mathbb R$ with level sets $S_\lambda=\big\{(z,t)\in C_{R,\delta} : u(z,t)=\lambda\big\}$, $\lambda\in\mathbb R$, such that the following claims hold: \begin{itemize} \item[(i)] $u\in C^1(C_{R,\delta}\cap E_R) \cap C^1(C_{R,\delta}\setminus E_R)$ and the normalized Riemannian gradient $\nabla u/|\nabla u|$ is continuously defined on $C_{R,\delta}$. \item[(ii)] $\bigcup _{\lambda>R} S_\lambda =C_{R,\delta} \cap E_R$ and $\bigcup _{ \lambda\leq R} S_\lambda =C_{R,\delta} \setminus E_R$. \item[(iii)] Each $S_\lambda$ is a smooth surface with constant mean curvature $H_{\lambda}=1/(\varepsilon \lambda)$ for $\lambda>R$ and $H_{\lambda} = 1/(\varepsilon R)$ for $\lambda\leq R$. \item[(iv)] For any point $(z,f(|z|;R)-t) \in S_\lambda$ with $\lambda>R$ we have \begin{equation} \label{H_R_2} 1-\varepsilon R H_{\lambda} (z,f(|z|;R)-t)\geq \frac{t^2}{4Rk_{R\varepsilon\tau}^2+ f(0;R)^2} , \quad \textrm{when }\delta=0, \end{equation} and \begin{equation} \label{H_R_1} 1-\varepsilon R H_{ \lambda} (z,f(|z|;R)-t)\geq \frac{ \sqrt{\delta} t} {Rk_{R\varepsilon\tau}+ f(0; R)}, \quad \textrm{when }0<\delta<R. \end{equation} \end{itemize} \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:folsub}] For points $(z,t) \in C_{R ,\delta}\setminus E_R$ we let \begin{equation*} \label{eq:uab} u(z,t)=f(|z|; R)-t+R. \end{equation*} Then $u$ satisfies $u(z,t)\leq R$ for $ t \geq f(|z|;R)$ and $u(z,t)=R$ if $t=f(|z|;R)$. In order to define $u$ in the set $C_{R,\delta}\cap E_R$, for $ 0\leq r<r_{R,\delta}$, $t_{R,\delta}<t<f(r; R)$, and $\lambda>R$ we consider the function \begin{equation}\label{fulla} F (r,t,\lambda)=f(r;\lambda)-f(r_{R,\delta};\lambda)+t_{R,\delta}-t . \end{equation} The function $F$ also depends on $\delta$. We claim that for any point $(z,t)\in C_{R,\delta}\cap E_R$ there exists a unique $\lambda>R$ such that $F (|z|,t,\lambda)=0$. 
In this case, we can define \begin{equation} \label{eq:ubel} u(z,t)=\lambda\quad\text{if and only if}\quad F (|z|,t,\lambda)=0. \end{equation} We prove the previous claim. Let $(z,t)\in C_{R,\delta}\cap E_R$ and use the notation $r=|z|$. First of all, we have \begin{equation} \label{eq:cla} \lim_{\lambda\to R^+}F (r,t,\lambda)=f(r;R)-t>0. \end{equation} We claim that we also have \begin{equation} \label{eq:claim1} \lim_{\lambda\to\infty}F (r,t,\lambda)=t_{R,\delta}-t<0. \end{equation} To prove this, we let $f(r;\lambda)-f(r_{R,\delta};\lambda)=\frac{\varepsilon^2}{2\tau}[ f_1(\lambda)+f_2(\lambda)]$, where \[\begin{split} f_1(\lambda)&=\omega(\lambda)^2\Big[ \arctan(p(r;\lambda))-\arctan(p(r_{R,\delta};\lambda))\Big],\\ f_2(\lambda)&=\omega(r)^2 p(r;\lambda)-\omega(r_{R,\delta})^2 p(r_{R,\delta};\lambda). \end{split} \] Using the asymptotic approximation \[ \arctan(s)=\frac{\pi}{2}-\frac{1}{s}+\frac{1}{3s^3}+o\Big(\frac{1}{s^3}\Big), \quad\text{as }s\to\infty, \] we obtain for $\lambda\to\infty$ \[ \begin{split} f_1(\lambda)&=\lambda\varepsilon\tau(\omega(r_{R,\delta})-\omega(r))+o(1), \\ f_2(\lambda)&=\lambda\varepsilon\tau(\omega(r)-\omega(r_{R,\delta}))+o(1), \end{split} \] and thus $f(r;\lambda)-f(r_{R,\delta};\lambda)= o(1)$, where $o(1)\to0$ as $\lambda\to\infty$. Since $\lambda\mapsto F (r,t,\lambda)$ is continuous, \eqref{eq:cla} and \eqref{eq:claim1} imply the existence of a solution $\lambda$ of $F (r,t,\lambda)=0$. The uniqueness follows from $\partial_\lambda F(r,t,\lambda) <0$. This inequality can be proved starting from \eqref{f_R} and we skip the details. This finishes the proof of our initial claim. Claims (i) and (ii) can be checked from the construction of $u$. Claim (iii) follows, by Theorem \ref{3.1}, from the fact that $S_\lambda$ for $\lambda>R$ is a vertical translation (this is an isometry of $H^1$) of the $t$-graph of $z\mapsto f(|z|;\lambda)$. We prove Claim (iv). For any $(z,t)\in H^1$ such that $r=|z|<r_{R,\delta}$ and $0\leq t<f(r;R)-t_{R,\delta}$, we define \begin{equation} \label{pippo} g_z(t)=u(z,f(r;R)-t) =\lambda, \end{equation} where $\lambda\geq R$ is uniquely determined by the condition $(z,f(r;R)-t)\in S_\lambda$. Notice that $g_z(0)=u(z,f(r;R))=R$. We estimate the derivative of the function $t\mapsto g_z(t)$. From the identity $F(r,t,u(z,t)) = 0$, see \eqref{eq:ubel}, we compute $ \partial _t u (z,t) = (\partial_\lambda F (r,t,u(z,t)) )^{-1}$ and so, also using \eqref{fulla}, we find \begin{equation} \label{GG1} g'_z(t) = - \partial_t u (z, f(r;R)-t) = \frac{-1}{\partial_\lambda F (r,f(r;R)-t,g_z(t))}. \end{equation} Now from \eqref{palix} we compute \begin{equation} \label{GG2} \begin{split} \partial _\lambda F (r,t,\lambda) & = -\varepsilon^ 3 \lambda \int_r^{r_{R,\delta}} \frac{s\omega(s)}{(\lambda^2-s^2)^{3/2}}ds \\ & \geq -\varepsilon^ 3 \lambda \omega(r_{R,\delta}) \int_0^{r_{R,\delta}} \frac{s }{(\lambda^2-s^2)^{3/2}}ds \\ & = -\varepsilon^ 3 \omega(r_{R,\delta})\left[\frac{\lambda }{\sqrt{\lambda^2 - r_{R,\delta} ^2 } } -1\right ] \\ & \geq -\varepsilon^3 \omega(R) \frac{\sqrt{R}} {\sqrt{\lambda - r_{R,\delta}}}. \end{split} \end{equation} In the last inequality, we used $r_{R,\delta} <R\leq \lambda$. From \eqref{GG1}, \eqref{GG2} and with $k_{R\varepsilon\tau}$ as in \eqref{eq:k}, we deduce that \begin{equation} \label{eq:disfz} g_z'(t)\geq \frac{1} {k_{R\varepsilon\tau}} \sqrt{g_z(t)-r_{R,\delta}}. \end{equation} In the case $\delta=0$, \eqref{eq:disfz} reads $ g_z'(t)\geq \sqrt{g_z(t)-R} / k_{R\varepsilon\tau}$. 
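To spell out the integration step (a routine computation which we record here for convenience): since $g_z(0)=R$, setting $\phi(t)=g_z(t)-R$ turns the inequality into $\phi'(t)\geq\sqrt{\phi(t)}/k_{R\varepsilon\tau}$, and, formally, away from $\phi=0$,
\[
\frac{d}{dt}\sqrt{\phi(t)}=\frac{\phi'(t)}{2\sqrt{\phi(t)}}\geq\frac{1}{2k_{R\varepsilon\tau}},
\qquad\text{so that}\qquad
\sqrt{g_z(t)-R}\geq\frac{t}{2k_{R\varepsilon\tau}}.
\]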
Squaring, we obtain $ g_z(t)\geq R+ {t^2}/ ({4 k_{R\varepsilon\tau}^2})$, and thus \[ 1-\varepsilon R H_{\lambda}(z,f(r;R)-t) =1-\frac{R}{g_z(t)} \geq\frac{t^2}{4Rk_{R\varepsilon\tau}^2+f(0;R)^2}, \] that is Claim \eqref{H_R_2}. If $0<\delta<R$, \eqref{eq:disfz} implies $g_z'(t)\geq \sqrt{\delta}/k_{R\varepsilon\tau}$ and an integration (again with $g_z(0)=R$) gives $g_{z}(t)\geq R+\sqrt{\delta}\, t/k_{R\varepsilon\tau}$. Then we obtain \[ 1-\varepsilon R H_{\lambda}(z,f(r;R)-t)=1-\frac{R}{g_z(t)} \geq\frac{\sqrt{\delta}}{Rk_{R\varepsilon\tau}+f(0;R)}t, \] that is Claim \eqref{H_R_1}. \end{proof} We can now prove Theorem \ref{thm:quant}, the last result of the paper. The proof follows the lines of \cite{FLM}. \begin{proof}[Proof of Theorem \ref{thm:quant}] Let $u:C_{R,\delta}\to\mathbb R$, $0\leq\delta<R$, be the function constructed in Theorem \ref{thm:folsub} and let $S_\lambda=\{(z,t) \in C_{R,\delta} :u(z,t)=\lambda\}$, $\lambda\in\mathbb R$, be the leaves of the foliation. Let $\nabla u$ be the Riemannian gradient of $u$. The vector field \[ V(z,t)=-\frac {\nabla u(z,t)}{|\nabla u(z,t)|},\quad (z,t) \in C_{R,\delta}, \] satisfies the following properties: \begin{itemize} \item[i)] $|V|= 1$. \item[ii)] For $(z,t)\in\Sigma_R \cap C_{R,\delta}$ we have $V(z,t)= \nu_{ \Sigma _R}(z,t)$, where $\nu_{\Sigma_R} = \mathcal{N}$ is the exterior unit normal to $\Sigma_R$. \item[iii)] For any point $(z,t) \in S_\lambda $, $\lambda\in\mathbb R$, the Riemannian divergence of $V$ satisfies \begin{equation} \label{INEQ} \begin{split} &\frac{1}{2}\mathrm{div} V(z,t) =H_{\lambda}(z,t) \leq \frac{1}{\varepsilon R} \quad \text{for }\lambda>R, \\ &\frac{1}{2}\mathrm{div} V(z,t) =H_{\lambda}(z,t) = \frac{1}{\varepsilon R}\quad\text{for }\lambda\leq R. \end{split} \end{equation} \end{itemize} Let $\nu_\Sigma$ be the exterior unit normal to the surface $\Sigma=\partial E$. By the Gauss-Green formula and \eqref{INEQ} it follows that \begin{align} \label{eq:E-F}\nonumber \mathcal L^3(E_R\setminus E) & \geq \frac{\varepsilon R}{2}\int_{E_R\setminus E}\div V\;d\mathcal L^3 \\ \nonumber &=\frac{\varepsilon R}{2}\Big(\int_{\Sigma_R \setminus \bar E}\langle V,\nu_{\Sigma_R}\rangle\;d\mathcal{A} -\int_{\Sigma \cap E_R} \langle V,\nu_{\Sigma}\rangle\;d\mathcal{A} \Big) \\ \nonumber &\geq \frac{\varepsilon R}{2}\big( \mathcal{A}(\Sigma_R \setminus \bar E )-\mathcal{A}(\Sigma \cap E_R)\big). \end{align} In the last inequality we used the Cauchy-Schwarz inequality and the fact that $\langle V,\nu_{\Sigma _R}\rangle=1$ on $\Sigma_R\setminus \bar E$. By a similar computation we also have \begin{align} \nonumber \mathcal L^3(E\setminus E_R)& =\frac{\varepsilon R}{2}\int_{E\setminus E_R} \div V\;d\mathcal L^3 \\ \nonumber& =\dfrac{\varepsilon R}{2}\left\{\int_{ \Sigma\setminus \bar E_R}\langle V,\nu_{ \Sigma }\rangle d\mathcal{A} -\int_{\Sigma_R \cap E}\langle V,\nu_{\Sigma_R}\rangle d\mathcal{A} \right\} \\ \nonumber &\leq \frac{\varepsilon R}{2}\big( \mathcal{A}( \Sigma \setminus \bar E_R) -\mathcal{A} (\Sigma_R\cap E)\big). 
\end{align} Using the inequalities above and the fact that $\mathcal L^3(E) =\mathcal L^3(E_R)$, it follows that: \[ \begin{split} \frac{\varepsilon R}{2}\big(\mathcal{A} (\Sigma_R \setminus \bar E)-\mathcal{A}(\Sigma \cap E_R)\big) &\leq \frac{\varepsilon R}{2}\int_{E_R\setminus E}\div V\;d\mathcal L^3 \\ &=\mathcal L^3(E\setminus E_R)-\int_{E_R\setminus E}\Big(1-\frac{\varepsilon R}{2}\div V\Big)\;d\mathcal L^3 \\ &\leq \frac{\varepsilon R}{2}\big(\mathcal{A}(\Sigma \setminus \bar E_R) -\mathcal{A}(\Sigma_R\cap E) \big) -\mathcal G(E_R\setminus E), \end{split}\] where we let \[ \mathcal G( E_R\setminus E)= \int_{ E_R\setminus E}\Big( 1-\dfrac{\varepsilon R}{2}\div V\Big)\;d\mathcal L^3. \] Hence, we obtain \begin{equation} \label{F_E} \mathcal{A}(\Sigma) - \mathcal{A}(\Sigma_R) \geq \frac{2}{\varepsilon R}\mathcal G( E_R\setminus E). \end{equation} For any $z$ with $|z|<R-\delta$, we define the vertical sections $ E_R^z =\{t\in\mathbb R : (z,t) \in E_R\}$ and $E^z = \{t\in\mathbb R:(z,t) \in E\}$. By the Fubini-Tonelli theorem, we have \[ \begin{split} \mathcal G( E_R\setminus E) & = \int _{\{|z|<R\}} \int_{ E_R^z\setminus E^z} \Big( 1-\dfrac{\varepsilon R}{2}\div V(z,t)\Big)dt \,dz. \end{split} \] The function $t\mapsto \div V(z,t)$ is increasing, and thus letting $m(z) = \mathcal L^1 ( E_R^z\setminus E^z)$, by monotonicity we obtain \[ \begin{split} \mathcal G( E_R\setminus E) & \geq \int_{\{|z|<R\}} \int_{f(|z|;R)-m(z)}^{f(|z|;R)} \Big( 1-\dfrac{\varepsilon R}{2}\div V(z,t) \Big)dt \,dz \\& = \int_{\{|z|<R\}} \int_{0}^{m(z)} \left(1-\frac{R}{g_z(t)} \right) dt \,dz, \end{split} \] where $g_z(t) = u(z, f(|z|;R)-t)$ is the function introduced in \eqref{pippo}. When $\delta=0$, by the inequality \eqref{H_R_2} and by the H\"older inequality we find \begin{equation}\label{pix} \begin{split} \mathcal G( E_R\setminus E) & \geq \frac{1}{4Rk_{R\varepsilon\tau}^2+f(0;R)^2} \int_{\{|z|<R\}} \int_{0}^{m(z)} t^2 dt \,dz \\ & \geq \frac{1}{24\pi^2R^4(4Rk_{R\varepsilon\tau}^2+f(0;R)^2) } \mathcal L^3 ( E\Delta E_R) ^3. \end{split} \end{equation} In the second step we used that $\int_{\{|z|<R\}} m(z)\,dz=\mathcal L^3(E_R\setminus E)=\tfrac{1}{2}\mathcal L^3(E\Delta E_R)$ together with the H\"older inequality $\big(\int_{\{|z|<R\}} m\,dz\big)^3\leq (\pi R^2)^2\int_{\{|z|<R\}} m^3\,dz$; the analogous estimate with exponent $2$ is used in \eqref{pox} below. From \eqref{pix} and \eqref{F_E} we obtain \eqref{TP}. By \eqref{H_R_1}, when $0<\delta<R$ the function $g_z$ satisfies the estimate $1-R/g_z(t) \geq (\sqrt{\delta}/(Rk_{R\varepsilon\tau}+f(0;R)))t$ and we find \begin{equation}\label{pox} \begin{split} \mathcal G( E_R\setminus E) & \geq \frac{\sqrt{\delta}} {Rk_{R\varepsilon\tau}+f(0;R)} \int_{\{|z|<R\}} \int_{0}^{m(z)} t \, dt \,dz \\ & \geq \frac{\sqrt{\delta}}{8\pi R^2(Rk_{R\varepsilon\tau}+f(0;R))} \mathcal L^3 ( E\Delta E_R) ^2. \end{split} \end{equation} From \eqref{pox} and \eqref{F_E} we obtain Claim \eqref{TP2}. \end{proof}
\section{Introduction} For a metric space $(X,d)$ with a notion of volume and boundary, an isoperimetric inequality gives a lower bound on the boundary of a set of fixed volume. Ideally, for any fixed volume, it produces a set of that volume with minimal boundary. The most well-known isoperimetric inequality states that, in Euclidean space, the unique set of fixed volume with minimal boundary is the Euclidean ball. A graph $G = (V,E)$ can be defined as a metric space in the usual way: for $u, v \in V$, \begin{equation*} d(u,v) = \text{the length of the shortest path from } u \text{ to } v. \end{equation*} For a graph, an isoperimetric inequality gives a lower bound on the boundary of a set $A \subset V$ of a given size. The term ``boundary'' here can be interpreted in two standard ways: the vertex boundary or the edge boundary. The vertex boundary is typically defined as follows: \begin{equation*} \partial A = \{v \in V: d(v,A) \leq 1\} \end{equation*} where \begin{equation*} d(x,A) = \inf_{a \in A} d(x,a) = \inf_{a \in A}\{\text{the length of the shortest path from } x \text{ to } a\} \end{equation*} In words: the vertex boundary of $A$ is the set $A$ itself, along with all of the neighbors of $A$. The vertex boundary of various graphs has been studied in \cite{MR0200192}, \cite{DiscTor}, \cite{MR1082843}, \cite{MR1612869}, \cite{MR2946103}, and others. In this paper, we use another definition for boundary: the edge boundary. The edge boundary is defined as follows: \begin{equation*} \partial_e(A) = \{(x,y) \in E: |A \cap \{x,y\}| = 1\} \end{equation*} In words: the edge boundary of $A$ is the set of edges exiting the set $A$. Many different types of graphs have been studied in terms of the edge isoperimetric question, see for example \cite{MR1137765}, \cite{MR1863367}, \cite{MR1357256}, \cite{MR1755430}, \cite{MR1909858}, \cite{MR2021742}. Although the vertex and edge isoperimetric inequalities have similar statements, often the resulting optimal sets (and thus the techniques used in their proofs) are quite different. Indeed, this will be the case for the family of graphs that we consider: $G_n = ({\mathbb{Z}}^n, E_\infty)$. For $n \in \N$, the vertices of $G_n$ are the integer points ${\mathbb{Z}}^n$ in ${\mathbb{R}}^n$. The edges $E_\infty$ are between pairs of points whose $\ell_\infty$-distance is 1: \begin{equation*} E_\infty = \{(u,v) : u, v \in {\mathbb{Z}}^n,\ ||u-v||_\infty = 1\} \end{equation*} where if $u = (u_1, u_2, \dots, u_n), v = (v_1, v_2, \dots, v_n)$, then \begin{equation*} ||u-v||_\infty = \max_{i=1, 2, \dots, n} \{|u_i-v_i|\} \end{equation*} In \cite{MR2946103}, the author and A.J. Radcliffe gave the vertex isoperimetric inequality for $({\mathbb{Z}}^n, E_\infty)$. The sets of minimum vertex boundary are nested, and the technique of compression was used to prove this. Compression relies heavily on the fact that sets of minimum boundary are nested. Discussions of compression as a technique in discrete isoperimetric problems can be found in \cite{MR2035509}, \cite{MR1444247}, \cite{MR1455181}, and \cite{MR1082842}. As was shown in \cite{MR2946103}, sets of size $k^2$ with minimum vertex boundary in $({\mathbb{Z}}^2, E_\infty)$ are squares of side length $k$. In addition, if a set has size which is not a perfect square, then the set achieving optimality will be a rectangular box, or a rectangular box with a strip on one side of the box. Optimal sets of sizes 1 through 8 in ${\mathbb{Z}}^2$ are shown in Figure \ref{Boxes}. 
\begin{figure}[htbp] \centering \includegraphics[width = 1.2in]{Box1.jpg} \hspace{.1 cm} \includegraphics[width = 1.2in]{Box2.jpg} \includegraphics[width = 1.2in]{Box3.jpg} \includegraphics[width = 1.2in]{Box4.jpg} \includegraphics[width = 1.2in]{Box5.jpg} \includegraphics[width = 1.2in]{Box6.jpg} \includegraphics[width = 1.2in]{Box7.jpg} \includegraphics[width = 1.2in]{Box8.jpg} \caption{Sets of sizes 1 through 8 of minimal vertex boundary in ${\mathbb{Z}}^2$} \label{Boxes} \end{figure} However, these are not the types of sets which achieve optimal edge boundary. For example, Figure \ref{Edge1_1} shows a set of size 12 with 36 outgoing edges. It also has 20 vertex neighbors. In contrast, Figure \ref{Edge1_2} shows a set of size 12 with 38 outgoing edges and 18 vertex neighbors. The set in Figure \ref{Edge1_2} has minimal vertex boundary, but in comparison to the set in Figure \ref{Edge1_1}, it cannot have minimal edge boundary. \begin{figure}[htbp] \begin{center} \subfigure[Vertices: 12, Vertex Boundary: 20, Edge Boundary:36]{\label{Edge1_1}\includegraphics[width=2.3 in]{Edge1_1.jpg}} \hspace{1 in} \subfigure[Vertices: 12, Vertex Boundary: 18, Edge Boundary:38]{\label{Edge1_2}\includegraphics[width=2.3 in]{Edge1_2.jpg}} \end{center} \caption{The blue points represent the vertices in the set, the red points are their vertex neighbors in $({\mathbb{Z}}^n, E_\infty)$. Note that \ref{Edge1_1} has a larger vertex boundary than \ref{Edge1_2} but a smaller edge boundary} \label{Edge1} \end{figure} An example in the literature of differing optimal sets when considering vertex boundary versus edge boundary can be found in the graph $({\mathbb{Z}}_m^n, E_1)$. This graph has vertex set ${\mathbb{Z}}_m^n$, where ${\mathbb{Z}}_m$ denotes the integers modulo $m$. The edge set consists of all pairs of points whose $\ell_1$ distance is 1: \begin{equation*} E_1 = \{(u,v) : u, v \in {\mathbb{Z}}_m^n,\ ||u-v||_1=1\} \end{equation*} where for $u = (u_1, u_2, \dots, u_n)$ and $v=(v_1, v_2, \dots, v_n)$ we have \begin{equation*} ||u-v||_1 = \sum_{i=1}^n|u_i-v_i| \end{equation*} In \cite{DiscTor} Bollob\'as and Leader show that the optimal sets for the vertex isoperimetric problem are nested. In fact, they correspond to balls using the $\ell_1$-metric. This is proved using compression, along with the concepts of fractional systems and symmetrization. However, in \cite{MR1137765}, Bollob\'as and Leader show that the optimal sets for the \emph{edge} isoperimetric problem on the same graph are not nested. They use rectilinear bodies to aid in computing their isoperimetric inequalities. \section{The Edge Boundary} We note that we use the \emph{unordered pair} notation for the edges $E_\infty$. That is, if $(u,v) \in E_\infty$, then we consider $(v,u) \in E_\infty$ and $(u,v) = (v,u)$. It is not too hard to calculate the edge boundary for a general set $S \subset {\mathbb{Z}}^n$ in the graph $({\mathbb{Z}}^n, E_\infty)$. First we require a couple of definitions. For $S \subset {\mathbb{Z}}^n$ and $\epsilon \in \{-1, 0, 1\}^n$ with $\epsilon \neq 0$, let $P_\epsilon(S)$ be the projection of $S$ onto $\epsilon^\perp$. 
That is, \begin{equation*} P_\epsilon(S) = \left\{u-\frac{\left<u,\epsilon\right>}{||\epsilon||_2^2}\epsilon: u \in S\right\} \end{equation*} where for $u = (u_1, u_2, \dots, u_n)$ and $\epsilon = (\epsilon_1, \epsilon_2, \dots, \epsilon_n)$, \begin{align*} \left<u,\epsilon\right> &= \sum_{i=1}^n u_i \epsilon_i \\ ||\epsilon||_2 &= \sqrt{\sum_{i=1}^n\epsilon_i^2} \end{align*} We also need the following: \begin{Definition} Let $S \subset {\mathbb{Z}}^n$ be finite. For $\epsilon \in \{-1, 0, 1\}^n$ with $\epsilon \not=0$, we define \begin{equation*} \text{gap}_\epsilon(S) = \{x \in {\mathbb{Z}}^n: x-\epsilon \in S, x \not\in S, \text{ and } x+b\epsilon \in S \text{ for some }b \geq 1\} \end{equation*} Thus, one can think of a point $x \in \text{gap}_\epsilon(S)$ as the first vertex in ${\mathbb{Z}}^n$ which indicates a gap in $S$ in the line through $x$ in the direction of $\epsilon$. \end{Definition} We have the following: \begin{Theorem}\label{GapThm} Let $S \subset {\mathbb{Z}}^n$ be a finite set. Then \begin{equation}\label{eqn1} |\partial_e (S)| = \sum_{\epsilon \in \{-1,0,1\}^n, \epsilon \not=0}\left( |P_\epsilon(S)| +|\text{gap}_\epsilon(S)|\right) \end{equation} \end{Theorem} \begin{proof} We proceed by induction on $|S|$. If $|S| =1$, then $\text{gap}_\epsilon(S) = \emptyset$ for each $\epsilon \in \{-1, 0, 1 \}^n, \epsilon \not=0$. We can also see that if $S = \{u\}$, then \begin{equation*} \partial_e( S) = \{(u, u+\epsilon): \epsilon \in \{-1,0,1\}^n,\ \epsilon \not= 0\} \end{equation*} We can also see that in this case, \begin{equation*} |P_\epsilon(S)| = 1 \end{equation*} for each $\epsilon \in \{-1, 0, 1\}^n, \epsilon \not= 0$. Thus, we have \begin{equation*} |\partial_e (S)| = \sum_{\epsilon \in \{-1,0,1\}^n, \epsilon \not=0}\left( |P_\epsilon(S)| +|\text{gap}_\epsilon(S)|\right) \end{equation*} if $|S|=1$. Now suppose that $|S|>1$. Fix $u \in S$. By induction, \begin{equation*} |\partial_e (S\backslash\{u\})| = \sum_{\epsilon \in \{-1,0,1\}^n, \epsilon \not=0}\left( |P_\epsilon(S\backslash\{u\})| + |\text{gap}_\epsilon(S\backslash\{u\})|\right) \end{equation*} Consider what $u$ contributes to the edge boundary of $S$. Note that each $\epsilon \in \{-1,0,1\}^n, \epsilon \not=0$ can be uniquely paired with $-\epsilon \in \{-1,0,1\}^n$. We have three cases: \noindent \underline{Case 1: Both $u+\epsilon$ and $u-\epsilon$ are in $S$} In this case, \begin{align*} (u+\epsilon, u) \in \partial_e(S\backslash \{u\}) & \quad\quad \quad (u-\epsilon, u) \in \partial_e(S\backslash \{u\}) \\ (u+\epsilon, u) \not\in \partial_e(S) & \quad\quad \quad (u-\epsilon, u) \not\in \partial_e(S) \end{align*} and \begin{equation*} \text{gap}_\epsilon(S) = \text{gap}_\epsilon(S\backslash\{u\})\backslash\{u\} \quad \quad \quad \text{gap}_{-\epsilon}(S) = \text{gap}_{-\epsilon}(S\backslash\{u\})\backslash\{u\} \end{equation*} Thus we can see that both the left and right hand sides of equation \eqref{eqn1} go down by 2 corresponding to edges $(u, u+\epsilon), (u, u-\epsilon)$ when $u$ is added back to $S$. \noindent \underline{Case 2: Exactly one of $u+\epsilon$ or $u-\epsilon$ is in $S$} Here, without loss of generality, assume that $u-\epsilon \in S$. 
Then \begin{align*} (u-\epsilon, u) \in \partial_e(S\backslash \{u\}) & \quad\quad \quad (u+\epsilon, u) \not\in \partial_e(S\backslash \{u\}) \\ (u-\epsilon, u) \not\in \partial_e(S) & \quad\quad \quad (u+\epsilon, u) \in \partial_e(S) \end{align*} and \begin{equation*} \text{gap}_\epsilon(S) = \text{gap}_\epsilon(S\backslash\{u\})\backslash\{u\} \cup \{u+\epsilon\} \quad \quad \quad \text{gap}_{-\epsilon}(S) = \text{gap}_{-\epsilon}(S\backslash\{u\}) \end{equation*} Thus we can see that both the left and right hand sides of equation \eqref{eqn1} do not change corresponding to edges $(u, u+\epsilon), (u, u-\epsilon)$ when $u$ is added back to $S$. \noindent \underline{Case 3: Neither $u+\epsilon$ nor $u-\epsilon$ are in $S$} In this case, \begin{align*} (u+\epsilon, u) \not\in \partial_e(S\backslash \{u\}) & \quad\quad \quad (u-\epsilon, u) \not\in \partial_e(S\backslash \{u\}) \\ (u+\epsilon, u) \in \partial_e(S) & \quad\quad \quad (u-\epsilon, u) \in \partial_e(S) \end{align*} and \begin{equation*} \text{gap}_\epsilon(S) = \text{gap}_\epsilon(S\backslash\{u\})\cup \{u+\epsilon\} \quad \quad \quad \text{gap}_{-\epsilon}(S) = \text{gap}_{-\epsilon}(S\backslash\{u\})\cup \{u-\epsilon\} \end{equation*} Thus we can see that both the left and right hand sides of equation \eqref{eqn1} go up by 2 corresponding to edges $(u, u+\epsilon), (u, u-\epsilon)$ when $u$ is added back to $S$. Since $\epsilon$ was arbitrary, we can see that all of the changes between $\partial_e(S \backslash\{u\})$ and $\partial_e(S)$ are balanced out by changes in the corresponding gaps. Thus, we have \begin{equation*} |\partial_e (S)| = \sum_{\epsilon \in \{-1,0,1\}^n, \epsilon \not=0}\left( |P_\epsilon(S)| +|\text{gap}_\epsilon(S)|\right) \end{equation*} \end{proof} Theorem \ref{GapThm} clearly has the following corollary: \begin{Corollary} Let $S \subset {\mathbb{Z}}^n$ be a finite set such that $\text{gap}_\epsilon(S) = \emptyset$ for each $\epsilon \in \{-1,0,1\}^n, \epsilon \not=0$. Then \begin{equation*} |\partial_e (S)| = \sum_{\epsilon \in \{-1,0,1\}^n, \epsilon \not=0}|P_\epsilon(S)| \end{equation*} \end{Corollary} which is a much more satisfying result, as it only involves $(n-1)$-dimensional projections of $S$, and is a nice counterpoint to the vertex boundary calculations in \cite{MR2946103}. This leads to the natural desire to show that it is possible to ``squish'' any set $S \subset {\mathbb{Z}}^n$ to form a new set $S_0$ such that $|S| = |S_0|$, $|\partial_e(S)|\geq |\partial_e(S_0)|$, and $S_0$ has no gaps (that is, $\text{gap}_\epsilon(S_0) = \emptyset$ for each $\epsilon \in \{-1,0,1\}^n, \epsilon \not=0$). In the following section, we show how we can ``squish'' the set $S$ into a set $S_i$ so that $|S| = |S_i|, |\partial_e(S)| \geq |\partial_e(S_i)|$, and \begin{equation*} \text{gap}_{e_i}(S_i) = \emptyset \end{equation*} where $e_i \in \{-1,0,1\}^n$ is the $i$th standard basis vector. \section{Central Compression} The following notation and definitions are similar to those in \cite{MR2946103}. For simplicity, we introduce the following notation: for a real-valued vector $p = (p_1, p_2, \dots, p_n) \in {\mathbb{R}}^n$, and $x \in {\mathbb{R}}$, we define \begin{equation*} (p, x \rightarrow i) = (p_1, p_2, \dots, p_{i-1}, x, p_i, p_{i+1}, \dots, p_n) \in {\mathbb{R}}^{n+1} \end{equation*} In words, $(p, x \rightarrow i)$ is the vector that results when placing $x$ in the $i$th coordinate of $p$ and shifting the $i$th through $n$th coordinates of $p$ to the right. 
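Before turning to compression, we record a small computational sanity check of Theorem \ref{GapThm}. The following Python sketch (an illustration only; the helper names are ours and are not part of the paper) compares a direct count of outgoing edges with the projection-plus-gap formula on random finite subsets of ${\mathbb{Z}}^2$: \begin{verbatim}
from itertools import product
from fractions import Fraction
import random

def directions(n):
    # all nonzero vectors in {-1,0,1}^n
    return [e for e in product((-1, 0, 1), repeat=n) if any(e)]

def edge_boundary_size(S):
    # |edge boundary of S|: each pair (u,v), u in S, v outside S,
    # with l_infty-distance 1 is counted exactly once (from u)
    S = set(S)
    n = len(next(iter(S)))
    return sum(1 for u in S for e in directions(n)
               if tuple(a + b for a, b in zip(u, e)) not in S)

def projection_size(S, e):
    # |P_e(S)|: distinct orthogonal projections u - (<u,e>/<e,e>) e
    ee = sum(x * x for x in e)
    return len({tuple(Fraction(a) - Fraction(sum(x * y for x, y in zip(u, e)), ee) * b
                      for a, b in zip(u, e)) for u in S})

def gap_size(S, e):
    # |gap_e(S)|: points x with x-e in S, x not in S, x+b*e in S for some b >= 1
    S = set(S)
    n = len(e)
    spread = max(max(u[i] for u in S) - min(u[i] for u in S) for i in range(n))
    count = 0
    for u in S:
        x = tuple(a + b for a, b in zip(u, e))   # candidate: x - e = u is in S
        if x in S:
            continue
        if any(tuple(a + b * k for a, b in zip(x, e)) in S
               for k in range(1, spread + 1)):   # spread bounds the witness b
            count += 1
    return count

random.seed(1)
for _ in range(100):
    S = {tuple(random.randrange(-4, 5) for _ in range(2)) for _ in range(10)}
    lhs = edge_boundary_size(S)
    rhs = sum(projection_size(S, e) + gap_size(S, e) for e in directions(2))
    assert lhs == rhs, (S, lhs, rhs)
print("formula of the theorem verified on 100 random samples")
\end{verbatim}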
\begin{Definition} We say that a set $S \subset {\mathbb{Z}}^n$ is \emph{centrally compressed} in the $i$-th coordinate ($1 \leq i \leq n$) with respect to $p \in {\mathbb{Z}}^{n-1}$ if the set \begin{equation*} \{x \in {\mathbb{Z}}: (p, x \rightarrow i) \in S\} \end{equation*} is either empty or of one of the following two forms: \begin{align*} &\{x: -a \leq x \leq a\} \text{ for some } a \in \N \\ & \qquad\qquad\text{OR} \\ &\{x: -a \leq x \leq a+1\} \text{ for some } a \in \N \end{align*} \end{Definition} This definition allows us to define the $i$th central compression of a set: \begin{Definition} Let $S \subset {\mathbb{Z}}^n$. For $1 \leq i \leq n$, we define $S_i$ to be the $i$th central compression of $S$ by specifying its 1-dimensional sections in the $i$th coordinate. Specifically, \begin{enumerate} \item For each $p \in {\mathbb{Z}}^{n-1}$, \begin{equation*} |\{x \in {\mathbb{Z}}: (p, x \rightarrow i) \in S_i\}| = |\{x \in {\mathbb{Z}}: (p, x \rightarrow i) \in S\}| \end{equation*} \item $S_i$ is centrally compressed in the $i$th coordinate with respect to $p$ for each $p \in {\mathbb{Z}}^{n-1}$. \end{enumerate} \end{Definition} In words: after fixing a coordinate $i \in \{1, 2, \dots, n\}$, we consider all lines in ${\mathbb{Z}}^n$ where only the $i$th coordinate varies, and we intersect those lines with $S$. Each of the points in those intersections are moved along the line so that they form a segment centered around 0. The result is $S_i$. The following Proposition shows how we can ``squish'' set $S$ into set $S_i$ so that $|S| = |S_i|, |\partial_e(S)| \geq |\partial_e(S_i)|$, and \begin{equation*} \text{gap}_{e_i}(S_i) = \emptyset \end{equation*} where $e_i \in \{-1,0,1\}^n$ is the $i$th standard basis vector. \begin{Proposition}\label{Compression1} Suppose that $S \subset {\mathbb{Z}}^n$. For $1 \leq i \leq n$, let $S_i$ be the $i$th central compression of $S$. Then \begin{equation*} \left|\partial_e S_i\right| \leq \left| \partial_e S \right| \end{equation*} \end{Proposition} \begin{proof} Suppose that $S \subset {\mathbb{Z}}^n$ and fix $i \in \{1, 2, \dots, n\}$. First we note that we can count the edge boundary of $S$ by partitioning the outgoing edges of $S$ into the sets of edges coming from each 1-dimensional $i$-section of $S$. Specifically for $p \in {\mathbb{Z}}^{n-1}$, let \begin{equation*} \partial_e(S,p) = \{(u,v) \in E_\infty: u \in S, v \in {\mathbb{Z}}^n \backslash S \text{ and } u = (p, x \rightarrow i) \text{ for some } x \in {\mathbb{Z}}\} \end{equation*} Then we have \begin{equation*} \partial_e S = \bigcup_{p \in {\mathbb{Z}}^{n-1}} \partial_e (S,p) \end{equation*} and the above union is disjoint. We can partition these even further, based on the 1-dimensional section in which the vertex not in $S$ lies. That is, if $(u,v) \in E_\infty$ with $u \in S$, $v \in {\mathbb{Z}}^n \backslash S$, and $u = (p, x \rightarrow i)$ for $x \in {\mathbb{Z}}$, then we must have \begin{equation*} v = (p +\epsilon, y \rightarrow i) \end{equation*} for some $\epsilon \in \{-1,0,1\}^{n-1}$ and $y \in {\mathbb{Z}}$ (specifically, $y \in \{x-1, x, x+1\}$). Let \begin{multline*} \partial_e (S,p,\epsilon) = \{(u,v) \in E_\infty: u \in S, v \in {\mathbb{Z}}^n \backslash S, u = (p, x \rightarrow i) \text{ for some } x \in {\mathbb{Z}}, \\ \text{ and } v = (p+\epsilon, y \rightarrow i) \text{ for some }y \in {\mathbb{Z}}\}. \end{multline*} Then \begin{equation*} \partial_e(S,p) = \bigcup_{\epsilon \in \{-1,0,1\}^{n-1}} \partial_e(S, p,\epsilon) \end{equation*} and the union is disjoint. 
Thus, we have \begin{equation*} \partial_e S = \bigcup_{p \in {\mathbb{Z}}^{n-1}} \bigcup_{\epsilon \in \{-1,0,1\}^{n-1}} \partial_e(S,p,\epsilon) \end{equation*} and the above unions are disjoint, so that \begin{equation*} \left| \partial_e S \right| = \sum_{p \in {\mathbb{Z}}^{n-1}} \sum_{\epsilon \in \{-1,0,1\}^{n-1}} \left| \partial_e(S,p,\epsilon) \right| \end{equation*} Similarly, \begin{equation*} \left| \partial_e S_i \right| = \sum_{p \in {\mathbb{Z}}^{n-1}} \sum_{\epsilon \in \{-1,0,1\}^{n-1}} \left| \partial_e(S_i,p,\epsilon) \right| \end{equation*} We will show that $\left| \partial_e S_i \right| \leq \left| \partial_e S \right|$ by showing that \begin{equation*} \left| \partial_e(S_i,p,\epsilon) \right| \leq \left| \partial_e(S,p,\epsilon) \right| \end{equation*} for each $p \in {\mathbb{Z}}^{n-1}$ and $\epsilon \in \{-1,0,1\}^{n-1}$. It is straightforward to see that for $\vec{0} \in \{-1, 0, 1\}^{n-1}$ (provided the section $\{x \in {\mathbb{Z}}: (p, x\rightarrow i) \in S\}$ is nonempty; otherwise both quantities are $0$), \begin{align*} \left| \partial_e(S_i,p,\vec{0}) \right|&= 2 \\ \left| \partial_e(S,p,\vec{0}) \right| & \geq 2 \end{align*} so that \begin{equation*} \left| \partial_e(S_i,p,\vec{0}) \right| \leq \left| \partial_e(S,p,\vec{0}) \right| \end{equation*} Thus, we can now consider a fixed $p$ and fixed $\epsilon \not=0$. Let \begin{align*} \ell &= \{(p, x\rightarrow i)\in S : x \in {\mathbb{Z}} \} \\ n &= \{(p+\epsilon, y \rightarrow i)\in S: y \in {\mathbb{Z}}\} \end{align*} be the lines of vertices in $S$ corresponding to fixing all entries except the $i$th by the entries in vectors $p$ and $p+\epsilon$ respectively. Note that we can visualize these lines as integer points in the plane, with the lines parallel to the $x$-axis. For all of our visualizations, the upper line will denote $n$, the lower line $\ell$. Open circles are vertices in $n$ or $\ell$, filled in circles are vertices which are not in $n$ or $\ell$. The solid lines are edges which are definitely in $\partial_e(S,p,\epsilon)$; the dotted lines are edges which may be in $\partial_e(S,p,\epsilon)$ (depending on whether particular vertices are in $\ell$ and $n$). See Figure \ref{VisualizationP}. \begin{figure}[h] \begin{center} \includegraphics[width=4 in]{EdgeP1.jpg} \end{center} \caption{Sample visualization of $n$ and $\ell$.} \label{VisualizationP} \end{figure} We say that there is a ``gap'' in the line $\ell$ at $b$ if there exist $a<b<c$ such that $(p, b \rightarrow i) \not\in \ell$, but $(p, a \rightarrow i) \in \ell$ and $(p, c \rightarrow i) \in \ell$. See Figure \ref{NewGapP}. \begin{figure}[h] \begin{center} \includegraphics[width=4 in]{EdgeP2.jpg} \end{center} \caption{The gap in $\ell$ is highlighted.} \label{NewGapP} \end{figure} We can similarly define a gap in $n$. We claim that we can rearrange the vertices within $\ell$ and $n$ by perhaps changing the $i$th coordinate of some vertices to make new lines of vertices $\ell_0$ and $n_0$ with no gaps, such that $|\partial_e(\ell \cup n, p, \epsilon)| \geq |\partial_e(\ell_0 \cup n_0,p,\epsilon)|$. We do this in a sequence of steps. First note that if both $\ell$ and $n$ have a gap at $b$, then the vertices of $\ell$ to the left of the gap and the vertices of $n$ to the left of the gap can all be shifted to the right by 1 without increasing $|\partial_e(\ell \cup n, p, \epsilon)|$, only possibly decreasing it (see Figure \ref{SampleShift}). 
\begin{figure}[htbp] \begin{center} \includegraphics[width=2 in]{EdgeP3.jpg} \hspace{.5 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.5 cm} \includegraphics[width=2 in]{EdgeP4.jpg} \end{center} \caption{The parallel gap is eliminated by shifting vertices to the right. In this case, the edge boundary decreases.} \label{SampleShift} \end{figure} Thus, we can assume that there are no such parallel gaps. Let $\ell^i$ be the minimum value of $x$ for any point $(p,x \rightarrow i) \in \ell$ and let $n^i$ be the minimum value of $y$ for any point $(p,y\rightarrow i) \in n$. We now split this problem into 6 cases: \begin{enumerate} \item $\ell^i = n^i$, there is a gap in $\ell$ at $b$, and there is no gap in either $\ell$ or $n$ for any $a<b$. \item $\ell^i = n^i$, there is a gap in $n$ at $b$, and there is no gap in either $\ell$ or $n$ for any $a<b$. \item $\ell^i = n^i+1$ \item $\ell^i = n^i-1$ \item $\ell^i = n^i+c$ where $c \geq 2$ \item $\ell^i = n^i-c$ where $c \geq 2$. \end{enumerate} In each of these cases, we can see that the number of edges going from a vertex in $\ell$ to a vertex not in $n$ (i.e. the number of edges in $\partial_e(\ell \cup n, p, \epsilon)$) either does not change or decreases by shifting some vertices in $n$ to the right or shifting some vertices in $\ell$ to the right. This can more easily be seen through pictures: \begin{enumerate} \item $\ell^i = n^i$, there is a gap in $\ell$ at $b$, and there is no gap in either $\ell$ or $n$ for any $a<b$. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.3 in]{EdgeP5.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=1.9 in]{EdgeP6.jpg} \end{center} \caption{Vertices in $\ell$ to the left of the gap are shifted right by 1.} \end{figure} \item $\ell^i = n^i$, there is a gap in $n$ at $b$, and there is no gap in either $\ell$ or $n$ for any $a<b$. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.8 in]{EdgeP7.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=1.8 in]{EdgeP8.jpg} \end{center} \caption{Vertices in $n$ to the left of the gap are shifted right by 1.} \end{figure} \item $\ell^i = n^i+1$ \begin{figure}[htbp] \begin{center} \includegraphics[width=2.2 in]{EdgeP9.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=2.2 in]{EdgeP10.jpg} \end{center} \caption{Vertices in $n$ to the left of the first gap (or first vertex in $n$ not in $S$) are shifted right by 1.} \end{figure} \pagebreak \item $\ell^i = n^i-1$ \begin{figure}[htbp] \begin{center} \includegraphics[width=2.1 in]{EdgeP11.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=2.3 in]{EdgeP12.jpg} \end{center} \caption{Vertices in $\ell$ to the left of the first gap (or first vertex in $\ell$ not in $S$) are shifted right by 1.} \end{figure} \item $\ell^i = n^i+c$ where $c \geq 2$ \begin{figure}[htbp] \begin{center} \includegraphics[width=2.1 in]{EdgeP13.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=2.2 in]{EdgeP14.jpg} \end{center} \caption{Vertices in $n$ to the left of the first gap (or first vertex in $n$ not in $S$) are shifted right by 1.} \end{figure} \item $\ell^i = n^i-c$ where $c \geq 2$. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=2.1 in]{EdgeP15.jpg} \hspace{.7 cm} \includegraphics[width=.5 in]{rsarrow.jpg} \hspace{.7 cm} \includegraphics[width=2.2 in]{EdgeP16.jpg} \end{center} \caption{Vertices in $\ell$ to the left of the first gap (or first vertex in $\ell$ not in $S$) are shifted right by 1.} \end{figure} \end{enumerate} Through a sequence of these steps, the vertices in $n$ and $\ell$ can be shifted, without increasing the boundary, to vertices in lines $n_0$ and $\ell_0$ respectively so that at least one of $n_0$ or $\ell_0$ is a segment: \begin{align*} \ell_0 &= \{(p, x \rightarrow i): a \leq x \leq b\} \\ & \quad \quad \quad \quad \text{OR} \\ n_0 &= \{(p+\epsilon, x \rightarrow i): c \leq x \leq d\} \end{align*} Finally, it is not hard to see that once one of $\ell_0$ or $n_0$ is a segment, centering the vertices of each of these lines around $0$ can only leave the edge boundary unchanged or decrease it. Thus, we have shown that \begin{equation*} \left| \partial_e(S_i,p,\epsilon) \right| \leq \left| \partial_e(S,p,\epsilon) \right| \end{equation*} which, by the arguments above, implies that \begin{equation*} \left|\partial_e S_i\right| \leq \left| \partial_e S \right| \end{equation*} \end{proof} \bibliographystyle{plain}
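As a companion to Proposition \ref{Compression1}, the following Python sketch (an empirical illustration only, not part of the proof; the helper names are ours) performs the $i$th central compression and checks on random subsets of ${\mathbb{Z}}^2$ that it never increases the edge boundary: \begin{verbatim}
import random
from itertools import product

def edge_boundary_size(S):
    # |edge boundary of S| in (Z^n, E_inf), counted directly
    S = set(S)
    n = len(next(iter(S)))
    dirs = [e for e in product((-1, 0, 1), repeat=n) if any(e)]
    return sum(1 for u in S for e in dirs
               if tuple(a + b for a, b in zip(u, e)) not in S)

def central_compression(S, i):
    # Recenter every line in direction e_i around 0, keeping section sizes:
    # a section of size m becomes {-a,...,a} (m = 2a+1) or {-a,...,a+1} (m = 2a+2)
    lines = {}
    for u in S:
        lines.setdefault(u[:i] + u[i+1:], []).append(u[i])
    T = set()
    for p, xs in lines.items():
        lo = -((len(xs) - 1) // 2)
        for x in range(lo, lo + len(xs)):
            T.add(p[:i] + (x,) + p[i:])
    return T

random.seed(2)
violations = 0
for _ in range(200):
    S = {tuple(random.randrange(-4, 5) for _ in range(2)) for _ in range(12)}
    for i in range(2):
        Si = central_compression(S, i)
        assert len(Si) == len(S)            # compression preserves cardinality
        if edge_boundary_size(Si) > edge_boundary_size(S):
            violations += 1
print("boundary-increasing compressions found:", violations)  # expected: 0
\end{verbatim}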
\section{Introduction} Let $A$ be a Banach algebra and let $\varphi \colon A\times A\to\mathbb{C}$ be a continuous bilinear functional satisfying \begin{equation}\label{B} a,b\in A, \ [a,b]=0 \ \ \Longrightarrow \ \ \varphi(a,b)=0 \end{equation} (here and subsequently, $[a,b]$ stands for the commutator $ab - ba$). This is certainly fulfilled if $\varphi$ is of the form \begin{equation}\label{Bl} \varphi(a,b)=\tau([a,b]) \quad (a,b\in A) \end{equation} for some $\tau$ in $A^*$, the dual of $A$. We will say that $A$ is a \emph{zero Lie product determined Banach algebra} if, for every continuous bilinear functional $\varphi \colon A\times A\to\mathbb{C}$ satisfying~\eqref{B}, there exists $\tau\in A^*$ such that~\eqref{Bl} holds. This is an analytic analogue of the purely algebraic notion of a zero Lie product determined algebra, first indirectly considered in~\cite{BrSe} and, slightly later, more systematically in~\cite{BGS} (see also subsequent papers~\cite{Gr,WCZ}). Further, the concept of a zero Lie product determined Banach algebra can be seen as the Lie version of the notion of a Banach algebra having property $\mathbb{B}$ (see~\cite{ABEV0}), which will also play an important role in this paper. Another motivation for us for studying this concept is the similarity with the group-theoretic notion of triviality of the Bogomolov multiplier (see, e.g.,~\cite{M}), which made us particularly interested in considering it in the context of group algebras. The paper is organized as follows. In Section~\ref{s0} we provide motivating examples. Firstly, by applying a result by Goldstein~\cite{G} we show that $C^*$-algebras are zero Lie product determined Banach algebras. Secondly, we find a Banach algebra, even a finite dimensional one, that is not zero Lie product determined. In Section~\ref{sec1} we prove that in the definition of a zero Lie product determined Banach algebra one can replace the role of $\mathbb{C}$ by any Banach space. The bulk of the paper is Section~\ref{sec2} in which we show that the group algebra $L^1(G)$ of any amenable locally compact group $G$ is a zero Lie product determined Banach algebra. We actually obtain this as a byproduct of the result concerning the condition \begin{equation}\label{B1lz} a,b\in A, \ ab=ba=0 \ \ \Longrightarrow \ \ \varphi(a,b)=0, \end{equation} where $A$ is an amenable Banach algebra with property $\mathbb B$. We remark that~\eqref{B1lz} has already been studied in the literature, but definitive results have so far been obtained only for finite dimensional algebras~\cite{ABEVmat, KLZ}. \section{Examples}\label{s0} The goal of this section is to provide examples indicating the nontriviality of the concept of a zero Lie product determined Banach algebra. \begin{proposition} Every $C^*$-algebra is a zero Lie product determined Banach algebra. \end{proposition} \begin{proof} Let $A$ be a $C^*$-algebra, and let $\varphi\colon A\times A\to\mathbb{C}$ be a continuous bilinear functional satisfying~\eqref{B}. Then the map $\psi\colon A\times A\to\mathbb{C}$ defined by $\psi(a,b)=\varphi(a,b^*)$ for all $a$, $b\in A$ is a continuous sesquilinear functional. Further, if $a,b\in A$ are self-adjoint and $ab=0$, then $ba=0$, which in turn implies that $[a,b]=0$ and therefore $\psi(a,b)=\varphi(a,b)=0$. This shows that $\psi$ is orthogonal in the sense of~\cite{G} (see~\cite[Definition~1.1]{G}). 
By~\cite[Theorem~1.10]{G}, $A$ is $\mathbb{C}$-stationary, which means (\cite[Definition~1.5]{G}) that there exist $\tau_1,\tau_2\in A^*$ such that $\psi(a,b)=\tau_1(ab^*)+\tau_2(b^*a)$ for all $a,b\in A$. Consequently, we have \begin{equation}\label{ee1} \varphi(a,b)=\tau_1(ab)+\tau_2(ba) \quad (a,b\in A). \end{equation} On the other hand, if $a\in A$, then $[a,a]=0$ and therefore $\varphi(a,a)=0$. Hence $\varphi$ is skew-symmetric and taking into account~\eqref{ee1} we get \begin{equation}\label{ee2} \varphi(a,b)=-\varphi(b,a)=-\tau_1(ba)-\tau_2(ab) \quad (a,b\in A). \end{equation} Adding~\eqref{ee1} and~\eqref{ee2}, we obtain \[ 2\varphi(a,b)=\tau_1([a,b])-\tau_2([a,b]) \quad (a,b\in A), \] which shows that $\varphi$ is of the form~\eqref{Bl}, where $\tau\in A^*$ is defined by $\tau=\tfrac{1}{2}(\tau_1-\tau_2)$. \end{proof} We will now give an example of a finite dimensional Banach algebra that is not zero Lie product determined. This is of interest also from a purely algebraic viewpoint. Namely, so far only examples of infinite dimensional algebras that are not zero Lie product determined have been found~\cite{BGS} (since bilinear functionals are automatically continuous in finite dimension, in this framework there is no difference between ``zero Lie product determined Banach algebra'' and ``zero Lie product determined algebra''). The algebra from the next proposition can be thought of as the Grassmann algebra with four generators to which we add another relation. \begin{proposition} The 10-dimensional Banach algebra \[ A= \mathbb{C}\big\langle x_1,x_2,x_3,x_4\,|\, x_1 x_2 = x_3 x_4, x_i^2=0, x_i x_j=-x_j x_i,\, i,j=1,2,3,4\big\rangle \] is not zero Lie product determined. \end{proposition} \begin{proof} It is easy to check that the elements \[ 1,\, x_1, \, x_2,\, x_3,\, x_4,\, x_1x_2, \, x_1x_3, \, x_1x_4,\, x_2x_3,\,x_2x_4 \] form a basis of $A$ (so that $\dim_\mathbb{C} A = 10$). Note that $1$ and all $x_i x_j$ lie in $Z$, the center of $A$. Define a bilinear functional $\varphi \colon A\times A\to\mathbb{C}$ by \[ \varphi(x_1,x_2) = -\varphi(x_2,x_1) = 1 \] and \[ \varphi(u,v) = 0 \] for all other pairs of elements from our basis. Take a pair of commuting elements $a$, $b\in A$. We can write \[ a = \sum_{i=1}^4 \lambda_i x_i + z\quad\mbox{and}\quad b= \sum_{j=1}^4 \mu_j x_j + w, \] where $\lambda_i,\mu_j\in \mathbb{C}$ and $z,w\in Z$. Our goal is to show that $\varphi(a,b)=\lambda_1\mu_2-\lambda_2\mu_1$ is $0$. From $[a,b]=0$ we obtain \[ \Big[\sum_{i=1}^4 \lambda_i x_i, \sum_{j=1}^4 \mu_j x_j\Big] =0, \] which, after dividing by $2$, yields \begin{align*} &\big((\lambda_1\mu_2 - \lambda_2\mu_1)+ (\lambda_3\mu_4 - \lambda_4\mu_3)\big)x_1x_2\\ +& (\lambda_1\mu_3 - \lambda_3\mu_1)x_1x_3 + (\lambda_2\mu_3 - \lambda_3\mu_2)x_2x_3 \\ +& (\lambda_1\mu_4 - \lambda_4\mu_1)x_1x_4+ (\lambda_2\mu_4 - \lambda_4\mu_2)x_2x_4 =0. \end{align*} Consequently, \begin{equation} \label{a} (\lambda_1\mu_2 - \lambda_2\mu_1)+ (\lambda_3\mu_4 - \lambda_4\mu_3) =0, \end{equation} \begin{equation} \label{bb} \lambda_1\mu_3 = \lambda_3\mu_1,\,\,\,\lambda_2\mu_3 = \lambda_3\mu_2, \end{equation} \begin{equation} \label{c} \lambda_1\mu_4 = \lambda_4\mu_1,\,\,\,\lambda_2\mu_4 = \lambda_4\mu_2. \end{equation} Note that~\eqref{bb} yields \[ (\lambda_1\mu_2 - \lambda_2\mu_1)\mu_3=0 \] and, similarly,~\eqref{c} yields \[ (\lambda_1\mu_2 - \lambda_2\mu_1)\mu_4=0. \] But then we infer from~\eqref{a} that $\lambda_1\mu_2 - \lambda_2\mu_1=0$, as desired: if $\mu_3=\mu_4=0$, then $\lambda_3\mu_4-\lambda_4\mu_3=0$ and \eqref{a} gives the claim, while if $\mu_3\neq 0$ or $\mu_4\neq 0$, the two displayed identities give it directly. We have thereby proved that $\varphi$ satisfies~\eqref{B}. 
However, $[x_1,x_2] = 2x_1x_2 = 2x_3x_4 = [x_3,x_4]$, while $\varphi(x_1,x_2)=1\ne 0=\varphi(x_3,x_4)$; hence $\varphi$ does not satisfy~\eqref{Bl}. \end{proof} \section{An alternative definition}\label{sec1} From now on, we write $[A,A]$ for the linear span of all commutators of the Banach algebra $A$. \begin{proposition} Let $A$ be a Banach algebra. Then the following properties are equivalent: \begin{enumerate} \item the algebra $A$ is a zero Lie product determined Banach algebra, \item for each Banach space $X$, every continuous bilinear map $\varphi\colon A\times A\to X$ with the property that $\varphi(a,b)=0$ whenever $a,b\in A$ are such that $[a,b]=0$ is of the form $\varphi(a,b)=T([a,b])$ $(a,b\in A)$ for a unique continuous linear map $T\colon [A,A]\to X$. \end{enumerate} \end{proposition} \begin{proof} Suppose that (1) holds. Let $X$ be a Banach space and let $\varphi\colon A\times A\to X$ be a continuous bilinear map with the property that $\varphi(a,b)=0$ whenever $a,b\in A$ are such that $[a,b]=0$. For each $\xi\in X^*$, the continuous bilinear functional $\xi\circ\varphi\colon A\times A\to\mathbb{C}$ satisfies~\eqref{B}. Therefore there exists a unique $\tau(\xi)\in [A,A]^{*}$ such that $\xi(\varphi(a,b))=\tau(\xi)([a,b])$ for all $a,b\in A$. It is clear that the map $\tau\colon X^*\to [A,A]^*$ is linear. We next show that $\tau$ is continuous. Let $(\xi_n)$ be a sequence in $X^*$ with $\lim\xi_n=0$ and $\lim\tau(\xi_n)=\xi$ for some $\xi\in [A,A]^*$. For each $a,b\in A$, we have \[ 0=\lim\xi_n(\varphi(a,b))=\lim\tau(\xi_n)([a,b])=\xi([a,b]). \] We thus have $\xi=0$, and the closed graph theorem yields the continuity of $\tau$. For all $a_1,\ldots,a_n,b_1,\ldots,b_n\in A$ and $\xi\in X^*$ we have \begin{equation}\label{11667} \begin{aligned} \xi\Bigl(\sum_{k=1}^n\varphi(a_k,b_k)\Bigr) & = \sum_{k=1}^n\xi\bigl(\varphi(a_k,b_k)\bigr) = \sum_{k=1}^n\tau(\xi)([a_k,b_k]) \\ & = \tau(\xi)\Bigl(\sum_{k=1}^n[a_k,b_k]\Bigr). \end{aligned} \end{equation} Consequently, if $a_1,\ldots,a_n,b_1,\ldots,b_n\in A$ are such that $\sum_{k=1}^n[a_k,b_k]=0$, then $\xi\bigl(\sum_{k=1}^n\varphi(a_k,b_k)\bigr)=0$ for each $\xi\in X^*$, and hence $\sum_{k=1}^n\varphi(a_k,b_k)=0$. We thus can define a linear map $T\colon [A,A]\to X$ by \[ T\Bigl(\sum_{k=1}^n[a_k,b_k]\Bigr)=\sum_{k=1}^n\varphi(a_k,b_k) \] for all $a_1,\ldots,a_n,b_1,\ldots,b_n\in A$. Of course, $\varphi(a,b)=T([a,b])$ for all $a,b\in A$. Our next concern is the continuity of $T$. Let $a_1,\ldots,a_n,b_1,\ldots,b_n\in A$. Then, by the Hahn-Banach theorem, there exists $\xi\in X^*$ with $\Vert\xi\Vert= 1$ such that \[ \xi\Bigl(\sum_{k=1}^n\varphi(a_k,b_k)\Bigr)=\Bigl\Vert\sum_{k=1}^n\varphi(a_k,b_k)\Bigr\Vert. \] On account of~\eqref{11667}, we have \begin{align*} \Bigl\Vert T\Bigl(\sum_{k=1}^n[a_k,b_k]\Bigr)\Bigr\Vert & = \Bigl\Vert\sum_{k=1}^n\varphi(a_k,b_k)\Bigr\Vert = \xi\Bigl(\sum_{k=1}^n\varphi(a_k,b_k)\Bigr) \\ & = \Bigl\vert\tau(\xi)\Bigl(\sum_{k=1}^n[a_k,b_k]\Bigr)\Bigr\vert \le \Vert\tau(\xi)\Vert\Bigl\Vert\sum_{k=1}^n[a_k,b_k]\Bigr\Vert \\ & \le \Vert\tau\Vert\Bigl\Vert\sum_{k=1}^n[a_k,b_k]\Bigr\Vert, \end{align*} which shows the continuity of $T$, and hence that property (2) holds. We now assume that (2) holds. Let $\varphi\colon A\times A\to\mathbb{C}$ be a continuous bilinear functional satisfying~\eqref{B}. By applying property (2) with $X=\mathbb{C}$, we get $\tau\in [A,A]^*$ such that $\varphi(a,b)=\tau([a,b])$ $(a,b\in A)$. The functional $\tau$ can be extended to a continuous linear functional on $A$ so that (1) is obtained. 
\end{proof} \section{Amenable Banach algebras with property $\mathbb{B}$}\label{sec2} We say that a Banach algebra $A$ has \emph{property $\mathbb{B}$} if for every continuous bilinear functional $\varphi \colon A\times A\to \mathbb{C}$, the condition \begin{equation}\label{B1} a,b\in A, \ ab=0 \ \ \Rightarrow \ \ \varphi(a,b)=0 \end{equation} implies the condition \begin{equation}\label{B11} \varphi(ab,c)=\varphi(a,bc) \quad (a,b,c\in A). \end{equation} According to~\cite[Remark 2.1]{AFA}, this definition agrees with the one given in the seminal paper~\cite{ABEV0}, i.e., the Banach algebra $A$ has property $\mathbb{B}$ if and only if for each Banach space $X$ and for each continuous bilinear map $\varphi\colon A\times A\to X$ the condition~\eqref{B1} implies the condition~\eqref{B11}. We remark that if $A$ has a bounded approximate identity,~\eqref{B11} is equivalent to the condition that $\varphi(a,b)= \tau(ab)$ for some $\tau\in A^*$ (see~\cite[Lemma~2.3]{ABEV0}). In~\cite{ABEV0} it was shown that many important examples of Banach algebras, including $C^*$-algebras, group algebras on arbitrary locally compact groups, and the algebra $\mathcal{A}(X)$ of all approximable operators on any Banach space $X$, have property $\mathbb{B}$, and that this property can be applied to a variety of problems. Since then, a number of papers treating property $\mathbb{B}$ have been published; see the last paper in the series~\cite{ABESV2} and references therein. The class of amenable Banach algebras is of great significance. We refer the reader to~\cite{R} for the necessary background on amenability. There are different characterizations of amenable Banach algebras. The seminal one comes from B. E. Johnson: vanishing of a certain cohomology group. For our purposes here, the most convenient way to introduce amenability is the following. Let $A$ be a Banach algebra. The projective tensor product $A\widehat{\otimes}A$ becomes a Banach $A$-bimodule for the products defined by \[ a\cdot(b\otimes c)=(ab)\otimes c \] and \[ (b\otimes c)\cdot a=b\otimes (ca) \] for all $a,b,c\in A$. There is a unique continuous linear map $\pi\colon A\widehat{\otimes}A\to A$ such that \[ \pi(a\otimes b)=ab \] for all $a,b\in A$. The map $\pi$ is the product map induced on the projective tensor product, and it is an $A$-bimodule homomorphism. An \emph{approximate diagonal} for $A$ is a bounded net $(u_{\lambda})_{\lambda\in\Lambda}$ in $A\widehat{\otimes}A$ such that, for each $a\in A$, we have \begin{equation}\label{ad1} \lim_{\lambda\in\Lambda}(a\cdot u_ \lambda-u_\lambda\cdot a)=0 \end{equation} and \begin{equation}\label{ad2} \lim_{\lambda\in\Lambda}\pi(u_\lambda)a=a. \end{equation} We point out that~\eqref{ad1} together with~\eqref{ad2} implies that $\lim a\pi(u_\lambda)=a$ for each $a\in A$ as well. Consequently, the net $(\pi(u_\lambda))_{\lambda\in\Lambda}$ is a bounded approximate identity for $A$. The Banach algebra $A$ is \emph{amenable} if and only if $A$ has an approximate diagonal. Throughout this section we are notably interested in amenable Banach algebras having property $\mathbb{B}$. 
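To make the notion of an approximate diagonal concrete, consider the (amenable) matrix algebra $M_n(\mathbb{C})$: it is standard that the constant net $u=\sum_{i}e_{i1}\otimes e_{1i}$ is even an exact diagonal, i.e.\ $a\cdot u=u\cdot a$ and $\pi(u)=1$ for every $a$. The following NumPy sketch (an illustration only, not part of the argument; it encodes $b\otimes c$ as the Kronecker product, so the module actions and $\pi$ below are our finite-dimensional stand-ins for the abstract maps) checks this numerically: \begin{verbatim}
import numpy as np

n = 4
def E(i, j):                      # matrix unit e_{ij}
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

# u = sum_i e_{i1} (x) e_{1i}, encoded as an n^2 x n^2 array via Kronecker products
u = sum(np.kron(E(i, 0), E(0, i)) for i in range(n))

left  = lambda a, w: np.kron(a, np.eye(n)) @ w   # a.(b(x)c) = (ab)(x)c
right = lambda w, a: w @ np.kron(np.eye(n), a)   # (b(x)c).a = b(x)(ca)
pi    = lambda w: np.einsum('ijjl->il', w.reshape(n, n, n, n))  # pi(b(x)c) = bc

a = np.random.default_rng(0).standard_normal((n, n))
assert np.allclose(left(a, u), right(u, a))      # a.u = u.a
assert np.allclose(pi(u), np.eye(n))             # pi(u) = 1
print("exact diagonal verified for M_n(C), n =", n)
\end{verbatim}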
According to~\cite{R}, the following are examples of amenable Banach algebras (which we already know to have property $\mathbb{B}$): nuclear $C^*$-algebras, the group algebra $L^1(G)$ for each amenable locally compact group $G$, and the algebra $\mathcal{A}(X)$ for Banach spaces with certain approximation properties (this includes the Banach space $C_0(\Omega)$ for each locally compact Hausdorff space $\Omega$ and the Banach space $L^p(\mu)$ for each measure space $(\Omega,\Sigma,\mu)$ and each $p\in[1,\infty]$). We begin with a lemma, a version of which also appears in~\cite{ABEV}. \begin{lemma}\label{l1519} Let $A$ be a Banach algebra with property $\mathbb{B}$ and having a bounded approximate identity, let $X$ be a Banach space, and let $\varphi\colon A\times A\to X$ be a continuous bilinear map satisfying the condition: \begin{equation*} a,b\in A, \ ab=ba=0 \ \Rightarrow \ \varphi(a,b)=0. \end{equation*} Then \begin{equation}\label{pat3} \varphi(ab,cd)-\varphi(a,bcd)+\varphi(da,bc)-\varphi(dab,c)=0 \quad (a,b,c,d\in A) \end{equation} and there exists a continuous linear operator $S\colon A\to X$ such that \begin{equation}\label{eqfifi} \varphi(ab,c)-\varphi(b,ca)+\varphi(bc,a)=S(abc) \quad (a,b,c\in A). \end{equation} \end{lemma} \begin{proof} Let $\mathcal{B}^2(A;X)$ denote the Banach space of all continuous bilinear maps from $A\times A$ to $X$, and let $\mathcal{B}_0^2(A;X)$ denote the closed subspace of $\mathcal{B}^2(A;X)$ consisting of those bilinear maps $\varphi$ which satisfy~\eqref{B1}. We define \[ \psi\colon A\times A\to\mathcal{B}^2(A;X) \] by \[ \psi(a,b)(s,t)=\varphi(bs,ta) \quad (a,b,s,t\in A). \] It is readily checked that $\psi(a,b)\in\mathcal{B}_0^2(A;X)$ whenever $a,b\in A$ are such that $ab=0$. Consequently, the continuous bilinear map \[ \widetilde{\psi}\colon A\times A\to \mathcal{B}^2(A;X)/\mathcal{B}_0^2(A;X) \] defined by \[ \widetilde{\psi}(a,b)=\psi(a,b)+\mathcal{B}_0^2(A;X) \quad (a,b\in A) \] satisfies~\eqref{B1}. Property $\mathbb{B}$ then gives \begin{equation*} \psi(ab,c)-\psi(a,bc)\in\mathcal{B}_0^2(A;X) \quad (a,b,c\in A). \end{equation*} For each $a,b,c\in A$, property $\mathbb{B}$ now yields \[ \bigl(\psi(ab,c)-\psi(a,bc)\bigr)(rs,t)= \bigl(\psi(ab,c)-\psi(a,bc)\bigr)(r,st) \] for all $r$, $s$, $t\in A$. Hence \begin{equation}\label{pat2} \varphi(crs,tab)-\varphi(bcrs,ta)-\varphi(cr,stab)+\varphi(bcr,sta)=0 \end{equation} for all $a$, $b$, $c$, $r$, $s$, $t\in A$. Let $(\rho_\lambda)_{\lambda\in\Lambda}$ be a bounded approximate identity for $A$ with bound $C$. For each $a$, $b$, $c$, $r$, $s\in A$, we apply~\eqref{pat2} with the element $t$ replaced by $\rho_\lambda$ ($\lambda\in\Lambda$) and then we take the limit to arrive at \begin{equation}\label{pat2b} \varphi(crs,ab)-\varphi(bcrs,a)-\varphi(cr,sab)+\varphi(bcr,sa)=0. \end{equation} We now replace $r$ by $\rho_\lambda$ ($\lambda\in\Lambda$) in~\eqref{pat2b} and take the limit to get \begin{equation*} \varphi(cs,ab)-\varphi(bcs,a)-\varphi(c,sab)+\varphi(bc,sa)=0, \end{equation*} which gives~\eqref{pat3}. 
By applying~\eqref{pat3} with the element $c$ replaced by $\rho_\lambda$ ($\lambda\in\Lambda$) we see that the net $(\varphi(dab,\rho_\lambda))_{\lambda\in\Lambda}$ is convergent and by taking the limit in~\eqref{pat3} we arrive at \begin{equation}\label{e2148} \begin{aligned} &\varphi(ab,d)-\varphi(a,bd)+\varphi(da,b)- \lim_{\lambda\in\Lambda}\varphi(dab,\rho_\lambda) \\ =& \lim_{\lambda\in\Lambda} \bigl(\varphi(ab,\rho_\lambda d)-\varphi(a,b\rho_\lambda d)+ \varphi(da,b\rho_\lambda)-\varphi(dab,\rho_\lambda)\bigr)=0 \end{aligned} \end{equation} for all $a$, $b$, $d\in A$. By Cohen's factorization theorem (see~\cite[Corollary~11 in \S 11]{bd}), each $c\in A$ can be written in the form $c=dab$ with $a$, $b$, $d\in A$, and hence the net $(\varphi(c,\rho_\lambda))_{\lambda\in\Lambda}$ is convergent. We can thus define a linear operator $S\colon A\to X$ by \[ S(a)=\lim_{\lambda\in\Lambda}\varphi(a,\rho_\lambda) \] for each $a\in A$. Since $\Vert\varphi(a,\rho_\lambda)\Vert\le C \Vert\varphi\Vert \Vert a\Vert$ for all $a\in A$ and $\lambda\in\Lambda$, it follows that $\Vert S(a)\Vert\le C \Vert\varphi\Vert \Vert a\Vert$ for each $a\in A$, which implies that $S$ is continuous. Further,~\eqref{e2148} gives~\eqref{eqfifi}. \end{proof} \begin{lemma}\label{l1520} Let $A$ be an amenable Banach algebra, let $X$ be a Banach space, and let $\varphi\colon A\times A\to X$ be a continuous bilinear map. Suppose that there exists a continuous linear operator $S\colon A\to X$ such that \begin{equation}\label{b} \varphi(ab,c)-\varphi(b,ca)+\varphi(bc,a)=S(abc) \quad (a,b,c\in A). \end{equation} Then there exist continuous linear operators $\Phi\colon [A,A]\to X$ and $\Psi\colon A\to X$ such that \begin{equation*} \varphi(a,b)=\Phi([a,b])+\Psi(a\circ b) \quad (a,b\in A). \end{equation*} Here and subsequently, $a\circ b$ stands for $ab + ba$. \end{lemma} \begin{proof} Let $(u_\lambda)_{\lambda\in\Lambda}$ be an approximate diagonal for $A$ of bound $C$, and let $\mathcal{U}$ be an ultrafilter on $\Lambda$ refining the order filter. On account of the Banach-Alaoglu theorem, each bounded subset of the bidual $X^{**}$ of $X$ is relatively compact with respect to the weak$^*$-topology. Consequently, each bounded net $(x_\lambda)_{\lambda\in\Lambda}$ in $X$ has a unique limit in $X^{**}$ with respect to the weak$^*$-topology along the ultrafilter $\mathcal{U}$, and we write $\displaystyle{\lim_\mathcal{U}} x_\lambda$ for this limit. Let $\widehat{\varphi}\colon A\widehat{\otimes}A\to X$ be the unique continuous linear map such that \[ \widehat{\varphi}(a\otimes b)=\varphi(a,b) \] for all $a,b\in A$. We define $T\colon A\to X^{**}$ by \[ T(a)=\lim_{\mathcal{U}}\widehat{\varphi}( u_\lambda\cdot a) \] for each $a\in A$. For each $a\in A$, we have \begin{equation}\label{e940} \Vert\widehat{\varphi}(u_\lambda\cdot a)\Vert\le \Vert\widehat{\varphi}\Vert\Vert u_\lambda\Vert\Vert a\Vert\le C \Vert\varphi\Vert \Vert a\Vert \quad (\lambda\in\Lambda). \end{equation} Hence the net $(\widehat{\varphi}( u_\lambda\cdot a))_{\lambda\in\Lambda}$ is bounded and the map $T$ is well-defined. The linearity of the limit along an ultrafilter on a topological linear space gives the linearity of $T$. Further, from~\eqref{e940} we deduce that $\Vert T(a)\Vert\le C \Vert\varphi\Vert \Vert a \Vert $ for each $a\in A$, which gives the continuity of $T$. 
We now claim that \begin{equation}\label{e1049b} \widehat{\varphi}(u\cdot a)= \widehat{\varphi}(a\cdot u)+\widehat{\varphi}(\pi(u)\otimes a)-S(a\pi(u)) \end{equation} for all $a\in A$ and $u\in A\widehat{\otimes}A$. Of course, it suffices to prove~\eqref{e1049b} for the simple tensor products $u=b\otimes c$ with $b,c\in A$. Observe that \eqref{b} can be written as \[ \widehat{\varphi}(a\cdot(b\otimes c))- \widehat{\varphi}((b\otimes c)\cdot a)+ \widehat{\varphi}(\pi(b\otimes c)\otimes a)=S(a\pi(b\otimes c)) \] and this gives~\eqref{e1049b}. For each $\lambda\in\Lambda$, we apply~\eqref{e1049b} with $u$ replaced by $u_\lambda\cdot a$ and $a$ replaced by $b$ to get the following \begin{equation*} \begin{split} \widehat{\varphi}(u_\lambda\cdot (ab)) & = \widehat{\varphi}((u_\lambda\cdot a)\cdot b)\\ & = \widehat{\varphi}(b\cdot u_\lambda\cdot a)+ \widehat{\varphi}(\pi(u_\lambda\cdot a)\otimes b) -S(b\pi(u_\lambda\cdot a))\\ & = \widehat{\varphi}(b\cdot u_\lambda\cdot a)+\widehat{\varphi}((\pi(u_\lambda) a)\otimes b) -S(b\pi(u_\lambda) a). \end{split} \end{equation*} We thus have \begin{equation} \begin{aligned} \label{e1128b} &\widehat{\varphi}(u_\lambda\cdot (ab)) - \widehat{\varphi}(u_\lambda\cdot (ba))\\ =& \widehat{\varphi}(b\cdot u_\lambda\cdot a)- \widehat{\varphi}(u_\lambda\cdot (ba))+ \widehat{\varphi}((\pi(u_\lambda) a)\otimes b)-S(b\pi(u_\lambda) a)\\ =& \widehat{\varphi}((b\cdot u_\lambda-u_\lambda\cdot b)\cdot a)+ \widehat{\varphi}((\pi(u_\lambda) a)\otimes b)-S(b\pi(u_\lambda) a). \end{aligned} \end{equation} On account of~\eqref{ad1}, we have $\lim_{\lambda\in\Lambda}(b\cdot u_\lambda-u_\lambda\cdot b)=0$ and therefore $\lim_{\lambda\in\Lambda}(b\cdot u_\lambda-u_\lambda\cdot b)\cdot a=0$, which implies that $\lim_{\lambda\in\Lambda}\widehat{\varphi}((b\cdot u_\lambda-u_\lambda\cdot b)\cdot a)=0$. Since $\mathcal{U}$ refines the order filter on $\Lambda$, it follows that $\lim_{\mathcal{U}}\widehat{\varphi}((b\cdot u_\lambda-u_\lambda\cdot b)\cdot a)=0$. According to~\eqref{ad2}, we have $\lim_{\lambda\in\Lambda}\pi(u_\lambda)a=a$. Hence $$\lim_{\lambda\in\Lambda}(\pi(u_\lambda)a)\otimes b=a\otimes b\quad\mbox{and}\quad \lim_{\lambda\in\Lambda}b\pi(u_\lambda) a=ba.$$ The continuity of both $\widehat{\varphi}$ and $S$ then gives $$\lim_{\lambda\in\Lambda}\widehat{\varphi}((\pi(u_\lambda)a)\otimes b)=\widehat{\varphi}(a\otimes b)\quad\mbox{and}\quad \lim_{\lambda\in\Lambda}S(b\pi(u_\lambda) a)=S(ba).$$ Since $\mathcal{U}$ refines the order filter on $\Lambda$, we conclude that $$\lim_{\mathcal{U}}\widehat{\varphi}((\pi(u_\lambda)a)\otimes b)=\widehat{\varphi}(a\otimes b) \quad\mbox{and}\quad \lim_{\mathcal{U}}S(b\pi(u_\lambda)a)=S(ba).$$ We now prove that \begin{equation}\label{e1652} \varphi(a,b)=T([a,b])+S(ba) \quad (a,b\in A). \end{equation} Indeed, by taking the limit along $\mathcal{U}$ in~\eqref{e1128b} we arrive at \begin{align*} T([a,b]) & =T(ab)-T(ba)= \lim_\mathcal{U}\widehat{\varphi}(u_\lambda\cdot (ab))- \lim_\mathcal{U}\widehat{\varphi}(u_\lambda\cdot (ba)) \\ & = \lim_\mathcal{U}(\widehat{\varphi}(u_\lambda\cdot (ab))- \widehat{\varphi}(u_\lambda\cdot (ba))) \\ & = \lim_\mathcal{U}\widehat{\varphi}((b\cdot u_\lambda-u_\lambda\cdot b)\cdot a)+ \lim_\mathcal{U}\widehat{\varphi}((\pi(u_\lambda) a)\otimes b) \\ & \quad {}-\lim_{\mathcal{U}}S(b\pi(u_\lambda)a) \\ & = \widehat{\varphi}(a\otimes b)-S(ba)=\varphi(a,b)-S(ba). 
\end{align*}
Define $\Phi\colon[A,A]\to X^{**}$ and $\Psi\colon A\to X$ by
\[
\Phi(a)=(T-\tfrac{1}{2}S)(a) \quad (a\in [A,A])
\]
and
\[
\Psi=\tfrac{1}{2} S.
\]
Note that, on account of~\eqref{e1652}, $T$ maps $[A,A]$ into $X$ and therefore $\Phi$ does not map merely into $X^{**}$, but actually into $X$. From~\eqref{e1652} we see that
\[
\Phi([a,b])+\Psi(a\circ b)=T([a,b])-\tfrac{1}{2}S(ab-ba)+\tfrac{1}{2}S(ab+ba)=T([a,b])+S(ba)=\varphi(a,b)
\]
for all $a,b\in A$.
\end{proof}

\begin{theorem}\label{tab}
Let $A$ be an amenable Banach algebra with property $\mathbb{B}$, let $X$ be a Banach space, and let $\varphi\colon A\times A\to X$ be a continuous bilinear map satisfying the condition:
\begin{equation*}
a,b\in A, \ ab=ba=0 \ \Rightarrow \ \varphi(a,b)=0.
\end{equation*}
Then there exist continuous linear operators $\Phi\colon [A,A]\to X$ and $\Psi\colon A\to X$ such that
\begin{equation*}
\varphi(a,b)=\Phi([a,b])+\Psi(a\circ b)
\end{equation*}
for all $a$, $b\in A$.
\end{theorem}

\begin{proof}
This is a straightforward consequence of Lemmas \ref{l1519} and \ref{l1520}.
\end{proof}

\begin{corollary}\label{cab}
If $A$ is an amenable Banach algebra with property $\mathbb{B}$, then $A$ is a zero Lie product determined Banach algebra.
\end{corollary}

\begin{proof}
Let $\varphi\colon A\times A\to\mathbb{C}$ be a continuous bilinear functional satisfying~\eqref{B}. If $a,b\in A$ are such that $ab=ba=0$, then $[a,b]=0$ and therefore $\varphi(a,b)=0$. Consequently, the functional $\varphi$ satisfies the condition in Theorem~\ref{tab}. Hence there exist continuous linear functionals $\tau_1\colon [A,A]\to\mathbb{C}$ and $\tau_2\colon A\to\mathbb{C}$ such that
\begin{equation}\label{ecab1}
\varphi(a,b)=\tau_1([a,b])+\tau_2(a\circ b) \quad (a,b\in A).
\end{equation}
Of course, the functional $\tau_1$ extends to a continuous linear functional on $A$ by the Hahn--Banach theorem. On the other hand, if $a\in A$, then $[a,a]=0$ and therefore $\varphi(a,a)=0$. Hence $\varphi$ is skew-symmetric (indeed, $\varphi(a,b)+\varphi(b,a)=\varphi(a+b,a+b)=0$ by bilinearity), and~\eqref{ecab1} yields
\begin{equation}\label{ecab2}
\begin{aligned}
\varphi(a,b)&=-\varphi(b,a)=-\tau_1([b,a])-\tau_2(b\circ a) \\
&=\tau_1([a,b])-\tau_2(a\circ b) \quad (a,b\in A).
\end{aligned}
\end{equation}
Adding~\eqref{ecab1} and~\eqref{ecab2} and dividing by two, we obtain
\[
\varphi(a,b)=\tau_1([a,b]) \quad (a,b\in A),
\]
which shows that $\varphi$ is of the form~\eqref{Bl}.
\end{proof}

Since the group algebra $L^1(G)$ has property $\mathbb{B}$ for each locally compact group $G$, and is amenable precisely when $G$ is amenable, the following result follows.

\begin{theorem}
Let $G$ be an amenable locally compact group. Then the group algebra $L^1(G)$ is a zero Lie product determined Banach algebra.
\end{theorem}
\section{Introduction}
\label{Introduction}
Blue Straggler Stars (BSS) are intriguing objects found in diverse environments such as globular clusters \citep{sandage53, fusi92, sarajedini93}, open clusters \citep{johnson55,burbidge58,sandage62,al95}, OB associations \citep{mathys87}, Galactic fields \citep{preston00}, and dwarf galaxies \citep{momany07,mapelli09}. On color-magnitude diagrams (CMDs) of star clusters, BSS lie along the extension of the main sequence, brighter and bluer than the main-sequence turn-off point of the cluster \citep{sandage53}. Their presence at these locations on the CMDs suggests that processes that increase the initial masses of stars are at work in these clusters. Observational evidence that BSS are more massive than other member stars of the clusters has been reported \citep{shara97,gilli98,ferraro06,fioren14}. Mass transfer in binary systems \citep{mccrea64} and stellar mergers resulting from direct stellar collisions \citep{hills76} are considered the two chief mechanisms for the formation of BSS.

BSS are considered crucial probes to study the interplay between stellar evolution and stellar dynamics \citep{bailyn95}. Since BSS are among the most massive objects of their clusters, their radial distributions are expected to show observational signatures of dynamical friction, which drags these massive objects toward the innermost regions of the cluster. \citet{ferraro12} discovered that the BSS radial distributions of 21 coeval globular clusters fall into one of three distinct families: flat (Family I), bimodal (Family II), and centrally peaked (Family III). As illustrated in \citet{ferraro12}, clusters with centrally peaked BSS radial distributions are the most dynamically evolved, with all of their most massive members already in the innermost regions. Those with bimodal BSS radial distributions are of intermediate dynamical age; the minimum of the distribution, $r_\mathrm{{min}}$, delineates the cluster region within which dynamical friction has so far been effective. Finally, clusters with flat radial distributions are the least dynamically evolved: dynamical friction has not yet altered the initial spatial distribution of their BSS. \citet{ferraro12} further reported a strong correlation between the relaxation times of the Family II clusters and the location of $r_\mathrm{{min}}$, which further substantiated their interpretation of BSS radial distributions as indicators of the dynamical ages of the clusters.

\citet{bhattacharya19} reported, for the first time, reliable BSS candidates of the open cluster Berkeley 17 \citep{phelps97}, identified using Gaia DR2 data. They found a bimodal radial distribution of the BSS, similar to Family II globular clusters, which are of intermediate dynamical ages. This open cluster had already been found to show the effect of mass segregation \citep{bhattacharya17} and was hence known to be dynamically evolved. \citet{bhattacharya19} thus gave the first example of how BSS radial distributions can be used to compare the dynamical ages of open clusters with those of globular clusters. Recently, \citet{rain19} presented the BSS population of another old open cluster, Collinder 261, using Gaia DR2 data and radial velocity data from FLAMES@VLT. They found a flat BSS radial distribution, implying that Collinder 261 is a dynamically young cluster.
It is important to study BSS radial distributions in more open clusters to know whether open clusters show the same three families of BSS radial distributions. Globular clusters and open clusters differ particularly in terms of age, metallicity, and stellar density. Studying BSS populations in open clusters can therefore offer significant insight into the BSS formation scenario and their role in cluster evolution.

The rest of the paper is organized as follows. Section~\ref{Section 2} gives information about the Gaia DR2 data used in this work and the selection of the open clusters analyzed here, Section~\ref{Sec. 3} details the method we used for the membership determination, Section~\ref{Sec. 4} gives our results, and finally Section~\ref{Sec. 5} presents a discussion of our results.

\section{Data and Cluster Selection}
\label{Section 2}
We use Gaia DR2 \citep{brown18} data to study the BSS of open clusters. Gaia DR2 provides stellar positions RA ($\alpha$) and DEC ($\delta$), proper motions in RA ($\mu_{\alpha}cos\delta$) and in DEC ($\mu_{\delta}$), and parallaxes ($\omega$) with a limiting magnitude of G=21 mag for more than 1.3 billion sources \citep{lindegren18}. The unique advantage of Gaia DR2 is the unprecedented precision that it offers: in parallax, up to 0.04 milli-arc-seconds (mas, hereafter) for G < 15 mag sources, $\sim$0.1 mas for G = 17 mag, and up to 0.7 mas for fainter G > 20 mag sources; in proper motion, up to 0.06 mas yr$^{-1}$ for G < 15 mag, $\sim$0.2 mas yr$^{-1}$ for G = 17 mag, and 1.2 mas yr$^{-1}$ for G > 20 mag. For each open cluster, using the Gaia DR2 proper motions, parallaxes, and radial velocities when available, we identify cluster members, including the BSS and red giant branch (hereafter RGB, our reference) populations.

The most comprehensive catalog of BSS in open clusters, containing 1887 BSS candidates in 427 open clusters, was developed by \citet[hereafter AL07]{al07}. They used the photometric data of open clusters available through the Open Cluster Database $\mathit{WEBDA}$\footnote{https://webda.physics.muni.cz/navigation.html} \citep{mermilliod03}, with membership information available in only a small number of clusters. We selected $\sim$90 clusters having $\geq$10 BSS in the AL07 catalog. In order to obtain more reliable membership information for the BSS populations of these clusters, we made use of the Gaia DR2 data to identify cluster members. Our membership analysis, outlined in Section~\ref{Sec. 3}, revealed many inconsistencies in the BSS of various open clusters when compared with the AL07 catalog. For instance, in several clusters the numbers of BSS reported in AL07 appear significantly overestimated when judged against our membership criteria. \citet{carraro08} had also pointed out a similar caveat of the AL07 catalog, namely that the numbers of BSS tend to correlate with the Galactic latitudes of the given clusters. It is evident that reliable BSS populations of clusters can be derived using the Gaia DR2 kinematic information. From our work on the $\sim$90 open clusters, we obtained around 15 open clusters with reasonably large numbers of BSS ($\geq$15) that allow a meaningful statistical comparison of their radial distribution with respect to a reference population. We present here the analysis of seven of those clusters. The analysis of the remaining clusters with $\geq$15 BSS will be presented elsewhere (Rao et al. 2020, in preparation). Table 1 lists the 7 target clusters whose BSS populations we present in this work.
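For reference, retrieving the Gaia DR2 sources for a field around a given cluster center can be scripted as an ADQL cone search. The following is a minimal sketch, assuming the \texttt{astroquery} package; the center coordinates (those of NGC 2158 from Table 1) and the 10$\arcmin$ radius are illustrative only, and the actual queries used in this work may differ.
\begin{verbatim}
# Minimal Gaia DR2 cone-search sketch (astroquery assumed).
# Center coordinates are those of NGC 2158 (Table 1); the
# 10 arcmin radius matches the initial fields of Section 3.1.
from astroquery.gaia import Gaia

query = """
SELECT source_id, ra, dec, pmra, pmdec, parallax,
       parallax_error, phot_g_mean_mag, bp_rp
FROM gaiadr2.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', 91.8541, 24.0966, 10.0/60.0)) = 1
"""
job = Gaia.launch_job_async(query)
sources = job.get_results()
print(len(sources), "sources retrieved")
\end{verbatim}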
Four of these clusters (Melotte 66, NGC 2158, NGC 2506, and NGC 6819) are of intermediate age, $\sim$2--3 Gyr, whereas the other three (Berkeley 39, NGC 188, and NGC 6791) are old, $\sim$6--8 Gyr. The numbers of AL07 BSS candidates in these open clusters range from 15 to 75.

\begin{table}
\centering
\caption{Target Open Clusters}
\begin{tabular}{llll}
\hline \\
Cluster& RA & DEC & No. of BSS \\
~&(deg)&(deg)& (AL07)\\
\\
\hline \\
Berkeley 39& 116.6750 & $-$4.6000 & 43\\
Melotte 66&111.5958 & $-$47.6666 & 35\\
NGC 188&11.8666 & $+$85.2550 & 24\\
NGC 2158 &91.8541 & $+$24.0966 & 40\\
NGC 2506&120.0041 & $-$10.7700 & 15 \\
NGC 6791&290.2208 & $+$37.7716 & 75\\
NGC 6819&295.3250 & $+$40.1866 & 29\\
\\
\hline
\end{tabular}\label{table:A01}
\end{table}

\begin{figure*}
\includegraphics[width=17.5cm]{NGC2158-panel.png}
\caption{The left panel shows a scatter diagram of the proper motions of sources within 10$\arcmin$ of the center of NGC 2158. The rectangular region on this panel shows our initial range of proper motions, fixed by visual examination of the distributions. The two middle panels show the histograms of the proper motions in RA and DEC of the selected sources, respectively. The fitted Gaussian functions are overplotted on the distributions. The right panel shows the distribution of the proper motions of the selected sources, with the proper motions of previously confirmed spectroscopic members shown as red open circles and their mean position shown with a red filled circle. The peak position calculated from the mean shift algorithm is shown as a black plus symbol. The same figure for the other clusters is given in Figure~\ref{Fig. A1}.}
\label{Fig. 1}
\end{figure*}

\begin{table*}
\centering
\caption{For each cluster (Column 1), the mean value of the proper motion (RA) determined from the Gaussian fit along with its standard deviation (Columns 2 and 3), the peak value (RA) determined by the mean shift method (Column 4), the mean proper motion (RA) of the previously known confirmed spectroscopic member stars (Column 5), the mean value of the proper motion (DEC) determined from the Gaussian fit along with its standard deviation (Columns 6 and 7), the peak value (DEC) determined by the mean shift method (Column 8), and the mean proper motion (DEC) of the previously known confirmed spectroscopic member stars (Column 9).}
\adjustbox{max width=\textwidth}{
\begin{tabular}{rrrrrrrrr}
\hline \\
Cluster& $\mathrm{G_{mean}}$ (RA)&$\mathrm{\sigma}$ (RA)&$\mathrm{Peak_{ms}}$ (RA)& $\mathrm{Mean_{mem}}$ (RA)& $\mathrm{G_{mean}}$ (DEC)&$\mathrm{\sigma}$ (DEC)&$\mathrm{Peak_{ms}}$ (DEC)& $\mathrm{Mean_{mem}}$ (DEC) \\
~&(mas/yr)&(mas/yr)&(mas/yr)&(mas/yr)& (mas/yr)&(mas/yr)&(mas/yr)&(mas/yr)\\
\\
\hline \\
Berkeley 39&$-$1.726&0.234&$-$1.728&$-$1.712$\pm${0.081}&$-$1.645&0.173&$-$1.640&$-$1.649$\pm${0.050} \\
Melotte 66&$-$1.475&0.198&$-$1.479&$-$1.480$\pm${0.074}&~2.740&0.183&2.738&2.732$\pm${0.088}\\
NGC 188&$-$2.305&0.164&$-$2.300&$-$2.320$\pm${0.148}&$-$0.948&0.146&$-$0.955&$-$0.952$\pm${0.160}\\
NGC 2158 &$-$0.184&0.245&$-$0.182&$-$0.198$\pm${0.109}&$-$2.006&0.222&$-$2.011&$-$2.009$\pm${0.103}\\
NGC 2506&$-$2.573&0.191&$-$2.572&$-$2.587$\pm${0.114}&~3.908&0.141&3.904&3.944$\pm${0.126}\\
NGC 6791&$-$0.422&0.240&$-$0.419&$-$0.438$\pm${0.101}&$-$2.270&0.282&$-$2.274&$-$2.253$\pm${0.113}\\
NGC 6819&$-$2.907&0.176&$-$2.912&$-$2.919$\pm${0.122}&$-$3.865&0.193&$-$3.867&$-$3.853$\pm${0.122}\\
\\
\hline
\label{table 2}
\end{tabular}
}
\end{table*}

\section{Methodology}
\label{Sec.
3}
\subsection{Proper Motion and Parallax Ranges for Selection of Member Stars}
\label{Sec. 3.1}
In order to determine the proper motion and parallax ranges for our membership selection criteria, we first downloaded Gaia DR2 data for a small 10$\arcmin$ field around the center of each cluster. Next, we plotted scatter diagrams of proper motion in RA ($\mu_{\alpha}cos\delta$) versus proper motion in DEC ($\mu_{\delta}$) for each cluster. By visually examining these plots, we first fixed an initial range of proper motions to select sources whose proper motions appeared distinct from those of the field stars. The left panel in Figure~\ref{Fig. 1} shows this scatter diagram for a representative cluster, NGC 2158. The rectangular region marks our initial ranges of proper motions. Next, we fitted Gaussian distributions to the proper motions of these selected sources and determined the mean ($\mathrm{G_{mean}}$) and the standard deviation ($\sigma$) of each distribution. The two middle panels of Figure~\ref{Fig. 1} show the Gaussian fits to the proper motion distributions in RA and DEC, respectively. We also used the mean shift-based clustering algorithm \citep{comaniciu02} to find the peak positions in the distributions of the proper motions of these selected sources. The right panel in Figure~\ref{Fig. 1} shows the proper motions of the selected sources, with the peak value of the proper motions determined using the mean shift method \citep{fukunaga75} marked with a black plus symbol. The proper motions of the confirmed spectroscopic members of the cluster are shown with red open circles and their mean is marked with a red filled circle. The same figure as Figure 1 but for the other clusters is given in Figure~\ref{Fig. A1}. In Table~\ref{table 2}, we list the mean values of the proper motions determined from the Gaussian fitting, $\mathrm{G_{mean}}$, along with their standard deviations, $\mathrm{\sigma}$, the peak values determined by the mean shift method, and, for comparison, the mean proper motions of the previously known confirmed spectroscopic member stars.

For the selection of cluster members, we tried seven proper motion ranges: $\mathrm{G_{mean}} \pm 2\mathrm{\sigma}$, $\mathrm{G_{mean}} \pm 2.5\mathrm{\sigma}$, $\mathrm{G_{mean}} \pm 3\mathrm{\sigma}$, $\mathrm{G_{mean}} \pm 3.5\mathrm{\sigma}$, $\mathrm{G_{mean}} \pm 4\mathrm{\sigma}$, $\mathrm{G_{mean}}\pm 4.5\mathrm{\sigma}$, and $\mathrm{G_{mean}} \pm 5\mathrm{\sigma}$. For the parallax ranges to select member stars, we used the mean Gaia DR2 parallax of the confirmed spectroscopic member stars of each cluster and the mean of their parallax errors. We tried three parallax ranges built from the mean parallax value, $\bar{\omega}$, and the mean parallax error, $\Delta \bar{\omega}$, namely $\bar{\omega}$ $\pm 2 \Delta \bar{\omega}$, $\bar{\omega}$ $\pm 2.5 \Delta \bar{\omega}$, and $\bar{\omega}$ $\pm 3 \Delta \bar{\omega}$.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{NGC2158-plx.png}
\caption{The parallax distribution of the proper-motion-selected sources for the cluster NGC 2158 is shown as the blue histogram. The orange histogram shows the parallax distribution of the proper-motion-selected sources with parallaxes within the range $\bar{\omega}$ $\pm 3 \Delta \bar{\omega}$, where $\bar{\omega}$ is the mean Gaia DR2 parallax, and $\Delta \bar{\omega}$ is the mean error in the Gaia DR2 parallaxes of the previously known, spectroscopically confirmed members of the cluster.
The same figure showing the parallax distributions of the other clusters is given in Figure~\ref{Fig. A2}.}
\label{Fig. 2}
\end{figure}

Along with the Gaia DR2 proper motions and parallaxes, we made use of the previously known confirmed spectroscopic members to select our members. For four of our clusters, radial velocity data obtained under the WIYN Open Cluster Survey (WOCS) have been published earlier, and we utilize them here. \citet{geller08} presented radial velocity data for more than 1000 sources in NGC 188, confirming membership for 473 sources. Of these, 320 confirmed members are within our adopted radius of the cluster (Section~\ref{Sec. 3.2}). \citet{tofflemire14} presented radial velocities for 280 evolved sources of NGC 6791, confirming membership of 111 sources. Most of these confirmed members are within our adopted radius of this cluster (Section~\ref{Sec. 3.2}). \citet{milliman14} presented WIYN radial velocity data for 2641 sources in NGC 6819, confirming membership for 679 candidates, almost all of them within our adopted radius of the cluster. \citet{twarog18} presented WIYN radial velocity data for 287 stars in the cluster NGC 2506, confirming membership of 191 sources. Surprisingly, in this cluster Gaia counterparts were found for only 26 of these sources. \citet{bragaglia12} presented VLT/FLAMES spectra of 29 evolved sources in Berkeley 39, all of which are within our adopted radius (Section~\ref{Sec. 3.2}). For the remaining two clusters, Melotte 66 and NGC 2158, spectroscopy of 8 red clump sources was presented by \citet{sestito08} and by \citet{smith84,jacobson09}, respectively.

\begin{figure}
\includegraphics[width=\columnwidth]{NGC2158-radial.png}
\caption{The radial distribution of sources with proper motions and parallaxes within the range of the selection criteria for the members of the cluster NGC 2158. The estimated radius and tidal radius (see Section~\ref{Sec. 4.1}) of the cluster are marked on the figure. The same figure for the other clusters is given in Figure~\ref{Fig. A3}.}
\label{Fig.
3}
\end{figure}

\begin{table*}
\caption{The parameters of the clusters: age (Column 2), distance (Column 3), metallicity (Column 4), mean extinction in the $G$ band (Column 5), mean color excess $E(B_P-R_P)$ (Column 6), estimated radius of the cluster (Column 7), core radius (Column 8), tidal radius (Column 9), location of the minimum in the bimodal BSS radial distribution in units of the core radius (Column 10), central relaxation time of the cluster (Column 11), the ratio of the cluster age to its central relaxation time (Column 12), and the literature values of age, distance, and metallicity (Columns 13, 14, and 15).}
\adjustbox{max width=\textwidth}{
\begin{tabular}{cccccccccccc|ccc}
\hline \\
~&~&~&~&~&This Work&~&~&~&~&~&~&~&Literature&~\\
\hline
Cluster & Age & $d$ & Metallicity &$A_G$ &$E(B_P-R_P)$ & Radius & $r_c$ & $r_t$ & $r_{\mathrm{min}}/r_c$ & $t_{rc}$ & $N_{\mathrm{relax}}$ & Age &$d$ & Metallicity \\
~&(Gyr)& (parsec)& ($Z$) & (mag)& (mag) & ($\arcmin$) & ($\arcmin$) & ($\arcmin$) & ~ & (Myr) & ~ & (Gyr)&(parsec)&($[Fe/H]$)\\
\\
\hline \\
Berkeley 39 & 6.0 & 4254 & 0.0127 & 0.30 & 0.20 & 14 & 1.9 & 20.5 & -- & 78 & 76.9 & 6--8 & 4000 & $-$0.20 \\
Melotte 66 & 3.4 & 4847 & 0.010 & 0.50 & 0.20 & 15 & 2.9 & 25.1 & 1.68 & 186 & 18.3 & 4--7 & 4700 & $-$0.51 -- $-$0.33 \\
NGC 188 & 7.0 & 1800 & 0.018 & 0.34 & 0.24 & 30 & 4.1 & 48.7 & 3.25 & 90 & 77.7 & 7 & 1900 & 0.30 \\
NGC 2158 & 1.9 & 4250 & 0.0186 & 1.0 & 0.53 & 11 & 1.4 & 22.0 & 4.025 & 43 & 44.2 & 1--2 & 4000 & $-$0.63 -- $-$0.3 \\
NGC 2506 & 2.0 & 3110 & 0.008 & 0.23 & 0.10 & 22 & 2.7 & 42.6 & 1.3875 & 150 & 13.3 & 1.85 & 3548 & $-$0.27\\
NGC 6791 & 8.5 & 4475 & 0.015 & 0.30 & 0.26 & 14 & 2.4 & 24.7 & 2.5 & 216 & 39.3 & 8 & 4000 & 0.31 \\
NGC 6819 & 2.4 & 2652 & 0.019 & 0.20 & 0.20 & 15 & 2.3 & 47.0 & -- & 60 & 40.0 & 2.5 & 2992 & 0.09 \\
\\
\hline
\label{table4}
\end{tabular}
}
\begin{tablenotes}
\item {Berkeley 39: Age -- \citet{kassis97,kaluzny89}, distance -- \citet{kaluzny89}, metallicity -- \citet{bragaglia12}}
\item {Melotte 66: Age -- \citet{friel93,kassis97}, distance -- \citet{carraro14}, metallicity -- \citet{friel93,sestito08}}
\item {NGC 188: Age -- \citet{sarajedini99}, distance -- \citet{sarajedini99}, metallicity -- \citet{friel10}}
\item {NGC 2158: Age -- \citet{arp62}, distance -- \citet{carraro02}, metallicity -- \citet{carraro02, jacobson09}}
\item {NGC 2506: Age -- \citet{twarog16}, distance -- \citet{twarog16}, metallicity -- \citet{twarog18}}
\item {NGC 6791: Age -- \citet{cunha15}, distance -- \citet{cunha15}, metallicity -- \citet{villanova18}}
\item {NGC 6819: Age -- \citet{rosvick98}, distance -- \citet{kalirai01, balona13,brewer16}, metallicity -- \citet{bragaglia01}}
\end{tablenotes}
\end{table*}

To judge which ranges of proper motions and parallaxes are most appropriate for the selection of cluster members, we considered the retrieval rate of the previously known confirmed spectroscopic member stars as one criterion, and the minimization of the contamination seen in the CMD as the other. Figure~\ref{Fig. 2} shows the proper-motion-selected sources for a representative cluster, NGC 2158, with the sources selected by both proper motion and parallax overlaid on them. The same figure as Figure 2 but for the other clusters is given in Figure~\ref{Fig. A2}. In almost all the clusters, the proper motion range $\mathrm{G_{mean}} \pm 2.5\mathrm{\sigma}$ and the parallax range $\bar{\omega}$ $\pm 3 \Delta \bar{\omega}$ satisfied both of the above criteria. In the last step, we used the cluster radii determined as explained in Section~\ref{Sec.
3.2} to add members beyond our initial 10$\arcmin$ fields and so prepare our complete cluster catalogs.

\subsection{Cluster Centers and Cluster Radii}
\label{Sec. 3.2}
To estimate the radii of our clusters, and to use them further to build the complete catalogs of our clusters, we downloaded the Gaia DR2 sources in a large field of 40$\arcmin$ radius around each cluster center. Next, we plotted the radial distributions of the sources to estimate the radius of each cluster. Figure~\ref{Fig. 3} shows this radial distribution for our representative cluster, NGC 2158. The cluster radius is estimated as the radius at which the cluster radial distribution merges with the field star distribution. The same figure as Figure~\ref{Fig. 3} but for the other clusters is given in Figure~\ref{Fig. A3}. The estimated cluster radii are listed in Table~\ref{table4}. The final catalogs of the clusters are compiled using these values of the cluster radii.

Since accurate cluster center coordinates are important for the analysis that we carry out, we determined the cluster centers from our final cluster catalogs following two different methods: the mean shift algorithm \citep{comaniciu02}, which determines the densest point in the RA and DEC coordinates, and the fitting of Gaussian functions to the frequency distributions of RA and DEC, which gives the mean RA and DEC positions. Figure~\ref{Fig. 4} shows the mean shift algorithm applied to the RA and DEC of the cluster NGC 2158 in the left panel, and the Gaussian fits to the RA and DEC frequency distributions of the same cluster in the middle and right panels, respectively. The same figure as Figure 4 but for the other clusters is given in Figure~\ref{Fig. A4}. In Table~\ref{table 3}, we give the cluster centers determined by both methods for all the clusters.

\begin{figure*}
\includegraphics[width=17.5cm]{NGC2158-center_panel.png}\par
\caption{The left panel shows the result of the mean shift clustering algorithm used to determine the cluster center of NGC 2158. The middle and right panels show the frequency distributions of the cluster members in RA and DEC, respectively, with the Gaussian functions fitted to the distributions. The same figure with the cluster center determinations for the other clusters is given in Figure~\ref{Fig. A4}.}
\label{Fig. 4}
\end{figure*}

\begin{table*}
\centering
\caption{Comparison of the cluster center coordinates determined by the mean shift algorithm and by the fitting of Gaussian functions.}
\begin{tabular}{ccccc}
\hline \\
~&Mean Shift&Mean Shift&Gaussian&Gaussian\\
\hline
Cluster&RA (deg) &DEC (deg) &RA (deg)&DEC (deg)\\
\\
\hline \\
Berkeley 39 & 116.69925 & $-$04.66973 & 116.70279$\pm$0.05 & $-$04.66830$\pm$0.05 \\
Melotte 66 & 111.58094 & $-$47.68514 & 111.57993$\pm$0.09 & $-$47.68661$\pm$0.06 \\
NGC 188 & 011.82039 & $+$85.23955 & 011.81136$\pm$1.28 & $+$85.24355$\pm$0.10 \\
NGC 2158 & 091.86376 &$+$24.09921 & 091.86288$\pm$0.04 & $+$24.09921$\pm$0.04 \\
NGC 2506 & 120.01196 & $-$10.77474 & 120.01064$\pm$0.08 & $-$10.77394$\pm$0.07 \\
NGC 6791 & 290.22079 & $+$37.77634 & 290.21958$\pm$0.07 & $+$37.77634$\pm$0.05 \\
NGC 6819 & 295.33109 & $+$40.18865 & 295.33021$\pm$0.10 & $+$40.18965$\pm$0.07 \\
\\
\hline
\label{table 3}
\end{tabular}
\end{table*}

\section{Results}
\label{Sec. 4}
\subsection{Radial Density Profiles}
\label{Sec. 4.1}
We plotted the radial density profiles of the cluster members using the cluster centers determined as explained in Section~\ref{Sec. 3.2}. The plot for NGC 2158 is shown in Figure~\ref{Fig. 5}.
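The construction of such a profile, and the King-function fit described next, can be sketched in a few lines of Python. This is a minimal illustration, assuming \texttt{numpy} and \texttt{scipy}; the input file name, the bin count, and the initial guesses are ours and not taken from the actual pipeline.
\begin{verbatim}
# Sketch: binned radial number density with Poisson errors,
# fitted with the empirical King (1962) profile.
import numpy as np
from scipy.optimize import curve_fit

def king(r, A, r_c, r_t):
    # King (1962) number-density profile.
    t = 1/np.sqrt(1 + (r/r_c)**2) - 1/np.sqrt(1 + (r_t/r_c)**2)
    return A * t**2

# Radial distances (arcmin) of members from the cluster center;
# the file name is hypothetical.
r = np.loadtxt("members_radii.txt")
edges = np.linspace(0.0, r.max(), 31)          # ~30 equal bins
counts, _ = np.histogram(r, bins=edges)
area = np.pi * (edges[1:]**2 - edges[:-1]**2)  # annulus areas
density = counts / area
sigma = np.sqrt(np.maximum(counts, 1)) / area  # 1-sigma Poisson
r_mid = 0.5 * (edges[1:] + edges[:-1])

p0 = (density[0], 1.5, 20.0)  # illustrative initial guesses
(A, r_c, r_t), _ = curve_fit(king, r_mid, density, p0=p0, sigma=sigma)
print(f"r_c = {r_c:.2f} arcmin, r_t = {r_t:.2f} arcmin")
\end{verbatim}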
We fitted King's function \citep{kings62} to the radial distribution of each cluster. In order to fit the King's function, we first divided the range of the cluster radius into $\sim$30 equal-width radial bins. We then plotted the logarithm of the number density against the radius for each bin. The King's function is seen to fit the clusters well (see Figure~\ref{Fig. A5} in the appendix for the remaining clusters). The resulting parameters, the core radii ($r_c$) and the tidal radii ($r_t$), are listed in Table~\ref{table4}; these, along with the normalization factor $A$, are also marked on the figures.

\begin{figure}
\includegraphics[width=\columnwidth]{NGC2158-kings_26bins.png}
\caption{The radial density profile of the cluster members, shown with the fitted King's function. The error bars represent 1$\sigma$ Poisson errors. The same figure showing the fit of the King's function to the radial density profiles of the other clusters is given in Figure~\ref{Fig. A5}.}
\label{Fig. 5}
\end{figure}

\subsection{Color Magnitude Diagrams}
\label{Sec. 4.2}
We plot the CMDs of the clusters using our identified cluster members. To fit the isochrones, we downloaded PARSEC isochrones \citep{bressan12} for the ages and metallicity values known from the literature (see references in Table~\ref{table4}), and used the mean value of the distances of our bright ($G$ < 15 mag) members \citep{bailer18} along with the median values of the extinction in the $G$ magnitude, $A_G$, and the reddening, $E(B_P$-$R_P)$, of our members, available in Gaia DR2. For some CMDs, some fine-tuning of the parameters was necessary to fit the isochrones to the members. In such instances, we varied the metallicity, $A_G$, and $E(B_P$-$R_P)$ values, keeping the distance and age fixed, to obtain the best fit of the isochrones. Sources with $G \leq$ TO$_{\mathrm{Mag}}$+0.5 and $B_P-R_P \leq$ TO$_{\mathrm{Col}}-$0.05, where TO$_{\mathrm{Mag}}$ and TO$_{\mathrm{Col}}$ are the magnitude and the color of the main-sequence turn-off point, respectively, were identified as our BSS candidates (blue solid squares). Sources with $G \leq$ TO$_{\mathrm{Mag}}-$0.5 and $B_P-R_P$ redder than the color value of the bottom of the red-giant branch of the PARSEC isochrone were identified as our RGB candidates (red solid squares). The CMD of NGC 2158 is shown in Figure~\ref{Fig. 6}. The previously known spectroscopically confirmed members of the clusters are shown as black open circles. Figure~\ref{Fig. A6} gives the CMDs of the remaining clusters. Table~\ref{table4} gives the parameters of the fitted isochrones and a comparison with the known parameters from the literature.

\begin{figure}
\includegraphics[width=\columnwidth]{NGC2158-cm-GBP_RP-eps-converted-to.pdf}
\caption{The CMD of the cluster NGC 2158, shown with the fitted PARSEC isochrones. Our BSS are marked as blue solid squares, BSS from AL07 (if any) are marked as green open triangles, and BSS known from spectroscopic data in the literature (if any) are marked as magenta open circles. Previously known spectroscopically confirmed members \citep{smith84,jacobson09} are marked as black open circles. Our RGB populations are marked as red solid squares. Figure~\ref{Fig. A6} shows the CMDs of the other clusters.}
\label{Fig. 6}
\end{figure}

\subsection{BSS Populations}
\label{Sec. 4.3}
NGC 2158 and NGC 6791 are rich in both BSS and RGB populations, containing more than 40 BSS candidates each, and 137 and 243 RGB candidates, respectively.
NGC 188, NGC 2506, and Berkeley 39 have 20--30 BSS candidates and 55--75 RGB candidates. Melotte 66 and NGC 6819 contain 14 BSS candidates each, the fewest among our clusters, with 106 and 68 RGB candidates, respectively. In most of the clusters, the BSS candidates extend up to 2--2.5 mag brighter in the $G$ band than the main-sequence turn-off point. NGC 2158 and NGC 2506 each contain one bright BSS candidate that is $\sim$3 mag brighter than the main-sequence turn-off point. All the clusters except NGC 188 show a significant red clump, with confirmed spectroscopic members in several clusters such as Melotte 66, NGC 2158, NGC 2506, and NGC 6791.

\begin{table*}
\caption{Comparison of the BSS candidates identified in this work with the BSS known in the literature: numbers of BSS candidates and numbers of new BSS candidates identified in this work (Columns 2 and 3), numbers of BSS candidates in the AL07 catalog (Column 4), AL07 BSS candidates that are found members according to our criteria (Column 5), BSS candidates that are common with the AL07 catalog (Column 6), numbers of confirmed BSS in the literature (Column 7), known BSS from the literature that are found members according to our criteria (Column 8), BSS candidates that are common with the confirmed BSS from the literature (Column 9).}
\begin{tabular}{ccc|ccc|ccc}
\hline \\
~&This work&~&~&AL07&~&~&Literature&\\
\hline
Cluster &$N_{BSS}$&New BSS&$N_{BSS}$&Members&Common BSS&$N_{BSS}$ &Members& Common BSS\\
\\
\hline \\
Berkeley 39 & 23 & 9 & 42 & 16 & 14 & -- & -- & -- \\
Melotte 66 & 14 & 5 & 35 & 9 & 9 & -- & -- & --\\
NGC 188 & 24 & 8 & 24 & 18 & 16 & 20 & 18 & 16 \\
NGC 2158 & 40 & -- & 40 & -- & -- & -- & -- & -- \\
NGC 2506 & 28 & 22 & 15 & 6 & 6 & -- & -- & --\\
NGC 6791 & 47 & 32 & 75 & 18 & 15 & 7 & 6 & 6 \\
NGC 6819 & 14 & 2 & 29 & 3 & 2 & 17 & 12 & 12 \\
\\
\hline
\label{table 5}
\end{tabular}
\begin{tablenotes}
\item {NGC 188: \citet{geller08}}
\item {NGC 6791: \citet{tofflemire14}}
\item {NGC 6819: \citet{milliman14}}
\end{tablenotes}
\end{table*}

We compared our BSS candidates with the AL07 BSS candidates of the clusters, as well as with the confirmed BSS of the three clusters NGC 188 \citep{geller08}, NGC 6791 \citep{tofflemire14}, and NGC 6819 \citep{milliman14}. In AL07, the sources of the photometric information are listed in their Table~1 \citep{al07}. We consulted $\mathit{WEBDA}$ and any data linked on ADS\footnote{https://ui.adsabs.harvard.edu} to the references of the photometric data to obtain the RA and DEC of the AL07 BSS. The comparison of our BSS candidates with the AL07 catalog has only been possible for those sources whose RA and DEC information could be found from the above-mentioned resources. For NGC 188, NGC 6791, and NGC 6819, the comparison with the known spectroscopically confirmed BSS populations lends confirmation to our membership analysis. Using the available spectroscopic information on other member stars of these three clusters, we also ascertain the membership of our new, previously unknown BSS candidates in these clusters, and discard them if they are found to be either confirmed or likely non-members. Table~\ref{table 5} gives a summary of the comparison of our BSS candidates with the previously known BSS.

\subsubsection{NGC\,188}
\label{Sec. 4.3.1}
We identified 26 BSS candidates in NGC\,188. The BSS population of NGC\,188 has been studied in great detail.
In particular, \citet{geller08} found 20 confirmed BSS based on the radial velocities of sources from WIYN data. Due to our conservative membership criteria based on the Gaia DR2 proper motions and parallaxes, 2 of the \citet{geller08} BSS are not identified as members by us. Of the 18 BSS of \citet{geller08} that we find to be members, 16 are also our BSS candidates. One BSS of \citet{geller08} is very close to the main sequence in our CMD and thus is not included in our BSS candidates; \citet{geller08} also mentioned that this particular BSS (source \#1366) has a low (23\%) membership probability based on its proper motion and is located at 29.6$\arcmin$ from the cluster center. The second BSS from \citet{geller08} appears on the RGB in our CMD; this classification has been corrected in the authors' later work \citep{geller13}. We found 10 new BSS candidates that are not included in the list of \citet{geller08}. Of these, four sources are listed as binaries with unknown membership (``BU'') and one source is listed as a single source with unknown membership (``U'') by \citet{geller08}. Sources with unknown membership in \citet{geller08} lack a complete orbit solution in the case of binary sources, or lack at least three separate radial velocity observations with a baseline of one year in the case of single sources. Two new BSS candidates are identified as members by \citet{geller08} but are not grouped with the BSS there, possibly because they are closer to the main sequence in their CMD. Two of our new BSS candidates are classified as likely non-members or non-members by \citet{geller08}, and for one new BSS candidate there is no information in \citet{geller08}. We drop the two likely non-members from further analysis but retain the remaining eight new BSS candidates, which are either members, candidates with unknown membership, or sources without information in \citet{geller08}. This leaves us with 24 BSS candidates in this cluster, 8 of which are new candidates (Table~\ref{table 5}).

\subsubsection{NGC\,6791}
\label{Sec. 4.3.2}
NGC 6791 has 48 BSS candidates, the most among the clusters that we present in this work. \citet{al07} had reported 75 BSS in NGC 6791 based on the photometric study of the cluster by \citet{kaluzny90}. We found Gaia counterparts of all of these sources within a 1$'$ search radius of the RA, DEC positions from \citet{kaluzny90}. However, only 18 of these 75 sources are our members, and 15 are our BSS candidates. \citet{tofflemire14} presented radial velocities of the evolved populations of NGC 6791 in their WIYN open cluster study. Within our adopted cluster radius, they found 7 BSS which are confirmed members based on their radial velocities. All but one of these BSS have been identified as BSS candidates in our analysis. Of our 42 new BSS candidates that are not identified as BSS by \citet{tofflemire14} (though 9 of them are common with AL07), a membership analysis is available for 8 sources in \citet{tofflemire14}. One of these sources is a confirmed member, 5 are single sources with unknown membership (including two rapid rotators), one is a binary with unknown membership, and one is a binary likely non-member. The remaining 34 new BSS candidates lack observations in \citet{tofflemire14}, probably because they are fainter than the cut-off magnitude of their targets ($V$=16.8 mag). After the removal of the one likely non-member from our BSS list, we are left with 47 BSS candidates in this cluster, of which 32 are new candidates, as listed in Table~\ref{table 5}.
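The positional cross-matching used above (e.g. to find Gaia counterparts of the \citet{kaluzny90} sources) can be illustrated with \texttt{astropy}; the sketch below is a minimal example, and the coordinates are made-up stand-ins for the literature positions and our member catalog.
\begin{verbatim}
# Sketch: nearest-neighbour sky match within a search radius.
import astropy.units as u
from astropy.coordinates import SkyCoord

# Illustrative positions (deg); in practice these come from the
# literature table and from our Gaia DR2 member catalog.
lit  = SkyCoord(ra=[290.221, 290.250]*u.deg,
                dec=[37.776, 37.790]*u.deg)
gaia = SkyCoord(ra=[290.2208, 290.300]*u.deg,
                dec=[37.7763, 37.800]*u.deg)

idx, sep2d, _ = lit.match_to_catalog_sky(gaia)
matched = sep2d < 1.0*u.arcmin   # search radius used above
print(matched.sum(), "of", len(lit), "sources matched")
\end{verbatim}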
\subsubsection{NGC\,6819}
\label{Sec. 4.3.3}
We found 18 BSS candidates in NGC\,6819. In their WIYN study of the cluster, \citet{milliman14} found 17 BSS candidates, of which 12 were chosen for a detailed barium abundance study by \citet{milliman15}. According to our membership criteria, 5 of these 17 BSS are not members, their parallaxes falling significantly outside our adopted range. The remaining 12 BSS of \citet{milliman14} are our BSS candidates. We have 6 new BSS candidates, but four of these are likely non-members according to \citet{milliman14}, one has unknown membership, and one has not been observed by \citet{milliman14}. After discarding the 4 likely non-members, we have a total of 14 BSS candidates in this cluster, of which two are new BSS candidates.

\begin{figure*}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{Be39.png}\par
\includegraphics[width=6.0cm]{Me66.png}\par
\includegraphics[width=6.0cm]{NGC188.png}\par
\end{multicols}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{NGC2158.png}\par
\includegraphics[width=6.0cm]{NGC2506.png}\par
\includegraphics[width=6.0cm]{NGC6791.png}\par
\end{multicols}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{NGC6819.png}\par
\end{multicols}
\caption{The cumulative radial distributions of the BSS and RGB populations. The normalized numbers of the two populations are plotted on the y-axis, and the radial distance in units of the core radius, $r_c$, is plotted on the x-axis.}
\label{Fig. 7}
\end{figure*}

\subsubsection{Berkeley\,39}
\label{Sec. 4.3.4}
We identify 23 BSS candidates in this cluster. \citet{al07} had reported 42 BSS in Berkeley 39. Only 16 of the AL07 BSS are identified as members by our criteria, of which 14 are our BSS candidates. We find 9 new BSS candidates in this cluster.

\subsubsection{Melotte\,66}
\label{Sec. 4.3.5}
Melotte 66 is, together with NGC 6819, the cluster with the fewest BSS in our analysis. We found 14 BSS in this cluster. \citet{al07} had reported 35 BSS in Melotte 66; only 9 of these are common with our BSS candidates. We find 5 new BSS candidates in this cluster.

\subsubsection{NGC\,2158}
\label{Sec. 4.3.6}
We found 40 BSS candidates in NGC 2158. AL07 had also found 40 BSS candidates in this cluster; however, no positional information for these sources was found in the literature. Hence, we could not make a comparison of our BSS candidates with the AL07 BSS candidates.

\subsubsection{NGC\,2506}
\label{Sec. 4.3.7}
We detect 28 BSS candidates in this cluster. \citet{al07} listed 15 BSS in this cluster, 6 of which are common with our BSS candidates. We find 22 new BSS candidates, two of them being bright BSS sources which have not been reported so far.

\subsection{BSS as Probes of Dynamical Evolution}
\label{Sec. 4.4}
Being among the most massive objects of their clusters, BSS are expected to be the most affected by dynamical friction, which produces the segregation of massive stars toward the cluster center. The bimodality of BSS radial distributions was first discovered by \citet{ferraro97} in the globular cluster M3, in their study of the cluster combining UV HST and ground-based optical data. Such a bimodality was then discovered in a large majority of globular clusters, with only a small fraction of globular clusters showing no external upturn, and another small fraction showing completely flat BSS radial distributions; e.g., see references in \citet{ferraro14}.
In their work, \citet{ferraro12} made use of RGB or horizontal branch (HB) stars as a reference population and plotted the ratio $N_{\mathrm{BSS}}/N_{\mathrm{RGB}}$ against the radial distance in units of $r_c$. The three families of clusters have undergone varying degrees of dynamical evolution, from Family I, the least dynamically evolved, to Family III, the most dynamically evolved. In Family II clusters, the location of the minimum of the distribution, $r_\mathrm{{min}}$, systematically moves outward for more dynamically evolved clusters.

We plotted the cumulative radial distributions of the BSS and RGB (our reference population) for these clusters. Figure~\ref{Fig. 7} shows the plots, where the normalized numbers of the two populations are plotted on the y-axis and the radial distance, in units of the core radius ($r_c$), is plotted on the x-axis. For a meaningful comparison between the numbers of BSS and RGB, we consider in this analysis only candidates of the two populations within the same magnitude range (as mentioned on Figure~\ref{Fig. 7}). In three clusters, Melotte 66, NGC 2158, and NGC 2506, the BSS are more concentrated than the RGB over the entire extent of the cluster. Melotte 66 shows a dip in the BSS frequency distribution at $r$ $\sim$2.5$r_c$, but its BSS population still remains the more concentrated throughout the cluster extent. NGC 2506 shows no BSS in the regions beyond $r \sim$3$r_c$. The radial distributions of three clusters, NGC 188, NGC 6791, and NGC 6819, show their BSS populations to be more concentrated in the inner regions, but less concentrated in the outer regions. In the cluster NGC 6819, the two distributions are not very distinct in the outer regions; in the central regions, however, the BSS population is more concentrated. Berkeley 39 shows no difference between the radial distributions of the two populations. For four of our clusters, the Kolmogorov-Smirnov test yields a high probability ($\geq$ 95\%) that the two populations, BSS and RGB, are not derived from the same parent population. These are Melotte 66 (98.2\%), NGC 188 (96.8\%), NGC 2506 (99.8\%) and NGC 6791 (98.2\%). For NGC 2158, the test gives a weaker distinction, 88\%, between the BSS and RGB populations.

In Figure~\ref{Fig. 8}, we show the ratio of the numbers of the two populations, $N_{\mathrm{BSS}}/N_{\mathrm{RGB}}$, against the radial distance, $r$, in units of $r_c$. As in Figure~\ref{Fig. 7}, in this analysis as well we include only those BSS and RGB candidates which are in the same magnitude range in each cluster. The ratios $N_{\mathrm{BSS}}/N_{\mathrm{RGB}}$ are computed in equal-sized bins in $r/r_c$, chosen such that each bin has at least 1 BSS, except the end bins where there are no BSS left. In five of our clusters, Melotte 66, NGC 188, NGC 2158, NGC 2506, and NGC 6791, the radial distribution is seen to peak at the center, fall with radius until a certain radius, $r_\mathrm{{min}}$, and show a rising trend again beyond this radius. The mean value of the bin in which the distribution falls to its minimum is taken as $r_\mathrm{{min}}$ and is listed in Table~\ref{table4} for each cluster. To evaluate the bimodality visible in these radial distributions, we performed Hartigan's dip test \citep{hartigan87}. The dip test, based on null-hypothesis logic, works by finding the maximum difference between the empirical distribution function and the unimodal distribution function that minimizes this maximum difference.
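Both statistical checks are straightforward to script. The sketch below assumes \texttt{scipy} for the two-sample Kolmogorov-Smirnov test and the third-party \texttt{diptest} package as one available implementation of Hartigan's test; the input radii are random stand-ins for the measured BSS and RGB distances, not our data.
\begin{verbatim}
# Sketch: KS test on BSS vs. RGB radii, dip test on BSS radii.
import numpy as np
from scipy.stats import ks_2samp
import diptest  # third-party implementation of Hartigan's test

rng = np.random.default_rng(0)
r_bss = rng.exponential(1.0, 40)    # stand-in BSS radii (r/r_c)
r_rgb = rng.exponential(2.0, 140)   # stand-in RGB radii (r/r_c)

D_ks, p_ks = ks_2samp(r_bss, r_rgb)
print(f"KS: D={D_ks:.3f}, p={p_ks:.3f}")   # small p: different parents

D_dip, p_dip = diptest.diptest(r_bss)
print(f"dip: D={D_dip:.3f}, p={p_dip:.3f}")
\end{verbatim}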
A test result with a $p$-value smaller than 0.05 suggests significant bimodality (or multimodality) in the distribution, while a $p$-value smaller than 0.1 but greater than 0.05 suggests marginal bimodality (or multimodality) \citep{freeman12}. The results of the dip test that we performed, the dip statistic (D) and the $p$-value, are given on the plot of each cluster (Fig. \ref{Fig. 8}). According to our analysis, Melotte 66, NGC 188, and NGC 2506 show significant bimodality, whereas NGC 2158 and NGC 6791 show marginal bimodality. We conclude that these five clusters are of Family II type, as defined by \citet{ferraro12}, and are of intermediate dynamical ages. The remaining two clusters, Berkeley 39 and NGC 6819, show flat radial distributions; the dip test likewise fails to detect bimodality in them. We thus classify these two clusters as Family I type, as defined in \citet{ferraro12}, but discuss their dynamical status in detail in Section~\ref{Sec. 5}.

\begin{figure*}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{Be39-dip_test.png}\par
\includegraphics[width=6.0cm]{Me66-dip_test.png}\par
\includegraphics[width=6.0cm]{NGC188-dip_test.png}\par
\end{multicols}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{NGC2158-dip_test.png}\par
\includegraphics[width=6.0cm]{NGC2506-dip_test.png}\par
\includegraphics[width=6.0cm]{NGC6791-dip_test.png}\par
\end{multicols}
\begin{multicols}{3}
\includegraphics[width=6.0cm]{NGC6819-dip_test.png}\par
\end{multicols}
\caption{The ratio $N_{\mathrm{BSS}}/N_{\mathrm{RGB}}$ is plotted against the radial distance in units of the core radius, $r_c$, for each cluster. The bin sizes are selected such that all bins have at least 1 BSS, except the end bins where there are no BSS left. Only the BSS and RGB in the same magnitude range for a given cluster are used in this analysis. The error bars represent Poisson errors. The dip statistic, D, and the $p$-value from the dip test for bimodality are given on the plots.}
\label{Fig. 8}
\end{figure*}

\begin{figure}
\includegraphics[width=\columnwidth]{Figure9.png}
\caption{The correlation between the location of the minimum in the BSS radial distributions, $r_{\mathrm{min}}/r_c$, and the number of central relaxations that have occurred since cluster formation, $N_{\mathrm{relax}}$, for five open clusters (blue filled circles) and the 21 globular clusters (red filled circles) of \citet{ferraro12}, for which the data have been taken from Table~1 of \citet{lanzoni16}. The open cluster correlation, shown as a blue dashed line with errors calculated from the covariance matrix of the fit and shown as the blue shaded area, has a slope that is consistent, within the errors, with the slope of the globular cluster correlation (red dashed line), but has a slightly larger intercept. The combined correlation, fitted to both the open and globular clusters, is shown as a black dashed line. The red and gray shaded areas show the errors of the globular cluster correlation and of the common correlation of open and globular clusters, respectively.}
\label{Fig.
9}
\end{figure}

To test the scenario that the location of $r_{\mathrm{min}}$ in these five clusters with bimodal distributions is indicative of their dynamical ages, we estimated central relaxation times, $t_{rc}$, using the equation $t_{rc}=1.491 \times 10^7 ~\mathrm{yr} \times \frac{k}{\ln(0.4N_{\ast})} \langle{m_{\ast}}\rangle^{-1} \rho_{M,0}^{1/2}~r_c^3$, where $k \approx$ 0.5592, $N_{\ast}$ is the estimated number of stars in the cluster, $\langle{m_{\ast}}\rangle$ is the average stellar mass in solar units, $\rho_{M,0}$ is the central mass density in $M_{\sun}/pc^3$, and $r_c$ is the core radius in pc \citep{djorgovski93}. For this calculation, the average stellar mass and the central mass density were empirically estimated from the member stars of each cluster. With these values of the central relaxation times, $t_{rc}$, and the cluster ages (Table~\ref{table4}), we estimated a theoretical parameter indicative of the dynamical age of the cluster, $N_{\mathrm{relax}} = \mathrm{Age}/t_{rc}$, i.e. the number of central relaxation times that have occurred since cluster formation. For NGC 2158, for instance, $t_{rc} = 43$ Myr and an age of 1.9 Gyr give $N_{\mathrm{relax}} \approx 44$ (Table~\ref{table4}). The two parameters, $t_{rc}$ and $N_{\mathrm{relax}}$, of the clusters are listed in Table~\ref{table4}.

The plot of the logarithm of $N_{\mathrm{relax}}$ against the logarithm of $r_{\mathrm{min}}/{r_c}$ for our Family II clusters (blue filled circles) is shown in Figure~\ref{Fig. 9}. On the same plot, we also show the data-points of 21 globular clusters (red filled circles) taken from Table~1 of \citet{lanzoni16}. In order to plot the globular cluster data-points, we computed the $N_{\mathrm{relax}}$ values of the globular clusters by dividing the mean age of the globular clusters (=12 Gyr) \citep{forbes10} by the central relaxation times, $t_{rc}$, available in Table~1 of \citet{lanzoni16}, similar to what was done by \citet{ferraro18}. The best-fit relation for our open cluster data-points, shown as a blue dashed line, is:
\begin{equation}
\mathrm{log}(N_{\mathrm{relax}})= 1.43 (\pm 0.42)~\mathrm{log} (r_{\mathrm{min}}/r_c)+0.97 (\pm 0.17)
\label{eq1}
\end{equation}
The errors are calculated from the covariance matrix of the fit and are used to plot the shaded blue region. The correlation confirms that the observed parameter, $r_{\mathrm{min}}$, does trace the true dynamical status of the clusters. Interestingly, the slope of the fitted open cluster correlation turns out to be in agreement, within the errors, with the slope of the previously known correlation for globular clusters \citep{ferraro12}\footnote{\footnotesize{\citet{ferraro12} reported a correlation between $\log(t_{rc}/t_{\mathrm{H}})$ and $\log(r_{\mathrm{min}}/{r_c})$, where $t_{\mathrm{H}}$ is the Hubble time, for the 21 coeval globular clusters that are included in \citet{lanzoni16}, as: $\mathrm{log}(t_{rc}/t_H)=-1.11\mathrm{log}(r_{\mathrm{min}}/r_c)-0.78$. \citet{ferraro12} noted that using individual globular cluster ages in place of the Hubble time does not change the correlation.
Since we fit $N_{\mathrm{relax}}$ for these 21 globular clusters, the values of the slope and intercept of our globular cluster correlation are positive.}}, shown as the red dashed line:
\begin{equation}
\mathrm{log}(N_{\mathrm{relax}})= 1.09 (\pm 0.16)~\mathrm{log}(r_{\mathrm{min}}/r_c)+0.7 (\pm 0.17)
\end{equation}
As our open cluster data-points occupy the same parameter space as the globular cluster data-points, we attempted a combined fit to both kinds of clusters; the resulting correlation (black dashed line) is given as:
\begin{equation}
\mathrm{log}(N_{\mathrm{relax}})= 0.96 (\pm 0.14)~\mathrm{log}(r_{\mathrm{min}}/r_c)+0.89 (\pm 0.14)
\end{equation}
The open cluster correlation, equation \ref{eq1}, has a higher intercept. This could be due to the structural differences between the two kinds of clusters, which give rise to longer relaxation times for globular clusters, or could be a signature of other factors, such as the dominance of different formation channels of BSS in the two kinds of clusters (discussed further in Sec. \ref{Sec. 5}). In order to obtain a more constrained correlation for open clusters, we certainly need to analyze more open clusters with reasonable numbers of BSS (Rao et al. 2020, in preparation).

\section{Discussion}
\label{Sec. 5}
The Gaia DR2 data have made it feasible to identify secure cluster members using the precise proper motion, parallax, and radial velocity information they provide. We developed a membership determination criterion based on the combined use of all these parameters along with information on confirmed cluster members found in the literature. Our membership criteria were fixed by trying various selection ranges in proper motion and parallax and arriving at a compromise between minimizing the visible contamination in the CMDs and maximizing the retrieval of the known members of the clusters. The means of the proper motions of our members, determined by fitting Gaussian functions as well as by employing the mean shift-based algorithm, agree very well with the mean proper motions of the known members of the clusters. The mean distances of our bright (G $\leq$15 mag) members are in excellent agreement with the mean distances of the known members of the clusters. In five of our clusters we retrieve 90--100\%, and in the remaining two clusters 80--90\%, of all previously known members, most of which are RGB stars, though in some clusters they also include main-sequence stars below the turn-off point. Our membership criteria may be stringent, as we do miss a small fraction of the confirmed giants and BSS; however, since our analysis is based on the BSS and the reference population (RGB) having the same magnitude range, the incompleteness of the two populations is expected to be comparable and hence irrelevant to the analysis presented here.

There are large inconsistencies in the AL07 BSS candidates. In our matched clusters, 40--60\% of the AL07 BSS are found to be non-members. Thus, a membership analysis of cluster members using kinematic information, such as that provided by Gaia DR2, is very important for studying the BSS populations of open clusters. In the clusters with spectroscopically known BSS, NGC 188, NGC 6791 and NGC 6819, we retrieve 70--100\% of the BSS but lose up to 30\% of the BSS due to our stringent kinematic selection criteria. At the same time, we find 8 new BSS candidates in NGC 188, 30 new BSS candidates in NGC 6791 and 2 new BSS candidates in NGC 6819.
Among our new BSS candidates in these three clusters, the fractions of non-members are 12\% in NGC 6791, 20\% in NGC 188, and a significant 66\% in NGC 6819. The crowdedness along the line of sight toward NGC 6819 (Figure~\ref{Fig. A3}) explains the high field star fraction in the case of this cluster. A radial velocity follow-up of the identified BSS candidates is necessary for membership confirmation.

BSS radial distributions can be exploited to learn about the dynamical ages of the clusters. The common feature of bimodal BSS radial distributions in globular clusters \citep{ferraro12, beccari13} has been reproduced in numerical simulations by \citet{mapelli04, mapelli06} and \citet{lanzoni07}. These numerical simulations have shown that the central peak in the radial distribution can be attributed both to the formation of collisional BSS in the densest regions of the cluster and to the sinking of mass-transfer binaries toward the central regions as a result of mass segregation. The secondary maximum occurring in the outskirts of the clusters, on the other hand, is attributed to BSS generated from primordial binaries via the mass transfer process, in a manner largely free of interactions with other cluster stars. Whereas this observational signature of mass segregation has long been recognized and extensively exploited as an accurate dynamical clock in globular clusters, we present the first similar analysis for multiple open clusters and find an extension of such a correlation to open clusters.

NGC 2158 is the most dynamically evolved cluster as per our analysis. Its short relaxation time corroborates our finding that dynamical friction has significantly altered the initial distribution of its most massive population out to more than half of its radius ($r_{\mathrm{min}}=5.5\arcmin$). NGC 2158 has been known to be an interesting cluster, possessing a globular-cluster-like core \citep{arp62}, low metallicity ([Fe/H]=$-$0.64), a young age of 1--2 Gyr \citep{carraro02}, and a location in the Galactic plane. Our result provides the first direct evidence that the cluster has an old dynamical age despite being a relatively young open cluster (1.9 Gyr), among the youngest we study in this work. We find that NGC 188 is the second most dynamically evolved cluster. Even though the minimum in NGC 188 occurs at a smaller radius (3.25$r_c$) compared to its location in NGC 2158 (4.0$r_c$), NGC 188 has the highest central peak in the distribution, higher by a factor of 3--5 than in most clusters and by a factor of 1--2 than in NGC 2506. Our finding is consistent with the previous knowledge that NGC 188 is a dynamically evolved cluster, as shown by \citet{geller13} based on N-body simulations of the cluster alongside its observational data. Melotte 66 and NGC 2506 appear to be of comparable dynamical age from their BSS radial distributions, as is also confirmed by their relaxation times.

Based on their flat radial distributions, Berkeley 39 and NGC 6819 may be classified as Family I type clusters. The theoretically estimated values of $N_{\mathrm{relax}}$ for these two clusters (see Table~\ref{table4}), however, suggest that they are dynamically evolved. With these $N_{\mathrm{relax}}$ values of Berkeley 39 and NGC 6819, their $r_{\mathrm{min}}$ are predicted to be 4.36$r_c$ and 2.75$r_c$, respectively, following the open cluster correlation (Eq. \ref{eq1}).
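These predictions follow directly from Table~\ref{table4} and Eq.~\ref{eq1}; the short sketch below reproduces them (to rounding) by computing $N_{\mathrm{relax}}$ from the tabulated ages and central relaxation times and inverting the fitted relation.
\begin{verbatim}
# Sketch: N_relax = Age/t_rc and the r_min/r_c predicted by
# inverting Eq. (1); the inputs are the Table 4 values.
import numpy as np

age_myr  = {"Berkeley 39": 6000.0, "NGC 6819": 2400.0}
t_rc_myr = {"Berkeley 39": 78.0,   "NGC 6819": 60.0}
slope, intercept = 1.43, 0.97      # Eq. (1)

for name in age_myr:
    n_relax = age_myr[name] / t_rc_myr[name]
    r_min_rc = 10**((np.log10(n_relax) - intercept) / slope)
    print(name, round(n_relax, 1), round(r_min_rc, 2))
# Berkeley 39: N_relax = 76.9, r_min ~ 4.4 r_c
# NGC 6819:    N_relax = 40.0, r_min ~ 2.8 r_c
\end{verbatim}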
In Berkeley 39, however, all the BSS of the cluster can be seen to be already segregated to cluster regions within $r \sim$3$r_c$, i.e. smaller than the predicted $r_{\mathrm{min}}$. This indicates that Berkeley 39 has undergone mass segregation and is dynamically evolved, and could in fact be a Family III cluster. The RGB stars of this cluster, too, are segregated to the inner cluster regions, with radial distances smaller than $\sim$4$r_c$. With the segregation of two of its most massive stellar populations complete to regions within the inner half of the cluster radius of $\sim$7.36$r_c$ (or 14$\arcmin$), this cluster is indeed dynamically evolved. Without a clear central peak in the BSS radial distribution, though, Berkeley 39 cannot be definitively classified as a Family III cluster. In the cluster NGC 6819, we do not see a minimum in the BSS radial distribution at the predicted $r_{\mathrm{min}} \sim 2.75 r_c$ of the cluster. Thus, with the current data that we use, we are unable to comment on whether NGC 6819 is likely a Family II cluster or, as its radial distribution suggests, a Family I cluster. With radial velocity information for our BSS and RGB candidates, it may be possible to revisit the family classification of Berkeley 39 and NGC 6819.

The location of the minimum in the BSS radial distributions of globular clusters has been shown to work as the ``hand'' of the dynamical clock by \citet{ferraro12}. For the first time in the literature, we investigate whether the BSS radial distributions of open clusters also work as accurate probes of the dynamical ages of clusters. Though we have only five open clusters with bimodal BSS distributions, we clearly see a positive correlation between $r_{\mathrm{min}}/r_c$ and the dynamical ages of the clusters, quantified here by $N_{\mathrm{relax}}$, the number of central relaxations experienced by the clusters during their lifetime. This correlation confirms that BSS radial distributions are sensitive probes of dynamical evolution in open clusters as well. Our best-fit open cluster correlation is comparable to the previously known globular cluster correlation in its slope, but has a higher intercept. A possible reason for the offset between the two intercepts could be the structural differences between the two kinds of clusters. Apart from the vast differences in the numbers of stars, metallicity, and stellar density in these two kinds of clusters, there are also large variations in their sizes. For example, the most evolved Family II cluster in our sample (radius $\sim$8$r_c$) has its $r_{\mathrm{min}}$ equal to 4.0$r_c$. Indeed, as can be seen in \citet{kharchenko13}, the radii of most open clusters are typically smaller than ten times their King core radii. An implication of the small sizes of open clusters is that the range of $r_{\mathrm{min}}$ in Family II open clusters will vary between 0 and $\sim$10$r_c$; clusters with $r_{\mathrm{min}}$ greater than $\sim$10$r_c$ would have already transitioned to the Family III type. This is unlike Family II globular clusters, which show a much larger range of $r_{\mathrm{min}}$ (0--100$r_c$; see Fig. \ref{Fig. 9}). The offset between the two intercepts might also hint at the dominance of different formation channels in the two kinds of clusters. As shown by \citet[and the references therein]{chat13}, stellar collisions contribute to BSS formation in dense cluster environments such as globular clusters and the cores of open clusters.
However, collisions are not considered to be the dominant formation channel in open clusters \citep{leonard96,hurly05,perets09}. Most notably, in two open clusters, M67 and NGC 188, radial velocity surveys have shown much higher binary fractions among BSS, 60$\pm$24\% and 76$\pm$22\% respectively \citep{latham07,geller08}. We stress that extending a similar analysis to a larger sample of open clusters containing reasonable numbers of BSS, as well as confirming the membership of the BSS candidates with new radial velocity observations, would be important steps in further improving our knowledge of this correlation in open clusters. Such a correlation, if established, would serve as the basis for the use of BSS radial distributions as direct indicators of the dynamical evolution of open clusters. \section*{Acknowledgements} The authors are grateful to the anonymous referee for the valuable comments. SB acknowledges support from the IMPRS on Astrophysics at the LMU Munich. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{2013A&A...558A..33A}, Numpy \citep{oliphant15} and Matplotlib \citep{hunter07}. This research also made use of NASA's Astrophysics Data System (ADS).
\section{Acknowledgements} This work was partially supported by Innovate UK grant 133549: \textit{Intelligent Situational Awareness Platform}, and by EPSRC grant EP/R033722/1: \textit{Trust in Human-Machine Partnerships}. Also, this work is partially supported by the Autonomous Province of Trento in the scope of L.P. n.6/1999 with grant MAIS (Mechanical Automation Integration System) n. 2017-D323-00056 del. n. 941 of 16/06/2017 and by EIT Digital within the "AWARD" project. \section{Background} We consider planning problems expressed in the PDDL 2.1~\cite{pddl21} temporal planning language; for the sake of brevity we do not report the full syntax and semantics of such planning problems, but we directly introduce the parametrized planning problem idea adapted from~\cite{robustness-envelopes}. \begin{definition} A \deftitle{parametrized planning problem} $\ensuremath{\mathcal{P}}_\Gamma$ is a tuple $\tuple{\Gamma, \ensuremath{\mathcal{P}}}$, where $\Gamma$ is a finite set of real-valued parameters $\{\gamma_1, \cdots, \gamma_n\}$ and $\ensuremath{\mathcal{P}}$ is a PDDL 2.1 planning problem in which conditions, effects, goals and initial states can contain parameters. \end{definition} \noindent Intuitively, symbols (from a known set $\Gamma$) can be used in expressions where real-typed constants are usually allowed. As customary in many cases of plan execution, we use plans expressed as Simple Temporal Networks (STN) \cite{dechter-stn}. An STN plan is a set of constraints of the form $t_i - t_j \le k$, where $t_i$ and $t_j$ are time points linked to action happenings (i.e. either the start or the end of an action instance in the plan) and $k \in \mathbb{Q}$. In addition, we allow parameters in the plan specification by generalizing the notion of STN plans. \begin{definition} A \deftitle{parametrized STN plan} $\pi_\Gamma$ for a parametrized planning problem $\ensuremath{\mathcal{P}}_\Gamma \ensuremath{\doteq} \tuple{\Gamma, \ensuremath{\mathcal{P}}}$ is an STN plan in which some constraints are of the form $t_i - t_j \le \gamma$, where $t_i$ and $t_j$ are time points of the STN plan and $\gamma \in \Gamma$. \end{definition} We define the Robustness Envelope (RE) for a parametrized problem and plan as the set of possible values for the parameters that make the plan valid when the symbols are substituted with those values in the plan and problem specifications. In order to compute the RE, \citeauthor{robustness-envelopes} define a set of logical formulae that characterize the RE and use quantifier elimination techniques (e.g. \cite{fourier-motzkin}) to explicitly construct the region. The encoding is divided into three expressions, indicated as $enc_{tn}^{\pi_\Gamma}$, $enc_{\mathit{eff}}^{\pi_\Gamma}$ and $enc_{\mathit{proofs}}^{\pi_\Gamma}$. The formula $enc_{tn}^{\pi_\Gamma}$ encodes the temporal constraints imposed by $\pi_\Gamma$ limiting the possible orderings of time points. The formula $enc_{\mathit{eff}}^{\pi_\Gamma}$ encodes the effects of each time point on the state variables, while $enc_{\mathit{proofs}}^{\pi_\Gamma}$ encodes the validity properties of the plan, namely that the conditions of each executed action are satisfied, that the goal is reached, and that the $\epsilon$-separation constraint imposed by PDDL 2.1 is respected. Then, letting $\bar{X}$ be the set of all the variables appearing in the formulae above except the parameters, the RE is characterized by all the models of the following formula.
$$\exists \bar{X} . (enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma}) \wedge \forall \bar{X} . ((enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma}) \rightarrow enc_{\mathit{proofs}}^{\pi_\Gamma})$$ As observed by \citeauthor{robustness-envelopes}, any under-approximation of the RE gives sound information on the contingencies in which the plan is guaranteed to be valid; in particular, a convenient restriction for the representation and handling of REs is to associate a closed interval of possible values to each parameter, defining a hyper-rectangle. If such a hyper-rectangle is contained in the RE, we have a "Decoupled Robustness Envelope" (DRE) that retains the guarantees of the RE but avoids the complexity of inter-dependencies among parameters. \begin{definition}\label{def:dre} A \deftitle{Decoupled Robustness Envelope} for a parametrized planning problem $\ensuremath{\mathcal{P}}_\Gamma$ and STN plan $\pi_\Gamma$ is a bound assignment $\rho : \Gamma \rightarrow \mathbb{Q}_{\ge 0} \times \mathbb{Q}_{\ge 0}$, such that any parameter assignment $v : \Gamma \rightarrow \mathbb{Q}_{\ge 0}$ with $l \le v(\gamma) \le u$, where $\tuple{l, u} \ensuremath{\doteq} \rho(\gamma)$, is contained in the RE for $\ensuremath{\mathcal{P}}_\Gamma$ and $\pi_\Gamma$. \end{definition} \noindent Note that many DREs are possible for a given problem and plan: it suffices that all the assignments allowed by the DRE are points in the RE. In this paper, we elaborate on this idea and propose an algorithm that incrementally builds DREs that are contained within the unconstrained RE without paying the cost of explicitly computing the RE itself. Finally, we highlight that, given any two DREs $\rho_1$ and $\rho_2$, three cases are possible: either $\rho_1$ is subsumed by $\rho_2$ (i.e. for each parameter $\gamma$, $\rho_1(\gamma) \subseteq \rho_2(\gamma)$), or $\rho_1$ subsumes $\rho_2$, or the two DREs are incomparable. Hence, there is no absolute best DRE in general: we aim for a DRE that is not subsumed by any other, but there can be multiple DREs with this property. \section{Conclusion} In this paper, we make the case for the use of REs in a plan execution framework. We present a novel, anytime algorithm to compute DREs that is empirically superior to the previously known logic-based construction. Moreover, we demonstrate the usefulness of the produced artifacts by integrating them in the ROSPlan framework and showing on a concrete example the positive impact on the number of re-plannings and the plan success-rate. In the future, we will consider other kinds of approximations of the robustness envelope (e.g. hyper-octagons instead of hyper-rectangles). We will also explore the link to temporal uncontrollability and non-deterministic planning. Finally, using IRR in parallel with the dispatcher could allow variation in the parameters being considered during execution. \section{Experiments} We now present our empirical analysis, which comprises three sets of experiments. The first aims at showing the superior performance of \textsc{IRR} compared with the logical approach of~\cite{robustness-envelopes}. The second shows a practical use-case of the execution flow proposed in this paper when the duration of actions is uncertain. In the third experiment we use our DRE technique to execute plans when the consumption rates of resources are uncertain. \myparagraph{IRR} We start by considering the experimental dataset and the tool (hereafter called \textsc{CCMMZ}) provided in~\cite{robustness-envelopes}.
The benchmarks use a varying number of parameters; in particular, AUV ranges from 1 to 8 parameters, Generator Linear from 1 to 4 and Solar Rover between 1 and 4. We compare our IRR implementation with \textsc{CCMMZ} on all the available instances and domains, measuring the total run-time and using the ``decoupled envelope generation'' functionality of the tool. Moreover, in order to take into account the anytime nature of IRR, we also measure the time at which the rectangle $R$ in IRR widens and becomes different from a single point (i.e. we measure the first time algorithm \ref{algo:irr} reaches line 17), and we call this timing ``IRR First Widening''. In all our experiments we set $\beta = 1$ and all $\omega_i = 1$ to find the decoupled region approximated to within a single unit with no preferences among the parameters (obviously, we set the same parameter preference also in \textsc{CCMMZ}). We executed all of the instances on a Xeon E5-2620 2.10GHz machine with a time limit of 3600s and a memory limit of 20GB. Figure \ref{fig:domains} shows the result of this analysis. IRR is able to solve many more instances than \textsc{CCMMZ} and is consistently quicker. Moreover, we note how the first widening is often encountered quite early in the execution, marking the margin for anytime exploitation of IRR. In fact, after the first widening, IRR has already computed a meaningful and non-trivial under-approximation of the RE that can be used for execution. This is particularly evident in the Generator Linear domain, where the algorithm is unable to fully terminate in some cases, but the first widening point is reached. In addition to these domains, we also experiment with several instances of a service-robot domain that we also use for the following execution experimental analysis. The domain, called ``Robot Delivery'', is a simplified version\footnote{A simplified RCLL domain was used because the PDDL provided in the RCLL image is not complete and the RCLL simulation requires external processes, e.g. a referee box. We are interested in the flexible execution success rate, so we created PDDL instances encoding logistics problems without any external processes.} of the domain used in the Planning and Execution Competition for Logistics Robots in Simulation~\cite{rcll}. The domain comprises a fleet of small robots that can navigate in a Euclidean graph. These robots are tasked to pick and deliver orders within a deadline. Collecting an order requires two robots to be present at a machine. We scaled the number of parameters in the instances between 1 and 33. \begin{figure}[tb] \centering \includegraphics[width=.75\columnwidth]{plots/robot_delivery.pdf} \caption{\label{fig:delivery} Scalability experiments on the delivery domain: the number of solved instances (sorted by difficulty for each solver) is considered on the X axis and is compared with the logarithmic time needed to solve each instance (lower, longer lines are better).} \end{figure} Figure \ref{fig:delivery} shows the scalability of IRR and \textsc{CCMMZ} on this domain. These instances are much harder for both solvers compared to the previous domains; in fact, \textsc{CCMMZ} is only able to solve 3 instances, while IRR is able to solve 15 of them. Also in this case, the anytime nature of IRR is evident from the difference between the first widening and the algorithm completion.
\begin{figure}[tb] \centering \includegraphics[width=.75\columnwidth]{plots/prec.pdf} \caption{\label{fig:convergence} Convergence of IRR in terms of steps: each red dot is a DRE computed by IRR and the plot shows the progression in terms of convergence at each step of the algorithm. The purple line indicates the poorest convergence percentage for each step in any experiment; similarly, the black, dashed line shows the best convergence.} \end{figure} Finally, we investigate how quickly the IRR algorithm converges in our experiments. We define convergence at step $i$ in a run of IRR that terminates with hyper-rectangle $R_{end}$ as follows ($R_i$ indicates the hyper-rectangle at step $i$). $$ \resizebox{.75\columnwidth}{!}{$Convergence(i) = \frac{\sum_{[l, u] \in R_i} u-l}{\sum_{[l, u] \in R_{end}} u-l} \times 100$} $$ \noindent Intuitively, this gives the percentage of the region covered by $R_i$ with respect to $R_{end}$. (Note that $R_{end}$ contains $R_i$ because the IRR algorithm only expands previous hyper-rectangles.) Figure \ref{fig:convergence} shows, for all problems solved by IRR in our benchmark set, the percentage of convergence achieved after any number of steps of the IRR algorithm. The plot clearly shows that a limited number of steps often approximates the final intervals very well; in particular, within 50 steps we already cover more than 70\% of the final sum of the interval sizes in all cases. \begin{table*}[tb] \centering \resizebox{.9\textwidth}{!}{ \setlength{\tabcolsep}{2pt} \begin{tabular}{l|cc|cc|cc|cc|cc} \multirow{2}{*}{\bf Executor} & \multicolumn{2}{c|}{\bf 1 Parameter} & \multicolumn{2}{c|}{\bf 2 Parameters} & \multicolumn{2}{c|}{\bf 3 Parameters} & \multicolumn{2}{c|}{\bf 4 Parameters} & \multicolumn{2}{c}{\bf 5 Parameters}\\ & \bf \small Coverage & \bf \small Avg Replans & \bf \small Coverage & \bf \small Avg Replans & \bf \small Coverage & \bf \small Avg Replans & \bf \small Coverage & \bf \small Avg Replans & \bf \small Coverage & \bf \small Avg Replans\\ \hline &&&&&&&&&&\\[-1.5ex] \textsc{DREEx}\xspace & \bf 92.2\% & \bf 0.1& \bf 85.8\% & \bf 0.2& \bf 83.2\% & \bf 0.2& \bf 72.8\% & \bf 0.1& \bf 63.0\% & \bf 0.1 \\ \blexec{0} & 0.0\% & NA& 0.0\% & NA& 0.0\% & NA& 0.0\% & NA& 0.0\% & NA \\ \blexec{10} & 24.5\% & 1.0& 4.8\% & 1.0& 0.8\% & 1.2& 0.2\% & 0.8& 0.0\% & NA \\ \blexec{20} & 44.4\% & 0.8& 19.9\% & 1.0& 6.3\% & 1.0& 2.4\% & 1.6& 0.8\% & 1.9 \\ \blexec{30} & 58.9\% & 0.7& 34.0\% & 0.9& 18.8\% & 1.2& 9.2\% & 1.2& 4.5\% & 1.4 \\ \blexec{40} & 68.8\% & 0.6& 52.2\% & 0.8& 34.7\% & 1.0& 23.8\% & 1.1& 11.8\% & 1.5 \\ \blexec{50} & 75.0\% & 0.5& 62.2\% & 0.7& 49.0\% & 0.9& 37.8\% & 1.1& 27.9\% & 1.2 \\ \blexec{60} & 78.8\% & 0.4& 69.0\% & 0.5& 59.4\% & 0.7& 53.4\% & 0.9& 44.5\% & 1.1 \\ \end{tabular} } \caption{Coverage and average number of re-plans in the duration-uncertain delivery domain.} \label{tab:results} \end{table*} \myparagraph{Duration-Uncertain Flexible Execution} We use the Robot Delivery domain to investigate the merits of an on-line plan executor equipped with our IRR algorithm. In particular, we begin by focusing the analysis on the number of re-plans and on the plan execution success rate when only the duration of actions is uncertain during execution. In this domain, a robot has to collect a spindle from a shelf, construct a base by performing six steps (possibly in parallel), then mount a number of rings, and finally deliver the order. Orders have deadlines that must be met for delivery.
The domain allows the agent to drop an order and restart from scratch with a new one at any time, but this disposal action takes some time (10 seconds in our case) and the robot needs to navigate on a symbolic Euclidean graph to pick the parts, assemble and deliver the order. Each instance is simulated in an environment where actions have a non-deterministic duration described by a normal distribution with a minimum cutoff value. Due to the difficulty of manipulation tasks, the actions executed for preparing the base (in which the robots interact with machines) have the highest degree of variance. These actions have mean durations of $120$, $130$, $140$, $150$, $160$ and $170$ seconds, and a standard deviation of $70$. Due to this uncertainty and the presence of deadlines for the order delivery, the execution of a plan can fail even when a re-planning schema is employed. We generated a total of 100 problems by varying the deadlines for the orders. Our DRE-based approach was implemented in ROSPlan, as described in section \ref{sec:exe}. The STN dispatcher starts the execution of actions following the temporal constraints of the STN: the process is illustrated in algorithm \ref{algo:dispatch}. For each node, the minimum and maximum dispatch times are calculated during execution (line 5). The dispatch ends when an action completes outside of the temporal constraints allowed by the STN, or has not been started after the maximum allowed dispatch time. When the dispatch ends, it returns $true$ if the goals have been achieved; otherwise, re-planning is triggered as shown in figure \ref{fig:flow}. The system will continuously attempt to re-plan until the deadlines make the PDDL planning problem unsolvable. \begin{algorithm}[tb] \small \begin{algorithmic}[1] \Function{STNDispatch}{$\pi_{stn},\rho$} \State{$finished = false$} \While{$\neg finished$} \ForEach{node $n \in \pi_{stn}$} \State{$min, max \gets$ \Call{MinMaxDispatchTime}{$n,\pi_{stn},\rho$}} \If{$n$ is action start} \If{$(min \leq n \leq max) \wedge \neg$\Call{started}{$n$}} \State{\Call{StartExecuting}{$n$}} \ElsIf{$(n \geq max) \wedge \neg$\Call{started}{$n$}} \State{$finished = true$} \EndIf \ElsIf{$n$ is action end} \If{$(n \geq max) \wedge \:\neg$\Call{Completed}{$n$}} \State{$finished = true$} \ElsIf{$(n \leq min) \wedge \:$\Call{Completed}{$n$}} \State{$finished = true$} \EndIf \EndIf \EndFor \EndWhile \State{\Return{\Call{GoalsAchieved}{ }}} \EndFunction \end{algorithmic} \caption{\label{algo:dispatch} STN Dispatch} \end{algorithm} We compare the executor described in section \ref{sec:exe} (indicated as \textsc{DREEx}\xspace) against several baselines in which we dispatch the STN plan $\pi_{stn}$ without parameterization. In these baselines, the executor dispatches the STN plan allowing for a fixed deviation in the duration of actions and ends the dispatch only when an action duration falls outside of this interval. This is the optimistic technique for execution implemented in ROSPlan which, differently from \textsc{DREEx}\xspace, offers no formal guarantees. We consider baseline executors named \blexec{0} to \blexec{60}, allowing for 0\% to 60\% variability in action duration before triggering a re-plan. For example, given an action with a predicted duration of 100 seconds, \blexec{0} will re-plan if the duration is not exactly 100; \blexec{20} will re-plan if the duration is outside of the interval $[80,120]$ (the sketch below contrasts the two checks).
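To make the contrast concrete, the following minimal Python sketch (our own illustration with hypothetical function names, not code from the ROSPlan implementation) compares the fixed-deviation test of the \blexec{X} baselines with the interval test used by \textsc{DREEx}\xspace:

\begin{verbatim}
def baseline_replan(observed, predicted, x_percent):
    # BL-X baseline: re-plan when the observed duration deviates
    # from the prediction by more than X% (e.g. BL-20 accepts
    # [80, 120] for a predicted duration of 100 seconds).
    tolerance = predicted * x_percent / 100.0
    return abs(observed - predicted) > tolerance

def dre_replan(observed, dre_bounds):
    # DRE-based executor: re-plan only when the observation leaves
    # the interval [l, u] computed by IRR for this parameter;
    # inside the interval, plan validity is formally guaranteed.
    l, u = dre_bounds
    return not (l <= observed <= u)
\end{verbatim}

The key difference is that the baseline tolerance is an arbitrary, uniform percentage, whereas the DRE interval is derived from the plan semantics and can be asymmetric and parameter-specific.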
The baseline \blexec{0} corresponds to formally executing the time-triggered plan $\pi_{tt}$: re-planning happens if any action duration differs from what was expected in $\pi_{tt}$. We highlight that, when \textsc{DREEx}\xspace is employed and the observation is within the envelope computed by IRR, we have the formal guarantee of plan success; as soon as one observation is outside of the envelope, we choose to re-plan. The overarching idea in these experiments is that the planner usually optimistically selects the easier, quicker goal and the agent starts to execute the plan. If the execution of the preparation actions runs too long, it might become impossible to deliver the order, so the only way to successfully recover is to immediately dispose of the current order and switch to another one with a less imminent deadline. If the executor fails to recognize this situation, it continues to execute the plan until it tries to deliver the order, at which point it discovers that the deadline has not been met. Since a lot of time has been wasted in the preparation, it might be impossible to recover from this situation. Ideally, we expect the predictive power of DREs to allow the identification of situations where the deadline cannot be met, so that a swift re-planning to change the objective order can be triggered. Table~\ref{tab:results} reports the results of the experiment. We report the coverage percentage (i.e. the percentage of problems successfully executed over the benchmark set) as well as the average number of re-plannings for successful runs. The baseline \blexec{0}, not accounting for any variance in action duration, was unable to solve any problem successfully. Allowing for more flexibility in the duration of actions increases the coverage, as should be expected. However, the \textsc{DREEx}\xspace approach achieves greater coverage than all baselines in all cases. This is because, in this problem, the ability to realize early that the agent is late for the first order and to change course in order to achieve the second order is pivotal for achieving a good success rate. \myparagraph{Resource-Uncertain Flexible Execution} Finally, we show that our flow can be used when the parameters are not just action durations. We expanded the delivery domain to consider the battery consumption of the robots. In particular, each action in the revised domain checks that enough battery is present upon start and consumes a fixed amount of battery. We parametrized the consumption rate of actions, so that the DRE computes the possible consumption values for which a given plan is valid. The executor is then required to observe the actual consumption and possibly invoke a re-planning if the observation does not fall within the DRE prescription. Also in this case, the baselines \blexec{X} invoke re-planning when the battery consumption is observed to be $X\%$ higher or lower than the nominal value.
\begin{table}[tb] \centering \resizebox{.75\columnwidth}{!}{ \setlength{\tabcolsep}{5pt} \begin{tabular}{l|c|c} \bf Executor & \bf Coverage & \bf Avg Replans \\ \hline &&\\[-1.5ex] \textsc{DREEx}\xspace & \bf 99.2\% & \bf 0.1\\ \blexec{0} & 1.0\% & 2.0\\ \blexec{10} & 25.6\% & \bf 0.1\\ \blexec{20} & 50.7\% & \bf 0.1\\ \blexec{30} & 66.4\% & \bf 0.1\\ \blexec{40} & 77.9\% & \bf 0.1\\ \blexec{50} & 82.1\% & \bf 0.1\\ \blexec{60} & 86.8\% & \bf 0.1\\ \end{tabular} } \caption{Coverage and average number of re-plans in the resource-uncertain delivery domain.} \label{tab:res-results} \vspace{-0.7cm} \end{table} Table~\ref{tab:res-results} reports the results of the experiment, and shows how the use of \textsc{DREEx}\xspace is highly beneficial, achieving an almost perfect success-rate with very few re-plannings on average. \section{Introduction} When planning and scheduling techniques are employed in practical applications, one of the major problems is the need for on-line re-planning when the observed contingencies are not aligned with the ones that were considered at planning time. These situations are common, because it is arguably impossible to predict the entire range of situations an autonomous system can encounter, especially when the planning domain encompasses time and temporal constraints. Unfortunately, re-planning can be costly in terms of time, and computational resources can be scarce on-board, so limiting the use of re-planning is very important for practical purposes. In principle, it is also possible to continue with the execution of a plan even when the observed contingencies are unexpected, optimistically hoping for a successful completion. However, this approach offers no formal guarantee, and is prone to the risk of continuing the execution of a plan that is bound to fail. Several approaches have been proposed in the literature to address this problem (see~\cite{ing17} for a survey focused on robotics). Some authors propose to post-process plans and generalize them relying on the scheduling constraints that are relevant for execution~\cite{policella-flexibility,muscettola-envelope,frank-linear-envelope}. Another line of research focuses on the creation of ``least commitment plans'', i.e. plans that are left partially open by the planner so that the execution can be adapted to some variation in the contingencies~\cite{ixtet,rmpl,europa,apsi,platinum}. Others tackled the idea of transforming temporal plans with no adaptability into flexible plans~\cite{flexibility-do}. Finally, one can explicitly model the uncertainties in the planning problem and construct a plan that offers formal guarantees with respect to such a model. Examples include Conformant and Contingent Planning~\cite{traverso-book}, Probabilistic Planning~\cite{probabilistic-planning} and Strong Temporal Planning with Uncertain Durations~\cite{micheli_aij}, which considers temporal uncertainty in the durations of actions. Recently, \emph{Robustness Envelopes} (REs) have been proposed to overcome several limitations of the approaches mentioned above. REs formally capture the possible contingencies that a given temporal plan, obtained by planning in a \emph{deterministic domain}, can deal with without having to re-plan \cite{robustness-envelopes}. REs are regions defined over a set of numeric parameters that represent possible contingencies, and contain all the parameter valuations ensuring plan validity. In general, REs may be non-convex, and can express dependencies between the parameters.
However, the technique proposed in~\cite{robustness-envelopes} has two main drawbacks limiting its practical applicability. First, the exact computation of REs is extremely expensive: the proposed approach is doubly exponential in the size of the planning problem. Second, REs in their general form are not suited for efficient execution: the dependencies among parameters might require run-time reasoning. In this paper, we overcome these limitations, achieving scalability and executability. We focus on \emph{Decoupled Robustness Envelopes} (DREs), i.e. hyper-rectangular REs in which the dependencies among parameters are not present and that are thus much easier to execute. Our first contribution is a novel and scalable algorithm for computing DREs as sound approximations of REs. The algorithm is anytime, and proceeds by incrementally under-approximating the RE with increasingly large DREs. The algorithm can be stopped at any time, providing a meaningful result that is already usable to start execution. In its general formulation, the RE for a given plan is naturally modeled as a quantified first-order formula in the theory of Linear Real Arithmetic. Our algorithm does not need to precisely compute the quantifier-free description of the RE (which requires an expensive step of quantifier elimination, and is ultimately responsible for the inefficiency demonstrated in~\cite{robustness-envelopes}). Rather, it starts from a degenerate DRE consisting of a single point, and progressively tries to enlarge it along different dimensions, checking if each extension is contained in the RE, until a given precision is reached. The algorithm relies on \emph{quantifier-free} queries to a Satisfiability Modulo Theories~\cite{smt} solver. Our second contribution is to demonstrate the practical use of DREs in a robotic executor, extending the classical flow from planning to execution to re-planning, as follows. First, a plan is generated from a deterministic model using temporal planning technologies, and transformed into a Simple Temporal Network (STN) formulation~\cite{dechter-stn}; at this point, we parametrize the durations of some of the actions in the plan and/or the consumption rates in the domain specification. DREs are then computed for the introduced parameters and passed to the executor. In turn, the dispatching of the actions in the STN plan begins and continues until one observed duration or consumption rate happens to fall outside of the DRE. At this point, the executor detects that the plan is no longer guaranteed to succeed, and re-planning is triggered. The proposed approach was implemented in the ROSPlan~\cite{rosplan} framework, and experimentally evaluated along two directions. The algorithm for DRE generation was compared against the baseline in~\cite{robustness-envelopes}, demonstrating orders-of-magnitude improvements compared to the exact computation of REs, and the ability to deal with a much larger number of parameters. The overall execution loop has been evaluated on a family of concrete case studies in a logistics domain, showing that the use of DREs, compared to the optimistic executor in ROSPlan, significantly reduces the number of re-plannings and improves the execution success-rate.
\section{Incremental Rectangular-Robustification} \begin{figure}[tb] \centering \resizebox{.85\columnwidth}{!}{\input{images/algo_intuition}} \caption{A graphical representation of IRR: starting from the parameter values of the original plan (depicted as the black point), IRR tries to construct increasingly better under-approximations (the colored rectangles) of the RE (the gray area), without actually computing it. Upon termination, each edge of the resulting DRE is guaranteed to be at most $\beta$ away from the border of the actual region.} \label{fig:algo-intuition} \end{figure} We now present our novel algorithm for incrementally computing decoupled robustness envelopes. We call this algorithm "Incremental Rectangular-Robustification" (IRR). The idea behind the algorithm is to construct incrementally better hyper-rectangular under-approximations of the RE for a given problem and plan. In fact, this constitutes a direct way of computing a DRE by generate-and-test. The starting point is the degenerate hyper-rectangle composed of the single point given by the parameter values of the original plan. The algorithm tries to extend the hyper-rectangle along one dimension (i.e. it tries to widen the interval of possibilities associated with one of the parameters) and checks if the resulting hyper-rectangle is in fact an under-approximation of the RE. If it is, the new hyper-rectangle is kept, as it is guaranteed to be a valid DRE. Otherwise, another dimension or another increment is chosen for the algorithm to proceed. The general intuition behind the algorithm is depicted in figure \ref{fig:algo-intuition}. \begin{algorithm}[tb] \small \begin{algorithmic}[1] \State{$enc_{valid} \gets $ \Call{QuantifierElimination}{$\exists \bar{X} . enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma}$}} \vskip 3pt \Function{IRR}{$\beta:\mathbb{Q}_{>0}$} \State{$R \gets \{\gamma \rightarrow [\pi(\gamma), \pi(\gamma)] \mid \gamma \in \Gamma\}$} \State{$\Delta \gets \{\gamma \rightarrow max(\pi(\gamma) \times \omega_{\gamma}, \beta) \mid \gamma \in \Gamma \}$} \State{$\Theta \gets \{\gamma \rightarrow \{\texttt{UB}, \texttt{LB}\} \mid \gamma \in \Gamma \}$} \While{$\exists \gamma \in \Gamma . \Delta(\gamma) \ge \beta$}
\State{$\tilde{\gamma}\gets $ \Call{Pick}{$\{\gamma \mid \gamma \in \Gamma \wedge \Theta(\gamma) \not = \emptyset \wedge \Delta(\gamma) \ge \beta\}$}} \State{$\theta \gets $ \Call{Pick}{$\Theta(\tilde{\gamma})$}} \State{$[l, u] \gets R(\tilde{\gamma})$} \IfThenElse{$\theta = \texttt{UB}$}{$u \gets (u + \Delta(\tilde{\gamma}))$}{$l \gets (l - \Delta(\tilde{\gamma}))$} \State{$R' \gets \{\gamma \rightarrow R(\gamma) \mid \gamma \in \Gamma \wedge \gamma \not = \tilde{\gamma}\} \cup \{\tilde{\gamma} \rightarrow [l, u]\}$} \MyIf{\Call{CheckInEnvelope}{$R'$}}{$R \gets R'$} \MyElse \State{$\Theta(\tilde{\gamma}) \gets \Theta(\tilde{\gamma}) \setminus \theta$} \If{$\Theta(\tilde{\gamma}) = \emptyset$} \State{$\Delta(\tilde{\gamma}) \gets \Delta(\tilde{\gamma}) / 2$; \ \ $\Theta(\tilde{\gamma}) \gets \{\texttt{LB}, \texttt{UB}\}$} \EndIf \EndMyElse \EndWhile \State{\Return{$R$}} \EndFunction \vskip 3pt \Function{CheckInEnvelope}{$R$} \State{$enc_R \gets \bigwedge_{\gamma \in \Gamma, [l, u] = R(\gamma)} l \le \bar{\gamma} \wedge \bar{\gamma} \le u$} \MyIf{\Call{IsSAT}{$enc_R \wedge \neg enc_{valid}$}}{\Return{false}} \MyElse \State{\Return{\Call{IsValid}{$(enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma} \wedge enc_R) \rightarrow enc_{\mathit{proofs}}^{\pi_\Gamma}$}}} \EndMyElse \EndFunction \end{algorithmic} \caption{\label{algo:irr} Incremental Rectangular-Robustification} \end{algorithm} Algorithm \ref{algo:irr} reports the pseudo-code of IRR. The formula $enc_{valid}$ is computed once and off-line. It corresponds to the basic requirements for the hyper-rectangle to be a valid DRE: only parameter values that do not contradict the STN plan and the causal flow of effects are admissible. This is the same as the first piece of the logical formulation in~\cite{robustness-envelopes}, but luckily it is the easier part of the quantification and can be computed efficiently. Then, the \textsc{IRR} function is in charge of computing a hyper-rectangle $R$, maintaining the following invariant: at each step, $R$ is a subset of the RE of the problem. The hyper-rectangle $R$ is represented as a pair of bounds (lower and upper) assigned to each parameter (this directly models a DRE as per definition \ref{def:dre}), and is initialized (line 3) with the values of the non-parametric plan $\pi$. The algorithm uses two functions to control how the hyper-rectangle is transformed from one cycle to the next. $\Delta$ associates to each parameter the value used to increase the upper bound or to decrease the lower bound of that parameter. The initial value of $\Delta$ is the original value of the parameter scaled by a per-parameter weight, but any positive number bigger than $\beta$ is enough to guarantee soundness and termination of the algorithm. Note that these weights can be used to express preferences on the parameters: a higher weight pushes the algorithm to expand a specific parameter more than the others. The function $\Theta$ is used to decide in which direction the interval of a parameter can be extended. Two directions are possible: \texttt{UB} indicates that we want to extend the upper bound and \texttt{LB} indicates that we want to decrease the lower bound (line 10). Initially both directions are possible, but when we discover (line 13) that one direction is infeasible with the current $\Delta$, we remove this direction from the possibilities (a Python sketch of this widening loop is reported below).
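The following minimal Python sketch (our own illustration, not the actual implementation; the containment oracle \texttt{check\_in\_envelope} is left abstract and corresponds to the \textsc{CheckInEnvelope} function) mirrors the widening loop of Algorithm \ref{algo:irr}:

\begin{verbatim}
def irr(pi, gammas, omega, beta, check_in_envelope):
    # Incrementally widen a degenerate hyper-rectangle around the
    # original parameter values pi[g], keeping it inside the RE.
    R = {g: (pi[g], pi[g]) for g in gammas}                   # line 3
    delta = {g: max(pi[g] * omega[g], beta) for g in gammas}  # line 4
    theta = {g: {"UB", "LB"} for g in gammas}                 # line 5
    while any(delta[g] >= beta for g in gammas):
        g = next(x for x in gammas if theta[x] and delta[x] >= beta)
        d = next(iter(theta[g]))                # pick a direction
        l, u = R[g]
        cand = (l, u + delta[g]) if d == "UB" else (l - delta[g], u)
        R_prime = dict(R)
        R_prime[g] = cand
        if check_in_envelope(R_prime):
            R = R_prime                         # keep the wider DRE
        else:
            theta[g] = theta[g] - {d}           # direction blocked
            if not theta[g]:                    # both blocked:
                delta[g] /= 2                   # refine the step and
                theta[g] = {"UB", "LB"}         # retry both directions
    return R
\end{verbatim}

Any \texttt{Pick} policy preserves soundness; the sketch simply takes the first available parameter and an arbitrary direction.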
The value of $\Delta$ is refined until it converges to a value lower than $\beta$, so each time $\Delta$ is updated, we reset $\Theta$ to allow both directions once again. The main loop of the algorithm continues until all the values of $\Delta$ are lower than $\beta$: this guarantees that the minimum distance between each border of the hyper-rectangle and the border of the actual RE is at most $\beta$. The algorithm picks a parameter $\tilde{\gamma}$ to be analyzed among the parameters having at least one direction available in $\Theta$ and that have not converged already (line 7); then, it generates a candidate hyper-rectangle $R'$ by extending either the lower or the upper bound of $\tilde{\gamma}$. At this point, we check whether $R'$ is contained in the RE (line 12). If it is, we keep it and continue the loop; otherwise, we discard this hyper-rectangle and record that with the current $\Delta$ we cannot extend $\tilde{\gamma}$ in this direction, by removing the direction $\theta$ from $\Theta(\tilde{\gamma})$. Moreover, if no direction is left for $\tilde{\gamma}$, we halve its value of $\Delta$ and reset $\Theta$ so that $\tilde{\gamma}$ can be tentatively extended again using a smaller step (lines 15-16). The core part of the algorithm consists in checking a candidate hyper-rectangle for containment in the actual RE, without explicitly computing the region itself. This is done via the \textsc{CheckInEnvelope} function, which performs two SMT checks corresponding to the two quantifiers appearing in the RE logical formulation of \cite{robustness-envelopes}. The first check looks for points belonging to $R$ that are not part of the validity region $enc_{valid}$; the second checks whether the rectangle (together with the guarantees from the plan and the effects) implies the proof requirements characterizing the REs. The important point here is that both checks are quantifier-free, i.e. no quantifier elimination is involved. \begin{theorem} The \textsc{CheckInEnvelope}($R$) function returns $true$ if and only if $R$ is a valid DRE. \end{theorem} \begin{proof} The algorithm logically checks the following formula: $\neg (\exists \bar{\Gamma} . enc_R \wedge \neg enc_{valid}) \wedge \forall \bar{\Gamma}, \bar{X} . (enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma} \wedge enc_R) \rightarrow enc_{\mathit{proofs}}^{\pi_\Gamma}$, which can be rewritten as $\forall \bar{\Gamma} . enc_R \rightarrow (enc_{valid} \wedge (\forall \bar{X} . (enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma}) \rightarrow enc_{\mathit{proofs}}))$, which states that $enc_R$ is a subset of the encoding of the RE. Then, by definition \ref{def:dre}, $R$ is the encoding of a valid DRE. \end{proof} An interesting feature of the algorithm is that it is ``anytime'': at any time, we can take the hyper-rectangle $R$ with the guarantee that $R$ is contained in the RE and is thus a valid DRE. Moreover, the algorithm is guaranteed to terminate if the RE is finite in all dimensions. \begin{theorem} If the robustness envelope is bounded in all dimensions, \textsc{IRR} always terminates. \end{theorem} \begin{proof} All the values in $\Delta$ are initially positive and, whenever the candidate rectangle is found to exit the RE in both directions, the corresponding value in $\Delta$ is halved (lines 15-16). Eventually all the parameters will be considered, and each extension will eventually be found to exit the RE because the RE is bounded in all dimensions. Therefore, all the values of $\Delta$ will become smaller than $\beta$.
\end{proof} We highlight that \textsc{IRR} is in fact an optimization procedure that incrementally maximizes the size of a starting DRE, terminating when a maximal DRE is found within the given precision limit $\beta$. \section{Execution Flow}\label{sec:exe} \begin{figure}[tb] \centering \resizebox{\columnwidth}{!}{\input{images/flow}} \caption{Overview of the proposed flow.} \label{fig:flow} \end{figure} The general idea we pursue in this paper is to exploit the information and the generalization provided by the synthesis of REs to limit the number of re-plannings and to increase the success-rate in execution. In particular, we propose the flow from planning to execution depicted in figure \ref{fig:flow}. Starting from a planning problem formulation expressed in PDDL 2.1, we use any off-the-shelf temporal planner\footnote{Several existing PDDL planners are unable to generate flexible STNs, either because of an implementation limitation or because the technique does not allow it (e.g. SAT-based planners). Our approach is able to generate DREs from these planners as well, and to work in concert with existing algorithms for the execution of STNs.} to compute a timed sequence of actions that reaches the goal from the initial state. We call this plan ``time-triggered'' (indicated with $\pi_{tt}$) in the picture. This plan is not natively amenable to execution because it defines one specific trace that does not allow any adaptability: it is extremely unlikely for a real system to be controlled so perfectly as to satisfy a specific trace. Hence, $\pi_{tt}$ needs to be converted into a flexible, executable STN ($\pi_{stn}$) by the ESTEREL transformer of ROSPlan. The usual flow would pass this STN directly to the dispatcher, which translates the plan actions into commands for the robotic platform at the proper time. Instead, here we pre-process this plan using REs in the hope of generalizing its applicability and reducing the number of re-plannings. In particular, the STN plan is passed to a parametrization component that re-reads the planning problem formulation and enriches it with parameters, generating a Parametric Planning Problem and a parametric STN plan ($\pi_\Gamma$). Those are the inputs for the computation of the RE. In our flow, for performance reasons and to avoid complex run-time reasoning, instead of computing the exact, unconstrained RE, we use a novel algorithm, called Incremental Rectangular-Robustification (IRR for short), that computes a DRE. The algorithm is anytime, so it is possible to retrieve unfinished computations and exploit them in execution: in fact, any under-approximation of the final result retains the needed properties of the RE. At this point, we pass the DRE ($\rho$) together with the parametrized STN plan to the STN dispatcher. We modified the dispatching algorithm to exploit the information in the DRE so as to limit re-plannings to situations where they are needed. In particular, the dispatcher translates the actions while checking that the observed values of the parameters (being either action durations, resources or rates) fall within the bounds imposed by $\rho$. If this is not the case, re-planning is needed and the whole flow is re-executed. \myparagraph{Parametrization} The first non-standard step highlighted in figure \ref{fig:flow} is the parametrization. In fact, there are multiple ways in which parameters can be added to a deterministic temporal planning problem to characterize quantities that are useful for execution.
In general, one can parametrize any numeric quantity in the planning problem whose value in the environment where the plan will be executed might differ from the modeled one. In order to be useful for the STN dispatcher, however, such quantities must eventually be observable (directly or indirectly). Otherwise, it is impossible for the executor to check whether the RE is still satisfied or a re-planning is needed. In the experimentation of this paper, we focused on two such quantities, namely the durations of actions and resource consumption rates. The former is a classical source of uncertainty when temporal planning is employed in a robotics scenario; the latter is another source of uncertainty that can perturb the execution of a plan, for example when resource harvesting is not fully controllable (e.g. a solar panel's yield depends on the weather) or when the consumption is not fully predictable (e.g. battery consumption is very hard to estimate precisely, as it depends on temperature, exact capacity and so on).
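As a closing illustration, the two quantifier-free checks at the core of \textsc{CheckInEnvelope} can be sketched with the Z3 SMT solver's Python API (our own sketch, assuming the encodings $enc_{valid}$, $enc_{tn}^{\pi_\Gamma} \wedge enc_{\mathit{eff}}^{\pi_\Gamma}$ and $enc_{\mathit{proofs}}^{\pi_\Gamma}$ are already available as Z3 formulas over the parameter and auxiliary variables):

\begin{verbatim}
from z3 import Solver, And, Not, Implies, unsat

def check_in_envelope(bounds, enc_valid, enc_tn_eff, enc_proofs):
    # bounds maps each Z3 Real parameter to its (lower, upper) pair.
    enc_R = And([And(lo <= g, g <= hi)
                 for g, (lo, hi) in bounds.items()])
    # First check: no point of R may lie outside the validity region.
    s1 = Solver()
    s1.add(enc_R, Not(enc_valid))
    if s1.check() != unsat:
        return False
    # Second check: validity of (enc_tn_eff and enc_R) -> enc_proofs,
    # i.e. unsatisfiability of its negation.
    s2 = Solver()
    s2.add(Not(Implies(And(enc_tn_eff, enc_R), enc_proofs)))
    return s2.check() == unsat
\end{verbatim}

Both queries are quantifier-free, which is what makes each widening step of IRR cheap compared to a full quantifier elimination.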
\section{Introduction} A \emph{quadrature formula} for integration with respect to the \emph{weight function} $\rho\colon\Omega\mapsto\real$ takes the form \begin{equation}\label{eq:quadrature} \int_{\Omega}f(t)\rho(t)d{t}\approx \sum_{k=1}^{K}w_k f(t_k). \end{equation} The weight function $\rho$ is a strictly positive measurable function that is the probability density function of a continuous random variable with finite moments. The \emph{weights} $w_k$ and \emph{nodes} $t_k$, $k=1,\dots,K$, are chosen so as to maximize the highest degree $d$ for which the approximation \eqref{eq:quadrature} is exact for every polynomial $f$ of degree up to $d$. This degree is called the \emph{degree of polynomial exactness} (sometimes the degree of precision) of the formula. A \emph{quadrature rule} is a sequence of quadrature formulas with an increasing number of nodes and an increasing degree of exactness. For many applications it is desirable that a quadrature rule be \emph{nested}, that is, that the node set of each formula is a subset of the node set of its successors. To obtain such a sequence, we start with a quadrature formula with $K(1)$ nodes, and extend it to a new formula with a higher degree of exactness by adding $K(2)-K(1)$ additional nodes. The weights of the existing nodes may change. Repeated application of such an algorithm gives rise to a nested sequence of quadrature formulas of increasing degree of polynomial exactness. In 1964, Kronrod presented a method to extend the well-known Gauss--Legendre formulas \cite{Kronrod1965}. His construction adds, for any $K$, $K+1$ nodes to the $K$-node Gauss--Legendre formula (which has degree of exactness $2K-1$ for the constant weight function $\rho\equiv1/2$ on $\Omega=[-1,1]$), extending it to a formula with $2K+1$ nodes and a degree of exactness of at least $3K+1$. He also showed that this is the best possible extension (in terms of the achieved degree of exactness), but he did not consider longer sequences of nested formulas. Patterson \cite{Patterson1968} showed that Kronrod's method can be used to obtain nested rules by iteratively extending the extended formulas. He also considered the constant weight function over the interval $[-1,1]$; the resulting quadrature rule is now known as the Gauss--Kronrod--Patterson (or GKP) rule. It is possible to generalize Patterson's method to obtain nested quadrature rules for other, non-uniform, continuous distributions with finite moments. To the best of our knowledge, this is very little known or used in the literature. One known example is the quadrature rule proposed by Genz and Keister \cite{GenzKeister1996} for integration with the Gaussian weight function. Their approach is a direct generalization of Patterson's 1968 algorithm. Patterson's 1989 paper \cite{Patterson1989} also considers general weight functions, but that algorithm is not a direct generalization of the first. Patterson's 1989 method requires that the distribution be given by the three-term linear recurrence relation satisfied by orthogonal polynomial bases with respect to the probability density function of the distribution. Obtaining this recurrence may itself be a difficult task \cite{GolubWelsch1969}, and the coefficients of the recurrence are not as commonly available for known distributions as moments are. In this note we formally propose and give the details of an algorithm that generates nested sequences of quadrature formulas for general continuous distributions with finite moments.
The algorithm circumvents the use of the three-term linear recurrence, and works with the moments of the underlying distribution directly. This yields a streamlined version of Patterson's algorithm, which can also be easily implemented. A Mathematica implementation is also provided. \section{Extending quadrature formulas using known moments} Our quadrature formula extension algorithm relies on the following two results. The first one is an immediate generalization of \cite[Theorem 1]{Kronrod1965}; its proof is omitted. \begin{proposition}\label{lem:GKP-1} For every probability density function $\rho$ and every set $\{t_1,\dots,t_{K}\}\subseteq \real$ of $K$ nodes there exist unique weights $w_1,\dots,w_{K}$ such that the quadrature formula for integration with respect to $\rho$ has a polynomial degree of exactness of at least $K-1$. These weights are the unique solution of the linear (square) system of equations \begin{equation}\label{eq:Patterson1} \sum_{i=1}^{K} w_i t_i^k = \int_\Omega t^k \rho(t)dt, \quad k=0,\dots,K-1. \end{equation} \end{proposition} \begin{theorem}\label{thm:GKP-2} Let $\rho$ be the probability density function of a distribution supported on $\Omega\subseteq\real$ with finite moments. Let $F$ be a univariate polynomial of degree $n$ with $n$ distinct real roots, and suppose that there exists a polynomial $G$ of degree $p$ satisfying \begin{equation}\label{eq:GKP-1} \int_\Omega F(t)G(t)t^i\rho(t)dt = 0,\quad i=0,\dots,p-1. \end{equation} Assume further that the roots of $G$ are all real and of multiplicity one, and distinct from those of $F$. Then there exists a quadrature formula supported on the roots of $FG$ whose degree of polynomial exactness is at least $n+2p-1$. \end{theorem} \begin{proof} The polynomial $FG$ has degree $n+p$; therefore every polynomial $H$ of degree $n+2p-1$ can be written as $H = QFG + R$ for some polynomials $Q$ of degree $p-1$ and $R$ of degree $n+p-1$. Consider now the quadrature formula supported on the roots of $FG$ with degree of polynomial exactness at least $n+p-1$, whose existence is established in the previous proposition (with $n+p$ in place of $K$). Denoting by $q(H)$ the value $\sum_{i=1}^{n+p}w_iH(t_i)$ of this quadrature formula for the polynomial $H$, we have that $q(R) = \int_\Omega R(t)\rho(t)dt$ because $R$ has degree $n+p-1$; $q(QFG)=0$ because $q$ is supported on the roots of $FG$; and $\int_\Omega Q(t)F(t)G(t)\rho(t)dt=0$ owing to \eqref{eq:GKP-1} and the fact that $Q$ is of degree $p-1$. Hence, \begin{align*} \int_\Omega H(t)\rho(t)dt &= \int_\Omega R(t)\rho(t)dt + \int_\Omega Q(t)F(t)G(t)\rho(t)dt =\\ &= q(R) + 0 = q(R) + q(QFG) = q(H). \end{align*} Therefore, the formula gives the correct value of $\int_\Omega H(t)\rho(t)dt$ for every $H$ of degree $n+2p-1$. \end{proof} These assertions suggest the following algorithm: \begin{enumerate} \item Consider an arbitrary quadrature formula on $n$ nodes, and the polynomial $F$ whose roots (of multiplicity one) are the nodes of the formula. \item Choose an integer $p \geq 1$, and find a degree $p$ polynomial $G$ that is not identically zero and that solves the system of equations \eqref{eq:GKP-1}. Eq.~\eqref{eq:GKP-1} is a homogeneous system of linear equations with one more variable than equations, but we can assume that the leading coefficient of $G$ is $1$, and turn \eqref{eq:GKP-1} into an inhomogeneous square system of linear equations. \item Determine the roots of $G$; these are the new nodes of the extended formula.
Now find the weights of the formula by solving \eqref{eq:Patterson1}. \end{enumerate} By Theorem \ref{thm:GKP-2}, the resulting formula adds new nodes to the initial formula, potentially increasing its degree of polynomial exactness. If $p > n/2$, the second formula necessarily has a higher degree of exactness, since an $n$-node formula cannot have a degree of exactness higher than $2n-1$. Repeated application of this algorithm may yield a nested sequence of quadrature formulas of increasing degree of exactness. As with Patterson's 1968 algorithm, this algorithm may also fail to yield a new formula if either the linear system \eqref{eq:GKP-1} has no solution with a nonzero leading coefficient, or $G$ has complex roots or real roots of multiplicity higher than one, or if $F$ and $G$ have common roots. If the algorithm fails for a given $p$, different values of $p$ may be tried sequentially. Finally, different initial formulas may give rise to different sequences. \section{Further remarks} The original GKP formulas can be obtained using the above algorithm with $\Omega = [-1,1]$ and $\rho(t) = 1/2$, starting with the trivial $1$-node formula (with the node at $0$, with weight $1$), and taking $p=n+1$ in each iteration. The number of nodes is thus $2^{i+1}-1$ after $i$ iterations. It is not known whether this process can be repeated indefinitely, but formulas with up to $511$ nodes have been determined. The Genz--Keister formulas mentioned above can be obtained by successively applying the above algorithm to the normal distribution. It should be noted that although the computation of the nodes and weights may be numerically challenging (especially since the matrix of the linear system \eqref{eq:GKP-1} is an ill-conditioned Hankel matrix), the \emph{existence} of the solution for a given $p$ can be decided using exact rational arithmetic for every distribution whose moments are rational numbers. In this case, the coefficients of $G$ (if $G$ exists) are rational numbers, and the number of those roots of $G$ that are distinct from the roots of $F$ can be determined without computing the roots of either $F$ or $G$ \cite[Chap.~2]{BPR-2003}. A Mathematica implementation of the algorithm, which runs with exact rational arithmetic or extended precision arithmetic based on the input, is available from the authors at \url{http://users.iems.northwestern.edu/~dpapp/}, and given in the Appendix, where a numerical example is also provided.
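For illustration, the three steps of one extension can be sketched in a few lines of Python/NumPy (this is our own floating-point translation of the algorithm, not the Mathematica implementation mentioned above; as noted, the Hankel system is ill-conditioned, so exact or extended-precision arithmetic is preferable in practice):

\begin{verbatim}
import numpy as np

def extend_quadrature(nodes, moments, p):
    # One extension step: nodes are the roots of F; moments[k] is the
    # k-th raw moment of rho (len(moments) >= len(nodes) + 2*p).
    n = len(nodes)
    F = np.poly(nodes)[::-1]         # coefficients of F, low degree first
    def Fmom(r):                     # integral of F(t) t^r rho(t) dt
        return sum(F[k] * moments[k + r] for k in range(n + 1))
    # Monic G of degree p solving the orthogonality conditions.
    A = np.array([[Fmom(i + j) for j in range(p)] for i in range(p)])
    b = -np.array([Fmom(i + p) for i in range(p)])
    g = np.linalg.solve(A, b)
    G = np.concatenate([g, [1.0]])   # low degree first, monic
    new = np.roots(G[::-1])          # should be real, simple, and
    allnodes = np.concatenate([nodes, new.real])  # distinct from nodes
    # Weights of the extended formula from the moment-matching system.
    K = len(allnodes)
    V = np.vander(allnodes, K, increasing=True).T  # V[k, i] = t_i^k
    w = np.linalg.solve(V, np.asarray(moments[:K]))
    return allnodes, w
\end{verbatim}

For example, with the moments of $\rho\equiv 1/2$ on $[-1,1]$ ($m_k = 1/(k+1)$ for even $k$, $0$ for odd $k$), calling the function with the single starting node $0$ and $p=2$ recovers the $3$-node Gauss--Legendre formula, the first step of the GKP sequence.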
\section{Introduction} \begin{flushleft} {\footnotesize We might characterize today's breakdown of industrial or ``Second Wave"\\ society as a civilizational ``bifurcation", and the rise of a more differentiated, \\ ``Third Wave" society as a leap to new ``dissipative structures" on a world scale.\\ And, if we accept this analogy, might we not look upon the leap from Newtonianism to \\ Prigoginianism in the same way? Mere analogy, no doubt. But illuminating, nevertheless.\\ Alvin Toffler (preface to \cite{PS3}).} \end{flushleft} \vspace{3mm} Popularization of science seems to be doing very well: the Big Bang, the theory of elementary particles or of black holes are explained in countless books for the general public. The same is true for chaos theory, irreversibility or self-organization. However, it also seems that a lot of confusion exists concerning these latter notions, and that at least some of the popular books are spreading misconceptions. The goal of this article is to examine some of them, and to try to clarify the situation. In particular, I will make a critical evaluation of the various claims concerning chaos and irreversibility made by Prigogine and by Stengers since ``La Nouvelle Alliance". Several of those claims, especially the most recent ones, are rather radical: ``the notion of chaos leads us to rethink the notion of ``law of nature"." (\cite{P2}, p.15)\footnote{Here and below, I have translated the texts that were available only in French.} For chaotic systems, ``{\it trajectories are eliminated from the probabilistic description} \dots The statistical description is {\it irreducible}." (\cite{P2}, p.59) The existence of chaotic dynamical systems supposedly marks a radical departure from a fundamentally deterministic world-view, makes the notion of trajectory obsolete, and offers a new understanding of irreversibility. Prigogine and Stengers claim that the classical conception was unable to incorporate time in our view of the world (\cite{PS2}, chap.1) or to account for the irreversibility of macroscopic phenomena. Boltzmann's attempt to explain irreversibility on the basis of reversible laws failed (\cite{P2}, p.41). On the basis of these theories, a number of speculations are put forward on the notion of ``event", on the place of human beings in Nature, or even on overcoming Cartesian dualism (see \cite{P2}, chap.9, \cite{P3}, p.106, and \cite{P4}). These writings have indeed been quite influential, mostly among non-experts. They are frequently quoted in philosophical or cultural circles as an indication that chaos, nonlinear phenomena or the ``arrow of time" have led to a profound revolution in our way of thinking. I want to develop quite different views on most of these issues. In my opinion, chaos does not invalidate in the least the classical deterministic world-view; the existence of chaotic dynamical systems actually strengthens that view (Sect. 2). Besides, the relationship between chaos and irreversibility is quite different from what is claimed e.g. in ``Les lois du chaos" \cite{P2}. Finally, when they are correctly presented, the classical views of Boltzmann perfectly account for macroscopic irreversibility on the basis of deterministic, reversible, microscopic laws (Sect. 3). Part of the difficulty in understanding those views comes from some confusions about the use of the words ``objective" and ``subjective" associated with probability or entropy. I will try to be careful with these notions (Sect. 4 and 5).
In section 6, I will discuss the applications of probabilistic reasoning to complex phenomena and biology. I shall also argue that most of the speculation on the ``new alliance" between the human sciences and the natural ones is misguided, and that people working in sociology or psychology have very little to learn from the alleged ``leap from Newtonianism to Prigoginianism" (Sect. 7). On the other hand, I believe that the ideas of Laplace and of Boltzmann are worth defending against various misrepresentations and misunderstandings. Quite independently of the work of Prigogine, there are serious confusions to be found in the literature on irreversibility, chaos or time (some of which go back to philosophers such as Popper, Feyerabend or Bergson). Besides, many textbooks or popular books on statistical mechanics are rather obscure, at least in the parts concerning the foundations of the field (e.g., on the role played by ergodic theorems). I will try to clarify these questions too (Sect. 4). I wrote this paper in a not too technical language, relegating formulas to footnotes and remarks. Nothing of what I say is new\footnote{On the issue of irreversibility, see Feynman \cite{Fe2}, Jaynes \cite{Ja}, Lebowitz \cite{Le1,Le2}, Penrose \cite{Pe2}. For a similar and less technical critique of various confusions about chaos, see Maes \cite{Mae}.}. In fact, everything is quite standard and old, and it is a sad fact that those ideas that were so nicely explained by Boltzmann a century ago \cite{Bo} have to be reexplained over and over again. Finally, I have to emphasize that this is in no way a criticism of Prigogine's work in general, and even less of the Brussels school. I shall only discuss the radical claims made in the popular books and, in particular, the idea that fundamental flaws have been found in the scientific world-view and that one has to rethink the {\it notion} of law of nature. I believe that a lot of interesting scientific ideas have been developed around Prigogine and that he has had an exceptional taste for discovering new directions in physics, whether in irreversible thermodynamics or in chaotic phenomena. But this does not put his views on foundational questions beyond criticism\footnote{I must add that I have defended, in the past, some of the ideas criticized below. Needless to say, I am interested in the critique of ideas and not of individuals.}. \section{Chaos and determinism: Defending Laplace.} \begin{flushleft} {\footnotesize The concept of dog does not bark.\\ B. Spinoza} \end{flushleft} \vspace{3mm} \noindent {\bf 2.1. Determinism and predictability.} \vspace{3mm} A major scientific development in recent decades has been popularized under the name of ``chaos". It is widely believed that this implies a fundamental philosophical or conceptual revolution. In particular, it is thought that the classical world-view brilliantly expressed by Laplace in his ``Philosophical Essay on Probabilities" has to be rejected\footnote{For rather negative comments on Laplace, see e.g. Ekeland (\cite{Ek}, p.31), and Gleick (\cite{Gl}, p.21 in the French edition).}. Determinism is no longer defensible. I think this is based on a serious confusion between {\it determinism} and {\it predictability}. I will start by underlining the difference between the two concepts. Then, it will be clear that what goes under the name of ``chaos" is major scientific progress but does not have the radical philosophical implications that are sometimes attributed to it.
In a nutshell, determinism has to do with how Nature behaves, and predictability is related to what we, human beings, are able to observe, analyse and compute. It is easy to illustrate the necessity for such a distinction. Suppose we consider a perfectly regular, deterministic {\it and} predictable mechanism, like a clock, but put it on the top of a mountain, or in a locked drawer, so that its state (its initial conditions) becomes inaccessible to us. This renders the system trivially unpredictable, yet it seems difficult to claim that it becomes non-deterministic\footnote{Likewise, only the most radical social constructivist might object to the idea that Neptune or Pluto were following their (deterministic) trajectories before they were discovered.}. Or consider a pendulum: when there is no external force, it is deterministic and predictable. If one applies to it a periodic forcing, it may become unpredictable. Does it cease to be deterministic? In other words, anybody who admits that {\it some} physical phenomena obey deterministic laws must also admit that some physical phenomena, although deterministic, are not predictable, possibly for ``accidental" reasons. So, a distinction must be made\footnote{In an often quoted lecture to the Royal Society, on the three hundredth anniversary of Newton's Principia, Sir James Lighthill gave an inadvertently perfect example of how to slip from unpredictability to indeterminism: ``We are all deeply conscious today that the enthusiasm of our forebears for the marvellous achievements of Newtonian mechanics led them to make generalizations in this area of {\it predictability} which, indeed, we may have generally tended to believe before 1960, but which we now recognize were false. We collectively wish to apologize for having misled the general educated public by spreading ideas about {\it determinism} of systems satisfying Newton's laws of motion that, after 1960, were to be proved incorrect..." \cite{Li} (Italics are mine; quoted e.g. by Reichl \cite{Re}, p.3, and by Prigogine and Stengers, \cite{PS2}, p.93, and \cite{P2}, p.41). See also \cite{Va}, p.7 where, after describing a chaotic system, one concludes that ``the deterministic approach fails".}. But, once this is admitted, how does one show that {\it any} unpredictable system is {\it truly} non-deterministic, and that the lack of predictability is not merely due to some limitation of our abilities? We can never infer indeterminism from our ignorance alone. Now, what does one mean exactly by determinism? Maybe the best way to explain it is to go back to Laplace: ``Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it - an intelligence sufficiently vast to submit these data to analysis - it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present before its eyes." \cite{La} The idea expressed by Laplace is that determinism depends on what the laws of nature are. Given the state of the system at some time, we have a formula (a differential equation, or a map) that gives in principle the state of the system at a later time. To obtain predictability, one has to be able to measure the present state of the system with enough precision, and to compute with the given formula (to solve the equations of motion).
Note that there exist alternatives to determinism: there could be no law at all; or the laws could be stochastic: the state at a given time (even if it is known in every conceivable detail) would determine only a probability distribution for the state at a later time. How do we know whether determinism is true, i.e. whether nature obeys deterministic laws? This is a very complicated issue. Any serious discussion of it must be based on an analysis of the fundamental laws, hence of quantum mechanics, and I do not want to enter this debate here\footnote{ I have expressed my point of view on the foundations of quantum mechanics in \cite{Br}; for related views, see Albert \cite{Al,Al1}, Bell \cite{Be}, D\"urr et al. \cite{DGZ}, Maudlin \cite{Ma}. For a precise discussion of determinism in various physical theories, see Earman \cite{Ea}. \protect\label{note_QM}}. Let me just say that it is conceivable that we shall obtain, some day, a complete set of fundamental physical laws (like the law of universal gravitation in the time of Laplace), and then, we shall see whether these laws are deterministic or not\footnote{Most of the laws that are discussed in the literature on chaos (e.g. on the weather) are actually macroscopic laws, and not fundamental, or microscopic ones. This distinction will be discussed in Section 3.}. Any discussion of determinism outside of the framework of the fundamental laws is useless\footnote{Opponents of determinism are quick to point out that determinism cannot be proven. While it is of course true that no statement about the world can literally be {\it proven}, these opponents do not always see how vacuous are their own arguments in favour of indeterminism, arguments that rely ultimately on our ignorance. Popper, in \cite{Pop5}, gives a long series of such arguments. In a review of this book, the biologist Maynard Smith shows a rather typical misunderstanding of Laplace: first, he agrees with Popper's criticism of Laplace, because the latter's computations are, in practice, impossible to do. But, then he disagrees with Popper about free will and gives, as far as I can see, a perfectly causal and Laplacian account of human actions, which of course, are not practically computable either (\cite{May}, p.244). To avoid misunderstandings, I am not trying to say that determinism is or must be true. All I say is that various arguments against determinism miss the point.}. All I want to stress here is that the existence of chaotic dynamical systems does not affect {\it in any way} this discussion. What are chaotic systems? The simplest way to define them is through sensitivity to initial conditions. This means that, for any initial condition of the system, there is some other initial condition, arbitrarily close to the first one, so that, if we wait long enough, the two systems will be markedly different\footnote{Here is a simple example. Consider the ``phase space" to be simply the interval $I= [0,1[$. And take as (discrete time) dynamics the map $f: x \mapsto 10x \bmod 1$. This means that we take a number between $0$ and $1$, multiply it by $10$, write the result as an integer plus a number between $0$ and $1$ and take the latter as the result (i.e. $f(x)$). This gives again a number between $0$ and $1$, and we can repeat the operation. Upon iteration, we obtain the {\it orbit} of $x$; $x$ itself is the initial condition. To describe concretely the latter, one uses the decimal expansion.
Any number in $I$ can be written as $x=0.a_1a_2a_3\dots$, where $a_i$ equals $0,1,2,\dots,9$. It is easy to see that $f(x)=0.a_2a_3\dots$. This is a perfect example of a {\it deterministic} but {\it unpredictable} system. Given the state $x$ at some initial time, one has a rule giving the state of the system for arbitrary times. Moreover, for any fixed time, one can, in principle, find the state after that time, with any desired accuracy, given a sufficiently precise characterization of the initial state. This expresses the deterministic aspect. Unpredictability comes from the fact that, if we take two initial conditions at a distance less than $10^{-n}$, then the corresponding orbits could differ by, say, $1/2$, after $n$ steps, because the difference will be determined by the $n$th decimal. One of the relatively recent discoveries in dynamical systems is that simple physical examples, like a forced pendulum, may behave more or less like this map.\protect\label{note_10x}}. In other words, an arbitrarily small error on the initial conditions makes itself felt after a long enough time. Chaotic dynamical systems are of course unpredictable in practice, at least for long enough times\footnote{How long a time this is depends on the details of the system.}, since there will always be some error in our measurement of the initial conditions. But this does not have any impact on our discussion of determinism, since we are assuming from the beginning that the system obeys some deterministic law. It is only by analysing this deterministic system that one shows that a small error in the initial conditions may lead to a large error after some time. If the system did not obey any law, or if it followed a stochastic law, then the situation would be very different. For a stochastic law, two systems with the {\it same} initial condition could be in two very different states after a short time\footnote{It is worth observing that Turing machines, or the Game of Life, provide examples of deterministic automata whose evolution is more unpredictable (in a precise technical sense) than the one of the usual chaotic dynamical systems.}. It is interesting to note that the notion that small causes can have big effects (in a perfectly deterministic universe) is not new at all. Maxwell wrote: ``There is a maxim which is often quoted, that ``The same causes will always produce the same effects"". After discussing the meaning of this principle, he adds: ``There is another maxim which must not be confounded with that quoted at the beginning of this article, which asserts ``That like causes produce like effects." This is only true when small variations in the initial circumstances produce only small variations in the final state of the system" (\cite{Max1}, p.13)\footnote{As for applications to the weather, Poincar\'e (\cite{Po}, p.69) already noticed that the rainfalls or the storms seem to occur at random, so that people are more likely to pray for rain than for an eclipse (for an exception to this rule, but based on prior knowledge, see \cite{Ti}, p.59). We are not able to predict the storms, because the atmosphere may be in a state of ``unstable equilibrium". It may all depend on a tenth of a degree. And he adds: ``If we had known this tenth of a degree, one could have made predictions", but since our observations are not sufficiently precise, it all appears to be due to randomness.}.
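As an aside, the reader who wants to see the sensitivity of the map of note \protect\ref{note_10x} at work can do so in a few lines of code. The following minimal sketch (in Python; the digit strings are arbitrary choices made here for illustration) implements the map exactly, as a shift on decimal expansions:
\begin{verbatim}
# The map f(x) = 10*x mod 1 acts as a shift on decimal expansions:
# f(0.a1 a2 a3 ...) = 0.a2 a3 ...  Representing x by its digit string
# keeps the iteration exact (floating point would lose digits).

def orbit(digits, steps, shown=6):
    """First `steps` iterates, displaying `shown` decimals of each."""
    return ["0." + digits[n:n + shown] for n in range(steps)]

x = "1415926535897932384626"   # 22 decimals of pi - 3
y = "1415926535999999999999"   # agrees with x on the first 10 decimals

for n, (a, b) in enumerate(zip(orbit(x, 13), orbit(y, 13))):
    print(n, a, b)
\end{verbatim}
The two orbits coincide for the first few steps and have nothing to do with each other from about the tenth step on, although each of them is generated by a perfectly deterministic rule.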
One should not conclude from the quotations above\footnote{Hadamard \cite{Ha}, Duhem \cite{Du} and Borel \cite{Bor} made similar observations. See Ruelle \cite{Ru} for a discussion of that history from a modern perspective, and a good popular exposition of chaos.} that there is nothing new under the sun. A lot more is known about dynamical systems than in the time of Poincar\'e. But, the general idea that not everything is predictable, even in a deterministic universe, has been known for centuries. Even Laplace emphasized this point: after formulating universal determinism, he stresses that we shall always remain ``infinitely distant" from the intelligence that he just introduced. After all, why is this determinism stated in a book on {\it probabilities}? The reason is obvious: for Laplace, probabilities lead to rational inferences in situations of incomplete knowledge (I'll come back below to this view of probabilities). So he is assuming from the beginning that our knowledge is incomplete, and that we shall never be able to {\it predict} everything. It is a complete mistake to attribute to some ``Laplacian dream" the idea of perfect predictability\footnote{ It is interesting to read the rest of the text of Laplace. First of all, he expresses the belief that there are indeed fundamental, universally valid laws of nature which can be discovered through scientific investigation (the only example that Laplace had of a fundamental law was that of universal gravitation). In that respect, nothing has changed today. One of the goals of physics is still to discover those fundamental laws. His basic idea could be called universal reductionism rather than universal determinism. Since reductionism is remarkably well defended in Weinberg's book ``Dreams of a Final Theory" \cite{We}, I shall not pursue this point. Reading a little further, we see that Laplace's goal is to use science against superstition. He mentions the fears caused by Halley's comet in the Middle Ages (where it was taken as a sign of the divine wrath) and how our discovery of the laws of the ``system of the world" ``dissipated those childish fears due to our ignorance of the true relations between Man and the Universe". Laplace expresses also a deep optimism about the progress of science. Again nothing of that has been refuted by the evolution of natural sciences over the last two centuries. But one will not find any claim about the computability, by us humans, of {\it all} the consequences of the laws of physics.}. But Laplace does not commit what E. T. Jaynes calls the ``Mind Projection Fallacy": ``We are all under an ego-driven temptation to project our private thoughts out onto the real world, by supposing that the creations of one's own imagination are real properties of Nature, or that one's own ignorance signifies some kind of indecision on the part of Nature"\footnote{Jaynes' criticisms were mostly directed at the way quantum theory is presented, but they also apply to some discussions of chaos theory or of statistical mechanics.} (\cite{Ja2}, p.7). As we shall see, this is a most common error. But, whether we like it or not, the concept of dog does not bark, and we have to carefully distinguish between our representation of the world and the world itself. Let us now see why the existence of chaotic dynamical systems in fact supports universal determinism rather than contradicts it\footnote{Of course, since classical mechanics is not really fundamental (quantum mechanics is), this issue is rather academic.
We nevertheless want to discuss it, because there seems to be a lot of confusion in the literature.}. Suppose for a moment that no classical mechanical system can behave chaotically. That is, suppose we have a theorem saying that any such system must eventually behave in a periodic fashion\footnote{Imagine, for example, that the Poincar\'e-Bendixson theorem held in all dimensions.}. It is not completely obvious what the conclusion would be, but certainly {\it that} would be an embarrassment for the classical world-view. Indeed, so many physical systems seem to behave in a non-periodic fashion that one would be tempted to conclude that classical mechanics cannot adequately describe those systems. One might suggest that there must be an inherent indeterminism in the basic laws of nature. Of course, other replies would be possible: for example, the period of those classical motions might be enormously long. But it is useless to speculate on this fiction since we know that chaotic behaviour is compatible with a deterministic dynamics. The only point of this story is to stress that deterministic chaos increases the explanatory power of deterministic assumptions, and therefore, according to normal scientific practice, {\it strengthens} those assumptions. And, if we did not know about quantum mechanics, the recent discoveries about chaos would not force us to change a single word of what Laplace wrote\footnote{And, concerning quantum mechanics, see the references in note \protect\ref{note_QM}.}. \vspace{3mm} \noindent {\bf 2.2. Trajectories and probabilities.} \vspace{3mm} Now, I will turn to the main thesis of Prigogine and his collaborators on chaotic dynamical systems: the notion of trajectory should be abandoned, and replaced by probabilities. What does this mean? Let me quote Prigogine: ``Our leitmotiv is that the formulation of the dynamics for chaotic systems must be done at the probabilistic level" (\cite{P2}, p.60). Or: ``We must therefore eliminate the notion of trajectory from our microscopic description. This actually corresponds to a realistic description: no measurement, no computation leads strictly to a point, to the consideration of a {\it unique} trajectory. We shall always face a {\it set} of trajectories" (\cite{P2}, p.60)\footnote{See also \cite{PS2}, p.28: ``As we shall see, there exists, for sufficiently unstable systems, a ``temporal horizon" beyond which no determined trajectory can be attributed to them". To be fair, I should add that these radical statements are combined with more reasonable, but more technical ones, e.g.: ``This formulation implies that one must study the eigenfunctions and the eigenvalues of the evolution operator." (\cite{P2}, p.60) The question is of course which statements will most strike the non-specialized reader.}. Let us first see how reasonable it is to ``eliminate the notion of trajectory" for chaotic systems by considering a concrete example\footnote{ See Batterman \cite{Ba} for a related, but different, critique. Batterman says that the replacement of trajectories by probabilities is ``very much akin to the claim in quantum mechanics that the probabilistic state description given by the $\Psi$-function is complete, that is, that underlying exact states cannot exist." (p.259) But he notes that here, unlike in quantum mechanics, no no-hidden-variables argument is given to support that claim (for the exact status of no-hidden-variables arguments in quantum mechanics, see Bell \cite{Be} and Maudlin \cite{Ma}).}.
Take a billiard ball on a sufficiently smooth table, so that we can neglect friction (for some time), and assume that there are suitable obstacles and boundaries so that the system is chaotic. Now suppose that we use an ``irreducible" probabilistic description, that is, instead of assigning a position to the ball, we assign to it a probability distribution\footnote{An absolutely continuous one, i.e., given by a density. If one considers probabilities given by delta functions, it is equivalent to considering trajectories.\protect\label{note_proba}}. Consider next the evolution of that probability distribution. Since we are dealing with a chaotic system, that distribution will spread out all over the billiard table. This means that after a rather short time, there will be an almost uniform probability of finding the ball in any given region of the table. Indeed, even if our initial probability distribution is well peaked around the initial position of the ball, there will be lots of nearby initial conditions that will give rise to very different trajectories (that is exactly what it means to say that the system is chaotic). But now we can hardly take the probability distribution after some time seriously as an ``irreducible" {\it description} of the system. Indeed, whenever we look at the system, we find the ball somewhere, at a rather well defined position on the table. It is certainly not completely described by its probability distribution. The latter describes adequately our knowledge (or rather our ignorance) of the system, obtained on the basis of our initial information. But it would be difficult to commit the Mind Projection Fallacy more radically than to confuse the objective position of the ball and our best bet for it. In fact, chaotic systems illustrate this difference: if all nearby initial conditions followed nearby trajectories, the distinction between probabilities and trajectories would not matter too much. But chaotic systems show exactly how unreasonable the assignment of ``irreducible" probabilities is, since the latter quickly spread out over the space in which the system evolves. Of course, nobody will deny that the ball is always somewhere. But this example raises the following question: what does it mean exactly to ``eliminate trajectories"\footnote{And somewhat more importantly, what does the general educated public, which reads the popular books, understand from such sentences? For example, in a review in ``Le Monde" of Prigogine's most recent book \cite{P4}, Roger-Pol Droit notes that: ``The main discovery explained in ``La fin des certitudes" is the possibility to consider trajectories as probabilistic quantities and to express the laws of dynamics in terms of ensembles." (\cite{Dro}). If one understands probabilities in the classical sense, this is perfectly acceptable, but it is not exactly a discovery (statistical mechanics is more than a century old). And if it is a major discovery, what are these new laws of dynamics?}. Either the dynamics is expressed directly at the level of probability distributions, and we run into the difficulties mentioned in the previous paragraph, or the dynamics is {\it fundamentally} expressed in terms of trajectories (remembering that the discussion takes place in a classical framework), in which case probabilities are a very useful tool, whose properties are {\it derived} mathematically from those of the trajectories, and nothing radically new has been done.
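To make the spreading of probabilities concrete, here is a minimal numerical sketch. I use the map of note \protect\ref{note_10x} as a stand-in for the billiard (an illustrative simplification of mine; nothing here depends on the details of the billiard), and follow an ensemble of $10^5$ points that starts in a very narrow peak:
\begin{verbatim}
import random

# Stand-in for the billiard: the chaotic map f(x) = 10*x mod 1.
# An ensemble of points starting in a peak of width 10**-9 spreads
# over the whole interval in about ten steps, although each point
# always occupies one perfectly definite position.

random.seed(0)
ensemble = [0.3 + 1e-9 * random.random() for _ in range(100000)]

for t in range(13):
    counts = [0] * 10                      # crude histogram, ten bins
    for x in ensemble:
        counts[int(10 * x)] += 1
    if t % 3 == 0:
        print(t, [round(c / len(ensemble), 2) for c in counts])
    ensemble = [(10 * x) % 1.0 for x in ensemble]
\end{verbatim}
After about ten steps the histogram is flat; yet at every step each member of the ensemble sits at one perfectly definite point. The uniform distribution describes the ensemble, that is, our ignorance, and not any single system.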
In \cite{P2}\footnote{And in a private communication.}, Prigogine emphasizes the ``irreducible" spectral decompositions of the so-called Perron-Frobenius operator. This is a rather technical notion, which I will discuss in Appendix 2. It suffices to say here that this will not solve the dilemma raised above. If one reformulates the laws of physics, or understands them differently, or whatever, there is still presumably something that evolves, in some fashion. The question is: what evolves, and how? What the example of the billiard ball also shows is that we must distinguish different levels of analysis. First of all, we may describe the system in a certain way: we may assign to the ball at least an approximate position at each time, hence an approximate trajectory\footnote{I say ``approximate", because I describe the system as it is seen. I do not yet consider any theory (classical or quantum).}. Certainly the ball is not {\it everywhere}, as the ``irreducible" probabilistic description would suggest. The next thing we can do is to try to find exact or approximate laws of motion for the ball. The laws of elastic reflection against obstacles, for example. Finally, we may try to solve the equations of motion. We may not be able to perform the last step. But this does not mean that one should give up the previous ones. We may even realize that our laws are only approximate (because of friction, of external perturbations, etc\dots). But why give up the notion of (approximate) trajectories? Of course, since we are not able to predict the evolution of trajectories one may {\it choose} to study instead the evolution of probability distributions. This is perfectly reasonable, as long as one does not forget that, in doing so, we are not only studying the physical system but also our ability or inability to analyse it in more detail. This will be very important in the next Section. At this point, I want to briefly discuss the classical status of probability in physics, i.e. of probability as ``ignorance". This will also be very important in the next Section. To quote Laplace again: ``The curve described by a molecule of air or of vapour is following a rule as certainly as the orbits of the planets: the only difference between the two is due to our ignorance. Probability is related, in part to this ignorance, in part to our knowledge." \cite{La} Let us consider the usual coin-throwing experiment. We assign a probability $1/2$ to heads and $1/2$ to tails. What is the logic of the argument? We examine the coin, and we find out that it is fair. We also know the person who throws the coin and we know that he does not cheat. But we are unable to control or to know exactly the initial conditions for each throw. We can however determine the average result of a large number of throws. This is simply because, if one considers as a single ``experiment" $N$ consecutive throws of a coin, the overwhelming majority (for $N$ large) of the possible results will have an approximately equal number of heads and of tails. It is as simple as that, and there will be nothing conceptually more subtle in the way we shall use probabilities below. The part ``due to our ignorance" is simply that we {\it use} probabilistic reasoning. If we were omniscient, it would not be needed (but the averages would remain what they are, of course). The part ``due to our knowledge" is what makes the reasoning work. We could make a mistake: the coin could be biased, and we did not notice it. Or we could have a ``record of bad luck" and have many more heads than tails. But that is the way things are: our knowledge {\it is} incomplete and we have to live with that. Nevertheless, probabilistic reasoning is extraordinarily successful in practice, but, when it works, this is due to our (partial) knowledge. It would be wrong to attribute any constructive role to our ignorance. And it is also erroneous to assume that the system must be somehow indeterminate when we apply probabilistic reasoning to it. Finally, one could rephrase Laplace's statement more carefully as follows: ``Even if the curve described by a molecule of air follows a rule as certainly as the orbits of the planets, our ignorance would force us to use probabilistic reasonings".
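Before leaving this section, let me make the counting behind the coin-throwing argument explicit. The following minimal sketch (again Python; the five percent window is an arbitrary choice of mine) counts, among all $2^N$ sequences of $N$ throws, those having a roughly equal number of heads and tails; no randomness is involved, it is pure counting:
\begin{verbatim}
from math import comb   # Python 3.8 or later

# Among all 2**N sequences of N throws, compute the exact fraction
# whose number of heads lies within 5 percent of N/2.  This is pure
# counting of sequences; no randomness is involved.

for N in (10, 100, 1000, 10000):
    lo, hi = round(0.45 * N), round(0.55 * N)
    good = sum(comb(N, k) for k in range(lo, hi + 1))
    print(N, good / 2**N)
\end{verbatim}
The fraction tends to one as $N$ grows, and this is all that the argument in the text uses.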
\section{Irreversibility and the arrow of time} \begin{flushleft} {\footnotesize Since in the differential equations of mechanics\\ themselves there is absolutely nothing analogous to the \\ Second Law of thermodynamics the latter can be mechanically \\ represented only by means of assumptions regarding initial conditions.\\ L. Boltzmann (\cite{Bo}, p.170)} \end{flushleft} \vspace{3mm} \noindent {\bf 3.1. The problem.} \vspace{3mm} What is the problem of irreversibility? The basic physical laws are reversible, which simply means that, if we consider an isolated system of particles, let it evolve for a time $t$, then reverse exactly the velocities of all the particles, and let the system again evolve for a time $t$, we get the original system at the initial time with all velocities reversed\footnote{Mathematically, the microscopic state of the system is represented by a point in its ``phase space" $\Omega$. Each point in that space represents the positions and the velocities of {\it all} the particles of the system under consideration. So the phase space is ${\bf R}^{6N}$ where $N$ is the number of particles (of the order of $10^{23}$ for a macroscopic system), since one needs three coordinates for each position and three coordinates for each velocity. Hamilton's equations of motion determine, for each time $t$, a map $T^t$ that associates to each initial condition ${\bf x}\in \Omega$, at time zero, the corresponding solution $T^t {\bf x}$ of the equations of motion at that time. Reversibility of the equations of motion means that there is a transformation (an involution) $I$ acting on $\Omega$ that satisfies the following relation: \begin{equation} T^t I T^t {\bf x}=I{\bf x}, \end{equation} or $IT^t=T^{-t}I$. In classical mechanics, $I$ reverses velocities, without changing the positions. (In quantum mechanics, $\Omega$ is replaced by a Hilbert space, and $I$ associates to a wave function its complex conjugate. For the role of weak interactions, see Feynman \cite{Fe2}.) \protect\label{note_involution}}. Now, there are lots of motions that we see, without ever observing their associated ``time-reversed" motion: we go from life to death but not vice versa, coffee does not jump out of the cup, mixtures of liquids do not spontaneously unmix themselves. Some of these examples taken from everyday life involve non-isolated systems, but that is not relevant\footnote{We shall discuss in Section 4.3 a frequent confusion that assigns the {\it source} of irreversibility to the (true but irrelevant) fact that no system is ever perfectly isolated. But let us point out here that it is easy to produce non-isolated systems that behave approximately in ``time reversed" fashion: a refrigerator, for example.
Living beings also seem to violate the Second Law of thermodynamics. But put a cat in a well-sealed box for a sufficiently long time and it will evolve towards equilibrium.}. I shall center the discussion below on the canonical physical example (and argue that the other situations can be treated similarly): consider a gas that is initially compressed by a piston in the left half of a box; the piston is then released so that the gas expands into the whole container. We do not expect to see the particles go back to the left half of the box, although such a motion would be as compatible with the laws of physics as the motion that does take place. So, the question, roughly speaking, is this: if the basic laws are reversible, why do we see some motions but never their time-reversed ones? The first point to clarify is that this irreversibility does not lead to a {\it contradiction} with the basic physical laws\footnote{ Such a contradiction is suggested by the following statement of Prigogine and Stengers: ``Irreversibility is either true on all levels or on none: it cannot emerge as if out of nothing, on going from one level to another" (\cite{PS3}, quoted by Coveney, \cite{Co}, p.412.) Also, ``Irreversibility is conceivable only if the notions of point or of trajectory lose their meaning" \cite{Cer}, p.166.}. Indeed, the laws of physics are always of the form: given some initial conditions, here is the result after some time. But they never tell us how the world {\it is or evolves}. In order to account for that, one always needs to assume something about the initial conditions. The laws of physics are compatible with lots of possible worlds: there could be no earth, no life, no humans. Nothing of that would contradict the fundamental physical laws. So, it is hard to see what kind of argument would imply a contradiction between the reversibility of the laws and the existence of irreversible phenomena. But no argument at all is given, beyond a vague appeal to intuition, as for example: ``No speculation, no body of knowledge ever claimed the equivalence between doing and undoing, between a plant that grows, has flowers and dies, and a plant that resuscitates, becomes younger and goes back to its primitive seed, between a man who learns and becomes mature and a man who becomes progressively a child, then an embryo, and finally a cell. Yet, since its origins, dynamics, the physical theory that identifies itself with the triumph of science, implied this radical negation of time." (\cite{PS2}, p.25. The first of these sentences is quoted again in \cite{P4}, p.178). But nobody says that there is an ``equivalence" between the two motions, only that both are compatible with the laws of physics. Which one, if any, occurs depends on the initial conditions. And, if the laws are deterministic, assumptions about the initial conditions are ultimately assumptions about the initial state of the Universe. Once one has remarked that, a priori, there is no contradiction between irreversibility and the fundamental laws, one could stop the discussion. It all depends on the initial conditions, period. But this is rather unsatisfactory, because, if one thinks about it, one realizes that too many things could be ``explained" by simply appealing to initial conditions. Luckily, much more can be said. It is perfectly possible to give a natural account of irreversible phenomena on the basis of reversible fundamental laws, and of suitable assumptions about initial conditions.
This was essentially done a century ago by Boltzmann, and despite numerous misunderstandings and misguided objections (some of them coming from famous scientists, such as Zermelo or Poincar\'e), his explanation still holds today. Yet, Prigogine writes (\cite{P2}, p.41): ``He (Boltzmann) was forced to conclude that the irreversibility postulated by thermodynamics was incompatible with the reversible laws of dynamics"\footnote{I. Stengers goes even further: ``The reduction of the thermodynamic entropy to a dynamical interpretation can hardly be viewed otherwise than as an ``ideological claim"\dots" (\cite{St}, p.192). We shall see below in what precise sense this ``reduction" is actually a ``scientific claim". }. This is in rather sharp contrast with Boltzmann's own words: ``From the fact that the differential equations of mechanics are left unchanged by reversing the sign of time without anything else, Herr Ostwald concludes that the mechanical view of the world cannot explain why natural processes run preferentially in a definite direction. But such a view appears to me to {\it overlook that mechanical events are determined not only by differential equations, but also by initial conditions}. In direct contrast to Herr Ostwald I have called it one of the most brilliant confirmations of the mechanical view of Nature that it provides an extraordinarily good picture of the dissipation of energy, as long as one assumes that the world began in an initial state satisfying certain initial conditions" (italics are mine; quoted in \cite{Le2}, replies, p.115). I will now explain this ``brilliant confirmation of the mechanical view of Nature", and show that all the alleged contradictions are illusory\footnote{This is of course not new at all. Good references, apart from Boltzmann himself, \cite{Bo}, include Feynman \cite{Fe2}, Jaynes \cite{Ja}, Lebowitz \cite{Le1,Le2}, Penrose \cite{Pe2}, and Schr\"odinger \cite{Sc}.}. \vspace{3mm} \noindent {\bf 3.2. The classical solution.}\footnote{By classical, I mean ``standard". However, all the discussion will take place in the context of classical physics. I will leave out quantum mechanics entirely. Although the quantum picture may be more complicated, I do not believe that it renders obsolete the basic ideas explained here.} \vspace{3mm} First of all, I should say that Boltzmann gives a {\it framework} in which to account for irreversible phenomena on the basis of reversible microscopic laws. He does not explain in detail every concrete irreversible phenomenon (like diffusion, or the growth of a plant). For that, more work is needed and, while the general framework that I shall discuss uses very little of the properties of the microscopic dynamics, the latter may be important in the explanation of specific irreversible phenomena\footnote{This is analogous to the theory of natural selection. The latter provides a scheme of explanation for the appearance of complex organs, but more detailed arguments are needed to account for concrete properties of living beings. See Section 6 for further discussion of this analogy.}. Let us now see which systems do behave irreversibly. A good test is to record the behaviour of the system in a movie, and then to run the movie backwards. If it looks funny (e.g. people jump out of their graves), then we are facing irreversible behaviour. It is easy to convince oneself that all the familiar examples of irreversible behaviour involve systems with a large number of particles (or degrees of freedom).
If one were to make a movie of the motion of one molecule, the backward movie would look completely natural. The same is true for a billiard ball on a frictionless billiard table\footnote{Of course, the billiard ball itself contains many molecules. But the rigidity of the ball allows us to concentrate on the motion of its center of mass.}. If, however, friction is present, then we are dealing with many degrees of freedom (the atoms in the billiard table, those in the surrounding air, etc.). There are two fundamental ingredients in the classical explanation of irreversibility, in addition to the microscopic laws. The first has already been introduced: initial conditions. The second is suggested by the remark that we deal with systems with many degrees of freedom: we {\it have} to distinguish between microscopic and macroscopic variables. Let us consider the phase space $\Omega$ (see note \protect\ref{note_involution}) of the system, so that the system is represented by a point ${\bf x}$ in that space and its evolution is represented by a curve ${\bf x}(t)=T^t({\bf x})$. Various quantities of physical interest, for example the density, or the average energy, or the average velocity in a given cubic millimeter, can be expressed as functions on $\Omega$\footnote{By ``functions", I mean also families of functions indexed by space or time, i.e. fields, such as the local energy density, or the velocity field.}. These functions (call them $F$) tend to be many-to-one, i.e. there are typically a huge number of configurations giving rise to a given value of $F$\footnote{I am a little vague on how to ``count" configurations. If I consider discrete (finite) systems, then it is just counting. Otherwise, I use of course the Lebesgue measure on phase space. All statements about probabilities made later will be based on such ``counting".}. For example, if $F$ is the total energy, then it takes a constant value on a surface in phase space. But if $F$ is the number of particles in a cubic millimeter, there are also lots of microscopic configurations corresponding to a given value of $F$. Now, let me make two statements, the first of which is trivial and the second not. Given a microscopic initial configuration ${\bf x}_0$, giving rise to a trajectory ${\bf x}(t)$, any function on phase space follows an induced evolution $F_0 \rightarrow F_t$, where $F_0 =F({\bf x}_0)$, and $F_t=F({\bf x}(t))$ (here and below, I shall assume that $t$ is positive). That is the trivial part. The non-trivial observation is that, in many situations, one can find a suitable family of functions (I'll still denote by $F$ such a family) so that this induced evolution is actually (approximately) {\it autonomous}. That is, one can determine $F_t$ given $F_0$ alone, without having to know the microscopic configuration from which it comes\footnote{Although not trivial, this expresses the fact that reproducible macroscopic experiments exist and that a deterministic macroscopic description of the world is possible.}. This means that the different microscopic configurations on which $F$ takes the value $F_0$ will induce the same evolution on $F_t$. A very trivial example is given by the globally conserved quantities (like the total energy): for all microscopic configurations, $F_t = F_0$, for all times. But that is not interesting.
It is more interesting to observe that the solutions of all the familiar macroscopic equations (Navier-Stokes, Boltzmann, diffusion, \dots) can be considered as defining such an induced evolution $F_0 \rightarrow F_t$. Actually, there are several provisos to be made here: first of all, it is not true that {\it all} microscopic configurations giving rise to $F_0$ lead to the same evolution for $F_t$. In general, only the (vast) majority of microscopic configurations do that\footnote{To make the micro/macro distinction sharp, one has to consider some kind of limit (hydrodynamic, kinetic, etc\dots), where the number of particles (and other quantities) tend to infinity. That is a convenient mathematical setting to prove precise statements. But one should not confuse this limit, which is an approximation to the real world, with the physical basis of irreversibility. See Lebowitz \cite{Le1} and Spohn \cite{Sp} for a discussion of those limits.}. Moreover, if we want that evolution to hold for all times, then this set of microscopic configurations may become empty\footnote{This is due, for example, to the Poincar\'e recurrences, see Section 4.1 and Appendix 1.}. Finally, the laws used in practice may contain some further approximations. So, the precise justification of a macroscopic law should be given along the following lines: given $F_0$, and given a (not too large) time $T$\footnote{I mean shorter than the Poincar\'e recurrence time.}, there exists a large subset of the set of ${\bf x}$'s giving rise to $F_0$ (i.e. of the preimage in $\Omega$, under the map $F$, of $F_0$) such that the induced evolution of $F_t$ is approximately described by the relevant macroscopic equations up to time $T$. It should be obvious that it is not easy to prove such a statement. One has to deal with dynamical systems with a large number of degrees of freedom, about which very little is known, and in addition one has to identify limits in which one can make sense of the approximations mentioned above (a large subset, a not too large time $T$ \dots). Nevertheless, this can be done in some circumstances, the best known being probably the derivation of Boltzmann's equation by Lanford \cite{La1,La2,La3}. In Appendix 1, I discuss a model due to Mark Kac which, while artificially simple, can be easily analysed and shows exactly what one would like to do in more complicated situations\footnote{It is also important to clarify the role of ``ensembles" here. What we have to explain is the fact that, when a system satisfies certain macroscopic initial conditions ($F_0$), it {\it always} (in practice) obeys certain macroscopic laws. The same macroscopic initial conditions will correspond to many different microscopic initial conditions. We may introduce, for mathematical convenience, a probability distribution (an ``ensemble") on the microscopic initial conditions. But one should remember that we are physically interested in ``probability one statements", namely statements that hold for (almost) all microscopic configurations, as opposed to statements about averages, for example. Otherwise, one would not explain why all individual macroscopic systems satisfy a given law. In practice, those ``probability one statements" will only hold in some limit. Physically, they should be interpreted as ``very close to one" for finite systems with a large number of particles. To see how close to one this is, consider all the bets and games of chance that have ever taken place in human history.
One would certainly expect the laws of large numbers to apply to such a sample. But this number is minuscule compared to the typical number ($10^{23}$) of molecules in a cubic centimeter. \protect\label{note_numbers}}. Let us come back to the problem of irreversibility: should we expect those macroscopic laws to be reversible? A priori, not at all. Indeed, I have emphasized in the abstract description above the role of initial conditions in their derivation\footnote{This remark is of some interest for the issue of {\it reductionism}: higher-level laws, such as the macroscopic laws, are reduced to the microscopic ones {\it plus} assumptions on the initial conditions. If this is the case in statistical mechanics, where it is usually granted that reductionism works, it should clarify the situation in other fields, like biology, where reductionism is sometimes questioned. In particular, the fact that some assumptions must be made on the initial conditions in going from the microscopic to the macroscopic should not be forgotten, nor should it be held as an argument against reductionism. Another frequent confusion about reductionism is to remark that the macroscopic laws do not uniquely determine the microscopic ones. For example, many of the macroscopic laws can be derived from stochastic microscopic laws or from deterministic ones. That is true, but does not invalidate reductionism. What is true on the microscopic level has to be discovered independently of the reductionist programme. Finally, what is considered microscopic or macroscopic is a question of scale. The classical description considered here at the ``microscopic" level is an approximation to the quantum description and neglects the molecular and atomic structure. And the ``macroscopic" level may in turn be considered microscopic if one studies large-scale motions of the atmosphere. But, despite frequent claims to the contrary, reductionists are quite happy not to explain carburetors directly in terms of quarks (see Weinberg \cite{We} for a good discussion of reductionism).}. The macroscopic equations may be reversible or not, depending on the situation. But since {\it initial} conditions enter their derivation, there is no {\it logical} reason to expect them to be reversible\footnote{Note that I am not discussing irreversibility in terms of the increase of entropy, but rather in terms of the macroscopic laws. After all, when we observe the mixing of different fluids, we see a phenomenon described by the diffusion equation, but we do not see entropy flowing. The connection with entropy will be made in Section 5.}. \vspace{3mm} \noindent {\bf 3.3. The reversibility objection.} \vspace{3mm} Let me illustrate this explanation of irreversibility in a concrete physical example (see also Appendix 1 for a simple mathematical model). Consider the gas introduced in Section 3.1 that is initially compressed by a piston in the left half of a box, and that expands into the whole box. Let $F$ be the density of the gas. Initially, it is one (say) in one half of the box and zero in the other half. After some time $t$, it is (approximately) one half everywhere. The explanation of the irreversible evolution of $F$ is that the overwhelming majority of the microscopic configurations corresponding to the gas in the left half will evolve deterministically so as to induce the observed evolution of $F$. There may of course be some exceptional configurations, for which all the particles stay in the left half.
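Just how exceptional those configurations are can be seen from a crude count (a minimal sketch, treating the particles as independent and each as equally likely to be in either half of the box; these simplifying assumptions are mine, for illustration):
\begin{verbatim}
from math import log10

# Treat the gas as N independent particles, each equally likely to
# be in either half of the box (crude, but enough for the counting).
# A fraction 2**(-N) of the microscopic configurations has all N
# particles on the left.

for N in (10, 100, 10**4, 10**20):
    print(N, "log10 of that fraction:", -N * log10(2))

# For comparison: microseconds elapsed since the Big Bang.
print("log10(microseconds since the Big Bang):",
      log10(14e9 * 3.15e7 * 1e6))          # about 23.6
\end{verbatim}
Inspecting one configuration per microsecond since the Big Bang would allow about $10^{24}$ trials, a number that is ludicrously small compared to the roughly $10^{3\times 10^{19}}$ configurations available to a still very modest $N=10^{20}$ particles.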
All one is saying is that those configurations are extraordinarily rare, and that we do not expect to see even one of them appearing when we repeat the experiment many times, not even once ``in a million years", to put it mildly \cite{Fe2} (see the end of note (\protect\ref{note_numbers})). This example also illustrates the answer to the reversibility objection. Call ``good" the microscopic configurations that lead to the expected macroscopic behaviour. Take all the good microscopic configurations in the left half of the box, and let them evolve until the density is approximately uniform. Now, reverse all the velocities. We get a set of configurations that still determines a density of one half in the box. However, they are not good. Indeed, from now on, if the system remains isolated, the density just remains uniform according to the macroscopic laws. But for the configurations just described, the gas will move back to the left half, leading to a gross violation of the macroscopic law. What is the solution? Simply that those ``reversed-velocities" configurations form a very tiny subset of all the microscopic configurations giving rise to a uniform density. And, of course, the original set of configurations, those coming from the left half of the box, also form such a small subset. Most configurations corresponding to a uniform density do not go to the left half of the box, either in the future or in the past (at least for reasonable periods of time, see Sect. 4.1). So, if we prepare the system with a uniform density, we do not expect to ``hit" even once one of those bad configurations\footnote{To put it in formulas, let $\overline \Omega_t $ be the configurations giving to $F$ its value at time $t$. If we denote by $F_t$ that value, $\overline \Omega_t $ is simply the preimage of $F_t$ under the map $F$. Let $\Omega_t$ be the set of good configurations at time $t$, namely those that lead, for later times (again, not for {\it too} long, because of Poincar\'e recurrences), to a behaviour of $F$ described by the macroscopic laws. In general, $\Omega_t$ is a very large subset of $\overline \Omega_t$, but is not identical to $\overline \Omega_t$. Thus, $\overline \Omega_0$ are all the configurations in the left half of the box at time zero, and $\Omega_0 $ is the subset consisting of those configurations whose evolution leads to a uniform density. Microscopic reversibility says that $T^t (I(T^t (\Omega_0)))=I(\Omega_0)$ (this is just (1) in note (\protect\ref{note_involution}) applied to $\Omega_0$). A reversibility paradox would follow from $T^t(I(\Omega_t))=I(\Omega_0)$ (one takes all the good configurations at time $t$, reverses their velocities, lets them evolve for a time $t$ and thereby gets the original set of initial conditions, with velocities reversed). But $\Omega_t$ is {\it not equal}, in general, to $T^t (\Omega_0)$ (and this is the source of much confusion). In our example, $T^t (\Omega_0)$ is a tiny subset of $ \Omega_t$, because most configurations in $\Omega_t$ were not in the left half of the box at time zero. Actually, $I(T^t (\Omega_0))$ provides an example of configurations that belong to $\overline \Omega_t$ but not to $ \Omega_t$. These configurations correspond to a uniform density at time $t$, but not at time $2t$. \protect\label{note_good}}. Now comes a real problem. We are explaining that we never expect to get a microscopic configuration that will lead all the gas to the left of the box. {\it But we started from such a configuration}.
How did we get there in the first place? The real problem is not to explain why one goes to equilibrium, but why there are systems out of equilibrium to start with. For the gas, obviously the system was not isolated: an experimentalist pushed the piston. But why was there an experimentalist? Human beings are also systems out of equilibrium, and they remain so (for some time) thanks to the food they eat, which itself depends on the sun, through the plants and their photosynthesis. Of course, in order to be able to take advantage of their food, humans also need their genetic program, which itself results from the long history of natural selection. As discussed e.g. in Penrose \cite{Pe2}, what the earth gains from the sun is not energy (that energy is re-radiated by the earth) but low entropy (likewise, we seek low entropy rather than energy in our food); the sun sends (relatively) few high energy photons and the earth re-radiates more low energy photons (in such a way that the total energy is conserved). Expressed in terms of ``phase space", the numerous low energy photons occupy a much bigger volume than the incoming high energy ones. So, the solar system, as a whole, moves towards a larger part of its phase space while the sun burns its fuel. That evolution accounts, by far, for what we observe in living beings or in other ``self-organized" structures\footnote{Failure to realize this leads to strange statements, as for example in Cohen and Stewart (\cite{CS}, p.259): speaking of the evolution since the Big Bang, the authors write: ``For systems such as these, the thermodynamic model of independent subsystems whose interactions switch on but not off is simply irrelevant. The features of thermodynamics either don't apply or are so long term that they don't model anything interesting. Take Cairns-Smith's scenario of clay as scaffolding for life. The system consisting of clay alone is {\it less} ordered than that of clay plus organic molecules: Order is increasing with time. Why?" The explanation given afterwards ignores both the action of the sun and the original ``improbable state" discussed here. As Ruelle wrote in a review of this book, ``if life violates the Second Law, why can't one build a power plant (with some suitable life forms in it) producing ice cubes and water currents from the waters of Loch Ness?" (\cite{Ru2}). A similar confusion can be found in Popper: ``This law of the increase of disorder, interpreted as a cosmic principle, made the evolution of life incomprehensible, apparently even paradoxical." (\cite{Pop5}, p.172)\protect\label{note_life}}. I shall come back to this point in Section 6. Of course, for the sun to play this role, it has to be itself out of equilibrium, and to have been even more so in the past. We end up with a hen-and-egg problem and we have ultimately to assume that the Universe started in a state far from equilibrium, an ``improbable state" as Boltzmann called it. To make the analogy with the gas in the box, it is as if the Universe had started in a very small corner of a huge box\footnote{I neglect here the effect of gravity: see Penrose \cite{Pe2}.}. To account in a natural way for such a state is of course a major open problem, on which I have nothing to say (see Penrose \cite{Pe2} for further discussion, and Figure 7.19 there for an illustration), except that one cannot avoid it by ``alternative" explanations of irreversibility.
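To return for a moment to the sun/earth photon budget mentioned above: its orders of magnitude can be written down in a few lines (a back-of-the-envelope sketch; the temperatures are round values assumed here for illustration):
\begin{verbatim}
# Back-of-the-envelope sun/earth photon budget; the temperatures are
# round illustrative values.  A thermal photon carries an energy of
# order k_B*T, so re-emitting a fixed energy flux at the earth's
# temperature rather than the sun's multiplies the number of photons,
# and with it (roughly) the entropy of the radiation, by T_sun/T_earth.

T_sun, T_earth = 6000.0, 300.0         # kelvin, rough values
print("photons out per photon in, roughly:", T_sun / T_earth)  # ~20
\end{verbatim}
This factor of roughly twenty per photon, multiplied by an enormous photon flux, is what pays for the local decreases of entropy that we observe on earth.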
Given the laws of physics, as they are formulated now, the world could have started in equilibrium, and then we would not be around to discuss the problem\footnote{As Feynman says: ``Therefore I think it necessary to add to the physical laws the hypothesis that in the past the universe was more ordered, in the technical sense, than it is today - I think this is the additional statement that is needed to make sense, and to make an understanding of the irreversibility." (\cite{Fe2}, p.116)\protect\label{note_Feynman}}. To summarize: the only real problem with irreversibility is not to explain irreversible behaviour in the future, but to account for the ``exceptional" conditions of the Universe in the past. \vspace{3mm} \noindent {\bf 3.4. Chaos and irreversibility.} \vspace{3mm} Now, I come to my basic criticism of the views of Prigogine and his collaborators, who argue that dynamical systems with very good chaotic properties, such as the baker's map, are ``intrinsically irreversible". Let me quote from a letter of a collaborator of Prigogine, D. Driebe \cite{Dr}, criticizing an article of Lebowitz \cite{Le2} explaining Boltzmann's ideas. This letter is remarkably clear and summarizes well the main points of disagreement. ``If the scale-separation argument were the whole story, then irreversibility would be due to our approximate observation or limited knowledge of the system. This is difficult to reconcile with the constructive role of irreversible processes\dots Irreversibility is not to be found on the level of trajectories or wavefunctions but is instead manifest on the level of probability distributions\dots Irreversible processes are well observed in systems with few degrees of freedom, such as the baker and the multibaker transformations\dots The arrow of time is not due to some phenomenological approximations but is an intrinsic property of classes of unstable dynamical systems"\footnote{In a recent textbook one reads, after a discussion of the baker's map: ``Irreversibility appears only because the instantaneous state of the system cannot be known with an infinite precision" (\cite{Va}, p.198).}. Let us discuss these claims one by one. First of all, as I emphasized above, the scale-separation (i.e. the micro/macro distinction) is not ``the whole story". Initial conditions have to enter into the explanation (and also the dynamics, of course). Next, what does it mean that ``irreversible processes are observed in systems such as the baker transformation"? This transformation describes a chaotic system with few degrees of freedom, somewhat like the billiard ball on a frictionless table\footnote{The baker map is quite similar to the map discussed in note \protect\ref{note_10x}, and has the same chaotic properties as the latter, but is invertible.}. For those systems, there is no sense of a micro/macro distinction: how could one define the macroscopic variables? To put it otherwise, we can make a movie of the motion of a point in the plane evolving under the baker's map, or of a billiard ball, or of any isolated chaotic system with few degrees of freedom, and run it backwards: we shall not be able to tell the difference. There is nothing funny or implausible going on, unlike the backward movie of any real irreversible macroscopic phenomenon. So, the first critique of this alleged connection between unstable dynamical systems (i.e.
what I call here chaotic systems) and irreversibility is that one ``explains" irreversibility in systems in which nothing irreversible happens, and where therefore there is nothing to be explained. It is true that probability distributions for those systems evolve ``irreversibly", meaning that any (absolutely continuous, see note \protect\ref{note_proba}) probability distribution will spread out all over the phase space and will quickly tend to a uniform distribution. This just reflects the fact that different points in the support of the initial distribution, even if they are close to each other initially, will be separated by the chaotic dynamics. So, it is true, in a narrow sense, that ``irreversibility is manifest on the level of probability distributions". But what is the physical meaning of this statement? A physical system, chaotic or not, is described by a trajectory in phase space, and is certainly not described adequately by the corresponding probability distributions. As I discussed in Section 2.2, the latter reflect, in part, our ignorance of that trajectory. Their ``irreversible" behaviour in this sense is therefore not a genuine physical property of the system. We can, if we want, focus our attention on probabilities rather than on trajectories, but that ``choice" cannot have a basic role in our explanations. One cannot stress strongly enough the difference between the role played by probabilities here and in the classical solution. In the latter, we use probabilities as in the coin-throwing experiment. We have some macroscopic constraint on a system (the coin is fair; the particles are in the left half of the box), corresponding to a variety of microscopic configurations. We predict that the behaviour of certain macroscopic variables (the average number of heads; the average density) will be the one induced by the vast majority of microscopic configurations, compatible with the initial constraints. That's all. But it works only because a large number of variables are involved, {\it in each single physical system}. However, each such system is described by a point in phase space (likewise, the result of many coin throwings is a particular sequence of heads and tails). In the ``intrinsic irreversibility" approach, a probability distribution is assigned to {\it each single physical system}, as an ``irreducible" description. The only way I can make sense of that approach is to consider a {\it large number} of billiard balls or of copies of the baker's map, all of them starting with nearby initial conditions. Then, it would be like the particles in the box: the average density would tend to become uniform, and we are back to the standard picture. But this does not force us to ``rethink the notion of law of nature".
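Both halves of this argument can be checked concretely on the baker's map itself. Here is a minimal sketch (in exact rational arithmetic, so that the time reversal can be verified exactly; the initial point and the number of steps are arbitrary choices of mine):
\begin{verbatim}
from fractions import Fraction as F

# The baker's map on the unit square and its inverse, in exact
# rational arithmetic, so that time reversal can be checked exactly.

def baker(x, y):
    return (2 * x, y / 2) if x < F(1, 2) else (2 * x - 1, (y + 1) / 2)

def baker_inv(x, y):
    return (x / 2, 2 * y) if y < F(1, 2) else ((x + 1) / 2, 2 * y - 1)

p = (F(1, 3), F(1, 7))
q = p
for _ in range(25):
    q = baker(*q)        # forward 25 steps
for _ in range(25):
    q = baker_inv(*q)    # and exactly back
print(q == p)            # True: the trajectory is perfectly reversible
\end{verbatim}
An ensemble of such points does spread out over the square, as in the sketch of Section 2.2; but each individual trajectory can be run backwards exactly, and nothing irreversible ever happens to it.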
(\cite{P2}, p.37) But, thanks to the existence of unstable dynamical systems, ``the notion of probability that Boltzmann had introduced in order to express the arrow of time does not correspond to our ignorance and acquires an objective meaning" (\cite{P2}, p.42)\footnote{See e.g. Coveney (\cite{Co}, p.412): ``Another quite popular approach has been to relegate the whole question of irreversibility as illusory." See also Lestienne (\cite{Les}, p.176) and Prigogine and Stengers (\cite{PS1} p.284) for similar remarks.}. To use Popper's image: ``Hiroshima is not an illusion" (I shall come back to Popper's confusions in Section 4.4). This is only a dramatization of the fact that irreversible events are not subjective, or so it seems. The objection is that, if the microscopic variables behave reversibly and if irreversibility only follows when we {\it ``choose"} to concentrate our attention on macroscopic variables, then our explanation of irreversibility is unavoidably tainted by subjectivism. I think that this charge is completely unfair, and reflects some misunderstanding of what irreversible phenomena really are. The point is that, upon reflection, one sees that all irreversible phenomena deal with these macroscopic variables. There is no subjectivism here: the evolution of the macroscopic variables is objectively determined by the microscopic ones, and they behave as they do whether we look at them or not. In that sense they are completely objective. But it is true that, if we look at a single molecule, or at a collection of molecules represented by a point in phase space, there is no sense in which they evolve ``irreversibly", if we are not willing to consider some of the macroscopic variables that they determine. However, the apparently ``subjective" aspect of irreversibility has sometimes been overemphasized, at least as a way of speaking. Heisenberg wrote: ``Gibbs was the first to introduce a physical concept which can only be applied to an object when our knowledge of the object is incomplete. If for instance the motion and the position of each molecule in a gas were known, then it would be pointless to continue speaking of the temperature of the gas."(\cite{He}, p.38)\footnote{Pauli made a similar remark, see \cite{Pa}, quoted in Popper, \cite{Pop4}, p.109.}. And Max Born said: ``Irreversibility is therefore a consequence of the explicit introduction of ignorance into the fundamental laws." (\cite{Bo2}, p.72). These formulations, although correct if they are properly interpreted, lead to unnecessary confusions. For example, Popper wrote: ``It is clearly absurd to believe that pennies fall or molecules collide in a random fashion {\it because we do not know} the initial conditions, and that they would do otherwise if some demon were to give their secret away to us: it is not only impossible, it is absurd to explain objective statistical frequencies by subjective ignorance." (\cite{Pop4}, p.106)\footnote{In his textbook on Statistical Mechanics, S.-K. Ma expresses similar concerns: ``In one point of view, probability expresses the knowledge of the observer. If he knows more about the system, the probability is more concentrated. This is obviously incorrect. The motion of the system is independent of the psychological condition of the observer." (\cite{Ma1}, p.448) And H.
Bondi wrote: ``It is somewhat offensive to our thought to suggest that if we know a system in detail then we cannot tell which way time is going, but if we take a blurred view, a statistical view of it, that is to say throw away some information, then we can\dots" (\cite{Bon}, quoted in \cite{Lan}, p.135. T. Gold expressed similar views, see \cite{Lan}).}. However, just after saying this, Popper gives what he calls ``an objective probabilistic explanation of irreversible processes" (\cite{Pop4}, p.107), attributed to Planck, which, as far as I can tell, is not very different from what I call the classical solution. The source of the confusion comes from two uses of the word ``knowledge". Obviously, the world does what it does, whether we know about it or not. So, indeed, if ``some demon" were to provide us with a detailed knowledge of the microscopic state of the gas in the left half of the box, nothing would change in the future evolution of that gas. But we may imagine situations where one can {\it control} more variables, hence ``know" more about the system. When the piston forces the gas to be in the left half of the box, the set of available microscopic states is different than when the piston is not there, and obviously we have to take that ``knowledge" into account. But there is nothing mysterious here. I believe that statistical mechanics would become easier for students to understand if it were presented without using an anthropomorphic language and subjective-sounding notions such as information, observation or knowledge. Or, at least, one should explain precisely why these notions are introduced and why they do not contradict an objectivist view of natural phenomena (see the writings of Jaynes on this point \cite{Ja,Ja4}). But I also believe that the charge of subjectivity should be completely reversed: to ``explain" irreversibility through the behaviour of probability distributions (which {\it are} describing our ignorance), as Prigogine does, is to proceed as if the limitations of human knowledge played a fundamental physical role. \section{Some misconceptions about irreversibility} \begin{flushleft} {\footnotesize The Second Law can never be proved mathematically \\ by means of the equations of dynamics alone.\\ L. Boltzmann (\cite{Bo}, p.204).} \end{flushleft} \vspace{3mm} \noindent {\bf 4.1. The Poincar\'e recurrence theorem.} \vspace{3mm} According to Prigogine (\cite{P2}, p.23), Poincar\'e did not recommend reading Boltzmann, because his conclusions were in contradiction with his premises. Discussing our example of a gas expanding in a container, Prigogine observes that ``if irreversibility was only that, it would indeed be an illusion, because, if we wait even longer, then it may happen that the particles go back to the same half of the container. In this view, irreversibility would simply be due to the limits of our patience." (\cite{P2}, p.24) This is basically the argument derived from the Poincar\'e recurrence theorem (and used by Zermelo against Boltzmann \cite{Ze}), which says that, if the container remains isolated long enough, then indeed the particles will return to the half of the box from which they started. Replying to that argument, Boltzmann supposedly said ``You should live that long". For any realistic macroscopic system, the Poincar\'e recurrence times (i.e. the time needed for the particles to return to the left half of the box) are much, much larger than the age of the universe.
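To get a feeling for the orders of magnitude, here is a small numerical sketch; the particle number, the microscopic time scale and the age of the universe below are merely illustrative assumptions, not precise values. It uses the crude estimate that a fraction $2^{-N}$ of the microstates has all $N$ particles in the left half, so that the recurrence time is of order $2^N$ microscopic time steps:

\begin{verbatim}
# Crude, illustrative estimate of a Poincare recurrence time:
# a fraction 2^(-N) of the microstates has all N particles in the
# left half, so take t_rec ~ tau * 2^N microscopic time steps.
import math

N = 10**20            # assumed number of particles (illustrative)
tau = 1e-9            # assumed microscopic time scale, in seconds
age_universe = 4e17   # rough age of the universe, in seconds

log10_t_rec = math.log10(tau) + N * math.log10(2)
print("log10(t_rec in seconds) ~ %.2e" % log10_t_rec)           # ~ 3e19
print("log10(age of universe)  ~ %.1f" % math.log10(age_universe))
\end{verbatim}

The precise numbers are irrelevant; the point is that already the {\it logarithm} of the recurrence time dwarfs the age of the universe.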
So, again, no contradiction can be derived, from a physical point of view, between Boltzmann's explanations and Poincar\'e's theorem. However, there is still a mathematical problem (and this may be what Poincar\'e had in mind): if one tries to rigorously derive an irreversible macroscopic equation from the microscopic dynamics and suitable assumptions on initial conditions, the Poincar\'e recurrence time will put a limit on the length of the time interval over which these statements can be proven. That is one of the reasons why one discusses these derivations in suitable limits (e.g. when the number of particles goes to infinity) where the Poincar\'e recurrence time becomes infinite. But one should not confuse a limit taken for mathematical convenience with the source of irreversibility. In the Kac model discussed in Appendix 1, one sees clearly that there are very different time scales: one over which convergence to equilibrium occurs, and a much larger one, over which the Poincar\'e recurrence takes place. But the first time scale is not an ``illusion". In fact, it is on that time scale that all phenomena that we can possibly observe do take place. \vspace{3mm} \noindent {\bf 4.2. Ergodicity and mixing.} \vspace{3mm} One often hears that, for a system to reach ``equilibrium", it must be ergodic, or mixing. The fact is that those properties, like the ``intrinsic irreversibility" discussed above, {\it are neither necessary nor sufficient} for a system to approach equilibrium. Let me start with ergodicity. A dynamical system is {\it ergodic} if the average time spent by a trajectory in any region of the phase space is proportional to the volume of that region. To be more precise: the average is taken in the limit of infinite time, and this property has to hold for all trajectories, except (possibly) those lying in a subset of zero volume. One says that it holds for ``almost all" trajectories. This property implies that, for any reasonable function on phase space, the average along almost all trajectories will equal the average over the phase space\footnote{In formulas, let $\Omega$ be the ``phase space" on which the motion is ergodic (i.e. a constant energy surface, which is a subset of the space considered in note \protect\ref{note_involution}, on which the measure induced by the Lebesgue measure is defined, normalised to one and denoted $d{\bf x}$). Then, ergodicity means that, for $F$ integrable, \begin{equation} \lim_{T \rightarrow \infty} \frac{1}{T} \int_0^T F(T^t {\bf x}) dt = \int_{\Omega} F({\bf x}) d{\bf x}, \end{equation} for almost all initial conditions ${\bf x} \in \Omega$. The LHS is the time average and the RHS the space average. If we take for $F$ the characteristic function of a (measurable) set $A\subset \Omega$, the time average equals the fraction of time spent by the trajectory in $A$, and the space average is the volume of $A$.}. Then, the argument goes, the measurement of any physical quantity will take some time. This time is long compared to the ``relaxation time" of molecular processes. Hence, we can approximately regard it as infinite. Therefore, the measured quantity, a time average, will approximately equal the average over phase space of the physical quantity under consideration. But this latter average is exactly what one calls the equilibrium value of the physical quantity. So, according to the usual story, if a dynamical system is ergodic, it converges towards equilibrium.
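Before examining the problems with this story, let me illustrate the ergodic property itself with a minimal numerical sketch. I use the rotation $x \rightarrow x + \alpha \; (\hbox{mod } 1)$ with $\alpha$ irrational, which is ergodic (though not chaotic), rather than a map like the one of note \protect\ref{note_10x}, only because the latter degenerates quickly in floating-point arithmetic; the observable, the initial point and the trajectory length are arbitrary choices:

\begin{verbatim}
# Time average along a single trajectory versus space average, for
# the ergodic rotation x -> x + alpha (mod 1), alpha irrational.
import math

alpha = math.sqrt(2) - 1                        # irrational rotation number
F = lambda x: math.cos(2 * math.pi * x) ** 2    # a test observable

x, total, T = 0.123, 0.0, 10**6
for _ in range(T):
    total += F(x)
    x = (x + alpha) % 1.0

print("time average  = %.6f" % (total / T))     # close to 0.5
print("space average = %.6f" % 0.5)             # integral of F over [0,1)
\end{verbatim}

The time average indeed approaches the space average; but note that nothing in this little experiment looks remotely like an approach to equilibrium.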
This appeal to ergodicity in order to justify statistical mechanics is rather widespread\footnote{For a history of the concept of ergodicity, and some very interesting modern developments, see Gallavotti \cite{Gal}. It seems that the (misleading) emphasis on the modern notion of ergodicity goes back to the Ehrenfests' paper \cite{Eh}, more than to Boltzmann. A careful, but nevertheless exaggerated, interest in ergodicity and mixing is found in the work of Khinchin \cite{Kh} and Krylov \cite{Kr}; it is also found e.g. in Chandler \cite{Ch}, p.57, Hill \cite{Hi}, p.16, S. K. Ma \cite{Ma1} Chap.26, Thompson \cite{Tho}, App.B, and Dunford and Schwartz \cite{DS}, p.657 (but see Schwartz \cite{Sch} for a self-criticism of \cite{DS}); in the recent textbook of Vauclair \cite{Va}, one reads: ``One considers that during the time $\delta t$ of the measurement, the system has gone through all the possibly accessible states, and that it spent in each state a time proportional to its probability." (p.11) And: ``Only the systems having this property (mixing) tend to an equilibrium state, when they are initially in a state out of equilibrium." (p.197).} even though it has been properly criticized for a long time by, e.g., Tolman \cite{To}, p.65, Jaynes \cite{Ja}, p.106, and Schwartz \cite{Sch}. Let us see the problems with this argument: a well-known, but relatively minor, problem is that it is very hard to give a mathematical proof that a realistic mechanical system is ergodic. But let us take such a proof for granted, for the sake of the discussion. Here is a more serious problem. Assume that the argument given above is true: how would it then be possible to observe or measure {\it any non-equilibrium} phenomenon? In the experiment with the box divided into two halves, we should not be able to see any intermediate stage, when the empty half gets filled, since the time for our measurements is supposed to be approximately infinite. So, where is the problem? We implicitly identified the ``relaxation time" with what one might call the ``ergodic time", i.e. the time taken by the system to visit all regions of phase space sufficiently often so that the replacement of time averages by spatial averages is approximately true. But, whatever the exact meaning of ``relaxation time" (for a few molecules), the ergodic time is certainly enormously longer. Just consider how large a volume in phase space has to be ``sampled" by the trajectory. For example, all the particles could be in the right half of the box, and ergodicity says that they will spend some time there (note that this is not implied by Poincar\'e's theorem; the latter only guarantees that the particles will return to the part of the box from which they started, i.e. the left half here). To be more precise, let us partition the phase space into a certain number of cells, of a given volume, and consider the time it takes for a given trajectory to visit each cell at least once\footnote{If there is some cell which has not been visited even once, there will be a function on phase space for which the space average and the time average, computed up to that time, differ a lot: just take the function which takes value one on that cell and is zero elsewhere.}. That, obviously, will depend on the size (hence, on the number) of the cells. By taking finer and finer partitions, we can make that time as large as one wishes.
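A toy calculation makes this explicit. If, as a caricature, the trajectory sampled the $n$ cells of the partition like independent uniform random draws (an assumption made only for the sake of illustration), the expected time to visit every cell at least once would be of order $n\ln n$ (the classical coupon-collector estimate), and hence would grow without bound as the partition is refined:

\begin{verbatim}
# Coupon-collector caricature: expected number of steps needed to
# visit all n cells at least once, if the dynamics sampled the cells
# like independent uniform random draws.
import math

for n in [10, 10**3, 10**6, 10**12]:
    steps = n * math.log(n)
    print("n = %.0e cells  ->  about %.2e steps" % (n, steps))
\end{verbatim}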
So, if one were to take the argument outlined above literally, the ``ergodic time" is infinite, and speaking loosely about a relaxation time is simply misleading. At this point of the discussion, one often says that we do not need the time and space averages to be (almost) equal for all functions, but only for those of physical relevance (like the energy or particle densities). This is correct, but the criticism of the ``ergodic" approach then changes: instead of not being {\it sufficient} to account for irreversibility, we observe that it is not {\it necessary}. To see this, consider another partition of phase space: fix a set of macroscopic variables, and partition the phase space according to the values taken by these variables (see e.g. figures 7.3 and 7.5 in Penrose \cite{Pe2} for an illustration, and Appendix 1 here for an example). Each element of the partition consists of a set of microscopic states that give the same value to the chosen macroscopic variables. Now, these elements of the partition have very different volumes. This is similar to the law of large numbers. There are (for $N$ large) vastly more results of $N$ throws of a coin where the number of heads is approximately one half than throws where it is approximately one quarter (the ratio of these two numbers varies exponentially with $N$; see the sketch at the end of this subsection). By far the largest volumes correspond to the {\it equilibrium values} of the macroscopic variables (and that is how ``equilibrium" should be defined). So, we need a much weaker notion than ergodicity. All we need is that the microscopic configuration evolves in phase space towards those regions where the relevant macroscopic variables take their equilibrium values. The Kac model (see Appendix 1) perfectly illustrates this point: it is not ergodic in any sense, yet, on proper time scales, the macroscopic variables evolve towards equilibrium. There is a hierarchy of ``ergodic" properties that are stronger than ergodicity: mixing, K-system, Bernoulli; see Lebowitz and Penrose \cite{LP}. But none of these will help us to understand, in principle, irreversible behaviour any more than ergodicity does. The problem with all those approaches is that they try to give a purely mechanical criterion for ``irreversible behaviour". Here is the basic dilemma: either we are willing to introduce a macro/micro distinction and to give a basic role to initial conditions in our explanation of irreversibility, or we are not. If we make the first choice, then, as explained in Section 3, there is no deep problem with irreversibility, and subtle properties of the dynamics (like ergodic properties) play basically no role. On the other hand, nobody has ever given a consistent alternative, namely an explanation of irreversibility that would hold for {\it all} initial conditions or apply to {\it all} functions on configuration space (therefore avoiding the micro/macro distinction). So, we have to make the first choice. But then, everything is clear and nothing else is needed. Another critique of the ``ergodic" approach is that systems with one or few degrees of freedom may very well be ergodic, or mixing, or Bernoulli (like the baker's transformation). And, as we discussed in Section 3.4, it makes no sense to speak about irreversibility for those systems. So, this is another sense in which the notion of ergodicity is not sufficient (see e.g. Vauclair (\cite{Va} p.197), where the approach to equilibrium is illustrated by the baker's transformation).
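Here is the promised sketch of the coin-throwing comparison (the values of $N$ are arbitrary); it simply counts, among the $2^N$ possible sequences of $N$ throws, those with $N/2$ heads and those with $N/4$ heads:

\begin{verbatim}
# Ratio of the number of N-throw sequences with N/2 heads to the
# number with N/4 heads; it grows exponentially with N.
from math import comb, log10

for N in [20, 100, 1000, 10000]:
    log_ratio = log10(comb(N, N // 2)) - log10(comb(N, N // 4))
    print("N = %5d :  C(N,N/2)/C(N,N/4) ~ 10^%.1f" % (N, log_ratio))
\end{verbatim}

Already for $N = 10000$ the ratio is of order $10^{568}$; for a macroscopic number of particles the disparity between the volumes of the macrostates is beyond astronomical.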
To avoid any misunderstandings, I emphasize that the study of ergodic properties of dynamical systems gives us a lot of interesting information about those systems, especially chaotic ones. Besides, ergodic properties, like other concrete dynamical properties of a system, may play a role in the form of the macroscopic equations obeyed by the system, in the value of some transport coefficients or in the speed of convergence to equilibrium. But, and this is the only point I wanted to make, the usual story linking ergodicity (or mixing) and ``approach to equilibrium" is highly unsatisfactory. \vspace{3mm} \noindent {\bf 4.3. Real systems are never isolated.} \vspace{3mm} Sometimes it is alleged that, for some reason (the Poincar\'e recurrences, for example), a truly isolated system will never reach equilibrium. But, the argument continues, it does not matter, since true isolation never occurs and external (``random") disturbances will always drive the system towards equilibrium\footnote{One can even invoke a theorem to that effect: the ergodic theorem for Markov chains. But this is again highly misleading. This theorem says that probability distributions will converge to an ``equilibrium" distribution (for suitable chains). This is similar, and related, to what happens with strongly chaotic systems. But it does not explain what happens to a single system, unless we are willing to distinguish between microscopic and macroscopic variables, in which case the ergodic theorem is not necessary.}. This is true but irrelevant\footnote{Borel \cite{Bor} tried to answer the reversibility objection, using the lack of isolation and the instability of the trajectories. As we saw in Section 3.3, this objection is not relevant, once one introduces the micro/macro distinction. And Fred Hoyle wrote: ``The thermodynamic arrow of time does not come from the physical system itself\dots it comes from the connection of the system with the outside world" \cite{Ho}, quoted in \cite{Lan}. See also Cohen and Stewart (\cite{CS}, p. 260) and \cite{DH} for similar ideas.}. In order to understand this problem of non-isolation, we have to see how to deal with idealizations in physics. Boltzmann compares this with Galilean invariance (see \cite{Bo}, p.170). Because of non-isolation, Galilean (or Lorentz) invariance can never be applied, strictly speaking (except to the entire universe, which is not very useful). Yet, there are many phenomena whose explanation involves Galilean (or Lorentz) invariance. We simply proceed as if the invariance were exact, and we then argue that the fact that it is only approximate does not spoil the argument. One uses a similar reasoning in statistical mechanics. If we can explain what we want to explain (e.g. irreversibility) by making the assumption that the system is perfectly isolated, then we do not have to introduce the lack of isolation in our explanations. We have only to make sure that this lack of isolation does not conflict with our explanation. And how could it? The lack of isolation should, in general, speed up the convergence towards equilibrium\footnote{One has to be careful here. If we shake a mixture of fluids, it should become homogeneous faster. But of course, there are external influences that prevent the system from going to equilibrium, as with a refrigerator. Also, the time scale on which approach to equilibrium takes place may vary enormously, depending on the physical situation. This is what is overlooked in \cite{DH}.}.
Also, if we want to explain why a steamboat cannot use the kinetic energy of the water to move, we apply irreversibility arguments to the system boat$+$water, even though the whole system is not really isolated. Another way to see that lack of isolation is true but irrelevant is to imagine a system being more and more isolated. Is irreversibility going to disappear at some point? That is, will different fluids no longer mix, or will they spontaneously unmix? I cannot think of any example where this could be argued. And I cannot tell a student with a straight face that (part of) our explanation for irreversible phenomena on earth depends on the {\it existence} of Sirius. \vspace{3mm} \noindent {\bf 4.4. Bergson, Popper, Feyerabend (and others).} \vspace{3mm} Here, I will discuss various confusions that have been spread by some philosophers. Bergson was a rather unscientific thinker, and many readers may wonder why he belongs here. I have myself been very surprised to see how much sympathy Prigogine and Stengers seem to have for Bergson (see the references to Bergson in \cite{PS1,PS2}). But Bergson has been extremely influential, at least in French culture, and, I am afraid, still is\footnote{I remember that when I first heard, as a teenager, about the special theory of relativity, it was through Bergson's alleged refutation of that theory! He thought, probably rightly so, that there was a conflict between his intuitive views on duration and the absence of absolute simultaneity in Relativity. So he simply decided that there was a ``time of consciousness", as absolute as Newtonian time, and that the Lorentz transformations were merely some kind of coordinates ``attributed" by one observer to the other. Running into trouble with the twin paradox, he decided that acceleration is relative, like uniform motion, and that, when both twins meet again, they have the same age! (see \cite{Ber1}). At least, Bergson had the good sense, after his lengthy polemic with Einstein, to stop the republication of his book. But, and this is a remarkable aspect of our ``intellectual" culture, the {\it very same mistake} is repeated by some of his admirers, Jankelevitch (\cite{Ya}, Chap. 2), Merleau-Ponty (\cite{MP}, p. 319) and Deleuze (\cite{Del}, p.79; see also his later writings). Of course, all this is explained by telling the physicists that they should stick to their ``mathematical expressions and language" (Merleau-Ponty, \cite{MP}, p. 320), while leaving the deep problems of the ``time of consciousness" to philosophers. For a modern attempt to make some sense of Bergson's universal time, see the first Appendix of ``Entre le Temps et l'Eternit\'e" (\cite{PS2}). }. In particular, he is one source of the widespread confusion that there is a contradiction between life and the Second Law of thermodynamics. Roughly speaking, Bergson saw a great opposition between ``matter" and ``life", and a related one between intellect and intuition. The intellect can understand matter, but intuition is needed to apprehend life\footnote{See Monod \cite{Mo} and B. Russell \cite{Rus} for a critique of his philosophy and \cite{Mor} for the relation between Bergson and Prigogine. The main problem with Bergson's lasting influence is well expressed by Bertrand Russell: ``One of the bad effects of an anti-intellectual philosophy such as that of Bergson, is that it thrives upon the errors and confusions of the intellect.
Hence it is led to prefer bad thinking to good, to declare every momentary difficulty insoluble, and to regard every foolish mistake as revealing the bankruptcy of intellect and the triumph of intuition." (\cite{Rus}, p.831)}. Bergson was not a precursor of the discovery of DNA, to put it mildly\footnote{This remark is not as anachronistic as it may seem. Think of the work of Weismann, at the turn of the century, on the continuity of the germ-plasm.}. The Second Law of thermodynamics, which he called the ``most metaphysical of the laws of physics" (\cite{Ber}, p.264), was very important for him\footnote{As we shall see in Section 5, it is probably the least metaphysical of those laws (although I do not like this terminology), since it is not a purely dynamical law.}. It reinforced his ``vision of the material world as that of a falling weight." (\cite{Ber}, p.266), hence, that ``all our analyses show indeed in life an effort to climb the slope that matter has descended." (\cite{Ber}, p.267) ``The truth is that life is possible wherever energy goes down the slope of Carnot's law, and where a cause, acting in the opposite direction, can slow down the descent." (\cite{Ber}, p.278) It's all metaphorical, of course, but Bergson's philosophy {\it is} entirely a ``metaphorical dialectics devoid of logic, but not of poetry", as Monod calls it (\cite{Mo}). In any case, life is perfectly compatible with the Second Law (see Section 3.3). Turning to Popper, we have already seen that he had lots of problems with statistical mechanics. Since Popper is generally viewed favourably by scientists\footnote{See, e.g. the introduction by Monod to the French edition of ``The Logic of Scientific Discovery" \cite{Pop3} and also Prigogine and Stengers, e.g. (\cite{PS2}, p.173). For philosophical critiques of Popper, see Putnam \cite{Pu}, and Stove \cite{Sto}. For a critique of his views on the arrow of time, see Ghins \cite{Gh}.}, it is worth looking more closely at his objections. He took too literally the claims of Heisenberg, Born and Pauli on irreversibility as ``subjective" (see Section 3.5), which he thought (maybe rightly so) were precursors of the subjectivism of the Copenhagen interpretation of quantum mechanics (see \cite{Pop4}). Besides, he was strongly opposed to determinism and he was convinced that ``the strangely law-like behaviour of the statistical sequences remain, for the determinist, {\it ultimately irreducible and inexplicable}." (\cite{Pop5}, p.102). As I discussed in Section 2.2, there is no problem in using probabilities, even in a deterministic universe. He then invented a rather obscure ``propensity" interpretation of probabilities. He also felt that one should define ``objectively" what a random sequence is. A sequence (of zeros and ones) will be random if there are (almost) as many zeros and ones, as many pairs $00$, $01$, $10$, $11$, etc\dots (see e.g. \cite{Pop4}, p.112). He did not seem to realize that this is like saying that a ``microscopic configuration" (a sequence) gives to certain ``macroscopic variables" (the average number of occurrences of finite subsequences) the values which are given to them by the overwhelming majority of sequences. So the difference from what he calls the ``subjective" viewpoint is not so great. Finally, Popper was very critical of Boltzmann. Although he admired Boltzmann's realist philosophy, he called Boltzmann's interpretation of time's arrow ``idealist" and claimed that it was a failure.
As we saw, any explanation of irreversibility ultimately forces us to say that the universe started in an ``improbable" state. Boltzmann tried to explain it as follows: in an eternal and infinite universe globally in equilibrium, all kinds of fluctuations will occur. What we call our universe is just the result of one such gigantic fluctuation, on its way back to equilibrium. But this explanation does not really work. Indeed, the most probable assumption, if a fluctuation theory is to hold, is simply that my brain is a fluctuation out of equilibrium, just at this moment and in this small region of space, while none of the familiar objects of the universe (stars, planets, other human beings) exist and all my (illusory) perceptions and memories are simply encoded in the states of my neurons (a ``scientific" version of solipsism). However improbable such a fluctuation is, it is still far more probable than a fluctuation giving rise to the observed universe, of which my brain is a part. Hence, according to the fluctuation theory, that ``solipsist" fluctuation must actually have occurred many more times than the big fluctuation in which we live, and therefore no explanation is given for the fact that we happen to live in the latter (see Feynman \cite{Fe2} and Lebowitz \cite{Le1} for a discussion of that fluctuation theory). Boltzmann's cosmology does not work. So what? When Popper wrote (1974), no one took Boltzmann's cosmology seriously anyway: it had long since been superseded by cosmologies based on general relativity. Besides, Popper does not raise the objection I just made. His criticism is, rather, that this view would render time's arrow ``subjective" and make Hiroshima an ``illusion". This is complete gibberish. Boltzmann gives a complete and straightforward explanation of irreversible processes in which Hiroshima is as objective as it unfortunately is (when it is described at the macroscopic level, which is what we mean by ``Hiroshima"). Of course, questions remain concerning the initial state of the universe. In the days of Boltzmann, very little was known about cosmology. What the failure of Boltzmann's hypothesis on the origin of the initial state shows is that cosmology, like the rest of science, cannot be based on pure thought alone\footnote{There are indications that Boltzmann did not take his fluctuation theory too seriously. For example, he wrote ``that the world began from a very unlikely initial state, this much can be counted amongst the fundamental hypotheses of the whole theory and we can say that the reason for it is as little known as that for why the world is as it is and not otherwise."(\cite{Bo}, p.172; compare with note \protect\ref{note_Feynman}). In general, Boltzmann is quite opposed to unscientific speculations. In his criticism of Schopenhauer, he takes a very Darwinian (and surprisingly modern) view of mankind. He starts by observing that drinking fermented fruit juices can be very good for your health: ``if I were an anti-alcoholic I might not have come back alive from America, so severe was the dysentery that I caught as a result of bad water\dots it was only through alcoholic beverages that I was saved." (\cite{Bo}, p.194) But, with alcohol, one can easily overshoot the mark. It is the same thing with moral ideas. ``We are in the habit of assessing everything as to its value; according to whether it helps or hinders the conditions of life, it is valuable or valueless.
This becomes so habitual that we imagine we must ask ourselves whether life itself has a value. This is one of those questions utterly devoid of sense." (\cite{Bo}, p.197) Finally, for theoretical ideas, he observes that our thoughts should correspond to experience and that overshooting the mark should be kept within proper bounds: ``Even if this ideal will presumably never be completely realized, we can nevertheless come nearer to it, and this would ensure cessation of the disquiet and the embarrassing feeling that it is a riddle that we are here, that the world is at all and is as it is, that it is incomprehensible what is the cause of this regular connection between cause and effect, and so on. Men would be freed from the spiritual migraine that is called metaphysics."(\cite{Bo}, p.198).}. Popper was also too impressed with Zermelo's objections to Boltzmann, based on the Poincar\'e recurrence theorem, and discussed above (see \cite{Pop1}). But he had even stranger criticisms: in \cite{Pop2}, he argues that Brownian motion (where fluctuations may pull the particle against gravity) is a serious problem for the Second Law. Maxwell had already observed that ``The Second Law is constantly being violated\dots in any sufficiently small group of molecules\dots As the number \dots is increased \dots the probability of a measurable variation \dots may be regarded as practically an impossibility." (\cite{Max}, quoted in \cite{Le2}) Going from bad to worse, Feyerabend invents a ``perpetuum mobile of the second kind" (i.e. one respecting the first law but not the second) using {\it a single molecule} \cite{Fe1}. He adds that he assumes ``frictionless devices" (he had better do so!). Those claims are then repeated in his popular book ``Against Method" \cite{Fe}, where it is explained that Brownian motion refutes the Second Law\footnote{This error is repeated, with many others, in \cite{Wo}, p.177.}. This is how the general educated public is misled into believing that there are deep open problems which are deliberately ignored by the ``official science"! Unfortunately, this is not the end of it. Contemporary (or post-modern) French ``philosophy" is an endless source of confusions on chaos and irreversibility. Here are just a few examples. The well-known philosopher Michel Serres says, in an interview with the sociologist of science Bruno Latour, paradoxically entitled ``Eclaircissements": ``Le temps ne coule pas toujours selon une ligne (la premi\`ere intuition se trouve dans un chapitre de mon livre sur Leibniz, pp.~284--286) ni selon un plan, mais selon une vari\'et\'e extraordinairement complexe, comme s'il montrait des points d'arr\^et, des ruptures, des puits, des chemin\'ees d'acc\'el\'eration foudroyante, des d\'echirures, des lacunes, le tout ensemenc\'e al\'eatoirement, au moins dans un d\'esordre visible. Ainsi le d\'eveloppement de l'histoire ressemble vraiment \`a ce que d\'ecrit la th\'eorie du chaos \ldots "\footnote{I will leave these texts in the original language, and provide only a rough translation, because nonsense is hard to translate: the book is called ``Clarifications" and the quotation is: ``Time does not always flow along a line, nor along a plane, but along an extraordinarily complex manifold, as if it showed stopping points, ruptures, sinks, chimneys of striking acceleration, rips, lacunas, everything being randomly sown, at least in a visible disorder. So, the development of history really resembles what is described by chaos theory."} (\cite{Se}).
Another philosopher, Jean-Fran\c cois Lyotard, writes: ``L'id\'ee que l'on tire de ces recherches (et de bien d'autres) est que la pr\'e\'eminence de la fonction continue \`a deriv\'ee comme paradigme de la connaissance et de la pr\'evision est en train de dispara\^{\i}tre. En s'int\'eressant aux ind\'ecidables, aux limites de la pr\'ecision du contr\^ole, aux quanta, aux conflits \`a l'information non compl\`ete, aux ``{\em fracta}\/'', aux catastrophes, aux paradoxes pragmatiques, la science postmoderne fait la th\'eorie de sa propre \'evolution comme discontinue, catastrophique, non rectifiable, paradoxale. Elle change le sens du mot savoir, et elle dit comment ce changement peut avoir lieu. Elle produit non pas du connu, mais de l'inconnu. Et elle sugg\`ere un mod\`ele de l\'egitimation qui n'est nullement celui de la meilleure performance, mais celui de la diff\'erence comprise comme paralogie."\footnote{``The idea derived from those researches (and from many others) is that the pre-eminence of the continuous function with a derivative as paradigm of knowledge and forecast is disappearing. By being interested in undecidables, in limits of precision of control, in quanta, in conflicts with incomplete information, in ``fracta", in catastrophes, in pragmatical paradoxes, postmodern science makes the theory of its own evolution as discontinuous, catastrophic, not rectifiable, paradoxical. It changes the meaning of the word knowledge, and it says how this change can occur. It produces not the known, but the unknown. And it suggests a model of legitimation which is not at all the one of the best performance, but rather the one of difference understood as paralogism."} (\cite{Ly}). The sociologist Jean Baudrillard observes that ``Il faut peut-\^etre consid\'erer l'histoire elle-m\^eme comme une formation chaotique o\`u l'acc\'el\'eration met fin \`a la lin\'earit\'e, et o\`u les turbulences cr\'e\'ees par l'acc\'el\'eration \'eloignent d\'efinitivement l'histoire de sa fin, comme elles \'eloignent les effets de leurs causes. La destination, m\^eme si c'est le Jugement dernier, nous ne l'atteindrons pas, nous en sommes d\'esormais s\'epar\'es par un hyperespace \`a r\'efraction variable. La r\'etroversion de l'histoire pourrait fort bien s'interpr\'eter comme une turbulence de ce genre, due \`a la pr\'ecipitation des \'ev\'enements qui en inverse le cours et en ravale la trajectoire."\footnote{``One must, maybe, consider history itself as a chaotic formation where acceleration puts an end to linearity, and where the turbulence created by acceleration separates history definitively from its end, as it separates effects from their causes. The destination, even if it is the Last Judgment, we shall not reach it; we are separated from it by a hyperspace with variable refraction. The retroversion of history could very well be interpreted as such a turbulence, due to the precipitancy of events which inverts its course and swallows its trajectory."} (\cite{Bau}). Finally, Gilles Deleuze and F\'elix Guattari understood chaos as follows: ``On d\'efinit le chaos moins par son d\'esordre que par la vitesse infinie avec laquelle se dissipe toute forme qui s'y \'ebauche.
C'est un vide qui n'est pas un n\'eant, mais un {\em virtuel}\/, contenant toutes les particules possibles et tirant toutes les formes possibles qui surgissent pour dispara\^{\i}tre aussit\^ot, sans consistance ni r\'ef\'erence, sans cons\'equence (Ilya Prigogine et Isabelle Stengers, {\em Entre le temps et l'\'eternit\'e}\/, pp.~162--163)."\footnote{``Chaos is defined not so much by its disorder as by the infinite speed with which every form being sketched in it is dissipated. It is a vacuum which is not a nothingness, but a {\em virtual}\/, containing all possible particles and extracting all possible forms, which appear and disappear immediately, without consistency, nor reference, nor consequence."} (\cite{DG}) Of course, Prigogine and Stengers are not responsible for {\it these} confusions (in that reference, they discuss the origin of the universe). But this illustrates the difficulties and the dangers of the popularization of science. Besides, Guattari wrote a whole book on ``Chaosmose" (\cite{Gu}), which is full of references to non-existent concepts such as ``nonlinear irreversibility thresholds" and ``fractal machines"\footnote{I recommend Guattari's contribution to \cite{Cer} to people interested in {\it tensors} (applied to psychology, sociology, etc\dots).}. \section{Entropies} \begin{flushleft} {\footnotesize Holy Entropy! It's boiling!\\ Mr Tompkins (G. Gamow) (\cite{Ga}, p.111).} \end{flushleft} There is some kind of mystique about entropy. According to \cite{Den}, \cite{Tr}, von Neumann suggested that Shannon use the word ``entropy", adding that ``it will give you a great edge in debates because nobody really knows what entropy is anyway". But there is a very simple way to understand the notion of entropy. Just take any set of macroscopic variables (at a given time) and consider the volume of the subset of phase space (of the microscopic variables) on which these macroscopic variables take a given value. The {\it Boltzmann entropy} (defined as a function of the values taken by the macroscopic variables) equals the logarithm of that volume. Defined this way, it looks quite arbitrary. We may define as many entropies as we can find sets of macroscopic variables. Furthermore, since the micro/macro distinction is not sharp, we can always take finer-grained entropies, until we reach the microscopic variables (the positions and the momenta of the particles), in which case the entropy is constant and equals zero (giving a volume equal to one to a single microstate, which is rather a quantum-mechanical way to count). But one should make several remarks: \begin{enumerate} \item[1)] These entropies are not necessarily ``subjective". They are as objective as the corresponding macroscopic variables. Jaynes, following Wigner, calls these entropies ``anthropomorphic" (\cite{Ja}, p.85). A better word might be ``contextual", i.e. they depend on the physical situation and on its level of description. \item[2)] The ``usual" entropy of Clausius, the one which is most useful in practice, corresponds to a particular choice of macroscopic variables (e.g. energy and number of particles per unit volume for a monoatomic gas without external forces). The derivative with respect to the energy of {\em that} entropy, restricted to equilibrium values, defines the inverse temperature.
One should not confuse the ``flexible" notion of entropy introduced above with the more specific one used in thermodynamics\footnote{This is again the source of much confusion, just like the ``subjectivity of irreversibility" discussed in Section 3.5, see for example Popper \cite{Pop4}, p.111, Denbigh \cite{Den}, S.K. Ma \cite{Ma1}. In the popular book ``The Quark and the Jaguar", we read: ``Entropy and information are closely related. In fact, entropy can be regarded as a measure of ignorance."(\cite{Ge}, p.219) And further: ``Indeed, it is mathematically correct that the entropy of a system described in perfect detail would not increase; it would remain constant."(\cite{Ge}, p.225). This is correct, if properly understood, but it might be useful to emphasize that one does not refer to the ``usual" thermodynamic entropy.}. \item[3)] The Second Law seems now a bit difficult to state precisely. ``Entropy increases"; yes, but which one? One can take several attitudes. The most conservative one is to restrict oneself to the evolution of a given isolated system between two equilibrium states, and then the increasing entropy is the one discussed in point (2) above. The Second Law is then a rather immediate consequence of the irreversible evolution of the macroscopic variables: the microscopic motion will go from small regions of phase space to larger ones (in the sense of the partitions discussed in Section 4.2). The gas in the box goes from an equilibrium state in the left half of the box to another equilibrium state in the whole box. There are many more microscopic configurations corresponding to a uniform density than there are configurations corresponding to the gas being entirely in one half of the box. But this version of the Second Law is rather restrictive, since most natural phenomena to which we apply ``Second Law" arguments are not in equilibrium. When used properly in non-equilibrium situations, reasoning based on the Second Law gives an extremely reliable way to predict how a system will evolve. We simply assume that a system will never go spontaneously towards a very small subset of its phase space (as defined by the macroscopic variables). Hence, if we observe such an evolution, we expect that some hidden external influence is forcing the system to do so, and we try to discover it (see also Jaynes \cite{Ja4} for a nice discussion of apparent violations of the Second Law)\footnote{See e.g. the constraints on the plausible mechanisms for the origin of life, due to the Second Law, discussed by Elitzur (\cite{El}, Sect.11). There is some similarity between this use of the Second Law and the way biologists use the law of natural selection. The biologists do not believe that complex organs appear ``spontaneously". Hence, when they occur, they look for an adaptive explanation (see \cite{Da1} for an introduction to the theory of evolution). Both attitudes are of course similar to elementary probabilistic reasoning: if we throw a coin a million times and find a significant deviation from one-half heads one-half tails, we shall conclude that the coin is biased (rather than assuming that we observe a miracle). }. \item[4)] In most non-equilibrium situations, most of these entropies are very hard to compute or even to estimate. However, Boltzmann was able to find an approximate expression for his entropy (minus his $H$ function), valid for dilute gases (e.g.
for the gas in the box initially divided in two of Section 3) and to write down an equation for the evolution of that approximate entropy. A lot of confusion is due to the identification of the ``general" Boltzmann entropy defined above with the approximation to it given by (minus) the $H$-function (as emphasized by Lebowitz in \cite{Le1}). Another frequent confusion about Boltzmann's equation is to mix two conceptually different ingredients entering into its derivation\footnote{As for example in: ``This so-called hypothesis of ``molecular chaos" admits the absence of correlations between the velocities of the molecules in the initial state of the gas, although, obviously, correlations exist between the molecules after the collisions. The hypothesis of molecular chaos amounts to introducing, in a subtle way, the irreversibility that one tries to demonstrate." (Lestienne \cite{Les}, p.172) Boltzmann never said that he would demonstrate irreversibility without assuming something about initial conditions. Another, more radical, confusion is due to Bergmann: ``It is quite obvious that the Boltzmann equation, far from being a consequence of the laws of classical mechanics, is inconsistent with them." (in \cite{Go}, p. 191)}: one is an assumption about {\it initial conditions} and the other is a particular approximation (i.e. one considers the Boltzmann-Grad limit, see Spohn \cite{Sp}, in which the equation becomes exact; in the Kac model in Appendix 1, this limit reduces simply to letting $n$ go to infinity for fixed $t$). To account for irreversible behaviour, one has always, as we saw, to assume something about initial conditions, and the justification of that assumption is statistical. But that part does not require, in principle, any approximation. To write down a concrete (and reasonably simple) equation, as Boltzmann did, one uses this approximation. Failure to distinguish these two steps leads one to believe that there is some deep problem with irreversibility outside the range of validity of that approximation\footnote{To make a {\it vague} analogy, in equilibrium statistical mechanics, one has the concept of phase transition. Mean field theory (or the van der Waals theory, Curie-Weiss or molecular field approximation) gives an approximate description of the phase transition. But the concept of phase transition is much wider than the range of validity of that approximation.}. \item[5)] Liouville's theorem\footnote{This theorem says that, if $A$ is a subset of the phase space $\Omega$, then $Vol(T^t(A))=Vol(A)$, where $Vol(A)=\int_A d{\bf x}$. } is sometimes invoked against such ideas. For instance, we read in Prigogine and Stengers (\cite{PS2}, p.104): ``All attempts to construct an entropy function, describing the evolution of a set of trajectories in phase space, came up against Liouville's theorem, since the evolution of such a set cannot be described by a function that increases with time"\footnote{See also \cite{Cer}, p.160: ``According to the mechanical view of the world, the entropy of the universe is today identical to what it was at the origin of time." Or as Coveney says (\cite{Co}, p.411): ``As long as the dynamical evolution is unitary, irreversibility cannot arise. This is the fundamental problem of non-equilibrium statistical mechanics."} (see \cite{DH}, p.8 for a similar statement). What is the solution of that ``paradox"?
Here I consider {\it a single system} evolving in time and associate with it a certain set of macroscopic variables, to which in turn an entropy is attached. But, since the values of the macroscopic variables change with time, the corresponding set of microstates changes too. For the gas in the box, the initial set of microstates consists of all those where the particles are in the left half, while the final set consists of the microstates giving rise to a uniform density. In other words, I ``embed" my microscopic state into different sets of microscopic states as time changes, and the evolution of that set should not be confused with the evolution of a set of {\em trajectories}, whose volume is indeed forced to remain constant (by Liouville's theorem)\footnote{Let me use the notations of note \protect\ref{note_good}. By Liouville's Theorem, indeed $Vol(T^t(\Omega_0))=Vol(\Omega_0)$. But $T^t(\Omega_0)$ is a very small subset of $\Omega_t$. Confusing the two sets leads to the (wrong) idea that $Vol(T^t(\Omega_0))=Vol(\Omega_t)$. The evolution of $\Omega_t$ does not coincide with a set of trajectories.}. \item[6)] A related source of confusion comes from the fact that Gibbs' entropy, $-\int \rho \log\rho d{\bf x}$, which is sometimes viewed as more ``fundamental" (because it is expressed via a distribution function $\rho$ on phase space), is indeed constant in time (by Liouville's theorem again). But why should one use this Gibbs entropy out of equilibrium? In equilibrium, it agrees with the Boltzmann and Clausius entropies (up to terms that are negligible when the number of particles is large) and everything is fine\footnote{Note that these entropies agree with (minus) Boltzmann's $H$ function only when the interparticle forces are negligible (as in a very dilute gas). This is rather obvious since the $H$ function is an approximation to the Boltzmann entropy, see Jaynes (\cite{Ja}, p.81).}. When we compare two different equilibrium states, all these entropies change, and the direction of change agrees with the Second Law\footnote{Amusingly enough, this conclusion can be reached using only Liouville's theorem (see Jaynes \cite{Ja} p.83), which is blamed as the source of all the troubles!}. The reason is that the values taken by the macroscopic variables are different for different equilibrium states. Actually, trying to ``force" the Gibbs entropy to increase by various coarse-graining techniques then gives the impression that irreversibility is only due to this coarse-graining and is therefore arbitrary or subjective (see e.g. Coveney (\cite{Co}, p.412): ``Irreversibility is admitted into the description by asserting that we only observe a coarse-grained probability;"). \item[7)] Finally, why should one worry so much about entropy for non-equilibrium states? A distinction has to be made between two aspects of irreversibility: one is that macroscopic variables tend to obey irreversible laws, and the other is that, when an isolated system can go from one equilibrium state to another, the corresponding thermodynamic entropies are related by an inequality. Both aspects are connected, of course, and they can both be explained by similar ideas. But this does not mean that, in order to account for the irreversible behaviour of macroscopic variables, we have to introduce an entropy function that evolves monotonically in time. It may be useful or interesting to do so, but it is not required to account for irreversibility.
All we really {\it need} is to define suitably the entropy for equilibrium states, and that was done a long time ago. \item[8)] Jaynes rightly says that he does not know what the entropy of a cat is (\cite{Ja} p.86). The same thing could be said of a painting, an eye or a brain. The problem is that there is no well-defined set of macroscopic variables that is specified by the expression ``a cat". \end{enumerate} \section{Order out of Chaos? } \begin{flushleft} {\footnotesize In my view all salvation for philosophy may be expected\\ to come from Darwin's theory. As long as people believe in a special spirit\\ that can cognize objects without mechanical means, or in a special will\\ that likewise is apt to will that which is beneficial to us, the simplest\\ psychological phenomena defy explanation.\\ L. Boltzmann} (\cite{Bo}, p.193) \end{flushleft} In this section, I will discuss the ``constructive role" of irreversible processes\footnote{Note that the word ``chaos" in the title is used in a somewhat ambiguous way: sometimes it has the technical meaning of Section 2, sometimes it means ``disordered" or ``random".}. But I also want to discuss the impact of scientific discoveries on the cultural environment. At least since the Enlightenment and the Encyclopaedia, scientists have communicated their discoveries to society, and, through popular books and the educational system, have profoundly influenced the rest of culture. But one has to be very careful. In his recent book on Darwin, the philosopher D. Dennett makes a list of popular misconceptions about the theory of evolution (\cite{De}, p.392). One of them is that one no longer needs the theory of natural selection, since we have chaos theory! He does not indicate the precise source of this strange idea, but this illustrates how easily people can be confused by loose talk, analogies and metaphors. I think that one should clearly reaffirm certain principles: first of all, no macroscopic system has ever jumped out of equilibrium spontaneously. Moreover, isolated macroscopic systems always evolve towards equilibrium. These are general qualitative statements that one can make about macroscopic mechanical systems. No violations of them have ever been found. Of course, nobody explicitly denies those principles, but I am nevertheless afraid that many people are confused about this point\footnote{Here are some examples; in Cohen and Stewart, one reads: ``The tendency for systems to segregate into subsystems is just as common as the tendency for different systems to get mixed together." (\cite{CS}, p. 259). Or in Meessen: ``by watching some phenomena, one is led to say that time has an arrow pointing towards a greater disorder, but by considering other phenomena, it seems that time has an arrow pointing, on the contrary, towards a greater order. Then, what does this arrow mean? If we can orient it in opposite directions, it is better not to talk about it any more." (\cite{Me}, p.119). }. Of course, it has always been known that very complicated and interesting phenomena occur out of equilibrium, human beings for example. But this raises two completely different problems. One is to explain those phenomena on the basis of the microscopic laws and of suitable assumptions on initial conditions. Much progress in this direction has been made, but we are far from understanding everything, and, of course, to account for the existence of human beings, Darwin's theory {\it is} needed.
The other question, a much easier one, is to understand why there is no {\it contradiction} between the general tendency towards equilibrium and the appearance of self-organization, of complex structures or of living beings. {\it That} is not difficult to explain qualitatively; see Section 3.3 and Penrose \cite{Pe2}. Going back to Popper (again), he wanted to solve the alleged contradiction between life and the Second Law (see note \protect\ref{note_life}) by turning to Prigogine \cite{P1} and saying that ``{\it open systems in a state far from equilibrium} show no tendency towards increasing disorder, even though they produce entropy. But they can export this entropy into their environment, and can increase rather than decrease their internal order. They can develop structural properties, and thereby do the very opposite of turning into an equilibrium state in which nothing exciting can happen any longer." (\cite{Pop5}, p. 173). This is correct, provided that part of the environment {\it is more ordered than the system}, where ``order" is taken in a technical sense: the system plus its environment (considered as approximately isolated) is in a state of low entropy, or is in a small subset of its {\it total} phase space and moves towards a larger subset of that space\footnote{One should always distinguish this precise but technical sense of order from our intuitive idea of order. In particular, when one identifies the increase of entropy with an increase of ``disorder", one should realize that this is correct only if it is a tautology (i.e. if ``disorder" is defined through the more precise notion of entropy). Otherwise, it can be misleading. A similar problem occurs with intuitive words like ``complexity" (or ``information" in the past). If we speak of the ``complexity of the brain", it has of course evolved from a less ``complex" structure, except that those words do not have a precise meaning. Note that precise definitions of complexity, like algorithmic complexity, do not at all capture the intuitive meaning of the word, since ``random" sequences are algorithmically complex, and, whatever ``complexity of the brain" means, and whatever ``random" means, they do not mean the same thing; see Gell-Mann \cite{Ge}, Chap. 3 for a good discussion of this issue. } (where the subsets are elements of a partition like the one discussed in Section 4.2). But it is misleading to suggest that order is created out of nothing, by discharging ``entropy" into an unspecified environment\footnote{Besides, as we saw in Section 5, entropy is not really a ``substance" to be ``exported". This is a somewhat strange terminology for a philosopher like Popper, so critical of ``essentialism".}. It is not enough to be an ``open system"; the environment must be in a state of low entropy. While it is correct to say that the Second Law ``applies only to isolated systems", it should not be forgotten that most systems can be considered, at least approximately, as subsystems of isolated ones, and that, therefore, the Second Law does imply some constraints even for open systems. Here are some examples which {\it may} create this confusion\footnote{To avoid misunderstandings, let me stress that my main criticism here is that one may, through ambiguous statements, unwittingly mislead the non-specialized reader.}: In (\cite{Cer}, p.157) Prigogine wants to give an example of one of the ``many phenomena" that cannot be understood through the ``general interpretation of the growth of entropy" due to Boltzmann.
He considers a system of particles (on a line), which start in a disordered configuration. Then ``the strong interactions between those particles" will push them to form an ordered crystal. It looks like a ``passage from a disordered situation to an ordered one". But is this an isolated system? This is not clear if one considers the pictures. The final configuration looks like a perfect crystal. But if there are interactions between the particles favoring an ordered crystal, the disordered initial configuration must have been one of high potential energy, hence the ``ordered" configuration will have a high kinetic energy, and oscillations will occur. Of course, if the total initial energy is sufficiently small, the oscillations will be small and the equilibrium state will be crystalline. But that is not incompatible with the ``general interpretation of the growth of entropy". Equilibrium states maximize entropy, for a given energy, but may be crystalline (at least for higher dimensional lattices). This is one example where maximum entropy is not necessarily the same as maximal disorder (in the intuitive sense of the word). On the other hand, if dissipation takes place, the ``passage from a disordered situation to an ordered one" is possible, even starting from a configuration of high potential energy. But this means that some environment absorbs the energy of the system, in the form of heat, hence it increases {\it its} entropy. And the environment must have been more ``ordered" to start with. Again, this is in agreement with the ``general interpretation of the growth of entropy". To give another example, Prigogine and Stengers emphasize in (\cite{PS1}, p.427) that, for the B\'enard instability\footnote{A fluid is maintained between two horizontal plates, the lower one being hotter than the upper one. If the temperature difference is large enough, rolls will appear, see e.g. \cite{PS1,PS2} for a discussion of the B\'enard instability.} to occur {\it one must provide more heat to the system}. As noticed by Meessen (\cite{Me}, p.118), ``It is remarkable that the creation of a structure is initiated by a source of heat, which is usually a source of disorder". This quotation shows clearly what is confusing: heating suggests an increase of disorder, while the result is the appearance of a self-organized structure. But what is needed, of course, is a temperature {\it difference} between the two plates. So, if one heats up from below, one must have some cooling from above. The cooling acts like a refrigerator, so it requires some ``ordered" source of energy. The more one heats, the more efficient the cooling must be. These are fairly trivial remarks, but ones which, I believe, have to be made, at least for the general public, if one wants to avoid giving the impression that processes violating the Second Law can occur: the emergence of complex structures, of whatever one sees, is perfectly compatible with the universal validity of the ``convergence to equilibrium", provided one remembers that our universe started in (and still is in) a low entropy state\footnote{Somewhat to my surprise, I found the following {\it theological} commentary on the ``constructive role" of irreversible phenomena: ``Each time that a new order of things appears, it is marked by the dissipation of a chaotic behaviour, by a broken form of movement, that is, by a ``fractal", by non-linearity, by variations or ``fluctuations", by instability and randomness.
In this way the dynamics of self-organization of matter, which reaches the great complexity of consciousness, manifests itself" (Ganoczy, \cite{Gan}, p.79). This is a bit of an extrapolation, starting from the B\'enard cells. The quotation appears in a section of a chapter on ``God in the language of the physicists", where the author refers mostly to Prigogine and Stengers, \cite{PS1,P1}. }. Besides, one should be careful with the issue of determinism, at the level of macroscopic laws, for example when bifurcations occur. In many places, Prigogine and Stengers seem to attach a deep meaning to the notion of {\it event}: ``By definition, an event cannot be deduced from a deterministic law: it implies, one way or another, that what happened ``could" not have happened." (\cite{PS2}, p.46)\footnote{And even more misleading: ``In a deterministic world, irreversibility would be meaningless, since the world of tomorrow would already be contained in the world of today, there would be no need to speak of time's arrow." (\cite{Cer}, p.166) } Let us consider Buridan's ass. One can describe it as being ``in between" two packs of food. It could choose either. But that is a macroscopic description. Maybe one of the eyes of the ass is tilted in one direction, or some of its neurons are in a certain state favoring one direction. This is an example where the macroscopic description does not lead to an autonomous macroscopic law. At the macroscopic level, things are indeterminate, and the scheme of Section 3 does not apply: the microscopic configurations may fall into different classes, corresponding to different future evolutions for the macroscopic variables, and no single class constitutes an overwhelming majority. Thus, when we repeat the experiment (meaning that we control the same {\it macroscopic} variables), different outcomes will occur, because different experiments will correspond to microscopic variables that belong to different classes. The same thing may happen in a variety of phenomena, e.g. which way a roll in a B\'enard cell will turn. But that (true) remark has nothing to do with the issue of determinism, which is meaningful only at the microscopic level: in a perfectly deterministic universe (at that level) there will always be lots of situations where no simple autonomous macroscopic laws can be found, hence we shall have the illusion of ``indeterminism" if we consider only the macroscopic level\footnote{It is also a bit too fast to say, as Prigogine and Stengers do, that this kind of mechanism allows us to go beyond the ``very old conflict between reductionists and antireductionists." (\cite{PS1}, p.234, quoted in \cite{Bou}, p.274) Any reductionist is perfectly happy to admit that some situations do not have simple, deterministic, macroscopic descriptions, while I doubt that antireductionists such as Popper and Bergson would be satisfied with such a simple admission.}. One should avoid (once more) the Mind Projection Fallacy. The macroscopic description may be all that is accessible to us, hence the future becomes unpredictable, but, again, it does not mean that Nature is indeterminate\footnote{Here is another theological commentary: ``Irreversibility means that things happen in time {\it and thanks to time}, that it could be that they did not happen, or did happen otherwise and that an infinite number of possibilities are always open."
``Inventive" disorder is part of the definition of things\dots Unpredictability, which is not due to our inability to control the nature of things, but to their nature itself, whose future simply does not yet exist, and could not yet be forecasted, even by ``Maxwell's demon" put on Sirius." (Gesch\'e, \cite{Ges}, p. 121) The author claims to find his inspiration in the ``new scientific understanding of the Cosmos" from, among others, ``La nouvelle alliance" (\cite{Ges}, p.120).}. I will conclude with some remarks on Boltzmann and Darwin, which may also clarify the relation between ``subjective" evaluations of probabilities and what we call an ``explanation". As we saw, Boltzmann had a great admiration for Darwin. While preparing this article, I read in ``La Recherche" that ``the couple random mutations-selection has some descriptive value, but not at all an explanatory one" (\cite{Schu}). That attitude is rather common (outside biology), but it goes a bit too far. Actually, there is an analogy between the kind of explanation given by Darwin and the one given by Boltzmann, and they are both sometimes similarly misunderstood\footnote{In a critique of several ``almost mystical views of life", which deny ``an evolutionary role to Darwinian selection", the biologist Elitzur observes that ``such a misleading discussion of evolution is based on a complete distortion of thermodynamics" (\cite{El}, p.450). Besides, I disagree, needless to say, with the comment of Prigogine and Stengers (\cite{PS2}, p.23-24), that there is an ``antithesis" between Boltzmann and Darwin and that the theories of Darwin were a success while those of Boltzmann failed.} (of course, Darwin's discovery, although less quantitative than statistical mechanics, had a much deeper impact on our culture). What does it mean to explain some fact, like evolution or irreversibility? As we saw, we claim to understand some macroscopically observed behaviour when, given some macroscopic constraint on a system, the overwhelming majority of the microscopic configurations compatible with those constraints (and evolving according to the microscopic laws) drive the macroscopic variables in agreement with that observed behaviour. Turning to Darwin, his problem was to explain the diversity of species and, more importantly, the {\it complexity} of living beings, ``those organs of extreme perfection and complication", like eyes or brains, as Darwin called them\footnote{Here, I mean complexity in an intuitive sense. Of course, it is somewhat related to entropy, because, if we consider the set of molecules in an eye, say, there are very few ways to arrange them so as to produce an eye compared to the number of arrangements that cannot be used for vision. But as Jaynes says (see Remark 8 in Section 5), as long as we do not have a well-defined set of macroscopic variables that define precisely what an eye is, we cannot give a precise characterization of this ``complexity" in terms of entropy (and, probably, such an entropy would not be the right concept anyway). }. The fact is that we do not know, and we shall never know, every microscopic detail about the world, especially about the past (such as every single mutation, how every animal died, etc\dots). Besides, the initial conditions of the world could be just so that complex organs are put together in one stroke. To use a common image, it would be like ``hurling scrap metal around at random and happening to assemble an airliner" (Dawkins, \cite{Da1}, p.8).
This does not violate any known law of physics. But it would be similar to various ``exceptional" initial conditions that we encountered before (e.g. the particles going back to the left half of the box). And we would not consider an explanation valid if it appealed to such ``improbable" initial conditions. But to say that such a scenario is ``improbable" simply means that, given our (macroscopic) description of the world, there are very few microscopic configurations compatible with that description and giving rise to this scenario. And, indeed, if the world were four thousand years old, the existence of those complex organs would amount to a miracle. To understand the Darwinian explanation, one must take into account four elements, at the level of the macroscopic description: natural selection (very few animals have offspring), variation (small differences between parents and offspring occur, at least in the long run), heritability and time (the earth is much older than was once thought). Then, the claim is that the overwhelming majority of microscopic events (which mutations occur, which animals die without children) compatible with such a macroscopic description leads to the appearance of those ``organs of extreme perfection and complication"\footnote{Not being a biologist, I do not want to enter into any debate about the origin of life, the speed of evolution, or how far the Darwinian explanation goes. I only want to underline the similarity with the type of (probabilistic) explanation used in statistical physics. As I learned from V. Bauchau \cite{VB}, this analogy was made already in 1877 by C. S. Peirce: ``Mr. Darwin proposed to apply the statistical method to biology. The same thing has been done in a widely different branch of science, the theory of gases. Though unable to say what the movements of any particular molecule of gas would be on a certain hypothesis regarding the constitution of this class of bodies, Clausius and Maxwell were yet able, eight years before the publication of Darwin's immortal work, by the application of the doctrine of probabilities, to predict that in the long run such and such a proportion of the molecules would, under given circumstances, acquire such and such velocities; that there would take place, every second, such and such a relative number of collisions, etc.; and from these propositions were able to deduce certain properties of gases, especially in regard to their heat-relations. In like manner, Darwin, while unable to say what the operation of variation and natural selection in any individual case will be, demonstrates that in the long run they will, or would, adapt animals to their circumstances."\cite{Pei}}. Note that we do not need to assume that mutations are genuinely ``random". They may obey perfectly deterministic laws, and the randomness may reflect only our ignorance of the details. A final point which is common to Boltzmann and to Darwin (and his successors) is that they have provided ``brilliant confirmations of the mechanical view of Nature"\footnote{Speaking about DNA as the solution to the enigma of ``life", the biologist Dawkins writes: ``Even those philosophers who had been predisposed to a mechanistic view of life would not have dared hope for such a total fulfillment of their wildest dreams." (\cite{Da}, p.17) Not surprisingly, Popper said that molecular biology became ``almost an ideology" (\cite{Pop5}, p.172). As for Bergson, he must be turning over in his grave.}.
Many people simply cannot swallow mechanical and reductionist explanations. They need some vital spirit, some teleological principle or some other animist view. Their philosophies ``thrive upon the errors and confusions of the intellect". And this is probably why the theories of Boltzmann and of Darwin have been constantly attacked and misrepresented. Putting philosophical considerations aside, I believe that what we understand well, we understand in mechanical and reductionist terms. There is no such thing as a holist explanation in science. And thanks to people like Boltzmann and Darwin the ``mechanical view of Nature" is alive and well, and is here to stay. \section{Conclusion: What makes poets happy?} \begin{flushleft} {\footnotesize ``I do not think we should embrace scientific theories because they are more hopeful, or more exhilarating \dots I feel sensitive on this matter because, as an evolutionary biologist, I know that people who adopt theories because they are hopeful finish up embracing Lamarckism, which is false, although perhaps not obviously so, or Creationism, which explains nothing, and suggests no questions at all. If non-equilibrium thermodynamics makes poets happier, so be it. But we must accept or reject it on other grounds." (\cite{May}, p.257, in a review of \cite{PS3}).} \end{flushleft} \vspace{3mm} This paper has been written mainly for scientists. However, many references to Prigogine are found in the literature of the human sciences and philosophy. But why should anybody in those fields worry about what happens in physics or chemistry? In his most recent book \cite{P4}, Prigogine starts by contrasting the objective scientific view of the world with the subjective view (our feeling of time or of ``free will") which some philosophers take as their starting point. His goal is to reconcile both approaches through his new understanding of physics. Of course, it would be nice if one could fulfill that goal. But there are again basic confusions\footnote{See also Maes \cite{Mae} for a discussion of these problems.}. Take the issue of free will. It is true that if the fundamental laws of physics are deterministic, and if one rejects dualism, then free will is, in some sense, an illusion. But it is not clear that an element of ``intrinsic randomness" in the fundamental physical laws would make it less of an illusion. The only thing which is clear is that our inability to predict the future is not very relevant for this discussion. So the fact that I am unable to predict which way a B\'enard cell will rotate is not going to make me feel ``free". Ignorance does not explain anything. And there is no precise sense in which a ``narrow path" has been found between ``blind laws" and ``arbitrary events" (\cite{P4}, p.224). Another confusion concerns the relationship between the natural and the social sciences. In our discussion of the macroscopic level versus the microscopic one, we should locate the problems that psychology or the social sciences deal with at a very macroscopic level. Humans or societies are so many scales above molecules that modifications in the basic physical laws are (probably) almost irrelevant for the understanding of human actions\footnote{In saying this, I do not want to contradict a strongly reductionist viewpoint. But higher level laws, even though they are, in principle, reducible to the lower level ones, are not necessarily modified if the latter change. For example, Navier-Stokes equations were not much affected by the advent of quantum mechanics.
Besides, so little is known scientifically about human actions that establishing a link between what is known there and molecules is not an urgent problem, to put it mildly. On the other hand, the word ``almost" is important here. Our knowledge of physics does rule out many irrational beliefs at the human level (see e.g. Weinberg, \cite{We}, p.49 for further discussion of this point).}. The main problem of the social sciences is to exist as sciences, i.e. to discover theories that are well tested and that explain some non-trivial aspect of human affairs. The only thing that people working in those fields might learn from the natural sciences is a general scientific attitude, what one might call the epistemology of the Enlightenment: a critical mind, not to rely on authorities, to compare theory with experiment, etc\dots. But there is no need to ape what happens in the exact sciences. So, even if there really were a shift of paradigm (whatever that means) from Newtonianism to Prigoginianism in physics, that would be no reason at all for the social scientists to rush towards theories where randomness is important\footnote{For an extreme example of this confusion, see \cite{Bec} where quantum theory is ``applied" to politics.}. And, of course, probabilistic models may be relevant in the social sciences, even if the fundamental laws are deterministic. The final confusion concerns the ``end of certainties" \cite{P4} or the ``disillusion with science" \cite{Bod}. The plain fact is that we know much more about the world than we did three centuries ago, or fifty, or twenty years ago. Even the discovery that one cannot predict the weather (for more than a few weeks) means that our understanding of the laws governing the weather has improved. The general feeling that there is a ``crisis in science" in turn fuels various anti-scientific attitudes that combine an extreme skepticism towards science with an equally unreasonable openness towards pseudo-sciences and superstitions\footnote{See, for example, the discussion of parapsychology by Stengers in \cite{St}, p.105. She claims that Rhine, the founder of parapsychology, has ``devoted all his efforts to invent increasingly rigorous experimental protocols, but meets ``non"-interlocutors, ready to admit any hypothesis provided it implies that there are no facts". For a scientific discussion of parapsychology, see e.g. Broch \cite{Bro}. }. In intellectual circles, this attitude is found in cultural and philosophical relativism or in some parts of the ``sociology of science"\footnote{I should emphasize that Prigogine himself does not have an explicit anti-scientific attitude. But, as Gross and Levitt point out in their analysis of superstitions in academic circles, ``his name keeps coming up in postmodern discourses with depressing frequency" (\cite{GL}, p. 96). That is exactly what one would expect when a famous scientist tells the general educated public, a large part of which believes in New Age, in alternative medicines, or in some such nonsense, that one must ``rethink the notion of law of nature". }. Of course, science is in a perpetual ``crisis", because it is not a dogma, and is subject to revision. But what is not revisable is what I called the epistemology of the Enlightenment, and I have more than a suspicion that this epistemology is really what is being attacked by people who insist that there is a deep ``crisis in science".
It is interesting to note (but another article would be needed to develop that point) that skepticism with respect to science is based on two very different lines of thought: the first one is based on traditional philosophical arguments going back to Berkeley, Hume or Kant. While some of these arguments are clever and interesting, the progress of science is such that these a priori skeptical arguments leave many people cold. Another, conceptually different, line of thought is to try to show that science itself has reached some kind of limit, or ``has to admit" that one cannot go further. Quantum mechanics, Chaos, the Big Bang or G\"odel's theorem are usually cited as evidence for those claims. But this is basically pure confusion and misunderstanding, as I tried to show in this paper, at least for one of those examples. When all is said and done, science and reason are all we have. Outside of them, there is no hope. \setcounter{equation}{0} \vspace{5mm} {\bf \huge APPENDIX 1. The Kac ring model.} \vspace{5mm} Let me analyse a simple model, due to Mark Kac (\cite{Ka}, p.99; see also Thompson, \cite{Tho}, p.23), which nicely illustrates Boltzmann's solution to the problem of irreversibility, and shows how to avoid various misunderstandings and paradoxes. I shall describe a slightly modified version of the model and state the relevant results, referring to \cite{Ka} for the proofs (the quotations below come from \cite{Ka}). ``On a circle we consider $n$ equidistant points"; $m$ of the intervals between the points are marked and form a set called $S$. The complementary set (of $n-m$ intervals) will be called $\bar S$. ``Each of the $n$ points is a site of a ball which can be either white $(w)$ or black $(b)$. During an elementary time interval each ball moves counterclockwise to the nearest site with the following proviso". If the ball crosses an interval in $S$, it changes color upon completing the move, but if it crosses an interval in $\bar S$, it performs the move without changing color. ``Suppose that we start with all white balls; the question is what happens after a large number of moves". Below (after Eq. 3), we shall also consider other initial conditions. Let us emphasize the analogy with mechanical laws. The balls are described by their positions and their (discrete) ``velocity", namely their color. One of the simplifying features of the model is that the ``velocity" does not affect the motion. The only reason I call it a ``velocity" is that it changes when the ball collides with a fixed ``scatterer", i.e. an interval in $S$. Scattering with fixed objects tends to be easier to analyse than collisions between particles. The ``equations of motion" are given by the counterclockwise motion, plus the changing of colors (see Eqs. (5,6) below). These equations are obviously deterministic and reversible: if, after a time $t$, we change the orientation of the motion from counterclockwise to clockwise, we return after $t$ steps to the original state\footnote{There is a small abuse here, because I seem to change the laws of motion by changing the orientation. But I can attach another discrete ``velocity" parameter to the particles, having the same value for all of them, and indicating the orientation, clockwise or counterclockwise, of their motion. Then, the motion is truly reversible, and the operation $I$ of note \protect\ref{note_involution} simply changes that velocity parameter.}.
Moreover, the motion is strictly periodic: after $2n$ steps each interval has been crossed twice by each ball, hence they all come back to their original color. This is analogous to the Poincar\'e cycles, with the provision that, here, the length of the cycle is the same for all configurations (there is no reason for this feature to hold in general mechanical systems). Moreover, it is easy to find special configurations which obviously do not tend to equilibrium: start with all white balls and let every other interval belong to $S$ (with $m=\frac{n}{2}$). Then, after two steps, all balls are black, after four steps they are all white again, etc... The motion is periodic with period 4. Turning to the solution, one can start by analysing the approach to equilibrium in this model \`a la Boltzmann: {\em Analog of the Classical Solution of Boltzmann.} Let $N_w(t)$ ($N_b(t)$) denote the total number of white (black) balls at time $t$ (i.e., after $t$ moves; $t$ being an integer) and $N_w(S;t)$ ($N_b(S;t)$) the number of white (black) balls which are going to cross an interval in $S$ at time $t$. ``We have the immediate conservation relations: \begin{eqnarray} N_w(t+1) &=& N_w(t) - N_w(S;t) + N_b (S;t) \nonumber \\ N_b (t+1) &=& N_b(t) - N_b (S;t) + N_w (S;t) \label{A1} \end{eqnarray} Now to follow Boltzmann, we introduce the assumption (``Stosszahlansatz" or ``hypothesis of molecular chaos"\footnote{The word ``chaos" here has nothing to do with ``chaos theory", and, of course, Boltzmann's hypothesis is much older than that theory.}) \begin{eqnarray} N_w(S;t) &=& mn^{-1} N_w (t) \nonumber \\ N_b (S;t) &=& mn^{-1} N_b (t)" \label{A2} \end{eqnarray} Of course, if we want to solve (1) in a simple way, we have to make some assumption about $N_w(S;t)$ and $N_b (S;t)$. Otherwise, one has to write equations for $N_w(S;t)$ and $N_b (S;t)$ that will involve new variables and lead to a potentially infinite regress. The intuitive justification for this assumption is that each ball is ``uncorrelated" with the event ``the interval ahead of the ball belongs to $S$", so we write $N_w (S;t)$ as equal to $N_w(t)$, the total number of white balls, times the density $\frac{m}{n}$ of intervals in $S$. This assumption looks completely reasonable. However, upon reflection, it may lead to some puzzlement (just as the hypothesis of ``molecular chaos" does): what exactly does ``uncorrelated" mean? Why do we introduce a statistical assumption in a mechanical model? Fortunately here, these questions can be answered precisely and we shall answer them later by solving the model exactly. But let us return to the Boltzmannian story. ``One obtains $$ N_w(t+1) - N_b (t+1) = (1-2mn^{-1})(N_w(t)-N_b(t)) $$ Thus \begin{eqnarray} n^{-1} [N_w (t) - N_b (t)] &=& (1-2mn^{-1})^t n^{-1}[N_w(0)-N_b(0)]\nonumber\\ &=& (1-2mn^{-1})^t \label{A3} \end{eqnarray} and hence if \begin{equation} 2m<n \label{A4} \end{equation} (as we shall assume in the sequel) we obtain a {\em monotonic} approach to equipartition of white and black balls." Note that we get a monotonic approach for {\em all} initial conditions ($N_w(0)-N_b(0)$) of the balls. The variables $N_w(t)$, $N_b(t)$ play the role of macroscopic variables. We can associate to them a Boltzmann entropy\footnote{The simplifying features of the model (the balls do not interact) have the unpleasant consequence that the ``full" Boltzmann entropy introduced here and defined in Section 5 actually coincides with (minus) the Boltzmann $H$-function.
But, in general, the latter should only be an approximation to the former.} $S_b = \ln \left( \begin{array}{c} n \\ N_w(t) \end{array} \right)$, i.e. the logarithm of the number of (microscopic) configurations whose number of white balls is $N_w(t)$. Since $$ \left( \begin{array}{c} n \\ N_w(t) \end{array} \right) =\frac{n!}{N_w(t)! (n-N_w(t))!} $$ reaches its maximum value for $N_w = \frac{n}{2} = N_b$, we see that (3) predicts a monotone increase of $S_b$ with time. We can also introduce a partition of the ``phase space" according to the different values of $N_w$, $N_b$. And what the above formula shows is that different elements of the partition have very different numbers of elements, the vast majority corresponding to ``equilibrium", i.e. to those near $N_w = \frac{n}{2} = N_b$. We can see here in what sense Boltzmann's solution is an approximation. The assumption (2) cannot hold for all times and for all configurations, because it would contradict the reversibility and the periodicity of the motion. However, we can also see why the fact that it is an approximation does not invalidate Boltzmann's ideas about irreversibility. Let us reexamine the model at the microscopic level, first mechanically and then statistically. For each $i=1,\cdots,n,$ we introduce the variable \begin{eqnarray*} \epsilon_i = \left\{ \begin{array}{c} +1 \; \mbox{if the interval in front of} \; i \in \bar S \\ -1 \; \mbox{if the interval in front of} \; i \in S \end{array} \right. \end{eqnarray*} and we let \begin{eqnarray*} \eta_i (t) = \left\{ \begin{array}{c} +1 \; \mbox{if the ball at site} \; i \; \mbox{at time} \; t \; \mbox{is white}\\ -1 \; \mbox{if the ball at site} \; i \; \mbox{at time} \; t \; \mbox{is black} \end{array} \right. \end{eqnarray*} Then, we get the ``equations of motion" \begin{equation} \eta_i (t) = \eta_{i-1} (t-1) \epsilon_{i-1} \label{A5} \end{equation} whose solution is \begin{equation} \eta_i (t) = \eta_{i-t} (0) \epsilon_{i-1} \epsilon_{i-2} \cdots \epsilon_{i-t} \label{A6} \end{equation} (where the subtractions are done modulo $n$). So we have an explicit solution of the equations of motion at the microscopic level. We can express the macroscopic variables in terms of that solution: \begin{equation} N_w (t) - N_b(t) = \sum_{i=1}^n \eta_i (t) = \sum^n_{i=1} \eta_{i-t} (0) \epsilon_{i-1} \epsilon_{i-2} \cdots \epsilon_{i-t} \label{A7} \end{equation} and we want to compute $n^{-1} (N_w (t) - N_b (t))$ for large $n$, for various choices of initial conditions $(\{ \eta_i (0)\})$ and various sets $S$ (determining the $\epsilon_i$'s). It is here that ``statistical" assumptions enter. Namely, we fix an arbitrary initial condition $(\{ \eta_i (0)\})$ and consider all possible sets $S$ with $m=\mu n$ fixed (one can of course think of the choice of $S$ as being part of the choice of initial conditions). Then, for each set $S$, one computes the ``curve" $n^{-1} (N_w (t) - N_b (t))$ as a function of time. The result of the computation, done in \cite{Ka}, is that, for any given $t$ and for $n$ large, the overwhelming majority of these curves will approach $(1-2\frac{m}{n})^t=(1-2\mu)^t$, i.e. what is predicted by (3). (To fix ideas, Kac suggests thinking of $n$ as being of order $10^{23}$ and $t$ of order $10^6$.) The fraction of all curves that will deviate significantly from $(1-2\mu)^t$, for fixed $t$, goes to zero as $n^{-\frac{1}{2}}$, when $n\to \infty$.
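This behaviour is easy to reproduce numerically. The following is a minimal sketch of the model (the code and its variable names are mine; drawing the set $S$ at random with density $\mu$ is one simple way of picking a ``typical" $S$); for such a typical draw, the measured curve $n^{-1}(N_w(t)-N_b(t))$ stays close to the prediction $(1-2\mu)^t$ of (3), up to fluctuations of order $n^{-\frac{1}{2}}$:

\begin{verbatim}
# A minimal simulation of the Kac ring (a sketch; the names are mine).
# n sites on a circle; eta_i = +1 (white) or -1 (black); eps_i = -1 if the
# interval in front of site i belongs to S, implementing Eq. (5):
#   eta_i(t) = eta_{i-1}(t-1) * eps_{i-1}
import numpy as np

rng = np.random.default_rng(0)
n, mu, tmax = 100_000, 0.05, 25
eps = np.where(rng.random(n) < mu, -1, +1)   # a "typical" set S of density mu
eta = np.ones(n, dtype=int)                  # all balls white at t = 0

delta = []                                   # n^{-1} (N_w(t) - N_b(t)), Eq. (7)
for t in range(tmax + 1):
    delta.append(eta.mean())
    eta = np.roll(eta * eps, 1)              # one counterclockwise step

for t in (0, 5, 15, 25):                     # compare with (1 - 2 mu)^t, Eq. (3)
    print(t, round(delta[t], 4), round((1 - 2 * mu) ** t, 4))
\end{verbatim}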
Of course when I say ``compute" I should rather say that one makes an estimate of the fraction of ``exceptional" curves deviating from $(1-2\mu)^t$ at a fixed $t$. This estimate is similar to the law of large numbers, and (7) is indeed of the form of a sum of (almost independent) variables. \vspace{3mm} {\bf Remarks} \begin{enumerate} \item[1.] The Poincar\'e recurrence and the reversibility ``paradoxes" are easily solved: each curve studied is periodic of period $2n$. So, if we did not fix $t$ while letting $n\to \infty$, we would not observe ``irreversible" behaviour. But this limit is physically correct. The recurrence time ($2n$) is enormous compared to any physically accessible time. As for the reversibility objection, let us consider as initial condition a reversed configuration after time $t$. Then we know that, for that configuration and {\it that set} $S$, $n^{-1} (N_w(t)-N_b(t))$ will not be close to $(1-2\mu)^t$ at time $t$ (since it will be back to its initial value $1$). But all we are saying is that, for the vast majority of $S$'s, this limiting behaviour will be seen. For the reversed configuration, the original set $S$ happens to be exceptional. The same remark holds for the configuration with period 4 mentioned in the beginning. Note also that, if we consider the set of configurations for which $n^{-1} (N_w(t)-N_b(t))$ is close to $(1-2\mu)^t$ for {\em all times}, then this set is empty, because of the periodicity. \item[2.] We could consider other macroscopic variables, such as the number of white and black balls in each half of the circle ($1 \leq i \leq \frac{n}{2}$ and $\frac{n}{2} + 1 \leq i \leq n$), and define the corresponding entropies. We could go on, with each quarter of the circle etc..., until we reach a microscopic configuration (the color of the ball at each site), in which case the entropy is trivially equal to zero (and therefore constant). \item[3.] This model, although perfectly ``irreversible", is not ergodic! Indeed, since it is periodic, no trajectory can ``visit" more than $2n$ microscopic configurations. But the ``phase space" contains $2^n$ configurations (two possibilities, black or white, at each site). So, only a very small fraction of the phase space is visited by a trajectory. This nicely illustrates the fact that ergodicity is not necessary for irreversibility. What is used here is only the fact that the vast majority of configurations give the macroscopic variables a value close to their equilibrium one. \end{enumerate} \vspace{3mm} {\bf Conclusion} \vspace{3mm} I do not want to overemphasize the interest of this model. It has many simplifying features (for example, there is no conservation of momentum; the scatterers here are ``fixed", as in the Lorentz gas). However, it has {\it all} the properties that have been invoked to show that mechanical systems cannot behave irreversibly, and therefore it is a perfect counterexample that allows us to refute all those arguments (and to understand exactly what is wrong with them): it is isolated (the balls plus the scatterers), deterministic, reversible, has Poincar\'e cycles and is not ergodic. This result, obtained in the Kac model, is exactly what one would like to show for general mechanical systems, in order to establish irreversibility. It is obvious why this is very hard. In general, one does not have an explicit solution (for an $n$-body system!) such as (5,6), in terms of which the macroscopic variables can be expressed, see (7).
It is also clear in this example what exactly the status of our ``ignorance" is. If we prepare the system many times and if the only variables that we can control are $n$ and $m$, then we indeed expect to see the irreversible behaviour obtained above, simply because this is what happens {\it deterministically} for the vast majority of microscopic initial conditions corresponding to the macroscopic variables that we are able to control. We may, if we wish, say that we ``ignore" the initial conditions, but there is nothing ``subjective" here. Finally, I shall refer to Kac \cite{Ka} for a more detailed discussion, in this model, of the status of various approximations used in statistical mechanics (e.g. the Master equation). \setcounter{equation}{0} \vspace{5mm} {\bf \huge APPENDIX 2. On Spectral Representations.} \vspace{5mm} I will briefly discuss the mathematical basis of the claim that ``trajectories are eliminated from the probabilistic description." The relevant mathematics are nicely summarized in the Appendix of \cite{P2} and I shall therefore refer to that Appendix. Let $T$ be an invertible transformation on a space $X$ and let $\mu$ be a measure invariant under $T$. In \cite{P2}, $X$ is the unit square, $T$ the baker's map and $\mu$ the Lebesgue measure. We can associate to $T$ a unitary operator $U$ in $L^2(X,\mu)$ (or an isometry in any $L^p (X,\mu)$): \begin{equation} Uf (x) = f(T^{-1} x) \end{equation} and $U^\dagger = U^{-1}$ is defined\footnote{I follow here the conventions of \cite{P2} for the definition of $U,U^\dagger$. In \cite{P2} non-invertible transformations such as the Bernoulli map are also considered. $U$ describes here the evolution of probability distributions and is called the Perron-Frobenius operator.} by \begin{equation} U^\dagger f(x) = f(Tx) \end{equation} Since the operators $U$ and $U^\dagger$ are {\em entirely defined} in terms of $T$, it seems bizarre, to put it mildly, to claim that {\em any} property of $U$, for example its spectral properties, is ``irreducible" to trajectories (i.e. to the action of $T$). ``Irreducible" is a semi-philosophical notion, so that one has some freedom in the way one uses this word, but I do not think that the meaning of the word in this context is close to what the general educated public has in mind\footnote{Compare with statements such as: ``The mind is irreducible to the body" or ``The behaviour of a crowd is irreducible to the psychology of individuals".}. Anyway, putting this issue aside, one should note that the operator $U$ can be defined more generally. For example, $U$ acts on distributions. In particular, writing $\delta_{x_0} (x) = \delta (x-x_0)$, we have, for a volume preserving map, like the baker's map, \begin{equation} U \delta_{x_0} (x) = \delta_{Tx_0} (x) \end{equation} and the action of $U$ on such delta functions just reflects the evolution of trajectories. Another observation is that it has been known for some time that properties of the dynamics (of the map $T$) are reflected in spectral properties of $U$: for example, $T$ is ergodic if and only if 1 is a non-degenerate eigenvalue of $U$. The new feature discussed in \cite{P2} is that for chaotic systems such as the baker's map one can write down a spectral representation of the form: \begin{equation} U = \sum_n | F_n (x,y) \rangle 2^{-n} \langle \widetilde{F_n} (x,y)| \end{equation} where $F_n$ (resp. $ \widetilde{F_n}$) is a product of a polynomial in $x$ (resp. in $y$) times a distribution in $y$ (resp. in $x$).
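Before discussing what this representation does and does not imply, it may help to see the objects concretely. Here is a minimal numerical sketch (entirely my own construction, with the baker's map written in its standard form): a single point is propagated by $T$ without any difficulty, while $U$, acting on a density represented by a cloud of sample points, spreads an initially concentrated distribution over the square:

\begin{verbatim}
# A sketch of the objects of Appendix 2 for the baker's map (my own setup).
# T is the baker's map on the unit square; U acts on densities by
# (U f)(x) = f(T^{-1} x), i.e. densities are pushed forward by T,
# while individual points are simply mapped by T itself.
import numpy as np

def T(x, y):
    # standard baker's map: stretch in x, compress and stack in y
    return (2 * x % 1.0, 0.5 * (y + np.floor(2 * x)))

# 1) A trajectory is perfectly well-defined: iterate a single point.
p = (0.1234, 0.5678)
for _ in range(5):
    p = T(*p)
print("point after 5 steps:", p)

# 2) U acting on a density concentrated on the small square [0,0.1]^2:
#    after a few steps the coarse-grained distribution is roughly uniform.
rng = np.random.default_rng(1)
pts = rng.random((100_000, 2)) * 0.1
for _ in range(10):
    pts = np.column_stack(T(pts[:, 0], pts[:, 1]))
hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=4,
                            range=[[0, 1], [0, 1]])
print(hist / len(pts))          # roughly 1/16 in each of the 16 cells
\end{verbatim}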
The existence of such a spectral representation is very interesting mathematically, but it is difficult to see why it implies radical consequences for ``the laws of nature". The argument given in \cite{P2} is that the representation (4) cannot be applied to delta functions in $x$ and $y$, which would represent a point, or a trajectory, because $\widetilde{F_n}$ involves a distribution in $x$ and one cannot multiply distributions. Let us see the force of this argument. Here is an analogy. Consider the operator $ \frac{d}{dx}$. In $L^2 ({\bf R}, dx)$ its spectrum is the imaginary axis and one can write \begin{equation} \frac{d}{dx} f(x) = \frac{1}{\sqrt{2\pi}} \int i k e^{ikx} \hat f (k) dk \end{equation} where $\hat f(k)$ is the Fourier transform of $f$. Now obviously, for this formula to hold, it is necessary that $\hat f (k)$ exist. But it is easy to find functions that are differentiable but whose Fourier transform does not exist. It would be strange to say that, for those functions, derivatives are eliminated from the spectral representation (5). And, of course, as we have seen in (3), the operator $U$ can perfectly well be defined on delta functions; it is simply that formula (4) does not apply in that case. It is also claimed that formula (4) includes the ``approach to equilibrium". But, as we discussed in Section 3, the notion of equilibrium does not make sense for a system with a single degree of freedom, so that this would be one more argument, if any more were needed, in favour of a dynamics expressed {\it fundamentally} in terms of trajectories. All this illustrates the remark made by J.T. Schwartz in his severe critique of the ``pernicious influence of mathematics on science": ``The intellectual attractiveness of a mathematical argument, as well as the considerable mental labor involved in following it, makes mathematics a powerful tool of intellectual prestidigitation - a glittering deception in which some are entrapped, and some, alas, entrappers." \cite{Sch}. \vspace{3mm} \noindent{\Large\bf Acknowledgments} \vspace{3mm} I have discussed many of the issues raised in this paper with colleagues and students, and particularly with S. Goldstein, A. Kupiainen, J. L. Lebowitz, C. Maes, J. Pestieau, O. Penrose, and H. Spohn. I thank I. Antoniou, B. Misra and I. Prigogine for discussions on a preliminary draft of this paper. I have also benefited from discussions with V. Baladi, V. Bauchau, S. Focant, M. Ghins, L. Haine, N. Hirtt, D. Lambert, R. Lefevere, I. Letawe, E. Lieb, J.-C. Limpach, T. Pardoen, P. Radelet, P. Ruelle and E. Speer. \vspace{3mm}
\section{Introduction} Sunspot groups are formed when magnetic flux tubes rise, likely from the tachocline between the convection and radiation zones \citep{1982A&A...113...99V}. Precisely how they form and the details of their ascent are invisible to observers. However, patterns of sunspot surface activity offer clues to the inner workings of the global solar magnetic field. Sunspots are observed to consist of pairs with opposite magnetic polarities. They are generally elongated in the east-west direction, and the leading polarities generally lean closer to the equator than the trailing polarities. \citet{1919ApJ....49..153H} noted that most leading spots have opposite polarities in opposite hemispheres, and also that the sense of this hemispheric polarity pattern switches from cycle to cycle. This is known as Hale's hemispheric polarity rule, or simply ``Hale's law''. Separately, A. H. Joy noticed, and \citet{1919ApJ....49..153H} reported, that sunspot axes increasingly tilt with latitude, a trend known as Joy's law. In addition to Hale's and Joy's laws, Sp\"{o}rer's law describes the steady decrease of sunspot latitude through the solar cycle, forming a ``butterfly diagram'' in a time vs.~latitude plot. Discovered by Richard Carrington around 1861, and refined by Gustav Sp\"{o}rer, the sunspot butterfly diagram is interpreted as the product of dynamo waves \citep{1955ApJ...122..293P} or of the stretching of the poloidal field by differential rotation \citep{1961ApJ...133..572B}. Hale's, Joy's and Sp\"{o}rer's laws indicate that the solar magnetic fields change globally in a cyclic pattern. A widely-held description of solar magnetic activity centers on the interplay between a global poloidal field and differential rotation \citep{1955ApJ...122..293P,1961ApJ...133..572B,2000JApA...21..373C}. When the global field is dominantly poloidal, the Sun is in an activity minimum and few or no sunspots are visible. As the global poloidal field is stretched into a toroidal field by differential rotation \citep{1998ApJ...505..390S}, the field lines are stretched in both the latitudinal and radial directions, ultimately giving rise to concentrated magnetic flux tubes. These flux tubes ascend due to magnetic buoyancy and, through the suppression of convection and of radiative output, appear as sunspots \citep{1955ApJ...121..491P}. At this stage, the surface of the Sun is occupied by more and more sunspots, first at high latitudes and gradually at mid and low latitudes, as solar activity enters its maximum phase. Near the equator, the leading polarities of sunspot pairs annihilate with their counterparts in the opposite hemisphere. At the same time, the trailing polarities disintegrate and are transported to high latitudes by poleward meridional flows \citep{2010Sci...327.1350H}. This dissipation process eventually results in a reversal of the polar fields at the height of the solar activity maximum. The global field gradually evolves back to an axisymmetric dipole in the second part of the solar cycle \citep{1998ApJ...501..866V,2014SSRv..186..491J} until the Sun enters the next activity minimum, with polar fields opposite in sign to those of the previous minimum. As the new cycle starts, emerging sunspots have polarities opposite to those of the previous cycle, thus accounting for Hale's law. The Coriolis force, acting on the rapidly expanding flux ropes as they ascend through the convective zone, is probably responsible for the sunspot tilt \citep{1993A&A...272..621D,1994SoPh..149...23H}.
If the magnetic flux is sufficiently strong in the overshooting region, the flux loop rises while having little interaction with the material in the convection zone \citep{1982A&A...113...99V,2009LRSP....6....4F}. The sunspot tilt angle is determined by the Coriolis acceleration, $-2\omega(\phi)\sin\phi(\Delta s/\Delta t)$, where $\omega(\phi)$ is the Sun's spin rate at latitude $\phi$ and $\Delta s/\Delta t$ is the average separation rate of the magnetic footpoints. Assuming the acceleration is constant, the sunspot tilt angle can be approximated as \begin{equation} \sin\gamma \sim \omega(\phi) \Delta t \sin\phi \label{coriolis} \end{equation} \noindent where $\Delta t$ is the average time for sunspot group emergence at the surface, and $\phi$ is the latitude \citep{1991ApJ...375..761W}. The relation in Equation (\ref{coriolis}) becomes observable given a large number of sunspots, and has the form of Joy's law for an average solar spin rate. Indeed, considerable observational effort has been devoted to the determination of sunspot tilt angles and the derivation of Joy's law \citep{1919ApJ....49..153H,1989SoPh..124...81W,1991SoPh..136..251H,2010A&A...518A...7D,2012Ge&Ae..52..999I,2012ApJ...745..129S,2012ApJ...758..115L,2013SoPh..287..215M}. The tilts of the sunspot magnetic axes provide the poloidal components needed for the global field to revert to an axisymmetric dipole configuration. For example, the contribution of an individual sunspot group to the solar axial dipole field may be expressed as $D_{ss}\propto s \Phi \sin\gamma \cos\phi $ where $s$ is the pole separation, $\Phi$ is the total flux, $\gamma$ is the tilt angle, and $\phi$ is the latitude \citep{2014SSRv..186..491J}. Surface flux transport simulations confirm the importance of the sunspot tilt angles in determining the polar field strength \citep{2010ApJ...719..264C,2015ApJ...808L..28J}, but the disintegration and transport of magnetic fields to the poles are imperfectly understood \citep{2009SSRv..144...15P}. \citet{2010A&A...518A...7D} and \citet{2012Ge&Ae..52..999I} found that the mean normalized tilt angles are anti-correlated with the strength of the cycle. A close connection between the tilt angle and the helicity was recently reported by \citet{2014SSRv..186..285P}, and helicity has emerged as an important sunspot parameter for assessing coronal mass ejections \citep{2006ApJ...644..575Z}. Our main aim in the present work is to improve on previous studies in several respects: 1) we utilize a uniform dataset obtained with MDI/SoHO and HMI/SDO; both instruments are operated in a similar fashion and their performance is stable; 2) we employ systematic measurement procedures to determine the sunspot parameters; and 3) we obtain averaged values of parameters from a large number of independent measurements as each spot rotates across the solar disk. Our dataset is thus relatively immune to systematic errors, and to fluctuations in parameters associated with temporal variability of the spots. The paper is organized as follows: Section 2 describes the data and measurements, Section 3 shows the results, Section 4 gives a discussion, and Section 5 is a summary. \section{Data and Measurements} Our study is based on two main data sets. First, we use full disk magnetograms from the {\it Michelson Doppler Imager} (MDI) \citep{1995SoPh..162..129S} onboard the {\it Solar and Heliospheric Observatory} (SoHO) and the {\it Helioseismic and Magnetic Imager} (HMI) \citep{2012SoPh..275..207S} on the {\it Solar Dynamics Observatory} (SDO).
Second, we use sunspot records from the daily ``USAF/NOAA Solar Region Summary'' compiled by the {\it Space Weather Prediction Center} (SWPC). The magnetic data span two decades from May 1996 to the present, covering two solar activity cycles (23 and 24) and one full magnetic cycle. The MDI data cadence is 96 minutes, while HMI's is 12 minutes. To maintain consistency between the magnetograms from the two instruments, we use one HMI magnetogram every 96 minutes, and bin the data into 1024$\times$1024 pixels. Magnetograms from both instruments then have the pixel size 2\arcsec$\times$2\arcsec. The magnetic strength obtained from MDI is reduced by a factor of 1.4 for consistency with the HMI data (c.f.~\cite{2012SoPh..279..295L}). We measured sunspot physical parameters using automatic techniques similar to those employed in \citet{2012ApJ...758..115L}. Our automatic program, written in IDL, runs in the SolarSoftWare (SSW) environment \citep{1998SoPh..182..497F}. On each magnetogram, a sunspot is identified by its location as read from the USAF/NOAA sunspot records. The area is outlined by an initial circle whose radius is 20$\times\sqrt{A/\pi}$, where $A$ is the sunspot ``total corrected area''. The initial circle is iteratively stretched into an ellipse and, finally, two circles having opposite magnetic field polarities are positioned to fit the magnetogram data. Then the tilt angles, $\gamma$, are calculated using \begin{equation} \tan\gamma=\Delta \phi /( \Delta \lambda \cos \bar \phi) \end{equation} \noindent where $\bar \phi$ is the mid-point latitude of the polarity pair, and $\Delta \phi$ and $\Delta \lambda$ are the differences in latitude and longitude between the centers of the two magnetic components. Examples of the final fits are illustrated in Figure (\ref{tilt-definition}), in which four sample sunspot magnetograms are shown. The best-fit sunspot polarity pairs are plotted as red and yellow circles with radii calculated from $\sqrt{a/\pi}$, where ``$a$'' is the best-fit area of each component. Solid green lines in Figure (\ref{tilt-definition}) show the best-fit tilt axes, in each case. The four quadrants in the Figure distinguish sunspots whose tilt angles agree with both Hale's and Joy's laws in the respective hemispheres and cycles. The automatic run does not guarantee a reliable set of sunspot tilt angles during the course of the disk passage. Two main effects lead to scatter in the tilt angles of a sunspot group. First, for very tiny sunspot groups, the code is unable to obtain convergent solutions for the north and south polarity components and so fails to produce reliable tilt angles. Second, very close or overlapping sunspot groups confuse the procedure, producing unreliable solutions. We use the stability of repeated measurements to identify the problem sunspots. Specifically, sunspots with tilt angles varying by $>270^\circ$ in the course of their disk passages are targeted for inspection. This strategy greatly improved the sunspot tilt angle measurements, along with the other parameters. About half of the sunspots went through this correction process. An example of the time dependence of parameters measured for an individual sunspot is shown in Fig. (\ref{example}). To minimize projection effects, we consider only measurements of sunspots taken within $\pm30^\circ$ of meridian-crossing. This accounts for about 4.5 days of observation around the longitudinal disk center for each sunspot group (see the grey area in Fig. \ref{example}).
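For readers who wish to reproduce the geometry, a minimal sketch of the tilt-angle computation of Equation (2) is given below (in Python rather than the IDL of our actual pipeline; the function and variable names, and the numbers in the example, are purely illustrative). Using \texttt{arctan2} rather than \texttt{arctan} retains the quadrant information displayed in Figure (\ref{tilt-definition}):

\begin{verbatim}
# A sketch (not our production IDL code) of Equation (2); the inputs stand
# for the fitted heliographic centroids of the two magnetic polarities.
import numpy as np

def tilt_angle(lat_lead, lon_lead, lat_trail, lon_trail):
    """Tilt angle gamma in degrees from the leading and trailing polarity
    centroids (heliographic coordinates in degrees)."""
    dphi = np.radians(lat_trail - lat_lead)            # latitude difference
    dlam = np.radians(lon_trail - lon_lead)            # longitude difference
    midlat = np.radians(0.5 * (lat_lead + lat_trail))  # mid-point latitude
    # arctan2 keeps the quadrant, as in the four-quadrant figures
    return np.degrees(np.arctan2(dphi, dlam * np.cos(midlat)))

# Hypothetical example: trailing spot 1 deg poleward, 3 deg east of leader.
print(tilt_angle(15.0, 120.0, 16.0, 123.0))            # ~19.1 deg
\end{verbatim}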
As shown in Figure (\ref{histogram-ssn}), a majority of sunspots are measured 60 to 70 times during this period. Figure (\ref{tilt-sigma}) shows the standard deviations, $\sigma_{\gamma}$, of the repeated tilt angle measurements as a function of the number of measurements for each sunspot group. The Figure shows two things. First, the median and average $\sigma_{\gamma}$ are modest (5.0\degr~and 9.3\degr, respectively), indicating that the tilt angles vary only modestly during disk passage. Second, $\sigma_{\gamma}$ is independent of the number of measurements per spot, consistent with $\sigma_{\gamma}$ being a real measure of intrinsic temporal variations and not the result of statistical fluctuations in the measurements. Magnetograms are two-dimensional maps of magnetic flux density. Our fitting procedure produces a pair of magnetic fluxes with opposite signs for each sunspot on each magnetogram. In the current work, each sunspot is represented by a set of parameters, including magnetic flux, averaged over the longitudinal range of $\pm30^\circ$. The total sunspot magnetic flux means the unsigned magnetic flux, i.e., the sum of the absolute values of the magnetic fluxes ($\Phi$) of the two opposite magnetic polarities. The average magnetic strength of a sunspot group is defined by $B=\Phi/a$ [G], where ``$\Phi$'' is the unsigned magnetic flux [Mx] and ``$a$'' is the effective magnetic area [cm$^2$] computed from the total number of pixels in the idealized sunspot pairs. The magnetic flux and area are highly linearly correlated (Pearson $r_{corr}=0.97$, P-value $<10^{-5}$). We fitted a relation of the form \begin{equation} \log_{10}\Phi = k_a \log_{10} a+ C_a \label{eflux2area} \end{equation} \noindent to all sunspots, obtaining $k_a=1.183\pm0.005$ and $C_a=-1.502\pm0.093$. When Equation (\ref{eflux2area}) is written as $\Phi(a)=\Phi_0 a^{k_a}$, where $\Phi_0=10^{C_a}$, we obtain the magnetic strength as a function of magnetic area: \begin{equation} B(a)=\frac{d\Phi(a)}{d a}=\Phi_0 k_a a^{k_a-1} \label{eb2area} \end{equation} \noindent which is shown in Figure (\ref{bfield}). All sunspots are plotted with ``$\cdot$''. The filled circles represent average (orange) and median (blue) values of the area ($a$) and magnetic strength ($B$) in $\sim3.8\times 10^{19}$ cm$^2$ bins. The average magnetic strengths are of order 10$^2$ Gauss, comparable to the magnetic field levels of plage regions visible in Ca II emission \citep{1959ApJ...130..366L}. The dashed line is Equation (\ref{eb2area}) plotted with $k_a=1.183$ and $C_a=-1.502$; the direct non-linear least squares fit to all sunspots is plotted with the solid curve. There is a factor of 1.2 in amplitude between the two curves (see the equations in the figure). This is due to the large uncertainty, 0.093, in $C_a$ obtained from fitting Equation (\ref{eflux2area}); this uncertainty causes the amplitude of the analytical equation (\ref{eb2area}) to vary from 0.030 to 0.046. \section{Results} Figure (\ref{butterfly}) shows the butterfly diagram computed from our data for the time period May 1996 to July 2018. Sunspots that erupted in Cycles 23 and 24 are plotted with blue and orange filled circles, respectively. The ``anti-Hale'' sunspots are plotted with black ``$\bullet$'' circled in green. The diagram is consistent with the accepted onset of Cycle 24 at the end of 2008 \citep{2017SSRv..210..351W}. Sunspots that erupted in Cycle 22 are plotted with ``$\circ$'' symbols and are excluded from further study here.
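Returning briefly to Equation (\ref{eflux2area}): the fit is an ordinary linear least-squares fit in $\log_{10}$ space. A minimal sketch follows, with synthetic stand-in data generated from the quoted best-fit coefficients (not our measured catalog):

\begin{verbatim}
# A sketch of the power-law fit behind Equations (3)-(4), in log10 space.
# The data below are hypothetical stand-ins, drawn around the quoted fit.
import numpy as np

rng = np.random.default_rng(2)
area = 10 ** rng.uniform(19.5, 22.0, 500)          # areas [cm^2]
flux = 10 ** (-1.502 + 1.183 * np.log10(area)
              + rng.normal(0.0, 0.1, 500))         # fluxes [Mx], with scatter

# np.polyfit returns (slope, intercept) = (k_a, C_a) for degree 1
k_a, C_a = np.polyfit(np.log10(area), np.log10(flux), 1)
print(f"k_a = {k_a:.3f}, C_a = {C_a:.3f}")

# Magnetic strength from the derivative form, Eq. (4): B(a) = Phi_0 k_a a^(k_a-1)
Phi_0 = 10 ** C_a
B = Phi_0 * k_a * area ** (k_a - 1.0)
print(f"median B ~ {np.median(B):.0f} G")          # of order 10^2 G
\end{verbatim}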
\subsection{Sunspot Hemispheric Asymmetry} Table (\ref{thaleslaw}) lists the numbers of sunspots by hemisphere and cycle. It shows that the numbers of sunspots are distributed asymmetrically between the hemispheres. In Cycle 23, the number of sunspots in the southern hemisphere, $N_s(23)$, is greater than the number in the northern hemisphere, $N_n(23)$, but this asymmetry is reversed in Cycle 24. Specifically, the ratios are $N_s(23)/N_n(23) = 1.20\pm0.03$ for Cycle 23 and $N_n(24)/N_s(24) =1.15\pm0.04$ for Cycle 24, where the listed uncertainty assumes Poisson statistics, $ratio/\sqrt{N_{min}}$ with $N_{min}$ the smaller of the two sunspot numbers. The average hemispheric asymmetry \textit{within one cycle} is $1.18\pm0.03$. On the other hand, the ratio $\frac{N_s(23)N_s(24)}{N_n(24) N_n(23)} = 1.04\pm0.04$ is consistent with unity, meaning that the sunspot counts are hemisphere-symmetric over the (22 year) magnetic cycle of the Sun, to within the uncertainties of measurement. More sunspots erupted in Cycle 23 than in Cycle 24, with the ratio of the total numbers being $N(23)/N(24) = 1.56\pm0.04$ where, again, we quote a Poisson error bar. \subsection{Hale's Law} Hale's law appears in Figure (\ref{haleslaw}), where filled ``$\bullet$'' and empty ``$\circ$'' symbols represent sunspots in the northern and southern hemispheres, respectively. In Cycle 23, the leading (trailing) polarities are positive (negative) in the northern hemisphere; the tilt angles are in quadrants I or IV ($|\gamma|\leq 90^\circ$, c.f.~Figure \ref{tilt-definition}). The leading (trailing) polarities are negative (positive) in the southern hemisphere; the tilt angles are in quadrants II or III ($90^\circ\leq\gamma<180^\circ$ or $-180^\circ\leq\gamma <-90^\circ$). The sense of the polarity reverses between the hemispheres in Cycle 24. Figure (\ref{haleslaw}) shows that most sunspots obey Hale's law by following this pattern. Exceptions are plotted in black ``$\bullet$'' circled with green. These are the ``anti-Hale'' sunspots (hemispheres are not distinguished here). In addition to the numbers of sunspots in each hemisphere and cycle, Table (\ref{thaleslaw}) also gives the statistics of the ``Hale'' and ``anti-Hale'' sunspots. The ``anti-Hale'' sunspots are a stubborn minority: overall, the fraction of ``anti-Hale'' sunspots is $(8.69\pm0.57)\%$ in Cycle 23, and $(7.11\pm0.64)\%$ in Cycle 24. Within the uncertainties (we quote the Poisson error), the two fractions of ``anti-Hale'' sunspots are equivalent, and consistent with the mean value $(8.07\pm0.43)\%$. We also see Hale's law in polar coordinates, where the azimuthal angle represents the sunspot tilt, and the radius represents the sunspot latitude, from 0\degr~to 40\degr~(Figure \ref{polar2tilt}). In the Figure, the upper panels show sunspots in Cycle 23 and the lower panels in Cycle 24. The northern hemisphere spots are plotted in the left two panels and the southern hemisphere spots on the right. This presentation distinguishes ``Hale'' from ``anti-Hale'' spots particularly clearly: ``Hale'' sunspots fall in quadrants I \& IV or II \& III, depending on hemisphere and cycle, while ``anti-Hale'' sunspots (plotted with black dots circled in green) occupy the other two quadrants. \subsection{Sunspot Magnetic Flux} \subsubsection{Cumulative and Average Magnetic Fluxes} The cumulative magnetic flux, $\sum\Phi$, is the magnetic flux integrated over an entire cycle.
The average magnetic flux, $\bar\Phi$, is the flux averaged over the number of sunspots within a certain category. Both quantities are listed in Table (\ref{tmb}). The Table shows that the cumulative magnetic fluxes of the ``Hale'' sunspots are much larger than those of the ``anti-Hale'' sunspots. For example, in Cycle 23, $\sum\Phi$(Hale)$/\sum\Phi$(anti-Hale)$=14.8\pm1.0$ while in Cycle 24, the ratio is $21.3\pm1.9$. The average between cycles is $\sum\Phi$(Hale)$/\sum\Phi$(anti-Hale)$=16.2\pm0.9$. The fraction of the total magnetic flux carried by ``anti-Hale'' sunspots is $(5.8\pm0.3)\%$, averaged over the two cycles. The ratios of average magnetic fluxes in the ``Hale'' and ``anti-Hale'' sunspots are $\bar\Phi$(Hale)$/\bar\Phi$(anti-Hale)$=1.41\pm0.09$ for Cycle 23, and $1.63\pm0.15$ for Cycle 24. These values are consistent with their weighted mean $\bar\Phi$(Hale)$/\bar\Phi$(anti-Hale)$=1.42\pm0.08$. The cumulative magnetic fluxes in Table (\ref{tmb}) decreased dramatically from Cycle 23 to Cycle 24, $\sum\Phi(23)/\sum\Phi(24)=2.62\pm0.06$. This reduction is due not only to the decreased number of sunspots, but also to a reduction in the average magnetic flux per spot. Specifically, the number of sunspots in Cycle 23 is 1.56 times that in Cycle 24, while the average magnetic flux ($\bar\Phi$) in Cycle 23 is $1.68\pm0.04$ times that in Cycle 24. The product of these factors gives a decrease in the cumulative magnetic flux by a factor of 2.6 from Cycle 23 to Cycle 24. \subsubsection{Magnetic Flux vs. Latitude} Figure (\ref{magflux2lat}) shows the cumulative magnetic flux distributions with latitude for each hemisphere and cycle, $\sum\Phi(\phi)$. The filled circles represent the magnetic flux integrated over an entire cycle, binned by $5^\circ$ in latitude. ``Hale'' and ``anti-Hale'' sunspots are represented separately by blue and black/green colors. The solid lines are parabolas fitted to $\log_{10}(\Phi)$ vs.~$\phi$, to guide the eye. The latitude distributions of the flux are remarkably similar between hemispheres and cycles, and between ``Hale'' and ``anti-Hale'' spots. The latitudes of peak flux are summarized in the last column in Table (\ref{tmb}). The average peak latitude for the emergence of flux is $14.5^\circ\pm0.5^\circ$, regardless of hemisphere, cycle or ``Hale'' vs.~``anti-Hale'' nature of the spots. Figure (\ref{magflux-his}) shows the sunspot number distributions vs.~sunspot magnetic fluxes. A striking feature is that both ``Hale'' (blue) and ``anti-Hale'' (black) sunspot populations have similar distribution functions. \subsubsection{Magnetic Flux and Pole Separation} \label{flux_pole} We define the sunspot pole separation, $s$ (in degrees), as the angular distance between the best-fit centers of mass of the positive and negative polarities. The pole separation is given by the spherical cosine law: \begin{equation} s=\arccos[\sin \phi_1\sin \phi_2+\cos \phi_1 \cos \phi_2\cos(\Delta \lambda)]\times(180/\pi) \end{equation} \noindent where $\phi_1$ and $\phi_2$ are the heliographic latitudes of the two poles and $\Delta \lambda$ is the difference between the heliographic longitudes of the poles; the factor $180/\pi$ expresses $s$ in degrees. Table (\ref{tflux2s}) gives the logarithmic average pole separations $\log_{10} (s) = 0.62\pm0.01$ for ``Hale'' and $\log_{10} (s) = 0.44\pm0.02$ for ``anti-Hale'' sunspots, respectively. The difference is statistically significant, with the average pole separation of ``Hale'' sunspots being larger than that of ``anti-Hale'' sunspots. This is also seen in the histogram of Figure
(\ref{histogram-s}), indicated by two arrows. In Figure (\ref{flux2s}), the sunspot magnetic flux ($\Phi$) is plotted as a function of the pole separation ($s$), showing that these quantities are clearly correlated. Pearson correlation coefficients between the two parameters are 0.75 for the ``Hale'' and 0.54 for the ``anti-Hale'' populations, both significant with $P < 10^{-5}$. We fitted a relation of the form \begin{equation} \log_{10} \Phi(s)=k_s\log_{10} s +C_s \label{eflux2s} \end{equation} \noindent where $k_s$ and $C_s$ are constants, separately to the ``Hale'' and ``anti-Hale'' populations, using data from both cycles and hemispheres. The fitting parameters, their uncertainties, and the correlation coefficients are listed in Table (\ref{tflux2s}). The regression lines are plotted in Figure (\ref{flux2s}) in grey for the ``Hale'' sunspots, and green for ``anti-Hale'' sunspots. The dependence of the sunspot magnetic flux on pole separation, $s$, is different for the ``Hale'' and ``anti-Hale'' populations. Expressed as power laws, we find $\Phi(s)=(6.61^{+0.47}_{-0.44})\times10^{20}s^{1.57\pm0.02}$ for ``Hale'' sunspots and $\Phi(s)=(14.79^{+4.71}_{-3.57})\times10^{20}s^{1.06\pm0.09}$ for ``anti-Hale'' sunspots. The relationship between magnetic flux and pole separation was examined by \citet{1989SoPh..124...81W} using Kitt Peak magnetograms. They obtained $\Phi(s)=4\times 10^{20}\,s^{1.3}$ without distinguishing ``Hale'' from ``anti-Hale'' populations. To compare with Wang's work, we fitted Equation (\ref{eflux2s}) to all sunspots (cf.~last row in Table \ref{tflux2s}), to find $\Phi(s)=7.59^{+0.34}_{-0.36}\times 10^{20} s^{1.49\pm0.02}$. This result is consistent with that obtained by \citet{1989SoPh..124...81W} except that our multiplicative constant, $\Phi_0=7.59\times 10^{20}$, is about twice their value of $4.0\times 10^{20}$. This factor of two occurs simply because Wang and Sheeley described the single-polarity flux while we present the sum of the absolute values of the north and south components. Our estimate of the power-law index, $k_s = 1.49\pm0.02$, is slightly larger than the value of 1.3 in Wang and Sheeley, but this difference is probably within the uncertainty of their determination (Y.-M. Wang, private communication). \subsection{Sunspot Magnetic Tilt Angles} The sunspot tilt angle, $\gamma$, is the angle between the magnetic axis and the Sun's azimuthal direction or equator. The tilt angle range is [$-90^\circ, 90^\circ$]. We define tilt angles positive when the sunspot magnetic axes tilt toward the equator, and negative when the axes tilt away from the equator. \subsubsection{Tilt Angle Statistics} The statistics of sunspot tilt angles are summarized in Table (\ref{ttiltangles}). We list the average ($\bar\phi$), median ([$\phi$]) and the standard deviation ($\sigma_\phi$) for sunspot latitudes; and the average ($\bar\gamma$), median ([$\gamma$]) and the standard deviation ($\sigma_\gamma$) for sunspot tilt angles. ``Hale'' and ``anti-Hale'' populations are presented separately for each hemisphere and cycle. The average and median sunspot latitudes are roughly identical between the ``Hale'' and ``anti-Hale'' populations, hemispheres and cycles. We obtain the average latitude of all sunspots, $\bar\phi=\pm(15.55^\circ\pm0.23^\circ)$. The listed uncertainty is the Poisson error. The average absolute tilt angle determined from all sunspots is $\bar\gamma=4.58^\circ\pm0.07^\circ$. This is similar to the $4.2^\circ\pm0.2$\degr~obtained by \citet{1991SoPh..136..251H}.
For ``Hale'' sunspots, we find an average tilt angle $\bar\gamma=5.49\degr\pm0.09\degr$. For ``anti-Hale'' sunspots, $\bar\gamma=-5.84\degr\pm0.31\degr$. In general, the ``Hale'' sunspot magnetic axes tilt toward the equator, but the ``anti-Hale'' axes tilt away from it. \subsubsection{Joy's Law} In the polar plots of Figure (\ref{polar2tilt}), ``Hale'' sunspots concentrate near the horizontal axes, and appear in broad, fan-like clusters. This is because the tilt angles of ``Hale'' sunspots generally increase with increasing latitude, following Joy's law. The ``anti-Hale'' sunspots are scattered in the quadrants not occupied by ``Hale'' sunspots. Their distributions show no dependence on latitude. Here, we calculate best-fit values of the Joy's law parameters using only the ``Hale'' sunspots. Slightly different formulations of Joy's law are used in the literature. To compare with these different formulations, we fit the following three functions to the tilt vs.~latitude data. First, to compare with the effect of the Coriolis force, which varies in proportion to $\sin(\phi)$, we examine Joy's law written as \citep{1989SoPh..124...81W}, \begin{equation} \sin\gamma = k_J\sin \phi+C_J. \label{joysin} \end{equation} \noindent Second, we fitted the simple form \citep{2012ApJ...758..115L}: \begin{equation} \gamma=k_\phi\phi+C_\phi. \label{joydirect} \end{equation} \noindent Finally, we fitted \citep{2012ApJ...745..129S} \begin{equation} \gamma=k_\circ\sin \phi + C_\circ. \label{joystenflo} \end{equation} In these equations, $k_J$, $k_{\phi}$ and $k_\circ$ are the Joy's slopes ($k_J$ is dimensionless, $k_\phi$ has units of degrees of tilt per degree of latitude, and $k_\circ$ is in degrees) and $C_J$, $C_{\phi}$ and $C_\circ$ are the equatorial ($\phi=0\degr$) tilt angles (``Joy's constants''). Results of the fits are listed in Table (\ref{tjoyslaw}), for each hemisphere and activity cycle. Note that, traditionally, Joy's law is derived from a few latitude bins to increase the signal-to-noise ratio. With our large data set, however, we derive Joy's law directly from the individual measurements. Table (\ref{tjoyslaw}) summarizes the parameters for Equations (\ref{joysin}), (\ref{joydirect}) and (\ref{joystenflo}). In all cases the Joy's constant intercepts, $C_J$, $C_{\phi}$ and $C_\circ$, are statistically consistent with zero. Furthermore, within the uncertainties, the derived parameters are independent of hemisphere and cycle number, other than for the expected sign differences. Having established that there are no significant differences between hemispheres or solar cycle numbers, we obtain Joy's law expressions using tilt angle determinations for all ``Hale'' sunspots as a function of absolute latitude, $|\phi|$ (i.e.~we merge the data for both hemispheres and cycles). The resulting three forms of Joy's law are plotted in Figure (\ref{fjoyslaw}). All ``Hale'' sunspots are plotted with ``$\cdot$'' symbols, and we show absolute latitudes, $|\phi|$. The red and blue filled circles represent the average and median latitudes and tilt angles in latitudinal bins of $5^\circ$ (0.0833 in $\sin\phi$), while the vertical and horizontal error bars are the standard deviations of the means. These binned points are to guide the eye, but are not used for fitting the Joy's law parameters. From Equation (\ref{joysin}), we find $\sin\gamma=(0.38\pm0.05)\sin\phi-(0.01\pm0.02)$. This Joy's slope is slightly but not significantly smaller than the value of 0.48 derived earlier by fitting the flux-weighted tilt angles \citep{1991ApJ...375..761W}.
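All three forms are obtained by ordinary least-squares regression; as an illustration, the sketch below fits Equation (\ref{joysin}), and Equations (\ref{joydirect}) and (\ref{joystenflo}) follow analogously. The arrays \texttt{phi\_deg} and \texttt{gamma\_deg} stand in for the measured ``Hale'' latitudes and tilt angles and are hypothetical here.
\begin{verbatim}
# Sketch: least-squares fit of sin(gamma) = k_J*sin(phi) + C_J
# (Eq. joysin); phi_deg and gamma_deg are hypothetical arrays of
# "Hale" sunspot latitudes and tilt angles, in degrees.
import numpy as np

def fit_joy_sin(phi_deg, gamma_deg):
    x = np.sin(np.radians(phi_deg))
    y = np.sin(np.radians(gamma_deg))
    k_J, C_J = np.polyfit(x, y, 1)  # slope and intercept
    return k_J, C_J
\end{verbatim}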
From Equation (\ref{joydirect}), we find $\gamma=(0.39\pm0.06)\phi-(0.66\pm1.00)$. Joy's slope here is consistent with the value of $0.5\pm0.2$ obtained in our previous study \citep{2012ApJ...758..115L}, and the current work offers a more accurate measurement. The Joy's slopes differ insignificantly between Equations (\ref{joysin}) and (\ref{joydirect}) because, for small sunspot tilt angles ($\gamma<30^\circ$) and latitudes ($\phi<20^\circ$), $\phi_r\sim\sin\phi$ and $\gamma_r\sim\sin\gamma$, where $\phi_r$ and $\gamma_r$ are expressed in radians. Finally, we find $\gamma=(23.80\pm3.51)\sin\phi-(0.86\pm1.03)$ from Equation (\ref{joystenflo}). Our determination of Joy's slope in Equation (\ref{joystenflo}) is $\sim$2.5$\sigma$ smaller than the $32.1^\circ\pm0.7^\circ$ obtained by \citet{2012ApJ...745..129S}. If this difference is real it could be due to the inclusion of bipolar regions of all sizes, from ephemeral regions to large sunspots, in the study of \citet{2012ApJ...745..129S}. In all three derived forms of Joy's law, the intercepts are statistically consistent with zero. \subsubsection{Sunspot Tilt Angle vs.~Pole Separation} The Pearson correlation coefficient between the absolute tilt angle, $|\gamma|$, and the pole separation, $\log_{10}(s)$, is $-0.33$ for all sunspots; the correlation is significant, with $P<10^{-5}$, indicating that the two parameters are significantly anticorrelated. Figure (\ref{ftilt2s}) plots all sunspots with black dots. To guide the eye, the orange and light blue filled circles represent the average and median data points in bins of width 0.15 in $\log_{10} s$. For a crude estimate, we fit the following equation to all sunspot data: \begin{equation} |\gamma|=k_\gamma \log_{10}s+C_\gamma \label{etilt2s} \end{equation} \noindent Unlike the relations shown in Figure (\ref{flux2s}), the sunspots are much more scattered in Figure (\ref{ftilt2s}) (see ``$\cdot$'' symbols). A linear fit to all sunspots gives $|\gamma|=(-24.88\pm1.08)\log s+(35.84\pm0.64)$. This is plotted with a solid line in Figure (\ref{ftilt2s}). A visual inspection of the binned data points suggests a parabolic function. A least-squares fit to all sunspots gives $|\gamma|=(14.6\pm3.1)(\log s)^2-(38.8\pm3.1)\log s+(38.1\pm0.8)$ (see the dashed curve). The relation shows that the tilt angle decreases with increasing pole separation. \citet{1993SoPh..145..105H} used daily white-light photographs taken at Mt. Wilson to reach a different conclusion. A linear least-squares fit to their data gives a slope of 0.058 deg Mm$^{-1}$, indicating a positive correlation between tilt angle and pole separation. To compare with Howard's measurement we recomputed the fit between $|\gamma|$ and $s$ expressed in Mm (instead of degrees) for {\it all sunspots}. The linear fit gives the slope $-0.20\pm 0.01$ deg Mm$^{-1}$, which is very different from 0.058 deg Mm$^{-1}$ and opposite in sign. We speculate that the discrepancy may be due to the use of magnetograms in our study versus white-light photographs in Howard's work. For example, magnetograms inevitably draw in surrounding plage regions but the white-light photographs might have included only sunspots and their penumbrae. In support of this possibility, and of the current work, we note that an independent but smaller study of 203 active regions using magnetograms from Huairou Solar Observatory also found that the sunspot tilt angles decrease with increasing pole separation \citep{1999SoPh..189..305T}.
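For reference, both the linear and parabolic descriptions above can be obtained with a standard polynomial least-squares routine; the sketch below assumes hypothetical input arrays \texttt{s\_deg} (pole separations in degrees) and \texttt{gamma\_deg} (tilt angles in degrees).
\begin{verbatim}
# Sketch: linear (Eq. etilt2s) and parabolic fits of |gamma|
# against log10(s); s_deg and gamma_deg are hypothetical arrays.
import numpy as np

def fit_tilt_vs_separation(s_deg, gamma_deg):
    x = np.log10(s_deg)
    y = np.abs(gamma_deg)
    k_gamma, C_gamma = np.polyfit(x, y, 1)  # linear coefficients
    a2, a1, a0 = np.polyfit(x, y, 2)        # parabola coefficients
    return (k_gamma, C_gamma), (a2, a1, a0)
\end{verbatim}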
\section{Discussion} As remarked above and shown in Table \ref{thaleslaw}, the fraction of ``anti-Hale'' sunspots (namely $(8.07\pm0.43)\%$) is constant with respect to hemisphere and cycle number. This fraction is consistent with previous studies, in which the ``anti-Hale'' fraction ranges from a few to $\sim$10\% \citep{1989SoPh..124...81W,2009ARep...53..281K,2012ApJ...745..129S,2012ApJ...758..115L,2014ApJ...797..130M,2015MNRAS.451.1522S}. It is most consistent with the ($8.4\pm0.8$)\% obtained by \citet{2014ApJ...797..130M} and the ($8.2\pm0.3$)\% from our previous work \citep{2012ApJ...758..115L}, probably because these works employ the same magnetic field data. It is not clear why one out of twelve sunspots should persistently violate the hemispheric polarity rule. The evidence shows that many of these sunspots do not evolve from the ``Hale'' population, but instead have irregular polarity arrangements from the beginning, as was also noted by \citet{2012ApJ...745..129S}. The question is how sunspots emerge with irregular magnetic polarity orientations, with the toroidal fields pointing in the opposite direction. In numerical simulations, ``anti-Hale'' sunspots are the result of either weak magnetic fields, or ``hemisphere crossing due to convective flows'' \citep{2013SoPh..287..239W}. Indeed, the cumulative magnetic fluxes of ``anti-Hale'' sunspots are at the level of a few percent, $(5.8\pm0.3)\%$, of the total magnetic flux; the average magnetic fluxes of ``anti-Hale'' sunspots are also smaller, only $\sim70$\% of those of the ``Hale'' population. However, some individual ``anti-Hale'' sunspots do possess large magnetic fluxes, as evident in Figure (\ref{magflux-his}). Table (\ref{ttiltangles}) shows that the average and median latitudes of ``anti-Hale'' sunspots do not differ significantly from those of the ``Hale'' population. It is yet to be examined what special properties are possessed by ``anti-Hale'' sunspots, and what roles they play in the solar cycle progression. The ``Hale'' sunspots can be further divided into two sub-populations, namely those with magnetic axes tilted towards the equator and those tilted away from it. The former are labeled ``Hale/normal'' and the latter ``Hale/inverted'' by \citet{1989SoPh..124...81W}. \citet{2015ApJ...808L..28J} and \citet{2018ApJ...863..116W} attribute the weakness of solar cycle 24 to a few large ``Hale/inverted'' sunspot groups appearing near the equator in Cycle 23. Their argument is that the following polarities of these sunspots traverse the equator to eventually weaken the global dipole. In reality, it is observed that Hale/normal and Hale/inverted spots are continuous states of an evolving sunspot group. Statistically, more than half of the sunspots are Hale/normal, and more than one third are Hale/inverted. Table (\ref{haless}) shows that the fractions of Hale/normal spots are $\sim$55\% in Cycle 23 and $>60\%$ in Cycle 24, a modest difference. The Table also shows that the average latitudes of Hale/inverted spots are lower than those of Hale/normal spots by $\sim1.5^\circ$. However, in view of the uncertainties in the measurements, and because we are comparing data from only two sunspot cycles, it is not clear that these differences are sufficient to cause the prolonged minimum at the end of Cycle 23. More data are needed to be able to address this issue with greater confidence.
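A note on the quoted uncertainties: the error bars attached to the fractions and ratios above are Poisson estimates, following the convention introduced in the Results section. A minimal sketch of our reading of that convention (with hypothetical counts) is:
\begin{verbatim}
# Sketch of the Poisson error conventions implied by the quoted
# values; k and n are hypothetical counts (k = anti-Hale, n = all).
import math

def fraction_with_error(k, n):
    f = k / n
    return f, f / math.sqrt(k)             # e.g. (8.69 +/- 0.57)%

def ratio_with_error(n1, n2):
    r = n1 / n2
    return r, r / math.sqrt(min(n1, n2))   # ratio / sqrt(N_min)
\end{verbatim}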
Joy's law in ``Hale'' sunspots can be explained by considering the action of the Coriolis force on the flow directions in the flux tubes \citep{2011ApJ...741...11W}, or on the expansion/contraction of the flux tubes \citep{1991SoPh..132..257H,1991SoPh..136..251H,1991ApJ...375..761W}. The magnitude of the tilts is determined by the time of emergence and by the solar rotation rate at the sunspot latitude (see Equation \ref{coriolis}). Statistically, the average emergence time is $\Delta t\sim1.8$ [days], measured from the appearance of a sunspot at the surface to the time its magnetic flux reaches maximum; the rotation period is 27.3 [days] at the average sunspot latitude of $15.55^\circ$; therefore, Equation (\ref{coriolis}) gives $\sin\gamma \sim 0.4\sin\phi$. This agrees with the currently measured Joy's law $\sin\gamma=(0.38\pm0.05)\sin\phi$. A similar calculation by \citet{1991ApJ...375..761W} results in $\sin\gamma\sim 0.5\sin\phi$, with the slightly larger coefficient resulting from the use of a slightly longer emergence time, $\Delta t=2.2$ days. Overall, our observations are consistent with the hypothesis that Joy's law is produced by the Coriolis force acting on emerging sunspot magnetic flux tubes. It is inevitable that the determination of sunspot tilt angles depends on the areas of the regions measured. An example is the discrepancy between Joy's law obtained from white-light images and from magnetograms. In general, the Joy's slope is lower from the white-light images, $k_\phi \sim 0.2$--$0.3$ \citep{1919ApJ....49..153H,1991SoPh..136..251H,2010A&A...518A...7D,2012Ge&Ae..52..999I,2013SoPh..287..215M,2018arXiv180410479I}, than from the magnetograms, $k_\phi\sim 0.5$ \citep{1989SoPh..124...81W, 2012ApJ...745..129S,2012ApJ...758..115L}. \citet{2015ApJ...798...50W} attribute the discrepancy to the inclusion of plage areas in the measurements with magnetograms. The plage regions are not seen in white-light images. The same conclusion was also drawn by \citet{1996SoPh..167...95H,1996SoPh..169..293H} using Mt.~Wilson data, who noticed that plages normally have higher tilt angles than sunspot groups. Our observations support this assessment. The average magnetic strength of active regions is of order a few hundred gauss, which is characteristic of plages (see Fig. \ref{bfield}). Whether or not the Joy's slope discrepancy is caused by the sizes of the sunspot regions is worth investigating in subsequent work. Our finding that Joy's law does not vary with hemisphere or with solar cycle is different from a recent study by \citet{2018arXiv180707913T}. These authors studied sunspot cycles 15 to 24 using white-light sunspot drawings. First, they find that tilt angles reach a peak at latitudes of 20\degr--30\degr~and then decrease towards the poles. This local maximum is not present in our data. Their observation results in the Joy's law formula $\gamma$=(0.20$\pm$0.08)$\sin(2.80\phi)$+(-0.00$\pm$0.06), with a latitudinal frequency that is quite different from other studies of Joy's law. Second, they find that the odd and even cycles have different latitudinal tilt angle profiles. Again, the fact that their conclusion differs from the current work might be partly attributed to the use of sunspot drawings as opposed to magnetograms. However, when they derived Joy's law in the form of Equation (\ref{joydirect}), the result, $\gamma$=(0.41$\pm$0.18)$\phi$+(0.00$\pm$0.06), is consistent with the value derived here, namely $k_{\phi}$ = 0.39$\pm$0.06, albeit with a larger error bar.
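Equation (\ref{coriolis}) appears earlier in the paper and is not reproduced here; the Coriolis estimate above can nevertheless be checked at the order-of-magnitude level under the simplifying assumption (ours, not the full expression) that the tilt coefficient scales as the rotation angle accumulated during emergence, $2\pi\Delta t/P_{\rm rot}$.
\begin{verbatim}
# Back-of-envelope check of the Coriolis estimate, assuming the
# coefficient in sin(gamma) ~ k*sin(phi) scales as 2*pi*dt/P_rot
# (a simplification; Eq. coriolis itself is not reproduced here).
import math

dt_days = 1.8       # average emergence time [days]
p_rot_days = 27.3   # rotation period at phi ~ 15.55 deg [days]
k = 2.0 * math.pi * dt_days / p_rot_days
print(round(k, 2))  # 0.41, close to the fitted 0.38 +/- 0.05
\end{verbatim}
With $\Delta t=2.2$ days the same expression gives 0.51, matching the coefficient quoted from \citet{1991ApJ...375..761W}.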
The average pole separation is $\log_{10}(s^\circ) = 0.62\pm0.01$ for ``Hale'' sunspots and $\log_{10}(s^\circ) = 0.44\pm0.02$ for ``anti-Hale'' spots (see Table \ref{tflux2s}). The difference between the two sunspot populations is statistically significant. According to \citet{1993A&A...272..621D}, smaller pole separations represent larger magnetic tension, which opposes magnetic buoyancy and increases the rise time. This may explain why ``anti-Hale'' sunspots do not follow Joy's law: within these sunspot magnetic flux systems, magnetic tension, rather than the Coriolis force, is the dominant force. The sunspot magnetic flux increases with increasing pole separation (Section \ref{flux_pole}; see also \citealt{1989SoPh..124...81W, 1999SoPh..189..305T}). A new result from the current work is that ``Hale'' and ``anti-Hale'' sunspots behave differently in the relationship between the two parameters. The sunspot magnetic flux increases steeply, as a power law, with pole separation for the ``Hale'' population ($\Phi(s) \propto s^{1.57}$), but is almost linearly related to the pole separation for the ``anti-Hale'' population ($\Phi(s) \propto s^{1.06}$). This seems to suggest that the topologies of the magnetic flux tubes may differ between the two populations, since the sunspot magnetic flux depends on the pole separation (or vice versa). The current study shows that sunspot tilt angles ($\gamma$), pole separations ($s$) and magnetic fluxes ($\Phi$) are interconnected. An empirical relation between magnetic flux and tilt angle can be derived from the general relations obtained earlier, $\log_{10}\Phi(s)=1.49\log_{10} s+20.88$ and $|\gamma|=-24.88\log_{10} s+35.84$: it is $\Phi(\gamma)=1.1\times10^{23}\times(0.87)^{|\gamma|}$. The formula indicates that the sunspot magnetic flux is weakly anticorrelated with the sunspot tilt angle. This is the result of the correlations between $\Phi$, $s$ and $\gamma$ shown by Equations (\ref{eflux2s}) and (\ref{etilt2s}). Our observations show the general trends of sunspot magnetic flux: larger magnetic flux correlates with larger pole separation and smaller tilt angle, while, statistically, systems with small magnetic flux have smaller pole separations but larger tilt angles. This is consistent with the general impression \citep{1993A&A...272..621D}, but seems at odds with some observations and simulations \citep{1993SoPh..145..105H,1995ApJ...438..463F,2011ApJ...741...11W,2013SoPh..287..239W}. As shown in Equation (\ref{coriolis}) and discussed by \citet{2015ApJ...798...50W}, tilt angles depend on the buoyant rise time. Large sunspots tend to have strong fields, and rise quickly through the convection zone. They are less likely to be affected by the Coriolis force. On the other hand, the relations demonstrated in this work are based on the surface magnetic field data and the statistical behavior of sunspots. More clues about flux loop emergence can be expected from a study of the time variation of these parameters. \section{Summary} We conducted a program of systematic measurements of sunspot parameters using a uniform, high-quality dataset including 4385 sunspots that erupted in Cycles 23 and 24. We measured magnetic tilt angles, fluxes, areas and pole separations using full-disk magnetograms taken by MDI/SoHO and HMI/SDO. We used this dataset to characterize differences between the ``Hale'' and ``anti-Hale'' sunspot populations.
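As a consistency check on the $\Phi(\gamma)$ relation quoted in the Discussion, eliminating $\log_{10}s$ between the two empirical fits gives $\log_{10}\Phi = 1.49\,(35.84-|\gamma|)/24.88 + 20.88 \simeq 23.03 - 0.060\,|\gamma|$, so that $\Phi(\gamma)\simeq 10^{23.03}\times 10^{-0.060|\gamma|}\simeq 1.1\times10^{23}\times(0.87)^{|\gamma|}$, as stated.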
Our results about the ``Hale'' and ``anti-Hale'' sunspot populations are: \begin{enumerate} \item The ``anti-Hale'' sunspots constitute ($8.1\pm0.4$)\% of all sunspots. This fraction is constant with respect to hemisphere and solar cycle number. \item The ``Hale'' and ``anti-Hale'' populations have similar latitudinal distributions (with mean value $\bar \phi=15.6^\circ\pm0.2^\circ$) but differ in the distributions of tilt angles and magnetic fluxes. The average tilt angle of the ``Hale'' spots is $\bar\gamma=5.49\degr\pm0.09\degr$ and of the ``anti-Hale'' spots $\bar\gamma=-5.84\degr\pm0.31\degr$. On average, the ``Hale'' sunspots carry a cumulative magnetic flux 16 times that of the ``anti-Hale'' sunspots, and have an average magnetic flux per sunspot 1.4 times that of the ``anti-Hale'' sunspots. \item The average pole separation of ``Hale'' spots, $4.18^\circ\pm0.07\degr$, is larger than that of ``anti-Hale'' spots, $2.74^\circ\pm0.15\degr$. This suggests that ``anti-Hale'' sunspot magnetic flux loops have generally stronger magnetic tension than ``Hale'' sunspots do. \item Joy's law for ``Hale'' sunspots is equally well described by 1) $\sin\gamma=(0.38\pm0.05)\sin\phi$, 2) $\gamma=(0.39\pm0.06)\phi$, and 3) $\gamma=(23.8\pm3.5)\sin\phi$. \item Empirically, we find the sunspot magnetic flux as a function of pole separation, $\Phi(s)= (6.6^{+0.5}_{-0.4})\times 10^{20}s^{1.57\pm0.02}$ for the ``Hale'' sunspot population, and $\Phi(s)=(14.8^{+4.7}_{-3.6})\times 10^{20}s^{1.06\pm0.09}$ for the ``anti-Hale'' sunspot population. For all sunspots, the formula is $\Phi(s)= (7.6^{+0.3}_{-0.4})\times 10^{20}s^{1.49\pm0.02}$. \end{enumerate} Other significant results are: \begin{enumerate} \item Sunspot emergence is hemispherically asymmetric, with the southern hemisphere dominant in Cycle 23 and the northern hemisphere dominant in Cycle 24. But sunspot eruptions are hemisphere-symmetric over a magnetic cycle ($\sim22$ years), i.e., the ratio $N_s(23)N_s(24)/N_n(23)N_n(24)\sim 1.04$ is close to unity. \item The number of sunspots that erupted in Cycle 23 is $\sim$1.6 times that of Cycle 24. The magnetic flux that erupted through Cycle 23 is $\sim$2.6 times that of Cycle 24. \item Statistically, we find that the tilt angles decrease with increasing pole separation according to $|\gamma(s)|=(-24.88\pm1.08)\log_{10}s+(35.84\pm0.64)$. For a more accurate formula, we fit a parabola: $|\gamma|=(14.6\pm3.1)(\log s)^2-(38.8\pm3.1)\log s+(38.1\pm0.8)$. \item Empirically, magnetic flux is related to the tilt angle by $\Phi(\gamma)=1.1\times10^{23}\times (0.87)^{|\gamma|}$. Sunspot magnetic flux decreases modestly with increasing tilt angle. \end{enumerate} \acknowledgments JL thanks David Jewitt and Xiaodong Zhou for discussions. She would like to thank Andr\'{e}s Mu\~{n}oz-Jaramillo for discussions and suggestions at the Fall AGU 2015, Mei Zhang for her visit and discussions, and Drs. Roger Ulrich, Marco Velli, and Aimee Norton for their enthusiastic support. Dr. Y.-M. Wang read the manuscript and provided valuable comments. The comments by the anonymous referee have helped to greatly improve the paper. This work is partially supported by NASA grant NNX15AF29G awarded to JL. \clearpage
\section{Introduction} Throughout the present paper, all rings are assumed to be commutative and noetherian. Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. The asymptotic behavior of the quotient modules $M/I^n M$ of $M$ for large integers $n$ is one of the most classical subjects in commutative algebra. Among other things, the asymptotic stability of the associated prime ideals and depths of $M/I^n M$ has been actively studied. Brodmann \cite{B} proved that the set of associated prime ideals of $M/I^n M$ is stable for large $n$. Kodiyalam \cite{Ko} showed that the depth of $M/I^n M$ attains a stable constant value for all large $n$ when $R$ is local. There are a lot of studies about this subject; see \cite{Br, M, RS} for instance. The purpose of this paper is to proceed with the study of the above subject. In particular, we consider the existence of an integer $k$ such that $\depth (M/I^t M)_\p=\depth (M/I^k M)_\p$ for all integers $t\ge k$ and all prime ideals $\p$ of $R$. In this direction, by using the openness of the codepth loci of modules over excellent rings studied by Grothendieck \cite{G}, Rotthaus and \c{S}ega \cite{RS} proved that such an integer $k$ exists if $R$ is excellent, $M$ is Cohen--Macaulay, and $I$ contains an $M$-regular element. We aim to improve their theorem by applying the ideas of their proofs. However, in our proof, we use the methods developed in \cite{K}, not those of Grothendieck. The main result of this paper is the following theorem; for the definition of an acceptable ring in the sense of Sharp \cite{S}, see Definition \ref{def rings}. Obviously, we may replace all $\bar{R}$ in the result below with $R$. It gives a common generalization of the above-mentioned theorems proved in \cite{Ko} and \cite{RS}. \begin{thm}[Corollary \ref{main cor}]\label{main} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. Put $\bar{R}=R/(I+\operatorname{Ann}_R (M))$. Then there is an integer $k>0$ such that $$ \depth (M/I^t M)_\p=\depth (M/I^k M)_\p $$ for all integers $t\ge k$ and all prime ideals $\p$ of $R$ in each of the following cases. \begin{enumerate}[\rm(1)] \item $M$ is Cohen--Macaulay. \item $M/I^n M$ is Cohen--Macaulay for some $n>0$. \item $\bar{R}$ is a homomorphic image of a Cohen--Macaulay ring. \item $\bar{R}$ is semi-local. \item $\bar{R}$ is excellent. \item $\bar{R}$ is quasi-excellent and catenary. \item $\bar{R}$ is acceptable. \end{enumerate} \end{thm} The organization of this paper is as follows: in Section 2, we introduce the required background and give several definitions and basic lemmas about graded rings and modules. In Section 3, we study the openness of the codepth loci of graded modules. We give a sufficient condition for the codepth loci of a graded module to be open, and for the depths of localizations of homogeneous components of a graded module to be eventually stable. In Section 4, we prove Theorem \ref{main} and consider some examples. \section{Preliminaries} This section is devoted to preliminaries for the later sections. We give several definitions and basic lemmas about graded rings and modules. In this section, we assume that $A=\bigoplus_{i\ge 0}A_i$ is a graded ring and that $M=\bigoplus_{i\in \Z}M_i$ is a finitely generated graded $A$-module. The ring $A$ is a finitely generated $A_0$-algebra. For any $i\in \Z$, the $A_0$-module $M_i$ is finitely generated. Let $S$ be a multiplicatively closed subset of $A_0$.
Then $A_S=\bigoplus_{i\ge 0}(A_i)_S$ is also a graded ring, and $M_S=\bigoplus_{i\in \Z}(M_i)_S$ is a finitely generated graded $A_S$-module. In particular, $A_\p$ is a graded ring having the local base ring $(A_0)_\p$ for any prime ideal $\p$ of $A_0$. Similarly, $A/IA$ and $M/IM$ are graded for any ideal $I$ of $A_0$. A graded ring $A$ which, as an $A_0$-algebra, is generated by elements of $A_1$ will be called \textit{homogeneous}. Every ring $R$ is a graded ring $A$ with $A_0=R$ and $A_i=0$ for all $i>0$. We denote by $\operatorname{Ann}_{A_0} (M)$ the annihilator ideal of $M$. The \textit{dimension} of $M$ as an $A_0$-module is given by $\dim_{A_0}(M) =\dim (A_0/\operatorname{Ann}_{A_0} (M))$. Let $A_0$ be a local ring. In general, $M$ is not finitely generated as an $A_0$-module. Here, the \textit{depth} of $M$ as an $A_0$-module is defined as follows; see \cite[Definition 1.2.1]{RS}. Note that this coincides with the one defined in \cite[Definition 9.1.1]{BH}. \begin{dfn} Let $(A_0, \m_0)$ be a local ring. If $M$ is the zero module, then we set $\depth_{A_0}(M)=\infty$; otherwise, we define $\depth_{A_0}(M)={\rm sup}\{n \ge 0 \mid$ there is an $M$-regular sequence $\bm{x}=x_1,\ldots,x_n$ in $\m_0 \}$. Also, if $M$ is the zero module, then we set $\codepth_{A_0}(M)=-\infty$; otherwise we define $\codepth_{A_0}(M)=\dim_{A_0}(M)-\depth_{A_0}(M)$. \end{dfn} In this paper, the following notation is used. \begin{dfn} Let $R$ be a ring, $I$ an ideal of $R$, and $n\ge 0$ an integer. \begin{itemize} \item $\V_{R}(I) = \{\p\in\spec (R)\mid I\subseteq\p\}$. \item $\cm(R) = \{\p\in\spec (R)\mid \dim(R_\p) \le \depth(R_\p) \}$. \item $\C_n^{A_0}(M) = \{\p\in\spec (A_0)\mid \codepth_{(A_0)_\p} (M_\p) \le n \}$. \item $\cm_{A_0} (M) = \{\p\in\spec (A_0)\mid \dim_{(A_0)_\p} (M_\p) \le \depth_{(A_0)_\p} (M_\p) \} = \C_0^{A_0}(M)$. \end{itemize} \end{dfn} We prepare several basic lemmas. Some of the results below are proved in \cite[Section 1]{RS}. However, due to some differences, we include proofs for the benefit of the reader. \begin{lem}\label{RS1.1.1} Suppose that $A$ is homogeneous. Then there exists an integer $k$ such that $\ann_{A_0}(M_t)=\ann_{A_0}(M_k)$ for all integers $t\ge k$. \end{lem} \begin{proof} There is an integer $l$ such that $M_{t+1}=A_1 M_t$ for any integer $t\ge l$ since $A$ is homogeneous and $M$ is a finitely generated $A$-module. For any $t\ge l$, the ideal $\ann_{A_0}(M_{t+1})$ contains $\ann_{A_0}(M_t)$. As $A_0$ is noetherian, there exists an integer $k$ such that $\ann_{A_0}(M_t)=\ann_{A_0}(M_k)$ for all integers $t\ge k$. \end{proof} \begin{lem}\label{RS1.3} The function $F:\ass_A(M) \to \ass_{A_0}(M)$ defined by $F(P)=P\cap A_0$ is well defined and surjective. In particular, $\ass_{A_0}(M)=\bigcup_{i\in \Z} \ass_{A_0}(M_i)$ is a finite set. \end{lem} \begin{proof} For any prime ideal $P\in\ass_A(M)$, composing the natural injections $A_0/P\cap A_0 \to A/P$ and $A/P \to M$ shows that $P\cap A_0$ belongs to $\ass_{A_0}(M)$. Hence $F$ is well defined. Let $\p\in\ass_{A_0}(M)$. It follows from \cite[Theorem 6.2]{Mat} and \cite[Proposition 1.2.1]{BH} that there exists a prime ideal $Q$ of $A_\p$ which belongs to $\ass_{A_\p}(M_\p)$ and contains $\p A_\p$. Then $Q=P A_\p$ for some $P\in \ass_A(M)$. The ideal $Q$ is contained in the maximal ideal $\m=\p(A_0)_\p\oplus\bigoplus_{i>0} (A_i)_\p$ of $A_\p$ as $Q$ is graded; see \cite[Lemma 1.5.6]{BH}. We easily see that $\p=P\cap A_0$. \end{proof} \begin{lem}\label{local lemma} Let $(A_0, \m_0)$ be a local ring.
\begin{enumerate}[\rm(1)] \item There is an equality $\dim_{A_0}(M)={\rm sup}\{\dim_{A_0}(M_i) \mid i\in \Z \}$. \item One has the equality $\depth_{A_0}(M)={\rm inf}\{\depth_{A_0}(M_i) \mid i\in \Z \}$. \item Let $0\to N\to M\to L\to 0$ be an exact sequence of finitely generated graded $A$-modules. Then $$ \depth_{A_0}(M)\ge{\rm min}\{\depth_{A_0}(N),\ \depth_{A_0}(L)\}. $$ \item Suppose that a sequence $\bm{x}=x_1,\ldots,x_n$ of elements in $\m_0$ is an $M$-regular sequence. Then we have $$ \dim_{A_0}(M)=\dim_{A_0}(M/\bm{x} M)+n, \ {\rm and} \ \depth_{A_0}(M)=\depth_{A_0}(M/\bm{x} M)+n. $$ \end{enumerate} \end{lem} \begin{proof} (1): Since $\ann_{A_0}(M)$ is contained in $\ann_{A_0}(M_i)$, we get $\dim_{A_0}(M)\ge \dim_{A_0}(M_i)$ for any $i\in \Z$, which means $\dim_{A_0}(M)\ge{\rm sup}\{\dim_{A_0}(M_i) \mid i\in \Z \}$. Conversely, if $\p$ is a prime ideal of $A_0$ containing $\ann_{A_0}(M)$, then $M_\p$ is not the zero module because $M$ is finitely generated as an $A$-module. So $(M_i)_\p\ne 0$ for some $i\in \Z$, and thus $\dim(A_0/\p)\le\dim_{A_0}(M_i)$. This says that the other inequality holds. (2): By definition, we observe that $\depth_{A_0}(M)\le{\rm inf}\{\depth_{A_0}(M_i) \mid i\in \Z \}$. Let $d=\depth_{A_0}(M)$ and let $\bm{y}=y_1,\ldots,y_d$ be an $M$-regular sequence in $\m_0$. The ideal $\m_0$ consists of zero-divisors of $M/\bm{y} M$. It follows from \cite[Theorem 6.1]{Mat} and Lemma \ref{RS1.3} that $\m_0$ is in $\ass_{A_0}(M/\bm{y} M)=\bigcup_{i\in \Z} \ass_{A_0}(M_i/\bm{y} M_i)$. The other inequality holds as $\bm{y}$ is a maximal $M_i$-regular sequence for some $i\in \Z$. (3): We can choose an integer $i\in \Z$ such that $\depth_{A_0}(M)=\depth_{A_0}(M_i)$ by (2). There is an exact sequence $0\to N_i\to M_i\to L_i\to 0$ of $A_0$-modules. It follows from (2) and \cite[Proposition 1.2.9]{BH} that $$ \depth_{A_0}(M)=\depth_{A_0}(M_i)\ge{\rm min}\{\depth_{A_0}(N_i),\ \depth_{A_0}(L_i)\} \ge{\rm min}\{\depth_{A_0}(N),\ \depth_{A_0}(L)\}. $$ (4): The assertion follows from (1) and (2). \end{proof} \begin{lem}\label{graded lemma} Let $\p$ be a prime ideal of $A_0$, and let $I=\ann_{A_0}(M)$. \begin{enumerate}[\rm(1)] \item If $\p$ belongs to $\supp_{A_0}(M)=\V_{A_0}(I)$, then $\dim_{(A_0)_\p}(M_\p)=\height (\p/I)$. \item Suppose that a sequence $\bm{x}=x_1,\ldots,x_n$ of elements in $\p$ is an $M_\p$-regular sequence. Then there exists $f\in A_0\setminus\p$ such that $\bm{x}$ is an $M_f$-regular sequence. \item The prime ideal $\p$ belongs to $\ass_{A_0}(M)$ if and only if $\depth_{(A_0)_\p}(M_\p)=0$. \end{enumerate} \end{lem} \begin{proof} (1): It is seen that $\ann_{(A_0)_\p}(M_\p)=I(A_0)_\p$ since $M$ is a finitely generated $A$-module. (2): We may assume $n=1$. Let $\varphi$ be the multiplication map of $M$ by $x_1$. The submodule $\ker \varphi$ of $M$ is a finitely generated $A$-module, and $(\ker \varphi)_\p$ is the zero module. We have $(\ker \varphi)_f=0$ for some $f\in A_0\setminus\p$, which means that $x_1$ is an $M_f$-regular element. (3): It follows from \cite[Theorem 6.2]{Mat} that $\p$ is in $\ass_{A_0}(M)$ if and only if $\p(A_0)_\p$ is in $\ass_{(A_0)_\p}(M_\p)$. Hence the ``only if'' part is trivial. In order to prove the ``if'' part, suppose $\depth_{(A_0)_\p}(M_\p)=0$. There is an integer $i\in \Z$ such that $\depth_{(A_0)_\p}(M_i)_\p=\depth_{(A_0)_\p}(M_\p)=0$ by Lemma \ref{local lemma} (2). The prime ideal $\p$ belongs to the subset $\ass_{A_0}(M_i)$ of $\ass_{A_0}(M)$.
\end{proof} \begin{rmk}\label{supp} The subset $\supp_{A_0}(M)=\V_{A_0}(\operatorname{Ann}_{A_0} (M))$ of $\spec(A_0)$ is closed. Therefore, if a prime ideal $\p$ of $A_0$ is not in $\supp_{A_0}(M)$, then the codepth locus $\C_n^{A_0}(M)$ contains a nonempty open subset $(\spec(A_0)\setminus\supp_{A_0}(M))\cap\V_{A_0}(\p)$ of $\V_{A_0}(\p)$ for any $n\ge 0$. \end{rmk} We close this section by stating an elementary lemma about open subsets of the spectrum of rings. \begin{lem}\label{RS4.0} Let $R$ be a ring, and let $\{U_n^t\}_{n\ge0 ,t\in\Z}$ be a family of open subsets of $\spec(R)$. Suppose that $U_n^t$ is contained in both $U_n^{t+1}$ and $U_{n+1}^t$ for all $t\in\Z$ and all $n\ge0$. Then there is an integer $k$ such that $U_n^t=U_n^k$ for all $t\ge k$ and all $n\ge 0$. \end{lem} \begin{proof} There is an integer $k_1$ such that $U_t^t=U_{k_1}^{k_1}$ for all $t\ge k_1$ since $R$ is noetherian. Also, there is an integer $k_2$ such that $U_n^t=U_n^{k_2}$ for all $t\ge k_2$ and all $0\le n< k_1$. Put $k={\rm max}\{k_1,k_2\}$. For all $t\ge k$ and all $n\ge 0$, we have $U_n^t=U_n^k$ by considering the case $0\le n< k_1$ and the case $k_1\le n$ separately. \end{proof} \section{The openness of the codepth loci of graded modules} In this section, we study the openness of the codepth loci of graded modules. The purpose of this section is to give a sufficient condition for the depths of localizations of homogeneous components of a graded module to be eventually stable. As in the previous section, we assume in this section that $A=\bigoplus_{i\ge 0}A_i$ is a graded ring and that $M=\bigoplus_{i\in \Z}M_i$ is a finitely generated graded $A$-module. Below is a well-known result of Grothendieck \cite[(6.11.5)]{G}. \begin{lem}\label{f.g. gene closed} Let $(R,\m)$ be a local ring, $\p$ a prime ideal of $R$, and $N$ a finitely generated $R$-module. Then we have $$ \codepth_{R_\p}(N_\p) \le \codepth_{R}(N). $$ \end{lem} \begin{proof} We may assume that $\p$ belongs to $\supp_R(N)$. Let $\bm{x}=x_1,\ldots,x_n$ be a maximal $N$-regular sequence in $\p$. There exists an associated prime ideal $\q$ of $N/\bm{x}N$ containing $\p$. By \cite[Theorem 17.2]{Mat}, we obtain \begin{align*} \depth_R(N)-\depth_{R_\p}(N_\p)&\le \depth_R(N)-n= \depth_R(N/\bm{x}N)\le \dim(R/\q) \\ &\le \dim(R/\p)\le \dim_R(N)-\dim_{R_\p}(N_\p). \end{align*} This says that the assertion holds. \end{proof} Lemma \ref{f.g. gene closed} can be extended as follows. The proof of \cite[Lemma 2.5]{RS}, which states a fact similar to the one below, follows the ideas of Grothendieck's proof given in \cite{G}. Our proof is simpler than that. \begin{lem}\label{gene closed} Let $\p$ and $\q$ be prime ideals of $A_0$ with $\p\subseteq\q$. Then we have $$ \codepth_{(A_0)_\p}(M_\p) \le \codepth_{(A_0)_\q}(M_\q). $$ \end{lem} \begin{proof} By (1) and (2) of Lemma \ref{local lemma}, we can take integers $i, j, k, l \in\Z$ such that \begin{align*} &\dim_{(A_0)_\p}(M_\p)=\dim_{(A_0)_\p}(M_i)_\p, \ \depth_{(A_0)_\p}(M_\p)=\depth_{(A_0)_\p}(M_j)_\p, \\ &\dim_{(A_0)_\q}(M_\q)=\dim_{(A_0)_\q}(M_k)_\q,\ {\rm and} \ \depth_{(A_0)_\q}(M_\q)=\depth_{(A_0)_\q}(M_l)_\q. \end{align*} It follows from Lemma \ref{f.g. gene closed} that $$ \codepth_{(A_0)_\p}(M_\p) = \codepth_{(A_0)_\p}(N_\p) \le \codepth_{(A_0)_\q}(N_\q) = \codepth_{(A_0)_\q}(M_\q) $$ as $N:=M_i\oplus M_j\oplus M_k\oplus M_l$ is a finitely generated $A_0$-module. \end{proof} We consider the openness of the codepth loci of a graded module to state the main result of this paper.
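The following toy example, which is ours and is included only for orientation, illustrates the behavior of the loci $\C_n^{A_0}(M)$. \begin{ex} Let $A=A_0=k[x,y]$ be a polynomial ring over a field $k$, regarded as a graded ring with the trivial grading ($A_0=A$ and $A_i=0$ for $i>0$), and let $M=A/(x^2,xy)$. For $\p=(x,y)$ we have $\dim_{(A_0)_\p}(M_\p)=\height(\p/(x))=1$, while the image of $x$ in $M_\p$ is nonzero and annihilated by $\p$, so $\depth_{(A_0)_\p}(M_\p)=0$ and $\codepth_{(A_0)_\p}(M_\p)=1$. At every other prime $\p\in\supp_{A_0}(M)=\V_{A_0}((x))$ the element $y$ is invertible, so $M_\p\simeq (A/(x))_\p$ is Cohen--Macaulay and $\codepth_{(A_0)_\p}(M_\p)=0$. Hence $\C_0^{A_0}(M)=\spec(A_0)\setminus\{(x,y)\}$ is open, and $\C_n^{A_0}(M)=\spec(A_0)$ for all $n\ge 1$. \end{ex}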
The following theorem is a graded version of \cite[Theorem 5.4]{K}. \begin{thm}\label{cm2} Let $\p\in\cm_{A_0}(M)$. If $\cm(A_0/\p)$ contains a nonempty open subset of $\spec (A_0/\p)$, then $\cm_{A_0}(M)$ contains a nonempty open subset of $\V_{A_0}(\p)$. \end{thm} \begin{proof} First of all, we may assume that $\p$ belongs to $\supp_{A_0}(M)$ by Remark \ref{supp}. Also, note that we may assume $\supp_{A_0}(M)=\spec(A_0)$ by replacing $A$ with $A/\operatorname{Ann}_{A} (M)$, and can freely replace our ring $A$ with its localization $A_f$ for any element $f\in A_0\setminus\p$ to prove the theorem; see \cite[Lemmas 2.5 and 2.6]{K}. Put $d=\depth_{(A_0)_\p}(M_\p)=\dim_{(A_0)_\p}(M_\p)=\height\p$. We can choose a sequence $\bm{x}=x_1,\ldots,x_d$ in $\p$ such that it is an $M_\p$-regular sequence and $\height\bm{x}A_0=d$ by Lemmas \ref{RS1.3}, \ref{local lemma} (4), and \ref{graded lemma} (3). We may assume that $A_0/\p$ is Cohen--Macaulay and that $\p^r$ is contained in $\bm{x}A_0$ for some $r>0$. Also, Lemma \ref{graded lemma} (2) yields that we may assume that $\bm{x}$ is an $M$-regular sequence. Set $\overline{A}=A/\bm{x} A$, $\overline{\p}=\p\overline{A}$, and $\overline{M}=M/\bm{x} M$. Thanks to \cite[Theorem 24.1]{Mat}, for each $0\le i\le r-1$, we may assume that the graded $A$-module $\overline{\p}^{i}\overline{M}/\overline{\p}^{i+1}\overline{M}$ is free as an $A_0/\p$-module. We claim that $\cm_{A_0}(M)$ contains $\V_{A_0}(\p)$. Let $\q$ be a prime ideal of $A_0$ containing $\p$. We obtain $$ \depth_{(A_0)_\q} (\overline{\p}^{i}\overline{M}/\overline{\p}^{i+1}\overline{M})_\q=\depth (A_0/\p)_\q=\height(\q/\p)=\height(\q/\bm{x} A_0) $$ for each $0\le i\le r-1$. It follows from (3) and (4) of Lemma \ref{local lemma} that $$ \depth_{(A_0)_\q}(M_\q)=\depth_{(A_0)_\q}(\overline{M}_\q)+d \ge \height(\q/\bm{x} A_0)+d=\height\q. $$ Hence, $\q$ belongs to $\cm_{A_0}(M)$. \end{proof} Below is a direct corollary of Theorem \ref{cm2}. It is a graded version of \cite[Corollary 5.5 (1)]{K}. \begin{cor}\label{cm openness} Suppose that $\cm(A_0/\p)$ contains a nonempty open subset of $\spec (A_0/\p)$ for any $\p\in\supp_{A_0}(M)\cap\cm_{A_0}(M)$. Then $\cm_{A_0}(M)$ is an open subset of $\spec(A_0)$. \end{cor} \begin{proof} The assertion follows from \cite[Theorem 24.2]{Mat}, Remark \ref{supp}, Lemma \ref{gene closed} and Theorem \ref{cm2}. \end{proof} When the base ring of $A/\operatorname{Ann}_{A} (M)$ is catenary, Theorem \ref{cm2} can be extended as follows. \begin{thm}\label{codepth} Let $n\ge 0$ be an integer and let $\p\in\C_n^{A_0}(M)$. Suppose that the ring $A_0/\operatorname{Ann}_{A_0} (M)$ is catenary. If $\cm(A_0/\p)$ contains a nonempty open subset of $\spec (A_0/\p)$, then $\C_n^{A_0}(M)$ contains a nonempty open subset of $\V_{A_0}(\p)$. \end{thm} \begin{proof} In the same way as at the beginning of the proof of Theorem \ref{cm2}, we may assume $\p\in\supp_{A_0}(M)$ and can freely replace our ring $A$ with its localization $A_f$ for any element $f\in A_0\setminus\p$ to prove the theorem. We prove the theorem by induction on $n$. We have already shown the case where $n=0$ in Theorem \ref{cm2}. Let $n>0$ and $d=\depth_{(A_0)_\p}(M_\p)$. By the induction hypothesis, we may assume $\codepth_{(A_0)_\p}(M_\p)=n$. Thanks to Lemma \ref{graded lemma} (2), we may assume that the following conditions are satisfied. \begin{enumerate}[\rm (a)] \item The prime ideal $\p$ contains any minimal prime ideal of $\operatorname{Ann}_{A_0} (M)$.
\item There is an $M$-regular sequence $\bm{x}=x_1,\ldots,x_d$ in $\p$. \end{enumerate} Set $N=M/\bm{x} M$. Note that the $\p$-torsion submodule $\Gamma_{\p}(N)$ of $N$ is finitely generated and graded as an $A$-module. We easily see that $\supp_{A_0}(\Gamma_{\p}(N))=\V_{A_0}(\p)$; see Lemmas \ref{local lemma} (4) and \ref{graded lemma} (3). Since $$ \dim_{(A_0)_\p}(\Gamma_{\p}(N))_\p=0<n=\codepth_{(A_0)_\p}(M_\p)=\codepth_{(A_0)_\p}(N_\p)=\dim_{(A_0)_\p}(N_\p), $$ it is seen that $\p$ is in $\C_0^{A_0}(\Gamma_{\p}(N))$ and $\dim_{(A_0)_\p}(N/\Gamma_{\p}(N))_\p=\dim_{(A_0)_\p}(N_\p)=n$. On the other hand, Lemma \ref{graded lemma} (3) implies $\depth_{(A_0)_\p}(N/\Gamma_{\p}(N))_\p>0$. Thus $\p$ belongs to $\C_{n-1}^{A_0}(N/\Gamma_{\p}(N))$. By the induction hypothesis, we may assume that the following condition (c) is satisfied. \begin{enumerate}[\rm (c)] \item The set $\V_{A_0}(\p)$ is contained in both $\C_0^{A_0}(\Gamma_{\p}(N))$ and $\C_{n-1}^{A_0}(N/\Gamma_{\p}(N))$. \end{enumerate} We prove that $\V_{A_0}(\p)$ is contained in $\C_n^{A_0}(M)$. Let $\q$ be a prime ideal of $A_0$ containing $\p$. Now the ring $A_0/\operatorname{Ann}_{A_0} (M)$ is catenary. By (a) and (c), we have \begin{align*} \depth_{(A_0)_\q}(\Gamma_{\p}(N))_\q=\dim_{(A_0)_\q}(\Gamma_{\p}(N))_\q=\height(\q/\p) &=\height(\q/\operatorname{Ann}_{A_0} (M))-\height(\p/\operatorname{Ann}_{A_0} (M))\\ &=\dim_{(A_0)_\q}(M_\q)-\dim_{(A_0)_\p}(M_\p) \\ &=\dim_{(A_0)_\q}(M_\q)-(n+d). \end{align*} Note that $\supp_{A_0}(\Gamma_{\p}(N))=\V_{A_0}(\p)$ is contained in $\supp_{A_0}(N/\Gamma_{\p}(N))$. By (b) and (c), we get \begin{align*} \depth_{(A_0)_\q}(N/\Gamma_{\p}(N))_\q \ge \dim_{(A_0)_\q}(N/\Gamma_{\p}(N))_\q-(n-1) &=\dim_{(A_0)_\q}(N_\q)-(n-1) \\ &=(\dim_{(A_0)_\q}(M_\q)-d)-(n-1) \\ &>\dim_{(A_0)_\q}(M_\q)-(n+d). \end{align*} Therefore, we observe that \begin{align*} \depth_{(A_0)_\q}(M_\q)=\depth_{(A_0)_\q}(N_\q)+d &\ge {\rm min} \{\depth_{(A_0)_\q}(\Gamma_{\p}(N))_\q,\ \depth_{(A_0)_\q}(N/\Gamma_{\p}(N))_\q\}+d \\ &= \dim_{(A_0)_\q}(M_\q)-n \end{align*} by Lemma \ref{local lemma} (3), which means that $\q$ belongs to $\C_n^{A_0}(M)$. \end{proof} The same result as Corollary \ref{cm openness} holds for codepth loci. \begin{cor}\label{codepth2} Suppose that the ring $A_0/\operatorname{Ann}_{A_0} (M)$ is catenary. \begin{enumerate}[\rm(1)] \item Let $n\ge 0$ be an integer. Suppose that $\cm(A_0/\p)$ contains a nonempty open subset of $\spec (A_0/\p)$ for any $\p\in\supp_{A_0}(M)\cap\C_n^{A_0}(M)$. Then $\C_n^{A_0}(M)$ is open. \item Suppose that $\cm(A_0/\p)$ contains a nonempty open subset of $\spec (A_0/\p)$ for any $\p\in\supp_{A_0}(M)$. Then $\C_n^{A_0}(M)$ is open for any integer $n\ge 0$. \end{enumerate} \end{cor} \begin{proof} The assertions follow from \cite[Theorem 24.2]{Mat}, Remark \ref{supp}, Lemma \ref{gene closed} and Theorem \ref{codepth}. \end{proof} We study the asymptotic behavior of the depths of localizations of homogeneous components of a graded module. We prepare the following basic lemma to state Lemma \ref{RS4.2}. \begin{lem}\label{RS4.1} Suppose that $A$ is homogeneous and that $(A_0, \m_0, k_0)$ is local. Then there exists an integer $k$ such that $\depth_{A_0} (M_t)=\depth_{A_0} (M_k)$ for all integers $t\ge k$. \end{lem} \begin{proof} It is seen that $\Ext_{A_0}^i(k_0,M)\simeq\bigoplus_{t\in\Z}\Ext_{A_0}^i(k_0,M_t)$ is a finitely generated graded $A$-module for any $0\le i\le \dim(A_0)$. 
Since $\depth_{A_0} (M_t)={\rm inf}\{ i\mid \Ext_{A_0}^i(k_0,M_t)\ne 0 \}$ for each $t\in\Z$, the assertion follows from Lemma \ref{RS1.1.1}. \end{proof} Applying the ideas of the proof of \cite[Theorem 4.2]{RS}, we can prove the result below, which extends it. \begin{lem}\label{RS4.2} Suppose that $A$ is homogeneous. Denote by $N_t$ the graded $A$-module $\bigoplus_{i\ge t}M_i$ for each $t\in\Z$. If $\C_n^{A_0}(N_t)$ is open for all $t\in \Z$ and all $n\ge 0$, then there is an integer $k$ such that $$ \depth_{(A_0)_\p}(M_t)_\p=\depth_{(A_0)_\p}(M_k)_\p $$ for all integers $t\ge k$ and all prime ideals $\p$ of $A_0$. \end{lem} \begin{proof} It follows from (1) and (2) of Lemma \ref{local lemma} that $\C_n^{A_0}(N_t)$ is contained in both $\C_n^{A_0}(N_{t+1})$ and $\C_{n+1}^{A_0}(N_t)$ for all $t\in\Z$ and all $n\ge 0$. By Lemmas \ref{RS1.1.1} and \ref{RS4.0}, we can choose an integer $l\in\Z$ such that \begin{equation}\label{4.2RS1} J:=\ann_{A_0}(M_t)=\ann_{A_0}(M_l)\ \ {\rm and} \ \ U_n:=\C_n^{A_0}(N_t)=\C_n^{A_0}(N_l) \end{equation} for all $t\ge l$ and $n\ge 0$. Any prime ideal of $A_0$ belongs to $U_n$ for some $n\ge 0$. An analogous argument to the proof of Lemma \ref{RS4.0} shows that there exists an integer $m\ge 0$ such that $U_m=\spec(A_0)$. For each $0\le n\le m-1$, we can write $\V_{A_0}(I_n)=\spec(A_0)\setminus U_n$ for some ideal $I_n$ of $A_0$. The subset $\bigcup_{n=0}^{m-1} \ass_{A_0}(A_0/I_n)$ of $\spec(A_0)$ is finite. It follows from Lemma \ref{RS4.1} that we can take $k\ge l$ such that \begin{equation}\label{4.2RS2} \depth_{(A_0)_\q}(M_t)_\q=\depth_{(A_0)_\q}(M_k)_\q \end{equation} for any $t\ge k$, and any $\q\in\bigcup_{n=0}^{m-1} \ass_{A_0}(A_0/I_n)$. Let $\p$ be a prime ideal of $A_0$. We claim that $\depth_{(A_0)_\p}(M_t)_\p=\depth_{(A_0)_\p}(M_k)_\p$ for all $t\ge k$. We may assume that $\p$ contains $J$. If $\p$ belongs to $U_0$, then we have $$ \depth_{(A_0)_\p}(M_t)_\p\le \dim_{(A_0)_\p}(M_t)_\p\le \dim_{(A_0)_\p}(N_k)_\p\le \depth_{(A_0)_\p}(N_k)_\p\le \depth_{(A_0)_\p}(M_t)_\p $$ for all $t\ge k$ by (1) and (2) of Lemma \ref{local lemma}. This means that the claim holds. If $\p$ does not belong to $U_0$, then $\codepth_{(A_0)_\p}(N_k)_\p=n+1$ for some $0\le n\le m-1$. As $\p$ is not in $U_n$, we see that $\q\subseteq\p$ for some $\q\in\ass_{A_0}(A_0/I_n)$. By (1) and (2) of Lemma \ref{local lemma}, Lemma \ref{f.g. gene closed}, (\ref{4.2RS1}) and (\ref{4.2RS2}), it is seen that $$ n+1\le \codepth_{(A_0)_\q}(N_k)_\q=\codepth_{(A_0)_\q}(M_t)_\q\le \codepth_{(A_0)_\p}(M_t)_\p\le \codepth_{(A_0)_\p}(N_k)_\p=n+1 $$ for all $t\ge k$. Hence we get $\codepth_{(A_0)_\p}(M_t)_\p=n+1$ for all $t\ge k$. The claim follows from (\ref{4.2RS1}). \end{proof} \section{Asymptotic stability of depths of localizations of modules} In this section, we prove the main result of this paper. All of the results of Theorem \ref{main} are given as corollaries of the theorem below. \begin{thm}\label{local depth} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. Suppose that the ring $\bar{R}:=R/(I+\operatorname{Ann}_R (M))$ is catenary and that $\cm(\bar{R}/\bar{\p})$ contains a nonempty open subset of $\spec (\bar{R}/\bar{\p})$ for any prime ideal $\bar{\p}$ of $\bar{R}$. Then there is an integer $k>0$ such that $$ \depth (M/I^t M)_\p=\depth (M/I^k M)_\p $$ for all integers $t\ge k$ and all prime ideals $\p$ of $R$. \end{thm} \begin{proof} The associated graded ring $A=\bigoplus_{i\ge 0}I^i/I^{i+1}$ is a homogeneous ring.
Then $\bigoplus_{i\ge 0}I^i M/I^{i+1} M$ is a finitely generated graded $A$-module. By Corollary \ref{codepth2} (2) and Lemma \ref{RS4.2}, we find an integer $m>0$ such that \begin{equation}\label{last1} \depth (I^t M/I^{t+1} M)_\p=\depth (I^m M/I^{m+1} M)_\p \end{equation} for all integers $t\ge m$ and all prime ideals $\p$ of $R$. Note that \begin{equation}\label{last2} X:=\supp_R(M) \cap \supp_R(R/I)=\supp_R(M/I^{i} M) \end{equation} for any $i>0$. Applying Corollary \ref{codepth2} (2) to $A=A_0=R$, we see that $U_n^t :=\bigcup_{m\le i\le t} \C_n^{R}(M/I^i M)$ is open for any $t\ge m$ and any $n\ge 0$. Lemma \ref{RS4.0} implies that there is an integer $l\ge m$ such that $U_n^t=U_n^l$ for all $t\ge l$ and all $n\ge0$. Put $k=l+1$. By (\ref{last2}), we have only to show that the following claim holds. \begin{spacing}{1.2} \ \textbf{Claim.} \ $\C_n^{R}(M/I^t M)=\C_n^{R}(M/I^k M)$ for all $t\ge k$ and all $n\ge0$. \end{spacing} Fix an integer $n\ge 0$. Let $\p$ be a prime ideal of $R$ belonging to $\C_n^{R}(M/I^t M)$ for some $t\ge k$. We prove that $\p$ is in $\C_n^{R}(M/I^i M)$ for all $i\ge k$. We may assume that $\p$ is in $X$. By (\ref{last1}) and (\ref{last2}), we obtain $r:=\depth (I^k M/I^{k+1} M)_\p=\depth (I^i M/I^{i+1} M)_\p$ and $d:=\dim(M/I^k M)_\p=\dim(M/I^i M)_\p$ for all $i\ge m$. The prime ideal $\p$ belongs to $\C_n^{R}(M/I^s M)$ for some $m\le s \le l$ since $U_n^t=U_n^l$, which means $\depth (M/I^s M)_\p\ge d-n$. For each integer $i\ge s$, there is an exact sequence $$ 0 \to (I^i M/I^{i+1} M)_\p \to (M/I^{i+1} M)_\p \to (M/I^i M)_\p \to 0. $$ Suppose $r<d-n$. It follows from \cite[Proposition 1.2.9]{BH} and by induction on $i$ that $\depth (M/I^{i+1} M)_\p=r$ for any $i\ge s$. In particular, we have $\depth (M/I^t M)_\p=r<d-n$. This is a contradiction. Hence, we get $r\ge d-n$. Similarly, we see by induction on $i$ that $\depth (M/I^i M)_\p\ge d-n$ for any $i\ge s$. This means $\p$ belongs to $\C_n^{R}(M/I^i M)$ for all integers $i\ge s$. The proof of the claim is now complete. \end{proof} We recall a few definitions used in our next result. \begin{dfn}\label{def rings} A ring $R$ is said to be \textit{quasi-excellent} if the following two conditions are satisfied. \begin{enumerate}[\rm(1)] \item For all finitely generated $R$-algebras $S$, $\reg(S) = \{\p\in\spec (S)\mid$ the local ring $S_\p$ is regular$ \}$ is open. \item All the formal fibers of $R_\p$ are regular for all prime ideals $\p$ of $R$. \end{enumerate} A ring $R$ is said to be \textit{excellent} if it is quasi-excellent and universally catenary. A ring in which ``regular'' is replaced with ``Gorenstein'' in the definition of an excellent ring is called an \textit{acceptable ring} \cite{S}. \end{dfn} Applying the above theorem, we can prove the main result of this paper. \begin{cor}\label{main cor} Let $R$ be a ring and $I$ an ideal of $R$. Let $M$ be a finitely generated $R$-module. Put $\bar{R}=R/(I+\operatorname{Ann}_R (M))$. Then there is an integer $k>0$ such that $$ \depth (M/I^t M)_\p=\depth (M/I^k M)_\p $$ for all integers $t\ge k$ and all prime ideals $\p$ of $R$ in each of the following cases.\\ {\rm(1)} $M$ is Cohen--Macaulay. {\rm(2)} $M/I^n M$ is Cohen--Macaulay for some $n>0$. {\rm(3)} $\bar{R}$ is a homomorphic image of a Cohen--Macaulay ring. {\rm(4)} $\bar{R}$ is semi-local. {\rm(5)} $\bar{R}$ is excellent. {\rm(6)} $\bar{R}$ is quasi-excellent and catenary. {\rm(7)} $\bar{R}$ is acceptable.
\end{cor} \begin{proof} In any of the latter three cases, the assertion follows from Theorem \ref{local depth}. (1): It is seen by \cite[Theorem 2.1.3 (b)]{BH} and \cite[Theorem 5.4]{K} that $R/\operatorname{Ann}_R (M)$ is catenary and $\cm(R/\p)$ contains a nonempty open subset of $\spec (R/\p)$ for any $\p\in\supp_{R}(M)$. Thus the assertion follows from Theorem \ref{local depth}. (2) and (3): The assertion can be shown in a similar way as in the proof of (1); see \cite[Corollary 5.6]{K}. (4): We may assume that $R$ is local. Let $\hat{R}$ be the completion of $R$ and $\hat{M}$ the completion of $M$. For any prime ideal $\p$ of $R$, there exists a prime ideal $\q$ of $\hat{R}$ such that $\p=\q\cap R$ because $\hat{R}$ is faithfully flat over $R$. It follows from \cite[Proposition 1.2.16 (a)]{BH} that $$ \depth_{R_\p} (M/I^t M)_\p=\depth_{\hat{R}_\q} (\hat{M}/I^t \hat{M})_\q- \depth_{\hat{R}_\q} (\hat{R}_\q/\p \hat{R}_\q) $$ for any $t>0$. The assertion follows from (3) since $\hat{R}$ is a homomorphic image of a regular local ring. \end{proof} The assumptions about the ring $\bar{R}$ in Theorem \ref{local depth} and Corollary \ref{main cor} are satisfied if the ring $R$ itself satisfies them. The above corollary recovers the theorem of Rotthaus and \c{S}ega \cite{RS}. \begin{cor}[Rotthaus--\c{S}ega]\label{recover1} Let $R$ be an excellent ring and let $M$ be a Cohen--Macaulay $R$-module. Let $I$ be an ideal of $R$ which is not contained in any minimal prime ideal of $M$. Then there is an integer $k>0$ such that $\depth (M/I^t M)_\p=\depth (M/I^k M)_\p $ for all integers $t\ge k$ and all prime ideals $\p$ of $R$. \end{cor} By using the technique of the proof of Theorem \ref{local depth}, the depths of localizations of $M/I^{n+1}M$ can be measured by those of $I^nM/I^{n+1}M$ for each integer $n$. We provide two examples where Corollary \ref{main cor} is applicable, but Corollary \ref{recover1} is not. \begin{ex} Let $R=K \llbracket x, y, z, w \rrbracket/(xy-zw)$ be a quotient of a formal power series ring over a field $K$. Take the ideal $I=(x)$ of $R$ and the finitely generated $R$-module $M=R/(w)$. The ring $R$ is a local hypersurface of dimension 3 that has an isolated singularity. The module $M$ is Cohen--Macaulay, and all elements of $I$ are zero-divisors of $M$. Then $M$ is also a module over $A=K \llbracket x, y, z \rrbracket$. We see that $M\simeq A/(xy)$, $M/I^n M\simeq A/(x^n, xy)$, and $I^nM/I^{n+1}M\simeq A/(x,y)$. Let $\p$ be a prime ideal of $A$. A similar argument to the latter part of the proof of Theorem \ref{local depth} shows that $\depth (M/I^n M)_\p=\height\p - 2$ for any integer $n\ge2$ if $\p$ contains the ideal $(x,y)$ of $A$; otherwise, we have $(M/I^{n+1} M)_\p \simeq (M/I^nM)_\p$ for any integer $n\ge1$. This says that the integer $k=2$ satisfies the assertion of Corollary \ref{main cor}. \end{ex} \begin{ex} Let $R=K [x, y, z]$ be a polynomial ring over a field $K$. Take the ideal $I=(x)$ of $R$ and the finitely generated $R$-module $M=R/(x^m y, x^m z)$, where $m>0$. The ring $R$ is regular but not local. All elements of $I$ are zero-divisors of $M$. The $R$-module $M$ is not Cohen--Macaulay; see \cite[Theorem 2.1.2 (a)]{BH}. We have $$ M/I^n M\simeq R/(x^n, x^m y, x^m z), \quad I^nM/I^{n+1}M\simeq \begin{cases} {R/(x) \quad (n<m)}\\ {R/(x, y, z) \quad (n\ge m)}. \end{cases} $$ Let $\p$ be a prime ideal of $R$. Suppose $\p=(x,y,z)$. We get $\depth (M/I^n M)_\p=2$ for any $1\le n\le m$.
On the other hand, we obtain $\depth (M/I^n M)_\p=0$ for any $n>m$ since the submodule $I^{n-1}M/I^n M$ of $M/I^n M$ is isomorphic to $R/\p$. It is seen that $(M/I^{n+1} M)_\p \simeq (M/I^nM)_\p$ for any integer $n\ge m$ if $\p\ne (x,y,z)$. This says that the integer $k=m+1$ satisfies the assertion of Corollary \ref{main cor}. \end{ex} For two modules whose localizations at every prime ideal have the same depth, the notions of regular sequences coincide. \begin{prop}\label{grade} Let $R$ be a ring. Let $M$ and $N$ be finitely generated $R$-modules. Suppose that $\depth (M_\p)=\depth (N_\p)$ for all prime ideals $\p$ of $R$. Then, for any sequence $\bm{x}=x_1,\ldots,x_n$ in $R$, $\bm{x}$ is an $M$-regular sequence if and only if it is an $N$-regular sequence. In particular, $\grade(J, M)=\grade(J, N)$ for any ideal $J$ of $R$. \end{prop} \begin{proof} We observe that $\supp_R(M)=\supp_R(N)$. We prove the proposition by induction on $n$. It is seen that $\ass_R(M)=\ass_R(N)$ and $\supp_R(M/x M)=\supp_R(N/x N)$ for any $x\in R$ by assumption. This says that the assertion of the proposition holds in the case $n=1$. Suppose $n>1$. We may assume that $\bm{x}'=x_1,\ldots,x_{n-1}$ is a regular sequence on both $M$ and $N$. Then we see that $\depth (M/\bm{x}'M)_\p=\depth (N/\bm{x}'N)_\p$ for all prime ideals $\p$ of $R$. Applying the case $n=1$ proves the assertion. \end{proof} The following two results are direct corollaries of Corollary \ref{main cor} and Proposition \ref{grade}. The latter corollary recovers the theorem of Kodiyalam \cite{Ko}. Note that, unlike Corollary \ref{recover2}, the integer $k$ does not depend on the ideal $J$ in Corollary \ref{sequence cor}. \begin{cor}\label{sequence cor} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. Suppose that we are in one of the cases of {\rm Corollary \ref{main cor}}. Then there is $k>0$ such that $\bm{x}$ is an $M/I^t M$-regular sequence if and only if it is an $M/I^k M$-regular sequence for all integers $t\ge k$ and all sequences $\bm{x}=x_1,\ldots,x_n$ in $R$. In particular, $\grade(J, M/I^t M)=\grade(J, M/I^k M)$ for all integers $t\ge k$ and all ideals $J$ of $R$. \end{cor} \begin{cor}[Kodiyalam]\label{recover2} Let $R$ be a local ring, $I$ and $J$ ideals of $R$, and $M$ a finitely generated $R$-module. Then there is $k>0$ such that $\grade(J, M/I^t M)=\grade(J, M/I^k M)$ for all integers $t\ge k$. \end{cor} Finally, we remark that the theorem proved by Brodmann \cite{B} is recovered from this corollary. \begin{rmk}\label{ASS} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. Lemma \ref{RS1.3} asserts that $\bigcup_{i\ge 0} \ass_{R}(I^i M/I^{i+1} M)$ is a finite set. By induction on $n>0$, it is seen that $\ass_{R}(M/I^{n} M)$ is contained in $\bigcup_{i=0}^{n-1} \ass_{R}(I^i M/I^{i+1} M)$; see \cite[Theorem 6.3]{Mat}. Hence the set $X:=\bigcup_{n>0} \ass_{R}(M/I^{n} M)$ is also finite. It follows from Corollary \ref{recover2} that there is an integer $k>0$ such that $$ \depth (M/I^t M)_\p=\depth (M/I^k M)_\p $$ for all integers $t\ge k$ and all prime ideals $\p$ of $R$ belonging to $X$. This says that for all integers $t\ge k$, $$ \ass_{R}(M/I^{t} M)=\ass_{R}(M/I^{k} M). $$ \end{rmk} \begin{ac} The author would like to thank his supervisor Ryo Takahashi for valuable comments. \end{ac}
\section{Introduction} \label{secintro} A basic tenet of modern cosmology is the idea that the present large-scale structure of the Universe originated through the gravitational growth of small matter inhomogeneities. These initial density fluctuations are thought to be imprinted in a universe dominated by collisionless dark matter at very high redshifts. Their distribution of amplitudes with spatial scale depends ultimately both on the nature of this collisionless matter and on the physical processes operating prior to the epoch of recombination. A generic family of such models comprises the moderately successful hierarchical cosmogonies, which suppose that the variance of the initial fluctuations decreases with scale. This means that small structures are the first to collapse and that galaxies, groups and clusters are formed by the merging of non-linear objects into larger and larger units. This merging sequence can be visualized as a hierarchical tree with the thickness of its branches reflecting the mass ratio of the objects involved in the merging (Lacey \& Cole 1993). If we imagine time running from the top of the tree, the main trunk would represent the final object, while its past merging history would be represented schematically by the ramification of this trunk into small branches, representing accretion of small sub-lumps, and by the splitting into branches of comparable thickness when merging of sub-clumps of comparable size occurs. The linear growth of the density field is well-understood, but collapsed objects, or `dark halos', are highly non-linear gravitational structures whose dynamical evolution is difficult to trace. Some progress can be made by the direct numerical integration of the equations of motion in N-body simulations, but these are limited in dynamic range and are very time-consuming. Theoretical models are usually based on the analytic, top-hat model of Gunn \& Gott (1972). Spherical overdensities in a critical density universe reach a maximum size when their linear overdensity reaches 1.06, then recollapse and virialize when their linearly extrapolated overdensity reaches approximately $\delta_c=1.69$. Unfortunately real halos are neither uniform nor spherically-symmetric so that their collapse times scatter about the predicted value. In cosmology we are seldom interested in the specific nature of one individual halo, but rather in the statistical properties of the whole population. The analytical approach to this problem was pioneered by Press \& Schechter (1974; hereafter PS). To estimate the proportion of the Universe which is contained in structures of mass $M$ at redshift $z$, the density field is first smoothed with a top-hat filter of radius $R$, where $M=4/3\pi\overline{\rho}R^3$ and $\overline{\rho}$ is the mean density of the Universe. $F(M,z)$ is then defined to be the fractional volume where the smoothed density exceeds $\delta_c$. Assuming a gaussian density field, \begin{equation} F(M,z)={1\over 2}{\rm erfc}\left(\delta_c\over\sqrt{2}\sigma(M,z)\right), \end{equation} where $\sigma$ is the root-mean-square fluctuation within the top-hat filter and ${\rm erfc}$ is the complementary error function. The key step was to realize that fluctuations on different mass-scales are not independent. In fact, to a first approximation PS assumed that high-mass halos were entirely made up of lower-mass ones with no underdense matter mixed in.
Then $F$ must be regarded as a cumulative mass fraction and it can be differentiated to obtain the differential one, \begin{equation} f(M,z)=-{\partial F\over\partial M}=-{1\over\sqrt{2\pi}} {\delta_c\over\sigma^2}{\partial\sigma\over\partial M} e^{-{\delta_c^2/2\sigma^2}}. \end{equation} The main drawback of this approach is that, because of the above assumption of crowding together of low-mass halos into larger ones, it seems to undercount the number of objects. As $M\to0$ (and therefore $\sigma\to\infty$) the fraction of the Universe which exceeds the density threshold tends to one half. For this reason it is usual to multiply $f$ by two to reflect the fact that most of the Universe today is contained in collapsed structures. We call this the corrected PS prediction. Extensions of the PS prescription, to calculate explicitly the integrated merger history of all halos, were first developed by Bower (1991) and then rederived, using a more mathematically motivated theory (called the Excursion Set Theory, hereafter EST, Bond {\rm et al.\thinspace} 1992), and tested against N-body experiments, by Lacey \& Cole (1993, 1994). In this formalism the top-hat smoothing radius about a given point is first set to a very large value and then gradually reduced until the enclosed overdensity exceeds $\delta_c$ (in hierarchical cosmologies this will always occur before the radius shrinks to zero). This gives the largest region which will have collapsed around that point. There may be smaller regions which have a larger overdensity but these merely represent smaller structures which have been subsumed into the larger one. By varying the density threshold one can build up a picture of the collapse and merger-history of the halos: in essence this paper describes a numerical representation of this process. Surprisingly perhaps, the EST predicts the same distribution of halo masses as does the PS theory (but without the need for the extra factor of two in normalization). Despite being very idealized in nature, ignoring both the internal structure and tidal forces, the derived formulae provide a surprisingly good fit to the N-body results (Efstathiou {\rm et al.\thinspace} 1988, Lacey \& Cole 1994, Gelb \& Bertschinger 1994). However we have to regard these successes with some scepticism, since the basic hypothesis of the EST works very poorly on an object-by-object basis (White 1995), the numerical simulations are still plagued by resolution effects and limited dynamical range, and the halo statistics are sensitive to the scheme chosen for identifying halos. Moreover one should always bear in mind that the PS treatment is a linear approach to a problem which is fundamentally non-linear in nature. The full non-linear evolution of structure is best described by an N-body simulation. Moreover, with the introduction of techniques such as smoothed particle hydrodynamics (SPH), it is possible to simultaneously follow the evolution of a dissipative, continuous intergalactic medium. However, there are several drawbacks to this approach: N-body simulations are very time-consuming, they have a limited dynamical range and they are very inflexible when trying to model the physical processes happening on small scales (with small numbers of particles). For example, it is likely that the interstellar medium in a protogalaxy will contain a mixture of hot and cold gas as well as stars with a variety of ages and dark matter. Simulations which can handle such situations are only just beginning to appear.
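To make the Press--Schechter bookkeeping above concrete, the following minimal sketch evaluates the cumulative and differential mass fractions for a power-law $\sigma(M)$. It is purely illustrative: the power-law form and the values of \texttt{alpha}, \texttt{Mstar} and \texttt{delta\_c} below are assumptions of ours, not quantities fixed by this paper.
\begin{verbatim}
import math

# Assumed power-law rms fluctuation sigma(M) = (M/Mstar)^(-alpha);
# alpha, Mstar and delta_c are illustrative choices, not values from the text.
alpha, Mstar, delta_c = 0.5, 1.0, 1.69

def sigma(M):
    return (M / Mstar) ** (-alpha)

def F(M):
    # Corrected PS cumulative fraction in objects of mass >= M
    # (includes the usual extra factor of two).
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma(M)))

def f(M, h=1.0e-4):
    # Differential mass fraction f = -dF/dM, by a central difference in ln M.
    return -(F(M * math.exp(h)) - F(M * math.exp(-h))) / (2.0 * h * M)

for M in (0.1, 1.0, 10.0, 100.0):
    print(M, F(M), M * f(M))
\end{verbatim}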
Thus it is highly desirable to set up a simple but efficient Monte-Carlo procedure which mimics the general features of the hierarchical clustering process and can be used to carry out a large parameter-space investigation at little computational cost. The first model to be presented along these lines was the Block Model\ of Cole \& Kaiser (1989), used first to study the abundance of clusters and subsequently some aspects of galaxy formation (Cole 1991, Cole {\rm et al.\thinspace} 1994). It starts with a large cuboidal block, with sides in the ratio $1:2^{1/3}:2^{2/3}$, and subdivides it into two sub-blocks of the same shape. If the initial block has an overdensity $\delta$ (drawn from a gaussian with variance $\sigma^2(M)$), then the two sub-blocks will inherit the same overdensity with an extra perturbation, added to one of them and subtracted from the other, drawn from a gaussian with variance $\Sigma^2$, where $\Sigma^2=\sigma^2(M/2)-\sigma^2(M)$. This binary subdivision procedure is applied iteratively to each of the sub-blocks until the imposed mass resolution is achieved. The advantage of the method comes from the fact that the relative position of all sub-blocks is known at all times so that it is simple to follow the merger history of any halo detected at any stage of the simulation. Kauffmann \& White (1993) adopt a different approach which makes use of the conditional merging probabilities derived by Bower (1991). Given that a halo has a particular mass at some redshift, one can work out the probability distribution for the mass of the halo (centred on the same point) at some earlier redshift. By generating a large number of representative halos, say 100 or more, it is possible to allocate sub-halos with the correct spectrum of masses. This method gives a wider mass-spectrum for halos (not restricted to powers of two) but restricts halo formation to occur at specific redshifts and is much more complicated to implement than the Block Model. Here we present a new method for following halo evolution which is much closer in spirit to the N-body simulations without compromising the simplicity and speed of the above analytical techniques. It allows a continuous spectrum of halo masses (above a minimum of 8 unit cells) and a variable collapse time. We start with a full realization of the initial linear density field defined on a cubical lattice. (This constitutes part of the initial conditions for a cosmological simulation, which can therefore be used to test our method.) Secondly we smooth the density field in cubical blocks on a range of scales, using for each scale of refinement a set of eight displaced grids. The blocks are then ordered in decreasing overdensity ({\rm i.e.\ } increasing collapse time). We then run down this list creating a merger tree for halos. (The decision whether to merge two sub-halos together into a larger one is crucial for preventing the growth of unphysically-large structures.) As a bonus our technique retains spatial information about the relative location of halos ({\rm i.e.\ } a measure of their separation, not just the merging topology). In the next section we describe our merger algorithm in more detail. Tests on simple power-law spectra of density perturbations are presented in Section~3, and the relative success, benefits and disadvantages of our method are contrasted with others in Section~4.
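As a point of reference before we turn to our own algorithm, the Block Model\ subdivision outlined above can be written out schematically as follows. This sketch is our own illustration and not the code of Cole \& Kaiser; the placeholder $\sigma(M)$ and the starting parameters are arbitrary assumptions.
\begin{verbatim}
import math, random

def sigma(M):
    # Placeholder power-law rms fluctuation; stands in for the true sigma(M).
    return M ** -0.5

def subdivide(M, delta, M_min, cells):
    # Split a block of mass M and overdensity delta into two halves, adding
    # and subtracting a perturbation of variance sigma^2(M/2) - sigma^2(M).
    if M / 2.0 < M_min:
        cells.append((M, delta))
        return
    Sigma = math.sqrt(sigma(M / 2.0) ** 2 - sigma(M) ** 2)
    eps = random.gauss(0.0, Sigma)
    subdivide(M / 2.0, delta + eps, M_min, cells)
    subdivide(M / 2.0, delta - eps, M_min, cells)

cells = []
M0 = 1024.0          # arbitrary total mass in cell units
subdivide(M0, random.gauss(0.0, sigma(M0)), 1.0, cells)
print(len(cells), "base cells")
\end{verbatim}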
\section{The algorithm} \label{secmethod} We begin with a realization of the chosen density field in a periodic cubical box of side $L\equiv2^l$, where $l$ is a positive integer. A standard initial condition generator is used which populates the box with waves of random phase and amplitude drawn from a gaussian of mean zero and variance equal to the chosen input power-spectrum. Neither the fact that $L$ is a power of two, nor the periodic boundary conditions are strictly necessary but are chosen for simplicity. Next we average the density fluctuations within cubical \emph{blocks} of side 2, 4, $\ldots$, $L$. At each smoothing level we use eight sets of overlapping grids, displaced by half a block-length in each co-ordinate direction relative to one another (see Fig.~\ref{figblock}). This ensures that density peaks will always be approximately centred within one of the blocks and is a major advantage over other methods. \begin{figure} \centering \pssilent \psfig{figure=scheme1.ps,height=4cm} \psfig{figure=scheme2.ps,height=5cm} \caption{A 2-dimensional representation of the blocking scheme. (a) Each block in the upper panel is constructed by averaging the four cells or blocks beneath it. (b) This picture shows two sets of overlapping grids each of which aligns with the same sub-grid from the previous level of smoothing.} \label{figblock} \end{figure} The density fluctuations within blocks and base-cells are now ordered in decreasing density, which is the same order in which they would collapse as the universe ages (under the na\"{\i}ve assumption that they all have the same morphology at all times: we will test the accuracy of this assumption later). The final step is to build up a merger tree to express the collapse history of blocks. This is a much harder problem than in the simple Block Model\ because the blocks we use are not always nested inside one another but may overlap. Our initial guess was to merge together all collapsed blocks which overlap with one another, but this leads to very elongated structures which can stretch across a large fraction of the box. While these may represent large-scale pancakes or filaments, they are clearly not the kind of simple virialized halos which we are trying to identify. In practice they would probably break up into smaller objects and so we need to find some way to limit their growth. The procedure we use to do this is as follows: \begin{itemize} \item First some terminology. Collapsed regions are known as \emph{halos}. Initially these coincide with the cubical blocks but they need not do so at later times once overlapping blocks begin to collapse. The merger tree consists of a list of cells and sub-halos which constitute each halo. (For simplicity each cell, block or halo also contains a link to its `parent' halo but these are not strictly required). \item Initially, there are no collapsed halos. We start at the top of the ordered list of cells and blocks and run down it in order of increasing collapse time. \item Each cell that collapses is given a parent halo, provided that it has not already been incorporated into some larger structure (this avoids the cloud-in-cloud problem). \item For each block that collapses we first obtain a list of all the halos with which it overlaps and to what extent. The action to be taken depends upon this degree of overlap: \begin{enumerate} \item Any uncollapsed cells are added to the new halo. This represents accretion of intergalactic material.
\item If halos are discovered whose mass is less than that of the block and at least half of which is contained within the block, then these are merged as part of the new structure. This would represent accretion of existing collapsed objects. \item If the collapsing block has half or more of its mass contained in \emph{exactly one} pre-existing halo then merge them together as part of the new structure. This would represent accretion of the block by a larger collapsed object. The restriction to exactly one pre-existing halo prevents the linking together of adjacent halos without the collapse of any new matter (see Fig.~\ref{fig:halo}a). It is this condition which prevents the growth of long filamentary structures and limits the axial ratios of halos to less than approximately 3:2. \end{enumerate} \end{itemize} Initially the method produces halos of mass 1 and 8 cell units, but as blocks begin to merge so they produce halos of a wide variety of shapes and a continuous spectrum of masses. The two most common modes of sudden change in halo mass are creation by the merger of several sub-units (Fig.~\ref{fig:halo}b) or accretion of a new block of approximately equal mass which overlaps with the halo (Fig.~\ref{fig:halo}c). These produce approximately cubic structures, or triaxial with axial ratios ranging from 3:2 to 1:1 (Fig.~\ref{fig:axes}). Contrast this with the Block Model\ where the halo masses always increase by a factor of two at each merger event. \begin{figure} \centering \pssilent \psfig{figure=figblocos1.ps,height=12cm} \caption{(a) We wish to avoid mergers such as that shown in this diagram where two pre-existing halos (solid boxes) are linked together by the collapse of a third block. This is the reason for the restriction on merging discussed in the text. (b) A typical example of the formation of a new halo by merger of many smaller sub-units. (c) The growth of a halo by accretion of another block of almost equal size.} \label{fig:halo} \end{figure} \section{Results} \label{secresult} \subsection{Self-similarity} \label{ssecself} We have tested our algorithm on power-law density fluctuation spectra, which should give self-similar scaling on scales much smaller than the box-size. We take a power-law spectrum $P(k)\propto k^n$, where $n=-2$ or 0 to span the range of solutions expected in the real Universe. In an infinite box these would translate to a root-mean-square density fluctuation spectrum $\sigma(m)\propto m^{-\alpha}$ where $\alpha=(3+n)/6$. However, in practice we are missing a lot of power outside the box and so the decline is steeper than this at high masses, especially for $n=-2$. This is illustrated in Fig.~\ref{figps}a where the spectrum is clearly not a power-law, but is well-fit by the solid line which shows $\sigma(m)$ calculated by direct summation of waves inside the box with a window function associated with a cubical filter. For $n=0$ the effect is not so severe, so we fit the data with the functional form of $\sigma(m)$ for an infinite box. Note that, because we are using a cubical filter, the normalization is different from what it would be for a spherical top-hat. This difference is irrelevant for the purposes of this paper because the normalization we use is arbitrary; however, it could be important if we were to compare our predictions with the results of N-body simulations.
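The measurement just described can be illustrated with the following sketch, which generates a small periodic gaussian field with $P(k)\propto k^n$, averages it in non-overlapping cubical blocks and prints the rms fluctuation on each scale. The grid size, random seed and normalization are arbitrary assumptions, and this is not the code used to produce the figures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, n = 32, -2.0                 # illustrative box size and spectral index

# White noise filtered in Fourier space so that P(k) is proportional to k^n.
k1 = 2.0 * np.pi * np.fft.fftfreq(L)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
amp = np.zeros_like(kmag)
mask = kmag > 0.0
amp[mask] = kmag[mask] ** (n / 2.0)   # square root of the power spectrum
noise = rng.standard_normal((L, L, L))
field = np.fft.ifftn(amp * np.fft.fftn(noise)).real

def block_sigma(f, s):
    # rms of the field averaged over non-overlapping cubes of side s.
    m = f.shape[0] // s
    return f.reshape(m, s, m, s, m, s).mean(axis=(1, 3, 5)).std()

for s in (1, 2, 4, 8):
    print("block mass", s**3, "sigma", block_sigma(field, s))
\end{verbatim}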
\begin{figure} \centering \pssilent \psfig{figure=sigma1290.ps,height=6cm,width=8cm,angle=270} \vspace{0.5cm} \psfig{figure=sigma1288.ps,height=6cm,width=8cm,angle=270} \caption{The measured root-mean-square power on various mass-scales for the $128^3$ box: (a) $n=-2$, (b) $n=0$. For $n=-2$ the solid line is calculated by direct summation of waves inside the box with a cubical window function. The corresponding curve is normalised to the second point of the data. The plots are in logarithmic scale.} \label{figps} \end{figure} The results presented here were mostly obtained using boxes of side $L=128$. We tried a range of box-sizes, from $L=32$ to 256, to test the effect of variable resolution on our results. The code needs about $2L^3$ words of memory so $L=256$ is the largest practical size on a workstation. If the merger tree is to be used as the basis of galaxy formation models, however, then much more storage is required and $L=128$ would be the largest simulation we can allow for. Fig.~\ref{fig:cummf} shows the cumulative mass function, $F(M,z)$, for $L=128$, averaged over four realizations. The output is shown for four redshifts corresponding to fractions ${1\over16}$, ${1\over8}$, ${1\over4}$ and ${1\over2}$ of the box contained in collapsed regions for $n=-2$ and fractions ${3\over16}$, ${1\over4}$, ${3\over8}$ and ${1\over2}$ for $n=0$ (these choices were made simply to get well-spaced curves in the figure: we can reconstruct the curves at any intermediate time). The dashed lines show the corrected Press-Schechter prediction where $\sigma(m)$ is obtained from fits to the points shown in Fig.~\ref{figps}. \begin{figure} \centering \pssilent \psfig{figure=cmf1290.ps.rec,height=6cm,width=8cm,angle=270} \vspace{0.5cm} \psfig{figure=cmf1288.ps.rec,height=6cm,width=8cm,angle=270} \caption{The average cumulative mass function for the four $L=128$ boxes at four different output times (when a certain fraction of the box is in collapsed regions, as indicated by the figures next to the curves) for (a) $n=-2$ and (b) $n=0$. The dashed lines show the corresponding Press-Schechter predictions (with extra factor of two).} \label{fig:cummf} \end{figure} In both cases the evolution is approximately self-similar. This can be seen more clearly in Fig.~\ref{figdifmf} which shows a differential plot, $-\partial F/\partial\ln\nu$, where $\nu=\delta_c/\sigma(M,z)$ is the abscissa ($\nu=(M/M_*)^{1/2}$ for $n=0$). Also shown is the corrected PS prediction, \begin{equation} -{\partial F\over\partial\ln\nu}={2\nu\over\sqrt{2\pi}} e^{-{1\over2}\nu^2}. \end{equation} When expressed in this way the functional form of the mass distribution is absolutely universal, {\rm i.e.\ } it does not depend on any parameter of the simulation. \begin{figure} \centering \pssilent \psfig{figure=dmf1290.ps.rec,height=6cm,width=8cm,angle=270} \psfig{figure=dmf1288.ps.rec,height=6cm,width=8cm,angle=270} \caption{The differential mass function, $-\partial F/\partial\ln \nu$, for the $L=128$ box at the times corresponding to the indicated fractions of the box contained in collapsed regions: (a) $n=-2$, (b) $n=0$. The thick dashed line shows the corresponding Press-Schechter prediction.} \label{figdifmf} \end{figure} Consider first the $n=0$ case. Here the differential mass curves seem to have the same shape as the PS prediction, but with a higher normalization (alternatively one could say that $\delta_c$ should be reduced slightly so as to shift the predicted curve to the right).
This is not unexpected and is discussed in Section~\ref{ssechighm} below. There is no evidence of a departure from the PS curve at a mass of 64, corresponding to the size of smoothing blocks of side 4 (this is in contrast to the $n=-2$ case, discussed below). The maximum mass of collapsed halos is quite small, less than 125 even for the largest box, $L=256$. Given that the smallest halos to collapse in our model (apart from isolated cells) have mass 8, this gives a very small dynamic range. We could force larger objects to form by allowing a larger fraction of the box to collapse (this would be legitimate if, for example, one were to regard the whole box as a single collapsed halo); however, one would not then expect the evolution to be self-similar. The curves for the steeper spectrum, $n=-2$, extend to much higher masses because the spectrum has much more power on large scales than for $n=0$. Here we do see evidence of kinks at the blocking masses of 64, 512 and 4096, especially at the final output time when half the box has collapsed: there is an excess of halos of slightly higher mass and a deficit of slightly lower mass than these. Overall the spectrum is a reasonable fit to the PS prediction at masses above 100, but shows an excess between masses of 8 and 100. \subsection{Properties of halos} Fig.~\ref{fig:proj} shows a projection of the largest halos in one $L=128$ box of each spectral type at a time when half the mass has collapsed into halos. Many of the irregular shapes which are visible are due to projection effects. \begin{figure} \centering \pssilent \psfig{figure=snap1290.ps,height=7.5cm,width=7.5cm,angle=270} \vspace{0.5cm} \psfig{figure=snap1288.ps,height=7.5cm,width=7.5cm,angle=270} \caption{Projections of the distribution of halos taken at the time when half the box is contained in collapsed structures: (a) $n=-2$, mass greater than 700; (b) $n=0$, mass greater than 50.} \label{fig:proj} \end{figure} Our halos tend to exhibit more variety of axial ratios than in the Block Model. There the relative length of the major- and minor-axes is fixed at all times at approximately 1:1.59, whereas ours start more typically at 1:1 (for collapse of isolated blocks as in Fig.~\ref{fig:halo}b) or 1:1.5 (for the collapse of overlapping blocks as in Fig.~\ref{fig:halo}c), developing rapidly to more complex structures with a great variety of shapes. Fig.~\ref{fig:axes} shows the distribution of axial ratios for all halos of mass greater than or equal to 8 for $n=0$ and greater than 50 for $n=-2$. The overall observation is that there is not much difference in halo shapes if one compares realizations of the two spectra. In both, the halos show a wide range of triaxiality ranging from prolate to oblate (while in the Block Model\ they are systematically prolate). A drawback of our method comes from the fact that the overdensity of a collapsing block, $\delta_{\rm b}$, is not necessarily equal to the mean overdensity of the resulting halo, $\delta_{\rm h}$. It is the former value which we must associate with the halo if the topology of the merger tree is to be preserved (or at least we must maintain the same ordering of densities for halos as their parent blocks). The differences can be quantified in terms of the ratio $\chi=(\delta_{\rm b}-\delta_{\rm h})/\delta_{\rm b}$ which is plotted in Fig.~\ref{fig:dmas} at a time when half the mass is in collapsed structures: we show the mean value plus one sigma error bars.
\begin{figure} \centering \pssilent \psfig{figure=dmms1290.ps,height=6cm,width=8cm,angle=270} \vspace{0.5cm} \psfig{figure=dmms1288.ps,height=6cm,width=8cm,angle=270} \caption{The relative difference between the assigned and the true overdensity of halos, for the $L=128$ box at the time when half of its mass is contained in collapsed regions.} \label{fig:dmas} \end{figure} Note first that halos of fewer than eight cells have overdensities which are much less than the assigned one. These structures are, however, leftovers of the merging process (the smallest blocks have a mass of 8 units) and so they should not be considered as collapsed halos, but rather as clouds of intergalactic material to be accreted later by a neighbouring halo. For halos of mass 8 or larger the agreement is much better, but nevertheless the true overdensity of a halo remains systematically lower than the assigned one. The effect is largest for $n=0$ where the mean value of $\chi$ is about 0.15. For $n=-2$, it varies from approximately zero in the largest halos to 0.1 in the low mass ones. The reason for the offset is that high-density cells can contribute to the overdensity of more than one block. Referring again to Fig.~\ref{fig:halo}c, if the region of overlap between the two blocks were of higher density than its surroundings then the density of the whole halo would be lower than that of either block from which it is constructed. If desired, the assigned halo densities could be systematically reduced to bring them into agreement with the measured ones; equivalently one could raise the value of the critical density, $\delta_c$, required for collapse above that of the top-hat model. More serious is the variance of $\chi$, approximately 0.2, which means that two halos with the same assigned density can have quite disparate true overdensities. The model supposes that they collapse at the same time, whereas the full non-linear evolution would presumably show otherwise. We occasionally find some high-mass halos (mass greater than 8) with large $\chi$, which contribute significantly to the enlargement of the error bars at those scales. These are effectively leftovers of the merging process and should not be treated as collapsed halos in subsequent applications of the method (that is, in realistic galaxy formation modelling they should be considered as sources of material to be accreted at a later stage of the hierarchy). The variance in $\chi$ is unwelcome but is only one contribution to the dispersion between overdensity and collapse time. We note that N-body simulations show for each particle a poor correspondence between the expected mass of its parent halo (predicted from the initial conditions) and the true value measured from a numerical simulation, evolved from the same initial conditions (White 1995, Bond {\rm et al.\thinspace}\ 1992). Moreover gravitational collapse is clearly not as simple as the spherical model assumes. There is, for example, no guarantee that underdense regions never collapse or that highly overdense regions will do so (Bertschinger \& Jain 1994). However, if a simple semi-analytical model of the gravitational clustering is desired, then the simple relation between collapse redshift and initial overdensity given by the spherical model seems the most obvious choice.
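For definiteness, the relation referred to here is the standard spherical-model one: in an Einstein--de Sitter universe linear fluctuations grow as $(1+z)^{-1}$, so a region whose linear overdensity extrapolated to the present is $\delta_0$ is assigned the collapse redshift
\[
1+z_{\rm c}={\delta_0\over\delta_c}, \qquad \delta_c\approx1.69,
\]
and ordering regions by decreasing overdensity is therefore the same as ordering them by increasing collapse time.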
\begin{figure*} \centering \pssilent \begin{tabular}{cc} \psfig{figure=axes1288.ps,height=7.5cm,width=7.5cm,angle=270} & \psfig{figure=axes1290.ps,height=7.5cm,width=7.5cm,angle=270} \\ \psfig{figure=axes1288-2.ps,height=7.5cm,width=7.5cm,angle=270} & \psfig{figure=axes1290-2.ps,height=7.5cm,width=7.5cm,angle=270} \\ \psfig{figure=axes1288-3.ps,height=7.5cm,width=7.5cm,angle=270} & \psfig{figure=axes1290-3.ps,height=7.5cm,width=7.5cm,angle=270} \\ \end{tabular} \caption{Distribution of axial ratios for halos taken from one $L=128$ realization when half of the box mass is contained in collapsed structures: $n=0$, mass greater than or equal to 8; $n=-2$, mass greater than 50. The upper panel shows the distribution of triaxialities in the prolateness-ellipticity plane (with the filled triangle corresponding to the Block Model).} \label{fig:axes} \end{figure*} \subsection{The number density of high-mass halos} \label{ssechighm} Our method passes the test for self-similarity, yet for $n=0$ it predicts far more high-mass halos than Press-Schechter or other methods based on similar ideas, such as the Block Model. This is an expected outcome of our method and points more to a deficiency in the PS model than anything else, as we attempt to explain below. Press-Schechter does not count the number of halos of a given mass. Rather, it counts the fraction of the Universe where, if one were to put down a top-hat filter of the appropriate mass, the overdensity would exceed a certain critical threshold. Regions which just poke above this threshold for a single position of their centres contribute nothing to the mass-function. It is easy to see that this becomes increasingly likely as one moves to rarer and rarer objects (of higher and higher mass). For these it is much better simply to count the number of peaks which exceed the threshold density after filtering on the appropriate scale (on the other hand Peaks Theory predicts far too many low-mass objects as it does not distinguish between overlapping halos). The necessary theory has been exhaustively analyzed by Bardeen {\rm et al.\thinspace} (1986) who showed that uncorrected PS (without the extra factor of two) underestimates the number of high-mass halos by a factor $\alpha^{3/2}\nu^3$, where $\delta_c=\nu\sigma(m)$ (this result is for a gaussian filter but similar results will hold for all filters with just a small difference in scaling). One way to visualize this result is to think of each peak as having an overdensity profile \begin{equation} \nu\approx\nu_0\left(1-{1\over2}\left(r\over R\right)^2\right) \end{equation} where $R$ is the radius of the top-hat filter. It is then easy to estimate the contribution to the PS mass fraction and to integrate over all values of $\nu_0$ greater than the threshold, $\delta_c/\sigma$, to get the total number of halos. This method suggests that Peaks Theory should predict $\sqrt{2/9\pi}\,\nu^3$ times as many halos as PS, in rough agreement with the above for $n\approx -1$ to 0. A more direct demonstration of the above difference between Press-Schechter and the actual number of high-mass peaks in the density field is shown by the numbers in Table~\ref{tab:npeaks}. Columns 2--9 show the measured number of blocks which exceed the density threshold given in the first column in each of the eight sub-grids of mass 512. \begin{table*} \begin{minipage}{140mm} \centering \caption{Number of blocks of mass 512 ($L=128$) above the density thresholds $3\sigma$, $2\sigma$, $\sigma$: (a) $n=-2$, (b) $n=0$.
Columns 2--9 correspond to each of the eight sub-grids used for that level of refinement, Column~10 shows the number of isolated peaks and Column~11 the PS prediction of the expected number of halos of this mass.} \label{tab:npeaks} \begin{tabular}{@{}rrrrrrrrrcc@{}} \hline (a) & single & & & & & & & & combined & PS expected \\ & grid & X & Y & Z & XY & XZ & YZ & XYZ & grids & number\\ \hline $\geq 3\sigma$ &5&7&4&1&6&7&6&8&18& 5.5$\pm$2.3 \\ $\geq 2\sigma$ &88&87&92&77&92&87&79&88&109& 93.2$\pm$9.7 \\ $\geq 1\sigma$ &628&654&630&662&628&648&639&647&281& 650$\pm$26 \\ \end{tabular} \vspace{0.5cm} \begin{tabular}{@{}rrrrrrrrrcc@{}} \hline (b) & single & & & & & & & & combined & PS expected \\ & grid & X & Y & Z & XY & XZ & YZ & XYZ & grids & number\\ \hline $\geq 3\sigma$ &6&7&4&9&4&7&8&7&38&5.5$\pm$2.3 \\ $\geq 2\sigma$ &99&99&96&82&90&86&99&90&324&93.2$\pm$9.7 \\ $\geq 1\sigma$ &630&641&649&656&652&617&670&677&671&650$\pm$26 \\ \end{tabular} \end{minipage} \end{table*} These agree with the Press-Schechter prediction, as indeed they should by construction. When we combine the various grids, however, an interesting thing happens. Column~10 shows the number of separate ({\rm i.e.\ } non-overlapping) overdense blocks in the combined grid. At high overdensity all the halos we have identified are distinct (they exceed the threshold for just one position of the smoothing grid). The total number of halos is therefore greatly in excess of the PS prediction and far closer to that given by Peaks Theory. For $n=-2$ the excess is approximately a factor of three which brings them into agreement once the PS prediction has the extra factor of two applied. For $n=0$, however, the difference is much larger and the number of peaks is a factor of 3-4 larger than even the corrected PS estimate. This goes some way towards explaining the difference between the PS prediction and the measured cumulative mass function in Fig.~\ref{fig:cummf}. At lower overdensity the disagreement is much less severe. One should note that for $n=-2$ there is a gross underestimate of $1\sigma$ peaks compared to the values obtained in each sub-grid. This is simply a consequence of the Peaks methodology. Remember that for each sub-grid we are just counting the total number of blocks above the threshold, while in the case of combined grids we simply count the number of peaks. Because for $n=-2$ the peaks are larger and more clustered, there is a greater chance of finding blocks sitting next to each other which are above the imposed threshold. Consequently, if we are only selecting the peaks, many of those blocks will be discarded. This situation does not arise for $n=0$, where the peaks are smaller and more evenly distributed. Our use of overlapping grids is therefore crucial. They ensure that all halos are approximately centred within one of the grid cells. Other methods, such as the Block Model, which have fixed borders between mass cells, have difficulty in detecting structures that cross cell boundaries and are, by construction, forced to agree with Press-Schechter. This can lead to a gross underestimate of the number of rare, high-mass peaks, especially for steep spectra. We are not saying that our method necessarily gives a better description of the growth of structure in the Universe because all these theoretical models are highly idealized. Substructure may lengthen collapse times and tidal fields may need to be taken into account.
Nevertheless, given the simplified prescription which we have adopted, our method does at least seem to detect the correct number of high-mass halos, and many more than other methods. \section{Discussion} \label{secdiscuss} We have presented a new method of constructing a hierarchical merger tree based on actual realizations of the linear density field. We smooth on a set of interlaced, cubical grids on a variety of mass scales, then order in decreasing density. We run down the resulting list, merging together overlapping blocks to form collapsed halos. The main properties of our model are as follows: \begin{itemize} \item The model exhibits the scaling behaviour expected from power-law spectra. \item For a flat, white-noise spectrum, $n=0$, the mass function is well-fit by the PS model, provided that we raise the normalization by a factor of 3-4. This difference arises because of a deficiency in the PS method which fails to count the correct number of massive, rare (high-$\nu$) peaks. The dynamic range for the masses of halos for this spectrum is quite small---at a time when half the box is in collapsed structures, the mass of the largest halo is just 125 cells, even for the box of side $L=256$. \item The mass function for a steeper spectrum, $n=-2$, lies much closer to the PS prediction (with the usual factor of two increase in normalization), but shows small kinks at masses of 64, 512 and 4096, corresponding to the masses of smoothing blocks---there is a slight deficit of halos of smaller, and an excess of halos at larger, mass. \item The collapsed halos tend to be triaxial with a wide mixture of prolateness and oblateness (contrary to the Block Model). \item The mean overdensity of halos tends to be lower than that assigned to them by our scheme, and to have a scatter of about 20 percent about the mean value. While undesirable, since it can reverse the collapse ordering in the tree, this can be partially cured in subsequent applications of the method. \end{itemize} A shortcoming of our method is that the minimum halo mass is eight cells with a corresponding loss of dynamical range. We see in Fig.~\ref{fig:cummf} that for $n=-2$ we achieve a mass-range of approximately two and a half decades when half the mass is contained in collapsed objects, while for $n=0$ we get only slightly more than one decade. This is because $n=-2$ corresponds to a flat spectrum, $\sigma\propto M^{-1/6}$, with almost all scales collapsing simultaneously, while for $n=0$ the spectrum decreases more rapidly, $\sigma\propto M^{-1/2}$, and the peaks are much more isolated. Fortunately, the physically-motivated CDM spectrum can be fitted by an $n=-2$ spectrum over a significant mass range, and so the loss of dynamical range is not so important in a realistic application of the method. All these mass-ranges can be extended if we allow a larger fraction of the box to collapse (as we must if we wish to model a cluster of galaxies, for example), but at the risk of losing a fair representation of the power spectrum on scales approaching the box-size. If we simulate a large portion of the Universe, of mass say $10^{16}\hbox{$\rm\thinspace M_{\odot}$}$, in a box of $L=128$, then a block of 8 cells corresponds to $3.8\times 10^{10}$\hbox{$\rm\thinspace M_{\odot}$}, which could represent at best a dwarf galactic halo. If the box represents a large galactic halo of mass $10^{13}\hbox{$\rm\thinspace M_{\odot}$}$, then we can resolve down almost to globular cluster scales.
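The mass resolution quoted above is simple arithmetic: the smallest genuine halos contain 8 base-cells, so in an $L=128$ box of total mass $M_{\rm box}$ they have mass
\[
8\,{M_{\rm box}\over128^3}\approx3.8\times10^{-6}\,M_{\rm box},
\]
giving $3.8\times10^{10}\,M_{\odot}$ when $M_{\rm box}=10^{16}\,M_{\odot}$ and $3.8\times10^{7}\,M_{\odot}$ when $M_{\rm box}=10^{13}\,M_{\odot}$.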
We have also carried out simulations with a range of box-lengths, $L=16$, 32, 64, 128, and 256, in order to show the effect of variable resolution (we were not able to perform the $L=16,32$ simulations for $n=0$ due to the lack of dynamical range). The base-cells in each case correspond to one set of blocks of side $256/L$ in the $L=256$ simulation. The results are presented in Fig.~\ref{figvarl} which shows the cumulative mass functions sampled at the same collapsed fractions of the box as those corresponding to Fig.~\ref{fig:cummf}. The shape of the mass spectrum is similar but with a slight increase in dynamic range as one moves from $L=64$ to $L=256$. \begin{figure} \centering \pssilent \psfig{figure=difmf1290.ps,height=6cm,width=8cm,angle=270} \psfig{figure=difmf1288.ps,height=6cm,width=8cm,angle=270} \caption{The cumulative mass-function for boxes of variable resolution, as indicated: (a) $n=-2$, (b) $n=0$. The collapsed fractions are the same as those corresponding to Fig.~\ref{fig:cummf} respectively.} \label{figvarl} \end{figure} Whether our method provides a better description of the formation of structure than other methods remains to be seen. The Block Model\ in particular seems to give good agreement with N-body simulations (Lacey \& Cole 1994) which themselves are approximately fit by modified Press-Schechter models ({\rm e.g.\ } Gelb \& Bertschinger 1994). However, there is a limited dynamical range in the simulations, the results are sensitive to the precise model for identifying collapsed halos and the critical overdensity for collapse is usually taken as a free parameter. Given these caveats it is hard to tell whether the fits are mediocre, adequate or good. One advantage of our scheme is that it is based on an actual realization of a density field which can be used as the starting point for an N-body simulation. Thus we will not be limited to a statistical analysis, but will be able to directly compare individual structures identified in the linear density field with non-linear halos that form in the simulation. Initial results from other studies (Bond {\rm et al.\thinspace} 1992, Thomas \& Couchman 1992) suggest that the correspondence is approximate at best, and we may be forced to consider the effect of tidal fields on a halo's evolution. We have not yet carried out the necessary N-body simulations because we have not until now had access to the super-computing facilities needed to evolve (and analyze!) a $256^3$ box of particles. Such datasets will soon become available as part of the Virgo Consortium project on the UK's Cray T3D facility and we intend to report the results in a subsequent paper. Nevertheless, even in the absence of the numerical tests, we feel that our method is a viable alternative to other methods of calculating the merging history of galactic halos. It passes the test of self-similarity yet predicts more high-mass halos than other methods. It has the disadvantage of losing a factor of eight in resolution at low-masses, but above this it has a smooth mass-spectrum and is not restricted to masses which are a power of two. We intend to contrast the predictions of the Block Model\ and the current method in models of galaxy formation such as those discussed by Kauffmann, White \& Guiderdoni (1993) and Cole {\rm et al.\thinspace} (1994), and explore the role of pre-galactic cooling flows (Nulsen \& Fabian 1995).
\section*{Acknowledgments} DDCR would like to acknowledge support from JNICT (Portugal) through program PRAXIS XXI (grant number BD/2802/93-RM). Part of this paper was written while PAT was at the Institute for Theoretical Physics at Santa Barbara and as such was supported in part by the National Science Foundation under Grant Number PHY89-04035. The paper was completed while PAT was holding a Nuffield Foundation Science Research Lectureship. We would like to thank Shaun Cole for providing us with a copy of the Block Model program. The production of this paper was aided by use of the STARLINK Minor Node at Sussex.
\section{Introduction} The Partite Lemma is immensely useful in structural Ramsey theory (for example it is used in \cite{2}, \cite{3}, \cite{10}, \cite{5}, \cite{6}, and \cite{11}). This lemma was discovered by Ne\v{s}et\v{r}il and R\"{o}dl in \cite{2}, \cite{3}, and \cite{10}. In \cite{6} Solecki gave a dual version of the Partite Lemma. The present paper is about the nature of the object produced in the Partite Lemma and the unification of the original Partite Lemma and the dual version of the lemma. The main theorem of this paper asserts that a certain category has colimits (which are specific cocones) over certain diagrams that are defined using Hales--Jewett lines. Colimits and cocones are standard notions which abstract the idea of a sum (dually, limits abstract products). While the definition of the diagrams that we consider uses Hales--Jewett lines, the statement and proof of our main theorem do not involve any Ramsey theory, in particular they do not involve the Hales\nobreakdash--Jewett Theorem. After establishing the existence of the colimits, we prove that the main object needed in the Partite Lemma is our colimit. All properties needed for the Partite Lemma follow from this object being a cocone. Thus our main theorem, through the ideas of cocones and colimits, isolates the mathematical properties of the construction in the Partite Lemma of structural Ramsey theory. Apart from exhibiting the category theoretic nature of this object, our result shows that it is canonical since colimits are canonical. Additionally, our approach allows us to unify the proofs of the Partite Lemma in \cite{2}, \cite{3}, and \cite{10} with the dual Partite Lemma in \cite{6}. We now describe this paper's organization. In Section 2 we give general background on category theory and recall the definition of cocones and colimits. In Section 3 we give a generalization of structures where we add a category $\mathcal{C}$ to the definition of language and structures. This allows us to unify structures as occurring in \cite{5} and dual structures found in \cite{6}. We then define blocks, which are a generalization of objects in \cite{5} and partite systems in \cite{2}, \cite{3}, and \cite{10}. In Section 4 we introduce a subcategory of blocks and a diagram in the subcategory using Hales--Jewett lines. Then we prove our main theorem which gives the existence of colimits over these line diagrams. In Section 5 we turn our attention to Ramsey theory. We explain how cocones can be used to transfer the Ramsey property and as a consequence we prove the Partite Lemma using our main theorem. We finish by applying the Partite Lemma to prove the results in \cite{6} and \cite{5} in a unified manner. \\ The author would like to thank S\l awomir Solecki for spending ample time helping refine the presentation of this paper. \section{Background on categories} \subsection{General definitions} In this section we review the definition of a category and the definition of a functor. These concepts are the basic building blocks of category theory. We follow the presentation by Riehl in \cite{9}. \begin{definition} A \textbf{category} $\mathcal{C}$ is a pair of classes: the class $\textnormal{Ob}(\mathcal{C})$ of objects of $\mathcal{C}$ and the class $\textnormal{Hom}(\mathcal{C})$ of morphisms of $\mathcal{C}$. For each $A,B\in \textnormal{Ob}(\mathcal{C})$ there is an associated set $\textnormal{Hom}_\mathcal{C}(A,B)$ of morphisms.
Then $$\textnormal{Hom}(\mathcal{C})=\bigsqcup_{A,B\in \textnormal{Ob}({\mathcal{C}})}\textnormal{Hom}_{\mathcal{C}}(A,B).$$ For each $A,B,C\in \textnormal{Ob}(\mathcal{C})$ there is an associative function called composition, $$ \textnormal{Hom}_\mathcal{C}(A,B)\times \textnormal{Hom}_\mathcal{C}(B,C)\to \textnormal{Hom}_\mathcal{C}(A,C).$$ The composition of $f\in \textnormal{Hom}_\mathcal{C}(A,B)$ and $g\in \textnormal{Hom}_\mathcal{C}(B,C)$ is written $g\circ f$. For all $A\in \textnormal{Ob}(\mathcal{C})$ there is a morphism $\textnormal{Id}_A\in \textnormal{Hom}_\mathcal{C}(A,A)$ so that for all $f\in \bigcup_{B\in \textnormal{Ob}(\mathcal{C})}\textnormal{Hom}_\mathcal{C}(A,B)$, \\ $f\circ \textnormal{Id}_A=f$ and for all $g\in \bigcup_{C\in \textnormal{Ob}(\mathcal{C})}\textnormal{Hom}_\mathcal{C}(C,A)$, $\textnormal{Id}_A\circ g=g$. \end{definition} The following is a simple example of a category that we will use in this paper. The category \textnormal{\textbf{Fin}} is the category where $\textnormal{Ob(\textbf{Fin})}$ is the class of all finite sets. For all finite sets $A$ and $B$, $\textnormal{Hom}_{\textnormal{\textbf{Fin}}}(A,B)=B^A$. Composition of morphisms is defined by the standard function composition. We will introduce multiple categories where composition is the standard function composition. In these cases we will omit the definition of composition. Another category we will use is the dual of $\textbf{Fin}$ so we define the idea of a dual category. \begin{definition} Fix a category $\mathcal{C}$. The \textbf{dual category} $\mathcal{C}^{\textnormal{op}}$ is the category with $\textnormal{Ob}(\mathcal{C}^{\textnormal{op}})=\textnormal{Ob}(\mathcal{C})$, $\text{Hom}_{\mathcal{C}^{\textnormal{op}}}(A,B)=\textnormal{Hom}_\mathcal{C}(B,A)$ and where $f\circ_{\mathcal{C}^{\textnormal{op}}} g$ is defined as $g\circ_{\mathcal{C}} f$. \end{definition} In less formal terms the dual category is the category with all the arrows reversed. We now define functors which are maps between categories. \begin{definition} If $\mathcal{C}$ and $\mathcal{D}$ are categories then a \textbf{functor} $F\colon \mathcal{C}\to \mathcal{D}$ is a pair of class functions $$F\colon \textnormal{Ob}(\mathcal{C})\to \textnormal{Ob}(\mathcal{D})\text{ and } F\colon \textnormal{Hom}(\mathcal{C})\to\textnormal{Hom}(\mathcal{D})$$ so that for all $A,B\in \textnormal{Ob}(\mathcal{C})$, $$F(\textnormal{Hom}_\mathcal{C}(A,B))\subseteq \textnormal{Hom}_\mathcal{D}(F(A),F(B)),$$ for all $f,g\in \textnormal{Hom}(\mathcal{C})$ so that $f\circ g$ exists, $$F(f)\circ F(g)=F(f\circ g)$$ and for all $A\in \textnormal{Ob}(\mathcal{C})$, $F(\textnormal{Id}_{A})=\textnormal{Id}_{F(A)}$. \end{definition} \subsection{Cocones and colimits} We review the definition of cocones and colimits, which are the main categorical constructions we consider. It suffices to mention that the idea of cocones/colimits captures the notion of a direct sum. For more information we refer the reader to Chapter 3 of \cite{9}. \begin{definition}Let $J$ be a category which we call the index category. A diagram of shape $J$ in a category $\mathcal{C}$ is a functor $F\colon J\to \mathcal{C}$. A \textbf{cocone} over $F$ is an object $W\in \mathcal{C}$ along with a morphism $\phi_S\in\textnormal{Hom}_{\mathcal{C}}(F(S),W)$ for each $S\in \text{Ob}(J)$, so that for all $f\in \textnormal{Hom}_{J}(S,T)$, $\phi_S=\phi_T\circ F(f)$.\end{definition} In diagram (\ref{1}) we consider the example where $J$ is a category with two objects $S$ and $T$ and a single non-identity morphism $f$.
Then $W$ being a cocone over $F$ is equivalent to the following diagram commuting.\\ \begin{equation} \begin{tikzcd} & W&\\ F(S)\arrow[rr, "F(f)"]\arrow[ur, "\phi_{S}"] & & F(T)\arrow[ul, swap, "\phi_{T}"] \end{tikzcd}\label{1} \end{equation} \begin{definition}The \textbf{colimit} over $F$ is a cocone $(Z,\phi_S)$ so that for any other cocone $(W,\psi_S)$ there is a unique morphism $u\in \textnormal{Hom}_{\mathcal{C}}(Z,W)$ such that $\psi_S=u\circ \phi_S$ for all $S\in \textnormal{Ob}(J)$. \end{definition} Note that colimits are unique up to isomorphism if they exist. Therefore we can show that a construction is canonical by showing that it is a colimit over a certain diagram. In our example if $Z$ is the colimit over $F$ and $W$ is a cocone over $F$, then the following diagram commutes. \begin{center} \begin{tikzcd} & W&\\ & Z\arrow[u, "u"] &\\ F(S)\arrow[rr, "F(f)"]\arrow[uur, bend left, "\psi_{S}"] \arrow[ur, "\phi_S"]& & F(T)\arrow[uul, bend right, swap, "\psi_{T}"]\arrow[ul,swap,"\phi_T"] \end{tikzcd} \end{center} \section{Structures and blocks} We give a brief overview of the types of structures used in Ramsey theory to motivate our definition of structures. In \cite{2}, \cite{3}, and \cite{10} Ne\v{s}et\v{r}il and R\"{o}dl prove a Ramsey result for linearly ordered hypergraphs. Solecki expands on this result in \cite{5} by showing a Ramsey statement for linearly ordered structures with standard relation symbols and dual function symbols. Furthermore in \cite{6} he proves a dual Ramsey result for linearly ordered structures with the usual function symbols and dual relation symbols. We wish to prove the results in \cite{6} and \cite{5} with a single proof. We will construct a general definition of structure which will include the structures found in \cite{6} and \cite{5}. We create this new definition of structures by adding a category $\mathcal{C}$ to the definition of structures. The definition of structures given in \cite{5} will arise when $\mathcal{C}=\textbf{Fin}$ and the definition of structures in \cite{6} will occur when $\mathcal{C}=\textbf{Fin}^{\textnormal{op}}$. Then we define blocks, which are a generalization of the objects in \cite{5}, which in turn build on the definition of partite-systems in \cite{2}, \cite{3}, and \cite{10}. \subsection{Structures} We develop the concept of structures with a category $\mathcal{C}$ by following the standard development of structures. We start by adding a category $\mathcal{C}$ to the definition of language. Then we define structures for these new types of languages. Finally we define homomorphisms in the natural way. \begin{definition} For any category $\mathcal{C}$, a $\mathcal{C}$\textbf{-language} $\mathcal{L}$ is a tuple $(\mathcal{C},\mathcal{L}_{F},\mathcal{L}_{R}, ar_{func},ar_{rel})$ where $\mathcal{L}_{F}$ is a set of function symbols, $\mathcal{L}_R$ is a set of relation symbols, $ar_{func}\colon \mathcal{L}_{F}\to \textnormal{Ob}(\mathcal{C})^2$ assigns the arity of function symbols, and $ar_{rel}\colon \mathcal{L}_{R}\to \textnormal{Ob}(\mathcal{C})$ assigns the arity of relation symbols. \end{definition} The usual definition of language has arities whose ranges are finite sets instead of objects in a category $\mathcal{C}$. Thus the definition of a \textbf{Fin}-language is the standard definition of a language. Now that we have the definition of language we can define structures.
\begin{definition} An $\mathcal{L}$\textbf{-structure} $\mathsf{X}$ is an object $X\in \text{Ob}(\mathcal{C})$ along with interpretations of the symbols in $\mathcal{L}$ that are implemented as follows,\\ for each relation symbol $R\in \mathcal{L}$ of arity $r\in\text{Ob}(\mathcal{C})$ the interpretation of $R$ is a set\\ $R^{\mathsf{X}}\subseteq \textnormal{Hom}_{\mathcal{C}}(r,X)$ and \\ for each function symbol $F\in \mathcal{L}$ of arity $(r,s)$ where $r,s\in \text{Ob}(\mathcal{C})$ the interpretation of $F$ is a function $F^{\mathsf{X}}\colon \textnormal{Hom}_{\mathcal{C}}(X,r)\to \textnormal{Hom}_{\mathcal{C}}(X,s)$. \end{definition} The standard definition of structures lets $X$ be a set, and interpretations are functions instead of morphisms. Furthermore if $\mathsf{X}$ is a structure and $F$ is a function symbol of arity $(r,s)$ then under the usual definition of structure $F^{\mathsf{X}}\colon X^r\to X^s$ while in our definition when $\mathcal{C}=\textbf{Fin}$, $F^{\mathsf{X}}\colon r^X\to s^X$. So if $\mathcal{L}$ is a \textbf{Fin}-language, then $\mathcal{L}$-structures have relations which are defined in the usual manner and dual interpretations of function symbols. Thus $\mathcal{L}$-structures are defined as in \cite{5}. If $\mathcal{L}$ is a $\textbf{Fin}^{\text{op}}$-language then for any structure $\mathsf{X}$ and relation symbol $R$ of arity $r$, $R^{\mathsf{X}}\subset r^X$. If $F$ is a function symbol of arity $(r,s)$ then $F^{\mathsf{X}}\colon X^r\to X^s$. Thus function symbols are defined in the standard way but the relation symbols are interpreted in a dual manner. Thus $\mathcal{L}$-structures are the same as dual structures found in \cite{6}. Next we define homomorphisms in the natural way. \begin{definition} Let $\mathcal{C}$ be a category and $\mathcal{L}$ be a $\mathcal{C}$-language. If $\mathsf{X},\mathsf{Y}$ are $\mathcal{L}$-structures and $f\in\textnormal{Hom}_{\mathcal{C}}(X,Y)$, then $f$ is an $\mathcal{L}$-\textbf{homomorphism} if:\\ for all relation symbols $R\in \mathcal{L}$ with arity $r$ and all $\eta\in \textnormal{Hom}_{\mathcal{C}}(r,X)$, $$R^\mathsf{X}(\eta)\Leftrightarrow R^\mathsf{Y}(f\circ \eta)$$ and for all function symbols $F\in \mathcal{L}$ with arity $(r,s)$ and all $\gamma\in \textnormal{Hom}_{\mathcal{C}}(Y,r)$, $$F^\mathsf{X}(\gamma\circ f)=F^\mathsf{Y}(\gamma)\circ f.$$ \end{definition} \subsection{Blocks} The objects that we consider in our main theorem are blocks. Blocks are a categorical version of objects in \cite{5}. We will use the term blocks instead of objects to avoid confusion with categorical notation. Objects are a generalization of partite-systems which are used in the Partite Lemma. \begin{definition} Fix a category $\mathcal{C}$ and a $\mathcal{C}$-language $\mathcal{L}$. A \textbf{block} is a pair $\mathcal{X}=(\mathsf{X},\pi)$ where $\mathsf{X}$ is a structure and $\pi\in \textnormal{Hom}_{\mathcal{C}}(X,U)$ for some $U\in \textnormal{Ob}(\mathcal{C})$. \end{definition} If $\mathcal{X}$ is a block where the morphism $\pi$ has a left inverse we call $\mathcal{X}$ a \textbf{monic block}. An example of a monic block is a structure $\mathsf{X}$ which can be viewed as the block $\mathcal{X}=(\mathsf{X},\textnormal{Id}_{X})$. We now define morphisms between blocks. \begin{definition} Fix a category $\mathcal{C}$ and a $\mathcal{C}$-language $\mathcal{L}$.
Suppose $\mathcal{X}=(\mathsf{X},\pi)$ and $\mathcal{Y}=(\mathsf{Y},\rho)$ are blocks so that $\pi\in\textnormal{Hom}_{\mathcal{C}}(X,U)$ and $\rho\in\textnormal{Hom}_{\mathcal{C}}(Y,V)$. A \textbf{block-homomorphism} between $\mathcal{X}$ and $\mathcal{Y}$ is a homomorphism $f\in\textnormal{Hom}_{\mathcal{C}}(X,Y)$ for which there is an $i\in \textnormal{Hom}_{\mathcal{C}}(U,V)$ such that $\rho\circ f=i\circ \pi$. \end{definition} A homomorphism is a block-homomorphism if and only if the following diagram commutes: \begin{center} \begin{tikzcd} X\arrow[r,"f"]\arrow[d, "\pi"]& Y\arrow[d,"\rho"]\\ U\arrow[r,"i"] & V\\ \end{tikzcd} \end{center} A block-homomorphism is called a \textbf{block-monomorphism} if it has a left inverse in $\mathcal{C}$. Let $Bl$ be the category whose objects are blocks and whose morphisms are block-homomorphisms. Let $Bl^{m}$ be the subcategory of $Bl$ with the same objects but where $\text{Hom}(Bl^{m})$ is the class of block-monomorphisms. \section{The main theorem} In this section we will show that a specific subcategory of blocks has colimits over certain diagrams that are defined using Hales--Jewett lines. This result describes the construction of the Partite Lemma in a purely category theoretic manner. We start by defining the category and diagram that we need for our main theorem. We will then state and prove our main result. \subsection{The category $Bl_{i_0}$} In this section we define a subcategory of $Bl$ which can be viewed as a local version of $Bl$. We start by defining the morphisms for this category. For this section fix a category $\mathcal{C}$ and a $\mathcal{C}$-language $\mathcal{L}$. \begin{definition} Suppose $\mathcal{X}$ and $ \mathcal{Y}$ are blocks so that $\pi\in\textnormal{Hom}_{\mathcal{C}}(X,U)$ and $\rho\in\textnormal{Hom}_{\mathcal{C}}(Y,V)$. If $i_0\in \textnormal{Hom}_{\mathcal{C}}(U,V)$, then an $i_0$-\textbf{homomorphism} between $\mathcal{X}$ and $\mathcal{Y}$ is a block-homomorphism $f\in\textnormal{Hom}_{Bl}(\mathcal{X},\mathcal{Y})$ such that $\rho\circ f=i_0\circ \pi$. \end{definition} An $i_0$-\textbf{monomorphism} is an $i_0$-homomorphism with a left inverse. We will now define the subcategory for our main theorem. Fix a morphism\\ $i_0\in \textnormal{Hom}_{\mathcal{C}}(U,V)$ for some $U,V\in \text{Ob}(\mathcal{C})$. The category $Bl_{i_0}$ has two types of objects, domain objects and codomain objects. Domain objects are objects of the form $$(\mathsf{X},\pi)\in \text{Ob}(Bl)\text{, where } \pi \in \text{Hom}_{\mathcal{C}}(X,U)$$ and codomain objects are of the form$$(\mathsf{Y},\rho)\in \text{Ob}(Bl)\text{, where } \rho \in \text{Hom}_{\mathcal{C}}(Y,V).$$ Morphisms in $Bl_{i_0}$ between a domain object and a codomain object are $i_0$-homomorphisms, morphisms between domain objects are $\text{Id}_{U}$-homomorphisms, and morphisms between codomain objects are $\text{Id}_{V}$-homomorphisms. Let $Bl_{i_0}^{m}$ be the subcategory of $Bl_{i_0}$ with the same objects as $Bl_{i_0}$ where all morphisms have a left inverse in $\mathcal{C}$. \subsection{The line diagram} To define the diagram that we need for our main theorem we introduce the notion of combinatorial lines. \begin{definition} If $P$ is a set and $N>0$, then a \textbf{line} $\ell$ in $P^N$ is a nonempty set $d(\ell)\subseteq N$ (where we identify $N$ with $\{0,\dots,N-1\}$) together with an element $\ell_k\in P$ for each $k\in N\backslash d(\ell)$. \end{definition} If $\bar{e}\in P^N$ and $\ell$ is a line in $P^N$ we say that $\bar{e}\in \ell$ if $\bar{e}_k=\ell_k$ for all $k\notin d(\ell)$ and $\bar{e}$ is constant on $d(\ell)$.
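The combinatorics of lines is elementary and can be checked mechanically. The following Python sketch is an illustration only, under our own encoding of a line by the pair $(d(\ell),(\ell_k)_{k\notin d(\ell)})$; it tests membership $\bar{e}\in\ell$ and extracts the constant value of $\bar{e}$ on $d(\ell)$ (denoted $\ell(\bar{e})$ in what follows).
\begin{verbatim}
from typing import Dict, Sequence, Set

# A line ell in P^N is encoded by its set d of moving coordinates
# (a nonempty subset of {0,...,N-1}) and the fixed values ell_k for
# the coordinates k outside d.

def in_line(e: Sequence, d: Set[int], fixed: Dict[int, object]) -> bool:
    """Check e in ell: e agrees with the fixed values off d and is
    constant on d."""
    N = len(e)
    if any(e[k] != fixed[k] for k in range(N) if k not in d):
        return False
    return len({e[k] for k in d}) == 1

def line_value(e: Sequence, d: Set[int]):
    """The constant value of e on d, assuming e belongs to the line."""
    return e[next(iter(d))]

# Example with P = {'a','b'}, N = 3 and the line with d = {0,2} and
# fixed value 'a' at coordinate 1.
d, fixed = {0, 2}, {1: 'a'}
assert in_line(('b', 'a', 'b'), d, fixed)      # member, value 'b'
assert not in_line(('b', 'a', 'a'), d, fixed)  # not constant on d
\end{verbatim}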
If $\bar{e}\in\ell$ we let $\ell(\bar{e})$ be the constant value of $\bar{e}$ on $d(\ell)$. Note that for every $\bar{e}\in P^N$ there is a line $\ell$ so that $\bar{e}\in \ell$. Given the above definition of lines we will construct a diagram. \begin{definition} Fix $N>0$, a category $\mathcal{C}$, a $\mathcal{C}$-language $\mathcal{L}$, $i_0\in \textnormal{Hom}(\mathcal{C})$, a monic domain object $\mathcal{X}\in\text{Ob}(Bl^{m}_{i_0})$, and a codomain object $\mathcal{Y}\in \text{Ob}(Bl^{m}_{i_0})$. Let $J$ be the category with $$\textnormal{Ob}(J)=\textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X}, \mathcal{Y})^N\cup \{\ell\colon \ell\textnormal{ is a line in } \textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})^N\}$$ and the only non-identity morphisms in $J$ are $\textnormal{Hom}_{J}(\bar{e},\ell)=\{(\ell,\bar{e})\}$ where $\bar{e}\in \ell$. Then the \textbf{line diagram} is the functor $G\colon J\to Bl^{m}_{i_0}$, $$G(\bar{e})= \mathcal{X} \textnormal{ if }\bar{e}\in \textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})^N$$ $$G(\ell)=\mathcal{Y} \text{ if }\ell \text{ is a line in }\textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})^N $$ and on non-identity morphisms $G(\ell,\bar{e})=\ell(\bar{e})$. \end{definition} Note that the definition of the index category $J$ only depends on $N$, $i_0$, $\mathcal{X}$, and $\mathcal{Y}$. Also notice that since $Bl^{m}_{i_0}$ is a subcategory of $Bl_{i_0}$, $G$ can also be considered as a functor $G\colon J\to Bl_{i_0}$. The goal of this section is to build a cocone over $G$ in $Bl^{m}_{i_0}$ that is also the colimit over $G$ in $Bl_{i_0}$. In applications we only use the existence of a cocone over $G$ in $Bl^{m}_{i_0}$; being a colimit in $Bl_{i_0}$ ensures the canonicity of the construction. \subsection{Statement and proof of the main theorem} The following theorem is the main result of this paper. In Ramsey theoretic applications only the existence of the object from the conclusion of Theorem 1 is used. While the proof of this theorem is purely categorical, on a technical level we build on arguments going back to \cite{2}, \cite{3}, and \cite{10}. Our proof is most closely related to the arguments found in \cite{6} and \cite{5}. \begin{theorem} Let $\mathcal{C}$ be a category that has colimits, let $\mathcal{L}$ be a $\mathcal{C}$-language, and let \\ $i_0\in\text{Hom}(\mathcal{C})$ have a left inverse. Then for each line diagram $G$, $Bl_{i_0}$ has a colimit over $G$ that is also a cocone over $G$ in $Bl^{m}_{i_0}$. \end{theorem} \begin{proof} Fix $N>0$, $\mathcal{X}=(\mathsf{X},\pi)$ a monic domain object in $Bl^{m}_{i_0}$ and $\mathcal{Y}=(\mathsf{Y},\rho)$ a codomain object in $Bl^{m}_{i_0}$. For ease of notation let $\textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})=P$. For illustration we note that $\mathcal{Z}\in \text{Ob}(Bl^{m}_{i_0})$, together with morphisms $f_\ell$ and $f_{\bar{e}}$, is a cocone over the line diagram if and only if the following diagram commutes: \begin{equation*}\begin{tikzcd}[column sep=small] &Z \\ Y_{\ell} \arrow[ur, swap, "f_\ell"] &\cdots& Y_{\ell'} \arrow[ul, "f_{\ell'}"] \\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uur, bend left=60, "f_{\bar{e}}", near start]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uul, bend right=60, swap, "f_{\bar{e'}}", near start] \end{tikzcd}\label{eq 1} \end{equation*} Here we denote $G(\ell)$ by $Y_\ell$ and $G(\bar{e})$ by $X_{\bar{e}}$.\\ Let $H\colon Bl_{i_0}\to \mathcal{C}$ be the functor defined by $H(\mathcal{W})=W$ and $H(f)=f$.
Then there is a colimit $(Z,f_{\ell},f_{\bar{e}})_{\ell, \bar{e}\in\text{Ob}(J)}$ in $\mathcal{C}$ over the diagram $H\circ G$ by assumption. We will define interpretations of function symbols $F^\mathsf{Z}$, interpretations of relation symbols $R^\mathsf{Z}$, and a morphism $\sigma\in\text{Hom}_{\mathcal{C}}(Z,V)$ so that $\mathcal{Z}=(\mathsf{Z},\sigma)\in\text{Ob}(Bl^{m}_{i_0})$, $f_{\ell}\in \text{Hom}(Bl^{m}_{i_0})$, and $f_{\bar{e}}\in \text{Hom}(Bl^{m}_{i_0})$ for all $\ell,\bar{e} \in\text{Ob}(J)$. If we have found such $F^\mathsf{Z}$, $R^\mathsf{Z}$, and $\sigma$, then $(\mathcal{Z},f_{\ell},f_{\bar{e}})_{\ell, \bar{e}\in\text{Ob}(J)}$ is a cocone over $G$ in $Bl^{m}_{i_0}$. By the definition of $Bl^{m}_{i_0}$ we need to show the following: \begin{enumerate} \item For each line $\ell$ in $P^{N}$, $\sigma\circ f_{\ell}=\rho$, and $\sigma\circ f_{\bar{e}}=i_0\circ \pi$ for all $\bar{e}\in P^N$. \item For all $F$ in $\mathcal{L}$ with arity $(r,s)$, $\gamma\in \text{Hom}_{\mathcal{C}}(Z,r)$, and each line $\ell$ in $P^N$, $$F^\mathsf{Y}(\gamma\circ f_\ell)=F^\mathsf{Z}(\gamma)\circ f_{\ell}.$$ Similarly, for each $\bar{e}\in P^N$, $$F^{\mathsf{X}}(\gamma\circ f_{\bar{e}})= F^\mathsf{Z}(\gamma)\circ f_{\bar{e}}.$$ \item For each line $\ell$ in $P^N$, $f_{\ell}$ has a left inverse in $\mathcal{C}$ and for all $\bar{e}\in P^N$, $f_{\bar{e}}$ has a left inverse in $\mathcal{C}$. \item For all $R$ in $\mathcal{L}$ with arity $r$, $\eta\in \text{Hom}_{\mathcal{C}}(r,Y)$, and each line $\ell$ in $P^N$, $$R^\mathsf{Y}(\eta)\Leftrightarrow R^\mathsf{Z}(f_{\ell}\circ \eta)$$ and for each $\bar{e}\in P^N$, $$R^\mathsf{X}(\eta)\Leftrightarrow R^\mathsf{Z}(f_{\bar{e}}\circ\eta).$$ \end{enumerate} After we have shown the above we will then show that $(\mathcal{Z},f_{\ell},f_{\bar{e}})_{\ell, \bar{e}\in\text{Ob}(J)}$ is the colimit over $G$ in $Bl_{i_0}$. First we claim that we do not need to check that $f_{\bar{e}}$ is a homomorphism with a left inverse for each $\bar{e}\in P^N$. To prove this claim suppose that for each line $\ell$ in $P^N$, $f_{\ell}$ is a homomorphism with a left inverse. Then for any $\bar{e}\in P^N$ let $\ell$ be a line so that $\bar{e}\in \ell$. So by the definition of cocone $f_{\bar{e}}=f_\ell\circ \ell(\bar{e})$. Thus $f_{\bar{e}}$ will be a homomorphism with a left inverse since $\ell(\bar{e})$ is a homomorphism with a left inverse. We will define the morphism $\sigma$ and the interpretations of function symbols and relation symbols by constructing cocones $(W,g_\ell,g_{\bar{e}})_{\ell,\bar{e}\in \text{Ob}(J)}$ in $\mathcal{C}$ over $H\circ G$. We will then use the definition of colimit to obtain a unique $u$ so that $g_{\ell}=u\circ f_{\ell}$ and $g_{\bar{e}}=u\circ f_{\bar{e}}$ for all $\ell,\bar{e}\in \text{Ob}(J)$. Whenever we construct such a cocone we will give a commuting diagram where the morphism obtained by the definition of colimit is denoted with $\exists$.\\ To define the map $\sigma$ we show that $(V,g_{\ell},g_{\bar{e}})_{\ell,\bar{e}\in \text{Ob}(J)}$, where $g_{\ell}=\rho$ for any line $\ell$ in $P^N$ and $g_{\bar{e}}= i_0\circ \pi$ for all $\bar{e}\in P^N$, is a cocone over $H\circ G$. Suppose that $\bar{e}\in \ell$. Then $\ell(\bar{e})\in \text{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})$, so $\ell(\bar{e})$ is an $i_0$-homomorphism. Thus $$\rho\circ \ell(\bar{e})=i_0\circ\pi.$$ This proves that $(V,g_{\ell},g_{\bar{e}})_{\ell,\bar{e}\in \text{Ob}(J)}$ is a cocone over $H\circ G$.
Therefore there is a unique $\sigma$ so that $\sigma\circ f_{\ell}=\rho$ and $\sigma \circ f_{\bar{e}}=i_0\circ \pi$ for any $\ell,\bar{e}\in \textnormal{Ob}(J)$ by the definition of colimit. \begin{equation*}\begin{tikzcd}[column sep=small] &V\\&Z\arrow[u,"\exists \sigma"] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uur,crossing over, bend left=50, swap, "\rho"] &\cdots& Y_{\ell'} \arrow[ul, "f_{\ell'}"] \arrow[uul,crossing over, bend right=50, "\rho"]\\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uur, bend left=60, "f_{\bar{e}}", near start]\arrow[uuur,bend left=70, "i_0\circ \pi"]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uul, bend right=60, swap, "f_{\bar{e'}}", near start]\arrow[uuul,bend right=70, swap, "i_0\circ \pi"] \end{tikzcd}\label{eq 2} \end{equation*} Fix $F$ a function symbol in $\mathcal{L}$ with arity $(r,s)$ and $\gamma\in \text{Hom}_{\mathcal{C}}(Z,r)$. We want to show that $(s,F^{\mathsf{Y}}(\gamma\circ f_\ell),F^{\mathsf{X}}(\gamma\circ f_{\bar{e}}))_{\ell,\bar{e}\in \text{Ob}(J)}$ is a cocone over $H\circ G$. Since $Z$ is a cocone in $\mathcal{C}$ over $H\circ G$, if $\bar{e}\in \ell$ then $f_{\ell}\circ \ell(\bar{e})=f_{\bar{e}}$. Since $\ell(\bar{e})$ is a homomorphism, $$F^{\mathsf{X}}(\gamma\circ f_{\bar{e}})=F^{\mathsf{X}}(\gamma \circ f_\ell \circ \ell(\bar{e}))=F^{\mathsf{Y}}(\gamma\circ f_{\ell})\circ \ell(\bar{e}).$$ Thus $(s,F^{\mathsf{Y}}(\gamma\circ f_\ell),F^{\mathsf{X}}(\gamma\circ f_{\bar{e}}))_{\ell,\bar{e}\in \text{Ob}(J)}$ is a cocone in $\mathcal{C}$ over $H\circ G$. By the definition of colimit there is a unique $F^\mathsf{Z}(\gamma)$ so that $F^\mathsf{Z}(\gamma)\circ f_{\ell}=F^{\mathsf{Y}}(\gamma\circ f_{\ell})$ and $F^{\mathsf{Z}}(\gamma)\circ f_{\bar{e}}=F^\mathsf{X}(\gamma\circ f_{\bar{e}})$. \begin{equation*} \begin{tikzcd}[column sep=large] &s &\\&Z\arrow[u,"\exists F^{\mathsf{Z}}(\gamma)"] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uur,crossing over, bend left=50, swap, "F^\mathsf{Y}(\gamma\circ f_\ell)", near start] &\cdots& Y_{\ell'} \arrow[ul, "f_{\ell'}"] \arrow[uul,crossing over, bend right=50, "F^\mathsf{Y}(\gamma\circ f_{\ell'})", near start]\\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uuur,bend left=70, "F^{\mathsf{X}}(\gamma\circ f_{\bar{e}})"]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uuul,bend right=70, swap, "F^{\mathsf{X}}(\gamma\circ f_{\bar{e}'})"] \end{tikzcd}\label{eq 4} \end{equation*} To show that each $f_{\ell}$ has a left inverse we need a cocone $(Y, u_{\ell,i},\bar{e}_i)_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ for each $i< N$. Fix $h\in \text{Hom}_{\mathcal{C}}(V,X)$ so that $h\circ i_0\circ \pi=\textnormal{Id}_{X}$. Such $h$ exists by assumption. For each $i< N$ let $$ u_{\ell,i}=\left\{\begin{array}{ll} \textnormal{Id}_Y & \text{if } i\in d(\ell)\\ \ell_i\circ h\circ \rho & \text{otherwise} \end{array}\right.$$ We show that $(Y, u_{\ell,i},\bar{e}_i)_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is a cocone over $H\circ G$. Fix $\ell$ and $\bar{e}$ so that $\bar{e}\in \ell$. Then we prove $u_{\ell,i}\circ \ell(\bar{e})=\bar{e}_i$ by cases. If $i\in d(\ell)$, then $\textnormal{Id}_Y\circ \ell(\bar{e})=\bar{e}_i$ by definition.
If $i\notin d(\ell)$, then $\ell_i=\bar{e}_i$, so since $\ell(\bar{e})$ is an $i_0$-homomorphism, $$\ell_i\circ h\circ \rho\circ \ell(\bar{e})=\ell_i\circ h\circ i_0\circ \pi=\ell_i=\bar{e}_i.$$ Thus $(Y, u_{\ell,i},\bar{e}_i)_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is a cocone over $H\circ G$ for all $i<N$. So for all $i<N$ there is a $v_i$ so that for every line $\ell$ in $P^N$ and every $\bar{e}\in P^N$, $v_i\circ f_\ell=u_{\ell,i}$ and $v_i\circ f_{\bar{e}}=\bar{e}_{i}$. If $\ell$ is a line in $P^N$ then since $d(\ell)$ is nonempty there is $i\in d(\ell)$, so $$v_i\circ f_{\ell}=u_{\ell,i}=\textnormal{Id}_{Y}.$$ Thus we have shown that each $f_{\ell}$ has a left inverse in $\mathcal{C}$. \begin{equation*}\begin{tikzcd}[column sep=small] &Y\\&Z\arrow[u,"\exists v_i"] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uur,crossing over, bend left=50, swap, "u_{\ell,i}"] &\cdots& Y_{\ell'} \arrow[ul, "f_{\ell'}"] \arrow[uul,crossing over, bend right=50, "u_{\ell',i}"]\\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uuur,bend left=70, "\bar{e}_i"]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uuul,bend right=70, swap, "\bar{e'}_i"] \end{tikzcd} \label{eq 5} \end{equation*} We define the interpretations of relation symbols on $Z$ as follows. For each relation symbol $R$ in $\mathcal{L}$ of arity $r$, let $R^{\mathsf{Z}}$ be such that: $$R^\mathsf{Z}(\delta)\Leftrightarrow \text{there are } \ell\text{ a line in }P^N \text{ and } \eta\in\textnormal{Hom}_{\mathcal{C}}(r,Y), \text{ so that } \delta=f_{\ell}\circ \eta \text{ and } R^\mathsf{Y}(\eta).$$ Clearly $R^\mathsf{Y}(\eta)$ implies $R^{\mathsf{Z}}(f_\ell \circ \eta)$. So to show that $f_\ell$ is a homomorphism it remains to prove that $R^\mathsf{Z}(f_\ell \circ \eta)$ implies $R^\mathsf{Y}(\eta)$. So suppose $R^{\mathsf{Z}}(f_\ell\circ \eta)$; then by definition there is an $\eta'\in \text{Hom}_{\mathcal{C}}(r,Y)$ and a line $\ell'$ so that $R^\mathsf{Y}(\eta')$ and $f_{\ell'}\circ\eta'=f_{\ell}\circ \eta$. We will let $f_{\ell}\circ \eta=\delta$. We show that $R^\mathsf{Y}(\eta)$ by two cases. If there is $i\in d(\ell)\cap d(\ell')$, then $$v_i\circ \delta=u_{\ell,i}\circ \eta=u_{\ell',i}\circ \eta'.$$ Thus $\eta=\eta'$ by the definition of $u_{\ell,i}$ and $u_{\ell',i}$. In the second case there is no $i\in d(\ell)\cap d(\ell')$, so let $i\in d(\ell)$ and $j\in d(\ell')$. Then by the definition of $v_{i}$, $$v_i\circ \delta=u_{\ell,i}\circ\eta=u_{\ell',i}\circ\eta'.$$ Then by the definition of $u_{\ell,i}$ and $u_{\ell',i}$, we have $$\textnormal{Id}_{Y}\circ \eta= \ell'_i\circ h\circ \rho \circ \eta'.$$ Note that by the definition of $\sigma$, $\rho=\sigma\circ f_{\ell'}$. So $$\eta=\ell'_{i}\circ h\circ \sigma \circ f_{\ell'}\circ \eta'.$$ Thus by the definition of $\delta$, $$\eta=\ell'_{i}\circ h\circ \sigma\circ \delta.$$ We can construct an analogous argument by replacing $i$ with $j$. So by symmetry, $$\eta'=\ell_{j}\circ h\circ \sigma\circ \delta.$$ Now since $\ell'_i$ and $\ell_j$ are homomorphisms, $$R^\mathsf{Y}(\eta)\Leftrightarrow R^{\mathsf{X}}(h\circ \sigma\circ \delta)\Leftrightarrow R^\mathsf{Y}(\eta').$$ So since $R^\mathsf{Y}(\eta')$ holds by assumption, $R^\mathsf{Y}(\eta)$ holds. Thus $(\mathcal{Z},f_{\ell},f_{\bar{e}})_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is a cocone over the line diagram in $Bl^{m}_{i_0}$. Next we show that $(\mathcal{Z},f_{\ell},f_{\bar{e}})_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is the colimit over the line diagram in $Bl_{i_0}$.
Suppose $ (\mathcal{W},g_{\ell},g_{\bar{e}})_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is a cocone over the line diagram $G$ in $Bl_{i_0}$ where $\mathcal{W}=(\mathsf{W},\tau)$. We will show that there is a unique $f\in \text{Hom}_{Bl_{i_0}}(\mathcal{Z},\mathcal{W})$ so that $g_\ell=f\circ f_{\ell}$ and $g_{\bar{e}}= f\circ f_{\bar{e}}$ for all $\ell,\bar{e}\in \text{Ob}(J)$. Note that $ (W,g_{\ell},g_{\bar{e}})_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is a cocone over $H\circ G$, so there is a unique $f\in \textnormal{Hom}_{\mathcal{C}}(Z,W)$ such that $g_\ell=f\circ f_{\ell}$ and $g_{\bar{e}}= f\circ f_{\bar{e}}$ for all $\ell,\bar{e}\in \text{Ob}(J)$. Thus it remains to show that $f\in \textnormal{Hom}_{Bl_{i_0}}(\mathcal{Z},\mathcal{W})$. \begin{equation*}\begin{tikzcd}[column sep=small] & W \\ &Z\arrow[u,"\exists f", near start] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uur, bend left=30, "g_\ell"] &\cdots& Y_{\ell'} \arrow[uul, swap, bend right=30, "g_{\ell'}"]\arrow[ul, "f_{\ell'}"] \\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uuur, bend left=75, "g_{\bar{e}}"]\arrow[uur, bend left=60, "f_{\bar{e}}", near start]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uul, bend right=60, swap, "f_{\bar{e'}}", near start]\arrow[uuul, swap, bend right=75, "g_{\bar{e}'}"] \end{tikzcd}\label{eq 6} \end{equation*} We start by using the uniqueness of $\sigma$ to show that $\tau\circ f=\sigma$. First we show that $\mathcal{W}$ is a codomain object. For any line $\ell$ in $P^N$, $g_{\ell}\in \text{Hom}_{Bl_{i_0}}(\mathcal{Y},\mathcal{W})$, so since $\mathcal{Y}$ is a codomain object $\mathcal{W}$ must be a codomain object. Thus each $g_{\ell}$ is an $\textnormal{Id}_V$-homomorphism and each $g_{\bar{e}}$ is an $i_0$-homomorphism. Therefore $\tau\circ g_{\ell}=\rho$ for all $\ell$ and $\tau\circ g_{\bar{e}}=i_0\circ \pi$ for all $\bar{e}$. Then by the definition of $f$, $\tau\circ f\circ f_{\ell}=\rho$ for all $\ell$ and $\tau\circ f\circ f_{\bar{e}}=i_0\circ \pi$ for all $\bar{e}$. Thus by the uniqueness of $\sigma$, $\tau\circ f=\sigma$. \begin{equation*}\begin{tikzcd}[column sep=small] &V\\ & W\arrow[u, "\tau"]\\ &Z\arrow[uu, bend right=15, swap, "\sigma"]\arrow[u, "f"] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uur, bend left=30, "g_{\ell}"]\arrow[uuur,crossing over, bend left=50, "\rho"] &\cdots& Y_{\ell'}\arrow[uul, swap, bend right=30, "g_{\ell'}"] \arrow[ul, "f_{\ell'}"] \arrow[uuul,crossing over, swap, bend right=50, "\rho"]\\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uur, bend left=60, "f_{\bar{e}}", near start]\arrow[uuuur,bend left=70, "i_0\circ \pi"]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uul, bend right=60, swap, "f_{\bar{e'}}", near start]\arrow[uuuul,bend right=70, swap, "i_0\circ \pi"] \end{tikzcd}\label{eq 7} \end{equation*} We will show that $f$ preserves function symbols in a similar manner. Fix a function symbol $F$ of arity $(r,s)$ in $\mathcal{L}$ and fix $\gamma\in \textnormal{Hom}_{\mathcal{C}}(W,r)$. Since each $g_{\ell}$ is a homomorphism, $F^{\mathsf{Y}}(\gamma\circ g_{\ell})=F^{\mathsf{W}}(\gamma)\circ g_{\ell}$ for all $\ell$, and since each $g_{\bar{e}}$ is a homomorphism, $F^{\mathsf{X}}(\gamma\circ g_{\bar{e}})=F^{\mathsf{W}}(\gamma)\circ g_{\bar{e}}$ for all $\bar{e}$.
Thus for any $\ell$, $$F^{\mathsf{Y}}(\gamma\circ f\circ f_{\ell})=F^{\mathsf{W}}(\gamma)\circ f\circ f_{\ell}$$ and for any $\bar{e}$, $$F^{\mathsf{X}}(\gamma\circ f\circ f_{\bar{e}})=F^{\mathsf{W}}(\gamma)\circ f\circ f_{\bar{e}}.$$ So by the uniqueness of $F^{\mathsf{Z}}(\gamma\circ f)$, $F^{\mathsf{W}}(\gamma)\circ f=F^{\mathsf{Z}}(\gamma\circ f)$. \begin{equation*} \begin{tikzcd}[column sep=large] &s\\ &W\arrow[u, "F^{\mathsf{W}}(\gamma)"]\\&Z\arrow[u,"f"]\arrow[uu, bend right=15, swap, "F^{\mathsf{Z}}(\gamma\circ f)", near end] \\ Y_{\ell} \arrow[ur, swap, "f_\ell"]\arrow[uuur,crossing over, bend left=60, swap, "F^\mathsf{Y}(\gamma\circ g_\ell)"]\arrow[uur, bend left=30, swap, "g_{\ell}"] &\cdots& Y_{\ell'} \arrow[ul, "f_{\ell'}"] \arrow[uuul,crossing over, bend right=60, "F^\mathsf{Y}(\gamma\circ g_{\ell'})"]\arrow[uul, bend right=30, "g_{\ell'}"]\\ X_{\bar{e}} \arrow[u, swap, "\ell(\bar{e})"]\arrow[uuuur,bend left=70, "F^{\mathsf{X}}(\gamma\circ g_{\bar{e}})"]\arrow[urr, "\ell'(\bar{e})" description] &\cdots & X_{\bar{e'}} \arrow[u, "\ell'(\bar{e}')"]\arrow[uuuul,bend right=70, swap, "F^{\mathsf{X}}(\gamma\circ g_{\bar{e}'})"] \end{tikzcd}\label{eq 8} \end{equation*} It remains to show that $f$ preserves relation symbols. Fix a relation symbol $R$ of arity $r$ in $\mathcal{L}$. Then by the definition of $R^{\mathsf{Z}}$, for any $\delta\in\textnormal{Hom}_{\mathcal{C}}(r,Z)$, $R^{\mathsf{Z}}(\delta)$ holds if and only if there is a line $\ell$ and $\eta\in \textnormal{Hom}_{\mathcal{C}}(r,Y)$ so that $f_{\ell}\circ\eta=\delta$ and $R^{\mathsf{Y}}(\eta)$. Then since $g_{\ell}$ is a homomorphism, $R^{\mathsf{Y}}(\eta)$ if and only if $R^{\mathsf{W}}(g_{\ell}\circ \eta)$. Thus $R^{\mathsf{Z}}(\delta)$ if and only if $R^{\mathsf{W}}(f\circ f_{\ell}\circ \eta)$, i.e., if and only if $R^{\mathsf{W}}(f\circ \delta)$. Therefore $f\in\text{Hom}_{Bl_{i_0}}(\mathcal{Z},\mathcal{W})$, so $(\mathcal{Z},f_{\ell},f_{\bar{e}})_{\ell,\bar{e}\in \textnormal{Ob}(J)}$ is the colimit over the line diagram in $Bl_{i_0}$. \end{proof} \section{Application to Ramsey theory} In this section, we apply Theorem 1 to obtain results in Ramsey theory. First we define the Ramsey property for categories. Then we give a general Transfer Lemma that uses cocones to transfer Ramsey properties between categories. Next we state the Partite Lemma and show how the Partite Lemma follows directly from Theorem 1 and the Transfer Lemma. We then give a categorical version of the Partite Construction. We apply our Partite Construction to prove the results in \cite{6} and \cite{5} in a unified manner. \subsection {Ramsey property} We start by defining the basic notation of Ramsey theory. Then we define the Ramsey property for a category. \begin{definition} For all $r>0$, an $r$-\textbf{coloring} of a set $S$ is a function $\chi\colon S\to r$ where $r=\{0,\dots, r-1\}$. Any $R\subseteq S$ is $\chi$-\textbf{monochromatic} if $\chi(R)=\{i\}$ for some $i\in r$. \end{definition} Using the above notation we define the main property we consider. \begin{definition} Fix a category $\mathcal{C}$. If $A,B,C\in \textnormal{Ob}(\mathcal{C})$ and $r>0$, then we say $C$ is a Ramsey witness for $A$ and $B$ (denoted $C\to (B)^{A}_{r}$) if for any $r$-coloring $\chi$ of $\textnormal{Hom}_{\mathcal{C}}(A,C)$ there is $f\in \textnormal{Hom}_{\mathcal{C}}(B,C)$ so that $f\circ \textnormal{Hom}_{\mathcal{C}}(A,B)$ is $\chi$-monochromatic.
A category $\mathcal{C}$ has the Ramsey property if for all $A,B\in \textnormal{Ob}(\mathcal{C})$ and for all $r>0$, there is $C\in \textnormal{Ob}(\mathcal{C})$ so that $C\to (B)^{A}_{r}$. \end{definition} The standard example of a category with the Ramsey property is the category $(\textbf{Fin},\leq)$ whose objects are finite linear orders and whose morphisms are increasing injections. The fact that $(\textbf{Fin},\leq)$ has the Ramsey property is equivalent to Ramsey's Theorem. \subsection{Transferring Ramsey Property Over Cocones} Given a category $\mathcal{C}$ with the Ramsey property, a category $\mathcal{D}$, and a map $F\colon \mathcal{C}\to \mathcal{D}$, it is natural to ask when $\mathcal{D}$ has the Ramsey property. This situation has already been examined in \cite[Proposition 6.4]{12} and in \cite[Lemma 3.1]{7}. In fact our Transfer Lemma is equivalent to ideas found in \cite[Proposition 6.4]{12}. The difference between these theorems and our Transfer Lemma is that we use the established idea of a cocone to transfer a Ramsey statement. Instead of transferring the Ramsey property our lemma will transfer the existence of Ramsey witnesses. More precisely, if $A,B\in \text{Ob}(\mathcal{C})$, $r>0$, and there is a $C\in \text{Ob}(\mathcal{C})$ so that $C\to (B)^A_{r}$, then for any map $F\colon \mathcal{C}\to \mathcal{D}$ we will find conditions under which $F(A)$ and $F(B)$ have a witness for the Ramsey property in $\mathcal{D}$. Notice that for this local case if $F(A)=D$ and $F(B)=E$ we only require $F$ to be defined as a map $F\colon \text{Hom}_{\mathcal{C}}(A,B)\to \text{Hom}_{\mathcal{D}}(D,E)$. Our Transfer Lemma will show that there is a Ramsey witness for $D$ and $E$ if $F$ is surjective and there is a cocone in $\mathcal{D}$ over a certain diagram. First we define this diagram and then we prove the Transfer Lemma. \begin{definition}Let $\mathcal{C},\mathcal{D}$ be categories. Suppose there are $A,B,C\in \textnormal{Ob}(\mathcal{C})$, $D,E\in \textnormal{Ob}(\mathcal{D})$ and $F\colon \textnormal{Hom}_{\mathcal{C}}(A,B)\to \textnormal{Hom}_\mathcal{D}(D,E)$. Let $J$ be the category with $$\text{Ob}(J)=\textnormal{Hom}_\mathcal{C}(A,C)\cup\textnormal{Hom}_\mathcal{C}(B,C)$$ and the only non-identity morphisms in $J$ are of the form \\$\text{Hom}_{J}(h,g)=\{(h,g,f)\colon f\in \text{Hom}_{\mathcal{C}}(A,B)\text{ and } g\circ f=h\}$ where $h\in \text{Hom}_{\mathcal{C}}(A,C)$ and $g\in \text{Hom}_{\mathcal{C}}(B,C)$. Then the \textbf{transfer diagram} $H\colon J\to \mathcal{D}$ is defined on objects by $$H(f)=D \text{ for all }f\in \textnormal{Hom}_\mathcal{C}(A,C), \ H(g)=E\text{ for all }g\in \textnormal{Hom}_\mathcal{C}(B,C)$$ and on non-identity morphisms by $H(h,g,f)=F(f).$ \end{definition} Now that we have defined the transfer diagram we can state the Transfer Lemma. \begin{lemma}[Transfer Lemma] Let $\mathcal{C},\mathcal{D}$ be categories and fix $r>0$. Suppose there are $A,B,C\in \textnormal{Ob}(\mathcal{C})$ so that $C\to (B)^{A}_r$, $D,E\in \textnormal{Ob}(\mathcal{D})$, and a surjection $F\colon \textnormal{Hom}_{\mathcal{C}}(A,B)\to \textnormal{Hom}_\mathcal{D}(D,E)$. If $\mathcal{D}$ has a cocone $W\in\textnormal{Ob}(\mathcal{D})$ over the transfer diagram, then $W\to (E)^D_r$ in the category $\mathcal{D}$. \end{lemma} \begin{proof} Let $A,B\in \textnormal{Ob}(\mathcal{C})$, $D,E\in \textnormal{Ob}(\mathcal{D})$, and $F\colon \textnormal{Hom}_{\mathcal{C}}(A,B)\to \text{Hom}_{\mathcal{D}}(D,E)$ be as in the statement. Fix a cocone $(W,\phi_f)_{f\in \text{Ob}(J)}$ over the transfer diagram in $\mathcal{D}$.
Thus by the definition of cocone we have the following commutative diagram, \begin{center}\begin{tikzcd}[column sep=small] &W \\ E_{g} \arrow[ur, swap, "\phi_g"] &\cdots& E_{g'} \arrow[ul, "\phi_{g'}"] \\ D_h \arrow[uur, bend left=60, "\phi_{f}", near start]\arrow[u, swap, "F(f)"] &\cdots & D_{h'} \arrow[u, "F(f')"]\arrow[uul, bend right=60, swap, "\phi_{f'}", near start] \end{tikzcd}\end{center} where we denote $H(g)$ by $E_g$ and $H(h)$ by $D_h$. We will use the commutativity of the above diagram to show that $W\to (E)^D_r$. \\ Let $\chi\colon \text{Hom}_{\mathcal{D}}(D,W)\to r$ be a coloring. We define a coloring $\chi'\colon \text{Hom}_{\mathcal{C}}(A,C)\to r$ by $\chi'(f)=\chi(\phi_f)$. Since $C\to (B)^{A}_r$, there is $g\in \text{Hom}_{\mathcal{C}}(B,C)$ so that $g\circ \text{Hom}_\mathcal{C}(A,B) \text{ is }\chi' \text{-monochromatic}.$ It remains to show that $\phi_g \circ \text{Hom}_\mathcal{D}(D,E) \text{ is }\chi \text{-monochromatic}$. Let $j\in \text{Hom}_\mathcal{D}(D,E)$. Since $F$ is a surjection there is $h\in \text{Hom}_\mathcal{C}(A,B)$ so that $F(h)=j$. So by the definition of cocone, $$\phi_{g\circ h}=\phi_g\circ F(h)=\phi_g\circ j.$$ Then by the definition of $\chi'$, $$\chi(\phi_g\circ j)=\chi(\phi_{g\circ h})=\chi'(g\circ h).$$ Because $g\circ \text{Hom}_{\mathcal{C}}(A,B)$ is $\chi'$-monochromatic, $\phi_g \circ \text{Hom}_\mathcal{D}(D,E) \text{ is }\chi \text{-monochromatic}.$ \end{proof} \subsection{The Partite Lemma} In this section, we will state and prove the Partite Lemma. We show that Theorem 1 gives us precisely what is necessary to apply the Transfer Lemma to the Hales--Jewett Theorem, which will prove the Partite Lemma. In order to use the Transfer Lemma we define a category which we call the Hales--Jewett category and give a reformulation of the Hales--Jewett Theorem using the Hales--Jewett category. Then we prove the Partite Lemma by showing that a line diagram is a transfer diagram for the Hales--Jewett category. Fix a finite set $P$. We let $HJ(P)$ be the category with $\textnormal{Ob}(HJ(P))=\mathbb{N}$ and\\ $\text{Hom}_{HJ(P)}(0,N)=P^N$, in particular $\text{Hom}_{HJ(P)}(0,1)=P$. Define $\textnormal{Hom}_{HJ(P)}(1,N)$ as the set of lines in $P^N$ and let all other morphisms be identities. For all $p\in P$ and all lines $\ell$ in $P^N$ we define $\ell\circ p\in P^N$ by $$(\ell\circ p)_i=\left\{\begin{array}{ll} p & \text{ if } i\in d(\ell)\\ \ell_i & \text{otherwise}\\ \end{array} \right.$$ Note that for any line $\ell$ in $P^N$ and any $\bar{e}\in P^N$, $\bar{e}\in \ell$ if and only if $\ell\circ \ell(\bar{e})=\bar{e}$ in $HJ(P)$. In \cite{7} the author defines the Graham--Rothschild category, and with some modification the category $HJ(P)$ fits nicely as a subcategory of the Graham--Rothschild category. With our terminology we can now give a reformulation of the Hales--Jewett Theorem. \begin{theorem}[Hales--Jewett] For all $r>0$ and for each finite set $P$, there is $N$ so that $N\to (1)^0_r$ in $HJ(P)$. \end{theorem} The version of the Partite Lemma below is a unification of the main lemmas in \cite{6} and \cite{5}. \begin{corollary}[Partite Lemma] Let $\mathcal{C}$ be a category so that for all $X,Y\in \textnormal{Ob}(\mathcal{C})$,\\ $\textnormal{Hom}_{\mathcal{C}}(X,Y)$ is finite and suppose that $\mathcal{C}$ has colimits over all diagrams $G\colon J\to \mathcal{C}$ where the index category $J$ is finite. Fix a $\mathcal{C}$-language $\mathcal{L}$ and let $i_0\in \text{Hom}(\mathcal{C})$ have a left inverse.
Then for any $r>0$, any monic domain object $\mathcal{X}\in \text{Ob}(Bl^{m}_{i_0})$, and any codomain object $\mathcal{Y}\in \text{Ob}(Bl^{m}_{i_0})$ there is a $\mathcal{Z}\in \text{Ob}(Bl^{m}_{i_0})$ so that $\mathcal{Z}\to (\mathcal{Y})^{\mathcal{X}}_r$. \end{corollary} \begin{proof} Fix $r>0$, $\mathcal{X}$ a monic domain object in $Bl^{m}_{i_0}$, and $\mathcal{Y}$ a codomain object in $Bl^{m}_{i_0}$. First we show that we can still apply Theorem 1 even though $\mathcal{C}$ no longer has all colimits. Note that the proof of Theorem 1 only used the fact that $\mathcal{C}$ had colimits over diagrams with index category $J$ where $\text{Ob}(J)=\text{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})^N\cup \{\ell\colon \ell \text{ is a line in }\text{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})^N\}$ for some $N$. Since by assumption on $\mathcal{C}$ the set $\textnormal{Hom}_{\mathcal{C}}(X,Y)$ is finite, $\textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})$ is finite. Thus the set of objects in $J$ is finite, so we can apply Theorem 1. Thus all line diagrams have cocones in $Bl^{m}_{i_0}$.\\ If $P=\textnormal{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})$, then $P$ is finite, so by the Hales--Jewett Theorem there is an $N$ so that $N\to (1)^{0}_{r}$ in $HJ(P)$. Our goal is to apply the Transfer Lemma to $\text{Id}\colon \text{Hom}_{HJ(P)}(0,1)\to \text{Hom}_{Bl^{m}_{i_0}}(\mathcal{X},\mathcal{Y})$. To do so we show that the transfer diagram is the line diagram. Note that the transfer diagram has index category $J$ where $$\text{Ob}(J)=\text{Hom}_{HJ(P)}(0,N)\cup \text{Hom}_{HJ(P)}(1,N)=P^N\cup \{\ell\colon \ell\text{ is a line in }P^N\}$$ and the non-identity morphisms are of the form \\$\text{Hom}_{J}(\bar{e},\ell)=\{(\bar{e},\ell,p)\colon p\in P\text{ so that } \ell\circ p=\bar{e}\}$ where $\bar{e}\in P^N$ and $\ell$ is a line in $P^N$. Then for all lines $\ell$ and all $\bar{e}\in P^N$, if there is a $p\in P$ such that $\ell\circ p=\bar{e}$, then $\bar{e}\in \ell$ and $\ell(\bar{e})=p$. Thus the only non-identity morphisms in $J$ are $\text{Hom}_{J}(\bar{e},\ell)=\{(\bar{e},\ell,\ell(\bar{e}))\}$ if $\bar{e}\in \ell$. Then the transfer diagram is $G\colon J\to Bl^{m}_{i_0}$ defined by $$G(\bar{e})= \mathcal{X} \textnormal{ if }\bar{e}\in P^N$$ $$G(\ell)=\mathcal{Y} \text{ if }\ell \text{ is a line in }P^N $$ and on non-identity morphisms $G(\bar{e},\ell,\ell(\bar{e}))=\ell(\bar{e})$. Thus the transfer diagram is the line diagram. Since Theorem 1 provides a cocone over this diagram in $Bl^{m}_{i_0}$, the Transfer Lemma yields a $\mathcal{Z}\in \text{Ob}(Bl^{m}_{i_0})$ with $\mathcal{Z}\to (\mathcal{Y})^{\mathcal{X}}_r$. \end{proof} \subsection{The Partite Construction} In this section, we expand the Partite Lemma as stated in Section 5.3 from a result in $Bl^{m}_{i_0}$ to a larger category that we call $Bl^{m}_{\mathcal{D}}$. The category $Bl^{m}_{\mathcal{D}}$, which we define precisely below, is a subcategory of $Bl$. We need to cut down from $Bl$ to $Bl^{m}_{\mathcal{D}}$ since $Bl$ does not admit an analog of the Partite Lemma, while we prove a version of the Partite Lemma for $Bl^{m}_{\mathcal{D}}$ below. To define $Bl^{m}_{\mathcal{D}}$ we consider a new category $\mathcal{D}$ that will have the Ramsey property and a functor $G\colon \mathcal{D}\to \mathcal{C}$. This category $\mathcal{D}$ is analogous to the category of finite linear orders in the Ne\v{s}et\v{r}il--R\"{o}dl Theorem. \begin{definition} Let $\mathcal{C},\mathcal{D}$ be categories, $G\colon \mathcal{D}\to \mathcal{C}$ be a functor, and $\mathcal{L}$ be a $\mathcal{C}$-language.
A $\mathcal{D}$-\textbf{block} is a triple $\mathtt{X}=(\mathsf{X},\pi,K)$ where $\mathsf{X}$ is an $\mathcal{L}$-structure, $K\in\textnormal{Ob}(\mathcal{D})$, and $\pi\in \textnormal{Hom}_{\mathcal{C}}(X,G(K))$.\end{definition} For ease of notation if $\mathcal{X}=(\mathsf{X},\pi)$ we denote the $\mathcal{D}$-block $(\mathsf{X},\pi,K)$ by $(\mathcal{X},K)$. We define a category $Bl^{m}_{\mathcal{D}}$ of $\mathcal{D}$-blocks. Let the objects of $Bl^{m}_{\mathcal{D}}$ be the $\mathcal{D}$-blocks and if\\ $\mathtt{X}=(\mathcal{X},K),\mathtt{Y}=(\mathcal{Y},L)\in \text{Ob}(Bl^{m}_{\mathcal{D}})$, then $$\textnormal{Hom}_{Bl^{m}_{\mathcal{D}}}(\mathtt{X},\mathtt{Y})=\{f\in \textnormal{Hom}_{\mathcal{C}}(X,Y)\colon \text{there is an }i\in \text{Hom}_{\mathcal{D}}(K,L) \text{ so that }f\in \text{Hom}(Bl^{m}_{G(i)}) \}.$$ We now expand the Partite Lemma to the category $Bl^{m}_{\mathcal{D}}$. \begin{corollary}[Partite Construction] Let $\mathcal{C}$ and $\mathcal{D}$ be categories and let $G\colon \mathcal{D}\to \mathcal{C}$ be a functor so that the following hold: \begin{enumerate} \item For all $X,Y\in \textnormal{Ob}(\mathcal{C})$, $\textnormal{Hom}_{\mathcal{C}}(X,Y)$ is finite. Similarly, for all $K,L\in \textnormal{Ob}(\mathcal{D})$, $\textnormal{Hom}_{\mathcal{D}}(K,L)$ is finite. \item For all $f\in \textnormal{Hom}(\mathcal{D})$, $G(f)$ has a left inverse in $\mathcal{C}$. \item If $F\colon J\to \mathcal{C}$ is a functor where $\textnormal{Ob} (J)$ is finite, then $\mathcal{C}$ has a colimit over $F$. \item $\mathcal{D}$ has the Ramsey property. \end{enumerate} Fix a $\mathcal{C}$-language $\mathcal{L}$. Then for any $r>0$, any monic $\mathtt{X}\in \text{Ob}(Bl^{m}_{\mathcal{D}})$, and any $\mathtt{Y}\in \text{Ob}(Bl^{m}_{\mathcal{D}})$ there is a $\mathtt{Z}\in \text{Ob}(Bl^{m}_{\mathcal{D}})$ so that $\mathtt{Z}\to (\mathtt{Y})^{\mathtt{X}}_r$. \end{corollary} The proof of Corollary 5 follows ideas standard in the field of structural Ramsey theory, for example see \cite{10}, though we specifically follow the formulations in \cite{6} and \cite{5}. \begin{proof} Let $\mathtt{X}=(\mathsf{X},\pi,K)\in \textnormal{Ob}(Bl^{m}_{\mathcal{D}})$ be monic, $\mathtt{Y}=(\mathsf{Y},\rho,L)\in\textnormal{Ob}(Bl^{m}_{\mathcal{D}})$, and let $r>0$. By the Ramsey property of $\mathcal{D}$ there is $M\in \textnormal{Ob}(\mathcal{D})$ so that $M\to (L)^{K}_r$ in $\mathcal{D}$. We now define a $\mathcal{D}$-block $(\mathcal{Y}_0,M)$, with $\mathcal{Y}_0=(\mathsf{Y}_0,\rho_0)$, where $Y_0$ is defined as the categorical disjoint sum of copies of $Y$. In particular let $J$ be the category with $\text{Ob}(J)=\text{Hom}_{\mathcal{D}}(L,M)$ and whose only morphisms are identities. Let $H\colon J\to \mathcal{C}$ be defined by $H(i)=Y$ for all $i\in \text{Ob}(J)$. Let $(Y_0,h_i)_{i\in \text{Hom}_{\mathcal{D}}(L,M)}$ be the colimit over $H$ in $\mathcal{C}$. Note that since the only morphisms in $J$ are identities, for any $W\in \text{Ob}(\mathcal{C})$ and any collection $(f_i)_{i\in\text{Hom}_{\mathcal{D}}(L,M)}$ where $f_i\in \text{Hom}_{\mathcal{C}}(Y,W)$, $(W,f_i)_{i\in \text{Hom}_{\mathcal{D}}(L,M)}$ is a cocone over $H$. In particular $(G(M),G(i)\circ \rho)_{i\in \text{Hom}_{\mathcal{D}}(L,M)}$ is a cocone over $H$. Thus by the definition of colimit there is $\rho_0\in \text{Hom}_{\mathcal{C}}(Y_0,G(M))$ so that $\rho_0\circ h_i=G(i)\circ \rho$ for all $i\in \text{Hom}_{\mathcal{D}}(L,M)$. Let $\gamma\in\text{Hom}_{\mathcal{C}}(Y_0,q)$ and $F$ be a function symbol in $\mathcal{L}$ of arity $(q,s)$; then consider the cocone $(s,F^{\mathsf{Y}}(\gamma\circ h_i))_{i\in \text{Hom}_{\mathcal{D}}(L,M)}$.
By the definition of colimit there is a $F^{\mathsf{Y}_0}(\gamma)$ so that $F^{\mathsf{Y}_0}(\gamma)\circ h_i=F^{\mathsf{Y}}(\gamma\circ h_i)$ for all $i\in \text{Hom}_{\mathcal{D}}(L,M)$. Now given a relation symbol $R$ in $\mathcal{L}$ of arity $q$ and $\delta\in \text{Hom}_{\mathcal{C}}(q,Y_0)$ define $R^{\mathsf{Y}_0}$ so that $$R^{\mathsf{Y}_0}(\delta)\Leftrightarrow \text{ there are }\eta\in \text{Hom}_{\mathcal{C}}(q,Y),\ i\in \text{Hom}_{\mathcal{D}}(L,M) \text{ so that }\delta=h_i\circ \eta \text{ and }R^{\mathsf{Y}}(\eta).$$ It is easy to check that each $h_i$ is a $G(i)$-monomorphism.\\ We enumerate $\text{Hom}_{\mathcal{D}}(K,M)$ by letting $\text{Hom}_{\mathcal{D}}(K,M)=\{j_k\colon k<n\}$. Then recursively define $(\mathcal{Y}_{k+1},M)$ by the Partite Lemma so that $\mathcal{Y}_{k+1}\to (\mathcal{Y}_k)^{\mathcal{X}}_r$ in $Bl^{m}_{G(j_k)}$. We show that $\mathtt{Z}=(\mathcal{Y}_{n},M)$ is the desired Ramsey witness.\\ Fix a coloring $\chi\colon \text{Hom}_{Bl^{m}_{\mathcal{D}}}(\mathtt{X},\mathtt{Z})\to r$. We define morphisms $g_{k+1}\in \text{Hom}_{Bl^{m}_{G(j_k)}}(\mathcal{Y}_k,\mathcal{Y}_{k+1})$ recursively by the Partite Lemma so that $g_{n}\circ \cdots\circ g_{k+1}\circ \text{Hom}_{Bl^{m}_{G(j_k)}}(\mathcal{X},\mathcal{Y}_k)$ is $\chi$-monochromatic. We claim that if $g=g_n\circ \cdots \circ g_1$ then for all $f\in \text{Hom}_{\mathcal{C}}(X,Y_0)$ which are $G(j_k)$-monomorphisms for some $k<n$, $\chi(g\circ f)$ depends only on the $j_k\colon K\to M$ so that $\rho_0\circ f=G(j_k)\circ \pi$. To prove the claim suppose that $f$ is a $G(j_k)$-monomorphism. Since $g_l$ is an $\textnormal{Id}_{G(M)}$-monomorphism for all $l\le k$, $g_{k}\circ\cdots\circ g_1\circ f$ is a $G(j_k)$-monomorphism. Then by the definition of $g_{k+1}$, $\chi(g_n\circ \cdots\circ g_{k+1} \circ g_{k}\circ\cdots\circ g_1\circ f)$ is a fixed color, which proves the claim. Then define a coloring $\chi'$ so that $$\chi'\colon \text{Hom}_{\mathcal{D}}(K,M)\to r, \ \chi'(j_k)=\chi(g\circ f)\text { if }\rho_0\circ f=G(j_k)\circ \pi.$$ Then by the Ramsey property of $\mathcal{D}$ there is $i_0\in \text{Hom}_{\mathcal{D}}(L,M)$ so that $i_0\circ \text{Hom}_{\mathcal{D}}(K,L)$ is $\chi'$-monochromatic. Then since $h_{i_0}$ is a $G(i_0)$-monomorphism, $g\circ h_{i_0}=g_n\circ \cdots\circ g_1\circ h_{i_0}$ witnesses that $\mathtt{Z}\to (\mathtt{Y})^{\mathtt{X}}_r$. \end{proof} \subsection{The theorems of Solecki} We will give the results of \cite{6} and \cite{5} as corollaries of the Partite Construction. Thus we will show that the results in \cite{6} and \cite{5} can be proven in a unified manner. First we give some notation so that we can reformulate the results in \cite{6} and \cite{5} into our context. Let $\mathcal{C},\mathcal{D}$ be categories, $G\colon \mathcal{D}\to \mathcal{C}$ be a functor, and $\mathcal{L}$ be a $\mathcal{C}$-language. Then $(\mathcal{L},\mathcal{D})$ is the category whose objects are the $\mathcal{L}$-structures of the form $G(K)$, $K\in \textnormal{Ob}(\mathcal{D})$, which we denote by $\mathsf{K}$, and if $\mathsf{K},\mathsf{M}\in \textnormal{Ob}(\mathcal{L},\mathcal{D})$ then $$\text{Hom}_{(\mathcal{L},\mathcal{D})}(\mathsf{K},\mathsf{M})=\{G(i)\colon i\in \text{Hom}_{\mathcal{D}}(K,M)\text{ and }G(i)\text{ is a homomorphism}\}.$$ With this notation we state and prove the main result in \cite{5}. \begin{corollary}[Solecki, \cite{5}] Let $G\colon (\textnormal{\textbf{Fin}},\leq)\to \textnormal{\textbf{Fin}}$ be defined by $G(K,\leq)=K$ and $G(f)=f$. For any $\textnormal{\textbf{Fin}}$-language $\mathcal{L}$ the category $(\mathcal{L},(\textnormal{\textbf{Fin}},\leq))$ has the Ramsey property.
\end{corollary} Note that the category $(\mathcal{L},(\textnormal{\textbf{Fin}},\leq))$ is the category whose objects are linearly ordered structures and whose morphisms are increasing injective homomorphisms. Thus Corollary 6 is an expansion of the Ne\v{s}et\v{r}il--R\"{o}dl Theorem. \begin{proof} Let $\mathsf{K},\mathsf{M}\in \textnormal{Ob}(\mathcal{L},(\textbf{Fin},\leq))$ and $r>0$. We view $\mathsf{K}$ and $\mathsf{M}$ as the $(\textbf{Fin},\leq)$-blocks $\mathsf{K}=(\mathsf{K},\text{Id}_{G(K)},K)$ and $\mathsf{M}=(\mathsf{M},\text{Id}_{G(M)},M)$ respectively. Next we show that $\textbf{Fin}$, $(\textbf{Fin},\leq)$, and $G$ satisfy conditions (1)-(4) for the Partite Construction. The only property that is not clear is that the category \textbf{Fin} has colimits over diagrams where the index category $J$ has a finite set of objects. This is a standard result in category theory. The colimit over a diagram $F$ is $$Z=\bigsqcup_{S\in \text{Ob}(J)}F(S)/_{\sim'}$$ where $\sim'$ is the equivalence relation generated by $$(s,F(S))\sim(t, F(T))\Leftrightarrow \text{ there is an } f\in \text{Hom}_{J}(S,T) \text{ with } F(f)(s)=t,$$ and $\phi_S\colon F(S)\to Z$ is given by $\phi_S(s)=[s]$.\\ By the Partite Construction there is a $\mathtt{Z}=(\mathsf{Z},\pi,L)$ so that $\mathtt{Z}\to (\mathsf{M})^{\mathsf{K}}_r$ in $Bl^{m}_{(\textbf{Fin},\leq)}$. We order $Z$ so that $\pi$ becomes a weakly increasing map; then $\mathsf{Z}\to (\mathsf{M})^{\mathsf{K}}_r$ in $(\mathcal{L},(\textbf{Fin},\leq))$. \end{proof} For the main result in \cite{6} we need to define another category. $(\textnormal{\textbf{Fin}},\leq^{*})$ is the category where $\textnormal{Ob}((\textnormal{\textbf{Fin}},\leq^{*}))$ are finite linear orders and for all $L,K\in \textnormal{Ob}((\textnormal{\textbf{Fin}},\leq^{*}))$, $$\textnormal{Hom}_{(\textnormal{\textbf{Fin}},\leq^{*})}(L,K)=\{f\in K^L\colon \ f\text{ is a rigid surjection}\},$$ where a \textbf{rigid surjection} is a surjective map $s\colon L\to K$ between linear orders such that the image of every initial segment of $L$ is an initial segment of $K$. This definition allows us to state the main result in \cite{6}. \begin{corollary}[Solecki, \cite{6}] Let $G\colon (\textnormal{\textbf{Fin}},\leq^*)^{\textnormal{op}}\to \textnormal{\textbf{Fin}}^{\textnormal{op}}$ be defined by $G(K,\leq)=K$ and $G(f)=f$. For any $\textnormal{\textbf{Fin}}^{\textnormal{op}}$-language $\mathcal{L}$ the category $(\mathcal{L},(\textnormal{\textbf{Fin}},\leq^*)^{\textnormal{op}})$ has the Ramsey property. \end{corollary} \begin{proof} Let $\mathsf{K},\mathsf{M}\in \textnormal{Ob}(\mathcal{L},(\textbf{Fin},\leq^*)^{\text{op}})$ and $r>0$. We view $\mathsf{K}$ and $\mathsf{M}$ as the $(\textbf{Fin},\leq^*)^{\text{op}}$-blocks $\mathsf{K}=(\mathsf{K},\text{Id}_{G(K)},K)$ and $\mathsf{M}=(\mathsf{M},\text{Id}_{G(M)},M)$ respectively. We claim that $\textbf{Fin}^{\text{op}}$, $(\textbf{Fin},\leq^*)^{\text{op}}$, and $G$ satisfy conditions (1)-(4) for the Partite Construction. First note that $\textbf{Fin}^{\text{op}}$ has colimits when the index category $J$ has a finite set of objects: a colimit in $\textbf{Fin}^{\text{op}}$ is a limit in $\textbf{Fin}$, and the limit over a diagram $F$ is $$Z=\{(s_S)_{S\in \text{Ob}(J)}\in \prod_{S\in \text{Ob}(J)}F(S)\colon \text{ for all } S,T\in \text{Ob}(J) \text{ and } f\in \text{Hom}_J(S,T),\ F(f)(s_S)=s_T\}$$ with $\phi_S\colon Z\to F(S)$ given by the projection maps $\pi_S$. Also the category $(\textbf{Fin},\leq^*)^\text{op}$ has the Ramsey property by the Graham--Rothschild Theorem in \cite{8}. The remaining conditions are trivial to prove.
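Both (co)limit constructions used in the two proofs above are finite and can be computed directly. The following Python sketch is illustrative only (the encodings of diagrams as dictionaries are ours): it implements the disjoint-union quotient from the proof of Corollary 6 and the compatible-tuple limit used in the present proof.
\begin{verbatim}
from itertools import product

def fin_colimit(objs, mors):
    """Colimit in Fin of a finite diagram.  objs maps an object name
    S to the finite set F(S) (as a list); mors is a list of triples
    (S, T, f) with f a dict implementing F(f): F(S) -> F(T).  Returns
    the quotient of the tagged disjoint union by the equivalence
    relation generated by (s, S) ~ (F(f)(s), T)."""
    parent = {(S, s): (S, s) for S in objs for s in objs[S]}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for S, T, f in mors:
        for s in objs[S]:
            a, b = find((S, s)), find((T, f[s]))
            if a != b:
                parent[a] = b
    classes = {}
    for x in parent:
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())   # elements of Z as classes [s]

def fin_limit(objs, mors):
    """Limit in Fin (= colimit in Fin^op) of a finite diagram: the
    compatible tuples (s_S) with F(f)(s_S) = s_T for all f: S -> T."""
    names = sorted(objs)
    out = []
    for tup in product(*(objs[S] for S in names)):
        point = dict(zip(names, tup))
        if all(f[point[S]] == point[T] for S, T, f in mors):
            out.append(point)
    return out

# Example: coequalizer of two maps {0,1} => {0,1,2} in Fin; here all
# five tagged points collapse to a single class.
objs = {"S": [0, 1], "T": [0, 1, 2]}
mors = [("S", "T", {0: 0, 1: 1}), ("S", "T", {0: 1, 1: 2})]
print(fin_colimit(objs, mors))
\end{verbatim}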
By the Partite Construction there is a $\mathtt{Z}=(\mathsf{Z},\rho,L)$ so that $\mathtt{Z}\to (\mathsf{M})^{\mathsf{K}}_r$ in $Bl^{m}_{(\textbf{Fin},\leq^*)^{\text{op}}}$. We linearly order $\rho(L)$ by $a\leq b$ if and only if $\min(\rho^{-1}(a))\leq \min(\rho^{-1}(b))$ and order the rest of $Z$ so that $\rho(L)$ is an initial segment. Then $\mathsf{Z}\to (\mathsf{M})^{\mathsf{K}}_r$ in \\ $(\mathcal{L},(\textbf{Fin},\leq^*)^{\text{op}})$. \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{sec:1} Consider the following kinetic equation for the one-particle distribution function $f(x,v,t)$, \begin{equation} \label{BGK} (\partial_t f+ v\cdot \nabla_x f) (x,v,t) = \lambda(\varrho) \big(\varrho(x,t) M_f (x,v,t)-f(x,v,t)\big)\,, \end{equation} where \[ M_f(x,v,t) = \frac 1 {(2 \pi T(x,t))^{3/2} } \exp\left(-\frac {|v-u(x,t)|^2}{2T(x,t)}\right)\,, \] \[ \begin{split} & \varrho (x,t) = \int\! \mathrm{d} v\, f(x,v,t)\,, \quad \varrho u (x,t) = \int\! \mathrm{d} v\, f(x,v,t) v\,, \\ & \varrho(|u|^2 +3T)(x,t) = \int\! \mathrm{d} v\, f(x,v,t) |v|^2 \,, \end{split} \] and $\lambda$ is a suitable positive function of the spatial density. Here, $(x,v)$ denotes position and velocity of the particle, respectively, and $t>0$ is the time. The evolution equation \eqref{BGK} describes the behavior of a particle moving freely. In addition, the particle thermalizes instantaneously at a random time of intensity $\lambda >0$. The Maxwellian $M_f$ has mean velocity and temperature given by $f$ itself. Such a model was introduced by P.L.~Bhatnagar, E.P.~Gross, and M.~Krook in \cite{BGK} to deal with situations when the mean free path of the particle system is very small, but the hydrodynamical regime is not yet appropriate. In the original paper \cite{BGK} the authors consider $\lambda(\varrho)=\varrho$. This choice, even if natural, presents serious problems from the mathematical side. Indeed, up to now there exists a constructive existence and uniqueness result only when $\lambda=1$ \cite{PP}, which is the case treated here, or when $\lambda$ is a smooth bounded function, a case to which the analysis of \cite{PP} can be easily extended. Clearly, the BGK model must preserve mass, momentum, energy, and satisfy the $H$-Theorem. It also exhibits the usual hydrodynamic behavior as $\lambda \to \infty$. The interest of the model is related to the fact that the instantaneous thermalization described by \eqref{BGK} is much easier to compute than the huge number of collisions which would produce the same effect in the end. We now present a heuristic derivation of the BGK equation, which is indeed not an arbitrary toy model but is based on reasonable physical arguments. The starting point is the usual Boltzmann equation, \begin{equation} \label{B1} (\partial_t +v \cdot \nabla_x ) f =\frac 1{\varepsilon} Q(f,f)\,, \end{equation} where $Q$ is the collision operator, which we do not make explicit here, and $\varepsilon$ is a very small scale parameter. For fixed $t>0$, we represent the solution $f(t)$ to \eqref{B1} with initial datum $f_0$ in terms of the Trotter product formula: letting $n=\lfloor t/\tau \rfloor$ with $\tau >0$ very small, \begin{equation} \label{T} f(t) \approx f(n\tau) \approx (S_0 (\tau) S_h(\tau))^nf_0\,, \end{equation} where $S_0(t)f(x,v)=f(x-vt,v)$ is the free stream operator and $S_h(t) f_0$ is the solution to the homogeneous Boltzmann equation \[ \varepsilon \partial_t f = Q(f,f) \] with initial datum $f_0$. By virtue of the well-known properties of the homogeneous Boltzmann equation, \[ \lim_{\varepsilon \to 0} S_h(\tau) f = \varrho M_f\,, \] so that we can replace in Eq.~\eqref{T} the term $S_h(\tau)$ by the transition probability $P$ such that $f \to \varrho M_f$ with probability $\tau <1$ and $f \to f$ with probability $1-\tau$.
Thus \[ f(n\tau) \approx (S_0(\tau) P)^nf_0\,, \] i.e., \begin{align} \label{TD} f(n\tau) & \approx S_0(\tau) (S_0(\tau) P)^{n-1}f_0 + S_0(\tau) (P-1) (S_0(\tau) P)^{n-1}f_0 \nonumber \\ & = \cdots\cdots\cdots \nonumber \\ & = S_0(n\tau)f_0 + \sum_{k=1}^n S_0(k\tau) (P-1) (S_0(\tau) P)^{n-k}f_0 \nonumber \\ & \approx S_0(n\tau)f_0 + \sum_{k=1}^n S_0(k\tau) (P-1) f[ (n-k) \tau]\,. \end{align} But \[ \frac 1\tau (P-1)f = \varrho M_f- f\,, \] thus in the limit $\tau\to 0$ Eq.~\eqref{TD} reads \[ f(t) = S_0(t)f_0 + \int_0^t \! \mathrm{d} s\, S_0(s) ( \varrho M_f-f) (t-s)\,, \] or, equivalently, \eqref{BGK} with $\lambda=1$. We assume $\lambda=1$; however, when the density is high, the transition probability of the event $f \to \varrho M_f$ should increase, so that by the above heuristic argument we can also recover the BGK equation with $\lambda=\varrho$, as originally proposed in \cite{BGK}. The heuristic argument leading to the BGK picture starts from the Boltzmann equation and applies in situations close to the hydrodynamical regime, but still with a finite mean free path. Thus, this equation does not seem to be a consequence of a scaling limit for which one obtains either the Boltzmann equation in the low density (or Boltzmann--Grad) limit, or the hydrodynamical equations in a mere space-time scaling. In the present paper we introduce a stochastic system of $N$ interacting particles on a $d$-dimensional torus ($d=2,3$), and prove the convergence of the one-particle law to the corresponding solution to the BGK equation in the limit $N \to \infty$. In doing this, we follow a previous similar approach developed in \cite{BHP}. The improvements we present here are related to the following aspect. In the particle model we introduce a function $x \to \varphi(x)$ ($x$ is the space variable) which we use to compute the local empirical hydrodynamical fields, \[ \begin{split} & \varrho (x) = \frac 1N \sum_{j=1}^N \varphi (x-x_j)\,, \quad \varrho u (x) = \frac 1N \sum_{j=1}^N \varphi (x-x_j)v_j\,, \\ & \varrho(|u|^2 +3T)(x) = \frac 1N \sum_{j=1}^N \varphi (x-x_j)v_j^2\,, \end{split} \] where $(x_1 \cdots x_N; v_1 \cdots v_N)$ denotes the positions and velocities of the particle configuration. In \cite{BHP} the function $\varphi$ was assumed to be strictly positive to avoid divergences due to the possibility of a local vacuum. This is physically not very reasonable because long-range effects should not play any role in the jump mechanism of a tagged particle. Here, we remove such a hypothesis, allowing $\varphi$ to be compactly supported. In other words, $\varphi$ can be thought of as a smoothed version of the characteristic function of a small ball. We remark that the analysis of the present paper is, in a sense, equivalent to the introduction of the stochastic particle system which is the inhomogeneous version of the well-known Kac model \cite{PWZ,CPW}, and it is the conceptual basis of the usual Monte Carlo Direct Simulation Method (Bird's scheme), see, e.g., \cite{CIP}, to approximate the solutions of the usual Boltzmann equation. Here, we work in a canonical context, i.e., the number of particles $N$ is fixed. In \cite{MW} the authors derive a linear version of the homogeneous BGK equation, starting from a suitable two species particle system in the microcanonical setting. Namely, the energy of the system they consider is also fixed. This is more related to the spirit of the original Kac model. The plan of the paper is the following.
We first fix the cutoff function $\varphi$ and a regularized version of the BGK equation (see Eq.~\eqref{eq:kin} below) in which the hydrodynamical fields are smeared. After fixing notation and establishing the main result in Section \ref{sec:2}, we introduce in Section \ref{sec:3} a coupling between the particle system and $N$ independent copies of a one-particle stochastic process associated to the regularized BGK equation, which is the basic tool for the proof of convergence. Sections \ref{sec:4} and \ref{sec:5} contain the preliminary lemmas and the key result on the proximity of the particle system to the regularized equation. In Section \ref{sec:6} we remove the cutoff as in \cite{BHP}. Here, however, we choose a different method, working not on the equations but on the processes, again with a coupling technique. As a matter of fact, the convergence of the laws is obtained in the 2-Wasserstein distance, hence it is weaker than the result in \cite{BHP}, which holds in a weighted $L^1$ space; on the other hand, here we also prove the convergence of the processes, which is closer to the spirit of the present analysis. The final result follows by a diagonal limit. \section{Notation and results} \label{sec:2} For $d=2,3$ we let \begin{itemize} \item $\bb T^d = \big(\bb R/(\frac 12 + \bb Z)\big)^d$ be the $d$-dimensional torus of side length one. \item $\varphi\in C^\infty(\bb T^d;\bb R_+)$ be a smearing function such that\footnote{The assumption $\varphi_0\ge 1$ is unnecessary but it makes some estimates cleaner. On the other hand, it is not restrictive as we are interested in the case when $\varphi$ converges to the $\delta$-function.} \begin{equation} \label{eq:varphi} \begin{split} & \varphi_0 := \varphi(0) = \max\varphi \ge 1 \,, \quad \varphi(x) = 0 \text{ for } |x| \ge \frac 12\,, \\ & \varphi(x) = \varphi(-x)\,, \quad \int\!\mathrm{d} y\, \varphi(y) = 1\,, \end{split} \end{equation} where for $x\in \bb T^d$ we denote by $|x|$ its distance from $0$ (on the torus). \item $M_{u,T}=M_{u,T}(v)$, $v\in\bb R^d$, be the normalized Maxwellian density of mean velocity $u\in \bb R^d$ and temperature $T$, i.e., \begin{equation} \label{max} M_{u,T}(v) = \frac1{(2\pi T)^{d/2}} \exp\left(-\frac{|v-u|^2}{2T}\right). \end{equation} \end{itemize} Note that \[ u = \int\!\mathrm{d} v\, M_{u,T}(v)\, v \,, \qquad T = \frac 1d \int\!\mathrm{d} v\, M_{u,T}(v)\, |v-u|^2\,. \] \subsection{The BGK equation} \label{sec:2.1} We denote by $f = f(t) = f(x,v,t)$, where $(x,v)\in \bb T^d \times \bb R^d$ and $t\in \bb R_+$ is the time, the solution to the BGK equation, \begin{equation} \label{eq:bgk} \partial_t f + v\cdot \nabla_x f = \varrho_f M_f - f\,, \end{equation} where $\varrho_f= \varrho_f(x,t)$ is the local density defined by \begin{equation} \label{eq:rho} \varrho_f(x,t) = \int\!\mathrm{d} v\, f(x,v,t)\,, \end{equation} while $M_f = M_f(x,v,t)$ is the (local) Maxwellian given by \begin{equation} \label{eq:maxf} M_f(x,v,t) = M_{u_f(x,t),T_f(x,t)}(v)\,, \end{equation} with $u_f=u_f(x,t)$ and $T_f=T_f(x,t)$ the local velocity and temperature, \begin{align} \label{eq:uf} \varrho_f(x,t) u_f(x,t) & = \int\! \mathrm{d} v\, f(x,v,t)\, v\,, \\ \label{eq:tf} \varrho_f(x,t) T_f(x,t) & = \frac 1d \int\! \mathrm{d} v\, f(x,v,t)\, |v-u_f(x,t)|^2\,. \end{align} Well-posedness of the BGK equation together with $L^\infty$ estimates for the hydrodynamical fields can be found in \cite{PP}.
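Before turning to the assumptions on the initial datum, we note that the moment formulas \eqref{eq:rho}, \eqref{eq:uf}, and \eqref{eq:tf} defining the hydrodynamical fields are straightforward to evaluate numerically. The following Python sketch is purely illustrative and is not used in the sequel; for simplicity it takes $d=1$ (while the paper works with $d=2,3$) and a gridded distribution of our own choosing.
\begin{verbatim}
import numpy as np

# Illustrative only: compute the fields (rho, u, T) of a distribution
# f(x, v) sampled on a grid, following the moment formulas above, in a
# simplified one-dimensional setting (d = 1).

nx, nv, vmax = 64, 200, 8.0
x = np.linspace(-0.5, 0.5, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dv = v[1] - v[0]

# Example distribution: a Maxwellian with x-dependent mean velocity.
u_true = 0.3 * np.sin(2 * np.pi * x)
f = np.exp(-(v[None, :] - u_true[:, None]) ** 2 / 2) / np.sqrt(2 * np.pi)

rho = f.sum(axis=1) * dv                         # local density
u = (f * v[None, :]).sum(axis=1) * dv / rho      # local velocity
T = (f * (v[None, :] - u[:, None]) ** 2).sum(axis=1) * dv / rho

print(np.max(np.abs(rho - 1.0)), np.max(np.abs(u - u_true)))  # ~ 0
print(np.max(np.abs(T - 1.0)))   # T ~ 1 up to velocity truncation
\end{verbatim}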
In particular, we consider as initial condition a probability density $f_0$ on $\bb T^d\times \bb R^d$ for which there are a function $a\in C(\bb R^d)$ and positive constants $C_1,\alpha>0$ such that \begin{equation} \label{eq:f0} \begin{split} & a(v) \le f_0(x,v) \le C_1\mathrm{e}^{-\alpha|v|^2} \quad \forall\, (x,v)\in \bb T^d\times \bb R^d\,, \\ & a \ge 0\,, \quad C_2 := \int\!\mathrm{d} v \, a(v) >0\,. \end{split} \end{equation} From \cite[Theorem 3.1]{PP} we then obtain the following proposition. \begin{proposition} \label{prop:bgk} There exists a mild solution $f=f(t)=f(x,v,t)$ to Eq.~\eqref{eq:bgk} with initial condition $f(x,v,0) = f_0(x,v)$ satisfying Eq.~\eqref{eq:f0}.\footnote{This means that $f$ solves the integral equation, \[ f(x,v,t) = f_0(x-vt,v) + \int_0^t\!\mathrm{d} s\,(\varrho_f M_f-f) (x-v(t-s),v,s)\,, \] which formally derives from Eq.~\eqref{eq:bgk} via the Duhamel formula.} Moreover, there are a non-decreasing finite function $t\mapsto K_{q,t} = K_{q,t}(f_0)$, $q\in\bb N$, and a non-increasing positive function $t\mapsto A_t = A_t(f_0)$ such that, for any $(x,t)\in \bb T^d\times \bb R_+$, \begin{align} \label{eq:utf} & |u_f(x,t)| + T_f(x,t) + \mc N_q(f(t)) \le K_{q,t}\,, \\ \label {eq:rT>} & \varrho_f(x,t) \ge C_2 \mathrm{e}^{-t}\,, \quad T_f (x,t) \ge A_t \,, \end{align} where \begin{equation} \label{eq:Nn} \mc N_q(f) := \sup_{(x,v)\in \bb T^d\times \bb R^d} f(x,v) (1+ |v|^q)\,. \end{equation} Finally, the above solution is unique in the class of functions $f=f(t)=f(x,v,t)$ such that, for some $q>d+2$, $\sup_{t\le \tau}\mc N_q(f(t)) < +\infty$ for any $\tau>0$. \end{proposition} \subsection{The stochastic particle system} \label{sec:2.2} We consider a system of $N$ particles confined in the torus $\bb T^d$. We denote by $Z_N=(X_N,V_N)$ the state of the system, where $X_N\in(\bb T^d)^N $ and $V_N\in (\bb R^d)^N $ are the positions and velocities of the particles, respectively. The particles move randomly, governed by the stochastic dynamics defined below. Setting $X_N=(x_1, \dots, x_N)$ and $V_N=(v_1, \dots, v_N)$, we introduce the smeared empirical hydrodynamical fields $\varrho_N^\varphi$, $u_N^\varphi$, and $T_N^\varphi$ (depending on $Z_N$) defined by \begin{equation} \label{empf} \begin{split} & \varrho_N^\varphi(x) = \frac 1N \sum_{j=1}^N \varphi (x-x_j)\,, \quad \varrho_N^\varphi u_N^\varphi(x) = \frac 1N \sum_{j=1}^N \varphi (x-x_j) v_j\,, \\ & \varrho_N^\varphi T_N^\varphi(x) = \frac 1{Nd} \sum_{j=1}^N \varphi (x-x_j) |v_j-u_N^\varphi(x)|^2 \,, \end{split} \end{equation} with $\varphi$ as in Eq.~\eqref{eq:varphi}. The particle system evolves according to the Markovian stochastic dynamics whose generator $\mc L_N$ is defined as \begin{align} \label{gen} \mc L_N{G}(Z_N) & = [(V_N\cdot\nabla_{X_N})G] (Z_N) + \sum _{i=1}^N \int\! \mathrm{d} \tilde x_i \, \varphi(\tilde x_i - x_i) \nonumber \\ & \quad \times \left[ \int\! \mathrm{d} \tilde v_i\, M_{Z_N}^\varphi (\tilde x_i,\tilde v_i) G(Z_N^{i,(\tilde x_i,\tilde v_i)}) - G(Z_N) \right]\,. \end{align} Above (for given $(y,w) \in \bb T^d\times \bb R^d$), $Z_N^{i,(y,w)}=(X_N^{i,y},V_N^{i,w})$ is the state obtained from $Z_N=(X_N,V_N)$ by replacing the position $x_i$ and velocity $v_i$ of the $i$-th particle by $y$ and $w$ respectively; ${G}$ is a test function on the state space, and $M_{Z_N}^\varphi(x,v)$ is the Maxwellian associated to the empirical fields, i.e., \[ M_{Z_N}^\varphi(x,v) = M_{u_N^\varphi(x),T_N^\varphi(x)}(v)\,.
\] In other words, the evolution $Z_N(t) = (X_N(t),V_N(t))$ is the Markov process in which, at a random exponential time of intensity one, the $i$-th particle performs a jump from its state $(x_i,v_i)$ to a new one $(\tilde x_i,\tilde v_i)$, extracted according to the distribution $\varphi(\cdot-x_i)$ for the position and then to the empirical Maxwellian $M_{Z_N}^\varphi (\tilde x_i,\cdot)$ for the velocity. The stochastic evolution is well posed since $\varrho_N^\varphi(\tilde x_i) \ge N^{-1} \varphi(\tilde x_i - x_i)$, so that the Maxwellian $M_{Z_N}^\varphi (\tilde x_i,\tilde v_i)$ is well defined when $\varphi(\tilde x_i - x_i)>0$, and the integration in the right-hand side of Eq.~\eqref{gen} makes sense. On the other hand, the smeared hydrodynamical temperature $T_N^\varphi(\tilde x_i)$ may vanish, in which case we replace the Maxwellian $M_{Z_N}^\varphi (\tilde x_i,\tilde v_i)$ by a Dirac mass at $u_N^\varphi(\tilde x_i)$. In particular, this happens in the special case $\varrho_N^\varphi(\tilde x_i) = N^{-1} \varphi(\tilde x_i - x_i)$, which implies $u_N^\varphi(\tilde x_i)=v_i$, so that $M_{Z_N}^\varphi (\tilde x_i,\tilde v_i) = \delta(\tilde v_i - v_i)$ (the velocity does not jump). In the sequel, we will denote by $F_N(t) = F_N(Z_N,t)$ the density of the law of $Z_N(t)$ (but we will often refer to it simply as the law of the process). \subsection{The regularized BGK equation} \label{sec:2.3} The kinetic limit of the particle system introduced in Section \ref{sec:2.2} will be shown to be governed by the following regularized version of Eq.~\eqref{eq:bgk}, \begin{equation} \label{eq:kin} \partial_t g + v\cdot \nabla_x g = \varrho_g^\varphi M_g^\varphi - g\,, \end{equation} for the unknown distribution function $g = g(t) =g(x,v,t)$, where $M_g^\varphi$ is the Maxwellian \begin{equation} \label{eq:maxg} M_g^\varphi(x,v,t) = M_{u_g^\varphi(x,t),T_g^\varphi(x,t)}(v)\,, \end{equation} and the fields $\varrho_g^\varphi = \varrho_g^\varphi(x,t) $, $u_g^\varphi=u_g^\varphi(x,t)$, and $T_g^\varphi=T_g^\varphi(x,t)$ are given by \begin{align} \label{eq:rphi} \varrho_g^\varphi(x,t) & = (\varphi*\varrho_g) (x,t) = \int\!\mathrm{d} y\, \varphi(x-y)\varrho_g(y,t), \\ \label{eq:uphi} \varrho_g^\varphi(x,t) u_g^\varphi (x,t) & = \int\! \mathrm{d} y\, \mathrm{d} v\, \varphi(x-y) g(y,v,t)\, v\,, \\ \label{eq:tphi} \varrho_g^\varphi(x,t) T_g^\varphi (x,t) & = \frac 1d \int\! \mathrm{d} y\, \mathrm{d} v\, \varphi(x-y) g(y,v,t)\, |v-u_g^\varphi(x,t)|^2\,, \end{align} with \begin{equation} \label{eq:rhog} \varrho_g(x,t) = \int\!\mathrm{d} v\, g(x,v,t)\,. \end{equation} The content of Proposition \ref{prop:bgk} extends to the regularized BGK equation; in particular, the $L^\infty$ estimates do not depend on the smearing function $\varphi$. This is the content of \cite[Proposition 2.2]{BHP}, which we report below for the convenience of the reader, noticing that it applies also in the present context since the proof does not depend on the assumption (made in \cite{BHP}) that $\varphi$ is strictly positive. \begin{proposition} \label{prop:stim_uT} Let $g=g(t)=g(x,v,t)$ be the solution to Eq.~\eqref{eq:kin} with initial condition $g(x,v,0) = f_0(x,v)$, $f_0$ as in Proposition \ref{prop:bgk}, i.e., satisfying Eq.~\eqref{eq:f0}.
Then, similar estimates hold for the corresponding hydrodynamical fields, namely, \begin{align} \label{eq:utK} & |u_g^\varphi(x,t)| + T_g^\varphi(x,t) + \mc N_q(g(t)) \le K_{q,t}\,, \\ \label{stimrho} & \varrho_g(x,t) \ge C_2 \mathrm{e}^{-t}\,, \quad \varrho_g^\varphi(x,t) \ge C_2\mathrm{e}^{-t}\,, \\ \label{eq:TA} & T_g^\varphi (x,t) \ge A_t\,, \end{align} (with $t\mapsto K_{q,t} = K_{q,t}(f_0)$, $q\in\bb N$ non-decreasing and $t\mapsto A_t = A_t(f_0)$ non-increasing, both positive and independent of $\varphi$). \end{proposition} \subsection{Kinetic limit} \label{sec:2.4} We can now state the key result of the paper, concerning the kinetic limit of the stochastic particle system. \begin{theorem} \label{teo:main} Suppose that the law of $Z_N(0)$ has density $F_N(0) = f_0^{\otimes N}$, where $f_0$ satisfies the assumptions detailed in Eq.~\eqref{eq:f0}, and let $g=g(t)=g(x,v,t)$ be the solution to Eq.~\eqref{eq:kin} with initial condition $g(0)=f_0$. Let $f_j^N(t)$, $j\in\{1,\ldots, N\}$, be the $j$-particle marginal distribution function of the (symmetric) law $F_N(t)$, i.e., \[ f_j^N(x_1,\ldots,x_j,v_1,\ldots,v_j,t) = \int\! \mathrm{d} x_{j+1} \cdots \mathrm{d} x_N\, \mathrm{d} v_{j+1} \cdots \mathrm{d} v_N\, F_N(X_N,V_N,t)\,. \] Then, the 2-Wasserstein distance $\mc W_2\big(f_j^N(t), g(t)^{\otimes j}\big)$ vanishes as $N\to +\infty$ for any $j\in \bb N$ and $t \ge 0$. More precisely, for each $T>0$ there exists $L_T = L_T(f_0)$ such that, for any $j\in\{1,\ldots, N\}$, \begin{equation} \label{w2s} \mc W_2\big(f_j^N(t),g(t)^{\otimes j}\big)^2 \le \frac{jL_T}{N^{1/4} } \exp (L_T\Gamma_\varphi) \quad \forall\, t\in [0,T] \quad \forall\, N>N_\varphi\,, \end{equation} where $\Gamma_\varphi$ and $N_\varphi$ are explicitly computable positive numbers depending solely on the smearing function $\varphi$ (see Eq.~\eqref{N0Gammaphi} below). In particular, the one-particle marginal distribution function $f_1^N(t)$ converges weakly to $g(t)$ as $N\to +\infty$ for any $t \ge 0$. \end{theorem} \begin{remark} \label{rem:wass} We recall that if $\mu$ and $\nu$ are two probability measures on a metric space $(M,d)$ with finite second moment, the 2-Wasserstein distance between $\mu$ and $\nu$ is defined as \[ \mc W_2(\mu ,\nu) = \left(\inf_{\gamma\in \mc P(\mu,\nu)} \int_{M\times M}\!\mathrm{d} \gamma(x,x')\, d(x,x')^2\right)^{1/2}, \] where $\mc P(\mu,\nu)$ denotes the collection of all probability measures on $M\times M$ with marginals $\mu$ and $\nu$. In Theorem \ref{teo:main}, $M=(\bb T^d)^j\times (\bb R^d)^j$ and $\mc W_2\big(f_j^N(t), g(t)^{\otimes j}\big)$ denotes the 2-Wasserstein distance between the probability measures with densities $f_j^N(t)$ and $g(t)^{\otimes j}$, respectively. \end{remark} The proof of Theorem \ref{teo:main} will be presented in Section \ref{sec:5} after some preliminaries in Sections \ref{sec:3} and \ref{sec:4}. The convergence of the particle system to the true BGK equation Eq.~\eqref{eq:bgk} is now obtained through a rescaling of the smearing function $\varphi$, by setting \begin{equation} \label{scaphi} \varphi (x) = \varphi^{(\varepsilon)} (x) := \frac{1}{\varepsilon^d}\bar \varphi\bigg(\frac{x}{\varepsilon}\bigg)\,, \end{equation} where $\bar \varphi$ is fixed (it varies on the scale of order one) and satisfies Eq.~\eqref{eq:varphi}. Clearly, in this case $\varphi_0 \approx \varepsilon^{-d}$ and $\|\nabla \varphi\|_\infty \approx \varepsilon^{-d-1}$. We have the following result.
\begin{theorem} \label{teo:lim} Let $\varphi=\varphi^{(\varepsilon)}$ be as in Eq.~\eqref{scaphi}, suppose $f_0$ satisfies Eq.~\eqref{eq:f0} and, in addition, that for some $q>d+2$, \begin{equation} \label{grad} \mc N_q(|\nabla_x f_0|) < + \infty\,. \end{equation} Then, for each $T>0$ there exists $C_T = C_T(f_0)$ such that \begin{equation} \label{conveps} \mc W_2(f(t),g(t))^2 \le C_T\, \varepsilon \quad \forall\, t\in [0,T]\,, \end{equation} where $f(t)$ and $g(t)$ are the solutions to Eq.~\eqref{eq:bgk} and Eq.~\eqref{eq:kin}, respectively, with the same initial condition $f_0$. \end{theorem} We are now in a position to formulate the main result. \begin{theorem} \label{teo:main1} Under the hypotheses of Theorem \ref{teo:main} and Theorem \ref{teo:lim}, suppose that $\varepsilon$ vanishes slowly enough as $N$ diverges (for instance $\varepsilon=(\log N)^{-\mu}$ with $\mu$ sufficiently small). Then, for each $T>0$ and all positive integers $j$, \begin{equation} \label{conv} \lim_{N\to \infty} \mc W_2 ( f^N_j(t), f(t) ^{\otimes j} ) = 0 \quad \forall\, t\in [0,T]\,. \end{equation} \end{theorem} Some comments are in order. Theorem \ref{teo:main1} is actually a corollary of Theorem \ref{teo:main} and Theorem \ref{teo:lim} via the triangle inequality. The short proof will be presented at the end of Section \ref{sec:6}. The convergence expressed in Theorem \ref{teo:main1} is very slow. In particular, choosing $\varepsilon=(\log N)^{-\mu}$ with $\mu$ sufficiently small we obtain \[ \mc W_2(f^N_j(t), f(t) ^{\otimes j})^2 \le \mathrm{const} \; j (\log N)^{-\mu} \,. \] The condition on $\varepsilon$ is due to the fact that (when $\varphi=\varphi^{(\varepsilon)}$) Eq.~\eqref{w2s} holds with $\Gamma_\varphi \approx \varepsilon^{-a}$ and $N_\varphi \approx \varepsilon^{-b}$ for suitable $a,b>1$, so that the condition $N>N_\varphi$ is satisfied and the divergence $\exp\big(C\varepsilon^{-a}\big)$ appearing in the right-hand side is compensated by the term $N^{-1/4}$. We did not try to optimize our estimates further, since an effort in this direction would not significantly improve the result. A similar feature is also present in \cite{BHP}, where the physically reasonable scaling is discussed. \section{Reformulation of the problem} \label{sec:3} Following the strategy developed in \cite{BHP}, we prove Theorem~\ref{teo:main} by showing that the stochastic particle system is close (as $N\to +\infty$) to an auxiliary process, whose asymptotic behavior as $N \to \infty$ is obvious. \subsection{Coupling with an independent process} \label{sec:3.1} The auxiliary process is denoted by $\Sigma_N(t) = (Y_N(t),W_N(t)) \in (\bb T^d)^N\times (\bb R^d)^N$ and it is defined according to the following construction. Let $g=g(t) = g(x,v,t)$ be as in Proposition \ref{prop:stim_uT}. Denote by $(x(t),v(t))\in \bb T^d\times \bb R^d$ the one-particle jump process whose generator is given by \begin{equation} \label{nonlin} \mc L_1^g\psi(x,v) = [(v\cdot \nabla_x )\psi](x,v) + \int\! \mathrm{d} \tilde x \, \varphi(\tilde x - x) \left[\int\! \mathrm{d} \tilde v\, M_g^\varphi(\tilde x,\tilde v) \psi(\tilde x,\tilde v) - \psi(x,v) \right]\,, \end{equation} where $\psi$ is a test function and $M_g^\varphi$ is defined in Eq.~\eqref{eq:maxg}. In particular, if the initial distribution has a density, the same holds at any positive time and the probability density of $(x(t),v(t))$ solves the regularized BGK equation \eqref{eq:kin}.
This kind of process is usually called non-linear since its generator is implicitly defined through the law of the process itself. The process $\Sigma_N(t)$ is then defined by $N$ independent copies of the above process, i.e., as the Markov process on $(\bb T^d)^N\times (\bb R^d)^N$ with generator \begin{align} \label{genf} \mc L_N^g{G}(Z_N) & = [(V_N\cdot\nabla_{X_N} ){G}] (Z_N) + \sum _{i=1}^N \int\! \mathrm{d} \tilde x_i \, \varphi(\tilde x_i - x_i) \nonumber \\ & \quad \times \left[ \int\!\mathrm{d} \tilde v_i\, M_g^\varphi (\tilde x_i,\tilde v_i) G(Z_N^{i,(\tilde x_i,\tilde v_i)}) - G(Z_N) \right] \,. \end{align} Note that the only difference with respect to Eq.~\eqref{gen} is the replacement of $M_{Z_N}^\varphi$ by $M^\varphi_g$. As in \cite{BHP}, the closeness of $Z_N(t)$ and $\Sigma_N(t)$ is proved by introducing a suitable coupled process $Q_N(t) = (Z_N(t),\Sigma_N(t))$. More precisely, the coupled process is the Markov process whose generator $\mc L_Q$ is defined in the following way. We let $Z_N=(X_N,V_N)$, $\Sigma_N=(Y_N,W_N)$, with $X_N=(x_1, \dots, x_N)$, $V_N=(v_1, \dots, v_N)$, $Y_N=(y_1, \dots, y_N)$, and $W_N=(w_1, \dots, w_N)$. Then, for any test function $G=G(Z_N,\Sigma_N)$, \begin{align} \label{genQ} & \mc L_Q G(Z_N,\Sigma_N) = [(V_N\cdot\nabla_{X_N} + W_N\cdot\nabla_{Y_N}) G] (Z_N,\Sigma_N) + \sum_{i=1}^N \int\! \mathrm{d} \tilde x_i\, \mathrm{d} \tilde y_i \, \Phi_{x_i,y_i}(\tilde x_i,\tilde y_i) \nonumber \\ & \qquad \times \left[ \int\! \mathrm{d} \tilde v_i\, \mathrm{d} \tilde w_i \,\mc M^\varphi (\tilde x_i,\tilde v_i;\tilde y_i,\tilde w_i) {G} (Z_N^{i,(\tilde x_i,\tilde v_i)},\Sigma_N^{i,(\tilde y_i,\tilde w_i)}) - G(Z_N,\Sigma_N) \right]\,. \end{align} In Eq.~\eqref{genQ}, for given $\tilde x,\tilde y\in \bb T^d$, $\mc M^\varphi(\tilde x,v;\tilde y,w)$ is the joint representation of the Maxwellians $M_{Z_N}^\varphi(\tilde x,v)$ and $M_g^\varphi(\tilde y,w)$ that realizes the 2-Wasserstein distance between the marginals, whose square is given by (see, e.g., \cite{OP}) \begin{equation} \label{w2m} \mc W_2\big(M_{Z_N}^\varphi(x,\cdot),M_g^\varphi(y,\cdot)\big)^2 = |u_N^\varphi(x)-u_g^\varphi(y)|^2 + d\Big(\sqrt{T_N^\varphi(x)} - \sqrt{T_g^\varphi(y)}\Big)^2\,. \end{equation} In turn, for given $x,y\in \bb T^d$, $\Phi_{x,y}(\tilde x,\tilde y)$ is the joint representation of the probability densities $\varphi_x(\tilde x) = \varphi(\tilde x-x)$ and $\varphi_y(\tilde y) = \varphi(\tilde y-y)$ defined as \begin{equation} \label{jointxy0} \Phi_{x,y}(\tilde x,\tilde y) = \varphi_x(\tilde x) \delta(\tilde x -x - \tilde y +y)\,, \end{equation} where $\delta(z)$ denotes the Dirac measure on $\bb T^d$ centered at $z=0$. In particular, for any integrable function $J$ on $\bb T^d$, \begin{equation} \label{jointxy} \int\! \mathrm{d}\tilde x\, \mathrm{d}\tilde y\, \Phi_{x,y}(\tilde x,\tilde y) J(\tilde x-\tilde y) = J(x-y)\,. \end{equation} In words, the coupling is given by the Markov process in which, at a random exponential time of intensity one, the $i$-th pair of particles jumps from $(x_i,v_i,y_i,w_i)$ to $(\tilde x_i,\tilde v_i, \tilde y_i, \tilde w_i) = (x_i+\xi,\tilde v_i, y_i+\xi, \tilde w_i)$, where $\xi$ is distributed according to $\varphi$, and $(\tilde v_i, \tilde w_i)$ according to the prescribed joint representation of $M^\varphi_{Z_N}(\tilde x_i, \cdot)$ and $M^\varphi_g(\tilde y_i,\cdot)$.
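Although not needed in the sequel, it may help to keep in mind an explicit realization of such a joint representation: for two Maxwellians (isotropic Gaussians) the optimal coupling is given by the monotone affine map, a standard fact in optimal transport (see, e.g., \cite{OP}). A minimal numerical sketch, with purely illustrative parameters:
\begin{verbatim}
# Sketch of the optimal coupling of M_{u1,T1} and M_{u2,T2}:
# draw v ~ M_{u1,T1} and set w = u2 + sqrt(T2/T1) * (v - u1).
# Then E|v-w|^2 = |u1-u2|^2 + d*(sqrt(T1)-sqrt(T2))^2, which is
# exactly the squared 2-Wasserstein distance displayed above.
import numpy as np
rng = np.random.default_rng(0)

d = 3
u1, T1 = np.array([0.5, 0.0, 0.0]), 2.0
u2, T2 = np.array([0.0, 1.0, 0.0]), 0.5

v = u1 + np.sqrt(T1) * rng.standard_normal((10**6, d))
w = u2 + np.sqrt(T2 / T1) * (v - u1)      # monotone (optimal) map

empirical = np.mean(np.sum((v - w)**2, axis=1))
exact = np.sum((u1 - u2)**2) + d * (np.sqrt(T1) - np.sqrt(T2))**2
print(empirical, exact)   # agreement up to Monte Carlo error
\end{verbatim}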
We denote by $\mathrm{d} R_N(t) = \mathrm{d} R_N(Z_N,\Sigma_N,t)$ the law of $Q_N(t)$ and assume that, initially, \[ \mathrm{d} R_N(0) = \delta(X_N-Y_N) \delta(V_N-W_N) f_0^{\otimes N}(X_N,V_N) \,\mathrm{d} Z_N\, \mathrm{d} \Sigma_N\,. \] In particular, recalling the notation introduced in Remark \ref{rem:wass}, \[ \mathrm{d} R_N(t) \in \mc P\big(F_N(t)\mathrm{d} Z_N, g(t)^{\otimes N}\mathrm{d} \Sigma_N\big)\,. \] \subsection{Estimating the distance between the processes} \label{sec:3.2} We adopt the same strategy as in \cite{BHP} and introduce the quantity \[ I_N(t) := \int\! \mathrm{d} R_N(t)\, (|x_1-y_1|^2 + |v_1-w_1 |^2)\,. \] As $\mathrm{d} R_N(t)$ is symmetric with respect to particle permutations, we have \[ I_N(t) = \frac 1j \int\! \mathrm{d} R_N(t)\, \sum_{i=1}^j (|x_i-y_i|^2 + |v_i-w_i |^2) \quad \forall\, j\in\{1,\ldots,N\}\,, \] so that \[ \mc W_2\big(f_j^N(t),g(t)^{\otimes j}\big) \le \sqrt{j I_N(t)} \quad \forall\, j\in\{1,\ldots,N\}\,, \] by the definition of the 2-Wasserstein distance. Therefore, the proof of Theorem \ref{teo:main} reduces to showing that for each $T>0$ there exists $L_T = L_T(f_0)$ such that \begin{equation} \label{I_N} I_N(t) \le \frac{L_T}{N^{1/4} } \exp (L_T\Gamma_\varphi) \quad \forall\, t\in [0,T] \quad \forall\, N>N_\varphi\,, \end{equation} for suitable $\Gamma_\varphi$ and $N_\varphi$. To this end, we compute \begin{align*} \dot I_N(t) & = \int\! \mathrm{d} R_N(t)\, \mc L_Q (|x_1-y_1|^2 + |v_1-w_1 |^2) \\ & = \int\! \mathrm{d} R_N(t)\, (v_1 \cdot \nabla_{x_1} + w_1 \cdot \nabla_{y_1}) |x_1-y_1|^2 \\ & \quad - N \int\! \mathrm{d} R_N(t)\, ( |x_1-y_1|^2 + |v_1-w_1|^2) \\ & \quad + \sum_{i=2}^N \int\! \mathrm{d} R_N(t)\, (|x_1-y_1|^2 + |v_1-w_1|^2) + \int\! \mathrm{d} R_N(t)\, |x_1-y_1|^2 \\ & \quad + \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \int\! \mathrm{d} \tilde v_1\, \mathrm{d} \tilde w_1\, \mc M^\varphi(x_1+\xi, \tilde v_1; y_1+\xi, \tilde w_1) |\tilde v_1- \tilde w_1|^2\,, \end{align*} where the first two terms in the right-hand side arise from the stream part ($V_N\cdot\nabla_{X_N}G + W_N\cdot\nabla_{Y_N}G$) and the loss part ($-NG$) of the generator $\mc L_Q$, respectively. We note that the loss term is partially compensated by the third term, while the stream part is equal to \[ 2 \int\! \mathrm{d} R_N(t)\, (v_1-w_1) \cdot (x_1-y_1) \le \int\! \mathrm{d} R_N(t)\, (|x_1-y_1|^2+|v_1-w_1 |^2)\,, \] where, with an abuse of notation, in the left-hand side we denote by $(x_1-y_1)$ a vector $\eta \in \bb R^d$ in the equivalence class defined by $x_1-y_1\in \bb T^d= \big(\bb R/(\frac 12 + \bb Z)\big)^d$ with $|\eta|=|x_1-y_1|$ and, when not uniquely determined by these conditions, with the minimum value of $(v_1-w_1)\cdot \eta$ (however, this is an event of vanishing measure and will not play any role in the sequel). Finally, the last term is given by Eq.~\eqref{w2m}. Therefore, \begin{equation} \label{I<1} \dot I_N(t) \le I_N(t) + \int\! \mathrm{d} R_N(t)\, D(Z_N,\Sigma_N)\,, \end{equation} with \begin{align} \label{w2} D(Z_N,\Sigma_N) & = \int\! \mathrm{d} \xi\, \varphi(\xi) |u_N^\varphi(x_1+\xi)-u_g^\varphi(y_1+\xi)|^2 \nonumber \\ & \quad + \int\! \mathrm{d} \xi\, \varphi(\xi)\, d\Big(\sqrt{T_N^\varphi(x_1+\xi)} - \sqrt{T_g^\varphi(y_1+\xi)}\Big)^2. \end{align} Our goal is to estimate $\int\! \mathrm{d} R_N(t)\, D(Z_N,\Sigma_N)$ from above by a constant (independent of $N$) multiple of $I_N(t)$, plus a small term of order $1/N^{1/4}$, so that Eq.~\eqref{I_N} follows from Gr\"{o}nwall's inequality.
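For the reader's convenience, we record the elementary form of the Gr\"{o}nwall step just invoked; here $\Lambda,\delta>0$ denote generic constants. Since the initial coupling gives $I_N(0)=0$, a differential inequality $\dot I_N(t) \le \Lambda I_N(t) + \delta$ obtained from Eq.~\eqref{I<1} integrates to
\[
I_N(t) \le \delta \int_0^t\!\mathrm{d} s\, \mathrm{e}^{\Lambda (t-s)} = \frac{\delta}{\Lambda}\big(\mathrm{e}^{\Lambda t}-1\big) \le \delta\, t\, \mathrm{e}^{\Lambda t}\,,
\]
which yields Eq.~\eqref{I_N} with $\Lambda = C\Gamma_\varphi$ and $\delta = C N^{-1/4}$, for a suitable choice of $L_T$.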
As noticed in \cite{BHP}, in estimating $D(Z_N,\Sigma_N)$ it is useful to replace $\varrho_g^\varphi$, $u_g^\varphi$, $T_g^\varphi$ by \begin{equation} \label{empf2} \begin{split} & \tilde \varrho_N^\varphi(x) = \frac 1N \sum_{j=1}^N \varphi (x-y_j)\,, \qquad \tilde \varrho_N^\varphi \tilde u_N^\varphi(x) = \frac 1N \sum_{j=1}^N \varphi (x-y_j) w_j\,, \\ & \tilde \varrho_N^\varphi \tilde T_N^\varphi(x) = \frac 1{Nd} \sum_{j=1}^N \varphi (x-y_j) |w_j - \tilde u_N^\varphi(x)|^2 \,, \end{split} \end{equation} i.e., the empirical fields constructed via the variables $Y_N=(y_1, \dots, y_N) $ and $W_N=(w_1, \dots, w_N)$, distributed independently according to $g(t)^{\otimes N}$. By the law of large numbers, the error in this replacement is expected to be small for large $N$. In the present case, the function $\varphi$ has compact support, so that there are particle configurations for which the smeared empirical densities defined in Eqs.~\eqref{empf} and \eqref{empf2} assume very small values (order $1/N$). This makes it impossible to obtain (as in \cite{BHP}) a pointwise estimate of $D(Z_N,\Sigma_N)$. To overcome this difficulty, we decompose the phase space as the union of a ``good set'' $\mc G$, which will be defined in the next section, and its complement, the ``bad set'' $\mc G^\complement$. Roughly speaking, in the set $\mc G$, $D(Z_N,\Sigma_N)$ can be controlled similarly to what is done in \cite{BHP}, while the contribution to $\dot I_N(t)$ coming from the integration on $\mc G^\complement$ will be treated by suitable probability estimates (actually, the decomposition of $\dot I_N(t)$ is more involved, as explained at the beginning of Section \ref{sec:5}). \smallskip \noindent\textbf{A notation warning.} In what follows, we shall denote by $C$ a generic positive constant whose numerical value may change from line to line and which may depend on the fixed time $T$ and on the initial condition $f_0$. Furthermore, we will use both the notations ${1 \mskip -5mu {\rm I}}_B$ and ${1 \mskip -5mu {\rm I}}(B)$ to denote the characteristic function of the set $B$. We shall also use the shorthand notation $\mathrm{d} g(t)^{\otimes N}$ to denote integration with respect to $ \mathrm{d} \Sigma_N\, g(t)^{\otimes N}$. \section{Preliminary estimates} \label{sec:4} Recalling the assumptions in Eq.~\eqref{eq:varphi} on $\varphi$, we fix $r\in (0,\frac{1}{10})$, with $r^{-1}\in \bb N$ and such that \begin{equation} \label{r0} \varphi(x) > \frac{\varphi_0}2 \quad \forall\, x\in [-5r,5r]^d\,. \end{equation} We denote by $\{\Delta\}$ a partition of $\bb T^d$ into square boxes of side $r$. As a consequence, we have the following lower bound on the empirical densities: \begin{equation} \label{Ny} N \tilde \varrho_N^\varphi(x) \ge \frac{\varphi_0}2 N_\Delta^Y \quad \text{ if } x\in \Delta\,, \end{equation} where $N_\Delta^Y$ is the number of particles of the configuration $Y_N$ contained in $\Delta$. \begin{lemma} \label{lem:ny} Given $T>0$ there is $A>0$ (depending only on $T$ and on the initial condition $f_0$) such that, setting \begin{equation} \label{ba} \mc B_A := \{(Z_N,\Sigma_N) \colon \tilde \varrho_N^\varphi(x) > Ar^d\varphi_0 \;\; \forall\, x\in \bb T^d\}\,, \end{equation} we have \begin{equation} \label{eq:densy} \int\! \mathrm{d} R_N(t) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \le \frac{C}{r^{3d}N} \quad \forall\, t\in [0,T]\,. \end{equation} \end{lemma} \begin{proof} By Eq.~\eqref{Ny}, \[ \begin{split} \int\! \mathrm{d} R_N(t) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} &= \int\!
\mathrm{d} g(t)^{\otimes N}\,{1 \mskip -5mu {\rm I}}\big( \{Y_N\colon \exists\, x \in \bb T^d \text{ s.t. }\tilde \varrho_N^\varphi(x) \le Ar^d\varphi_0\}\big) \\ & \le \int\! \mathrm{d} g(t)^{\otimes N}\, {1 \mskip -5mu {\rm I}}\big(\{Y_N\colon \exists\, \Delta \text{ s.t. } N_\Delta^Y \le 2Ar^dN\}\big) \\ & \le \sum_{\{\Delta\}} \int\! \mathrm{d} g(t)^{\otimes N}\, {1 \mskip -5mu {\rm I}}_{N_\Delta^Y \le 2Ar^dN} \le \frac{1}{r^d} \max_{\Delta} \int\! \mathrm{d} g(t)^{\otimes N}\, {1 \mskip -5mu {\rm I}}_{N_\Delta^Y \le 2Ar^dN}\,. \end{split} \] We observe that $N_\Delta^Y = N \xi_N$ with $\xi_N = N^{-1} \sum_{j=1}^N {1 \mskip -5mu {\rm I}}_{y_j\in \Delta}$ the arithmetic mean of $N$ i.i.d.~random variables whose common expected value is \[ \bb E\xi_N = \bb E {1 \mskip -5mu {\rm I}}_{y_1\in \Delta} = \int_\Delta\!\mathrm{d} y\, \varrho_g(y,t) \ge C_2\mathrm{e}^{-T}r^d \qquad \forall\,t\in [0,T]\,, \] having used Eq.~\eqref{stimrho} in the last inequality. We then choose $A=C_2\mathrm{e}^{-T}/4$, whence \[ {1 \mskip -5mu {\rm I}}_{N_\Delta^Y \le 2Ar^dN} = {1 \mskip -5mu {\rm I}}_{\xi_N \le 2Ar^d} \le {1 \mskip -5mu {\rm I}}_{|\xi_N- \bb E\xi_N| \ge \bb E\xi_N/2} \le {1 \mskip -5mu {\rm I}}_{|\xi_N- \bb E\xi_N| \ge C_2\mathrm{e}^{-T}r^d/4}\,. \] Therefore, by Chebyshev's inequality, \[ \int\! \mathrm{d} \Sigma_N\, g(t)^{\otimes N}\, {1 \mskip -5mu {\rm I}}_{N_\Delta^Y \le 2Ar^dN} \le \frac{16\mathrm{e}^{2T}}{C_2^2 r^{2d}N} \bb E({1 \mskip -5mu {\rm I}}_{y_1\in \Delta}- \bb E{1 \mskip -5mu {\rm I}}_{y_1\in \Delta})^2 \le \frac{C}{r^{2d}N}\,. \] Eq.~\eqref{eq:densy} is thus proved. \end{proof} \begin{lemma}[The good set] \label{lem:nx} Given $A>0$ as in Lemma \ref{lem:ny}, we let \begin{equation} \label{afi} A_\varphi = \frac{Ar^d\varphi_0}{2\|\nabla\varphi\|_\infty} \end{equation} and define \begin{equation} \label{g} \mc G := \mc G_1 \cap \mc B_A \;\text{ with }\; \mc G_1 := \bigg\{(Z_N,\Sigma_N) \colon \frac 1N\sum_{j=1}^N |x_j-y_j| \le A_\varphi \bigg\}\,. \end{equation} Then \begin{equation} \label{stimdens} \varrho_N^\varphi(x) > \frac{Ar^d\varphi_0}2\,, \quad \tilde \varrho_N^\varphi(x) > Ar^d\varphi_0 \quad \forall\, x\in \bb T^d \quad \text{ in the set }\mc G. \end{equation} \end{lemma} \begin{proof} The lower bound on $\tilde \varrho_N^\varphi(x)$ follows trivially from the definition of $\mc B_A$. Concerning the other bound, in the set $\mc G$ we have \[ \begin{split} \varrho_N^\varphi(x) & \ge \tilde \varrho_N^\varphi(x) - |\varrho_N^\varphi(x) - \tilde \varrho_N^\varphi(x)| \ge Ar^d\varphi_0 - \frac{\|\nabla\varphi\|_\infty}N \sum_{j=1}^N |x_j-y_j|\\ & \ge Ar^d\varphi_0 - \|\nabla\varphi\|_\infty A_\varphi = \frac{Ar^d\varphi_0}2\,, \end{split} \] and the lemma is proved. \end{proof} \begin{lemma} \label{lem:stimpij} Define \begin{align} \label{pij} p_{i,j} & = p_{i,j}(\xi) := \frac{\varphi(x_i+\xi-x_j)}{\sum_k \varphi(x_i+\xi-x_k)} = \frac{\varphi(x_i+\xi-x_j)}{N \varrho_N^\varphi(x_i+\xi)}\,, \\ \label{qij} q_{i,j} & = q_{i,j}(\xi) := \frac{\varphi(y_i+\xi-y_j)}{\sum_k \varphi(y_i+\xi-y_k)} = \frac{\varphi(y_i+\xi-y_j)}{N \tilde\varrho_N^\varphi(y_i+\xi)}\,. \end{align} Then, recalling $\varphi_0=\max\varphi$, \begin{align} \label{pijstim} & \sum_{j=1}^N p_{i,j} = 1\,, \qquad \int\!\mathrm{d} \xi\,\varphi(\xi) \sum_{i=1}^N p_{i,j} \le \varphi_0\,, \\ \label{qijstim} & \sum_{j=1}^N q_{i,j} = 1\,, \qquad \int\!\mathrm{d} \xi\,\varphi(\xi) \sum_{i=1}^N q_{i,j} \le \varphi_0\,.
\end{align} \end{lemma} \begin{proof} The proofs of Eqs.~\eqref{pijstim} and \eqref{qijstim} are the same; let us consider the first one. The normalization property $\sum_{j=1}^N p_{i,j} = 1$ is obvious, while (with the change of variable $\xi' = x_i + \xi$) \[ \begin{split} \int\!\mathrm{d} \xi\, \varphi(\xi) \sum_{i=1}^N p_{i,j} & \le \varphi_0 \int\!\mathrm{d} \xi\, \varphi(\xi) \sum_{i=1}^N \frac{1}{N \varrho_N^\varphi(x_i+\xi)} \\ & = \varphi_0 \sum_{i=1}^N \int\!\mathrm{d} \xi'\, \frac{\varphi(\xi'-x_i)}{N \varrho_N^\varphi(\xi')} = \varphi_0 \int\!\mathrm{d} \xi'\, \frac{N \varrho_N^\varphi(\xi')}{N \varrho_N^\varphi(\xi')} =\varphi_0 \,. \end{split} \] (Recall that the volume of the torus $\bb T^d$ is one.) \end{proof} \begin{lemma} \label{lem:llnw} Given $T>0$, for each $p\in \bb N$ there is $M$ (depending only on $T$, $p$, and on the initial condition $f_0$) such that the following holds. (1) For any $j=1,\ldots,N$ we have \begin{equation} \label{eq:stimv} \int\!\mathrm{d} R_N(t) |w_j|^p = \int\!\mathrm{d} g(t)^{\otimes N}\, |w_j|^p = \int\! \mathrm{d} g(t)\, |w|^p \le \frac M2 \quad \forall\, t\in [0,T]\,. \end{equation} (2) If \begin{equation} \label{eq:llnw} \mc G_{M,p} := \bigg\{(Z_N,\Sigma_N) \colon \frac 1N\sum_{j=1}^N |w_j|^p \le M \bigg\} \end{equation} then \begin{equation} \label{eq:stimllnw} \int\!\mathrm{d} R_N(t) {1 \mskip -5mu {\rm I}}_{\mc G_{M,p}^\complement} \le \frac CN \quad \forall\, t\in [0,T]\,. \end{equation} \end{lemma} \begin{proof} From the estimate on $\mc N_q(g)$ in Eq.~\eqref{eq:utK}, there is $M=M(T,p,f_0)$ such that $\int\!\mathrm{d} y\, \mathrm{d} w\, g(y,w,t) |w|^p \le M/2$ for any $t\in [0,T]$, which proves Eq.~\eqref{eq:stimv}. Moreover, letting $\xi_N= \frac 1N\sum_j|w_j|^p$ and $\bb E(\xi_N) = \int\! \mathrm{d} g(t)^{\otimes N} \xi_N$, we have \[ \int\!\mathrm{d} R_N(t) {1 \mskip -5mu {\rm I}}_{\mc G_{M,p}^\complement} = \int_{{\sum}_j |w_j|^p > MN}\! \mathrm{d} g(t)^{\otimes N} \le \int_{|\xi_N-\bb E(\xi_N)|>M/2}\! \mathrm{d} g(t)^{\otimes N} \,, \] whence Eq.~\eqref{eq:stimllnw} follows from the law of large numbers (via Chebyshev's inequality). \end{proof} \section{Proofs} \label{sec:5} We deduce an upper bound for the quantity $D(Z_N,\Sigma_N)$ introduced in Eq.~\eqref{w2}, which is the sum of several terms. To estimate the expectation of some of them, a separate analysis on the good set and its complement will be necessary. To this end, we first introduce the ``mixed temperature'' \[ \bar T^\varphi_N(x_1+\xi,y_1+\xi) = \frac 1d \sum_{j=1}^N p_{1,j} |w_j-\tilde u_N^\varphi(y_1+\xi)|^2\,. \] To simplify the notation, in what follows we will sometimes omit the explicit dependence on $x_1+\xi$ and $y_1+\xi$. By virtue of Eq.~\eqref{w2} we have \begin{align} \label{w2<<} D(Z_N,\Sigma_N) & \le \int\! \mathrm{d} \xi\, \varphi(\xi) \Big(2|u_N^\varphi - \tilde u_N^\varphi|^2 + 2|\tilde u_N^\varphi - u_g^\varphi|^2\Big) \nonumber \\ & \quad+ \int\! \mathrm{d} \xi\, \varphi(\xi)\, \Big[2d\Big(\sqrt{T_N^\varphi} - \sqrt{\bar T_N^\varphi}\Big)^2 + 2d\Big(\sqrt{\bar T_N^\varphi} - \sqrt{T_g^\varphi}\Big)^2\Big], \end{align} where $\tilde u_N^\varphi$ is defined in Eq.~\eqref{empf2}. Recalling the definitions Eqs.~\eqref{pij} and \eqref{qij}, from the Cauchy-Schwarz inequality, \begin{equation} \label{u-u} |u_N^\varphi - \tilde u_N^\varphi|^2 \le 2\mc V+ 2 \bigg(\sum_{j=1}^N |p_{1,j}-q_{1,j}||w_j|\bigg)^2, \end{equation} where \begin{equation} \label{DV} \mc V = \mc V(\xi,Z_N,\Sigma_N) := \sum_{j=1}^N p_{1,j} |v_j-w_j|^2\,.
\end{equation} Concerning the difference between the empirical and the mixed temperature, we observe that \[ \begin{split} \Big|T_N^\varphi - \bar T_N^\varphi\Big| & \le \frac 1d \sum_{j=1}^N p_{1,j} \big| |v_j- u_N^\varphi|^2 - |w_j-\tilde u_N^\varphi|^2 \big| \\ & = \frac 1d \sum_{j=1}^N p_{1,j} \Big|(v_j - u_N^\varphi - w_j + \tilde u_N^\varphi)\cdot (v_j - u_N^\varphi + w_j - \tilde u_N^\varphi) \Big|\\ & \le \frac 1d \sum_{j=1}^N p_{1,j} (|v_j -w_j| + |u_N^\varphi -\tilde u_N^\varphi|) |v_j-u_N^\varphi| \\ & \quad + \frac 1d \sum_{j=1}^N p_{1,j} (|v_j -w_j| + |u_N^\varphi -\tilde u_N^\varphi|) |w_j-\tilde u_N^\varphi| \\ & \le \frac{1}{\sqrt d} \Big(\sqrt{\mc V} + |u_N^\varphi -\tilde u_N^\varphi| \Big) \Big(\sqrt{T_N^\varphi} +\sqrt{\bar T_N^\varphi}\Big)\,, \end{split} \] where we used the Cauchy-Schwarz inequality in the last step. Therefore, by Eq.~\eqref{u-u}, \begin{align} \label{t-bt} d \Big(\sqrt{T_N^\varphi} - \sqrt{\bar T_N^\varphi}\Big)^2 & = d \bigg(\frac{T_N^\varphi - \bar T_N^\varphi}{\sqrt{T_N^\varphi} +\sqrt{ \bar T_N^\varphi}}\bigg)^2 \le 2 \mc V + 2 |u_N^\varphi -\tilde u_N^\varphi|^2\nonumber \\ & \le 6 \mc V + 4 \bigg(\sum_{j=1}^N |p_{1,j}-q_{1,j}||w_j|\bigg)^2. \end{align} On the other hand, from Eq.~\eqref{eq:TA}, \begin{equation} \label{bt-gt} d\Big(\sqrt{\bar T_N^\varphi} - \sqrt{T_g^\varphi}\Big)^2 = d\bigg(\frac{\bar T_N^\varphi - T_g^\varphi}{\sqrt{\bar T_N^\varphi} +\sqrt{T_g^\varphi}}\bigg)^2 \le \frac{2d\big(\bar T_N^\varphi - \tilde T_N^\varphi\big)^2}{A_t} + \frac{2d\big( \tilde T_N^\varphi - T_g^\varphi\big)^2}{A_t}\,, \end{equation} where $\tilde T_N^\varphi$ is defined in Eq.~\eqref{empf2}. Hence, by Eqs.~\eqref{w2<<}, \eqref{u-u}, \eqref{t-bt} and \eqref{bt-gt}, \begin{equation} \label{w2<} D(Z_N,\Sigma_N) \le D_1(Z_N,\Sigma_N) + D_2(Z_N, \Sigma_N) + \mc E(\Sigma_N) \,, \end{equation} with \begin{align} \label{D1} D_1(Z_N,\Sigma_N) & = \int\! \mathrm{d} \xi\, \varphi(\xi) \, 16 \mc V(\xi,Z_N,\Sigma_N)\,, \\ \label{D2} D_2(Z_N,\Sigma_N) & = \int\! \mathrm{d} \xi\, \varphi(\xi) \bigg[ 12 \bigg(\sum_{j=1}^N |p_{1,j}-q_{1,j}||w_j|\bigg)^2 + \frac{4d\big(\bar T_N^\varphi - \tilde T_N^\varphi\big)^2}{A_t} \bigg], \\ \label{E} \mc E(\Sigma_N) & = \int\! \mathrm{d} \xi\, \varphi(\xi) \, 2 |\tilde u_N^\varphi- u_g^\varphi|^2 + \int\! \mathrm{d} \xi\, \varphi(\xi) \, \frac{4d\big( \tilde T_N^\varphi - T_g^\varphi \big)^2 }{A_t}\,. \end{align} From Eqs.~\eqref{I<1}, \eqref{w2<}, and recalling the definition Eq.~\eqref{g} of the good set, we arrive at the following estimate on the derivative of $I_N(t)$, \begin{equation} \label{I<2} \dot I_N(t) \le I_N(t) + \mc D_a(t) + \mc D_b(t) + \mc D_c(t) + \mc D_d(t)\,, \end{equation} where \[ \begin{split} \mc D_a(t) & = \int\! \mathrm{d} R_N(t)\, D_1(Z_N,\Sigma_N)\,, \qquad \mc D_b(t) = \int\! \mathrm{d} R_N(t)\, D_2(Z_N,\Sigma_N) {1 \mskip -5mu {\rm I}}_{\mc G^\complement}\,, \\ \mc D_c(t) & = \int\! \mathrm{d} R_N(t)\, D_2(Z_N,\Sigma_N) {1 \mskip -5mu {\rm I}}_{\mc G}\,, \quad \mc D_d(t) = \int\! \mathrm{d} g(t)^{\otimes N} \, \mc E (\Sigma_N)\,. \end{split} \] \subsection{Upper bound on \texorpdfstring{$\mc D_a(t)$}{a}} \label{sec:5.1} Since $\mathrm{d} R_N(t)$ is symmetric with respect to particle permutations, \begin{align} \label{da} \mc D_a(t) & = \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \, 16 \mc V(\xi,Z_N,\Sigma_N) \nonumber \\ & = \frac{16}N \sum_{i=1}^N \int\! \mathrm{d} R_N(t)\int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N p_{i,j} |v_j-w_j|^2 \nonumber \\ & = \frac{16}N \int\!
\mathrm{d} R_N(t) \sum_{j=1}^N \bigg( \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{i=1}^N p_{i,j}\bigg) |v_j-w_j|^2 \nonumber \\ & \le 16\varphi_0 I_N(t) \,, \end{align} where we used the upper bound of Eq.~\eqref{pijstim} in the last estimate. \subsection{Upper bound on \texorpdfstring{$\mc D_b(t)$}{b}} \label{sec:5.2} By repeatedly applying the Cauchy-Schwarz inequality we have \[ \bigg(\sum_{j=1}^N |p_{1,j}-q_{1,j}||w_j|\bigg)^2 \le \bigg(\sum_{j=1}^N (p_{1,j} + q_{1,j})|w_j|\bigg)^2 \le 2 \sum_{j=1}^N (p_{1,j} + q_{1,j}) |w_j|^2 \,, \] \[ \big(\bar T_N^\varphi - \tilde T_N^\varphi\big)^2 \le \bigg(\sum_{j=1}^N (p_{1,j} + q_{1,j})|w_j-\tilde u_N^\varphi|^2\bigg)^2 \le C \sum_{j=1}^N (p_{1,j}+q_{1,j}) |w_j|^4 + C \big|\tilde u_N^\varphi\big|^4\,, \] and \[ \big|\tilde u_N^\varphi\big|^4 \le \bigg(\sum_{j=1}^N q_{1,j} |w_j| \bigg)^4 \le \sum_{j=1}^Nq_{1,j} |w_j|^4\,. \] Therefore, by Eq.~\eqref{D2} and recalling the definition Eq.~\eqref{g} of $\mc G$, \begin{align} \label{db1} \mc D_b(t) & \le C \int\! \mathrm{d} R_N(t)\, \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N (p_{1,j}+q_{1,j})(|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G^\complement} \nonumber \\ & \le C(\mc R_1 + \mc R_2)\,, \end{align} where \[ \begin{split} \mc R_1 & = \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N (p_{1,j}+q_{1,j})(|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\,, \\ \mc R_2 & = \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N (p_{1,j}+q_{1,j})(|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement}\,. \end{split} \] Since $\mathrm{d} R_N(t)$ and ${1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}$ are symmetric with respect to particle permutations, \[ \begin{split} \mc R_1 & = \frac 1N \sum_{i=1}^N \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N (p_{i,j}+q_{i,j})(|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \\ & = \int\! \mathrm{d} R_N(t)\, \frac 1N \sum_{j=1}^N \bigg( \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{i=1}^N (p_{i,j} + q_{i,j}) \bigg) (|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \\ & \le 2\varphi_0 \int\! \mathrm{d} R_N(t)\, \frac 1N \sum_{j=1}^N (|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\,, \end{split} \] where we used the upper bounds of Eqs.~\eqref{pijstim} and \eqref{qijstim} in the last inequality. Therefore, from the Cauchy-Schwarz inequality and Eqs.~\eqref{eq:densy} and \eqref{eq:stimv}, \begin{align} \label{db2} \mc R_1 & \le 2 \varphi_0 \bigg( \int\! \mathrm{d} R_N(t)\, \frac 1N \sum_{j=1}^N (|w_j|^2 + |w_j|^4)^2 \bigg)^{1/2} \bigg( \int\! \mathrm{d} R_N(t)\, {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \bigg)^{1/2} \nonumber \\ &\le \frac{C\varphi_0}{(r^{3d} N)^{1/2}} \bigg( \int\! \mathrm{d} R_N(t)\, \frac 1N \sum_{j=1}^N (|w_j|^4 + |w_j|^8)\bigg)^{1/2} \le \frac{C\varphi_0}{(r^{3d} N)^{1/2}}\,. \end{align} Analogously, since $\mathrm{d} R_N(t)$ and ${1 \mskip -5mu {\rm I}}_{\mc G_1^\complement}$ are symmetric with respect to particle permutations, by applying the upper bounds of Eqs.~\eqref{pijstim} and \eqref{qijstim} and the Cauchy-Schwarz inequality, \[ \begin{split} \mc R_2 & = \frac 1N \sum_{i=1}^N \int\! \mathrm{d} R_N(t) \int\! \mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^N (p_{i,j}+q_{i,j})(|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement} \\ & \le 2\varphi_0 \int\! \mathrm{d} R_N(t) \frac 1N \sum_{j=1}^N (|w_j|^2 + |w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement} \\ & \le C \varphi_0 \int\!
\mathrm{d} R_N(t) \bigg(1 + \frac 1N \sum_{j=1}^N |w_j|^4\bigg) {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement}\,. \end{split} \] Recalling Eq.~\eqref{eq:llnw}, we estimate ${1 \mskip -5mu {\rm I}}_{\mc G_1^\complement} \le {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement \cap \mc G_{M,4}} + {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}^\complement}$ so that \begin{align} \label{db3} \mc R_2 & \le C(1+M) \varphi_0\int\! \mathrm{d} R_N(t)\, {1 \mskip -5mu {\rm I}}_{\mc G_1^\complement} + C\varphi_0 \int\! \mathrm{d} R_N(t) \frac 1N \sum_{j=1}^N (1+|w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}^\complement} \nonumber \\ & \le \frac{C(1+M) \varphi_0}{A_\varphi^2} \int\! \mathrm{d} R_N(t)\, \frac 1N \sum_{i=1}^N |x_i-y_i|^2 \nonumber \\ & \quad + C \varphi_0 \bigg[\int\! \mathrm{d} R_N(t)\, \bigg(\frac 1N \sum_{j=1}^N (1+|w_j|^4)\bigg)^{\!2}\,\bigg]^{1/2} \bigg( \int\! \mathrm{d} R_N(t)\, {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}^\complement} \bigg)^{1/2} \nonumber \\ & \le \frac{C (1+ M) \varphi_0}{A_\varphi^2} I_N(t) + \frac{C\varphi_0}{N^{1/2}}\,, \end{align} where we used Chebyshev's inequality, the Cauchy-Schwarz inequality twice, and finally Eqs.~\eqref{eq:stimv} and \eqref{eq:stimllnw}. From Eqs.~\eqref{db1}, \eqref{db2}, and \eqref{db3}, and by Eq.~\eqref{afi}, we finally obtain \begin{equation} \label{da<} \mc D_b(t) \le C \frac{\|\nabla\varphi\|_\infty^2}{r^{2d}\varphi_0} I_N(t) + \frac{C\varphi_0}{(r^{3d} N)^{1/2}}\,. \end{equation} \subsection{Upper bound on \texorpdfstring{$\mc D_c(t)$}{c}} \label{sec:5.3} As \[ \begin{split} p_{1,j} - q_{1,j} & = \frac{\varphi(x_1+\xi-x_j) - \varphi(y_1+\xi-y_j)}{N\varrho_N^\varphi(x_1+\xi)} \\ & \quad + \varphi(y_1+\xi-y_j) \frac{\sum_k [ \varphi(y_1+\xi-y_k) - \varphi(x_1+\xi-x_k)]}{N^2\varrho_N^\varphi(x_1+\xi) \tilde\varrho_N^\varphi(y_1+\xi)}\,, \end{split} \] from Eq.~\eqref{stimdens} we have that \begin{align} \label{stimp-q} & |p_{1,j} - q_{1,j}| \le \frac{2\|\nabla\varphi\|_\infty}{Ar^d\varphi_0N} \big(|x_1-y_1| + |x_j-y_j| \big) \nonumber \\ & \qquad + \frac{2\varphi_0\|\nabla\varphi\|_\infty}{N^2(Ar^d\varphi_0)^2} \sum_{k=1}^N \big(|x_1-y_1| + |x_k-y_k| \big) \nonumber \\ & \quad \le \frac{C\|\nabla\varphi\|_\infty}{r^{2d}\varphi_0 N} \bigg(|x_1-y_1| + |x_j-y_j| + \frac 1N \sum_{k=1}^N |x_k-y_k| \bigg) \quad \text{in the set }\mc G. \end{align} Therefore, from the Cauchy-Schwarz inequality, in the set $\mc G$, \[ \bigg(\sum_{j=1}^N |p_{1,j}-q_{1,j}||w_j|\bigg)^2 \le \frac{C\|\nabla\varphi\|_\infty^2}{r^{4d}\varphi_0^2} \bigg( |x_1-y_1|^2 + \frac 1N\sum_{k=1}^N |x_k-y_k|^2\bigg)\frac 1N \sum_{j=1}^N|w_j|^2 \,. \] Analogously, still in $\mc G$, \[ \begin{split} \big(\bar T_N^\varphi - \tilde T_N^\varphi\big)^2 \le & \bigg(\sum_{j=1}^N |p_{1,j} - q_{1,j}| |w_j-\tilde u_N^\varphi|^2\bigg)^2 \\ &\ \le \frac{C\|\nabla\varphi\|_\infty^2}{ r^{4d}\varphi_0^2} \bigg( |x_1-y_1|^2 + \frac 1N\sum_{k=1}^N |x_k-y_k|^2\bigg) \frac 1N \sum_{j=1}^N |w_j-\tilde u_N^\varphi|^4 \\ &\ \le \frac{C\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} \bigg( |x_1-y_1|^2 + \frac 1N\sum_{k=1}^N |x_k-y_k|^2\bigg) \frac 1N \sum_{j=1}^N |w_j|^4\,, \end{split} \] where, in the last inequality, we used that, because of Eq.~\eqref{stimdens}, \[ \big|\tilde u_N^\varphi\big|^4 \le \bigg(\sum_{j=1}^N q_{1,j} |w_j| \bigg)^4 \le \sum_{j=1}^Nq_{1,j} |w_j|^4 \le \frac{1}{Ar^dN} \sum_{j=1}^N |w_j|^4\quad \text{ in the set }\mc G. \] Recalling Eq.~\eqref{D2}, the above estimates allow us to control $D_2(Z_N,\Sigma_N)$ in the set $\mc G$, \[ \mc D_c(t) \le \frac{C\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} \int\!
\mathrm{d} R_N(t)\, \bigg( |x_1-y_1|^2 + \frac 1N\sum_{k=1}^N |x_k-y_k|^2\bigg) \frac 1N \sum_{j=1}^N (1+|w_j|^4)\,. \] We now argue analogously to what was done to get Eq.~\eqref{db3}: recalling Eq.~\eqref{eq:llnw} and inserting $1= {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}} + {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}^\complement}$ in the right-hand side we have that \begin{align} \label{dc1} \mc D_c(t) & \le \frac{2C(1+M)\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} I_N(t) + \frac{C\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} \int\! \mathrm{d} R_N(t)\,\frac 1N \sum_{j=1}^N (1+|w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc G_{M,4}^\complement} \nonumber \\ & \le \frac{2C(1+M)\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} I_N(t) + \frac{C\|\nabla\varphi\|_\infty^2}{r^{5d} \varphi_0^2 N^{1/2}}\,, \end{align} where, in estimating the integrand on $\mc G_{M,4}^\complement$, we used that the mutual distance between the particles is not greater than one. \subsection{Upper bound on \texorpdfstring{$\mc D_d(t)$}{d}} \label{sec:5.4} We decompose \[ \mc D_d(t) = \mc D_d^{(1)}(t) + \mc D_d^{(2)}(t)\,, \] where \begin{align*} \mc D_d^{(1)}(t) & := \int\! \mathrm{d} g(t)^{\otimes N} \int\! \mathrm{d} \xi\, \varphi(\xi) \, 2 |\tilde u_N^\varphi- u_g^\varphi|^2\,, \\ \mc D_d^{(2)}(t) & := \int\! \mathrm{d} g(t)^{\otimes N} \int\! \mathrm{d} \xi\, \varphi(\xi) \, \frac{4d\big( \tilde T_N^\varphi - T_g^\varphi \big)^2 }{A_t}\,, \end{align*} and analyze the two terms separately. \medskip \noindent \textit{Upper bound on $\mc D_d^{(1)}(t)$.} After introducing the random variables \[ \mc U_j(\xi) := \varrho_g^\varphi(\xi,t)\varphi(\xi-y_j)w_j - \varphi(\xi-y_j) (\varrho_g^\varphi u_g^\varphi) (\xi,t)\,, \] we observe that \[ \begin{split} & \int\! \mathrm{d} \xi\, \varphi(\xi) \,|\tilde u_N^\varphi- u_g^\varphi|^2 \le \int\! \mathrm{d} \xi\, \varphi(\xi)\, \big[|\tilde u_N^\varphi- u_g^\varphi|^2 {1 \mskip -5mu {\rm I}}_{\mc B_A} + 2(|\tilde u_N^\varphi|^2 + |u_g^\varphi|^2) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\big] \\ & \quad \quad = \int\! \mathrm{d} \xi\, \frac{ \varphi(\xi-y_1) }{\tilde\varrho_N^\varphi (\xi)^2 \varrho_g^\varphi(\xi)^2} \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^2{1 \mskip -5mu {\rm I}}_{\mc B_A} + 2\int\! \mathrm{d} \xi\, \varphi(\xi) \big(|\tilde u_N^\varphi|^2 + |u_g^\varphi|^2\big) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \\ & \quad \quad \le \frac{C}{\varphi_0^2 r^{2d}} \int\! \mathrm{d} \xi\, \varphi(\xi-y_1) \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^2 + C \int\! \mathrm{d} \xi\, \varphi(\xi)\sum_{j=1}^N q_{1,j} (1+|w_j|^2) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\,. \end{split} \] Above, we applied (after the change of variables $\xi \to \xi+y_1$) the definition Eq.~\eqref{ba} and the lower bound Eq.~\eqref{stimrho} to estimate the first term in the right-hand side, and the Cauchy-Schwarz inequality together with Eq.~\eqref{eq:utK} to estimate the second one. We notice that the variables $\mc U_j(\xi)$ are i.i.d.\ and satisfy \begin{equation} \label{Uj} \begin{split} & \int\! \mathrm{d} g(t)^{\otimes N}\, \mc U_j(\xi) = 0\,, \\ & \int\! \mathrm{d} g(t)^{\otimes N}\, |\mc U_j(\xi)|^2 \le C \varphi_0 \int\!\mathrm{d} y\, \mathrm{d} w\, \varphi(\xi-y) g (y,w,t) |w|^2 \le C \varphi_0\,, \end{split} \end{equation} where we used the upper bound on $\mc N_q(g(t))$ given in Eq.~\eqref{eq:utK} with $q>2+d$. Therefore, \begin{align} \label{lln} \int\! \mathrm{d} g(t)^{\otimes N}\, \int\! \mathrm{d} \xi\, \varphi(\xi-y_1) \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^2 & \le \varphi_0 \int\!
\mathrm{d} \xi\, \int\! \mathrm{d} g(t)^{\otimes N} \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^2 \nonumber \\ & \le \frac{C\varphi_0^2}N\,. \end{align} On the other hand, \begin{align} \label{stw} & \int\! \mathrm{d} g(t)^{\otimes N}\, \int\! \mathrm{d} \xi\, \varphi(\xi)\sum_{j=1}^N q_{1,j} (1+|w_j|^2) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \nonumber \\ & \qquad\qquad= \frac 1N \sum_{i=1}^N \int\! \mathrm{d} g(t)^{\otimes N} \int\!\mathrm{d} \xi\, \varphi(\xi) \sum_{j=1}^Nq_{i,j} (1 + |w_j|^2) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\nonumber \\ & \qquad \qquad \le C\varphi_0 \int\! \mathrm{d} g(t)^{\otimes N} \frac 1N \sum_{j=1}^N (1 + |w_j|^2) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \le \frac{C\varphi_0}{(r^{3d} N)^{1/2}}\,, \end{align} where we used that $\mathrm{d} g(t)^{\otimes N}$ and ${1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}$ are symmetric with respect to particle permutations, the upper bound of Eq.~\eqref{qijstim}, and finally, as done in Eq.~\eqref{db2}, the Cauchy-Schwarz inequality and Eqs.~\eqref{eq:densy} and \eqref{eq:stimv}. Putting the above together, we obtain \begin{equation} \label{dd1} \mc D_d^{(1)}(t) \le C \bigg(\frac{1}{r^{2d}N} + \frac{\varphi_0}{(r^{3d} N)^{1/2}}\bigg)\,. \end{equation} \medskip \noindent \textit{Upper bound on $\mc D_d^{(2)}(t)$.} We argue similarly to the previous case. Since \[ d(\tilde T_N^\varphi - T_g^\varphi) = \sum_{j=1}^N q_{1,j} |w_j|^2 - (d T_g^\varphi + |u_g^\varphi|^2) + |u_g^\varphi|^2 - |\tilde u_N^\varphi|^2\,, \] after introducing the random variables \[ \mc T_j(\xi) = \varrho_g^\varphi(\xi,t)\varphi(\xi-y_j) |w_j|^2 - \varphi(\xi-y_j) \big( \varrho_g^\varphi (d T_g^\varphi + |u_g^\varphi|^2)\big)(\xi,t) \] and using that $(|u_g^\varphi|^2 - |\tilde u_N^\varphi|^2)^2 \le |\tilde u_N^\varphi- u_g^\varphi|^4$, we have \begin{align} \label{t-t} & \int\! \mathrm{d} \xi\, \varphi(\xi) \, (\tilde T_N^\varphi - T_g^\varphi)^2 \le \int\! \mathrm{d} \xi\, \varphi(\xi)\, \big[(\tilde T_N^\varphi - T_g^\varphi)^2 {1 \mskip -5mu {\rm I}}_{\mc B_A} + \big((\tilde T_N^\varphi)^2 + (T_g^\varphi)^2\big){1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\big] \nonumber \\ & \quad = \int\! \mathrm{d} \xi\, \frac{\varphi(\xi-y_1)}{\tilde\varrho_N^\varphi (\xi)^2 \varrho_g^\varphi(\xi)^2} \bigg\{\bigg(\frac 1{Nd} \sum_{j=1}^N \mc T_j(\xi) \bigg)^2 + \frac{1}{\tilde\varrho_N^\varphi (\xi)^2 \varrho_g^\varphi(\xi)^2} \bigg|\frac 1{Nd} \sum_{j=1}^N \mc U_j(\xi) \bigg|^4 \bigg\} {1 \mskip -5mu {\rm I}}_{\mc B_A} \nonumber \\ & \quad \qquad + \int\! \mathrm{d} \xi\, \varphi(\xi) \big[(\tilde T_N^\varphi)^2 + (T_g^\varphi)^2\big]{1 \mskip -5mu {\rm I}}_{\mc B_A^\complement} \nonumber \\ & \quad \quad \le \frac{C}{\varphi_0^2 r^{2d}} \int\! \mathrm{d} \xi\, \varphi(\xi-y_1)\bigg\{\bigg(\frac 1N \sum_{j=1}^N \mc T_j(\xi) \bigg)^2 + \frac{1}{\varphi_0^2 r^{2d}} \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^4\bigg\} \nonumber \\ & \quad\qquad + C \int\! \mathrm{d} \xi\, \varphi(\xi)\sum_{j=1}^N q_{1,j} (1+|w_j|^4) {1 \mskip -5mu {\rm I}}_{\mc B_A^\complement}\,. \end{align} Clearly, the expectation with respect to $\mathrm{d} g(t)^{\otimes N}$ of the last term in the right-hand side of Eq.~\eqref{t-t} can be bounded as in Eq.~\eqref{stw}, while, by Eq.~\eqref{Uj}, \[ \int\! \mathrm{d} g(t)^{\otimes N}\, \int\! \mathrm{d} \xi\, \varphi(\xi-y_1) \bigg|\frac 1N \sum_{j=1}^N \mc U_j(\xi) \bigg|^4 \le \frac{C\varphi_0^3}{N^2}\,.
\] Finally, concerning the expectation with respect to $\mathrm{d} g(t)^{\otimes N}$ of the first term in the right-hand side of Eq.~\eqref{t-t}, we observe that the variables $\mc T_j(\xi)$ are also i.i.d.\ and satisfy \[ \begin{split} & \int\! \mathrm{d} g(t)^{\otimes N}\, \mc T_j(\xi) = 0\,, \\ & \int\! \mathrm{d} g(t)^{\otimes N}\, |\mc T_j(\xi)|^2 \le C \varphi_0 \int\!\mathrm{d} y\, \mathrm{d} w\, \varphi(\xi-y) g (y,w,t) |w|^4 \le C \varphi_0\,, \end{split} \] where we used the upper bound on $\mc N_q(g(t))$ given in Eq.~\eqref{eq:utK} with $q>4+d$. Therefore, analogously to Eq.~\eqref{lln}, \[ \int\! \mathrm{d} g(t)^{\otimes N}\, \int\! \mathrm{d} \xi\, \varphi(\xi-y_1) \bigg(\frac 1N \sum_{j=1}^N \mc T_j(\xi) \bigg)^2 \le \frac{C\varphi_0^2}N\,. \] Collecting together the above bounds, we obtain \begin{equation} \label{dd2} \mc D_d^{(2)}(t) \le C \bigg(\frac{1}{r^{2d}N} + \frac{1}{r^{4d}\varphi_0N^2} + \frac{\varphi_0}{(r^{3d} N)^{1/2}}\bigg)\,. \end{equation} From Eqs.~\eqref{dd1} and \eqref{dd2} we conclude that \begin{equation} \label{dd5} \mc D_d(t) \le C \bigg(\frac{1}{r^{2d}N} + \frac{1}{r^{4d}\varphi_0 N^2} + \frac{\varphi_0}{(r^{3d} N)^{1/2}}\bigg)\,. \end{equation} \subsection{Proof of Eq.~(\ref{I_N})} \label{sec:5.5} From Eqs.~\eqref{I<2}, \eqref{da}, \eqref{da<}, \eqref{dc1}, and \eqref{dd5} we get \[ \begin{split} \dot I_N(t) & \le C \bigg( \varphi_0 + \frac{\|\nabla\varphi\|_\infty^2}{r^{2d}\varphi_0} + \frac{\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} \bigg) I_N(t) \\ & \quad + \bigg(\frac{1}{r^{2d} N} + \frac{1}{r^{4d}\varphi_0 N^2} + \frac{\varphi_0}{(r^{3d} N)^{1/2}} + \frac{\|\nabla\varphi\|_\infty^2}{r^{5d} \varphi_0^2 N^{1/2}} \bigg)\,. \end{split} \] We then choose \begin{equation} \label{N0Gammaphi} \Gamma_{\varphi} = \frac{\varphi_0^3 + \|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2}\,, \qquad N_\varphi = \frac{\varphi_0^4} {r^{6d}} + \frac{\|\nabla\varphi\|_\infty^8}{(r^{5d}\varphi_0^2)^4}\,, \end{equation} so that, recalling $\varphi_0\ge1$ and that $r^{3d}\varphi_0\le 1$ (indeed $r^d\varphi_0 \le 2\cdot 10^{-d}$, as follows from Eqs.~\eqref{eq:varphi} and \eqref{r0} since $\int\!\varphi = 1$), \[ \bigg(\varphi_0 + \frac{\|\nabla\varphi\|_\infty^2}{r^{2d}\varphi_0} + \frac{\|\nabla\varphi\|_\infty^2}{r^{5d}\varphi_0^2} \bigg) \le 2\Gamma_\varphi \] and \[ \bigg(\frac{1}{r^{2d}N} + \frac{1}{r^{4d}\varphi_0N^2} + \frac{\varphi_0}{(r^{3d} N)^{1/2}} + \frac{\|\nabla\varphi\|_\infty^2}{r^{5d} \varphi_0^2 N^{1/2}} \bigg) \le \frac{4}{N^{1/4}} \qquad \forall\, N>N_\varphi\,. \] Therefore, $\dot I_N(t) \le C\Gamma_\varphi I_N(t) + C/N^{1/4}$ for any $N>N_\varphi$, from which Eq.~\eqref{I_N} follows by Gr\"{o}nwall's inequality, and the proof of Theorem \ref{teo:main} is complete. \section{Removing the cutoff and conclusion} \label{sec:6} The regularized BGK equation Eq.~\eqref{eq:kin} reduces to the usual one, Eq.~\eqref{eq:bgk}, when the cutoff function $\varphi$ converges to the $\delta$-function, at least formally. In this section, $\varphi=\varphi^{(\varepsilon)}$ is the rescaled function given by Eq.~\eqref{scaphi} and we assume $t\in [0,T]$ for fixed $T>0$. We recall that $C$ denotes a generic positive constant, possibly depending only on $T$ and $f_0$, hence independent of $\varepsilon$. \begin{proof}[Proof of Theorem \ref{teo:lim}] The limit $\varepsilon \to 0$ was investigated in \cite{BHP}, where the convergence $g \to f$ is proven in a weighted $L^1$ space, see \cite[Theorem 2.4]{BHP}.
Here, given $f$ and $g$, we rather study the processes $(x(t),v(t))$ and $(y(t),w(t))$ whose generators are given by ($\psi$ denotes a test function) \begin{equation} \label{nonlin1} \mc L_1\psi(x,v) = (v\cdot \nabla_x -1)\psi (x,v) + \int\! \mathrm{d} \tilde v\, M_f(x,\tilde v) \psi(x,\tilde v) \end{equation} and $\mc L_2 = \mc L_1^g$ as in Eq.~\eqref{nonlin}, i.e., \begin{equation} \label{nonlin2} \mc L_2\psi(y,w) = (w\cdot \nabla_y -1) \psi(y,w) + \int\! \mathrm{d} \tilde y \, \varphi(\tilde y - y) \int\! \mathrm{d} \tilde w\, M_g^\varphi(\tilde y,\tilde w) \psi(\tilde y,\tilde w) \,, \end{equation} respectively. We fix $f_0$ as the common initial datum for Eqs.~\eqref{eq:bgk} and \eqref{eq:kin}, and assume that it satisfies the hypotheses Eqs.~\eqref{eq:f0} and \eqref{grad}. Note that, in particular, the hypotheses of Propositions \ref{prop:bgk} and \ref{prop:stim_uT} are satisfied. We couple the two processes by defining in the product space the process whose generator is given by ($G$ denotes a test function) \[ \begin{split} \mc L_Q G (x,v,y,w) & = ( v\cdot \nabla_x+w\cdot \nabla_y -1) G(x,v,y,w) \\ & \quad + \int\! \mathrm{d} \tilde y \int\! \mathrm{d} \tilde v \int\! \mathrm{d} \tilde w\, \mc M (x,\tilde v;\tilde y, \tilde w) \varphi(\tilde y - y) G (x, \tilde v , \tilde y,\tilde w)\,, \end{split} \] where, analogously to what was done in Sec.~\ref{sec:3.1}, $\mc M(x,\tilde v;\tilde y, \tilde w)$ is the joint representation of the Maxwellians $M_f$ and $M^\varphi_g$ that realizes the 2-Wasserstein distance between the marginals. We denote by $\mathrm{d} R(t) = \mathrm{d} R(x,v;y,w;t)$ the law of this process, assuming that, initially, \[ \mathrm{d} R(x,v;y,w;0) = f_0(x,v) \delta (x-y) \delta (v-w)\, \mathrm{d} x\, \mathrm{d} v\, \mathrm{d} y\, \mathrm{d} w\,. \] Clearly, \begin{equation} \label{W<I} \mc W_2(f(t),g(t))^2 \le I(t) := \int\! \mathrm{d} R(t)\, \big(|x-y|^2 + |v-w|^2\big). \end{equation} To obtain an upper bound for $I(t)$ we compute \[ \begin{split} \dot I(t) & = 2\int\! \mathrm{d} R(t)\, (x-y) \cdot (v-w) -\int\! \mathrm{d} R(t)\, |x-y|^2 \\ & \quad + \int\! \mathrm{d} R(t) \int\! \mathrm{d} \tilde y \, \varphi(y-\tilde y) |x-\tilde y|^2 + \frac{\mathrm{d}}{\mathrm{d} t} \int\! \mathrm{d} R(t)\, |v-w|^2 \\ & \le \int\! \mathrm{d} R(t)\, |v-w|^2 + 2 \int\! \mathrm{d} R(t)\, |x-y|^2 + E_1 + \frac{\mathrm{d}}{\mathrm{d} t} \int\! \mathrm{d} R(t)\, |v-w|^2 \,, \end{split} \] where \[ E_1= 2\int\! \mathrm{d} R(t) \int\! \mathrm{d} \tilde y\, \varphi(y -\tilde y ) |y-\tilde y|^2 \le C \varepsilon^2\,, \] while, by virtue of the choice of $\mc M(x,\tilde v;\tilde y, \tilde w)$, \[ \frac{\mathrm{d}}{\mathrm{d} t} \int\! \mathrm{d} R(t)\, |v-w|^2 = S_1 + S_2 - \int\! \mathrm{d} R(t)\, |v-w|^2\,, \] with \[ S_1 = \int\! \mathrm{d} R(t)\, |u_f(x,t) -u_g^\varphi (y,t) |^2\,, \quad S_2 = \int\! \mathrm{d} R(t)\, d \Big(\sqrt {T_f(x,t)} - \sqrt {T_g^\varphi (y,t)}\Big)^2\,. \] Therefore, \begin{equation} \label{I<} \dot I(t) \le 2 I(t) + S_1 + S_2 + C \varepsilon^2 \end{equation} and it remains to estimate $S_1$ and $S_2$. \medskip \noindent \textit{Upper bound on $S_1$.} Setting \[ \begin{split} \mathrm{d} r(x,y;t) & = \int_{v,w}\! \mathrm{d} R(x,v;y,w;t)\,, \\ \mathrm{d} R^\varphi(x,v;y,w;t) & = \int\! \mathrm{d} \tilde y\, \varphi(y-\tilde y) \, \mathrm{d} R(x,v;\tilde y,w;t) \,,\\ \mathrm{d} r^\varphi(x,y;t) & = \int\! \mathrm{d} \tilde y\, \varphi(y-\tilde y)\, \mathrm{d} r(x,\tilde y;t)\,, \end{split} \] we have \[ S_1=\bar S_1+E_2\,, \] where \[ \bar S_1= \int\! \mathrm{d} r^\varphi(t)\, |u_f(x,t) -u_g^\varphi (y,t) |^2\,, \quad E_2= \int\!
(\mathrm{d} r- \mathrm{d} r^\varphi) (t)\, |u_f(x,t) -u_g^\varphi (y,t) |^2\,. \] We start by estimating $\bar S_1$, noticing that if \[ g^\varphi(y,w,t) := \int\!\mathrm{d} \tilde y\, \varphi(y-\tilde y) g(\tilde y, w,t) \] then \begin{align} \label{T1} \bar S_1= & \int\! \mathrm{d} r^\varphi(t)\, \bigg|\frac {\int\!\mathrm{d} v\, f(x,v,t) v}{\varrho_f(x,t)} - \frac {\int\!\mathrm{d} w\, g^\varphi(y,w,t)w}{ \varrho_g^\varphi(y,t)}\bigg|^2 \nonumber \\ = & \int\! \mathrm{d} r^\varphi(t)\,\bigg|\int\! \mathrm{d} \Lambda_{x,y}(v,w)\, (v-w) \bigg|^2 \le \int\! \mathrm{d} r^\varphi(t) \int\! \mathrm{d} \Lambda_{x,y}(v,w)\, |v-w|^2\,, \end{align} with (noting also that $\varrho_g^\varphi = \varrho_{g^\varphi}$) \begin{equation} \label{rc} \mathrm{d} \Lambda_{x,y}(v,w)\in \mc P\bigg(\frac {f(x,v,t)\, \mathrm{d} v}{\varrho_f(x,t)},\frac{g^\varphi(y,w,t) \,\mathrm{d} w}{\varrho_g^\varphi(y,t)}\bigg) \end{equation} to be fixed later (recall $\mc P(\mu,\nu)$ denotes the collection of all joint probability measures in the product space with marginals $\mu$ and $\nu$, see Remark \ref{rem:wass}). Looking at the coupled stochastic process $(x(t),y(t))$, we realize it is of the form \[ \begin{split} x(t) & = x_0 + v_0 t_1 + v_1 (t_2-t_1) +v_2 (t_3-t_2) \cdots\,, \\ y(t) & = x_0 + v_0 t_1 + \xi_1+ w_1 (t_2-t_1) + \xi_2 + w_2 (t_3-t_2) \cdots \end{split} \] where $t_1 <t_2 < \cdots < t_k $ are exponential times at which the jumps in velocity are simultaneously performed. The outgoing velocities are Maxwellian, computed via the hydrodynamical fields at $x(t_k)$ and $y(t_k) +\xi_k$. Finally, the extra displacements $\xi_k$ are i.i.d.~distributed according to $\varphi$. Now, let $R(\mathrm{d} v\, \mathrm{d} w|x,y;t)$, resp.~$R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)$, be the conditional probability of $\mathrm{d} R(x,v;y,w;t)$, resp.~$\mathrm{d} R^\varphi (x,v;y,w;t)$, conditioned on the values $x,y$ at time $t$. By what was just noticed about the structure of the coupled process, the conditional probability has the property that $\int\! R(\mathrm{d} v\, \mathrm{d} w|x,y;t) \, \psi (v)$ and $\int\! R(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi (w)$ are independent of $y$ and $x$, respectively. Similarly, $\int\! R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi (v)$ and $\int\! R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi (w)$ are independent of $y$ and $x$, respectively. \begin{remark} \label{rem:ma} Note that the measures $\mathrm{d} R (x,v;y,w;t)$ and $\mathrm{d} r (x,y;t) $ are not absolutely continuous with respect to the Lebesgue measure. Indeed, the contribution due to the event with zero jumps transports the initial delta function in position and velocity. Incidentally, this event does not contribute to the evaluation of $I(t)$. On the other hand, due to the convolution with $\varphi$, $\mathrm{d} r^\varphi$ is absolutely continuous with respect to the Lebesgue measure, and we denote by $r^\varphi$ its density. \end{remark} We now observe that \[ R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t) \in \mc P\bigg(\frac {f(x,v,t)\, \mathrm{d} v}{\varrho_f(x,t)},\frac{g^\varphi(y,w,t) \,\mathrm{d} w}{\varrho_g^\varphi(y,t)}\bigg)\,. \] Indeed, since $\varrho_g^\varphi(y,t)^{-1} \int\! \mathrm{d} x\, r^\varphi(x,y;t) =1$, for any test function $\psi$, \[ \begin{split} & \int\! \mathrm{d} y \int\! R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi(y,w) = \int\! \mathrm{d} x \int\! \mathrm{d} y\, \frac{r^\varphi(x,y;t)}{\varrho_g^\varphi(y,t)} \int\!
R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi(y,w) \\ & \qquad \quad = \int\! \mathrm{d} R^\varphi(x,v;y,w;t)\, \frac{\psi(y,w)}{\varrho_g^\varphi(y,t)}= \int\!\mathrm{d} y \int\! \frac{g^\varphi(y,w,t) \,\mathrm{d} w}{\varrho_g^\varphi(y,t)} \psi(y,w)\,, \end{split} \] where, in the first step, we used that the left-hand side does not depend on $x$. The same argument can be used to prove that \[ \int\!\mathrm{d} x \int\!R^\varphi(\mathrm{d} v\, \mathrm{d} w|x,y;t)\, \psi(x,v) = \int\!\mathrm{d} x \int\!\frac {f(x,v,t)\, \mathrm{d} v}{\varrho_f(x,t)} \psi(x,v)\,. \] By choosing $\mathrm{d} \Lambda_{x,y}(v,w) = R^\varphi(\mathrm{d} v \, \mathrm{d} w|x,y;t)$ in Eq.~\eqref{T1} we obtain \begin{align} \label{est:T1} \bar S_1 & \le \int\! \mathrm{d} r^\varphi(t) \int\! R^\varphi(\mathrm{d} v \, \mathrm{d} w|x,y;t) \, |v-w|^2 \nonumber \\ & = \int\!\mathrm{d} R^\varphi(t)\, |v-w|^2 = \int\!\mathrm{d} R (t)\, |v-w|^2\,. \end{align} Concerning the error term $E_2$, since $\int (\mathrm{d} r- \mathrm{d} r^\varphi) (t)\, |u_f(x,t)|^2 =0$ we have \[ E_2 = \int\! (\mathrm{d} r- \mathrm{d} r^\varphi) (t)\, \big(|u_g^\varphi(y,t)|^2 - 2 u_f(x,t) \cdot u_g^\varphi (y,t)\big) = E_2^{(1)} +E_2^{(2)}\,, \] with \[ \begin{split} E_2^{(1)} & = \int\! \mathrm{d} r(t) \int\!\mathrm{d} \tilde y\, \varphi(y-\tilde y) \big(|u_g^\varphi(y,t)|^2 - |u_g^\varphi(\tilde y,t)|^2\big)\,, \\ E_2^{(2)} & = - 2 \int\! \mathrm{d} r(t) \int\!\mathrm{d} \tilde y\, \varphi(y-\tilde y) u_f(x,t) \cdot \big(u_g^\varphi(y,t) - u_g^\varphi(\tilde y,t) \big)\,. \end{split} \] The assumptions Eqs.~\eqref{eq:f0} and \eqref{grad} are the hypotheses of \cite[Lemma 4.1]{BHP}, which in particular states \[ \mc N_q(|\nabla_x g(t)|) \le C\,. \] This implies (see the proof of the same lemma) \begin{equation} \label{gradstim} |\nabla_x \varrho_g| + |D_x u_g| + |\nabla_x T_g| \le C\,. \end{equation} Therefore, using also Eqs.~\eqref{eq:utf} and \eqref{eq:utK}, \[ \big|E_2^{(1)} \big| + \big|E_2^{(2)} \big| \le C \int\! \mathrm{d} r(t) \int\! \mathrm{d} \tilde y\, \varphi(y -\tilde y ) |y-\tilde y| \le C \varepsilon\,. \] In conclusion, from Eq.~\eqref{est:T1} and the above estimate, \begin{equation} \label{S1} S_1 \le \int\! \mathrm{d} R(t)\, |v-w|^2 + C\varepsilon \le I(t) + C\varepsilon \,. \end{equation} \medskip \noindent \textit{Upper bound on $S_2$.} We proceed analogously by setting \[ S_2=\bar S_2+E_3\,, \] where \[ \bar S_2 = \int\! \mathrm{d} r^\varphi(t)\, d \Big(\sqrt {T_f(x,t)} - \sqrt {T_g^\varphi (y,t)}\Big)^2 \] and, arguing as done for $E_2$, \[ \begin{split} E_3 & = \int\! (\mathrm{d} r- \mathrm{d} r^\varphi) (t)\, \Big(T_g^\varphi(y,t) - 2 \sqrt{T_f(x,t)}\sqrt{T_g^\varphi(y,t)}\Big) \\ & = \int\! \mathrm{d} r(t) \int\!\mathrm{d} \tilde y\, \varphi(y-\tilde y) \big(T_g^\varphi(y,t) - T_g^\varphi(\tilde y,t)\big) \\ & \quad - 2 \int\! \mathrm{d} r(t) \int\!\mathrm{d} \tilde y\, \varphi(y-\tilde y) \sqrt{T_f(x,t)}\Big(\sqrt{T_g^\varphi(y,t)} - \sqrt{T_g^\varphi(\tilde y,t)}\Big)\,. \end{split} \] From Eqs.~\eqref{gradstim} and \eqref{eq:TA}, the error term $E_3$ satisfies \begin{equation} \label{E3} |E_3| \le C \int\! \mathrm{d} r(t) \int\! \mathrm{d} \tilde y\, \varphi(y -\tilde y ) |y-\tilde y| \le C \varepsilon\,. \end{equation} Concerning $\bar S_2$, we observe that (omitting for brevity the dependence on $(x,t)$ in $u_f(x,t), T_f(x,t)$ and on $(y,t)$ in $u_g^\varphi(y,t), T_g^\varphi(y,t)$) \[ \begin{split} |T_f -T^\varphi_g| & = \frac 1d \bigg| \int\!
\mathrm{d} \Lambda_{x,y} (v,w)\, \Big( |v-u_f|^2 - |w-u^\varphi_g|^2 \Big) \bigg| \\ & = \frac 1d \bigg| \int\! \mathrm{d} \Lambda_{x,y} (v,w)\, \big(v-w + u_g^\varphi - u_f\big) \cdot \big(v- u_f+ w - u_g^\varphi \big) \bigg| \\ & \le \frac 2d \bigg[\int\! \mathrm{d} \Lambda_{x,y} (v,w)\, \big(|v-w|^2 + |u_g^\varphi - u_f|^2\big)\bigg]^{1/2} \\& \qquad \times \bigg[\int\! \mathrm{d} \Lambda_{x,y} (v,w)\, \big(|v - u_f|^2 + |w - u_g^\varphi|^2\big)\bigg]^{1/2} \\ & \le C \bigg[\int\! \mathrm{d} \Lambda_{x,y} (v,w)\, \big(|v-w|^2 + |u_g^\varphi - u_f|^2\big)\bigg]^{1/2} \big( \sqrt {T_f} + \sqrt {T^\varphi_g} \big)\,, \end{split} \] where in the first and last steps we used \eqref{rc}. Therefore, \[ \begin{split} \bar S_2 & = \int\! \mathrm{d} r^\varphi(t)\, d \bigg(\frac{T_f(x,t) -T_g^\varphi (y,t)}{\sqrt {T_f(x,t)} + \sqrt {T_g^\varphi (y,t)}}\bigg)^2 \\ & \le C \int\! \mathrm{d} r^\varphi(t) \int\! \mathrm{d} \Lambda_{x,y} (v,w)\, \big(|v-w|^2 + |u_g^\varphi(y,t) - u_f(x,t)|^2\big) \\ & = C \int\! \mathrm{d} R(t)\, |v-w|^2 + C \int\! \mathrm{d} R(t)\, |u_g^\varphi(y,t) - u_f(x,t)|^2 \\ & = C \int\! \mathrm{d} R(t)\, |v-w|^2 + C S_1\,. \end{split} \] Therefore, from Eq.~\eqref{E3} and the above estimate, \begin{equation} \label{S2} S_2 \le C I(t) + C S_1 + C\varepsilon\,. \end{equation} \smallskip From Eqs.~\eqref{W<I}, \eqref{I<}, \eqref{S1}, \eqref{S2}, and Gr\"{o}nwall's inequality, Eq.~\eqref{conveps} follows and Theorem \ref{teo:lim} is proved. \end{proof} \begin{proof}[Proof of Theorem \ref{teo:main1}] We have \[ \begin{split} \mc W_2 ( f^N_j(t), f(t) ^{\otimes j} ) & \leq \mc W_2 ( f^N_j(t), g(t)^{\otimes j} )+ \mc W_2(g(t)^{\otimes j}, f(t) ^{\otimes j}) \\ & \le C \sqrt j \bigg(\frac 1{N^{1/8}} \exp \frac{C(\varphi_0^3 + \|\nabla\varphi\|_\infty^2)}{r^{5d}\varphi_0^2}+ \sqrt\varepsilon \bigg), \end{split} \] after using Theorem \ref{teo:lim}, Theorem \ref{teo:main}, and inserting the expression of $\Gamma_\varphi$ given in Eq.~\eqref{N0Gammaphi}. Since $ \|\nabla \varphi \|_\infty \le C\varepsilon^{-(d+1)}$, $ \varphi_0 \le C\varepsilon^{-d}$, $r>C\varepsilon$ (in order to satisfy Eq.~\eqref{r0}), and $N>N_\varphi$, the limit Eq.~\eqref{conv} follows provided $\varepsilon$ vanishes sufficiently slowly as $N$ diverges (e.g., $\varepsilon=(\log N)^{-\mu}$ with $\mu$ sufficiently small). The theorem is thus proved. \end{proof}
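\medskip \noindent \textit{A numerical aside.} For illustration only (the following sketch is ours and plays no role in the proofs), the coupling characterization of $\mc W_2$ recalled in Remark \ref{rem:wass} becomes explicit in one dimension: for two empirical measures with the same number of atoms, the optimal coupling pairs the sorted samples. A minimal Python check with made-up Gaussian samples:
\begin{verbatim}
import numpy as np

def w2_empirical_1d(x, y):
    # W_2 between (1/N) sum_i delta_{x_i} and (1/N) sum_i delta_{y_i}:
    # in 1-D the optimal coupling matches the order statistics.
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=10_000)
b = rng.normal(0.5, 1.0, size=10_000)
# ~0.5: for N(0,1) vs N(0.5,1) the distance reduces to the mean shift
print(w2_empirical_1d(a, b))
\end{verbatim}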
\section{Introduction} \label{intro} Patient clinical records typically contain longitudinal data about patients' health status, diseases, conducted tests and response to treatments. Analysing such information can prove of immense value not only for clinical practice, but also for the organisation and management of healthcare services. \textit{Concept extraction} (CE) aims to identify mentions of medical concepts such as problems, tests and treatments in clinical records (e.g., discharge summaries and progress reports) and classify them into predefined categories. The concepts in clinical records are often expressed with unstructured, ``free'' text, making their automatic extraction a challenging task for clinical Natural Language Processing (NLP) systems. Traditional approaches have extensively relied on rule-based systems and lexicons to recognise the concepts of interest. Typically, the concepts represent drug names, anatomical nomenclature and other specialized names and phrases which are not part of everyday vocabularies. For instance, ``resp status'' should be interpreted as ``response status''. Such use of abbreviated phrases and acronyms is very common within the medical community, with many abbreviations having a specific meaning that differs from that in other lexicons. Dictionary-based systems perform concept extraction by looking up terms in medical ontologies such as the Unified Medical Language System (UMLS)~\cite{kipper2008system}. Intrinsically, dictionary- and rule-based systems are laborious to implement and inflexible to new cases and misspellings~\cite{liu2015drug}. Although these systems can achieve high precision, they tend to suffer from low recall (i.e., they may miss a significant number of concepts). To overcome these limitations, various machine learning approaches have been proposed (e.g., conditional random fields (CRFs), maximum-entropy classifiers and support vector machines) to simultaneously exploit the textual and contextual information while reducing the reliance on lexicon lookup~\cite{lafferty2001conditional,berger1996maximum,joachims1998text}. State-of-the-art machine learning approaches usually follow a two-step process of \textit{feature engineering} and \textit{classification}. The feature engineering task is, in its own right, very laborious and demanding of expert knowledge, and it can become the bottleneck of the overall approach. For this reason, this paper proposes a highly streamlined alternative: to employ a contemporary neural network, the bidirectional LSTM-CRF, initialized with general-purpose, off-the-shelf word embeddings such as GloVe~\cite{Pennington:14} and Word2Vec~\cite{Mikolov:13}. The experimental results over the authoritative 2010 i2b2/VA benchmark show that the proposed approach outperforms all recent approaches and ranks close to the best in the literature. \begin{table*}[] \centering \scalebox{0.95}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \bf Sentence & \textit{His}& \textit{HCT} & \textit{had}& \textit{dropped} & \textit{from} &\textit{36.7} &\textit{despite} &\textit{2U} &\textit{PRBC} &\textit{and} \\ \hline \textbf{Concept class}& \textit{O}& \textit{B-test}& \textit{O} & \textit{O}& \textit{O} & \textit{O} & \textit{ O} & \textit{B-treatment} & \textit{I-treatment} & \textit{O} \\ \hline \end{tabular}} \caption{Example sentence in a concept extraction task. 
The concept classes are represented in the standard in/out/begin (IOB) format.} \label{table1} \end{table*} \section{Related Work} \label{relatedworks} Most of the research to date has framed CE as a specialized case of named-entity recognition (NER) and employed a number of supervised and semi-supervised machine learning algorithms with domain-dependent attributes and text features~\cite{uzuner20112010}. Hybrid models obtained by cascading CRF and SVM classifiers along with several pattern-matching rules have been shown to produce effective results~\cite{boagcliner}. Moreover, \cite{fu2014improving} have given evidence of the importance of including preprocessing steps such as truecasing and annotation combination. The system that has reported the highest accuracy on the 2010 i2b2/VA concept extraction benchmark is based on unsupervised feature representations obtained by Brown clustering and a hidden semi-Markov model as classifier~\cite{de2011machine}. However, the use of a ``hard'' clustering technique such as Brown clustering is not suitable for capturing multiple relations between the words and the concepts. For this reason, Jonnalagadda et al. \cite{jonnalagadda2012enhancing} demonstrated that a random indexing model with distributed word representations can improve clinical concept extraction. Moreover, Wu et al. \cite{wu2015study} have jointly used word embeddings derived from the entire English Wikipedia \cite{collobert2011natural} and binarized word embeddings derived from domain-specific corpora (e.g., the MIMIC-II corpus \cite{MIMIC2}). In the broader field of machine learning, the recent years have witnessed a proliferation of deep neural networks, with outstanding results in tasks as diverse as visual, speech and named-entity recognition~\cite{hinton2012deep,krizhevsky2012imagenet,lample2016neural}. One of the main advantages of neural networks over traditional approaches is that they can learn the feature representations automatically from the data, thus avoiding the expensive feature-engineering stage. Given the promising performance of deep neural networks and the recent success of unsupervised word embeddings in general NLP tasks \cite{Pennington:14,Mikolov:13,lebret2013word}, this paper sets out to explore the use of a state-of-the-art deep sequential model initialized with general-purpose word embeddings for a task of clinical concept extraction. \section{The Proposed Approach} CE can be formulated as a joint segmentation and classification task over a predefined set of classes. As an example, consider the input sentence provided in Table~\ref{table1}. The notation follows the widely adopted in/out/begin (IOB) entity representation with, in this instance, \textit{HCT} as the test and \textit{2U PRBC} as the treatment. In this paper, we approach the CE task with the bidirectional LSTM-CRF framework, where each word in the input sentence is first mapped to either a random vector or a vector from a word embedding. We therefore provide a brief description of both word embeddings and the model hereafter. Word embeddings are vector representations of natural language words that aim to preserve the semantic and syntactic similarities between them. The vector representations can be generated by either count-based approaches such as Hellinger-PCA~\cite{lebret2013word} or trained models such as Word2Vec (including skip-grams and continuous-bag-of-words) and GloVe trained over large, unsupervised corpora of general-nature documents; a minimal lookup sketch follows below.
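For illustration only (this sketch is ours, with a hypothetical file name, and is not the code released with this paper), pre-trained GloVe-style vectors can be loaded into a lookup table and used to map a tokenized sentence to its vector sequence; tokens missing from the table are initialized at random in $[-1, 1]$, mirroring the treatment of out-of-vocabulary tokens in our experiments:
\begin{verbatim}
import numpy as np

def load_embeddings(path, dim):
    # Parse a GloVe-style text file: a token followed by dim floats per line.
    table = {}
    with open(path, encoding="utf8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                table[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return table

def embed_sentence(tokens, table, dim, rng):
    # Out-of-vocabulary tokens (e.g., "HCT") get a random vector in [-1, 1]^dim.
    for tok in tokens:
        if tok not in table:
            table[tok] = rng.uniform(-1.0, 1.0, dim).astype(np.float32)
    return np.stack([table[tok] for tok in tokens])  # (sentence length, dim)

rng = np.random.default_rng(0)
table = {}  # in practice: load_embeddings("glove.6B.100d.txt", 100)
x = embed_sentence("His HCT had dropped from 36.7".split(), table, 100, rng)
print(x.shape)  # (6, 100)
\end{verbatim}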
In its embedded representation, each word in a text is represented by a real-valued vector, $x$, of arbitrary dimensionality, $d$. Recurrent neural networks (RNNs) are a family of neural networks that operate on sequential data. They take as input a sequence of vectors $(x_1,x_2,\ldots,x_n)$ and output a sequence of class posterior probabilities, $(y_1,y_2,\ldots,y_n)$. An intermediate layer of hidden nodes, $(h_1,h_2,\ldots,h_n)$, is also part of the model. In an RNN, the value of the hidden node at time $t$, $h_{t}$, depends on both the current input, $x_t$, and the previous hidden node, $h_{t-1}$. This recurrent connection from the past timeframe enables a form of short-term memory and makes the RNNs suitable for the prediction of sequences. Formally, the value of a hidden node is described as: \begin{equation} h_t = f(U \bullet x_t + V \bullet h_{t-1} ) \end{equation} \vspace{0.2cm} \noindent where $U$ and $V$ are trained weight matrices between the input and the hidden layer, and between the past and current hidden layers, respectively. Function $f(\cdot)$ is the sigmoid function, $f(x)=1/(1+e^{-x})$, that adds non-linearity to the layer. Then, $h_t$ is fed into the output layer and multiplied by the output weight matrix, $W$: \begin{equation} \label{eq4} y_t = g(W \bullet h_t), \hspace{0.03in} \text{with} \hspace{0.03in} g(z_{m}) = \frac{e^{z_{m}}}{\sum_{k=1}^{K} e^{z_{k}}} \end{equation} \vspace{0.2cm} The output is thus normalized by the multi-class logistic (softmax) function, $g(\cdot)$, to become a proper probability distribution over the class set. Therefore, the output dimensionality is equal to the number of concept classes. Although an RNN can, in theory, learn long-term dependencies, in practice it tends to be biased towards its most recent inputs. For this reason, the Long Short-Term Memory (LSTM) network incorporates an additional ``gated'' memory cell that can store long-range dependencies~\cite{hochreiter1997long}. In its bidirectional version, the LSTM computes both a forward, $\overrightarrow{h_t}$, and a backward, $\overleftarrow{h_t}$, hidden representation at each timeframe $t$. The final representation is created by concatenating them as $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$. In all these networks, the hidden layer can be regarded as an implicit, learned feature that enables concept prediction. A further improvement to this model is provided by performing joint decoding of the entire input sequence in a Viterbi-style manner using a CRF~\cite{lafferty2001conditional} as the final output layer. The resulting network is commonly referred to as the \textit{bidirectional LSTM-CRF} \cite{lample2016neural}. \section{Experiments} \subsection{Dataset} \label{sec:length} \begin{table*}[] \centering \scalebox{1.1}{ \begin{tabular}{|c|c|c|} \hline & Training set & Test set\\ \hline notes &$170$ &$256$ \\ sentences &$16315$&$27626$\\ \hline problem & $7073$ & $12592$ \\ test& $4608$ & $9225$\\ treatment& $4844$ & $9344$\\ \hline \end{tabular}} \caption{Statistics of the training and test data sets used for the 2010 i2b2/VA concept extraction.} \label{table2} \end{table*} The 2010 i2b2/VA Natural Language Processing Challenges for Clinical Records include a concept extraction task focused on the extraction of medical concepts from patient reports. For the challenge, a total of 394 concept-annotated reports for training, 477 for testing, and 877 unannotated reports were de-identified and released to the participants alongside a data use agreement~\cite{uzuner20112010}. 
However, part of this data set is no longer being distributed due to restrictions later introduced by the Institutional Review Board (IRB). Thus, Table~\ref{table2} summarizes the basic statistics of the training and test data sets that are currently publicly available and that we have used in our experiments. \subsection{Evaluation Methodology} Our models have been blindly evaluated on the 2010 i2b2/VA CE test data using a strict evaluation criterion requiring the predicted concepts to exactly match the annotated concepts in terms of both boundary and class. To facilitate the replication of our experimental results, we have used a publicly-available library for the implementation of the LSTM (i.e., the Theano neural network toolkit \cite{bergstra2010theano}) and we publicly release our code\footnote{https://github.com/raghavchalapathy/Bidirectional-LSTM-CRF-for-Clinical-Concept-Extraction}. We have split the training set into two parts (sized at approximately 70\% and 30\%, respectively), using the first for training and the second for selection of the hyper-parameters (``validation'')~\cite{bergstra2012random}. The hyper-parameters include the embedding dimension, $d$, chosen over $\{50, 100, 300, 500\}$, and two additional parameters, the learning and drop-out rates, that were sampled from a uniform distribution in the range $[0.05, 0.1]$. All weight matrices were randomly initialized from the uniform distribution within range $[-1, 1]$. The word embeddings were either initialized randomly in the same way or fetched from Word2Vec and GloVe~\cite{w2vcode,glovecode}. Approximately $25\%$ of the tokens were alphanumeric, abbreviated or domain-specific strings that were not available as pre-trained embeddings and were always randomly initialized. Early stopping of training was set to $50$ epochs to mitigate over-fitting, and the model that gave the best performance on the validation set was retained. The accuracy is reported in terms of micro-average F$_1$ score computed using the CoNLL score function~\cite{Nadeau:07}. \subsection{Results and Analysis} Table \ref{table3} shows the performance comparison between state-of-the-art CE systems and the proposed bidirectional LSTM-CRF with different initialization strategies. As a first note, the bidirectional LSTM-CRF initialized with GloVe outperforms all recent approaches (2012-2015). On the other hand, the best submission from the 2010 i2b2/VA challenge~\cite{de2011machine} still outperforms our approach. However, based on the description provided in \cite{uzuner20112010}, these results are not directly comparable since the experiments in~\cite{de2011machine,jonnalagadda2012enhancing} used the original dataset, which has a significantly larger number of training samples. Using general-purpose, pre-trained embeddings improves the F$_1$ score by over 5 percentage points over a random initialization. In general, the results achieved with the proposed approach are close to, and in many cases above, the results achieved by systems based on hand-engineered features. 
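To make the strict matching criterion used in our evaluation concrete, the following is a small illustrative Python sketch (ours; it is not the CoNLL evaluation script used to produce the reported scores). Concepts are represented as (start, end, class) triples, and a prediction counts only if both boundary and class match exactly:
\begin{verbatim}
def strict_scores(gold, pred):
    # Micro-averaged precision, recall and F1 under the strict criterion.
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(1, 1, "test"), (7, 8, "treatment")}  # HCT; 2U PRBC (Table 1)
pred = {(1, 1, "test"), (7, 7, "treatment")}  # boundary error on "2U PRBC"
print(strict_scores(gold, pred))              # (0.5, 0.5, 0.5)
\end{verbatim}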
\begin{table*}[] \centering \scalebox{1.1}{ \begin{tabular}{|c|c|c|c|} \hline Methods & Precision & Recall & F$_1$ Score \\ \hline Hidden semi-Markov Model \cite{de2011machine} &$86.88$ &$83.64$ & $85.23$ \\ Distributional Semantics CRF \cite{jonnalagadda2012enhancing} &$85.60$&$82.00$ &$83.70$ \\ \hline Binarized Neural Embedding CRF \cite{wu2015study}&$85.10$&$80.60$ & $82.80$ \\ CliNER \cite{boagcliner}&$79.50$&$81.20$ & $80.00$ \\ Truecasing CRFSuite \cite{fu2014improving}&$80.83$&$71.47$ & $75.86$ \\ \hline Random - Bidirectional LSTM-CRF &$81.06$ &$75.40$ & $78.13$ \\ Word2Vec - Bidirectional LSTM-CRF &$82.61$ &$80.03$ & $81.30$ \\ GloVe - Bidirectional LSTM-CRF &$84.36$ &$83.41$ & $83.88$ \\ \hline \end{tabular}} \caption{Performance comparison between the bidirectional LSTM-CRF (bottom three lines) and state-of-the-art systems (top five lines) over the 2010 i2b2/VA concept extraction task.} \label{table3} \end{table*} \section*{Conclusion} This paper has explored the effectiveness of the contemporary bidirectional LSTM-CRF for clinical concept extraction. The most appealing feature of this approach is its ability to provide end-to-end recognition using general-purpose, off-the-shelf word embeddings, thus sparing the effort of time-consuming feature construction. The experimental results over the authoritative 2010 i2b2/VA reference corpora look promising, with the bidirectional LSTM-CRF outperforming all recent approaches and ranking close to the best submission from the original 2010 i2b2/VA challenge. A potential way to further improve its performance would be to explore the use of unsupervised word embeddings trained from domain-specific resources such as the MIMIC-III corpora \cite{MIMIC}. \bibliographystyle{acl}
\section{Introduction} Sparsifying transforms \cite{SparseRep} allow efficient representation of data when a data-dependent overcomplete dictionary is available. Overcomplete dictionaries are useful in image processing \cite{DictImageDenosing, DictAstronomicalImageDenoising, DictDeblurr}, speech processing \cite{Speech1} and wireless communications \cite{Ayach_TWC14, OurICC}. Unfortunately, the selection of a sparsifying transform involves solving a non-convex optimization problem for a dictionary matrix $\mathbf{D}$ such that a real data set can be represented with a sparse representation matrix $\mathbf{X}$ whose sparsity level is constrained. Because direct solution of the optimization problem is difficult \cite{SparseIsNPHard, DictLearningIsNPHard}, proposed algorithms seek a suboptimal solution via alternating minimization. Most prior work considers alternating minimization for dictionaries that are overcomplete. Algorithms like the method of optimal directions (MOD) \cite{MOD}, K-SVD \cite{KSVD} and algorithms based on direct optimization \cite{DirectDictionary} all perform alternating updates, differing in the ways they actually perform the update of the dictionary and of the sparse representations. Unfortunately, most general solutions to the dictionary learning problem are relatively slow in computing the dictionary and they lack a formal analysis of performance. Some of these difficulties stem from the fact that the proposed algorithms produce non-orthonormal, even overcomplete, dictionaries. Furthermore, overcomplete transforms themselves present some disadvantages when compared to the classical, fixed and fast, transforms. In any application, one general drawback of these computed dictionaries is that they need to be stored (or transmitted) along with the encoded/compressed data. Another, and more important, drawback is that representing vectors in a non-orthonormal or overcomplete dictionary involves a non-linear, computationally expensive, procedure \cite{GreedIsGood, JustRelax}. Fast transforms allow more efficient application of the dictionary to compute the sparse representation. For example, the discrete cosine, Fourier, Hadamard or wavelet transforms all have computationally efficient implementations, e.g., $O(n\log n)$ computational complexity \cite{FFT}. These fast transforms are widely used in signal and image processing but unfortunately are not the best sparsifying transforms in every situation. Recent work has devised fast sparsifying dictionaries that are built from fast transforms. For example, one of the first proposed algorithms, called sparse K-SVD \cite{DoubleSparsity}, considers constructing a dictionary by using sparse linear combinations of the components of a fast transform. These dictionaries are efficient to apply since a linear combination of just a few components (which themselves are computed fast) can be done efficiently. The second, more recent, approach \cite{EfficientDictionaries} considers factorizing the dictionary as a product of a few very sparse matrices that can be easily manipulated. This is in the spirit of several fixed sparsifying transforms that have this property, like the aforementioned Hadamard case, which enjoys a factorization as a product of sparse matrices. Other approaches, like the one in \cite{DictCirculant}, treat each atom of the dictionary as the composition of several circular convolutions so that the overall dictionary can be manipulated quickly, via Fourier transforms. 
The approach in \cite{Rusu2013} is to construct an overcomplete dictionary from concatenations of several orthonormal sub-dictionaries and partition the sparse representations such that they belong exclusively to only one sub-dictionary. Tree structures have been used to quickly construct sparse approximations \cite{QuickSparse}. The learning algorithms proposed in \cite{DoubleSparsity}--\cite{QuickSparse} are slow in general, lack performance analysis or guarantees and usually involve relatively complex algorithms and extra data structures for the description of the dictionary. The approach in \cite{ShiftInvariantDictLearning} provides a fast procedure for learning circulant dictionaries but, unfortunately, these dictionaries are not a general solution due to their low number of degrees of freedom. In this paper we develop algorithms for finding orthonormal dictionaries that can be applied, directly and inversely, faster than unconstrained, general orthonormal dictionaries. We reduce the computational complexity of manipulating the dictionaries by considering that they are products of only a few Householder reflectors \cite{Golub1996}. While any orthonormal dictionary of size $n \times n$ can be factorized into $n$ reflectors, in this paper we use $m \ll n$ reflectors in the structure of the dictionary. This way, by applying the reflectors sequentially, low complexity dictionary manipulation is achieved. We choose to use Householder reflectors as the building blocks of our dictionaries since they enjoy low complexity manipulation, e.g., the reflector-vector product is computed in $O(n)$. By using fewer reflectors than needed, our algorithms cannot explore the entire space of orthonormal dictionaries, but rather only a subset of it. The main advantage though is the low complexity manipulation of the dictionaries designed this way. In general, an open question is whether all orthonormal and Hessian matrices can be well represented and approximated with low complexity \cite{FastApproximations} (factored into $(1/2) n \log n$ Givens rotations). In this paper, we propose two algorithms that compute the coefficients of the Householder reflectors. The first approach builds an orthonormal dictionary composed of just a few reflectors by updating all the coefficients of each reflector sequentially, keeping the other ones fixed. The main advantages of this approach are: (i) each reflector update is done efficiently by solving an eigenvalue problem and (ii) the overall performance of this method approaches the performance of general orthonormal dictionary learning when the number of reflectors increases. Since each reflector is updated individually, this approach is relatively slow due to the large number of matrix manipulations that need to be performed. A natural question is whether it is possible to decouple the problem such that all reflectors can be updated simultaneously. This idea, which is realized by adding an additional orthogonality constraint on the reflector coefficients, is at the core of the second proposed method. The main benefit of this second approach is that it outperforms the first in terms of running time due to the fewer manipulations required, but is slightly inferior in terms of representation quality. Additionally, for this second approach, we are able to perform a detailed performance analysis. While the dictionaries designed by both proposed methods enjoy fast (controllable computational complexity) manipulation, the first, slower, approach provides better representation results. 
We compare the proposed algorithms in image processing applications, a classical scenario for the evaluation/comparison of sparsifying transforms. We show that the proposed methods cover the full performance range of computational complexity and representation error. By adjusting the number of reflectors in the transform, we can construct anything from dictionaries as fast as the well-known, fixed bases used in image compression, with similar representation performance, to slower dictionaries whose representation errors match those of general orthonormal dictionaries. We provide insight into ways of choosing the number of reflectors, thus allowing full flexibility in the proposed solutions. Furthermore, we show that in our experimental runs we are always able to construct a fast dictionary that matches the performance of the general orthonormal dictionary with a relatively low number of reflectors. Based on these results we conclude that the proposed algorithms are well suited to produce solutions that balance the computational complexity and representation quality of learned dictionaries. The paper is organized as follows. Section II reviews the concept of orthonormal dictionary learning, Section III presents the proposed algorithms, Section IV provides performance insights into the proposed methods, while Section V experimentally shows their effectiveness. \section{General orthonormal dictionary learning} In this section, we review prior work on learning general orthonormal dictionaries and provide some new insights. The objective is to describe the mathematical foundations of the dictionary learning problem, introduce the main notation, formulation and previously proposed solutions. Given a real dataset $\mathbf{Y} \in \mathbb{R}^{n \times N}$ and sparsity level $s$, the orthonormal dictionary learning algorithm (which we will call Q--DLA) \cite{OrthoDictionary} is formulated as: \begin{equation} \begin{aligned} & \underset{\mathbf{Q},\ \mathbf{X}; \ \mathbf{QQ}^T = \mathbf{Q}^T\mathbf{Q} = \mathbf{I}}{\text{minimize}} & & \|\mathbf{Y}-\mathbf{QX}\|_F^2 \\ & \text{\ \ \ \ \ \ subject to} & & \|\mathbf{x}_i\|_{0} \leq s,\ 1 \leq i \leq N, \end{aligned} \label{eq:dictionaryOrtho} \end{equation} where the objective function describes the representation error achieved by the orthonormal dictionary $\mathbf{Q} \in \mathbb{R}^{n \times n}$ with the sparse representations $\mathbf{X} \in \mathbb{R}^{n \times N}$ whose columns are subject to the $\ell_0$ pseudo-norm $\|\mathbf{x}_i\|_{0}$ (the number of non-zero elements of column $\mathbf{x}_i$). To avoid trivial solutions, the dimensions obey $s \ll n \ll N$. The problem described in \eqref{eq:dictionaryOrtho} has been extensively studied and used in many applications, especially in image processing for compression \cite{SOT, ICIPConf, EMAlgorithm}. Optimizations similar to \eqref{eq:dictionaryOrtho} have been proposed in the past to learn incoherent dictionaries \cite{IncoherentDictionary} or to build initial dictionaries for the general dictionary learning problem \cite{DictInit}. The solution to \eqref{eq:dictionaryOrtho} proposed in \cite{OrthoDictionary} alternates between computing $\mathbf{X}$ and $\mathbf{Q}$ with one of them fixed, just like in the general dictionary learning case \cite{MOD}. We detail the steps next. 
Since the dictionary $\mathbf{Q}$ is orthonormal, the sparse representation step reduces to $\mathbf{X} = \mathcal{T}_s(\mathbf{Q}^T\mathbf{Y})$, where $\mathcal{T}_s()$ is an operator that, given an input vector, zeros all entries except the largest $s$ in magnitude and, given an input matrix, applies the same operation columnwise. To select the largest entries, per signal, a fast partial sorting algorithm \cite{PartialSort} can be used whose complexity is only $O(n)$. To solve \eqref{eq:dictionaryOrtho} for variable $\mathbf{Q}$ and fixed $\mathbf{X}$, a problem also known as the orthonormal Procrustes problem \cite{Proc}, a closed form solution $\mathbf{Q} = \mathbf{UV}^T$ is given by the singular value decomposition (SVD) of $\mathbf{YX}^T = \mathbf{U\Sigma V}^T$. Notice that with the representations $\mathbf{X}$ fixed, the reduction in the objective function of \eqref{eq:dictionaryOrtho} achieved by a general orthonormal dictionary $\mathbf{Q}$ is given by: \begin{equation} \begin{aligned} & \| \mathbf{Y} - \mathbf{QX} \|_F^2 = \| \mathbf{Y} \|_F^2 + \| \mathbf{X} \|_F^2 + C, \\ &\text{with } C = - 2\text{tr}(\mathbf{Q}^T\mathbf{YX}^T). \end{aligned} \label{eq:costDetailed3} \end{equation} Developing further, we reach \begin{equation} \text{tr}(\mathbf{Q}^T\mathbf{YX}^T) = \text{tr}(\mathbf{V}\mathbf{U}^T \mathbf{U} \mathbf{\Sigma V}^T) = \text{tr}(\mathbf{\Sigma}) = \| \mathbf{YX}^T \|_*. \label{eq:theobjofortho} \end{equation} Thus, the reduction in the objective function is $2 \| \mathbf{YX}^T \|_*$. This shows that when considering orthonormal dictionaries, the learning problem can be seen as a nuclear norm maximization with sparsity constraints (and with $\| \mathbf{X} \|_F^2 \leq \| \mathbf{Y} \|_F^2$ to avoid trivial unbounded solutions). Also, notice that at the optimum we have the symmetric positive semidefinite matrices \begin{equation} \mathbf{Q}^T\mathbf{YX}^T = \mathbf{V} \mathbf{\Sigma V}^T, \ \mathbf{QXY}^T = \mathbf{U} \mathbf{\Sigma U}^T. \label{eq:symmetry} \end{equation} The two are identical since $\mathbf{Q}^T (\mathbf{QXY}^T) \mathbf{Q} = \mathbf{XY}^T\mathbf{Q} = (\mathbf{Q}^T\mathbf{YX}^T)^T = \mathbf{Q}^T\mathbf{YX}^T$, i.e., $\mathbf{V} = \mathbf{Q}^T \mathbf{U}$. \noindent\textbf{Remark 1.} A positive semidefinite condition for the symmetric $\mathbf{X} (\mathbf{Q}^T\mathbf{Y})^T$, based on the Gershgorin disk theorem, can be stated. Starting from the positive semidefinite condition \eqref{eq:symmetry}, the focus falls on the spectral properties of the symmetric $\mathbf{R} = \mathbf{X} (\mathbf{Q}^T\mathbf{Y})^T = \mathcal{T}_s(\mathbf{Q}^T\mathbf{Y}) (\mathbf{Q}^T\mathbf{Y})^T$. The diagonal elements of this matrix are positive since they are the squared $\ell_2$ norms of the rows of $\mathcal{T}_s(\mathbf{Q}^T\mathbf{Y})$ and moreover they have relatively large magnitude since the sparse representation step keeps only the largest $s$ entries $\left(\text{in fact }\text{tr}(\mathbf{R}) = \| \mathcal{T}_s( \mathbf{Q}^T \mathbf{Y}) \|_F^2 = \| \mathbf{X} \|_F^2\right)$. Therefore, we can assume that $\mathbf{R}$ is diagonally dominant. We also assume that we eliminate zero rows or rows with very few non-zero entries from $\mathbf{R}$, which corresponds to having atoms in the dictionary that are never/rarely used in the representations. 
To be more precise, let us denote by $\mathbf{\phi}_i^T$ the $i^\text{th}$ row of $\mathbf{Q}^T\mathbf{Y}$ and by $\mathbf{\psi}_j^T$ the $j^\text{th}$ row of $\mathcal{T}_s(\mathbf{Q}^T\mathbf{Y})$. Then $R_{jj} = \mathbf{\psi}_j^T \mathbf{\phi}_j = \mathbf{\psi}_j^T \mathbf{\psi}_j$ and $R_{ij} = \mathbf{\psi}_i^T \mathbf{\phi}_j$ and, by Gershgorin's disk theorem, sufficient conditions for a positive semidefinite $\mathbf{R}$ are: \begin{equation} \mathbf{\psi}_j^T \mathbf{\psi}_j \geq \sum_{i=1, i \neq j}^n | \mathbf{\psi}_j^T \mathbf{\phi}_i|, \label{eq:positivedef} \end{equation} for $j = 1,\dots,n$; since $\sum_{i \neq j} | \mathbf{\psi}_j^T \mathbf{\phi}_i| \leq (n-1)\mu$, where $\mu = \max_{i\neq j} | \mathbf{\psi}_j^T \mathbf{\phi}_i|$, a simpler sufficient condition is $\mathbf{\psi}_j^T \mathbf{\psi}_j \geq (n-1)\mu$. The result states that if rows of $\mathbf{Q}^T\mathbf{Y}$ are weakly correlated with the rows of $\mathcal{T}_s(\mathbf{Q}^T\mathbf{Y})$, except for the rows with the same indices, then the pair $(\mathbf{Q}, \mathbf{X})$ is a local minimum of the orthonormal dictionary learning problem.$\hfill \blacksquare$ \noindent \textbf{Remark 2.} Given a dataset $\mathbf{Y}$ and its factorization in a general dictionary $\mathbf{D}$ with sparse representations $\mathbf{X}$, there is no orthonormal transformation $\mathbf{Q}$ such that $\mathbf{QD}$ achieves better representation than $\mathbf{D}$ if $\mathbf{Y}\mathbf{X}^T\mathbf{D}^T$ is symmetric positive semidefinite. \noindent\textit{Proof.} Consider the Procrustes optimization problem in variable $\mathbf{Q}$: \begin{equation} \underset{\mathbf{Q}; \ \mathbf{QQ}^T = \mathbf{Q}^T\mathbf{Q} = \mathbf{I}}{\text{minimize}} \| \mathbf{Y} - \mathbf{QDX} \|_F^2, \end{equation} and notice that the minimizer is $\mathbf{Q} = \mathbf{U U}^T = \mathbf{I}$ given that $\mathbf{Y}\mathbf{X}^T\mathbf{D}^T = \mathbf{U\Sigma U}^T$ is symmetric positive semidefinite.$\hfill \blacksquare$ \noindent\textbf{Remark 3.} A necessary condition that a general orthonormal dictionary $\mathbf{Q}$ with representations $\mathbf{X}$ is a local minimum of the dictionary learning problem is that $\| \mathbf{X} \|_F^2 = \| \mathbf{YX}^T \|_*$. For a general overcomplete dictionary $\mathbf{D}$ with representations $\mathbf{X}$, the necessary condition reads $\text{tr}(\mathbf{Y}\mathbf{X}^T\mathbf{D}^T) = \| \mathbf{Y}\mathbf{X}^T\mathbf{D}^T \|_*$. \noindent\textit{Proof.} With the optimum choice of $\mathbf{Q}$ from the Procrustes result, $\mathbf{QXY}^T$ and $\mathbf{Q}^T\mathbf{YX}^T$ are symmetric by \eqref{eq:symmetry}. By matching the objective function value from \eqref{eq:costDetailed3} with the performance of the orthonormal dictionary from \eqref{eq:theobjofortho}, for fixed $\mathbf{X}$ and $\mathbf{Y}$ there is no general orthonormal dictionary $\mathbf{Q}$ that provides better representation performance than the identity dictionary $\mathbf{I}$ if \begin{equation} \text{tr}(\mathbf{YX}^T) = \| \mathbf{YX}^T \|_*, \label{eq:localoptimumU} \end{equation} which holds for example whenever $\mathbf{YX}^T$ is normal (orthogonal, symmetric or skew-symmetric in general) and positive semidefinite -- notice that orthogonal and positive definite just mean that the solution to the Procrustes problem is $\mathbf{Q} = \mathbf{I}$ because $\mathbf{YX}^T = \mathbf{I}$. In general, following the same reasoning, we know from \eqref{eq:localoptimumU} that general orthonormal $\mathbf{Q}$ with representations $\mathbf{X} = \mathcal{T}_s(\mathbf{Q}^T\mathbf{Y})$ is a local minimum when \begin{equation} \text{tr}(\mathbf{Y}\mathbf{X}^T\mathbf{Q}^T) = \| \mathbf{Y}\mathbf{X}^T\mathbf{Q}^T \|_*. 
\label{eq:theequation} \end{equation} Finally, using the fact that $\| \mathbf{Y}\mathbf{X}^T\mathbf{Q}^T \|_* = \| \mathbf{Y}\mathbf{X}^T \|_*$ and that $\text{tr}(\mathbf{Y}\mathbf{X}^T\mathbf{Q}^T) = \text{tr}(\mathbf{Q}^T\mathbf{Y}\mathbf{X}^T) = \| \mathbf{X} \|_F^2$ we reach \begin{equation} \| \mathbf{X} \|_F^2 = \| \mathbf{Y}\mathbf{X}^T \|_*, \end{equation} and therefore the objective function in \eqref{eq:costDetailed3} takes the value $\| \mathbf{Y} \|_F^2 - \| \mathbf{X} \|_F^2$. Equation \eqref{eq:theequation} is also a necessary condition for the local optimality of a general overcomplete dictionary $\mathbf{D}$ with representations $\mathbf{X}$, i.e., $\text{tr}(\mathbf{Y}\mathbf{X}^T\mathbf{D}^T) = \| \mathbf{Y}\mathbf{X}^T\mathbf{D}^T \|_*$, meaning that there is no orthonormal transformation that improves the representation performance of $\mathbf{D}$.$\hfill \blacksquare$ Previous work in the literature deals with the description of local minima $(\mathbf{D}, \mathbf{X})$ of general dictionary learning schemes \cite{DictionaryIdentification, DictionaryIdentification2}, while other work is concerned with the sample complexity of recovering a dictionary \cite{Spielman12-pp, SampleComplexity, ProvableDictionaryLearning, AlternatingMinimization} under various statistical assumptions and dictionary dimensions. The general analysis in \cite{SampleOfFactorizations} provides sample complexity estimates to control how much the empirical average deviates from the expected objective functions of matrix factorization problems. As with any alternating minimization solution, the initialization procedure plays an important role. For Q--DLA, our experimental findings show that a very good initial point is the orthonormal basis $\mathbf{Q}$ created from the SVD of the dataset: $\mathbf{Y} = \mathbf{Q \Sigma V}^T$. This choice is also intuitive \cite{DictInit}. A full factorization of $\mathbf{Y}$ is not necessary since we are interested only in the basis $\mathbf{Q}$. As such, a reduced or so-called economy size SVD can be performed. Still, depending on the size of the dataset $N$, this step can become expensive in terms of running time. In this paper we propose to approximate $\mathbf{Q}$ with a new orthonormal basis $\mathbf{\bar{Q}}$ obtained by: \begin{enumerate} \item Approximate the first $\bar{n} \ll n$ principal components by using iterative methods \cite{Arnoldi}. \item Complete the partial structure with random components to obtain the full basis. Finalize by QR orthogonalization to get $\mathbf{\bar{Q}}$. \end{enumerate} This initialization works well because typically the lowest singular values of a dataset consisting of real world data have low magnitude. There are several limitations associated with conventional orthonormal dictionaries. Although the sparse representation step is fast when using an orthonormal dictionary, i.e., no matching pursuit \cite{GreedIsGood} or basis pursuit \cite{JustRelax} is necessary and only correlations need to be computed, the representation performance is inferior to that of general dictionaries while the computational complexity is comparable to these dictionaries. For this reason we now move to explore transform structures that allow for a computationally cheaper orthonormal dictionary without destroying the sparsifying properties. \section{A Householder approach to orthonormal dictionary learning} In this section, we describe our new approach for dictionary learning based on Householder reflectors. 
We use the same alternating optimization procedure generally used for dictionary learning and described in Section II. Since we are using orthonormal dictionaries, the sparse approximation step is the same, and thus the focus falls on the dictionary update step, which is detailed in this section. Therefore, we start by analyzing the properties of Householder reflectors and then introduce two dictionary learning procedures that build orthonormal dictionaries directly factorized into a product of reflectors. We finish the discussion with some considerations on the initialization of the proposed methods. \subsection{Householder reflectors for dictionary learning} Let $\mathbf{u}_1 \in \mathbb{R}^n$ be a normalized vector, i.e., $\| \mathbf{u}_1 \|_2 = 1$. We define the orthonormal symmetric Householder reflector $\mathbf{U}_1 \in \mathbb{R}^{n \times n}$ as \begin{equation} \mathbf{U}_1 = \mathbf{I} - 2\mathbf{u}_1\mathbf{u}_1^T. \label{eq:Reflector} \end{equation} The reflector $\mathbf{U}_1$ is completely defined by the vector $\mathbf{u}_1$ and as such the two may be used interchangeably to refer to the reflector. Given a Householder reflector $\mathbf{U}_1 \in \mathbb{R}^{n \times n}$ and a vector $\mathbf{x} \in \mathbb{R}^n$, the product is \begin{equation} \mathbf{U}_1\mathbf{x} = \left( \mathbf{I} - 2\mathbf{u}_1\mathbf{u}_1^T \right) \mathbf{x} = \mathbf{x} - 2\mathbf{u}_1 ( \mathbf{u}_1^T\mathbf{x} ) = \mathbf{x} - \nu \mathbf{u}_1, \label{eq:ReflectorMultiplication} \end{equation} where $\nu = 2 \mathbf{u}_1^T \mathbf{x}$. The computational complexity of \eqref{eq:ReflectorMultiplication} is $N_\text{op} = 4n$, an order of magnitude lower than the general matrix-vector multiplication complexity of $N_\text{op} = n(2n - 1)$. Given $\mathbf{X} \in \mathbb{R}^{n \times N}$, a result similar to \eqref{eq:ReflectorMultiplication} also holds for matrix-matrix multiplication \begin{equation} \mathbf{U}_1\mathbf{X} = \left( \mathbf{I} - 2\mathbf{u}_1\mathbf{u}_1^T \right) \mathbf{X} = \mathbf{X} - \mathbf{u}_1\mathbf{v}_1^T, \end{equation} where $\mathbf{v}_1 = 2\mathbf{X}^T\mathbf{u}_1$. Householder reflectors are often used to introduce zeros in the entries of vectors and to reduce full matrices to upper (or lower) triangular forms, with applications to computing least squares solutions and QR decompositions. Given a general orthonormal basis $\mathbf{Q} \in \mathbb{R}^{n \times n}$, there exists a sequence of $n-1$ Householder reflectors $\mathbf{U}_j$ such that the following factorization holds: \begin{equation} \mathbf{Q} = \mathbf{U}_{n-1} \mathbf{U}_{n-2} \cdots \mathbf{U}_1 \mathbf{D}, \end{equation} where $\mathbf{D}$ is a diagonal matrix of size $n \times n$ with entries $D_{ii} = \{ \pm 1 \}, i = 1,\dots,n$. This result follows from the QR factorization of a unitary matrix with Householder reflectors, and from the facts that an orthonormal upper (or lower) triangular matrix is actually diagonal and a product of unitary matrices is itself orthonormal. In this case the reflectors enjoy additional sparse structure since the reflector vectors $\mathbf{u}_j$ have the first $j-1$ entries set to zero. In the following section we will consider general reflector vectors without any sparsity assumptions. Furthermore, we will consider products of $m$ Householder reflectors with $m \ll n$, which will open the way to orthonormal dictionaries that can be manipulated fast. Related work explores the ways of representing an orthonormal basis \cite{OrthoBasis}. 
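As a small illustration of this complexity gain (the sketch below is ours, with made-up dimensions, and is not part of the proposed algorithms), a product of $m$ reflectors can be applied to a matrix in $O(mnN)$ operations and checked against the explicitly assembled dictionary:
\begin{verbatim}
import numpy as np

def apply_reflectors(u_list, X):
    # Compute U_m ... U_1 X with U_j = I - 2 u_j u_j^T; each reflector
    # costs O(nN) instead of the O(n^2 N) of a dense matrix product.
    for u in u_list:                         # U_1 acts first, U_m last
        X = X - np.outer(2.0 * u, u @ X)     # U_j X = X - u_j v_j^T
    return X

rng = np.random.default_rng(0)
n, N, m = 64, 1000, 6
us = [u / np.linalg.norm(u) for u in rng.standard_normal((m, n))]
X = rng.standard_normal((n, N))
fast = apply_reflectors(us, X)
U = np.eye(n)                                # dense check
for u in us:
    U = (np.eye(n) - 2.0 * np.outer(u, u)) @ U
print(np.allclose(fast, U @ X))              # True
\end{verbatim}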
In this section we describe algorithms to learn an orthonormal dictionary $\mathbf{U} \in \mathbb{R}^{n \times n}$ that is a product of a few Householder reflectors, balancing performance and computational complexity. We consider dictionaries with the following structure: \begin{equation} \mathbf{U} = \mathbf{U}_m \mathbf{U}_{m-1} \cdots \mathbf{U}_2 \mathbf{U}_1, \label{eq:MultipleHouseholderDict} \end{equation} where all $\mathbf{U}_j$ are Householder reflectors and the number $m$ is of the order $O(\log n)$. Of course, we have that all $\| \mathbf{u}_j \|_2 = 1$. For brevity we do not repeat these constraints, but consider them imposed. \subsection{Learning products of Householder reflectors: an extra orthonormal constraint} We first explore matrix structures that allow for the simultaneous update of all reflectors in the product $\mathbf{U}$. We keep the same overall dictionary formulation as in \eqref{eq:MultipleHouseholderDict} but with the additional constraint that the reflector vectors obey $\mathbf{u}_i^T \mathbf{u}_j = 0 \text{ for all } i \neq j$. With this orthogonality constraint, the new overall orthonormal symmetric dictionary is \begin{equation} \mathbf{U} = \mathbf{U}_m \mathbf{U}_{m-1} \cdots \mathbf{U}_2 \mathbf{U}_1 = \mathbf{I} - 2 \sum_{j=1}^m \mathbf{u}_j \mathbf{u}_j^T. \label{eq:MultipleOrthoHouseholderDict} \end{equation} Using the fact that the reflector vectors $\mathbf{u}_j$ are orthogonal, the objective function simplifies as \begin{equation} \begin{aligned} \| \mathbf{Y} - \mathbf{UX} \|_F^2 = & \left\| \mathbf{Y} - \mathbf{X} + 2\sum_{j=1}^m \mathbf{u}_j\mathbf{u}_j^T \mathbf{X} \right\|_F^2 \\ = & \| \mathbf{Y} - \mathbf{X} \|_F^2 + \sum_{j=1}^m \mathbf{u}_j^T \mathbf{Z} \mathbf{u}_j, \end{aligned} \label{eq:householdersymmetric} \end{equation} where we have defined \begin{equation} \mathbf{Z} = 2(\mathbf{XY}^T + \mathbf{YX}^T) = 2\mathbf{\tilde{Z}}. \label{eq:theFirstZ} \end{equation} To minimize \eqref{eq:householdersymmetric}, the reflector vectors $\mathbf{u}_j$ are chosen to be the eigenvectors associated with the lowest $m$ negative eigenvalues of $\mathbf{Z}$ (assuming that $m$ negative eigenvalues of $\mathbf{Z}$ exist). Since $\mathbf{Z}$ is symmetric, its eigenvectors are orthonormal and thus obey the constraint that we consider on the reflector vectors $\mathbf{u}_j$. If $\mathbf{Z}$ does not possess $m$ negative eigenvalues, then fewer than $m$ reflectors should be constructed; the rest, up to $m$, can be set to the zero vector (the reflector becomes the identity). The full proposed learning procedure, which we call QH$_m$--DLA, is detailed in Algorithm 1. Notice that the product $\mathbf{U}^T \mathbf{Y}$ in the computation of $\mathbf{X}$, step 3) of the iterative process, can be efficiently carried out by using the Householder factorization of $\mathbf{U}$ (complexity $O(nN \log n)$ instead of $O(n^2N)$). This is due to the numerical efficiency of the dictionary $\mathbf{U}$. The updates of the reflectors in $\mathbf{U}$ and of the sparse representations $\mathbf{X}$ are done exactly at each alternating step of the algorithm and thus the objective function decreases monotonically to a local optimum. Additionally, QH$_m$--DLA is satisfactory from a theoretical perspective since, as we will see, it allows performance analysis and comparison with Q--DLA. Furthermore, notice that the orthonormal dictionaries created by QH$_m$--DLA are also symmetric. 
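A compact numerical sketch of this update follows (ours, for illustration only; not a reference implementation, and the toy dimensions are made up). It builds $\mathbf{\tilde{Z}}$, keeps the eigenvectors of its negative eigenvalues, and assembles the symmetric dictionary of Eq.~\eqref{eq:MultipleOrthoHouseholderDict}:
\begin{verbatim}
import numpy as np

def hard_threshold(Z, s):
    # T_s: keep the s largest-magnitude entries in each column.
    X = np.zeros_like(Z)
    idx = np.argpartition(-np.abs(Z), s - 1, axis=0)[:s]
    np.put_along_axis(X, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
    return X

def qh_dla_step(Y, X, m):
    # One QH_m-DLA dictionary update: eigenvectors of the (at most m)
    # negative eigenvalues of Z~ become the reflector vectors u_j.
    Zt = X @ Y.T + Y @ X.T                    # \tilde{Z}, so Z = 2 \tilde{Z}
    vals, vecs = np.linalg.eigh(Zt)           # ascending, orthonormal vecs
    Um = vecs[:, :m][:, vals[:m] < 0]         # drop nonnegative eigenvalues
    return np.eye(Y.shape[0]) - 2.0 * Um @ Um.T  # symmetric orthonormal U

rng = np.random.default_rng(1)
n, N, m, s = 16, 400, 4, 2
Y = rng.standard_normal((n, N))
X = hard_threshold(Y, s)                      # representations for U = I
U = qh_dla_step(Y, X, m)
X = hard_threshold(U.T @ Y, s)                # X = T_s(U^T Y); here U^T = U
print(np.linalg.norm(Y - U @ X))
\end{verbatim}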
If we consider a general dictionary $\mathbf{D}$ for sparse representations, then the pair dictionary/representations $(\mathbf{D}, \mathbf{X})$ is equivalent to the pair $(-\mathbf{D}, -\mathbf{X})$ \cite{DictionariesHard2006}. In our setup, notice that if $\mathbf{U}_1$ is a Householder reflector then $-\mathbf{U}_1$ cannot be constructed by \eqref{eq:Reflector}, as $\mathbf{U}_1$ is. Now assume that the matrix $\mathbf{T} = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 & \dots & \mathbf{u}_n \end{bmatrix}$ contains all $n$ eigenvectors of $\mathbf{\tilde{Z}}$, ordered increasingly by their corresponding eigenvalues. Let $\mathbf{T}_{i:j}$ denote a matrix consisting of all the reflector vectors from the $i^\text{th}$ to the $j^\text{th}$ column of $\mathbf{T}$. Then, due to $\mathbf{T}_{1:i}\mathbf{T}_{1:i}^T + \mathbf{T}_{i+1:n}\mathbf{T}_{i+1:n}^T = \mathbf{I}$, we have that: \begin{equation} -(\mathbf{I} - 2\mathbf{T}_{1:i}\mathbf{T}_{1:i}^T) = \mathbf{I} - 2\mathbf{T}_{i+1:n}\mathbf{T}_{i+1:n}^T. \end{equation} This shows that there is a correspondence in performance according to the number of reflectors that are selected: with the first $m$ reflectors we have the dictionary $\mathbf{U}$ (and representations $\mathbf{X}$) while with the other $n-m$ reflectors we have the dictionary $-\mathbf{U}$ (and representations $-\mathbf{X}$). \begin{algorithm}[t] \caption{ \textbf{-- QH$_m$--DLA (Orthogonal Householder Dictionary Learning Algorithm).} \newline \textbf{Input: } The dataset $\mathbf{Y} \in \mathbb{R}^{n \times N}$, the number of Householder reflectors in the transform $m$, the target sparsity $s$ and the maximum number of iterations $K$. \newline \textbf{Output: } The sparsifying transform $\mathbf{U} = \mathbf{U}_m \cdots \mathbf{U}_1$ with $\mathbf{u}_i^T \mathbf{u}_j = 0,\ i \neq j$ and sparse representations $\mathbf{X}$ such that $\| \mathbf{Y} - \mathbf{UX} \|_F^2$ is reduced.} \begin{algorithmic} \State \textbf{Initialization:} \begin{enumerate} \setlength{\itemindent}{+.25in} \item Perform the economy size singular value decomposition of size $m+1$ of the dataset $\mathbf{Y} = \mathbf{Q} \mathbf{\Sigma} \mathbf{V}^T$. \item Reduce $\mathbf{Q} \in \mathbb{R}^{n \times (m+1)}$ to an upper triangular form with Householder reflectors defined by $\mathbf{u}_1,\ldots,\mathbf{u}_m$. The reflector that introduces zeros in the first column is $\mathbf{u}_m$. \item Orthogonalize $\mathbf{u}_1,\dots,\mathbf{u}_m$ by the QR algorithm. \item Compute sparse representations $\mathbf{X} = \mathcal{T}_s(\mathbf{U}^T \mathbf{Y})$. \end{enumerate} \State \textbf{Iterations} $1,\dots,K$: \begin{enumerate} \setlength{\itemindent}{+.25in} \item Construct the matrix: $\mathbf{\tilde{Z}} = \mathbf{XY}^T + \mathbf{YX}^T$. \item Compute the $m$ lowest eigenvalue/eigenvector pairs of $\mathbf{\tilde{Z}}$. Set to $\mathbf{0}$ the eigenvectors associated with nonnegative eigenvalues. Update reflector vectors $\mathbf{u}_j$ with the eigenvectors just computed. The eigenvector of the lowest negative eigenvalue goes to $\mathbf{u}_m$. \item Compute sparse representations $\mathbf{X} = \mathcal{T}_s(\mathbf{U}^T \mathbf{Y})$. \end{enumerate} \end{algorithmic} \end{algorithm} \subsection{Learning products of Householder reflectors: the unconstrained case} We again consider the case where the dictionary $\mathbf{U}$ has the structure from \eqref{eq:MultipleHouseholderDict} but now no additional constraints are assumed on the reflectors. This time we update each reflector sequentially. 
The new objective function becomes $\| \mathbf{Y} - \mathbf{U}_m \mathbf{U}_{m-1} \cdots \mathbf{U}_2 \mathbf{U}_1 \mathbf{X} \|_F^2$. To optimize the $j^\text{th}$ Householder reflector, we write the objective function as \begin{equation} \| \left( \mathbf{U}_{j+1} \cdots \mathbf{U}_m \right) \mathbf{Y} - \mathbf{U}_j \left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right) \mathbf{X} \|_F^2, \label{eq:UpdateOnlyOne} \end{equation} where we have used that all unitary matrices, and thus Householder reflectors, preserve the Frobenius norm and the fact that the reflectors are symmetric: \begin{equation} \| \mathbf{Y} - \mathbf{U}_1\mathbf{X} \|_F^2 = \| \mathbf{U}_1^T\mathbf{Y} - \mathbf{X} \|_F^2 = \|\mathbf{U}_1\mathbf{Y} - \mathbf{X} \|_F^2. \end{equation} We have now reduced the problem to the QH$_1$--DLA case for the updated dataset $\left( \mathbf{U}_{j+1} \cdots \mathbf{U}_m \right) \mathbf{Y}$ and the updated representations $\left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right) \mathbf{X}$. Following the same computation that leads to \eqref{eq:theFirstZ}, we find that the best update for $\mathbf{u}_j$, with the other reflectors fixed, is the eigenvector associated with the lowest negative eigenvalue of \begin{equation} \begin{aligned} \mathbf{Z} = & 2 \left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right) \mathbf{XY}^T \left( \mathbf{U}_{j+1} \cdots \mathbf{U}_m \right)^T + \\ & \quad \quad \quad 2 \left( \mathbf{U}_{j+1} \cdots \mathbf{U}_m \right) \mathbf{YX}^T \left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right)^T. \end{aligned} \label{eq:theSecondZ} \end{equation} Each reflector in the product form of $\mathbf{U}$ is updated sequentially in this manner. The full procedure, which we call H$_m$--DLA, is detailed in Algorithm 2. We expect the performance of this algorithm to be in general inferior to that of Q--DLA in terms of representation error, approaching it as $m$ approaches $n$, and to be superior to that of QH$_m$--DLA, since the additional orthogonality constraints are absent. Still, since all reflectors are computed together and no extensive matrix manipulation is required, QH$_m$--DLA runs faster than H$_m$--DLA. This opens the possibility of using QH$_m$--DLA as an initialization procedure for H$_m$--DLA. Finally, QH$_1$--DLA and H$_1$--DLA are equivalent. Also notice that the computation of $\mathbf{\tilde{Z}}$ can be optimized across the iterative process in step 1a): denoting by $\mathbf{R}_j = \left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right) \mathbf{XY}^T \left( \mathbf{U}_m \cdots \mathbf{U}_{j+1} \right)$ the matrix from the $j^\text{th}$ iteration, then for the next iteration, when computing $\mathbf{U}_{j+1}$, we simply have that $\mathbf{R}_{j+1}=\mathbf{U}_j \mathbf{R}_j \mathbf{U}_{j+1}^T$ -- which can be done efficiently by the left and right reflector multiplication formulas. Just as in the case of QH$_m$--DLA, the updates of each reflector $\mathbf{U}_j$ and of the representations $\mathbf{X}$ are done by solving exactly the optimization problems (with the other variables fixed) and thus the objective function monotonically decreases to a local minimum point. \begin{algorithm}[t] \caption{ \textbf{-- H$_m$--DLA (Householder Dictionary Learning Algorithm).} \newline \textbf{Input: } The dataset $\mathbf{Y} \in \mathbb{R}^{n \times N}$, the number of Householder reflectors in the transform $m$, the target sparsity $s$ and the maximum number of iterations $K$. 
\newline \textbf{Output: } The sparsifying transform $\mathbf{U} = \mathbf{U}_m \cdots \mathbf{U}_1$ and sparse representations $\mathbf{X}$ such that $\| \mathbf{Y} - \mathbf{UX} \|_F^2$ is reduced.} \begin{algorithmic} \State \textbf{Initialization:} \begin{enumerate} \setlength{\itemindent}{+.25in} \item Perform the economy size singular value decomposition of size $m+1$ of the dataset $\mathbf{Y} = \mathbf{Q}\mathbf{\Sigma} \mathbf{V}^T$. \item Reduce $\mathbf{Q} \in \mathbb{R}^{n \times (m+1)}$ to an upper triangular form by Householder reflectors defined by $\mathbf{u}_1,\ldots,\mathbf{u}_m$. The reflector that introduces zeros in the first column is $\mathbf{u}_m$. \item Compute sparse representations $\mathbf{X} = \mathcal{T}_s(\mathbf{U}^T \mathbf{Y})$. \end{enumerate} \State \textbf{Iterations} $1,\dots,K$: \begin{enumerate} \setlength{\itemindent}{+.25in} \item For $j = 1,\dots,m$: \begin{enumerate} \setlength{\itemindent}{+.25in} \item Construct the matrix: \quad \quad $ \mathbf{\tilde{Z}} = \left( \mathbf{U}_{j-1} \cdots \mathbf{U}_1 \right) \mathbf{XY}^T \left( \mathbf{U}_{j+1} \cdots \mathbf{U}_{m} \right)^T $, \quad \quad $\mathbf{\tilde{Z}} = \mathbf{\tilde{Z}} + \mathbf{\tilde{Z}}^T$. \item Compute lowest eigenvalue $\lambda_\text{min}$ of $\mathbf{\tilde{Z}}$ with eigenvector $\mathbf{v}$. If $\lambda_\text{min} \geq 0$ then set $\mathbf{v} = \mathbf{0}$. Update reflector vector $\mathbf{u}_j = \mathbf{v}$. \end{enumerate} \item Compute sparse representations $\mathbf{X} = \mathcal{T}_s(\mathbf{U}^T \mathbf{Y})$. \end{enumerate} \end{algorithmic} \end{algorithm} \subsection{The initializations of H$_m$--DLA and QH$_m$--DLA} Initialization is important for any alternating minimization algorithm. In principle, the proposed methods can be initialized with random reflectors $\mathbf{u}_j$, but the idea is to provide an initialization such that the methods converge in few iterations. The computational complexity of the initialization should be much lower than that of the learning algorithms. In both the cases of H$_m$--DLA and QH$_m$--DLA, the initialization procedures start by computing the reduced singular value decomposition of size $m$ of the dataset $\mathbf{Y} = \mathbf{Q\Sigma V}^T$. Then $\mathbf{Q}$ is diagonalized by Householder reflectors, thus providing the reflectors; among these we choose $m$ to initialize our algorithms. In the case of QH$_m$--DLA, the reflectors previously obtained are further orthogonalized by the QR algorithm, thus ensuring compliance with all the constraints of the method. \section{Comments on the proposed algorithms and connections to previous work} Now that the main algorithms have been described, in this section we examine the achievable representation performance of Householder based dictionaries. First, we analyze the simple case of a single Householder reflector dictionary (an analysis that is also pertinent to each step of H$_m$--DLA) and then consider QH$_m$--DLA. Finally, we show the similarities between the representation error achievable by our proposed dictionaries and that of general orthonormal dictionaries. 
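Before turning to the analysis, we give a compact Python sketch of one reflector sweep of H$_m$--DLA (step 1 of Algorithm 2). The sketch is ours and only illustrative; for simplicity it recomputes the left and right products from scratch rather than using the recursive update of $\mathbf{R}_j$ mentioned above, and the toy data at the end are made up:
\begin{verbatim}
import numpy as np

def reflect(u, M):
    # Apply U = I - 2 u u^T to M from the left in O(nN); u = 0 acts as I.
    return M - np.outer(2.0 * u, u @ M)

def hm_dla_sweep(Y, X, us):
    # us = [u_1, ..., u_m]; each u_j is replaced by the eigenvector of the
    # lowest negative eigenvalue of Z~ (steps 1a-1b of Algorithm 2).
    m, n = len(us), Y.shape[0]
    for j in range(m):
        Xt = X
        for k in range(j):                   # (U_{j-1} ... U_1) X
            Xt = reflect(us[k], Xt)
        Yt = Y
        for k in range(m - 1, j, -1):        # (U_{j+1} ... U_m) Y
            Yt = reflect(us[k], Yt)
        Zt = Xt @ Yt.T
        Zt = Zt + Zt.T
        vals, vecs = np.linalg.eigh(Zt)      # ascending eigenvalues
        us[j] = vecs[:, 0] if vals[0] < 0 else np.zeros(n)
    return us

rng = np.random.default_rng(3)
n, N, m, s = 16, 400, 4, 2
Y = rng.standard_normal((n, N))
X = np.zeros((n, N)); X[:s, :] = Y[:s, :]    # any s-sparse starting point
us = hm_dla_sweep(Y, X, [np.zeros(n) for _ in range(m)])
\end{verbatim}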
\subsection{Performance of a single Householder reflector dictionary} Considering a dictionary composed of a single Householder reflector, the objective function in \eqref{eq:householdersymmetric} reduces to \begin{equation} \begin{aligned} & \| \mathbf{Y} - \mathbf{U}_1 \mathbf{X} \|_F^2 = \| \mathbf{Y} \|_F^2 + \| \mathbf{X} \|_F^2 + C, \\ &\text{with } C = - 2\text{tr}(\mathbf{XY}^T) + 2\mathbf{u}_1^T (\mathbf{XY}^T + \mathbf{YX}^T)\mathbf{u}_1. \end{aligned} \label{eq:costDetailed} \end{equation} Assuming some normalization of the dataset like mean subtraction and $\ell_2$ normalization of the columns, it is reasonable to consider $\| \mathbf{Y} \|_F^2 = N$. The norm $ \| \mathbf{X} \|_F^2 $ is maximized in the sparse reconstruction step, where we keep the largest absolute value entries in the representations. The goal is twofold: \begin{itemize} \item Maximize the trace of $\mathbf{XY}^T$. \item Minimize the lowest eigenvalue of $\mathbf{\tilde{Z}} = \mathbf{XY}^T + \mathbf{YX}^T$. \end{itemize} The two goals are related since $\text{tr}(\mathbf{\tilde{Z}}) = 2\text{tr}(\mathbf{XY}^T)$. Therefore, the performance of our algorithms depends on the spectral properties of $\mathbf{\tilde{Z}}$. In an ideal situation, the lowest, negative, eigenvalue of this matrix should be maximally reduced while the rest of the eigenvalues remain positive and their sum is maximized. An ideal case would be that the spectrum obeys $\Lambda(\mathbf{\tilde{Z}}) = \{ -\alpha_1, \beta_1, \ldots, \beta_{n-1} \}$, one negative eigenvalue and $n-1$ non-negative. Now the cost in \eqref{eq:costDetailed} is maximally reduced by the sum of the singular values of $\mathbf{\tilde{Z}}$ also known as its nuclear norm, i.e., $C = - \| \mathbf{\tilde{Z}} \|_* = -\left(\alpha_1 + \sum_{i=1}^{n-1} \beta_i\right)$. \subsection{Performance of Householder based dictionaries} We now analyze the dictionaries created by QH$_m$--DLA. In the case of H$_m$--DLA, since the reflectors are updated sequentially, we defer to the discussion for QH$_1$--DLA. The case that can be more easily approached from an analysis perspective is the one of QH$_m$--DLA, where all reflectors are updated simultaneously. In this case, the objective function \eqref{eq:householdersymmetric} reduces to \begin{equation} \begin{aligned} & \| \mathbf{Y} - \mathbf{U} \mathbf{X} \|_F^2 = \| \mathbf{Y} \|_F^2 + \| \mathbf{X} \|_F^2 + C, \\ &\text{with } C = - 2\text{tr}(\mathbf{XY}^T) + 2 \sum_{j=1}^m \mathbf{u}_j^T (\mathbf{XY}^T + \mathbf{YX}^T) \mathbf{u}_j. \end{aligned} \label{eq:costDetailed2} \end{equation} Similar to the single Householder reflector case, the performance depends on the spectrum $\Lambda(\mathbf{\tilde{Z}}) = \{ -\alpha_1, \ldots, -\alpha_m, \beta_1, \dots, \beta_{n-m} \}$. To minimize the objective function in \eqref{eq:costDetailed2}, we need to choose $m$ Householder reflectors corresponding to the $m$ negative eigenvalues in $\Lambda(\mathbf{\tilde{Z}})$. In this way, \eqref{eq:costDetailed2} is maximally reduced by the nuclear norm of $\mathbf{\tilde{Z}}$, i.e., $C = -\|\mathbf{\tilde{Z}} \|_*=-\left(\sum_{i=1}^m \alpha_i + \sum_{i=1}^{n-m} \beta_i \right)$. If the spectrum of $\mathbf{\tilde{Z}}$ is non-negative, then no reflector can decrease the objective function and the dictionary is set to $\mathbf{U} = \mathbf{I}$; with the given $\mathbf{Y}$ and $\mathbf{X}$ there is no Householder reflector that can improve upon the representation performance. 
Equally bad, if the spectrum is non-positive then all $n$ eigenvectors are selected and by \eqref{eq:MultipleOrthoHouseholderDict} it follows that the dictionary is $\mathbf{U} = -\mathbf{I}$. In practice, depending on the magnitude of the $m$ negative eigenvalues of $\mathbf{\tilde{Z}}$, we may choose a smaller number of reflectors to construct $\mathbf{U}$. Of course, the representation performance is slightly inferior this way, but the benefit is a faster transform. The trade-off can be balanced based on application specific requirements. A situation of interest is when the sparse factorization can be done exactly, i.e., $\mathbf{Y} = \mathbf{UX}$. Considering that some normalization has taken place for the dataset such that $\| \mathbf{Y} \|_F^2 = N$, and because orthonormal transformations preserve $\ell_2$ norms, we have that $\| \mathbf{X} \|_F^2 = N$, i.e., we have in effect exactly $\mathbf{X} = \mathbf{U}^T\mathbf{Y}$. The objective function of the optimization problem reaches zero and thus the nuclear norm of $\mathbf{\tilde{Z}}$ is maximized to $2N$. A last comment concerns the addition to the reflector $\mathbf{u}_i$ of the sparse structure typical of QR decompositions, i.e., consider $\mathbf{u}_i = \begin{bmatrix} \mathbf{0} \\ \mathbf{\tilde{u}}_i \end{bmatrix}$. With this new structure, the minimizer $\mathbf{\tilde{u}}_i$ of the expression in \eqref{eq:costDetailed} is given by the eigenvector associated with the smallest (negative) eigenvalue of the lower right square sub-matrix of size $(n-i+1)$ of $\mathbf{\tilde{Z}}$. This structure appears during the initialization step discussed in Section III. \subsection{Connections between Householder based dictionaries and general orthonormal dictionaries} The proposed algorithms are closely connected to the task of learning a general orthonormal dictionary. Increasing $m$ for H$_m$--DLA and QH$_m$--DLA will reduce the performance gap between dictionaries designed by these methods and the orthonormal dictionaries designed via Q--DLA, of course at the cost of a higher computational demand. We now discuss some properties of, and connections between, the various dictionary learning procedures. \noindent\textbf{Remark 4.} Given a dataset $\mathbf{Y}$ represented in the general dictionary $\mathbf{D}$ with the sparse representations $\mathbf{X}$, there is no reflector $\mathbf{U}_1$ such that $\mathbf{U}_1\mathbf{D}$ achieves lower representation error than $\mathbf{D}$ if $\mathbf{DXY}^T + \mathbf{Y}(\mathbf{DX})^T$ is positive semidefinite. \noindent\textit{Proof.} We check whether there exists a reflector $\mathbf{U}_1$ whose application to the left of the dictionary, i.e., the updated dictionary $\mathbf{U}_1\mathbf{D}$, improves the representation: \begin{equation} \| \mathbf{Y} - \mathbf{D} \mathbf{X}\|_F^2 > \| \mathbf{Y} - \mathbf{U}_1\mathbf{D}\mathbf{X}\|_F^2. \label{eq:BetterThanI2} \end{equation} If such a reflector does not exist then $\mathbf{D}$ may be viewed as a local minimum (this is a necessary condition). Therefore, if the symmetric matrix \begin{equation} \mathbf{Z}_1 = \mathbf{DXY}^T + \mathbf{Y}(\mathbf{DX})^T, \label{eq:Cond1} \end{equation} is positive semidefinite then no reflector $\mathbf{U}_1$ satisfies \eqref{eq:BetterThanI2}, i.e., there is no reflector $\mathbf{U}_1$ such that $\mathbf{U}_1\mathbf{D}$ achieves a lower objective function value than $\mathbf{D}$. Compare this with Remark 2.
$\hfill \blacksquare$

As we have seen in the previous sections, the positive semidefinite condition is necessary and sufficient when describing local minima of the Householder based dictionaries. In the case of general orthonormal and, due to \eqref{eq:BetterThanI2} and \eqref{eq:Cond1}, also general (even overcomplete) dictionaries, the condition is necessary, but not sufficient. \noindent\textbf{Remark 5.} Q--DLA always performs at least as well as QH$_m$--DLA; the performance of the two matches when $\mathbf{YX}^T$ is symmetric. We have shown by \eqref{eq:theobjofortho} that the objective function reduction possible with a general orthonormal dictionary is $2\| \mathbf{YX}^T \|_*$. Due to the triangle inequality, which is obeyed by the nuclear norm, this quantity is larger than or equal to the reduction achievable when using a symmetric dictionary designed via QH$_m$--DLA, which is $\| \mathbf{XY}^T + \mathbf{YX}^T \|_*$. As expected, due to its additional constraints, QH$_m$--DLA performs no better than Q--DLA. In general, only H$_m$--DLA, with a sufficiently large $m$, has the capability to match Q--DLA.$\hfill \blacksquare$ \noindent\textbf{A simple example in $\mathbb{R}^2$.} To illustrate the previous results, consider a dataset $\mathbf{Y} \in \mathbb{R}^{2 \times N}$ and the initial dictionary $\mathbf{Q} = \mathbf{I}$. With target sparsity $s = 1$ we have, under a permutation of columns to highlight the row structure, the representations \begin{equation} \mathbf{X} = \begin{bmatrix} \mathbf{y}_{11}^T & \mathbf{0}^T \\ \mathbf{0}^T & \mathbf{y}_{22}^T \end{bmatrix} \text{ where } \mathbf{Y} = \begin{bmatrix} \mathbf{y}_{11}^T & \mathbf{y}_{12}^T \\ \mathbf{y}_{21}^T & \mathbf{y}_{22}^T \end{bmatrix}, \end{equation} and therefore \begin{equation} \mathbf{YX}^T = \begin{bmatrix} \|\mathbf{y}_{11}\|_2^2 & \mathbf{y}_{12}^T\mathbf{y}_{22} \\ \mathbf{y}_{11}^T \mathbf{y}_{21} & \|\mathbf{y}_{22}\|_2^2 \end{bmatrix},\ \mathbf{\tilde{Z}} = \mathbf{YX}^T + \mathbf{XY}^T. \label{eq:theYXT} \end{equation} By \eqref{eq:positivedef} and with \eqref{eq:theYXT} we see that there is no Householder based dictionary that improves the representation error if $2 \| \mathbf{y}_{11} \|_2^2 \geq |\mathbf{y}_{11}^T\mathbf{y}_{21} + \mathbf{y}_{22}^T\mathbf{y}_{12}|$ and $2 \| \mathbf{y}_{22} \|_2^2 \geq |\mathbf{y}_{11}^T\mathbf{y}_{21} + \mathbf{y}_{22}^T\mathbf{y}_{12}|$, since then $\mathbf{\tilde{Z}}$ is positive semidefinite. Since $\text{tr}(\mathbf{\tilde{Z}}) = 2\| \mathbf{X} \|_F^2 > 0$, one of the eigenvalues is necessarily positive, and the previous two conditions lead to $2\| \mathbf{y}_{11} \|_2 \| \mathbf{y}_{22} \|_2 \geq |\mathbf{y}_{11}^T\mathbf{y}_{21} + \mathbf{y}_{22}^T\mathbf{y}_{12}|$. When $\mathbf{\tilde{Z}}$ is indefinite, the possible reduction in the representation error with a Householder based dictionary is $\|\mathbf{\tilde{Z}} \|_* = 2\sqrt{ \text{tr}(\mathbf{\tilde{Z}})^2/4 - \det(\mathbf{\tilde{Z}}) } = 2\sqrt{ \|\mathbf{X}\|_F^4 - \det(\mathbf{\tilde{Z}}) }$. The Frobenius norm of the representations is maximized in the sparse approximation step, while $-\det(\mathbf{\tilde{Z}})$, which is positive in this indefinite case, is increased when maximizing $\mathbf{y}_{11}^T\mathbf{y}_{21} + \mathbf{y}_{22}^T\mathbf{y}_{12}$. If $\mathbf{YX}^T$ is positive semidefinite, then we know there is no Householder based dictionary that can improve the representations.
If we now consider general orthonormal dictionaries, with \eqref{eq:theYXT} we know from \eqref{eq:localoptimumU} that if $\mathbf{y}_{12}^T\mathbf{y}_{22} \approx \pm \mathbf{y}_{11}^T \mathbf{y}_{21}$ (i.e., $\mathbf{YX}^T$ is approximately symmetric or skew-symmetric) there is also no orthonormal dictionary that can perform much better in terms of representation than the identity. $\hfill \blacksquare$ \subsection{Householder reflectors vs. Givens rotations for learning fast dictionaries} Householder reflectors are not the only elementary building blocks for orthonormal structures. Any orthonormal dictionary of size $n \times n$ can also be factorized into a product of Givens rotations \cite{Golub1996}, parameterized by $c, s$ and the indices $(i,j)$ as $\mathbf{G}_{ij} = \begin{bmatrix} \mathbf{I}_{i-1} & & & & \\ & c & & s & \\ & & \mathbf{I}_{j-i-1} & & \\ & -s & & c & \\ & & & & \mathbf{I}_{n-j} \\ \end{bmatrix},\ c^2 + s^2 = 1$. Givens rotations have been previously used with great success in several matrix factorization applications \cite{Treelets2008, SparseMatrixTransform2011, MultiresolutionMatrixFactorization2014}. Consider using a single Givens rotation as a dictionary. We reach the optimization problem $\underset{c, s, (i,j);\ c^2 + s^2 = 1}{\text{minimize}} \ \ \| \mathbf{Y} - \mathbf{G}_{ij} \mathbf{X} \|_F^2$, which is equivalent to \begin{equation*} \underset{c, s, (i,j);\ c^2 + s^2 = 1}{\text{minimize}} \left\| \begin{bmatrix} \mathbf{y}_i^T \\ \mathbf{y}_j^T \end{bmatrix} - \begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} \mathbf{x}_i^T \\ \mathbf{x}_j^T \end{bmatrix} \right\|_F^2, \label{eq:GivensLearning2} \end{equation*} where $\mathbf{y}_i^T$ and $\mathbf{x}_i^T$ are the $i^\text{th}$ rows of $\mathbf{Y}$ and $\mathbf{X}$, respectively. When the indices $(i,j)$ are fixed, the optimization reduces to a two dimensional orthogonal Procrustes problem, while to select the indices $(i,j)$ among the ${n \choose 2}$ possibilities an appropriate strategy needs to be defined. Indeed, Givens rotations also seem an appropriate tool to approach the fast dictionary learning problem, but it is beyond the scope of this paper to analyze them in detail. \section{Results} \begin{figure}[t] \centering \includegraphics[width=0.38\textwidth]{plot2_nws.eps} \caption{Normalized eigenvalues of $\mathbf{\tilde{Z}}$ after convergence of QH$_m$--DLA for images peppers and barb with sparsity $s = 4$ and $m = 12$ reflectors.} \label{fig:LenaVsBarb} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{plot3_nws.eps} \caption{For the proposed methods we show the evolution of the relative representation error $\| \mathbf{Y} - \mathbf{DX} \|_F^2 \| \mathbf{Y} \|_F^{-2}$ for the dataset $\mathbf{Y}$ created from the patches of the images couple, peppers and boat with sparsity $s = 4$ and for $m \in \{ 12, 32\}$ reflectors. For reference we show Q--DLA \cite{OrthoDictionary}.} \label{fig:IterationEvolution} \end{figure} \def\arraystretch{1.2} \begin{table*}[t] \begin{center} \caption{RMSE in the case of several dictionaries computed from known test images. Sparsity level is $s=4$ and the dataset is $\mathbf{Y} \in \mathbb{R}^{64 \times 4096}$ in each case.
The learning procedures run after mean extraction and normalization $\mathbf{Y} = \mathbf{Y}/255.$ The best results of the fast dictionaries are shown in bold font.}\label{tb:Table1} \begin{tabular}{|c|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|c<{\hspace{-3pt}}|} \hline & peppers & boat & pollen & mri & cameraman & pirate & barb & baboon & hill & couple & house & fingerprint \\ \hline DCT & 0.0395 & 0.0419 & 0.0461 & 0.0721 & 0.0619 & 0.0507 & \textbf{0.0435} & 0.0694 & 0.0361 & 0.0432 & 0.0374 & 0.0765 \\ \hline H$_6$--DLA & 0.0294 & 0.0371 & 0.0421 & 0.0649 & 0.0568 & 0.0453 & 0.0508 & 0.0738 & 0.0331 & 0.0405 & 0.0298 & 0.0536 \\ \hline H$_{12}$--DLA & \textbf{0.0261} & \textbf{0.0324} & \textbf{0.0376} & \textbf{0.0611} & \textbf{0.0512} & \textbf{0.0421} & 0.0436 & \textbf{0.0691} & \textbf{0.0302} & \textbf{0.0353} & \textbf{0.0255} & \textbf{0.0497} \\ \hline QH$_6$--DLA & 0.0306 & 0.0375 & 0.0425 & 0.0656 & 0.0575 & 0.0457 & 0.0508 & 0.0739 & 0.0334 & 0.0411 & 0.0302 & 0.0542 \\ \hline QH$_{12}$--DLA & 0.0278 & 0.0336 & 0.0388 & 0.0626 & 0.0533 & 0.0434 & 0.0444 & 0.0702 & 0.0313 & 0.0366 & 0.0275 & 0.0512 \\ \hline \hline H$_{32}$--DLA & 0.0253 & 0.0310 & 0.0371 & 0.0594 & 0.0472 & 0.0407 & 0.0348 & 0.0649 & 0.0288 & 0.0336 & 0.0234 & 0.0492 \\ \hline QH$_{32}$--DLA & 0.0278 & 0.0332 & 0.0385 & 0.0617 & 0.0519 & 0.0428 & 0.0397 & 0.0681 & 0.0305 & 0.0364 & 0.0265 & 0.0511 \\ \hline Q--DLA \cite{OrthoDictionary} & 0.0256 & 0.0312 & 0.0372 & 0.0596 & 0.0473 & 0.0409 & 0.0361 & 0.0654 & 0.0292 & 0.0339 & 0.0241 & 0.0496 \\ \hline SK--SVD \cite{SKSVD} & 0.0191 & 0.0231 & 0.0275 & 0.0462 & 0.0311 & 0.0328 & 0.0266 & 0.0561 & 0.0235 & 0.0266 & 0.0143 & 0.0344 \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{plot1_nws.eps} \caption{Relative representation error $\| \mathbf{Y} - \mathbf{DX} \|_F^2 \| \mathbf{Y} \|_F^{-2}$, in percent, for the proposed algorithms with the dataset composed of all patches from the images couple, peppers and boat for sparsity $s=4$. For reference we show the DCT, Q--DLA \cite{OrthoDictionary} and SK--SVD \cite{SKSVD}.} \label{fig:HouseholderComparisons} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.38\textwidth]{plot2a_nws-eps-converted-to.pdf} \caption{Normalized eigenvalues of $\mathbf{\tilde{Z}}$ after convergence of QH$_{12}$--DLA with various sparsity levels for the dataset in Figure \ref{fig:HouseholderComparisons}.} \label{fig:Plot_of_sparsity} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.38\textwidth]{plot2cc-eps-converted-to.pdf} \caption{Normalized eigenvalues of $\mathbf{\tilde{Z}}$ after convergence of QH$_m$--DLA with sparsity $s = 4$ for various numbers of reflectors for the dataset in Figure \ref{fig:HouseholderComparisons}.} \label{fig:Plot_of_reflectors} \end{figure} In this section we provide experimental results to illustrate the representation capabilities of the proposed methods. \subsection{Sparsely representing data} The input data that we consider is taken from popular test images from the image processing literature (pirate, peppers, boat etc.). The test datasets $\mathbf{Y} \in \mathbb{R}^{64 \times N}$ consist of $8 \times 8$ non-overlapping patches with their means removed and normalized as $\mathbf{Y} = \mathbf{Y}/255$.
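The construction of such a dataset is straightforward; the following is one possible numpy sketch (the function name and the uniform $1/255$ scaling before mean removal are our illustrative choices):
\begin{verbatim}
import numpy as np

def patch_dataset(img, p=8):
    # Vectorize non-overlapping p x p patches of a grayscale image into
    # the columns of Y (shape p*p x N), scale by 255 and remove the mean
    # of each patch.
    img = img.astype(float) / 255.0
    H, W = img.shape
    Y = (img[:H - H % p, :W - W % p]
         .reshape(H // p, p, W // p, p)
         .swapaxes(1, 2)
         .reshape(-1, p * p)
         .T)
    return Y - Y.mean(axis=0, keepdims=True)
\end{verbatim}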
We choose to compare the proposed methods on image data since in this setting fast transforms that perform very well, like the Discrete Cosine Transform (DCT) for example, are available. Our goal is to provide Householder based dictionaries that perform well in terms of representation error with a small number of reflectors $m$ in their composition. Table \ref{tb:Table1} shows the root mean squared error (RMSE) achieved by dictionaries trained on each test image separately and then used to sparsely represent those particular images. We show the performances of QH$_m$--DLA and H$_m$--DLA for $m=6$ and $m=12$ reflectors. For perspective, we also show the performance achieved by the DCT on one hand and by general (orthonormal and unconstrained) dictionary learning on the other -- we use Q--DLA and Stagewise K--SVD (SK--SVD) \cite{SKSVD}. For non-orthonormal dictionaries we use the OMP algorithm \cite{AKSVD} in the sparse reconstruction step. As expected, increasing the number of reflectors decreases the RMSE in all cases. The best performing method of the ones proposed in this paper and shown in the table is H$_{12}$--DLA. The worst performance of this approach is achieved for the barb test image. To understand why, we can look at Figure \ref{fig:LenaVsBarb}, which shows the eigenvalue distribution of the matrix $\mathbf{\tilde{Z}}$ from \eqref{eq:theFirstZ} for barb and peppers. As shown, most of the eigenvalues are close to (or exactly) zero. The difference comes when analyzing the negative eigenvalues, which in the case of peppers are fewer and have larger magnitude than those of barb. We mention that for the barb test image the performance of Q--DLA is matched only by H$_{24}$--DLA. The top of Table \ref{tb:Table1} shows the reference DCT and the performance of the proposed \textit{fast} dictionaries, while the bottom shows the \textit{slower} dictionaries, including H$_{32}$--DLA, which generally performs slightly better even than Q--DLA. We would like to note here that the general dictionaries designed via K--SVD or SK--SVD do exhibit high mutual coherence in general, even though we do not construct overcomplete dictionaries. For example, the dictionary designed via SK--SVD that reaches the best performance in terms of RMSE for the image peppers has mutual coherence over $0.9$, which is very high. \def\arraystretch{1.2} \begin{table*}[t] \begin{center} \caption{Speed-up provided by Householder based dictionaries as compared to the general orthonormal dictionaries and the DCT -- in this case a fast implementation, the Fast Cosine Transform (FCT) \cite{FCT}, is considered. We count the number of operations necessary to apply the dictionary as a direct and inverse operator, i.e., the computation of the correlations $\mathbf{D}^T\mathbf{y}$. We do not compare with the general sparse approximation methods like OMP since they are much slower -- they are at least $s$ times slower than an orthonormal dictionary, by \eqref{eq:NOMPCholesky}.
The number of reflectors $m$ for which the complexity of the proposed dictionaries approximately coincides with that of Q--DLA and of the FCT is $m = 32$ and $m=3$, respectively.}\label{tb:Table0} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Number of reflectors $m$ & 1 & 2 & 3 & 4 & 6 & 8 & 12 & 16 & 20 & 24\\ \hline speed-up $\rho_\text{Q--DLA}$ \eqref{eq:speedupFCT} & 32$\times$ & 16$\times$ & 11$\times$ & 8$\times$ & 5$\times$ & 4$\times$ & 3$\times$ & 2$\times$ & 1.6$\times$ & 1.3$\times$ \\ \hline speed-up $\rho_\text{FCT}$ \eqref{eq:speedupFCT} & 3$\times$ & 1.5$\times$ & 1$\times$ & 0.8$\times$ & 0.5$\times$ & 0.4$\times$ & 0.3$\times$ & 0.2$\times$ & 0.2$\times$ & 0.1$\times$ \\ \hline \end{tabular} \end{center} \end{table*} In the case of H$_m$--DLA we have tested two strategies to update the reflectors: sequential (in order of their index) and random. Since the difference between the two is negligible, the results shown use the sequential update. In Figure \ref{fig:IterationEvolution} we show the representation error evolution of the proposed algorithms and of Q--DLA with each iteration. The plot shows the effectiveness of the initialization procedures and the monotonic decrease in the objective function value. As expected, Q--DLA and SK--SVD perform best while QH$_m$--DLA performs worst. Still, for the numbers of reflectors considered, $m \in \{ 12, 32 \}$, the differences are not large. When we consider a larger number of reflectors, like $m=32$, we see that in all cases the RMSE is only slightly higher than that of Q--DLA. In Figure \ref{fig:HouseholderComparisons} we show the representation error for a dataset $\mathbf{Y}$ consisting of $N = 12288$ patches from several test images. For reference, we show again the DCT and Q--DLA representation performance. It is easy to see from the plot that the performance of the fixed transform is reached with a small number of reflectors $m$ (3 in both the cases of the proposed methods). When we increase the number of reflectors, H$_m$--DLA reaches the performance of Q--DLA for $m =20$ while QH$_m$--DLA converges to a slightly worse result. As discussed in Section IV, we did expect QH$_m$--DLA to always perform worse than Q--DLA. Notice that for a small number of reflectors the performances of H$_m$--DLA and QH$_m$--DLA are very close, suggesting that the extra orthogonal constraint is natural in this regime. The results are interesting when comparing with the references: it is clear that the dictionaries based on reflectors match the performance of Q--DLA for $m < n/2$ while they outperform the fixed DCT transform for $m \ll n$. This shows that a full orthonormal dictionary can be avoided without sacrificing performance. In Figures \ref{fig:Plot_of_sparsity} and \ref{fig:Plot_of_reflectors} we show the eigenvalues of $\mathbf{\tilde{Z}}$ for Householder dictionaries created by H$_m$--DLA using the dataset described in Figure \ref{fig:HouseholderComparisons}. The eigenvalues are distributed similarly, independent of the choice of sparsity $s$ and number of reflectors $m$. In Figure \ref{fig:Plot_of_reflectors} notice that the choice of $m$ determines the number of negative eigenvalues with large magnitudes. As explained in Section IV, this drives the reduction in the objective function of the Householder dictionary learning problem. As seen, QH$_m$--DLA and H$_m$--DLA perform similarly.
For best performance H$_m$--DLA is preferred, but when the dictionary learning procedure is time critical QH$_m$--DLA is a better choice given the small loss in performance. \begin{figure*}[t] \centering \includegraphics[width=0.66\textwidth]{barb_noisy_nws.pdf} \caption{The figure contains, from left to right: the original image, the corrupted image missing 40\% of the pixels chosen uniformly at random, the reconstruction using the orthonormal dictionary $(\text{MAE} = 0.0305, \text{MSE} = 0.0492)$ and the reconstruction using the Householder based dictionary with $m = 14$ reflectors $(\text{MAE} = 0.0321, \text{MSE} = 0.0512)$. We always have $s=6$.} \label{fig:BarbNoisy} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.66\textwidth]{man_noisy_nws.pdf} \caption{Analogous to Figure \ref{fig:BarbNoisy}. The orthonormal dictionary reaches $\text{MAE} = 0.0333, \text{MSE} = 0.0548$ and the Householder dictionary reaches $\text{MAE} = 0.0334, \text{MSE} = 0.0549$.} \label{fig:ManNoisy} \end{figure*} In Table \ref{tb:Table0} we show the speed-ups provided by the Householder based dictionaries as a function of the number of reflectors. We show the comparative computational complexity of using the dictionaries, not of training them. We compare against the complexity of using a general orthonormal dictionary and against that of the DCT (we compare against an efficient implementation, the fast cosine transform). The cost of finding the largest entries in magnitude is the same for all methods and thus it is not accounted for. Since computing the correlations between the dictionary and a target signal takes $4nm$ operations for a Householder based dictionary with $m$ reflectors, the speed-ups are computed as \begin{equation} \rho_\text{Q--DLA} = \frac{(2n-1)n}{4nm},\ \rho_\text{FCT} = \frac{5/2 n \log n - 3n+6}{4nm}. \label{eq:speedupFCT} \end{equation} The computational complexity of the FCT is taken from \cite{FCT}. The latter comparison is for perspective, since it does not seem reasonable to assume that for image data we can construct a dictionary faster than the FCT that achieves the performance of Q--DLA. Still, notice that a Householder based dictionary with $m=3$ components closely matches the FCT both in terms of speed and in terms of representation performance (see Figure \ref{fig:HouseholderComparisons}). An important observation is that with $m=20$ reflectors we closely match the performance of Q--DLA while we still keep a computational advantage. From \eqref{eq:speedupFCT} it is clear that the proposed methods have lower computational complexity than general orthonormal dictionaries whenever $m \ll n$. We do not compare with the computational complexity of iterative methods since they are in general much slower than the methods discussed in this paper; for example, a batch variant of OMP called OMP--Cholesky \cite{AKSVD} needs \begin{equation} N_\text{OMP--Cholesky} = 2sn^2 + 2s^2 n + 4sn+s^3 \label{eq:NOMPCholesky} \end{equation} operations. Since in general we do assume that we are dealing with sparse representations, i.e., $s \ll n$, the computational complexity of OMP--Cholesky is dominated by the first term, which expresses the complexity of applying the explicit dictionary operator and which also dominates the computational complexity of using an orthonormal dictionary. The final advantage of the proposed methods is the space requirement. As stated, in the case of dictionary learning the entries of the dictionaries need to be stored (or transmitted) together with the encoded data.
With the proposed methods only the reflector vectors need to be stored, i.e., $mn$ entries. In terms of the computational complexity of the learning procedures themselves, we report that in constructing the dictionaries for Figure \ref{fig:HouseholderComparisons} we have approximate running times of 15 seconds for Q--DLA, 13 seconds for H$_8$--DLA and 7 seconds for QH$_8$--DLA, all running for $K=100$ iterations, while SK--SVD took over one minute. All running times include the initialization procedures. The simulations were conducted in the Mathworks Matlab$^\text{\textregistered}$ 2014 environment, using a modern laptop computer with an i7 processor and 16 GB of RAM, running Windows$^\text{\textregistered}$. More efficient implementations are certainly possible; the purpose of reporting the running times here is to provide a sense of the complexity of the learning procedure itself. \subsection{Application: denoising images} We also choose to test the trained dictionaries in reconstruction scenarios, filling in missing pixels of an image \cite{KSVD}. The experimental environment is as follows. We train a general orthonormal dictionary and one based on Householder reflectors on uncorrupted data (non-overlapping $8 \times 8$ image patches). We then blank a fixed percentage of the pixels in the images and perform the reconstruction using the previously trained dictionaries. Performance is measured in mean absolute error (MAE) and mean squared error (MSE) and the results are shown in Figures \ref{fig:BarbNoisy} and \ref{fig:ManNoisy}. We compare Q--DLA and H$_{14}$--DLA to show that there are no large performance drawbacks when using dictionaries that are computationally efficient. \section{Conclusions} In this manuscript we describe algorithms for the orthonormal dictionary learning task based on Householder reflectors. We are able to construct dictionaries that can be manipulated efficiently and that also perform very well in terms of representation capabilities when compared with fast fixed transforms and with learned general orthonormal dictionaries. We are also able to provide local minimum conditions for the Householder based and general orthonormal dictionary learning problems. \section*{Acknowledgment} The authors would like to thank the anonymous reviewers and Bogdan Dumitrescu, whose comments greatly improved the clarity of this manuscript. \bibliographystyle{IEEEtran}
\let\oldsection\section \def\section{\setcounter{equation}{0}\oldsection} \renewcommand\thesection{\arabic{section}} \renewcommand\theequation{\thesection.\arabic{equation}} \allowdisplaybreaks \DeclareMathOperator*{\Cat}{\mathbf{Cat}} \newcommand\ww{\bm {w}} \newcommand\bb{\bm b} \newcommand\vv{\bm v} \newcommand\UU{\mbox{\bfseries U}} \newcommand\FF{\mbox{\bfseries \itshape F}} \newcommand\h{\mbox{\bfseries \itshape h}}\newcommand\dd{\mbox{d}} \newcommand\g{\mbox{\bfseries \itshape g}} \newcommand\xx{\mbox{\bfseries \itshape x}} \def\pa{\partial} \def\Z{\mathbb{Z}} \newcommand\tpi{{\tilde{\pi}}} \newcommand\om{{\omega}} \newcommand\bfk{{\bf k}} \newcommand\bfl{{\bf l}} \newcommand\bfn{{\bf n}} \newcommand\bfp{{\bf p}} \newcommand\bfq{{\bf q}} \newcommand\bfB{{\bf B}} \newcommand\bfTB{{\bf TB}} \newcommand\bfeta{{\boldsymbol \eta}} \newcommand\divg{{\text{div}}} \newtheorem{thm}{Theorem}[section] \newtheorem{defn}[thm]{Definition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{con}[thm]{Conjecture} \newtheorem{re}[thm]{Remark} \newtheorem{exa}{Example}[section] \newtheorem{pro}[thm]{Proposition} \setlength{\arraycolsep}{0.5mm} \begin{document} \title {\bf Some results on Arakawa-Kaneko, Kaneko-Tsumura functions and related functions} \author{ {Maneka Pallewatta$^{a,}$\thanks{Email: maneka.osh@gmail.com} \quad Ce Xu$^{a,b,}$\thanks{Email: cexu2020@ahnu.edu.cn}}\\[1mm] \small a. Graduate School of Mathematics, Kyushu University, Motooka\\ \small Nishi-ku, Fukuoka 819-0389, Japan\\ \small b. School of Mathematics and Statistics, Anhui Normal University,\\ \small Wuhu 241000, P.R. China\\ [5mm] Dedicated to Professor Masanobu Kaneko on the occasion of his 60th birthday} \date{} \maketitle \noindent{\bf Abstract} Recently, the level two analogues of the multiple polylogarithm function ${\rm A}(k_1,\ldots,k_r;z)$ and of the Arakawa-Kaneko zeta function, $\psi(k_1,\ldots,k_r;s)$, were introduced by M.~Kaneko and H.~Tsumura for $k_1,\ldots,k_r \in \mathbb{Z}_{\ge 1}$. In this paper, we investigate some of their special relations. In particular, we prove some explicit forms of ${\rm A}(k_1,\ldots,k_r;z)$ and $\psi(k_1,\ldots,k_r;s)$. Also, we introduce a level $m$ analogue of the Arakawa-Kaneko zeta functions. \\[2mm] \noindent{\bf Keywords} Arakawa-Kaneko zeta function, Kaneko-Tsumura zeta function, multiple zeta function, multiple $T$-function, polylogarithm. \\[2mm] \noindent{\bf AMS Subject Classifications (2020):} 11B68, 11M32, 11M99. \section{Introduction} We begin with some basic notations. Let us consider a positive index set ${\bfk_r}:= (k_1,\ldots, k_r)$. The quantities $|{\bfk_r}|:=k_1+\cdots+k_r$ and ${\rm dep}({\bfk_r}):=r$ are called the weight and the depth of ${\bfk_r}$, respectively. If $k_r>1$, ${\bfk_r}$ is called \emph{admissible}. For ${\bfk_r}:= (k_1,\ldots, k_r)$, set ${\bfk_0}:=\emptyset$, $({\bfk_r})_{+}:=(k_1,\ldots,k_{r-1},k_r+1)$ and $({\bfk_r})_{-}:=(k_1,\ldots,k_{r-1},k_r-1)$. The subject of this paper is the level two analogue of the Arakawa-Kaneko and related functions, which is a generalisation of the single-variable multiple zeta function.
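To fix ideas with this notation: for the index $\mathbf{k}_3=(1,2,3)$ we have weight $|\mathbf{k}_3|=1+2+3=6$ and depth ${\rm dep}(\mathbf{k}_3)=3$; the index is admissible since $k_3=3>1$, and
\begin{align*}
(\mathbf{k}_3)_{+}=(1,2,4),\qquad (\mathbf{k}_3)_{-}=(1,2,2).
\end{align*}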
The \emph{Arakawa-Kaneko zeta function} (\cite{AM1999}) is defined by \begin{align}\label{a1} \xi(k_1,k_2,\ldots,k_r;s):=\frac{1}{\Gamma(s)} \int\limits_{0}^\infty \frac{t^{s-1}}{e^t-1}{\rm Li}_{k_1,k_2,\ldots,k_r}(1-e^{-t})dt, \end{align} for $k_1,k_2,\ldots,k_{r} \in \mathbb{Z}_{\ge 1}$ and $\Re(s)>0$, where ${\mathrm{Li}}_{{{k_1},{k_2}, \cdots ,{k_r}}}\left( z \right)$ is the \emph{multiple polylogarithm} defined by \begin{align}\label{a2} &{\mathrm{Li}}_{{{k_1},{k_2}, \cdots ,{k_r}}}\left( z \right): = \sum\limits_{1 \le {n_1} < \cdots < {n_r}} {\frac{{{z^{{n_r}}}}}{{n_1^{{k_1}}n_2^{{k_2}} \cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} In a recent paper \cite{KT2018}, Kaneko and Tsumura introduced and studied a new kind of Arakawa-Kaneko type function, \begin{align}\label{Def-eta} \eta(k_1,k_2,\ldots,k_r;s):=\frac{1}{\Gamma(s)} \int\limits_{0}^\infty \frac{t^{s-1}}{1-e^t}{\rm Li}_{k_1,k_2,\ldots,k_r}(1-e^{t})dt. \end{align} We call these the \emph{Kaneko-Tsumura $\eta$-functions}. In the past two decades, the study of Arakawa-Kaneko and related functions has attracted the attention of many mathematicians. Apart from the actual evaluation of the functions, one of the main questions that one sets out to solve is whether or not Arakawa-Kaneko zeta functions can be expressed as linear combinations of \emph{multiple zeta values} (MZVs) (\cite{H1992,DZ1994}) \begin{align}\label{a4} \zeta(k_1,\ldots,k_{r-1},k_r):=\sum\limits_{0<n_1<\cdots<n_r}\frac{1}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}} n_r^{k_r}}, \end{align} and \emph{single-variable multiple zeta functions} \begin{align}\label{a6} \zeta(k_1,\ldots,k_{r-1},s):=\sum\limits_{0<n_1<\cdots<n_r}\frac{1}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}} n_r^{s}}, \end{align} where $\bfk_r$ is an admissible index and $\Re(s)>1$. In \cite{AM1999,KT2018,Ku2010}, for any index ${\bf k}_r:=(k_1,\ldots,k_r)$, the special values of $\xi({\bf k}_r;s)$ at positive integers $s$ are computed analytically and written in terms of multiple zeta values. Further, M.~Kaneko and H.~Tsumura \cite{KT2018} also proved that for a general index ${\bf k}_r$, the function $\xi({\bf k}_r;s)$ can be expressed by multiple zeta functions (though not by explicit formulas). For example, \begin{align}\label{a7} \xi(\{1\}_{r-1},k;s)=&(-1)^{k-1}\sum\limits_{a_1+\cdots+a_k=r \atop \forall a_j\geq 0} \binom{s+a_k-1}{a_k} \zeta(a_1+1,\ldots,a_{k-1}+1,a_k+s)\nonumber\\ &+\sum\limits_{j=0}^{k-2}(-1)^j \zeta(\{1\}_{r-1},k-j)\zeta(\{1\}_{j},s). \end{align} Some related results (e.g. duality formulas) for Arakawa-Kaneko type functions can be found in \cite{KTB2018,KO2018,Y2016}. Here, $\{l\}_m$ denotes the sequence $\underbrace{l,\ldots,l}_{m \text{\;-times}}$. Recently, Kaneko and Tsumura defined the single-variable multiple zeta function of level two as follows. \begin{defn}\label{def:1}(Kaneko, Tsumura \cite{KTA2018}) For $k_1,\ldots,k_{r-1} \in \mathbb{Z}_{\ge 1}$ and $\Re{(s)}>1$, we write \begin{equation}\label{a10} T_0(k_1,\ldots,k_{r-1},s)=\sum_{\substack{0<m_1<\cdots<m_r \\ m_i\equiv i \pmod{2}}} \frac{1}{m_1^{k_1}\cdots m_{r-1}^{k_{r-1}}m_r^s}. \end{equation} Furthermore, as its normalized version, we set \begin{equation}\label{a11} T(k_1,\ldots,k_{r-1},s)=2^r T_0(k_1,\ldots,k_{r-1},s), \end{equation} which is called the multiple $T$-function.
The values $T(k_1,\ldots,k_{r-1},k_r)$ ($k_j \in \mathbb{Z}_{\ge 1}$, $k_r\ge 2$: admissible) are called the multiple $T$-values (MTVs). \end{defn} Based on these functions, Kaneko and Tsumura defined a level two analogue of $\xi(k_1,\ldots,k_r;s)$, which we call the Kaneko-Tsumura $\psi$-function, as follows. \begin{defn}(Kaneko, Tsumura \cite[\S 5]{KTA2018})\label{df-3.3} For an index $\bfk_r$ and $\Re{(s)}>0$, we write \begin{equation}\label{a8} \psi(k_1,\ldots,k_r;s)=\frac{1}{\Gamma(s)}\int_0^\infty t^{s-1} \frac{{\rm A}(k_1,\ldots,k_r;\tanh t/2)}{\sinh(t)} dt, \end{equation} where \begin{align}\label{a9} &{\rm A}(k_1,k_2,\ldots,k_r;z): = 2^r\sum\limits_{1 \le {n_1} < \cdots < {n_r}\atop n_i\equiv i\ {\rm mod}\ 2} {\frac{{{z^{{n_r}}}}}{{n_1^{{k_1}}n_2^{{k_2}} \cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} \end{defn} In particular, if $s\in \mathbb{Z}_{\ge 1}$ then we call \eqref{a8} the Kaneko-Tsumura $\psi$-values. Here, ${\rm A}(k_1,\ldots,k_r;z)$ is $2^r$ times ${\rm Ath}(k_1,\ldots,k_r;z)$, which was introduced in \cite[\S5]{KTA2018}. When $k_r >1$, we see that \begin{equation*} {\rm A}(k_1,\ldots,k_r;1)=T(k_1,\ldots,k_r). \end{equation*} ${\rm A}(\mathbf{k}_r;z)$ satisfies the shuffle relation corresponding to that of multiple zeta values (see \cite{H1992} and \cite{HO2003}). Let $\mathfrak{H} := \mathbb{Q} \langle x,y \rangle$ be the non-commutative polynomial ring in two indeterminates $x$ and $y$. We refer to monomials in $x$ and $y$ as words. We also define the subrings $\mathfrak{H}^1 := \mathbb{Q} +y \mathfrak{H} $ and $\mathfrak{H}^0 := \mathbb{Q} + y \mathfrak{H} x$. For any integer $k >0$, put $z_k=yx^{k-1}$. Then the ring $\mathfrak{H}^1$ is freely generated by the $z_k$ $(k \ge 1)$. When $k \ge 2$, $z_k$ is contained in $\mathfrak{H}^0$, but $\mathfrak{H}^0$ is not freely generated by the $z_k$ $(k \ge 2)$. Now let us define the evaluation map $Z:\mathfrak{H}^0 \to \mathbb{R} $ by \begin{equation} Z(z_{k_1} \cdots z_{k_r}) := {\rm A}(k_1, \ldots , k_r;z). \end{equation} We define the shuffle product $\shuffle$ on $\mathfrak{H}$ inductively by \begin{align*} 1\shuffle w &=w\shuffle1 =w \\ u_1 w_1\shuffle u_2 w_2 &=u_1(w_1\shuffle u_2 w_2)+u_2(u_1 w_1 \shuffle w_2) \end{align*} for any words $w,w_1,w_2 \in \mathfrak{H}$ and $u_1,u_2 \in \{x,y\}$, extended by $\mathbb{Q}$-bilinearity. The shuffle product is commutative and associative. The main purpose of this paper is to study the functions $\psi({\bf k}_r;s)$, ${\rm A}({\bf k}_r;z)$ and MTVs. In particular, we give some explicit formulas for $\psi({\bf k}_r;s)$ in terms of multiple $T$-functions. In $\S2$, we first obtain a new formula for $\psi({\bf k}_r;s)$ corresponding to the one for the $\xi({\bf k}_r;s)$ function. Secondly, we prove some explicit forms of new identities for ${\rm A}({\bf k}_r;z)$ and $\psi({\bf k}_r;s)$ by using iterated integral representations of series. Similarly, we can obtain formulas for ${\rm Li}_{{\bf k}_r}(z)$ and $\xi({\bf k}_r;s)$. Moreover, we prove a combinatorial duality formula for Kaneko-Tsumura $\eta$-values. In $\S3$, we discuss a general duality relation for Kaneko-Tsumura $\psi$-values and give an explicit formula. In $\S4$, we introduce and study a ``level $m$'' analogue of the multiple polylogarithm and the multiple zeta functions. Moreover, we give an equation system and prove that if a solution of this equation system exists, then the level $m$ analogue of $\xi(k_1,\ldots,k_r;s)$ can be defined.
Furthermore, we can then deduce many results corresponding to those for $\xi(k_1,\ldots,k_r;s)$ and $\psi(k_1,\ldots,k_r;s)$. \section{Main results and proofs} \subsection{Relations among the functions $\psi$ and $T$} In this section, we present our main results on the Kaneko-Tsumura zeta functions. We deduce that the Kaneko-Tsumura zeta function can be written as a linear combination of multiple $T$-functions. In order to prove the main results, we establish the following lemmas. \begin{lem}\label{e1}(\cite{KT2018}, cf. \cite{AM1999}) {\rm (i)} For an index $\bfk_r$, \begin{align}\label{b3} \frac{d}{dz}{\mathrm{A}}({{k_1}, \cdots ,k_{r-1},{k_r}}; z)= \left\{ {\begin{array}{*{20}{c}} \frac{1}{z} {\mathrm{A}}({{k_1}, \cdots ,{k_{r-1}},{k_r-1}};z) {\ \ (k_r\geq 2),} \\ {\frac{2}{1-z^2}{\mathrm{A}}({{k_1}, \cdots ,{k_{r-1}}};z)\;\;\;\ \ \ (k_r = 1).} \\ \end{array} } \right. \end{align} {\rm (ii)} For $r \ge 1$, \begin{align}\label{b4} {\rm A}({\{1\}_r};z)=\frac{1}{r!}({\rm A}(1;z))^r=\frac{(-1)^r}{ r!}\log^r\left(\frac{1-z}{1+z}\right). \end{align} \end{lem} Similarly, we can obtain the following lemma. \begin{lem}\label{e2} {\rm (i)} For an index $\bfk_r$, \begin{align} \frac{d}{dz} {\rm A}\left(k_1,\ldots,k_r;\frac{1-z}{1+z}\right)=\left \{ \begin{array}{c} -\frac{2}{1-z^2} {\rm A}\left(k_1,\ldots,k_{r-1},k_r-1;\frac{1-z}{1+z}\right) \; \; \;\;\; \; \;\;\; (k_r \ge 2) \\ -\frac{1}{z} {\rm A}\left(k_1,\ldots,k_{r-1};\frac{1-z}{1+z}\right) \;\;\; \; \;\;\; \; \;\;\;\;\; \; \;\;\; \; \;\;\;\;\; \; \;\;\; \; (k_r = 1). \end{array} \right. \end{align} {\rm (ii)} For $r \ge 1$, \begin{align} {\rm A}\left(\{1\}_r;\frac{1-z}{1+z}\right)=\frac{1}{r!}{\rm A}^r\left(1;\frac{1-z}{1+z}\right)=\frac{(-1)^r}{r!}\log^r z. \end{align} \end{lem} \begin{lem}\label{lem-4.6} For any index ${\bf{k}}_r$, we have \begin{align}\label{eq-4.4} \frac{2}{1-z^2}{\rm A}\left(\{1\}_j;\frac{1-z}{1+z}\right) {\rm A}({\bf{k}}_r;z)=\frac{d}{dz} \left(\sum_{i=0}^j {\rm A}\left(\{1\}_{j-i};\frac{1-z}{1+z}\right) {\rm A}({\bf{k}}_r,i+1;z)\right). \end{align} \end{lem} \it{Proof.}\rm\quad By using Lemma \ref{e1} and Lemma \ref{e2}, we can easily obtain the desired result by induction on $j$. \hfill$\square$ Similar to the Euler-type connection formula for multiple polylogarithm functions in \cite{KT2018}, we obtain the following connection formula for the level two analogue ${\rm A}({\bf k}_r;z)$. \begin{thm}\label{thm-Ath} Let ${\bf k}_r$ be any index. Then we have \begin{equation*} {\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)=\sum_{\mathbf{k}',j \ge 0} C_{\mathbf{k}_r}(\mathbf{k}';j) {\rm A}\left(\{1\}_j;\frac{1-z}{1+z}\right) {\rm A}(\mathbf{k}';z), \end{equation*} where the sum on the right runs over indices ${\bf k'}$ and integers $j\geq 0$ that satisfy $|{\bf k'}|+j\leq |{\bf k}_r|$, and $C_{{\bf k}_r}({\bf k'};j)$ is a $\mathbb{Q}$-linear combination of multiple $T$-values of weight $|{\bf k}_r|- |{\bf k'}|-j$. We understand ${\rm A}(\emptyset;z)=1$ and $|\emptyset|=0$ for the empty index $\emptyset$, and the constant $1$ is regarded as a multiple $T$-value of weight $0$. \end{thm} \it{Proof.}\rm\quad We prove this by induction on the weight $|\mathbf{k}_r|$.
When $\mathbf{k}_r=(1)$, the trivial identity \begin{equation*} {\rm A}\left(1;\frac{1-z}{1+z}\right)={\rm A}\left(1;\frac{1-z}{1+z}\right) \end{equation*} itself gives the desired form, thus $C_{(1)}(\emptyset;0)=C_{(1)}((1);0)=0$ and $C_{(1)}(\emptyset;1)=1$. Suppose the weight $|\mathbf{k}_r|>1$ and assume the statement holds for any index of weight less than $|\mathbf{k}_r|$. For $\mathbf{k}_r=(k_1,\ldots,k_r)$, set $(\mathbf{k}_r)_-=(k_1,\ldots,k_{r-1},k_r-1)$. First, assume that $\mathbf{k}_r$ is admissible. Then by the differential relation and the induction hypothesis, we get \begin{align}\label{eq-2-2} \frac{d}{dz}{\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)&=-\frac{2}{1-z^2}{\rm A}\left((\mathbf{k}_r)_-;\frac{1-z}{1+z}\right) \nonumber \\ &=-\frac{2}{1-z^2}\sum_{\mathbf{l},j \ge 0} C_{(\mathbf{k}_r)_-}(\mathbf{l};j) {\rm A}\left(\{1\}_j;\frac{1-z}{1+z}\right) {\rm A}(\mathbf{l};z). \end{align} Let the depth of $\mathbf{l}$ be $d$. By substituting \eqref{eq-4.4} from Lemma \ref{lem-4.6} into \eqref{eq-2-2} and integrating, we get \begin{equation*} {\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)=-\sum_{\mathbf{l} ,j \ge 0} C_{(\mathbf{k}_r)_-}(\mathbf{l};j) \left(\sum_{i=0}^j {\rm A}\left(\{1\}_{j-i};\frac{1-z}{1+z}\right) {\rm A}(\mathbf{l},i+1;z)\right)+C, \end{equation*} where $C$ is a constant. Since \begin{equation*} \lim_{z\to 0} {\rm A}\left(\{1\}_{j-i};\frac{1-z}{1+z}\right) {\rm A}(\mathbf{l},i+1;z)=0, \end{equation*} we have $C=T(\mathbf{k}_r)$. Now we can obtain the desired result. In order to prove the non-admissible case, we recall that ${\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)$ satisfies the shuffle relation (cf. \cite{IKZ2006}). Suppose $\mathbf{k}_r$ is not admissible. Then we can write ${\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)$ as a polynomial in ${\rm A}\left(1;\frac{1-z}{1+z}\right)$, with each coefficient of ${\rm A}^i\left(1;\frac{1-z}{1+z}\right)$ being a linear combination of the ${\rm A}\left({\mathbf{k}}';\frac{1-z}{1+z}\right)$ with $\mathbf{k}'$ admissible. Write this polynomial as \begin{equation*} {\rm A}\left(\mathbf{k}_r;\frac{1-z}{1+z}\right)=\sum_{j=0}^m a_j\cdot{\rm A}^j\left(1;\frac{1-z}{1+z}\right) . \end{equation*} Then each $a_j$ can be written in the desired form (admissible case). We know that \begin{equation*} {\rm A}^j\left(1;\frac{1-z}{1+z}\right)=j! {\rm A}\left(\{1\}_j;\frac{1-z}{1+z}\right) \end{equation*} and \begin{equation*} {\rm A}\left(\{1\}_i;\frac{1-z}{1+z}\right) {\rm A}\left(\{1\}_j;\frac{1-z}{1+z}\right) =\binom{i+j}{i}{\rm A}\left(\{1\}_{i+j};\frac{1-z}{1+z}\right). \end{equation*} Hence $a_j\cdot{\rm A}^j\left(1;\frac{1-z}{1+z}\right)$ can be written in the claimed form, and the proof is done. \hfill$\square$ We obtain a level two version of \cite[Proposition 2]{AM1999}, which will be needed in proving our main results, as follows. \begin{pro}\label{prop-2-1} \begin{enumerate} \item For $\Re {(s)}>1$, \begin{equation*} T(k_1,\ldots,k_{n-1},s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{t^{s-1}}{\sinh(t)}{{\rm A}(k_1,\ldots,k_{n-1};e^{-t})} dt.
\end{equation*} \item For $\Re {(s)}>1$, $n\ge2$, $j\ge0$, \begin{equation*} \int_0^\infty t^{s+j-1}{{\rm A}(k_1,\ldots,k_{n-1};e^{-t})} dt=\Gamma(s+j)T(k_1,\ldots,k_{n-2},s+j+k_{n-1}). \end{equation*} \end{enumerate} \end{pro} The proof is similar to the proof of \cite[Proposition 2]{AM1999}; therefore, we omit it. From Theorem \ref{thm-Ath}, we can obtain formulas expressing $\psi(\mathbf{k}_r;s)$ in terms of multiple $T$-functions. \begin{thm}\label{thm-psi} Let $\mathbf{k}_r$ be any index. The function $\psi(\mathbf{k}_r;s)$ can be written in terms of multiple $T$-functions as \begin{equation*} \psi(\mathbf{k}_r;s)=\sum_{\mathbf{k}',j \ge 0} C_{\mathbf{k}_r}(\mathbf{k}';j)\binom{s+j-1}{j}T(\mathbf{k}';s+j). \end{equation*} Here, the sum is over indices $\mathbf{k}'$ and integers $j\ge0$ that satisfy $|\mathbf{k}'|+j\le |\mathbf{k}_r|$, and $C_{\mathbf{k}_r}(\mathbf{k}';j)$ is the same as in Theorem \ref{thm-Ath}. \end{thm} \it{Proof.}\rm\quad Let $r$ and $l$ be the depths of $\mathbf{k}_r$ and $\mathbf{k}'$, respectively. Putting $z=e^{-t}$ in Theorem \ref{thm-Ath} gives \begin{equation*} {\rm A}\left(\mathbf{k}_r;\frac{1-e^{-t}}{1+e^{-t}}\right)=\sum_{\mathbf{k}',j \ge 0} C_{\mathbf{k}_r}(\mathbf{k}';j) {\rm A}\left(\{1\}_j;\frac{1-e^{-t}}{1+e^{-t}}\right) {\rm A}(\mathbf{k}';e^{-t}). \end{equation*} By using Lemma \ref{e1} we can write the above equation as \begin{equation}\label{eq-2-3} {\rm A}\left(\mathbf{k}_r;\tanh t/2\right)=\sum_{\mathbf{k}',j \ge 0} C_{\mathbf{k}_r}(\mathbf{k}';j) \frac{t^j}{j!} {\rm A}(\mathbf{k}';e^{-t}). \end{equation} Recall the definition \begin{align*} \psi(\mathbf{k}_r;s)&=\frac{1}{\Gamma(s)}\int_0^\infty t^{s-1} \frac{{\rm A}(\mathbf{k}_r;\tanh t/2)}{\sinh(t)} dt. \end{align*} We substitute equation \eqref{eq-2-3} into it and apply Proposition \ref{prop-2-1} to obtain the desired formula for $\psi(\mathbf{k}_r;s)$. \hfill$\square$ \subsection{Some explicit forms of Arakawa-Kaneko and Kaneko-Tsumura zeta functions} Theorem \ref{thm-Ath} and Theorem \ref{thm-psi} can be made explicit for some special arguments. In this section, we obtain some explicit forms of Theorem \ref{thm-Ath} and Theorem \ref{thm-psi}. Let us consider the integral representations of the multiple polylogarithm ${\mathrm{Li}}_{{\bf k}_r}(z)$ and the level two multiple polylogarithm ${\rm A}({\bf k}_r;z)$: \begin{align*} &{\mathrm{Li}}_{{\bf k}_r}\left( z \right)=\int_{0<t_1<t_2<\cdots<t_{k}<z}\frac{dt_1}{1-t_1}\underbrace{\frac{dt_2}{t_2}\cdots\frac{dt_{k_1}}{t_{k_1}}}_{(k_1-1)\text{-times}}\cdots \frac{dt_{k-k_r+1}}{1-t_{k-k_r+1}}\underbrace{\frac{dt_{k-k_r+2}}{t_{k-k_r+2}}\cdots\frac{dt_{k}}{t_{k}}}_{(k_r-1)\text{-times}} \end{align*} and \begin{align*} {\rm A}({\bf k}_r;z) =\int_{0<t_1<t_2<\cdots<t_{k}<z}\frac{2dt_1}{1-t_1^2}&\underbrace{\frac{dt_2}{t_2}\cdots\frac{dt_{k_1}}{t_{k_1}}}_{(k_1-1)\text{-times}}\cdots \frac{2dt_{k-k_r+1}}{1-t_{k-k_r+1}^2}\underbrace{\frac{dt_{k-k_r+2}}{t_{k-k_r+2}}\cdots\frac{dt_{k}}{t_{k}}}_{(k_r-1)\text{-times}}, \end{align*} where ${\bf k}_r=({k_1}, \cdots ,k_{r-1},{k_r})$ and $k:=|{\bf k}_r|$.
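As a quick illustration of the second representation, take $r=1$ and $k_1=2$: then
\begin{align*}
{\rm A}(2;z)=\int_{0<t_1<t_2<z}\frac{2\,dt_1}{1-t_1^2}\,\frac{dt_2}{t_2}
=\int_0^z \log\left(\frac{1+t_2}{1-t_2}\right)\frac{dt_2}{t_2}
=2\sum_{n\geq 1,\ n\ \text{odd}}\frac{z^n}{n^2},
\end{align*}
in agreement with the series definition \eqref{a9} and the differential relation \eqref{b3}.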
By using the formula \begin{align}\label{b5} \int_{a<t_1<\cdots<t_r<b}\underbrace{\frac{dt_1}{t_1}\cdots\frac{dt_r}{t_r}}_{r\text{-times}} =\frac{1}{r!}\left(\log \frac{b}{a}\right)^r, \end{align} we can write the above integral expressions as \begin{align}\label{2.6} {\mathrm{Li}}_{{\bf k}_r}(z) =\frac{1}{\prod_{i=1}^r(k_i-1)!}\int_{0<t_1<t_2<\cdots<t_r<z}\frac{dt_1}{1-t_1}&\left(\log\frac{t_2}{t_1}\right)^{k_1-1}\cdots \frac{dt_r}{1-t_r}\left(\log\frac{z}{t_r}\right)^{k_r-1} \end{align} and \begin{align}\label{2.7} {\rm A}({\bf k}_r;z) =\frac{2^r}{\prod_{i=1}^r(k_i-1)!}\int_{0<t_1<t_2<\cdots<t_r<z}\frac{dt_1}{1-t_1^2}&\left(\log\frac{t_2}{t_1}\right)^{k_1-1}\cdots \frac{dt_r}{1-t_r^2}\left(\log\frac{z}{t_r}\right)^{k_r-1}, \end{align} respectively. Then, applying the changes of variables $t_j \mapsto 1-t_{r+1-j}$ and $t_j \mapsto \frac{1-t_{r+1-j}}{1+t_{r+1-j}}\quad (j=1,2,\ldots,r)$ to \eqref{2.6} and \eqref{2.7}, respectively, we obtain \begin{align}\label{b8} {\mathrm{Li}}_{{\bf k}_r}\left( z \right)=\left\{\prod\limits_{j=1}^r\frac{(-1)^{k_j-1}}{(k_j-1)!}\right\}\int\nolimits_{E_r(z)} &\left\{\prod\limits_{j=1}^{r-1}\frac{\log^{k_j-1}\left(\frac{1-t_{r+1-j}}{1-t_{r-j}}\right)}{t_{r+1-j}}dt_{r+1-j}\right\}\nonumber\\ &\times\frac{\log^{k_r-1}\left(\frac{1-t_1}{z}\right)}{t_1}dt_1, \end{align} and \begin{align}\label{b9} &{\mathrm{A}}({\bf k}_r; z )=\left\{\prod\limits_{j=1}^r\frac{(-1)^{k_j-1}}{(k_j-1)!}\right\}\nonumber\\&\times\int\nolimits_{F_r(z)} \left\{\prod\limits_{j=1}^{r-1}\frac{\log^{k_j-1}\left(\frac{(1-t_{r+1-j})(1+t_{r-j})}{(1+t_{r+1-j})(1-t_{r-j})}\right)}{t_{r+1-j}}dt_{r+1-j}\right\}\frac{\log^{k_r-1}\left(\frac{1-t_1}{(1+t_1)z}\right)}{t_1}dt_1, \end{align} where $$E_{r}(z):=\{(t_1,\ldots,t_r)\mid 1-z<t_1<\cdots<t_r<1\},$$ $$F_{r}(z):=\left\{(t_1,\ldots,t_r)\mid \frac{1-z}{1+z}<t_1<\cdots<t_r<1\right\}.$$ For convenience (later use), let $E'_{r}(z):=E_{r}(1-z)$ and $F'_{r}(z):=F_{r}\left(\frac{1-z}{1+z}\right)$. Let us consider the following lemma, which will be needed in proving our main results in this section. \begin{lem}\label{e3} For integers $m\ge 0$ and $n>0$, we have \begin{align}\label{e4} \int\limits_{0}^z \log^m(t)\log^n\left(\frac{1-t}{1+t}\right)\frac{dt}{t}=(-1)^nn!\sum\limits_{l=0}^m l!\binom{m}{l} (-1)^l (\log(z))^{m-l} {\rm A}(\{1\}_{n-1},l+2;z). \end{align} In particular, \begin{align}\label{e5} \int\limits_{0}^1 \log^m(t)\log^n\left(\frac{1-t}{1+t}\right)\frac{dt}{t}&=(-1)^{n+m}n!m!T(\{1\}_{n-1},m+2). \end{align} \end{lem} \it{Proof.}\rm\quad From Lemma \ref{e1}, we have \begin{align*} \int\limits_{0}^z \log^m(t)\log^n\left(\frac{1-t}{1+t}\right)\frac{dt}{t}&=(-1)^n n!\int\limits_{0}^z \frac{\log^m(t){\rm A}(\{1\}_n;t)}{t}dt\\ &=(-1)^n n!\sum\limits_{l=0}^m l!\binom{m}{l} (-1)^l (\log(z))^{m-l} {\rm A}(\{1\}_{n-1},l+2;z).
\end{align*} By letting $z\rightarrow 1$ in the above equation, we get \begin{align*} \int\limits_{0}^1 \log^m(t)\log^n\left(\frac{1-t}{1+t}\right)\frac{dt}{t}&=(-1)^{n+m}n!m!{\rm A}(\{1\}_{n-1},m+2;1)\nonumber\\ &=(-1)^{n+m}n!m!T(\{1\}_{n-1},m+2). \end{align*} This completes the proof of the lemma. \hfill$\square$ \begin{thm}\label{thm2.4} For any positive integers $j$ and $r$ with $j\leq r$, \begin{align*} {\rm A}\left(\{1\}_{j-1},2,\{1\}_{r-j};\frac{1-z}{1+z}\right)=&\sum\limits_{i=0}^{r-j}(-1)^{i}\binom{i+j}{i} T(i+j+1){\rm A}\left(\{1\}_{r-j-i};\frac{1-z}{1+z}\right)\nonumber\\ &+(-1)^{r-j-1}\sum\limits_{l=r-j}^{r}\binom{l}{r-j} {\rm A}\left(\{1\}_{r-l};\frac{1-z}{1+z}\right) {\rm A}(l+1;z). \end{align*} \end{thm} \it{Proof.}\rm\quad We prove this directly; the proof of Theorem \ref{thm2.3} below is analogous. Letting $k_1=\cdots=k_{j-1}=1$, $k_j=2$, $k_{j+1}=\cdots=k_r=1$ and replacing $z$ by $\frac{1-z}{1+z}$ in \eqref{b9}, and using \eqref{b5}, we have \begin{align}\label{b16} &{\rm A}\left(\{1\}_{j-1},2,\{1\}_{r-j};\frac{1-z}{1+z}\right)\nonumber\\=&-\int\nolimits_{F'_r(z)}\frac{\log\left(\frac{(1-t_{r+1-j})(1+t_{r-j})}{(1+t_{r+1-j})(1-t_{r-j})}\right)}{t_1\cdots t_r}dt_1\cdots dt_r\nonumber\\ =&\sum\limits_{i=0}^{r-j} \frac{(-1)^{r-i}}{(r-j-i)!} \frac{i+j}{i!j!} \log^{r-j-i}(z) \int\limits_{z}^1 \frac{\log^{i+j-1}(t)\log\left(\frac{1-t}{1+t}\right)}{t}dt. \end{align} Substituting \eqref{e4} and \eqref{e5} with $n=1$ into \eqref{b16}, we get \begin{align} {\rm A}&\left(\{1\}_{j-1},2,\{1\}_{r-j};\frac{1-z}{1+z}\right)\nonumber\\ &=\sum\limits_{i=0}^{r-j} \frac{(-1)^{r+j}}{(r-j-i)!} \binom{i+j}{i} \log^{r-j-i}(z) T(i+j+1)\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\sum\limits_{i=0}^{r-j}\sum\limits_{l=0}^{i+j-1} \frac{(-1)^{r-i+l}}{(r-l-1)!} \binom{i+j}{i}\binom{r-l-1}{r-j-i} \log^{r-l-1}(z) {\rm A}(l+2;z). \end{align} By substituting Lemma \ref{e2} into the above equation, we get \begin{align}\label{eq:4.12} {\rm A}&\left(\{1\}_{j-1},2,\{1\}_{r-j};\frac{1-z}{1+z}\right)\nonumber\\ &=\sum\limits_{i=0}^{r-j} (-1)^{i} \binom{i+j}{i}T(i+j+1) {\rm A}\left(\{1\}_{r-j-i};\frac{1-z}{1+z}\right) \nonumber \\ &\quad+\sum\limits_{i=0}^{r-j}\sum\limits_{l=0}^{i+j-1} (-1)^{i+1} \binom{i+j}{i}\binom{r-l-1}{r-j-i} {\rm A}\left(\{1\}_{r-l-1};\frac{1-z}{1+z}\right) {\rm A}(l+2;z). \end{align} In order to obtain the desired formula, let us simplify the last term as follows: \begin{align*} \sum\limits_{i=0}^{r-j}\sum\limits_{l=0}^{i+j-1} &(-1)^{i+1} \binom{i+j}{i}\binom{r-l-1}{r-j-i} {\rm A}\left(\{1\}_{r-l-1};\frac{1-z}{1+z}\right) {\rm A}(l+2;z) \\ &=\sum\limits_{i=0}^{r-j}\sum\limits_{l=1}^{i+j} (-1)^{i+1} \binom{i+j}{i}\binom{r-l}{r-j-i} {\rm A}\left(\{1\}_{r-l};\frac{1-z}{1+z}\right) {\rm A}(l+1;z) \\ &=\sum\limits_{l=1}^{r}\sum\limits_{i=0}^{r-j} (-1)^{i+1} \binom{i+j}{i}\binom{r-l}{r-j-i} {\rm A}\left(\{1\}_{r-l};\frac{1-z}{1+z}\right) {\rm A}(l+1;z). \end{align*} Here, \begin{align*} \binom{r-l}{r-j-i}= \binom{r-l}{i+j-l}.
\end{align*} By using the binomial identity 176 in \cite{michael}, we get \begin{align*} \sum\limits_{i=0}^{r-j} (-1)^{i+1} \binom{i+j}{i}\binom{r-l}{i+j-l}= \left\{ {\begin{array}{*{20}{c}} 0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;{\ \ (j < r-l),} \\ {(-1)^{r-j-1}\binom{l}{r-j}\;\;\;(r-l\leq j\leq r).} \\ \end{array} } \right. \end{align*} Substituting this into the above equation, we get \begin{align*} \sum\limits_{i=0}^{r-j}\sum\limits_{l=0}^{i+j-1} &(-1)^{i+1} \binom{i+j}{i}\binom{r-l-1}{r-j-i} {\rm A}\left(\{1\}_{r-l-1};\frac{1-z}{1+z}\right) {\rm A}(l+2;z) \\ &=\sum\limits_{l=r-j}^{r}(-1)^{r-j-1}\binom{l}{r-j} {\rm A}\left(\{1\}_{r-l};\frac{1-z}{1+z}\right) {\rm A}(l+1;z). \end{align*} By substituting this into equation \eqref{eq:4.12}, we get the desired result.\hfill$\square$ Similarly, using \eqref{b5} and \eqref{b8}, we can obtain the following formula for the multiple polylogarithm function. \begin{thm}\label{thm2.3} For any positive integers $j$ and $r$ with $j\leq r$, \begin{align}\label{b11} {\rm Li}_{\{1\}_{j-1},2,\{1\}_{r-j}}(1-z)=&\sum\limits_{i=0}^{r-j}(-1)^{i}\binom{i+j}{i} \zeta(i+j+1){\rm Li}_{\{1\}_{r-j-i}}(1-z)\nonumber\\ &+(-1)^{r-j-1}\sum\limits_{l=r-j}^{r}\binom{l}{r-j} {\rm Li}_{\{1\}_{r-l}}(1-z) {\rm Li}_{l+1}(z). \end{align} \end{thm} \it{Proof.}\rm\quad The proof of Theorem \ref{thm2.3} is similar to the proof of Theorem \ref{thm2.4} and is thus omitted. We leave the details to the interested reader.\hfill$\square$ In order to prove the next result, we consider the following shuffle product identity. \begin{lem}\label{lm-4.14} For integers $m, n \ge1$, we have \begin{align*} \sum\limits_{j=1}^{m}(-1)^j (y^{m-j} \shuffle y^j x^n)=-\sum_{\substack{\alpha_1+\cdots+\alpha_m=m+n, \forall \alpha_i \ge 1 }} yx^{\alpha_1-1} \cdots yx^{\alpha_{m-1}-1}yx^{\alpha_{m}-1}. \end{align*} \end{lem} \it{Proof.}\rm\quad Consider the left-hand side of the above equation: \begin{align}\label{4.17} \sum\limits_{j=1}^{m}(-1)^j& y^{m-j} \shuffle y^j x^n \nonumber\\ &=(-1)^m y^mx^n+\sum\limits_{j=1}^{m-1} (-1)^j \left( y(y^{m-j-1} \shuffle y^j x^n)+y(y^{m-j} \shuffle y^{j-1} x^n) \right) \nonumber\\ &=(-1)^m y^mx^n+\sum\limits_{j=2}^{m} (-1)^{j-1} y(y^{m-j} \shuffle y^{j-1} x^n)+ \sum\limits_{j=1}^{m-1}(-1)^j y(y^{m-j} \shuffle y^{j-1} x^n) \nonumber\\ &=(-1)^m y^mx^n+(-1)^{m-1} y^mx^n-y(y^{m-1} \shuffle x^n) \nonumber \\ &=-y(y^{m-1} \shuffle x^n) . \end{align} By expanding this last shuffle product, we obtain the desired result.\hfill$\square$ By using the above lemma, we can obtain the following formula for ${\rm A}(\mathbf{k}_r;z)$. \begin{pro}\label{2.8} For $n,m\in \mathbb{N}$, we have \begin{align}\label{b21} \sum\limits_{j=1}^{m}(-1)^j{\rm A}(\{1\}_{m-j};z){\rm A}(\{1\}_{j-1},n+1;z)=-\sum_{\substack{|{\bf{k}}'|=m+n, \forall k_i \ge 1 \\d({\bf{k}}')=m}}{\rm A}({\bf{k}}';z). \end{align} \end{pro} \it{Proof.}\rm\quad We know that ${\rm A}(\mathbf{k}_r;z)$ satisfies the shuffle relation. Using Lemma \ref{lm-4.14}, we can obtain the desired result.
\begin{thm}\label{thm2.5} For any positive integers $r$ and $k$, \begin{align*} {\rm A}\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right) &=\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^{k-j}T(\{1\}_{j},r+1){\rm A}(\{1\}_{k-2-j};z)\nonumber\\ &\quad +(-1)^{k-1} \displaystyle\!sum_{\substack{a_1+\cdots+a_k=r \\ \forall a_j \ge 0}}{\rm A}\left(\{1\}_{a_k};\displaystyle\!frac{1-z}{1+z}\right){\rm A}(a_1+1,\cdots,a_{k-1}+1;z) . \end{align*} \end{thm} \it{Proof.}\rm\quad Setting $k_1=\cdots=k_{r-1}=1$, $k_r=k$ in (\ref{b9}) and replacing $z$ by $\displaystyle\!frac{1-z}{1+z}$, we get \begin{align*} {\rm A}\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right)&=\displaystyle\!frac{1}{(k-1)!}\displaystyle\!int\nolimits_{F'_r(z)}\log^{k-1}\displaystyle\!frac{(1-z)(1+t_1)}{(1+z)(1-t_1)}\displaystyle\!frac{dt_1}{t_1}\cdots \displaystyle\!frac{dt_r}{t_r}\nonumber\\ &=\displaystyle\!sum\limits_{j=1}^{k-1} \displaystyle\!frac{(-1)^j}{(k-1-j)!j!}\log^{k-1-j}\left(\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!int\nolimits_{F'_r(z)} \log^j\left(\displaystyle\!frac{1-t_1}{1+t_1}\right)\displaystyle\!frac{dt_1}{t_1}\cdots \displaystyle\!frac{dt_r}{t_r}\nonumber \\ &\;\;\;\;\;\;\quad+\displaystyle\!frac{1}{(k-1)!}\log^{k-1}\left(\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!int\limits_{F'_r(z)} \displaystyle\!frac{dt_1}{t_1}\cdots \displaystyle\!frac{dt_r}{t_r}. \end{align*} By using (\ref{b5}), we get \begin{align}\label{b20} {\rm A}\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right)&=\displaystyle\!sum\limits_{j=1}^{k-1} \displaystyle\!frac{(-1)^{r-1+j}}{(k-1-j)!j!(r-1)!}\log^{k-1-j}\left(\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!int\nolimits_{z}^1 \log^{r-1}(t)\log^j\left(\displaystyle\!frac{1-t}{1+t}\right)\displaystyle\!frac{dt}{t}\nonumber \\ &\;\;\;\;\;\;\quad+\displaystyle\!frac{(-1)^r}{(k-1)!r!}\log^{k-1}\left(\displaystyle\!frac{1-z}{1+z}\right)\log^r(z). \end{align} Substituting (\ref{e4}) and (\ref{e5}) into the above equation, we get \begin{align} {\rm A}&\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right)\nonumber\\ &=\displaystyle\!sum\limits_{j=1}^{k-1} \displaystyle\!frac{1}{(k-1-j)!} \log^{k-1-j}\left(\displaystyle\!frac{1-z}{1+z}\right) T(\{1\}_{j-1},r+1)\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;+\displaystyle\!sum\limits_{j=1}^{k-1}\displaystyle\!sum\limits_{l=0}^{r-1} \displaystyle\!frac{(-1)^{r-l}}{(k-1-j)!(r-1-l)!} \log^{k-1-j}\left(\displaystyle\!frac{1-z}{1+z}\right)\log^{r-l-1}(z) {\rm A}(\{1\}_{j-1},l+2;z)\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;\quad+\displaystyle\!frac{(-1)^r}{(k-1)!r!}\log^{k-1}\left(\displaystyle\!frac{1-z}{1+z}\right)\log^r(z). \end{align} Applying Lemma \ref{e2} to the above equation, we get \begin{align*} {\rm A}&\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right)\\ &=(-1)^{k-1}{\rm A}(\{1\}_{k-1};z){\rm A}\left(\{1\}_r;\displaystyle\!frac{1-z}{1+z}\right)+\displaystyle\!sum\limits_{j=1}^{k-1} (-1)^{k-1-j} {\rm A}(\{1\}_{k-1-j};z) T(\{1\}_{j-1},r+1) \end{align*} \begin{align*} \;\;\;\;\;\;\;\;\;\;\;+\displaystyle\!sum\limits_{j=1}^{k-1}\displaystyle\!sum\limits_{i=0}^{r-1} (-1)^{k-j}{\rm A}(\{1\}_{k-1-j};z) {\rm A}\left(\{1\}_{r-i-1};\displaystyle\!frac{1-z}{1+z}\right) {\rm A}(\{1\}_{j-1},i+2;z).
\end{align*} This can be written as \begin{align}\label{eq:4.16} {\rm A}&\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right)\nonumber\\ &=\displaystyle\!sum\limits_{j=1}^{k-1} (-1)^{k-1-j}T(\{1\}_{j-1},r+1){\rm A}(\{1\}_{k-1-j};z)\nonumber\\ &\;\;\;\;\;\;\;\;+(-1)^k\displaystyle\!sum\limits_{i=0}^{r-1}{\rm A}\left(\{1\}_i;\displaystyle\!frac{1-z}{1+z}\right)\Bigg(\displaystyle\!sum\limits_{j=1}^{k-1}(-1)^j{\rm A}(\{1\}_{k-1-j};z){\rm A}(\{1\}_{j-1},r+1-i;z) \Bigg)\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;\;+(-1)^{k-1} {\rm A}\left(\{1\}_r;\displaystyle\!frac{1-z}{1+z}\right){\rm A}(\{1\}_{k-1};z). \end{align} Setting $m=k-1$ and $n=r-i$ in Proposition \ref{2.8}, we can write the inner summation of the second term of equation (\ref{eq:4.16}) as below: \begin{align*} \displaystyle\!sum\limits_{j=1}^{k-1}(-1)^j{\rm A}(\{1\}_{k-1-j};z){\rm A}(\{1\}_{j-1},r+1-i;z)=-\displaystyle\!sum_{\substack{|{\bf{k}}'|=k-1+r-i, \forall k_i \ge 1 \\d({\bf{k}}')=k-1}}{\rm A}({\bf{k}}';z). \end{align*} By substituting this into equation (\ref{eq:4.16}), we get \begin{align} {\rm A}\left(\{1\}_{r-1},k;\displaystyle\!frac{1-z}{1+z}\right) &=\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^{k-j}T(\{1\}_{j},r+1){\rm A}(\{1\}_{k-2-j};z)\nonumber\\ &\quad+(-1)^{k-1}\displaystyle\!sum\limits_{i=0}^{r-1}{\rm A}\left(\{1\}_i;\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!sum_{\substack{|{\bf{k}}'|=k-1+r-i, \forall k_i \ge 1 \\d({\bf{k}}')=k-1}}{\rm A}({\bf{k}}';z)\nonumber\\ &\quad+(-1)^{k-1} {\rm A}\left(\{1\}_r;\displaystyle\!frac{1-z}{1+z}\right){\rm A}(\{1\}_{k-1};z)\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^{k-j}T(\{1\}_{j},r+1){\rm A}(\{1\}_{k-2-j};z)\nonumber\\ &\quad+(-1)^{k-1}\displaystyle\!sum\limits_{i=0}^{r}{\rm A}\left(\{1\}_i;\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!sum_{\substack{|{\bf{k}}'|=k-1+r-i, \forall k_i \ge 1 \\d({\bf{k}}')=k-1}}{\rm A}({\bf{k}}';z). \end{align} We can write the second term of the above equation as \begin{align} (-1)^{k-1} \displaystyle\!sum_{\substack{a_1+\cdots+a_k=r \\ \forall a_j \ge 0}}{\rm A}\left(\{1\}_{a_k};\displaystyle\!frac{1-z}{1+z}\right){\rm A}(a_1+1,\cdots,a_{k-1}+1;z). \end{align} From this we can obtain the desired result.\hfill$\square$ By setting $j=1$ in Theorems \ref{thm2.4} and \ref{thm2.3}, we obtain the following corollaries, respectively. \begin{cor}(\cite{KTB2018}) For any positive integer $r$, \begin{align}\label{b25} {\rm A}\left(2,\{1\}_{r-1};\displaystyle\!frac{1-z}{1+z}\right)&=(-1)^rr{\rm A}(r+1;z)-(-1)^r{\rm A}(r;z)\log(z)\nonumber\\ &\quad-(-1)^r\displaystyle\!sum\limits_{i=0}^{r-1}\displaystyle\!frac{i+1}{(r-1-i)!}T(i+2)\log^{r-1-i}(z). \end{align} \end{cor} \begin{cor}(\cite{KTA2018}) For any positive integer $r$, \begin{align}\label{b24} {\rm Li}_{2,\{1\}_{r-1}}(1-z)&=(-1)^rr{\rm Li}_{r+1}(z)-(-1)^r{\rm Li}_r(z)\log(z)\nonumber\\ &\quad-(-1)^r\displaystyle\!sum\limits_{i=0}^{r-1}\displaystyle\!frac{i+1}{(r-1-i)!}\zeta(i+2)\log^{r-1-i}(z). \end{align} \end{cor} Now, accordingly, we obtain explicit formulas for $\psi({\bf k}_r;s)$ and $\xi({\bf k}_r;s)$. \begin{thm}\label{th:4.11} For positive integers $j$ and $r$ with $j\leq r$, and for $\Re(s)>1$, \begin{align*} \psi&(\{1\}_{j-1},2,\{1\}_{r-j};s)\\ &=\displaystyle\!sum\limits_{i=0}^{r-j} (-1)^i \binom{i+j}{i} \binom{s+r-i-j-1}{r-i-j}T(i+j+1)T(s+r-i-j)\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;\;+(-1)^{r-j-1}\displaystyle\!sum\limits_{l=r-j}^{r} \binom{l}{r-j}\binom{s+r-l-1}{r-l}T(l+1,s+r-l). \end{align*} \end{thm} \it{Proof.}\rm\quad Let us consider Theorem \ref{thm2.4}.
\begin{align}\label{4.15} {\rm A}\left(\{1\}_{j-1},2,\{1\}_{r-j};\displaystyle\!frac{1-z}{1+z}\right)=&\displaystyle\!sum\limits_{i=0}^{r-j}(-1)^{i}\binom{i+j}{i} T(i+j+1){\rm A}\left(\{1\}_{r-j-i};\displaystyle\!frac{1-z}{1+z}\right)\nonumber\\ &+(-1)^{r-j-1}\displaystyle\!sum\limits_{l=r-j}^{r}\binom{l}{r-j} {\rm A}\left(\{1\}_{r-l};\displaystyle\!frac{1-z}{1+z}\right) {\rm A}(l+1;z). \end{align} Now we can see that the right-hand side of the above equation is in the form of Theorem \ref{thm-Ath}. We can write each term of equation (\ref{4.15}) in the form of Theorem \ref{thm-psi}; this readily gives the desired result. \hfill$\square$ Similarly, we can obtain the formula for $\xi({\bf k}_r;s)$ as follows. \begin{thm}\label{thm2.12} For positive integers $j$ and $r$ with $j\leq r$, and for $\Re(s)>1$, \begin{align}\label{b29} \xi(\{1\}_{j-1},2,\{1\}_{r-j};s)=&\displaystyle\!sum\limits_{i=0}^{r-j} (-1)^i \binom{i+j}{i} \binom{s+r-i-j-1}{r-i-j}\zeta(i+j+1)\zeta(s+r-i-j)\nonumber\\ &+(-1)^{r-j-1}\displaystyle\!sum\limits_{l=r-j}^{r} \binom{l}{r-j}\binom{s+r-l-2}{r-l-1}\zeta(l+2,s+r-l-1). \end{align} \end{thm} Note that the explicit formulas of $\xi(2,\{1\}_{r-1};s)$ and $\psi(2,\{1\}_{r-1};s)$ were given by Kaneko and Tsumura in \cite{KT2018} and \cite{KTA2018}, respectively. Finally, we end this section with the following theorem. \begin{thm}\label{thm4} For any positive integers $k,p$ and $r$, we have \begin{align}\label{bb2} \xi(\{1\}_{r-1},k;p)=\displaystyle\!sum_{j=1}^r \displaystyle\!sum_{k_1+\cdots+k_j=k+j-1,\atop k_1,\ldots,k_j\geq 1} (-1)^{j-1} \binom{p+r-j-1}{p-1} \xi(k_1,k_2,\ldots,k_j;p+r-j-1), \end{align} \begin{align}\label{bb3} \displaystyle\!sum_{k_1+\cdots+k_r=k+r-1,\atop k_1,\ldots,k_r\geq 1} \xi(k_1,k_2,\ldots,k_r;p)=\displaystyle\!sum_{j=1}^r (-1)^{j-1}\binom{p+r-j-1}{p-1} \xi(\{1\}_{j-1},k;p+r-j) \end{align} and \begin{align}\label{bb4} \psi (\{1\}_{r-1},k;p)=\displaystyle\!sum_{j=1}^r \displaystyle\!sum_{k_1+\cdots+k_j=k+j-1,\atop k_1,\ldots,k_j\geq 1} (-1)^{j-1} \binom{p+r-j-1}{p-1} \psi (k_1,k_2,\ldots,k_j;p+r-j-1), \end{align} \begin{align}\label{bb5} \displaystyle\!sum_{k_1+\cdots+k_r=k+r-1,\atop k_1,\ldots,k_r\geq 1} \psi (k_1,k_2,\ldots,k_r;p)=\displaystyle\!sum_{j=1}^r (-1)^{j-1}\binom{p+r-j-1}{p-1} \psi (\{1\}_{j-1},k;p+r-j). \end{align} \end{thm} \it{Proof.}\rm\quad We only prove the identities (\ref{bb4}) and (\ref{bb5}); the proofs of (\ref{bb2}) and (\ref{bb3}) are similar.
First, using (\ref{b9}) and summing both sides, we have \begin{align}\label{b22} &\displaystyle\!sum\limits_{k_1+\cdots+k_r=k+r-1\atop k_1,\ldots,k_r\geq 1} {\rm A}\left(k_1,\ldots,k_r;z\right)\nonumber\\&=\displaystyle\!sum\limits_{k_1+\cdots+k_r=k+r-1\atop k_1,\ldots,k_r\geq 1}\left\{\prod\limits_{j=1}^r\displaystyle\!frac{(-1)^{k_j-1}}{(k_j-1)!}\right\}\displaystyle\!int\nolimits_{F_r(z)} \left\{\prod\limits_{j=1}^{r-1}\displaystyle\!frac{\log^{k_j-1}\left(\displaystyle\!frac{(1-t_{r+1-j})(1+t_{r-j})}{(1+t_{r+1-j})(1-t_{r-j})}\right)}{t_{r+1-j}}dt_{r+1-j}\right\}\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\displaystyle\!frac{\log^{k_r-1}\left(\displaystyle\!frac{1-t_1}{(1+t_1)z}\right)}{t_1}dt_1\nonumber\\ &=\displaystyle\!frac{(-1)^{k-1}}{(k-1)!}\displaystyle\!int\limits_{F_r(z)}\displaystyle\!frac{\log^{k-1}\left(\displaystyle\!frac{1-t_r}{(1+t_r)z}\right)}{t_1 t_2 \cdots t_r}dt_1dt_2\cdots dt_r\nonumber\\ &=\displaystyle\!frac{(-1)^{k+r}}{(k-1)!}\displaystyle\!sum\limits_{l=0}^{r-1} \displaystyle\!frac{(-1)^l}{l!(r-1-l)!}\log^{r-1-l}\left(\displaystyle\!frac{1-z}{1+z}\right)\displaystyle\!int\limits_{(1-z)/(1+z)}^1 \displaystyle\!frac{\log^l(t)\log^{k-1}\left(\displaystyle\!frac{1-t}{(1+t)z}\right)}{t}dt. \end{align} From (\ref{b9}), setting ${\bf k}_r=(\{1\}_{r-1},k)$ and using (\ref{b5}), we can find that \begin{align}\label{b23} {\rm A}\left(\{1\}_{r-1},k;z\right)=\displaystyle\!frac{(-1)^{k+r}}{(k-1)!(r-1)!}\displaystyle\!int\limits_{(1-z)/(1+z)}^1\displaystyle\!frac{\log^{r-1}(t)\log^{k-1}\left(\displaystyle\!frac{1-t}{(1+t)z}\right)}{t}dt. \end{align} Hence, combining (\ref{b22}) and (\ref{b23}), we obtain \begin{align}\label{cc6} \displaystyle\!sum\limits_{k_1+\cdots+k_r=k+r-1\atop k_1,\ldots,k_r\geq 1} {\rm A}\left(k_1,\ldots,k_r;z\right)=(-1)^{r-1}\displaystyle\!sum\limits_{j=1}^{r} \displaystyle\!frac{\log^{r-j}\left(\displaystyle\!frac{1-z}{1+z}\right) }{(r-j)!}{\rm A}\left(\{1\}_{j-1},k;z\right) \end{align} and \begin{align}\label{cc7} {\rm A}\left(\{1\}_{r-1},k;z\right)=(-1)^{r-1} \displaystyle\!sum_{j=1}^{r} \displaystyle\!frac{\log^{r-j}\left(\displaystyle\!frac{1-z}{1+z}\right)}{(r-j)!} \displaystyle\!sum_{k_1+\cdots+k_{j}=k+j-1,\atop k_1,\ldots,k_{j}\geq 1} {\rm A}\left(k_1,\ldots,k_{j};z\right). \end{align} Setting $\tanh(t/2)=z$ and $s=p$ in (\ref{a8}), we note that \begin{align}\label{cc8} &\psi(k_1,k_2,\ldots,k_r;p)=\displaystyle\!frac{(-1)^{p-1}}{(p-1)!}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{\log^{p-1}\left(\displaystyle\!frac{1-z}{1+z}\right){\rm A}(k_1,k_2,\ldots,k_r;z)}{z}dz. \end{align} Hence, multiplying (\ref{cc6}) and (\ref{cc7}) by ${\log^{p-1}\left(\displaystyle\!frac{1-z}{1+z}\right)}/{z}$ and applying (\ref{cc8}), we obtain the formulas (\ref{bb5}) and (\ref{bb4}), respectively, by an elementary calculation. Similarly, we obtain the identities (\ref{bb2}) and (\ref{bb3}). This completes the proof of Theorem \ref{thm4}.\hfill$\square$ \subsection{A duality relation for Kaneko-Tsumura $\eta$-values} In this subsection, we will prove a combinatorial duality formula for Kaneko-Tsumura $\eta$-values. We need the following lemma.
\begin{lem}\label{lembc18} For positive integers $k$ and $r$, \begin{align}\label{bc0} {\rm Li}_{\{1\}_{r-1},k}\left(\displaystyle\!frac{x}{x-1}\right)=\displaystyle\!sum_{j=1}^r (-1)^{j-1} \displaystyle\!frac{\log^{r-j}(1-x)}{(r-j)!} \displaystyle\!sum_{k_1+\cdots+k_j=k+j-1,\atop k_1,\ldots,k_j\geq 1} {\rm Li}_{k_1,k_2,\ldots,k_j}\left(\displaystyle\!frac{x}{x-1}\right) \end{align} and \begin{align}\label{bc1} \displaystyle\!sum_{k_1+\cdots+k_r=k+r-1,\atop k_1,\ldots,k_r\geq 1} {\rm Li}_{k_1,k_2,\ldots,k_r}\left(\displaystyle\!frac{x}{x-1}\right)=\displaystyle\!sum_{j=1}^r (-1)^{j-1} \displaystyle\!frac{\log^{r-j}(1-x)}{(r-j)!} {\rm Li}_{\{1\}_{j-1},k}\left(\displaystyle\!frac{x}{x-1}\right). \end{align} \end{lem} \it{Proof.}\rm\quad The proof is similar to that of identities (\ref{cc6}) and (\ref{cc7}); we leave the details to the interested reader. \hfill$\square$ Hence, using Lemma \ref{lembc18}, we obtain the following. \begin{thm}\label{thm5} For positive integers $k,p$ and $r$, we have \begin{align}\label{abc3} \eta(\{1\}_{r-1},k;p)=(-1)^{r-1}\displaystyle\!sum_{j=1}^r \displaystyle\!sum_{k_1+\cdots+k_j=k+j-1,\atop k_1,\ldots,k_j\geq 1} \binom{p+r-j-1}{p-1} \eta(k_1,k_2,\ldots,k_j;p+r-j-1) \end{align} and \begin{align}\label{abc4} \displaystyle\!sum_{k_1+\cdots+k_r=k+r-1,\atop k_1,\ldots,k_r\geq 1} \eta (k_1,k_2,\ldots,k_r;p)=(-1)^{r-1}\displaystyle\!sum_{j=1}^r \binom{p+r-j-1}{p-1} \eta (\{1\}_{j-1},k;p+r-j). \end{align} \end{thm} \it{Proof.}\rm\quad According to the definition of the Kaneko-Tsumura $\eta$-function (see (\ref{Def-eta})), changing the variable via $1-e^{-t}=x$ and setting $s=p\in \mathbb{N}$ in (\ref{Def-eta}), we get \begin{align} &\eta(k_1,k_2,\ldots,k_r;p)=\displaystyle\!frac{(-1)^{p}}{(p-1)!}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{\log^{p-1}(1-x){\mathrm{Li}}_{{{k_1},{k_2}, \cdots ,{k_r}}}\left(\displaystyle\!frac{x}{x-1} \right)}{x}dx.\label{cc9} \end{align} Hence, multiplying (\ref{bc0}) and (\ref{bc1}) by $\displaystyle\!frac{\log^{p-1}(1-x)}{x}$ and applying (\ref{cc9}), we obtain the formulas (\ref{abc3}) and (\ref{abc4}). \hfill$\square$ Note that in \cite[Theorem 3.6]{X-2019}, the second named author proved that \begin{align}\label{ccd1} \displaystyle\!sum\limits_{k_1+k_2+\cdots+k_r=k+r-1,\atop k_1,k_2,\ldots,k_r\geq 1} \eta (k_1,k_2,\ldots,k_r;p)=(-1)^{r-1} \displaystyle\!sum\limits_{n=1}^\infty \displaystyle\!frac{\zeta^\star_n(\{1\}_{k-1})\zeta^\star_n(\{1\}_{p-1})\zeta_{n-1}(\{1\}_{r-1})}{n^2}, \end{align} where $\zeta_n(k_1,k_2,\ldots,k_r)$ and $\zeta^\star_n(k_1,k_2,\ldots,k_r)$ stand for multiple harmonic sums and multiple harmonic star sums, respectively, defined by \begin{align*} &\zeta_n(k_1,k_2,\ldots,k_r):=\displaystyle\!sum\limits_{0<n_1<\cdots<n_r\leq n}\displaystyle\!frac{1}{n_1^{k_1}n_2^{k_2}\cdots n_r^{k_r}},\\ &\zeta^\star_n(k_1,k_2,\ldots,k_r):=\displaystyle\!sum\limits_{0<n_1\leq\cdots\leq n_r\leq n}\displaystyle\!frac{1}{n_1^{k_1}n_2^{k_2}\cdots n_r^{k_r}}. \end{align*} Therefore, using (\ref{abc4}) and (\ref{ccd1}) we can get the following more general duality theorem involving Kaneko-Tsumura $\eta$-values. \begin{thm} For positive integers $p,k$ and $r$, \begin{align}\label{ccd2} \displaystyle\!sum_{j=1}^r \binom{p+r-j-1}{p-1} \eta (\{1\}_{j-1},k;p+r-j)=\displaystyle\!sum_{j=1}^r \binom{k+r-j-1}{k-1} \eta (\{1\}_{j-1},p;k+r-j).
\end{align} \end{thm} In particular, if $r=1$ in (\ref{ccd2}) we obtain the well-known duality relation \begin{align*} \eta (k;p)=\eta (p;k), \end{align*} which was conjectured in \cite{KT2018} and has been proved by several authors (see \cite{KO2018,Y2016}). \section{Duality relation for Kaneko-Tsumura $\psi$-values} In \cite{KTA2018}, Kaneko and Tsumura gave the following duality formula \begin{align}\label{c1} &\psi(\{1\}_{r-1},k;m+1)+(-1)^k\psi(\{1\}_{m-1},k;r+1)\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^j T(\{1\}_{r-1},k-j) T(\{1\}_j,m+1). \end{align} In this section, we will give a general duality formula for Kaneko-Tsumura $\psi$-values. \subsection{Multiple $T$-harmonic sums and multiple $S$-harmonic sums} Firstly, we define the multiple $T$-harmonic sums and the multiple $S$-harmonic sums, which can be regarded as two variants of the classical multiple harmonic sums of level two. For indexes ${\bfk_r}:= (k_1,\ldots, k_r)\in \mathbb{Z}_{\ge 1}^r$ and ${\bfl_s}:=(l_1,l_2,\ldots,l_s)\in \mathbb{Z}_{\ge 1}^s$, and any positive integers $m$ and $p$, let \begin{align*} \bfk_{2m-1}:=(k_1,k_2,\ldots,k_{2m-1}),\quad \bfk_{2m}:=(k_1,k_2,\ldots,k_{2m}) \end{align*} and \begin{align*} \bfl_{2p-1}:=(l_1,l_2,\ldots,l_{2p-1}),\quad \bfl_{2p}:=(l_1,l_2,\ldots,l_{2p}). \end{align*} For positive integers $n_1,n_2,\ldots,n_{r}$ and $n$, if $r=2m-1$ is odd, we define \begin{align*} &D_n({{\bf n}_{2m-1}}):=\left\{(n_1,n_2,\ldots,n_{2m-1},n)\mid 0<n_1\leq n_2 <\cdots\leq n_{2m-2}<n_{2m-1}\leq n \right\},\ (n\geq m)\\ &E_n({{\bf n}_{2m-1}}):=\left\{(n_1,n_2,\ldots,n_{2m-1},n)\mid 1\leq n_1<n_2\leq \cdots< n_{2m-2}\leq n_{2m-1}< n \right\},\ (n>m) \end{align*} and when $r=2m$ is even, we define \begin{align*} &D_n({{\bf n}_{2m}}):=\left\{(n_1,n_2,\ldots,n_{2m},n)\mid 0<n_1\leq n_2 <\cdots\leq n_{2m-2}<n_{2m-1}\leq n_{2m}<n \right\},\ (n> m)\\ &E_n({{\bf n}_{2m}}):=\left\{(n_1,n_2,\ldots,n_{2m},n)\mid 1\leq n_1<n_2\leq \cdots< n_{2m-2}\leq n_{2m-1}< n_{2m}\leq n \right\},\ (n>m). \end{align*} \begin{defn}\emph{(\cite[Def. 1.1]{XZ2020})}\label{def1} For positive integer $m$, the multiple $T$-harmonic sums ({\rm MTHSs} for short) and multiple $S$-harmonic sums ({\rm MSHSs} for short) are defined by \begin{align} &T_n({\bfk_{2m-1}}):= \displaystyle\!sum_{D_n({{\bf n}_{2m-1}})} \displaystyle\!frac{2^{2m-1}}{(\prod_{j=1}^{m-1} (2n_{2j-1}-1)^{k_{2j-1}}(2n_{2j})^{k_{2j}})(2n_{2m-1}-1)^{k_{2m-1}}},\label{MOT}\\ &T_n({\bfk_{2m}}):= \displaystyle\!sum_{D_n({{\bf n}_{2m}})} \displaystyle\!frac{2^{2m}}{\prod_{j=1}^{m} (2n_{2j-1}-1)^{k_{2j-1}}(2n_{2j})^{k_{2j}}},\label{MET}\\ &S_n({\bfk_{2m-1}}):= \displaystyle\!sum_{E_n({{\bf n}_{2m-1}})} \displaystyle\!frac{2^{2m-1}}{(\prod_{j=1}^{m-1} (2n_{2j-1})^{k_{2j-1}}(2n_{2j}-1)^{k_{2j}})(2n_{2m-1})^{k_{2m-1}}},\label{MOS}\\ &S_n({\bfk_{2m}}):= \displaystyle\!sum_{E_n({{\bf n}_{2m}})} \displaystyle\!frac{2^{2m}}{\prod_{j=1}^{m} (2n_{2j-1})^{k_{2j-1}}(2n_{2j}-1)^{k_{2j}}},\label{MES} \end{align} where $T_n({\bfk_{2m-1}}):=0$ if $n<m$, and $T_n({\bfk_{2m}})=S_n({\bfk_{2m-1}})=S_n({\bfk_{2m}}):=0$ if $n\leq m$. Moreover, for convenience we let $T_n(\emptyset)=S_n(\emptyset):=1$. We call \eqref{MOT} and \eqref{MET} the multiple $T$-harmonic sums, and \eqref{MOS} and \eqref{MES} the multiple $S$-harmonic sums.
\end{defn} Clearly, according to the definitions of MTHSs and MSHSs, we have the following relations \begin{alignat*}{3} &T_n({\bfk_{2m}})=2\displaystyle\!sum_{j=1}^{n-1} \displaystyle\!frac{T_j({\bfk_{2m-1}})}{(2j)^{k_{2m}}}, \qquad & &T_n({\bfk_{2m-1}})=2\displaystyle\!sum_{j=1}^{n} \displaystyle\!frac{T_j({\bfk_{2m-2}})}{(2j-1)^{k_{2m-1}}},\\ &S_n({\bfk_{2m}})=2\displaystyle\!sum_{j=1}^{n} \displaystyle\!frac{S_j({\bfk_{2m-1}})}{(2j-1)^{k_{2m}}}, \qquad & &S_n({\bfk_{2m-1}})=2\displaystyle\!sum_{j=1}^{n-1} \displaystyle\!frac{S_j({\bfk_{2m-2}})}{(2j)^{k_{2m-1}}}. \end{alignat*} In \cite{XZ2020}, the second author and Zhao used MTHSs and MSHSs to define the convoluted $T$-values $T({\bfk_{r}}\circledast {\bfl_{s}})$, which can be regarded as a $T$-variant of Kaneko-Yamamoto MZVs $\zeta({\bfk}_r\circledast {\bfl}_s)$ (see \cite{KY2018}). \begin{defn} For positive integers $m$ and $p$, the \emph{convoluted $T$-values} \begin{align} &T({\bfk_{2m}}\circledast{\bfl_{2p}})=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n({\bfk_{2m-1}})T_n({\bfl_{2p-1}})}{(2n)^{k_{2m}+l_{2p}}},\\ &T({\bfk_{2m-1}}\circledast{\bfl_{2p-1}})=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n({\bfk_{2m-2}})T_n({\bfl_{2p-2}})}{(2n-1)^{k_{2m-1}+l_{2p-1}}},\\ &T({\bfk_{2m}}\circledast{\bfl_{2p-1}})=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n({\bfk_{2m-1}})S_n({\bfl_{2p-2}})}{(2n)^{k_{2m}+l_{2p-1}}},\\ &T({\bfk_{2m-1}}\circledast{\bfl_{2p}})=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n({\bfk_{2m-2}})S_n({\bfl_{2p-1}})}{(2n-1)^{k_{2m-1}+l_{2p}}}. \end{align} \end{defn} Note that the MTVs are special cases of the convoluted $T$-values since \begin{align*} &T({\bfk_{r}}\circledast (1))=T((\bfk_r)_{+}),\quad T((1)\circledast {\bfl_{2p-1}})=T((\bfl_{2p-1})_{+}). \end{align*} In particular, in \cite{XZ2020}, the second author and Zhao proved the following theorem. \begin{thm}(\cite[Thm. 4.5]{XZ2020}) For positive integers $l_1,l_2$ and index ${\bfk}_r=(k_1,k_2,\ldots,k_r)$, the convoluted $T$-values $T({\bfk}_r\circledast (l_1,l_2))$ can be expressed in terms of products of MTVs and Riemann zeta values. \end{thm} \subsection{Duality formula of Kaneko-Tsumura $\psi$-values} Now, for index $\bfk_r=(k_1,k_2,\ldots,k_r)\in \mathbb{N}^r$ and $|\bfk_r|:=k_1+k_2+\cdots+k_r$, we adopt the following notations: \begin{align*} &\overrightarrow{\bfk_j}:=(k_1,k_2,\ldots ,k_j),\quad \overleftarrow{\bfk_j}:=(k_r,k_{r-1},\ldots,k_{r+1-j}),\\ &|\overrightarrow{\bfk_j}|:=k_1+k_2+\cdots+k_j,\quad |\overleftarrow{\bfk_j}|:=k_r+k_{r-1}+\cdots+k_{r+1-j} \end{align*} with $\overrightarrow{\bfk_0}=\overleftarrow{\bfk_0}:=\emptyset$ and $|\overrightarrow{\bfk_0}|=|\overleftarrow{\bfk_0}|:=0$.
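To make the bookkeeping in the recurrences and in the convoluted $T$-values above concrete, the following Python sketch (ours, for illustration only; the truncation point $N$ and the example index are arbitrary choices) builds $T_n$ and $S_n$ iteratively and forms a truncated convoluted $T$-value:
\begin{verbatim}
# Multiple T- and S-harmonic sums via the recurrences above, plus a
# truncated convoluted T-value T((k1,k2) circledast (l1,l2)).
def harmonic_sums(k, N, kind="T"):
    """Entry j >= 1 of the returned list holds X_j(k), X = T or S."""
    cur = [1.0] * (N + 1)                 # X_j(empty) = 1
    for c, kc in enumerate(k, start=1):
        # for T the odd slots carry (2n-1); for S the even slots do
        odd_denom = (c % 2 == 1) if kind == "T" else (c % 2 == 0)
        new, run = [0.0] * (N + 1), 0.0
        for n in range(1, N + 1):
            if odd_denom:                 # X_n = 2 sum_{j<=n} cur_j/(2j-1)^kc
                run += cur[n] / (2 * n - 1) ** kc
                new[n] = 2 * run
            else:                         # X_n = 2 sum_{j<=n-1} cur_j/(2j)^kc
                new[n] = 2 * run
                run += cur[n] / (2 * n) ** kc
        cur = new
    return cur

N = 10 ** 5
k1, k2, l1, l2 = 1, 2, 1, 2               # an arbitrary example index
Tk, Tl = harmonic_sums([k1], N), harmonic_sums([l1], N)
conv = 2 * sum(Tk[n] * Tl[n] / (2 * n) ** (k2 + l2) for n in range(1, N + 1))
print(conv)
\end{verbatim}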
\begin{thm}\label{thm3.1} For positive integers $p,q,r$ and index ${\bfk}_r=(k_1,\ldots,k_r)$ with $k_1,k_2,\ldots,k_r\in \mathbb{N}\setminus\{1\}$, let $\mathbb{N}^{+}_o$ be the set of positive odd numbers and $\mathbb{N}^{+}_e$ the set of positive even numbers. Then \begin{align}\label{c2} &\psi\left(\{1\}_{q-1},\overrightarrow{\bfk}_r^{-};p+1\right)-(-1)^{|{\bfk}|}\psi\left(\{1\}_{p-1},\overleftarrow{\bfk}_r^{-};q+1\right)\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{r-1} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_j\mid}\displaystyle\!sum\limits_{i=1}^{k_{r-j}-2} (-1)^{i-1} T\left(\{1\}_{p-1},\overleftarrow{\bfk}_j,i+1\right)T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1},k_{r-j}-i\right)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{j+1}\mid}\left\{\begin{array}{l} T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\right)T\left(\Big(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\Big)^{-}\circledast(1,1)\right)\\ - T\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\right)T\left(\Big(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\Big)^{-}\circledast(1,1)\right)\\ +2\delta_{p+j,q+r-j-2}\log(2)T\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\right)T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\right)\end{array}\right\}, \end{align} where ${\bfk_r^{-}}=(k_1,\ldots,k_{r-1},k_r-1)$ and \begin{align*} \delta_{r,s}= \left\{ {\begin{array}{*{20}{c}}\ \ 0 \quad(r,s\in \mathbb{N}^{+}_e)\quad\quad\quad\quad\\ \ \ 0\quad(r,s\in \mathbb{N}^{+}_o)\quad\quad\quad\quad\\ -1 \quad(r\in \mathbb{N}^{+}_e,\ s\in \mathbb{N}^{+}_o)\\ \ \ \ 1 \quad(r\in \mathbb{N}^{+}_o,\ s\in \mathbb{N}^{+}_e). \end{array} } \right. \end{align*} \end{thm} It is clear that formula (\ref{c1}) is an immediate corollary of Theorem \ref{thm3.1} with $r=1$. \subsection{Proof of the duality formula} In this subsection, we will prove the duality formula (\ref{c2}). We need the following lemma. \begin{lem}\label{lem3.5} Let $A_n$ and $B_n$ denote the partial sums ${A_n} := \displaystyle\!sum\limits_{k = 1}^n {{a_k}}$ and ${B_n} := \displaystyle\!sum\limits_{k = 1}^n {{b_k}}$, where ${a_n},{b_n} =o(n^{-p})$ as $n\rightarrow \infty$ for some ${\mathop{\Re}\nolimits} \left( p \right) > 1$, and set $A = \mathop {\displaystyle\!lim }\limits_{n \to \infty } {A_n}$, $B = \mathop {\displaystyle\!lim }\limits_{n \to \infty } {B_n}$. Then \begin{align*} \sum\limits_{n=1}^\infty \left\{ \displaystyle\!frac{A_nB}{n+\alpha}- \displaystyle\!frac{AB_n}{n+\beta}\right\}=AB(\psi(\beta+1)-\psi(\alpha+1))+A\sum\limits_{n=1}^\infty b_nH_{n-1}(\beta)-B\sum\limits_{n=1}^\infty a_nH_{n-1}(\alpha), \end{align*} where $\alpha,\beta \notin \{-1,-2,-3,\ldots\},$ $\psi(\alpha+1)$ is the digamma function, and $H_n(\alpha)$ is defined by \[H_n(\alpha)=\displaystyle\!sum_{k=1}^n\displaystyle\!frac{1}{k+\alpha}.\] It is clear that $T_n(1)=H_n(-1/2)$ and $S_n(1)=H_{n-1}(0)$. \end{lem} \it{Proof.}\rm\quad The lemma is almost obvious. We leave the details to the interested reader.\hfill$\square$
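Lemma \ref{lem3.5} is also easy to test numerically; a minimal Python sketch (ours, with arbitrarily chosen test sequences $a_n=n^{-2}$, $b_n=n^{-3}$ and a finite truncation $N$) is:
\begin{verbatim}
# Numerical check of Lemma lem3.5 with illustrative sequences.
import numpy as np
from scipy.special import digamma

N = 200000
n = np.arange(1, N + 1, dtype=float)
a, b = 1.0 / n ** 2, 1.0 / n ** 3           # test sequences (our choice)
alpha, beta = 0.5, -0.5

A_n, B_n = np.cumsum(a), np.cumsum(b)
A, B = np.pi ** 2 / 6, 1.2020569031595943   # zeta(2), zeta(3)

# H_{n-1}(x) = sum_{k=1}^{n-1} 1/(k+x), with H_0 = 0
H = lambda x: np.concatenate(([0.0], np.cumsum(1.0 / (n + x))))[:-1]

lhs = np.sum(A_n * B / (n + alpha) - A * B_n / (n + beta))
rhs = (A * B * (digamma(beta + 1) - digamma(alpha + 1))
       + A * np.sum(b * H(beta)) - B * np.sum(a * H(alpha)))
print(lhs, rhs)   # the two values agree up to the truncation error
\end{verbatim}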
Now, we prove a formula which is not completely explicit. \begin{thm}\label{thm3.2} For positive integers $p,q,r$ and index $\bfk_r$ with $k_1,\ldots,k_r\in \mathbb{N}\setminus\{1\}$, \begin{align}\label{c3} &\psi\left(\{1\}_{q-1},\overrightarrow{\bfk}_r^{-};p+1\right)-(-1)^{|{\bfk}|}\psi\left(\{1\}_{p-1},\overleftarrow{\bfk}_r^{-};q+1\right)\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{r-1} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_j\mid}\displaystyle\!sum\limits_{i=1}^{k_{r-j}-2} (-1)^{i-1} T\left(\{1\}_{p-1},\overleftarrow{\bfk}_j,i+1\right)T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1},k_{r-j}-i\right)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{j+1}\mid} \displaystyle\!lim_{x\rightarrow 1}\left\{\begin{array}{l} {\rm A}\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1};x\right){\rm A}\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1},1;x\right)\\ -{\rm A}\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1},1;x\right){\rm A}\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1};x\right)\end{array}\right\}. \end{align} \end{thm} \it{Proof.}\rm\quad We change the variable via $\tanh(t/2)=x$ in (\ref{a8}) and let $s=p+1$; then \begin{align} &\psi(k_1,k_2,\ldots,k_r;p+1)=\displaystyle\!frac{(-1)^p}{p!}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{\log^p\left(\displaystyle\!frac{1-x}{1+x}\right){\rm A}(k_1,k_2,\ldots,k_r;x)}{x}dx.\label{b35} \end{align} From (\ref{b4}) and (\ref{b35}), we can find that \begin{align} &\psi(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;p+1)\nonumber \\&=\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}(\{1\}_p;u){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;u)}{u}du. \end{align} Then, by using (\ref{b3}), \begin{align*} \displaystyle\!frac{d}{dz}{\rm A}({{k_1}, \cdots ,k_{r-1},{k_r}}; z)= \left\{ {\begin{array}{*{20}{c}} \displaystyle\!frac{1}{z}{\rm A}({{k_1}, \cdots ,{k_{r-1}},{k_r-1}};z) {\ \ (k_r\geq 2),} \\ {\displaystyle\!frac{2}{1-z^2}{\rm A}({{k_1}, \cdots ,{k_{r-1}}};z)\;\;\;\ \ \ (k_r = 1).} \\ \end{array} } \right.
\end{align*} Hence, integrating by parts, we get \begin{align}\label{c4} &\psi(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;p+1)\nonumber \\&= T(\{1\}_{p-1},2)T(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1)\nonumber\\&\quad-\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{{\rm A}}(\{1\}_{p-1},2;u){{\rm A}}(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-2;u)}{u}du\nonumber\\ &=\cdots\nonumber\\ &=\displaystyle\!sum\limits_{i=1}^{k_r-2}(-1)^{i-1}T(\{1\}_{p-1},i+1)T(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-i)\nonumber\\ &\quad+(-1)^{k_r-2}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{{\rm A}}(\{1\}_{p-1},k_r-1;u){{\rm A}}(\{1\}_{q-1},k_1,\ldots,k_{r-1},1;u)}{u}du\nonumber\\ &=\displaystyle\!sum\limits_{i=1}^{k_r-2}(-1)^{i-1}T(\{1\}_{p-1},i+1)T(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-i)\nonumber\\ &\quad+(-1)^{k_r}\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\{1\}_{p-1},k_r;x){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-1},1;x)\atop- 2\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}(\{1\}_{p-1},k_r;u){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-1};u)}{1-u^2}du\right\}\nonumber\\ &=\displaystyle\!sum\limits_{i=1}^{k_r-2}(-1)^{i-1}T(\{1\}_{p-1},i+1)T(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-i)\nonumber\\ &\quad+(-1)^{k_r}\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\{1\}_{p-1},k_r;x){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-1},1;x)\atop-{\rm A}(\{1\}_{p-1},k_r,1;x){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-1};x)\right\}\nonumber\\ &\quad+(-1)^{k_r}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}(\{1\}_{p-1},k_r,1;u){\rm A}(\{1\}_{q-1},k_1,\ldots,k_{r-2},k_{r-1}-1;u)}{u}du\nonumber\\ &=\cdots\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_j\mid}\displaystyle\!sum\limits_{i=1}^{k_{r-j}-2} (-1)^{i-1} T(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r+1-j},i+1)\nonumber\\&\quad\quad\quad\quad \quad\quad \quad\quad \quad\quad \times T(\{1\}_{q-1},k_1,k_{2},\ldots,k_{r-j-1},k_{r-j}-i)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{j+1}\mid} \displaystyle\!lim_{x\rightarrow 1}\left\{\begin{array}{l} {\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j};x){\rm A}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1},1;x)\\ -{\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j},1;x){\rm A}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1};x)\end{array}\right\}\nonumber\\ &\quad+(-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{r-1}\mid}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_2,1;u){\rm A}(\{1\}_{q-1},k_1-1;u)}{u}du\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{r-1} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_j\mid}\displaystyle\!sum\limits_{i=1}^{k_{r-j}-2} (-1)^{i-1} T(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r+1-j},i+1)\nonumber\\&\quad\quad\quad\quad \quad\quad \quad\quad \quad\quad \times T(\{1\}_{q-1},k_1,k_{2},\ldots,k_{r-j-1},k_{r-j}-i)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{j+1}\mid} \displaystyle\!lim_{x\rightarrow 1}\left\{\begin{array}{l} {\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j};x){\rm A}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1},1;x)\\ -{\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j},1;x){\rm A}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1};x)\end{array}\right\}\nonumber\\ &\quad+(-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{r}\mid}\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_2,k_1-1;u){\rm A}(\{1\}_{q};u)}{u}du. \end{align} Hence, the formula (\ref{c3}) holds.\hfill$\square$ Next, we evaluate the limit in (\ref{c3}).
\begin{thm}\label{thm3.3} For indexes $\bfk_r$ and $\bfl_s$, and positive integers $r$ and $s$, we have \begin{align}\label{c5} &\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\bfk_r;x) {\rm A}(\bfl_s,1;x)-{\rm A}(\bfk_r,1;x) {\rm A}(\bfl_s;x)\right\}\nonumber\\ &=T(\bfl_s)T\left(\bfk_r^{-}\circledast(1,1)\right)-T(\bfk_r)T\left(\bfl_s^{-}\circledast(1,1)\right)+2\delta_{r,s}\log(2)T(\bfk_r)T(\bfl_s). \end{align} \end{thm} \it{Proof.}\rm\quad According to the definitions of ${\rm A}(k_1,k_2,\ldots,k_r;x)$ and MTHSs, by an elementary calculation, we can find that \begin{align}\label{bc11} &{\rm A}(\bfk_{2m-1};x)=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-2})}{(2n-1)^{k_{2m-1}}}x^{2n-1} \end{align} and \begin{align}\label{bc12} &{\rm A}(\bfk_{2m};x)=2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-1})}{(2n)^{k_{2m}}}x^{2n}. \end{align} Hence, applying Lemma \ref{lem3.5}, by straightforward calculations, it is easy to see that if $r=2m-1$ and $s=2p-1$, then \begin{align}\label{c6} &\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\bfk_{2m-1};x) {\rm A}(\bfl_{2p-1},1;x)-{\rm A}(\bfk_{2m-1},1;x) {\rm A}(\bfl_{2p-1};x)\right\}\nonumber\\ &=\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfl_{2p-1})T(\bfk_{2m-1})-T_n(\bfk_{2m-1})T(\bfl_{2p-1})}{n}\nonumber\\ &=T(\bfl_{2p-1})2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-2})S_n(1)}{(2n-1)^{k_{2m-1}}}-T(\bfk_{2m-1})2\sum\limits_{n=1}^\infty\displaystyle\!frac{T_n(\bfl_{2p-2})S_n(1)}{(2n-1)^{l_{2p-1}}}\nonumber\\ &=T(\bfl_{2p-1})T\left(\bfk_{2m-1}^{-}\circledast(1,1) \right)-T(\bfk_{2m-1})T\left(\bfl_{2p-1}^{-}\circledast(1,1) \right), \end{align} if $r=2m-1$ and $s=2p$, then \begin{align}\label{c7} &\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\bfk_{2m-1};x) {\rm A}(\bfl_{2p},1;x)-{\rm A}(\bfk_{2m-1},1;x) {\rm A}(\bfl_{2p};x)\right\}\nonumber\\ &=\sum\limits_{n=1}^\infty \left\{\displaystyle\!frac{T_n(\bfl_{2p})T(\bfk_{2m-1})}{n-1/2} -\displaystyle\!frac{T_n(\bfk_{2m-1})T(\bfl_{2p})}{n} \right\}\nonumber\\ &=2\log(2)T(\bfl_{2p})T(\bfk_{2m-1})+T(\bfl_{2p})2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-2})S_n(1)}{(2n-1)^{k_{2m-1}}}-T(\bfk_{2m-1})2\sum\limits_{n=1}^\infty\displaystyle\!frac{T_n(\bfl_{2p-1})T_n(1)}{(2n)^{l_{2p}}}\nonumber\\ &=2\log(2)T(\bfl_{2p})T(\bfk_{2m-1})+T(\bfl_{2p})T\left(\bfk_{2m-1}^{-}\circledast(1,1) \right)-T(\bfk_{2m-1})T\left(\bfl_{2p}^{-}\circledast(1,1) \right), \end{align} if $r=2m$ and $s=2p-1$, then \begin{align}\label{c8} &\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\bfk_{2m};x) {\rm A}(\bfl_{2p-1},1;x)-{\rm A}(\bfk_{2m},1;x) {\rm A}(\bfl_{2p-1};x)\right\}\nonumber\\ &=\sum\limits_{n=1}^\infty \left\{\displaystyle\!frac{T_n(\bfl_{2p-1})T(\bfk_{2m})}{n}-\displaystyle\!frac{T_n(\bfk_{2m})T(\bfl_{2p-1})}{n-1/2} \right\}\nonumber\\ &=-2\log(2)T(\bfk_{2m})T(\bfl_{2p-1})+T(\bfl_{2p-1})2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-1})T_n(1)}{(2n)^{k_{2m}}}-T(\bfk_{2m})2\sum\limits_{n=1}^\infty\displaystyle\!frac{T_n(\bfl_{2p-2})S_n(1)}{(2n-1)^{l_{2p-1}}}\nonumber\\ &=-2\log(2)T(\bfk_{2m})T(\bfl_{2p-1})+T(\bfl_{2p-1})T\left(\bfk_{2m}^{-}\circledast(1,1) \right)-T(\bfk_{2m})T\left(\bfl_{2p-1}^{-}\circledast(1,1) \right), \end{align} if $r=2m$ and $s=2p$, then \begin{align}\label{c9} &\displaystyle\!lim_{x\rightarrow 1} \left\{{\rm A}(\bfk_{2m};x) {\rm A}(\bfl_{2p},1;x)-{\rm A}(\bfk_{2m},1;x) {\rm A}(\bfl_{2p};x)\right\}\nonumber\\ &=\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfl_{2p})T(\bfk_{2m})-T_n(\bfk_{2m})T(\bfl_{2p})}{n-1/2}\nonumber\\
&=T(\bfl_{2p})2\sum\limits_{n=1}^\infty \displaystyle\!frac{T_n(\bfk_{2m-1})T_n(1)}{(2n)^{k_{2m}}}-T(\bfk_{2m})2\sum\limits_{n=1}^\infty\displaystyle\!frac{T_n(\bfl_{2p-1})T_n(1)}{(2n)^{l_{2p}}}\nonumber\\ &=T(\bfl_{2p})T\left(\bfk_{2m}^{-}\circledast(1,1) \right)-T(\bfk_{2m})T\left(\bfl_{2p}^{-}\circledast(1,1) \right). \end{align} Thus, combining (\ref{c6})-(\ref{c9}) we deduce the desired result.\hfill$\square$ Therefore, from Theorem \ref{thm3.3}, we can get the following formula \begin{align}\label{c10} &\displaystyle\!lim_{x\rightarrow 1}\left\{\begin{array}{l} {\rm A}\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1};x\right){\rm A}\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1},1;x\right)\\ -{\rm A}\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1},1;x\right){\rm A}\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1};x\right)\end{array}\right\}\nonumber\\ &=T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\right)T\left(\Big(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\Big)^{-}\circledast(1,1)\right)\nonumber\\&\quad - T\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\right)T\left(\Big(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\Big)^{-}\circledast(1,1)\right)\nonumber\\ &\quad+2\delta_{p+j,q+r-j-2}\log(2)T\left(\{1\}_{p-1},\overleftarrow{\bfk}_{j+1}\right)T\left(\{1\}_{q-1},\overrightarrow{\bfk}_{r-j-1}\right). \end{align} {\bf Proof of Theorem \ref{thm3.1}.} Substituting (\ref{c10}) into (\ref{c3}) yields the desired evaluation (\ref{c2}). \hfill$\square$ \section{Further discussion}\label{sec3} {\bf Question}: Is there a function $y=y(z)$ that satisfies the following system for any $m\in\mathbb{N}$? \begin{align*} \left\{ {\begin{array}{*{20}{c}} \displaystyle\!frac{dy}{dz}+y^m=1, \\ \\ \displaystyle\!frac{d\log(y)}{dz}=\displaystyle\!frac{m^2e^{-mz}}{1-e^{-m^2z}}, \\ \end{array} } \right. \end{align*} with the initial condition and the far-field boundary condition: \begin{align*} \left\{ {\begin{array}{*{20}{c}} y(0)=0, \\ \\ y(z)\rightarrow1\quad {\rm as} \quad z\rightarrow +\infty.\\ \end{array} } \right. \end{align*} Does a solution of this system exist? In particular, if $m=1$, then $y(z)=1-e^{-z}$. If $m=2$, then $y(z)=\tanh(z)$. But for $m\geq 3$, does a solution of this system exist? {\bf Next, we assume this solution $y=f_m(z)$ exists. Hence, $f_1(z)=1-e^{-z}$ and $f_2(z)=\tanh(z)$.} \begin{defn} For $k_1,\ldots,k_r\in\mathbb{N}$, the multiple polylogarithm of level $m$ is defined by \begin{align}\label{d1} {\rm Ath}^{(m)}(k_1,k_2,\ldots,k_r;z):= \displaystyle\!sum\limits_{1 \le {n_1} < \cdots < {n_r}\atop n_i\equiv i\ {\rm mod}\ m} {\displaystyle\!frac{{{z^{{n_r}}}}}{{n_1^{{k_1}}n_2^{{k_2}} \cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} \end{defn} Note that if $z\in [0,+\infty)$, then $|f_m(z)|< 1$, and we have \begin{align}\label{d2} {\rm Ath}^{(m)}(1;f_m(z))=\displaystyle\!sum\limits_{n=0}^\infty \displaystyle\!frac{f_m^{mn+1}(z)}{mn+1}=\displaystyle\!sum\limits_{n=0}^\infty \displaystyle\!int\limits_{0}^z f_m^{mn}(x)f'_m(x) dx =\displaystyle\!int\limits_{0}^z\displaystyle\!frac{f'_m(x)}{1-f_m^m(x)}dx=z. \end{align}
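For the two cases in which $f_m$ is known explicitly, the identity (\ref{d2}) can be checked directly; a minimal Python sketch (ours, with an arbitrary truncation of the series) is:
\begin{verbatim}
# Numerical check of Ath^{(m)}(1; f_m(z)) = z for m = 1, 2.
import math

def ath1(m, x, terms=2000):
    # Ath^{(m)}(1; x) = sum_{n >= 0} x^(m n + 1)/(m n + 1)
    return sum(x ** (m * n + 1) / (m * n + 1) for n in range(terms))

for z in (0.3, 1.0, 2.5):
    print(z, ath1(1, 1 - math.exp(-z)), ath1(2, math.tanh(z)))
    # both computed columns reproduce z (up to truncation)
\end{verbatim}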
Similar to (\ref{b3}), we can easily obtain the following. \begin{lem} {\rm (i)} For $r,k_1,\ldots,k_r\in \mathbb{N}$, \begin{align}\label{d3} \displaystyle\!frac{d}{dz}{\mathrm{Ath}^{(m)}}({{k_1}, \cdots ,k_{r-1},{k_r}}; z)= \left\{ {\begin{array}{*{20}{c}} \displaystyle\!frac{1}{z} {\mathrm{Ath}^{(m)}}({{k_1}, \cdots ,{k_{r-1}},{k_r-1}};z) {\ \ (k_r\geq 2),} \\ {\displaystyle\!frac{1}{1-z^m}{\mathrm{Ath}^{(m)}}({{k_1}, \cdots ,{k_{r-1}}};z)\;\;\;\ \ \ (k_r = 1).} \\ \end{array} } \right. \end{align} {\rm (ii)} For $r\in\mathbb{N}$, \begin{align}\label{d4} {\rm Ath}^{(m)}({\{1\}_r};z)=\displaystyle\!frac{1}{r!}({\rm Ath}^{(m)}(1;z))^r. \end{align} \end{lem} By (\ref{d3}), we obtain \begin{align}\label{d5} &{\mathrm{Ath}^{(m)}}({{k_1}, \cdots,k_{r-1} ,{k_r}};z)=\displaystyle\!int\limits_{0}^z \underbrace{\displaystyle\!frac{dt}{t}\cdots\displaystyle\!frac{dt}{t}}_{k_r-1}\displaystyle\!frac{dt}{1-t^m}\underbrace{\displaystyle\!frac{dt}{t}\cdots\displaystyle\!frac{dt}{t}}_{k_{r-1}-1}\displaystyle\!frac{dt}{1-t^m}\cdots \underbrace{\displaystyle\!frac{dt}{t}\cdots\displaystyle\!frac{dt}{t}}_{k_1-1}\displaystyle\!frac{dt}{1-t^m}\nonumber\\ &=\left\{\prod\limits_{j=1}^r\displaystyle\!frac{(-1)^{k_j-1}}{(k_j-1)!}\right\}\displaystyle\!int\nolimits_{D_r(z)} \displaystyle\!frac{\log^{k_1-1}\left(\displaystyle\!frac{t_1}{t_2}\right)\cdots \log^{k_{r-1}-1}\left(\displaystyle\!frac{t_{r-1}}{t_r}\right)\log^{k_r-1}\left(\displaystyle\!frac{t_r}{z}\right)}{(1-t_1^m)\cdots (1-t_{r-1}^m)(1-t_r^m)}dt_1\cdots dt_r. \end{align} Corresponding to Definition \ref{def:1}, we define the multiple zeta function of level $m$ as follows. \begin{defn} For $k_1,\ldots,k_{r-1}\in\mathbb{N}$ and $\Re(s)>1$, let \begin{align}\label{d6} &T^{(m)}_0(k_1,\ldots,k_{r-1},s):= \displaystyle\!sum\limits_{1 \le {n_1} < \cdots < {n_r}\atop n_i\equiv i\ {\rm mod}\ m} {\displaystyle\!frac{1}{{n_1^{{k_1}} \cdots n_{r-1}^{{k_{r-1}}}n_r^s}}}. \end{align} Furthermore, as its normalized version, let \begin{align}\label{d7} T^{(m)}(k_1,\ldots,k_{r-1},s):=m^r T^{(m)}_0(k_1,\ldots,k_{r-1},s). \end{align} \end{defn} When $k_r>1$, we see that $${\rm Ath}^{(m)}(k_1,\ldots,k_r;1)=T^{(m)}_0(k_1,\ldots,k_r).$$ \begin{lem} For $k_1,\ldots,k_{r-1}\in\mathbb{N}$ and $\Re(s)>1$, \begin{align}\label{d8} T^{(m)}_0(k_1,k_2,\ldots,k_{r-1},s)=\displaystyle\!frac{\underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{r} \left\{\prod\limits_{j=1}^{r-1}x^{k_j-1}_j\right\}x^{s-1}_r \left\{\prod\limits_{j=1}^{r} \displaystyle\!frac{e^{(m-1)(x_j+\cdots+x_r)}}{e^{m(x_j+\cdots+x_r)}-1}\right\}dx_1\cdots dx_r}{\left\{\prod\limits_{j=1}^{r-1}\Gamma(k_j)\right\}\Gamma(s)}. \end{align} \end{lem} \it{Proof.}\rm\quad The result immediately follows from the definition (\ref{d6}) and the expression $\displaystyle\!frac{1}{n^s}=\displaystyle\!frac{1}{\Gamma(s)}\displaystyle\!int\limits_{0}^\infty e^{-nt}t^{s-1}dt$.\hfill$\square$
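In the depth-one case $r=1$, the representation (\ref{d8}) can be verified numerically; a small sketch (ours; the kernel is rewritten as $x^{s-1}e^{-x}/(1-e^{-mx})$ for numerical stability, and the values of $m$ and $s$ are arbitrary choices) is:
\begin{verbatim}
# Depth-one check of (d8): series vs. integral representation of T_0^{(m)}(s).
import math
from scipy.integrate import quad

m, s = 3, 3.0
series = sum((m * j + 1) ** (-s) for j in range(200000))
integrand = lambda x: (0.0 if x == 0 else
                       x ** (s - 1) * math.exp(-x) / (1.0 - math.exp(-m * x)))
integral, _ = quad(integrand, 0.0, math.inf)
print(series, integral / math.gamma(s))   # the two values should agree
\end{verbatim}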
\subsection{Some connections between $\psi^{(m)}({\bf k}_r;s)$ and $f_m(z)$} \begin{defn} For $k_1,\ldots,k_{r}\in\mathbb{N}$ and $\Re(s)>0$, if $f_m(z)$ exists, we can define the level $m$ version of $\xi(k_1,\ldots,k_r;s)$ by \begin{align}\label{d9} \psi^{(m)}(k_1,k_2,\ldots,k_r;s):=\displaystyle\!frac{1}{\Gamma(s)} \displaystyle\!int\limits_{0}^\infty \displaystyle\!frac{t^{s-1}}{e^t-e^{(1-m)t}}{\rm A}^{(m)}\left({k_1,k_2,\ldots,k_r};f_m\left(\displaystyle\!frac{t}{m}\right)\right)dt, \end{align} where ${\rm A}^{(m)}({k_1,k_2,\ldots,k_r};z):=m^r{\rm {Ath}}^{(m)}({k_1,k_2,\ldots,k_r};z)$. \end{defn} It is clear that $$\psi^{(1)}(k_1,k_2,\ldots,k_r;s)=\xi(k_1,k_2,\ldots,k_r;s)\quad{\rm and}\quad \psi^{(2)}(k_1,k_2,\ldots,k_r;s)=\psi(k_1,k_2,\ldots,k_r;s).$$ \begin{thm} For $r,k\in\mathbb{N}$, if $f_m(z)$ exists, the following identity holds. \begin{align}\label{d10} \psi^{(m)} (\{1\}_{r-1},k;s)&=(-1)^{k-1} \displaystyle\!sum\limits_{a_1+\cdots+a_k=r\atop a_1,\ldots,a_k\geq 0} \binom{s+a_k-1}{a_k} T^{(m)}(a_1+1,\ldots,a_{k-1}+1,s+a_k)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^j T^{(m)}(\{1\}_{r-1},k-j)\cdot T^{(m)}(\{1\}_j,s). \end{align} \end{thm} \it{Proof.}\rm\quad The method of the proof is similar to that in \cite[Theorem 8]{AM1999} and \cite[Theorem 5.3]{KTA2018}. Given $r,k\geq 1$, introduce the following integrals \begin{align*} I^{(r,k)}_{j,m}(s):=\displaystyle\!frac{m^{k}}{\Gamma(s)} \underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{k-j+1} \displaystyle\!frac{{\rm A}^{(m)}\left(\{1\}_{r-1},j;f_m\left(\displaystyle\!frac{x_j+\cdots+x_k}{m}\right)\right)}{\prod\limits_{l=j}^k \left(e^{x_l+\cdots+x_k}-e^{(1-m)(x_l+\cdots+x_k)} \right)}x^{s-1}_k dx_j \cdots dx_k. \end{align*} We compute $I^{(r,k)}_{1,m}(s)$ in two different ways. First, from (\ref{d2}) and (\ref{d4}), \begin{align*} {\rm A}^{(m)}\left(\{1\}_{r};f_m\left(\displaystyle\!frac{x_1+\cdots+x_k}{m}\right)\right)=\displaystyle\!frac{(x_1+\cdots+x_k)^r}{r!}. \end{align*} Then, by (\ref{d8}), we obtain \begin{align}\label{d11} I^{(r,k)}_{1,m}(s)=&\displaystyle\!frac{m^{k}}{\Gamma(s)r!} \underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{k} \displaystyle\!frac{(x_1+\cdots+x_k)^r x^{s-1}_{k}}{\prod\limits_{l=1}^k \left(e^{x_l+\cdots+x_k}-e^{(1-m)(x_l+\cdots+x_k)} \right)}dx_1\cdots dx_k\nonumber\\ =&\displaystyle\!frac{m^k}{\Gamma(s)} \displaystyle\!sum\limits_{a_1+\cdots+a_k=r \atop a_1,\ldots,a_k\geq 0} \displaystyle\!frac{1}{a_1!\cdots a_k!}\underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{k}\displaystyle\!frac{x_1^{a_1}\cdots x_{k-1}^{a_{k-1}}x_k^{s+a_k-1}}{\prod\limits_{l=1}^k \left(e^{x_l+\cdots+x_k}-e^{(1-m)(x_l+\cdots+x_k)} \right)}dx_1\cdots dx_k\nonumber\\ =& \displaystyle\!sum\limits_{a_1+\cdots+a_k=r\atop a_1,\ldots,a_k\geq 0} \binom{s+a_k-1}{a_k} T^{(m)}(a_1+1,\ldots,a_{k-1}+1,s+a_k).
\end{align} Secondly, by using \begin{align}\label{d12} &\displaystyle\!frac{\partial}{\partial x_j}{\rm Ath}^{(m)}\left({\{1\}_{r-1},j+1;f_m\left(\displaystyle\!frac{x_j+\cdots+x_k}{m}\right)}\right)\nonumber\\ &=m \displaystyle\!frac{{\rm Ath}^{(m)}\left({\{1\}_{r-1},j;f_m\left(\displaystyle\!frac{x_j+\cdots+x_k}{m}\right)}\right)}{e^{x_j+\cdots+x_k}-e^{(1-m)(x_j+\cdots+x_k)}} \end{align} and (\ref{d8}), we compute \begin{align*} I^{(r,k)}_{j,m}(s)&=\displaystyle\!frac{m^{k}}{\Gamma(s)} \underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{k-j+1}\displaystyle\!frac{ \displaystyle\!frac{\partial}{\partial x_j}\left\{{\rm A}^{(m)}\left(\{1\}_{r-1},j+1;f_m\left(\displaystyle\!frac{x_j+\cdots+x_k}{m}\right)\right)\right\}}{\prod\limits_{l=j+1}^k \left(e^{x_l+\cdots+x_k}-e^{(1-m)(x_l+\cdots+x_k)} \right)}x^{s-1}_k dx_j \cdots dx_k\nonumber\\ &=T^{(m)}(\{1\}_{r-1},j+1)\cdot T^{(m)}(\{1\}_{k-j-1},s)-I^{(r,k)}_{j+1,m}(s). \end{align*} Therefore, using this recurrence relation repeatedly, we obtain \begin{align*} &I^{(r,k)}_{1,m}(s)=\displaystyle\!sum\limits_{j=1}^{k-1} (-1)^{j-1} T^{(m)}(\{1\}_{r-1},j+1)\cdot T^{(m)}(\{1\}_{k-j-1},s)+(-1)^{k-1}I^{(r,k)}_{k,m}(s). \end{align*} By definition, we have $$I^{(r,k)}_{k,m}(s)=\psi^{(m)}(\{1\}_{r-1},k;s),$$ and thus \begin{align}\label{d13} &I^{(r,k)}_{1,m}(s)=\displaystyle\!sum\limits_{j=0}^{k-2} (-1)^{k-j} T^{(m)}(\{1\}_{r-1},k-j)\cdot T^{(m)}(\{1\}_j,s)+(-1)^{k-1}\psi^{(m)}(\{1\}_{r-1},k;s). \end{align} Comparing (\ref{d11}) and (\ref{d13}), we obtain the assertion.\hfill$\square$ \begin{thm}\label{thm4.7} For $r,k\in\mathbb{N}$ and $p\in\mathbb{N}_0$, if $f_m(z)$ exists, the following identity holds. \begin{align}\label{d14} \psi^{(m)}(\{1\}_{r-1},k;p+1)=\displaystyle\!sum\limits_{a_1+\cdots+a_k=p\atop a_1,\ldots,a_k\geq 0} \binom{a_k+r}{r} T^{(m)}(a_1+1,\ldots,a_{k-1}+1,a_k+r+1). \end{align} \end{thm} \it{Proof.}\rm\quad By (\ref{d12}), we have \begin{align*} &\psi^{(m)}(\{1\}_{r-1},k;p+1)\nonumber\\&=\displaystyle\!frac{m}{\Gamma(p+1)} \displaystyle\!int\limits_{0}^\infty \displaystyle\!frac{t^{p}}{e^t-e^{(1-m)t}}{\rm A}^{(m)}\left(\{1\}_{r-1},k;f_m\left(\displaystyle\!frac{t}{m}\right)\right)dt\nonumber\\ &=\displaystyle\!frac{m^2}{\Gamma(p+1)} \displaystyle\!int\limits_{0}^\infty \displaystyle\!frac{t^{p}_k}{e^{t_k}-e^{(1-m)t_k}}\displaystyle\!int\limits_{0}^{t_k}\displaystyle\!frac{{\rm A}^{(m)}\left(\{1\}_{r-1},k-1;f_m\left(\displaystyle\!frac{t_{k-1}}{m}\right)\right)}{e^{t_{k-1}}-e^{(1-m)t_{k-1}}}dt_{k-1}dt_k\nonumber\\ &=\displaystyle\!frac{m^k}{p!} \underbrace{\displaystyle\!int\limits_{0}^\infty \displaystyle\!int\limits_{0}^{t_k}\cdots\displaystyle\!int\limits_{0}^{t_2}}_{k} \displaystyle\!frac{t^p_k {{\rm A}}^{(m)}\left(\{1\}_r;f_m\left(\displaystyle\!frac{t_1}{m}\right)\right)}{(e^{t_k}-e^{(1-m)t_k})\cdots (e^{t_1}-e^{(1-m)t_1})}dt_1dt_2\cdots dt_k\\ &=\displaystyle\!frac{m^k}{p!r!} \underbrace{\displaystyle\!int\limits_{0}^\infty \displaystyle\!int\limits_{0}^{t_k}\cdots\displaystyle\!int\limits_{0}^{t_2}}_{k} \displaystyle\!frac{t^p_k t^r_1}{(e^{t_k}-e^{(1-m)t_k})\cdots (e^{t_1}-e^{(1-m)t_1})}dt_1dt_2\cdots dt_k.
\end{align*} By the change of variables $$t_1=x_k,t_2=x_{k-1}+x_k,\ldots,t_k=x_1+\cdots+x_k,$$ we obtain \begin{align*} \psi^{(m)}(\{1\}_{r-1},k;p+1)=&\displaystyle\!frac{m^k}{p!r!} \underbrace{\displaystyle\!int\limits_{0}^\infty\cdots\displaystyle\!int\limits_{0}^\infty}_{k} \displaystyle\!frac{(x_1+\cdots+x_k)^p x_k^r}{\prod\limits_{l=1}^k (e^{x_l+\cdots+x_k}-e^{(1-m)(x_l+\cdots+x_k)})}dx_1dx_2\cdots dx_k\\ =&\displaystyle\!sum\limits_{a_1+\cdots+a_k=p\atop a_1,\ldots,a_k\geq 0} \binom{a_k+r}{r} T^{(m)}(a_1+1,\ldots,a_{k-1}+1,a_k+r+1). \end{align*} Thus, the proof of Theorem \ref{thm4.7} is finished.\hfill$\square$ \begin{cor} For $r,k\in\mathbb{N}$, if $f_m(z)$ exists, then we have the ``height one'' duality \begin{align}\label{d15} T^{(m)}(\{1\}_{r-1},k+1)=T^{(m)}(\{1\}_{k-1},r+1). \end{align} \end{cor} \it{Proof.}\rm\quad Setting $p=0$ in (\ref{d14}) gives \begin{align}\label{d16} \psi^{(m)}(\{1\}_{r-1},k;1) =T^{(m)}(\{1\}_{k-1},r+1). \end{align} In general, from the definition we have \begin{align*} \psi^{(m)}(k_1,\ldots,k_r;1)&=m \displaystyle\!int\limits_{0}^\infty \displaystyle\!frac{{{\rm A}}^{(m)}\left(k_1,\ldots,k_r;f_m(t/m)\right)}{e^t-e^{(1-m)t}}dt\\ &=T^{(m)}(k_1,\ldots,k_{r-1},k_r+1) \end{align*} and in particular \begin{align}\label{d17} \psi^{(m)}(\{1\}_{r-1},k;1) =T^{(m)}(\{1\}_{r-1},k+1). \end{align} Thus, from (\ref{d16}) and (\ref{d17}) we obtain (\ref{d15}).\hfill$\square$ \subsection{Duality relation for $\psi^{(m)}({\bf k}_r;p+1)$} If $f_m(z)$ exists, from (\ref{d2}), we have $${\rm A}^{(m)}(\{1\}_r;f_m(t/m))=\displaystyle\!frac{t^r}{r!}.$$ Hence, \begin{align*} \psi^{(m)}(k_1,\ldots,k_r;p+1)&=m \displaystyle\!int\limits_{0}^\infty \displaystyle\!frac{{\rm A}^{(m)}\left(\{1\}_p;f_m(t/m)\right){\rm A}^{(m)}\left(k_1,\ldots,k_r;f_m(t/m)\right)}{e^t-e^{(1-m)t}}dt. \end{align*} By the change of variable $u=f_m(t/m)$, we obtain \begin{align}\label{d18} \psi^{(m)}(k_1,\ldots,k_r;p+1)&= \displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}^{(m)}(\{1\}_p;u){\rm A}^{(m)}(k_1,\ldots,k_r;u)}{u}du. \end{align} \begin{thm}\label{thm4.9} For positive integers $p,q,r$ and $k_1,\ldots,k_r\in \mathbb{N}\setminus\{1\}$, if $f_m(z)$ exists, then the following duality relation holds: \begin{align}\label{d19} &\psi^{(m)}(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;p+1)-(-1)^{k_1+k_2+\cdots+k_r}\psi^{(m)}(\{1\}_{p-1},k_r,\ldots,k_{2},k_1-1;q+1)\nonumber\\ &=\displaystyle\!sum\limits_{j=0}^{r-1} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_j\mid}\displaystyle\!sum\limits_{i=1}^{k_{r-j}-2} (-1)^{i-1} T^{(m)}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r+1-j},i+1)\nonumber\\&\quad\quad\quad\quad \quad\quad \quad\quad \quad\quad \times T^{(m)}(\{1\}_{q-1},k_1,k_{2},\ldots,k_{r-j-1},k_{r-j}-i)\nonumber\\ &\quad+\displaystyle\!sum\limits_{j=0}^{r-2} (-1)^{\mid\stackrel{\leftarrow}{{\bf k}}_{j+1}\mid} \displaystyle\!lim_{x\rightarrow 1}\left\{\begin{array}{l} {\rm A}^{(m)}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j};x)\\ \quad\times{\rm A}^{(m)}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1},1;x)\\ -{\rm A}^{(m)}(\{1\}_{p-1},k_r,k_{r-1},\ldots,k_{r-j},1;x)\\ \quad\times{\rm A}^{(m)}(\{1\}_{q-1},k_1,k_2,\ldots,k_{r-j-1};x)\end{array}\right\}. \end{align} \end{thm} \it{Proof.}\rm\quad We start from \begin{align}\label{d20} &\psi^{(m)}(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;p+1)\nonumber \\&=\displaystyle\!int\limits_{0}^1 \displaystyle\!frac{{\rm A}^{(m)}(\{1\}_p;u){\rm A}^{(m)}(\{1\}_{q-1},k_1,\ldots,k_{r-1},k_r-1;u)}{u}du.
\end{align} Then, by applying the same arguments as in the proof of Theorem \ref{thm3.2}, we may easily deduce the result.\hfill$\square$ \begin{re} It is possible to obtain the explicit evaluation of the limit in (\ref{d19}) by using a method similar to that in the proof of Theorem \ref{thm3.3}. \end{re} {\bf Acknowledgments.} The authors express their deepest gratitude to Professor Masanobu Kaneko for his valuable comments and encouragement. The first author is supported by the China Scholarship Council (No. 201806310063) and the Scientific Research Foundation for Scholars of Anhui Normal University.
\section{Introduction} The study of collective excitations in finite systems originated in Nuclear Physics and was subsequently carried out in free atoms and metal clusters as well. It is understood that in the case of atomic nuclei, the coherent motion of shell nucleons driven by a sufficiently confining short-range interaction potential gives rise to collective resonances in specific modes, and these long-lived, large amplitude oscillations are termed ``giant resonances''. While single particle excitations are adequately described by the Hartree-Fock approximation, its time-dependent extension accounts for the collective behavior of the constituent fermions \cite{Wang,Cassing}, and the damping of the giant resonances is explained by the non-perturbative treatment of the residual collisions of quasi particles \cite{Blasio}. In the case of atoms and metal clusters, the valence electrons execute the resonance motion. Though the Coulomb potential is long-ranged, screening makes it effectively short-ranged, enabling the local confinement of the electrons that is necessary for the resonance to occur \cite{Brech}. In this article we present some results of the study of collective motions of yet another finite system -- a 13-particle inert gas atom cluster. Since this is a classical, non-fermionic and very weakly interacting system, the situation is quite different here. In nuclei and metal clusters, the collective excitations are essentially given by the poles of the Green's function corresponding to the response of the charge density fluctuations, and are amenable to detection by electromagnetic probes, whereas the constituents of the system under present investigation are charge-neutral point particles, so there is no possibility of charge fluctuations. Nevertheless, the particles do exhibit coherent motion owing to the presence of an interaction potential between them, and these are simply the normal modes of the mass density fluctuations. In section II we will describe how the normal mode components of the cluster can be projected out of the complex dynamics that it undergoes. It is well known that molecular dynamics simulations of the $Ar_{13}$ cluster display solid-liquid phase transitions. The cluster has a near-rigid regular icosahedron structure in the solid-like ``phase'', the individual atoms executing vibrational motions about their mean positions, whereas in the liquid-like phase the cluster does not have any regular shape and the individual atoms exhibit largely diffusive motion. In addition, the cluster also shows a ``coexistence phase'', wherein after spending some time in one of the phases the cluster spontaneously switches to the other and eventually back again. The ``phase changes'' in $Ar_{13}$ have been well characterized \cite{Berry1,efermi}. The three phases are best illustrated by the curve of the caloric equation of state -- the plot of the long time $(\sim 10^6$ time steps$)$ average of the kinetic energy against that of the total energy. The plot distinctly shows three regions -- the solid-like region towards low kinetic/total energy, the liquid-like region on the higher end, and the coexistence phase in between. Another good diagnostic of the phase change is the root mean square bond-length fluctuation, which shows a steep rise at the ``melting point'', indicating that there is a large increase in the mobility of the atoms of the cluster as it enters into the liquid-like region. There are a few other diagnostics, given in detail in Ref. \cite{Berry1}, which demonstrate the occurrence of the change of phase.
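The bond-length-fluctuation diagnostic is simple to compute from a stored trajectory. As an aside, a minimal Python sketch (ours; we assume the form $\delta=\frac{2}{N(N-1)}\sum_{i<j}\sqrt{\langle r_{ij}^2\rangle-\langle r_{ij}\rangle^2}/\langle r_{ij}\rangle$ commonly used in the cluster-melting literature, with \texttt{traj} an array of shape (steps, $N$, 3)) is:
\begin{verbatim}
# rms bond-length fluctuation of a stored trajectory (illustrative form).
import numpy as np

def bond_length_fluctuation(traj):
    steps, N, _ = traj.shape
    iu = np.triu_indices(N, k=1)
    d = traj[:, :, None, :] - traj[:, None, :, :]        # displacement vectors
    r = np.sqrt((d ** 2).sum(axis=-1))[:, iu[0], iu[1]]  # pair distances vs time
    mean, mean2 = r.mean(axis=0), (r ** 2).mean(axis=0)
    return np.mean(np.sqrt(np.maximum(mean2 - mean ** 2, 0.0)) / mean)
\end{verbatim}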
The original motivation behind this work was to study the collective excitations in the $Ar_{13}$ cluster as it passes through the phase changes. It was naively expected that in the transition region the system would show a marked change in behavior, much like the bond length fluctuations. However, this effort had to be abandoned, as it was soon realized that the results obtained from different runs in the liquid-like region show a very large spread in values, and hence it becomes hard to arrive at an unambiguous result. On the other hand, we have had some surprising results in the solid-like region itself. We found that there is yet another, much lower characteristic temperature of the system, much more sharply demarcated on the total energy axis than the solid-liquid phase change. We shall argue that at this temperature the single particle excitations begin to destroy the collective excitations, and the effect is manifest as a qualitative change in the behavior of almost all the physical observables of the cluster. Details of the computational procedure are discussed in section II. The main results are presented in section III, followed by a discussion. Finally we summarize the results and conclude. \section{Computational Procedure} We perform isoergic molecular dynamics simulations of the 13-particle Argon cluster, choosing the pairwise 6-12 Lennard-Jones interaction \begin{equation} V_{ij}(r_{ij})=4\epsilon \left[ {\left( {\sigma\over r_{ij}} \right)}^{12}- {\left( {\sigma\over r_{ij}} \right)}^6 \right] \end{equation} with parameters $\sigma=3.4\times 10^{-8}$cm and $\epsilon=1.67 \times 10^{-14}$erg. The classical equations of motion were solved using the Verlet algorithm \cite{Verlet} with a time step of $2.0\times 10^{-15}$sec. The total energy is found to be conserved to within 0.001\%. In an interacting many body system, we know that the average potential of the static system contributes to the single particle excitations, and the interaction treated dynamically gives rise to collective excitations. Similarly, it should be possible to think of the complex dynamics of the cluster as being composed of collective motions in addition to the single particle undulations. Then we need to identify the collective modes of the cluster. In view of nuclear deformations, it has been recognized \cite{Leder} that collective variables of an arbitrary density distribution can be parametrised in terms of the moments of the density distribution, and in particular, for small deviations from spherical symmetry, expansion in terms of spherical harmonic components is the most natural description of the normal modes. Adapting the same idea, the contributions to the different normal modes of the cluster as a function of time can be projected out as \begin{equation} C_{lm}(t)=\int\rho({\bf r},t) Y^m_l(\theta,\phi)\, r^l \,d^3r \end{equation} where $\rho({\bf r},t)$ is the density distribution, which is discrete in the present case. Obviously the collective oscillations here are the shape oscillations. The monopole $(\ell =0)$ mode component is projected out simply as the average radius of the cluster as a function of time; hence it corresponds to the uniform radial motion of the particles (``breathing'' mode). It should be noted at this point that in charged systems like atomic nuclei or metal clusters, the dipole mode $(\ell=1)$ is usually the strongest and the monopole is always very weak. In contrast, in the inert gas atom cluster $Ar_{13}$ the monopole mode is the strongest, and the dipole mode simply does not exist.
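The simulation procedure just described is straightforward to reproduce in outline. The following Python sketch (ours, not the authors' code) integrates the 13-atom Lennard-Jones cluster with a velocity-Verlet scheme in reduced units ($\epsilon=\sigma=M=1$) and records the monopole component as the mean radius; the initial icosahedral geometry, the small uniform expansion used as a monopole ``kick'', the time step and the run length are all illustrative choices, not values from the paper:
\begin{verbatim}
# Minimal velocity-Verlet MD of a 13-atom Lennard-Jones cluster (reduced
# units) with the monopole component recorded as the mean radius.
import numpy as np

def lj_forces(pos):
    """Pairwise 6-12 Lennard-Jones forces and total potential energy."""
    f, u = np.zeros_like(pos), 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]                 # vectors from atom i
        r2 = (d ** 2).sum(axis=1)
        inv6 = 1.0 / r2 ** 3
        u += np.sum(4.0 * inv6 * (inv6 - 1.0))
        w = (24.0 * inv6 * (2.0 * inv6 - 1.0) / r2)[:, None] * d
        f[i] -= w.sum(axis=0)                    # repulsive below r = 2^(1/6)
        f[i + 1:] += w
    return f, u

g = (1.0 + 5.0 ** 0.5) / 2.0                     # icosahedron vertices
v = np.array([(0, 1, g), (0, -1, g), (0, 1, -g), (0, -1, -g),
              (1, g, 0), (-1, g, 0), (1, -g, 0), (-1, -g, 0),
              (g, 0, 1), (g, 0, -1), (-g, 0, 1), (-g, 0, -1)], float)
pos = np.vstack([[0.0, 0.0, 0.0], 1.1 * v / np.linalg.norm(v[0])])
pos[1:] *= 1.03                                  # uniform radial "kick"
vel = np.zeros_like(pos)

dt, nsteps = 0.002, 20000
f, u = lj_forces(pos)
radius = []                                      # monopole component vs time
for step in range(nsteps):
    vel += 0.5 * dt * f
    pos += dt * vel
    f, u = lj_forces(pos)
    vel += 0.5 * dt * f
    com = pos.mean(axis=0)
    radius.append(np.linalg.norm(pos - com, axis=1).mean())
    if step % 5000 == 0:                         # total energy should stay flat
        print(step, 0.5 * (vel ** 2).sum() + u)
\end{verbatim}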
The reason is the following: a dipole motion in general corresponds to the vibration of the centroid of the distribution about its mean position, and in the case of charged systems, it is the out of phase motion of the centroids of the opposite charge distributions that stabilizes the dipole oscillation. But there is no negative polarity for the mass distribution, so a dipole oscillation is ruled out, and so are all the modes with odd $\ell$, as they would correspond to the oscillation of the center of mass of the system about its origin. Here we present some observations made on the monopole oscillations of the cluster. A monopole excitation is given to the cluster as follows. The system equilibrated at any temperature performs shape oscillations of its own accord owing to the interplay of its kinetic and potential energies. First, a reference time $t=0$ is chosen such that at that time the cluster has expanded to its maximum and is just about to start contracting. At this stage, when most of its energy is in the potential energy form, the cluster is given a sudden, radially uniform expansion, so as to raise the total potential energy of the cluster by a predetermined value. The velocities, and hence the kinetic energies of the atoms, are left unchanged; therefore the sudden expansion has the effect of an instantaneous increase of the total energy by a predetermined value. The cluster responds to the increase in amplitude and sets itself into oscillation in a pure mode, and this amounts to giving a monopole excitation to the cluster. We will add more on the method of giving the excitation to the system later in the discussion. Fig. 1 shows the time evolution of the monopole component of the cluster at three different temperatures: the first one at $5^o$K, the second at $20^o$K, and the last at $30.5^o$K, which is very close to the melting point ($\sim 34^o$K). A monopole excitation corresponding to an excitation energy of $\delta E=0.05 \times 10^{-14}$ erg/atom is given at time $t=0$. The solid curve shows the time evolution of the cluster, and one can immediately recognize that the relaxation of the excitation resembles the time evolution of a damped harmonic oscillator. The dotted curve in the plot actually is a fit to a damped oscillator \begin{equation} y(t)=y_0+A e^{-\lambda t} \cos(\omega't+\delta) \end{equation} where $\lambda$ is the decay constant and $\omega'$ is the reduced frequency of the damped oscillator. It should be mentioned here that the curves shown are actually not the ones obtained out of single runs, but in fact are averages over 500 independent runs. Though the excitation is given at a configuration at which the cluster is in a state of maximum expansion, in general the constituents of the cluster will not all be at their respective maximum displacements from the origin. As a consequence the response of the system to the excitation is quite sensitive to the initial configuration, and the quantities derived out of different runs show a spread in value about their mean, increasingly so at higher temperatures. Hence it becomes necessary that some kind of ensemble averaging be done in order to obtain good consistency. Note that the monopole oscillation of the cluster is essentially simple harmonic at very low temperatures, and the oscillations are progressively damped as the temperature is increased. The fit is remarkably good, suggesting that one may map the monopole mode motion of the cluster to a one-dimensional under-damped harmonic oscillator.
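The fitting step can be sketched with a standard nonlinear least-squares routine; the model is the damped-oscillator form above, applied to the ensemble-averaged monopole signal, and the initial-guess vector is left to the caller.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillator(t, y0, A, lam, omega_p, delta):
    """y(t) = y0 + A exp(-lam t) cos(omega' t + delta)."""
    return y0 + A * np.exp(-lam * t) * np.cos(omega_p * t + delta)

def fit_monopole(t, c00_avg, p0):
    """Fit the ensemble-averaged monopole component; returns (lambda, omega')."""
    popt, _ = curve_fit(damped_oscillator, t, c00_avg, p0=p0)
    y0, A, lam, omega_p, delta = popt
    return lam, omega_p
\end{verbatim}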
A plot of the potential energy of the cluster as a function of its radius as it evolves deviates very little from a perfect parabola [Fig. 2], giving further justification to the mapping. Hence we shall carry out further analysis in terms of the parameters of the equivalent harmonic oscillator, an exercise which will prove very fruitful. From the fit to the time evolution of the excitation [Fig. 1] we already have the values of two of the parameters of the equivalent harmonic oscillator, the damping coefficient $\lambda$ and the reduced frequency $\omega'$. From Fig. 2 we notice that the oscillator potential can well be taken to be parabolic with respect to the cluster radius, so we can make use of Hooke's law to obtain the spring constant $k$ of the oscillator. We can go further. The reduced frequency $\omega'$ of a damped harmonic oscillator is related to its natural frequency $\omega$ by \begin{equation} \omega'=\sqrt{\omega^2-\lambda^2} \end{equation} and from the value of the natural frequency we can obtain the mass $m$ of the equivalent oscillator from the relation $\omega=\sqrt{k/m}$. Thus, we now have the values of all the parameters of the equivalent oscillator, and henceforth we find it more convenient to discuss the system in terms of these parameters. We use the term ``reduced mass'' $m$ to refer to the mass of the equivalent oscillator, to distinguish it from the actual mass $M$ of the Argon atom.

\section{Results and discussion}

The results are summarized in Fig. 3. The data lie almost entirely in the solid-like domain of the cluster (the last four points are from the coexistence region). The abrupt change in the qualitative behavior of the oscillator parameters at $E_{tot}=-5.444 \times 10^{-14}$ erg/atom, which corresponds to a kinetic temperature of $T_s\sim 7.0^o$K, is quite puzzling. The damping coefficient is zero for $T<T_s$ and then rises linearly throughout the solid phase (Fig. 3(a)); the period of oscillation changes slope at $T_s$ (Fig. 3(b)); the spring constant has a slow linear rise at low temperatures but shows a dramatic $\sqrt{E_{tot}}$ rise beyond $T_s$ (Fig. 3(c)); and finally, the reduced mass drops drastically at $T_s$ (Fig. 3(d)). Some observations: \begin{enumerate} \item Of the four parameters of the damped harmonic oscillator, three (viz. the spring constant, the reduced mass and the damping coefficient) are completely independent of each other, and only the frequency of oscillation is a derived quantity. However, the transition at $T_s$ brings a qualitative change in all of them. \item It is on the total energy axis that these parameters indicate a clear-cut transition. Therefore it is the total energy, and not the kinetic temperature, that is the most relevant parameter for studying the system. \item The change of behavior along the total energy axis at this transition is much sharper than that in the case of the solid-liquid phase change. \item Other dynamical properties like the rms bond-length fluctuations, velocity autocorrelation function, specific heat and Lyapunov characteristic exponent should also manifest the signature of this transition. \end{enumerate} The sharpness of the transition at $T=T_s$ is unprecedented for a system of its size. Earlier investigations do not seem to have paid much attention to the system at such low temperatures.
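To make the parameter extraction described above concrete, a minimal sketch follows: the spring constant from a parabolic (Hooke's law) fit of potential energy against cluster radius as in Fig. 2, the natural frequency from the relation between $\omega$, $\omega'$ and $\lambda$, and the reduced mass from $\omega=\sqrt{k/m}$. The input arrays are assumed to be sampled along a run, with $\lambda$ and $\omega'$ taken from the fit of the previous step.

\begin{verbatim}
import numpy as np

def oscillator_parameters(radius, pot_energy, lam, omega_p):
    """Recover (k, omega, m) of the equivalent damped harmonic oscillator."""
    # Hooke's law: fit V(R) by a parabola; k is twice the quadratic coefficient
    a, b, c = np.polyfit(radius, pot_energy, 2)
    k = 2.0 * a
    omega = np.sqrt(omega_p**2 + lam**2)  # natural frequency from omega', lambda
    m = k / omega**2                      # reduced mass from omega = sqrt(k/m)
    return k, omega, m
\end{verbatim}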
From Fig. 3(a) we find that at temperatures below $T_s$ the oscillations are completely undamped, which means that the collective mode is very stable and the individual atoms of the cluster perform periodic motion without losing coherence. However, it should be noted that anharmonicity is inherent in the underlying interaction potential, and manifests itself as a slow linear rise in the time period of oscillation with the increase in total energy. The question then is what causes the damping of the collective mode at temperatures above $T_s$. Looking at the plot of the reduced mass, Fig. 3(d), it is curious to note that below $T_s$ the mass of the equivalent oscillator is smaller by a factor of 12 than the mass of the Argon atom, a factor the same as the number of surface atoms. In addition, the value of the reduced mass drops drastically at $T_s$ and reaches an asymptotic value of 1 at higher temperatures. This leads us to the picture of a set of coupled oscillators, which continues to oscillate without losing strength if set in one of its normal modes, and whose normal mode gradually decays if one or a few of the constituents are given additional independent disturbances. Based on this reasoning, we infer that for $T<T_s$ collective modes are stable, whereas above $T_s$ particles somehow begin to make independent motions, and these independent motions cause the damping of the collective mode. One can then argue that as the temperature is raised, these independent particle motions become more and more prominent, destroying the collective oscillations faster than before. If this argument is indeed correct, a Fourier analysis of the motion of the particles should reflect this behavior. We computed the power spectra of the individual radial motions of the particles, summed them together, and plotted the result in Fig. 4. The plot clearly shows that for $T<T_s$ (Fig. 4(a)), the spectrum has a single sharp peak at the frequency component $\omega=11.3 \times 10^{10}$ Hz, and at $T_s$ (Fig. 4(b)), additional components have just begun to appear. At a slightly higher temperature (Fig. 4(c)), the spectrum shows a continuum at low frequencies, with a considerable reduction in the strength of the collective mode. The presence of an almost flat continuous spectrum is a clear indication of the incoherent motion of the particles. However, it should be noted that there are also additional peaks in the spectrum, which could mean the presence of other collective modes. Indeed, most of the individual spectra do show prominence at these peaks. This may lead us to another scenario, that anharmonicity sets in at $T_s$, as a consequence of which other modes are excited due to mode-mode coupling, thus causing the decay of the original pure mode. But it was found that the strengths are never the same at any component in the spectra of any two of the particles. On the other hand, the individual spectra of all the particles are strikingly identical at $T<T_s$. This gives convincing evidence for the proposition that it is the onset of the independent particle motions that damps the collective oscillations at $T>T_s$. Finally, at still higher temperatures (Fig. 4(d)), there is no trace of any collective mode. What remains to be explained, then, is how these independent particle motions are triggered and why they are absent below $T_s$.
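The summed power spectrum used in Fig. 4 can be estimated as sketched below; the input is assumed to be an array of single-particle radial trajectories sampled at the simulation time step, and each atom's mean radius is removed so that the zero-frequency component does not dominate.

\begin{verbatim}
import numpy as np

def summed_radial_power_spectrum(radii, dt):
    """Sum of single-particle power spectra of the radial motions.

    radii : array of shape (n_atoms, n_steps), r_i(t) for each atom
    dt    : sampling interval in seconds
    Returns (frequencies in Hz, summed power spectrum).
    """
    n_steps = radii.shape[1]
    fluct = radii - radii.mean(axis=1, keepdims=True)
    spectra = np.abs(np.fft.rfft(fluct, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    return freqs, spectra.sum(axis=0)
\end{verbatim}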
Interestingly, the situation is quite similar to some interacting quantum many body systems, like for example the case of a system with a magnetically ordered ground state, wherein the interaction gives rise to a finite gap in the single particle excitation spectrum, besides the formation of low lying collective modes. Another surprising feature is the behavior of the spring constant above $T_s$. We know that the potential energy surface gets widened as the energy is increased, so we would expect the spring constant to decrease as the energy is increased. Instead, we notice that the spring constant increases, as the square root of the total energy. To gain a better understanding, we plot the potential energy curves corresponding to different energies on the same graph in Fig. 5. Observe that for $T<T_s$, the oscillations at all temperatures lie on the same potential energy curve; only the amplitude increases with the increase in energy, just as in the case of an actual harmonic oscillator. On the other hand, above $T_s$, the oscillations of the cluster trace different potential energy curves at different temperatures. This implies that it is not the same harmonic oscillator any more at different temperatures. We reason that this happens because the potential energy hyper-surface of the cluster in the $3N$ dimensional co-ordinate space has a global minimum, and so long as the cluster is confined within this well the cluster sustains collective oscillations, individual particles retaining their relative phase coherence. The total energy $E_{tot}=-5.444 \times 10^{-14}$ erg/atom corresponding to $T_s$ can now be seen as the depth of this global minimum well, within which a pure normal mode of the cluster is stable. Above $T_s$, the cluster has sufficient energy to come out of this global minimum, and the particles find local minima pockets in the potential energy hyper-surface accessible to them. Once they start making excursions to such local minima pockets, the motions of different particles become asynchronous, causing them to lose coherence, and this has the effect of damping the collective oscillation. These excursions are the ``single particle excitations'' of this classical system. Different particles now go through the potential energy minimum at different instants of time, and that is the reason why the potential energy curve is much raised and the amplitude of the monopole mode is much smaller (Fig. 5). However, it is not yet clear why, with the increase in total energy, the potential energy curve becomes narrower and the spring constant increases as the square root of the total energy. The existence of a global minimum in the potential energy surface has long been known; the local minima are termed ``particle-hole structures'' \cite{Berry2}. It has been accepted that the cluster has an icosahedral structure so long as it is inside the global minimum and attains the liquid phase the moment it comes out of this well. Our results do not agree with this view; in fact, the depth of this global minimum is found to be just about $3.2\times10^{-14}$~erg, whereas it requires about $15.6\times10^{-14}$~erg of energy to take the cluster to the coexistence phase. At temperatures below $T_s$, since all the 12 surface atoms on the icosahedron execute coupled oscillations, the reduced mass $m$ of the normal mode is given by $${1\over m}=\sum_{i=1}^{12} {1\over M_i}={12\over M}$$ or $${M\over m}=12$$ in agreement with the results of the simulation (Fig. 3(d)).
The transition temperature $T_s$ corresponds to the situation wherein the motion of only one of the 12 surface atoms on the average becomes incoherent with the motion of the rest. This is reflected in the reduced mass, which at this temperature drops to one eleventh of the mass of the Ar atom, as can be seen from the figure. With the increase in temperature the reduced mass drops drastically and reaches an asymptotic value of 1. Since $1/m$ is a measure of the coherence of motion of the particles in the normal mode, the value of the ratio $M/m$ being 1.0 amounts to the constituents executing independent incoherent motion, just as in the case of a viscous fluid. This transition to a liquid-like behavior at $T_s$, which is far below the melting temperature, is a phenomenon new to clusters, and is absent in the case of bulk solids. Hence it is expected that it will be more difficult to observe this phenomenon with increasing cluster size. As has already been mentioned, we expect the transition at $T_s$ to manifest itself in other observables as well. Here are the other results: \begin{itemize} \item The rms bond length fluctuation (not shown here) starts from a value of zero and shows a quick rise in value till the kinetic temperature $T_s$, and only then settles down to the slow linear rise with temperature shown in Ref. \cite{Berry1}. \item Fig. 6 is the plot of the velocity autocorrelation (estimated as in the sketch below) -- the solid curve just below $T_s$ and the dotted one just above. This once again demonstrates the qualitative change in the dynamics of the cluster at $T_s$, there being a complete reversal of velocities at temperatures below $T_s$, and only a partial reversal the moment the temperature goes above $T_s$. A Fourier transform would show features similar to those of the single particle radial coordinates. \item The average radius of the cluster has a linear temperature dependence all along the solid phase; nevertheless it shows a sudden though small increase in value at $T_s$. \item The maximal Lyapunov exponent (MLE) is close to zero at $T<T_s$ and shows a sudden rise at $T_s$ (Fig. 7). \end{itemize} It is quite satisfying that an independent dynamical quantity, the MLE, gives yet another illustration of the effect we have observed. It was shown earlier that MLE data indicate the solid-liquid phase change by a sharp jump in value \cite{saroj}, and now we see that the same dynamical quantity once again shows a steep rise -- starting from a value of zero this time -- marking the presence of another transition. The MLE data imply that the system is integrable at $T<T_s$, and becomes chaotic above $T_s$. In other words, the onset of single particle excitations drives the system into ergodicity. Finally, a few words about the method of giving the excitation to the cluster. It is essential that the cluster be set into a pure mode oscillation to observe all the effects we have presented. In particular, if one starts from a cluster equilibrated at $T>T_s$ and cools it below $T_s$, it is unlikely that such a system would show the features as distinctly as presented here. The reason is that the single particle excitations present in the system above $T_s$ continue to be present while the system is cooled, and get proportionately amplified by the radial displacement of the particles meant to provide the monopole excitation. If started from the low temperature side, this problem does not arise until the temperature $T_s$, as the system continues to perform monopole oscillations without losing strength.
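A minimal sketch of the velocity autocorrelation estimator used for Fig. 6 follows; it is a per-run, time-averaged estimate, and the averaging over independent runs described above is assumed to be done outside this function.

\begin{verbatim}
import numpy as np

def velocity_autocorrelation(vel, max_lag):
    """Normalized C(tau) = <v(t).v(t+tau)> / <v(t).v(t)>.

    vel : array of shape (n_steps, n_atoms, 3)
    """
    n_steps = vel.shape[0]
    v = vel.reshape(n_steps, -1)        # flatten atom and Cartesian indices
    norm = (v * v).sum(axis=1).mean()   # <v(t).v(t)>
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = (v[: n_steps - lag] * v[lag:]).sum(axis=1).mean()
    return c / norm
\end{verbatim}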
However, beyond $T_s$, there is no way one can get rid of the presence of single particle excitations to begin with, so we take recourse to the option of taking statistical averages over many independent runs, with the hope that the effects of the independent particle motions are averaged out, leaving behind the systematics. The essence of the analysis is the presence of a kinetic temperature $T_s$, below which the cluster can sustain pure monopole oscillation forever and above which the particles invariably suffer a loss of memory.

\section{Summary}

In this article we have presented some results of a detailed analysis of the monopole excitation in the $Ar_{13}$ cluster. The ensemble average of the time evolution of the monopole excitation is found to fit very well with a damped harmonic oscillator. The parameters of the equivalent oscillator show a marked change in behavior at the kinetic temperature $T_s=7.0^o$K, which marks the existence of yet another characteristic temperature in the system. Below $T_s$ the cluster remains confined within the global minimum of the potential energy surface, due to which the normal mode oscillations remain undamped. The velocity autocorrelation function shows a complete bounce-back, revealing that there is absolutely no loss of coherence. The system moves out of the global minimum well at $T_s$, above which the local minima pockets in the potential energy surface act as single particle excitation channels in damping the collective modes. A continuous, flat power spectrum of the radial motions of the particles at low frequency components confirms the onset of these single particle excitations. The system is naturally driven into ergodicity at this point due to the availability of local energy minima pockets. This work paves the way for many interesting questions. First of all, one may like to investigate whether or not the presence of $T_s$ is a generic feature of small clusters, irrespective of the interaction potential used, including {\it ab initio} dynamics. We obviously do not expect this phenomenon to occur in large clusters. A closed shell structure plays a vital role in this phenomenon, so we might still see it in the 55-atom icosahedral cluster. One could also analyze higher angular momentum components in the same fashion. \vskip 1.5cm {\bf Acknowledgments} We are grateful to V. Mehra and Prof. R. Ramaswamy for providing us the MLE data at our request. One of us (UAS) would like to thank Profs. S. D. Mahanti and D. G. Kanhere for some stimulating discussions. We also thank the referee for his suggestions to calculate the power spectra of the single particle coordinates and the maximal Lyapunov exponent.
\section{Introduction}

The discoveries presented in this paper are motivated by deep-seated and long-standing problems in machine learning and statistics. In this article, I\ will formulate the problem of learning unknown, linear and quadratic discriminant functions from data as a locus problem, thereby formulating geometric locus methods within a statistical framework. I will devise fundamental, data-driven locus equations of binary classification for linear and quadratic classification systems in statistical equilibrium, where the opposing forces and influences of a system are balanced with each other, and the eigenenergy and the corresponding expected risk of a classification system are minimized. Geometric locus problems involve equations of curves or surfaces, where the coordinates of any given point on a curve or surface satisfy an equation, and all of the points on any given curve or surface possess a uniform geometric property. For example, a circle is a locus of points, all of which are at the same distance (the radius) from a fixed point (the center). Classic geometric locus problems involve algebraic equations and Cartesian coordinate systems \citep{Nichols1893,Tanner1898,Whitehead1911,Eisenhart1939}. The primary purpose of the article is to devise data-driven, mathematical laws that generate optimal, statistical classification systems which achieve minimum error rates for statistical distributions with unchanging statistics. Such optimal decision rules divide two-class feature spaces into decision regions that have minimal conditional probabilities of classification error. The data-driven, mathematical laws involve finding a solution of locus equations, subject to fundamental statistical laws for a classification system in statistical equilibrium. The data-driven, mathematical laws are based on unexpected relations between likelihood ratio tests, geometric locus methods, statistical methods, Hilbert space methods, and reproducing kernel Hilbert space methods. Moreover, the data-driven, mathematical laws govern learning machine architectures and generalization performance. The other purpose of the article is to introduce new ways of thinking about learning machines: in terms of fundamental \emph{statistical laws} that \emph{unknown discriminant functions} of \emph{data} are \emph{subject to}. Mathematical models of physical systems are often derived from physical laws that physical systems in equilibrium are subject to. For example, Kirchhoff's circuit laws and Newton's laws of motion are both based on the law of conservation of energy: in a closed system, i.e., a system that is isolated from its surroundings, the total energy of the system is conserved. Forms of energy include: $\left( 1\right) $ kinetic energy or energy associated with motion, $\left( 2\right) $ potential energy or energy of location with respect to some reference point, $\left( 3\right) $ chemical energy or energy stored in chemical bonds, which can be released in chemical reactions, $\left( 4\right) $ electrical energy or energy created by separating charges, e.g., energy stored in a battery, and $\left( 5\right) $ thermal energy or energy given off as heat, such as friction \citep[see][]{Strang1986}.
In this paper, I will introduce a statistical form of energy that is conserved: statistical eigenenergy or \emph{eigenenergy of location with respect to the primary reference point for a geometric locus}, where a primary reference point is the locus of the principal eigenaxis of a conic curve or a quadratic surface. Thinking about learning machines in terms of statistical laws will lead to the discovery of equations of statistical equilibrium along with equations of minimization of eigenenergy and expected risk that \emph{model-based architectures} of learning machines \emph{are subject to}. I\ will derive three model-based architectures from fundamental statistical laws that classification systems in statistical equilibrium are subject to. My discoveries are summarized below. I\ will devise a system of fundamental equations of binary classification for a classification system in statistical equilibrium that must be satisfied by likelihood ratios and decision boundaries that achieve minimum error rates. I will demonstrate that classification systems seek a point of statistical equilibrium where the opposing forces and influences of a classification system are balanced with each other, and the eigenenergy and the corresponding expected risk of a classification system are minimized. I\ will use these results to rigorously define three classes of learning machines that are scalable modules for optimal, statistical classification or pattern recognition systems, where each class of learning machines exhibits optimal generalization performance for a category of statistical distributions. One class of learning machines achieves minimum error rates for data sets drawn from statistical distributions that have unchanging statistics and similar covariance matrices. The other two classes of learning machines achieve minimum error rates for any given data sets drawn from statistical distributions that have either similar or dissimilar covariance matrices and unchanging statistics. All three classes of learning machines are solutions to fundamental integral equations of likelihood ratios and corresponding decision boundaries, so that each class of learning machines finds a point of statistical equilibrium where the opposing forces and influences of a statistical classification system are balanced with each other, and the eigenenergy and the corresponding expected risk of the learning machine are minimized. Thereby, for each class of learning machines, the generalization error of any given learning machine is a function of the amount of overlap between data distributions. Thus, the generalization error of each class of learning machines is determined by \emph{the minimum probability of classification error}, which is the \emph{lowest error rate} that can be achieved by a discriminant function and the \emph{best generalization error} that can be achieved by a learning machine. I\ will also define optimal ensemble systems for each class of learning machines so that any given ensemble system exhibits optimal generalization performance. I will devise three systems of data-driven, vector-based locus equations that generate optimal discriminant functions and decision boundaries. The three systems of locus equations involve solving variants of the inequality constrained optimization problem for linear, polynomial, and Gaussian kernel support vector machines (SVMs). 
All three classes of learning machines are capable of performing a wide variety of statistical pattern recognition tasks, where any given learning machine exhibits optimal generalization performance for a two-class feature space. For each class of learning machines, I will demonstrate that any given learning machine is a scalable, individual component of an optimal ensemble system, where any given ensemble system of learning machines exhibits optimal generalization performance for an $M$-class feature space. By way of motivation, I will begin with an overview of long-standing problems and unresolved questions in machine learning.

\section{Long-standing Problems in Machine Learning}

Machine learning is concerned with the design and development of computer programs or algorithms that enable computers to identify and discover patterns or trends contained within collections of digitized signals, images, documents, or networks. For example, deep learning algorithms enable computers to recognize objects contained within digitized videos. Machine learning algorithms are said to enable computers to ``learn from data.'' Supervised machine learning algorithms, called ``learning with a teacher,'' involve estimating unknown, input-output mappings or functions from training examples, where each example consists of a unique input signal and a desired (target) response or output. Accordingly, computers ``learn'' from training examples by constructing input-output mappings for unknown functions of data. Unsupervised machine learning algorithms involve estimating correlations or connections between training examples, e.g., clusters or groupings of training data \citep{Geman1992,Hastie2001,Haykin2009}. Supervised machine learning problems are generally considered extrapolation problems for unknown functions, e.g., nontrivial, black box estimates. Black boxes are defined in terms of inputs, subsequent outputs, and the mathematical functions that relate them. Because training points will never cover a space of possible inputs, practical learning machines must extrapolate in manners that provide effective generalization performance. The generalization performance of any given learning machine depends on the quality and quantity of the training data, the complexity of the underlying problem, the learning machine architecture, and the learning algorithm used to train the network \citep{Geman1992,Gershenfeld1999,Haykin2009}. Fitting learning machine architectures to unknown functions of data involves multiple and interrelated difficulties. Learning machine architectures are sensitive to algebraic and topological structures that include functionals, reproducing kernels, kernel parameters, and constraint sets \citep[see, e.g.,][]{Geman1992,Burges1998,Gershenfeld1999,Byun2002,Haykin2009,Reeves2015resolving}, as well as regularization parameters that determine eigenspectra of data matrices \citep[see, e.g.,][]{Haykin2009,Reeves2009,Reeves2011,Reeves2015resolving}. Identifying the correct form of an equation for a statistical model is also a large concern \citep{Daniel1979,Breiman1991,Geman1992,Gershenfeld1999,Duda2001}. Fitting all of the training data is generally considered bad statistical practice. Learning machine architectures that correctly interpolate collections of noisy training points tend to fit the idiosyncrasies of the noise and are not expected to exhibit good generalization performance.
Highly flexible architectures with indefinite parameter sets are said to overfit the training data \citep{Wahba1987,Breiman1991,Geman1992,Barron1998,Boser1992,Gershenfeld1999,Duda2001,Hastie2001,Haykin2009}. Then again, learning machine architectures that interpolate insufficient numbers of data points exhibit underfitting difficulties \citep{Guyon1992,Ivanciuc2007applications}. Architectures with too few parameters ignore both the noise and the meaningful behavior of the data \citep{Gershenfeld1999}. All of the above difficulties indicate that learning unknown functions from training data involves trade-offs between underfittings and overfittings of data points. The bias/variance dilemma describes statistical facets of these trade-offs \citep{Geman1992,Gershenfeld1999,Duda2001,Scholkopf2002,Haykin2009}.

\subsection{The Bias/Variance Dilemma}

All learning machine architectures are composed of training data. Moreover, the estimation error between a learning machine and its target function depends on the training data in a twofold manner. \citet*{Geman1992} examined these dual sources of estimation error in their article titled \textit{Neural Networks and the Bias/Variance Dilemma}. The crux of the dilemma is that estimation error is composed of two distinct components termed a bias and a variance. Large numbers of parameter estimates raise the variance, whereas incorrect statistical models increase the bias \citep{Geman1992,Gershenfeld1999,Duda2001,Hastie2001,Haykin2009}. The bias/variance dilemma can be summarized as follows. Model-free architectures based on insufficient data samples are unreliable and have slow convergence speeds. However, model-based architectures based on incorrect statistical models are also unreliable. By contrast, model-based architectures based on accurate statistical models are reliable and have reasonable convergence speeds. Even so, proper statistical models for model-based architectures are difficult to identify.

\subsubsection{Prewiring of Important Generalizations}

I\ have considered the arguments regarding the bias/variance dilemma presented by \citet{Geman1992}, and I have come to the overall conclusion that learning an unknown function from data requires prewiring important generalizations into a learning machine's architecture, where generalizations involve suitable representations of statistical decision systems. For statistical classification systems, I\ will show that generalizations involve \emph{joint representations} of discriminant functions \emph{and} decision boundaries. So how do we identify the important generalizations for a given problem? How should these generalizations be prewired? How do important generalizations represent key aspects of statistical decision systems? What does it really mean to introduce a carefully designed bias into a learning machine's architecture? How does the introduction of a proper bias involve a model-based architecture? How do we discover model-based architectures? What is a model-based architecture? I will consider all of these problems in terms of the fundamental modeling question posed next. The general problem is outlined in \citet{Naylor1971}.

\subsection{The System Representation Problem}

I propose that effective designs of learning machine architectures involve an underlying system modeling problem.
In particular, effective designs of model-based, learning machine architectures involve the formulation of a mathematical system which simulates essential stochastic behavior and models key aspects of a real statistical system. Accordingly, statistical model formulation for learning machine architectures involves the development of a mathematically tractable, statistical model that provides a useful representation of a statistical decision system. In general terms, a system is an interconnected set of elements which are coherently organized in a manner that achieves a useful function or purpose \citep{Meadows2008}. This indicates that prewiring relevant aspects of statistical decision systems within learning machine architectures involves effective interconnections between suitable sets of coherently organized data points. I will devise three classes of learning machine architectures that provide substantial examples of effective interconnections between suitable sets of coherently organized data points.

\subsection{Suitable Representations for Learning Machines}

The matter of identifying suitable representations for learning machine architectures is important. Indeed, for many scientific and engineering problems, there is a natural and elegant way to represent the solution. For example, each of the well-known special functions, e.g., Legendre polynomials, Bessel functions, Fourier series, Fourier integrals, etc., has the common motivation of being most appropriate for certain problems and quite unsuitable for others, where each special function represents the relevant aspects of a physical system \citep{Keener2000}. Moreover, most mathematical models of physical systems are based on a fundamental principle that nature acts to minimize energy. Accordingly, physical systems seek a point of equilibrium where the opposing forces and influences of the system are balanced with each other, and the energy of the system is minimized \citep[see][]{Strang1986}. Yet, most machine learning methods attempt to approximate unknown functions with methods that assume no sort of representation, e.g., nonparametric inference methods \citep{Geman1992,Cherkassky1998,Duda2001,Hastie2001,Haykin2009}, or assume representations that are tentative and ill-defined, e.g., indefinite interpolations of SVM margin hyperplanes in unknown, high-dimensional spaces \citep{Boser1992,Cortes1995}. Likewise, consider the notion of the asymptotic convergence of a learning machine architecture to some unknown function. Can we picture what this actually means? So how do we devise mathematically tractable, statistical models for learning machine architectures?

\subsection{Learning Statistical Laws from Training Data}

Tangible representations provide objects and forms which can be seen and imagined, along with a perspective for seeing and imagining them \citep{Hillman2012}. I will devise effective hyperparameters and substantial geometric architectures for linear, polynomial, and Gaussian kernel SVMs, each of which is based on a mathematically tractable, statistical model that can be depicted and understood in two and three-dimensional Euclidean spaces and fully comprehended in higher dimensions. Each statistical model is derived from fundamental laws in mathematics and statistics, whereby computers ``learn'' optimal decision rules from data by finding a solution of locus equations, subject to fundamental statistical laws that are satisfied by classification systems in statistical equilibrium.
In this paper, I\ will devise model-based architectures for learning machines that determine optimal decision functions and boundaries for training data drawn from any two statistical distributions that have unchanging statistics and $\left( 1\right) $ similar covariance matrices, $\left( 2\right) $ dissimilar covariance matrices, and $\left( 3\right) $ homogeneous distributions which are completely overlapping with each other. Any given discriminant function and decision boundary satisfies fundamental statistical laws for a binary classification system in statistical equilibrium. Thereby, each class of learning machines achieves the lowest possible error rate, i.e., the learning machine minimizes the average risk $\mathfrak{R}_{\min}$, which is the lowest error rate that can be achieved by any linear or quadratic discriminant function \citep{VanTrees1968,Fukunaga1990,Duda2001}. All of the problems outlined above indicate that learning unknown functions from data involves indeterminate problems which remain largely unidentified. I have identified a problem which I\ have named the geometric locus dilemma \citep{Reeves2015resolving}. The geometric locus dilemma is summarized below.

\subsection{The Geometric Locus Dilemma}

Any given conic section or quadratic surface is a \emph{predetermined} geometric \emph{configuration} of points, i.e., endpoints of directed line segments called vectors, whose Cartesian coordinate locations satisfy, i.e., are \emph{determined by}, an algebraic \emph{equation}. Moreover, curves or surfaces of standard locus equations are determined by properties of geometric loci with respect to coordinate axes of \emph{arbitrary} Cartesian coordinate systems. Thereby, an algebraic equation of a classical locus of points generates an \emph{explicit} curve or surface in an \emph{arbitrarily specified} Cartesian space. It follows that any point on a classical geometric locus \emph{naturally} exhibits the uniform property of the locus. So, consider fitting a collection of training data to standard locus equations. Given the correlated, algebraic and geometric constraints on a traditional locus of points, it follows that any attempt to fit an $N$-dimensional set of $d$-dimensional, random data points to the standard equation(s) of a geometric locus involves the unsolvable problem of determining an effective constellation of an $\left( N-M\right) \times d$ subset of $N\times d$ random vector coordinates that $(1)$ inherently satisfy preset, and thus fixed, length constraints on each of the respective $d$ Cartesian coordinate axes and thereby $(2)$ generate predetermined curves or surfaces in the Cartesian space $\mathbb{R}^{d}$. Such estimation tasks are not possible. It follows that fitting collections of random data points to standard locus equations is \emph{an impossible estimation task}. Given the limitations imposed by the geometric locus dilemma, the design and development of learning machine architectures has primarily been based on curve and surface fitting methods of interpolation or regression, alongside statistical methods of reducing data to minimum numbers of relevant parameters. For example, multilayer artificial neural networks (ANNs) estimate nonlinear regressions with optimally pruned architectures. Good generalization performance for ANNs is considered an effect of a good nonlinear interpolation of the training data \citep{Geman1992,Haykin2009}.
Alternatively, support vector machines (SVMs) fit linear curves or surfaces to minimum numbers of training points. Good generalization performance for SVMs is largely attributed to maximally separated, linear decision borders \citep{Boser1992,Cortes1995,Bennett2000,Cristianini2000,Scholkopf2002}.

\subsection{The Geometric Locus Dilemma for SVMs}

SVMs estimate linear and nonlinear decision boundaries by solving a quadratic programming problem. SVM methods specify a pair of linear borders, termed margin hyperplanes, that pass through data points called support vectors. The capacity or complexity of SVM decision boundaries is regulated by means of a geometric margin of separation between a pair of margin hyperplanes. The SVM method minimizes the capacity of a separating hyperplane by maximizing the distance between a pair of margin hyperplanes or linear borders. Large distances between margin hyperplanes $(1)$ allow for considerably fewer hyperplane orientations and $(2)$ enforce a limited capacity to separate training data. Thus, maximizing the distance between margin hyperplanes regulates the complexity of separating hyperplane estimates \citep{Boser1992,Cortes1995,Burges1998,Bennett2000,Cristianini2000,Scholkopf2002}.

\subsubsection{Separation of Overlapping Data}

Identifying interpolation methods that provide effective fits of separating lines, planes, or hyperplanes involves the long-standing problem of fitting linear decision boundaries to overlapping sets of data points \citep[see, e.g.,][]{Cover1965,Cortes1995}. Soft margin, linear kernel SVMs are said to resolve this problem by means of non-negative, random slack variables $\xi_{i}\geq0$, each of which allows a correlated data point $\mathbf{x}_{i}$ that lies between a pair of linear decision borders to satisfy a linear border. Nonlinear kernel SVMs also employ non-negative, random slack variables, each of which allows a transformed, correlated data point to satisfy a hyperplane decision border in some higher dimensional feature space \citep{Cortes1995,Bennett2000,Cristianini2000,Hastie2001}.

\subsubsection{An Impossible Estimation Task}

SVM applications of slack variables imply that non-negative, random slack variables \emph{specify distances} of data points \emph{from unknown} linear curves or surfaces. Clearly, this is an \emph{impossible estimation task}. Therefore, given $l$ overlapping data points $\left\{ \mathbf{x}_{i}\right\} _{i=1}^{l}$ and $l$ non-negative, random slack variables $\left\{ \xi_{i}|\xi_{i}\geq0\right\} _{i=1}^{l}$, it is concluded that computing effective values for the $l$ non-negative, random slack variables $\left\{ \xi_{i}|\xi_{i}\geq0\right\} _{i=1}^{l}$ is an impossible estimation task.

\subsubsection{Hyperparameter Tuning}

SVMs are widely reported to perform well on statistical classification tasks. However, SVMs require \emph{computationally expensive} hyperparameter \emph{tuning} \citep{Byun2002,Liang2011}. For nonlinear kernel SVMs, the user must select the polynomial degree or the kernel width. In addition, the user must select regularization parameters for linear and nonlinear kernel SVMs. Generally speaking, the performance of SVMs depends on the regularization parameters, the type of kernel, and the kernel parameter for nonlinear kernels. The choice of a nonlinear kernel and its parameter for a given problem is a research issue \citep{Burges1998}. In this paper, I\ will introduce new ways of thinking about linear, polynomial, and Gaussian kernel SVMs.
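To make the tuning burden concrete, the following is a minimal sketch of the kind of joint search over a regularization parameter $C$ and a Gaussian kernel width $\gamma$ that the text refers to, written against the scikit-learn SVM interface; the toy data set and the parameter grids are illustrative assumptions only.

\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy two-class data standing in for a real feature space
X, y = make_classification(n_samples=400, n_features=5, random_state=0)

# Both the regularization parameter C and the kernel width gamma must be
# tuned, with a cross-validated fit for every grid point
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
\end{verbatim}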
In the process, I will resolve the geometric locus dilemma for all three classes of SVMs. I\ will define effective hyperparameters for polynomial and Gaussian kernel SVMs. I\ will also define regularization methods for linear, polynomial, and Gaussian kernel SVMs. Thereby, I will devise three classes of high-performance learning machines that are scalable modules for statistical pattern recognition systems.

\subsection{Statistical Pattern Recognition Systems}

Statistical pattern recognition systems provide an automated means to identify or discover information and objects contained within large collections of digitized $(1)$ images or videos, $(2)$ waveforms, signals, or sequences, and $(3)$ documents. Statistical pattern recognition systems provide automated processes such as optical character recognition, geometric object recognition, speech recognition, spoken language identification, handwriting recognition, waveform recognition, face recognition, system identification, spectrum identification, fingerprint identification, and DNA sequencing \citep{Srinath1996,Jain2000statistical,Duda2001}.

\subsubsection{Design of Statistical Pattern Recognition Systems}

Statistical pattern recognition systems divide pattern spaces into decision regions that are separated by decision boundaries; pattern spaces are commonly known as feature spaces. The design of statistical pattern recognition systems involves two fundamental problems. The first problem concerns identifying measurements or numerical features of the objects being classified and using these measurements to form pattern or feature vectors for each pattern class. For $M$ classes of patterns, a pattern space is composed of $M$ regions, where each region contains the pattern vectors of a class. The second problem involves generating decision boundaries that divide a pattern space into $M$ regions. Functions that determine decision boundaries are called discriminant functions \citep{VanTrees1968,Srinath1996,Duda2001}.

\section{Theory of Binary Classification}

The problem of generating decision boundaries involves specifying discriminant functions. The fundamental problem of explicitly defining discriminant functions is termed the binary classification problem. I\ will present an overview of existing criteria for the binary classification problem, and I\ will use these results to develop new criteria and a theorem for binary classification.

\subsubsection{The Binary Classification Problem}

The basic classification problem of deciding between two pattern or feature vectors is essentially one of partitioning a feature space $Z$ into two suitable regions $Z_{1}$ and $Z_{2}$, such that whenever a pattern vector $\mathbf{x}$ lies in region $Z_{1}$, the classifier decides that $\mathbf{x}$ belongs to class $\omega_{1}$, and whenever $\mathbf{x}$ lies in region $Z_{2}$, the classifier decides that $\mathbf{x}$ belongs to class $\omega_{2}$. When the classifier makes a wrong decision, the classifier is said to make an error. A suitable criterion is necessary to determine the best possible partitioning for a given feature space $Z$ \citep{VanTrees1968,Srinath1996,Duda2001}.

\subsection{Bayes' Criterion}

Bayes' criterion divides a feature space $Z$ in a manner that minimizes the probability of classification error $\mathcal{P}_{\min_{e}}\left( Z\right) $.
Bayes' decision rule is determined by partitioning a feature space $Z$ into two regions $Z_{1}$ and $Z_{2}$, where the average risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ of the total probability of making a decision error $\mathcal{P}_{\min_{e}}\left( Z\right) $ is minimized. The overall cost $C$ that determines the Bayes' risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ is controlled by assigning points to two suitable regions $Z_{1}$ and $Z_{2}$, where each region $Z_{1}$ or $Z_{2}$ has a Bayes' risk $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ or $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ that determines the minimum probability of error $\mathcal{P}_{\min_{e}}\left( Z_{1}\right) $ or $\mathcal{P}_{\min_{e}}\left( Z_{2}\right) $ for the region. A Bayes' test is based on two assumptions. The first is that prior probabilities $P\left( \omega_{1}\right) $ and $P\left( \omega_{2}\right) $ of the pattern classes $\omega_{1}$ and $\omega_{2}$ represent information about the pattern vector sources. The second assumption is that a cost is assigned to each of the four possible outcomes associated with a decision. Denote the costs for the four possible outcomes by $C_{11}$, $C_{21}$, $C_{22}$, and $C_{12}$, where the first subscript indicates the chosen class and the second subscript indicates the true class. Accordingly, each time that the classifier makes a decision, a certain cost will be incurred \citep{VanTrees1968,Srinath1996,Duda2001}.

\subsection{Bayes' Likelihood Ratio Test}

Minimization of the Bayes' risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ produces decision regions defined by the following expression: if \[ P\left( \omega_{1}\right) \left( C_{21}-C_{11}\right) p\left( \mathbf{x}|\omega_{1}\right) \geq P\left( \omega_{2}\right) \left( C_{12}-C_{22}\right) p\left( \mathbf{x}|\omega_{2}\right) \text{,} \] then assign the pattern vector $\mathbf{x}$ to region $Z_{1}$ and say that class $\omega_{1}$ is true. Otherwise, assign the pattern vector $\mathbf{x}$ to region $Z_{2}$ and say that class $\omega_{2}$ is true. The resulting discriminant function is Bayes' likelihood ratio test \begin{equation} \Lambda\left( \mathbf{x}\right) \triangleq\frac{p\left( \mathbf{x}|\omega_{1}\right) }{p\left( \mathbf{x}|\omega_{2}\right) }\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\frac{P\left( \omega_{2}\right) \left( C_{12}-C_{22}\right) }{P\left( \omega_{1}\right) \left( C_{21}-C_{11}\right) }\text{,} \label{Bayes' LRT} \end{equation} where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are class-conditional density functions. Given Bayes' likelihood ratio test, a decision boundary divides a feature space $Z$ into two decision regions $Z_{1}$ and $Z_{2}$, where a decision rule $\Lambda\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\eta$ is based on a threshold $\eta$. Figure $\ref{Bayes' Decision Rule}$ illustrates how Bayes' decision rule divides a feature space $Z$ into decision regions \citep{VanTrees1968,Srinath1996}.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure1.png}}
\caption{Bayes' likelihood ratio test divides a feature space $Z$ into decision regions $Z_{1}$ and $Z_{2}$ which minimize the Bayes' risk $\mathfrak{R}_{\mathfrak{B}}\left( Z|\Lambda\right) $.}
\label{Bayes' Decision Rule}
\end{figure}

Bayes' decision rule in Eq.
(\ref{Bayes' LRT}) computes the likelihood ratio for a feature vector $\mathbf{x}$ \[ \Lambda\left( \mathbf{x}\right) =\frac{p\left( \mathbf{x}|\omega_{1}\right) }{p\left( \mathbf{x}|\omega_{2}\right) } \] and makes a decision by comparing the ratio $\Lambda\left( \mathbf{x}\right) $ to the threshold $\eta$ \[ \eta=\frac{P\left( \omega_{2}\right) \left( C_{12}-C_{22}\right) }{P\left( \omega_{1}\right) \left( C_{21}-C_{11}\right) }\text{.} \] Costs and prior probabilities are usually based on educated guesses. Therefore, it is common practice to determine a likelihood ratio $\Lambda\left( \mathbf{x}\right) $ that is independent of costs and prior probabilities and let $\eta$ be a variable threshold that accommodates changes in estimates of cost assignments and prior probabilities \citep{VanTrees1968,Srinath1996}. Bayes' classifiers are difficult to design because the class-conditional density functions and the decision threshold are usually not known. Instead, a collection of ``training data'' is used to estimate either decision boundaries or class-conditional density functions \citep{Fukunaga1990,Duda2001,Hastie2001,Haykin2009}.

\subsubsection{Minimal Probability of Error Criterion}

If $C_{11}=C_{22}=0$ and $C_{21}=C_{12}=1$, then the average risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ is given by the expression \begin{equation} \mathfrak{R}_{\mathfrak{B}}\left( Z\right) =P\left( \omega_{2}\right) \int_{Z_{1}}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}+P\left( \omega_{1}\right) \int_{Z_{2}}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x} \label{Bayes' Risk} \end{equation} which is the total probability of making an error. Given this cost assignment, the Bayes' test $\ln\Lambda\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\eta$ \[ \ln\frac{p\left( \mathbf{x}|\omega_{1}\right) }{p\left( \mathbf{x}|\omega_{2}\right) }\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\frac{P\left( \omega_{2}\right) }{P\left( \omega_{1}\right) }\text{,} \] which can be written as \[ \ln p\left( \mathbf{x}|\omega_{1}\right) -\ln p\left( \mathbf{x}|\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln P\left( \omega_{2}\right) -\ln P\left( \omega_{1}\right) \text{,} \] minimizes the total probability of error. When the two given pattern classes are equally likely, the decision threshold is zero: $\ln\eta=0$. The probability of misclassification is called the Bayes' error \citep{VanTrees1968,Fukunaga1990,Srinath1996,Duda2001}.

\subsection{Bayes' Error}

The performance of a discriminant function can be determined by evaluating the errors associated with making decisions. For a binary classification problem, there are two types of decision errors. A classifier may decide class $\omega_{2}$ when the true class is $\omega_{1}$ (denoted by $D_{2}|\omega_{1}$), or a classifier may decide class $\omega_{1}$ when the true class is $\omega_{2}$ (denoted by $D_{1}|\omega_{2}$). Each type of decision error has a probability associated with it which depends on the class-conditional densities and the decision rule. Let $P_{1}$ denote the probability of error $P\left( D_{2}|\omega_{1}\right) $ corresponding to deciding class $\omega_{2}$ when the true class is $\omega_{1}$.
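As a worked illustration of Eq. (\ref{Bayes' LRT}), the sketch below applies the log-likelihood ratio test to two univariate Gaussian class-conditional densities; the means, common variance, priors, and costs are assumed values chosen only for the example.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Assumed class-conditional densities and priors
p1 = norm(loc=+1.0, scale=1.0)   # p(x | omega_1)
p2 = norm(loc=-1.0, scale=1.0)   # p(x | omega_2)
P1, P2 = 0.5, 0.5                # prior probabilities

def decide(x, C11=0.0, C22=0.0, C21=1.0, C12=1.0):
    """Bayes' likelihood ratio test: returns 1 for omega_1, 2 for omega_2."""
    log_lr = p1.logpdf(x) - p2.logpdf(x)
    log_eta = np.log(P2 * (C12 - C22)) - np.log(P1 * (C21 - C11))
    return 1 if log_lr >= log_eta else 2

print(decide(0.7), decide(-0.3))   # expected: 1, 2
\end{verbatim}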
The $P_{1}$ Bayes' error is given by the integral \begin{align} P_{1} & =P\left( D_{2}|\omega_{1}\right) =\int_{-\infty}^{\eta}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x}\label{Type One Error}\\ & =\int_{Z_{2}}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x}\nonumber \end{align} which is a conditional probability given the class-conditional density $p\left( \mathbf{x}|\omega_{1}\right) $ and the decision region $Z_{2}$. Let $P_{2}$ denote the probability of error $P\left( D_{1}|\omega_{2}\right) $ corresponding to deciding class $\omega_{1}$ when the true class is $\omega_{2}$. The $P_{2}$ Bayes' error is given by the integral \begin{align} P_{2} & =P\left( D_{1}|\omega_{2}\right) =\int_{\eta}^{\infty}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}\label{Type Two Error}\\ & =\int_{Z_{1}}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}\nonumber \end{align} which is a conditional probability given the class-conditional density $p\left( \mathbf{x}|\omega_{2}\right) $ and the decision region $Z_{1}$. Once the $Z_{1}$ and $Z_{2}$ decision regions are chosen, the values of the error integrals in Eqs. (\ref{Type One Error}) and (\ref{Type Two Error}) are determined \citep{VanTrees1968,Srinath1996}. I\ will redefine decision regions in the next section.

\subsection{Practical Decision Regions}

Bayes' decision rule defines the $Z_{1}$ and $Z_{2}$ decision regions to consist of values of $\mathbf{x}$ for which the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ is, respectively, greater than or less than a threshold $\eta$, where any given set of $Z_{1}$ and $Z_{2}$ decision regions spans an entire feature space over the interval $\left( -\infty,\infty\right) $. Accordingly, the lower limit of the integral for the $P_{1}$ Bayes' error in Eq. (\ref{Type One Error}) is $-\infty$, and the upper limit of the integral for the $P_{2}$ Bayes' error in Eq. (\ref{Type Two Error}) is $\infty$. Figure $\ref{Bayes' Decision Rule}$ depicts an example for which the $Z_{1}$ and $Z_{2}$ decision regions span an entire feature space $Z$, where \[ Z=Z_{1}+Z_{2}=Z_{1}\cup Z_{2}\text{.} \] I\ will now redefine decision regions based solely on regions that are associated with decision errors or the lack thereof. Accordingly, regions associated with decision errors involve regions associated with overlapping data distributions, and regions associated with no decision errors involve regions associated with non-overlapping data distributions. For overlapping data distributions, Fig. $\ref{Bayes' Error and Decision Regions Overlapping Data}$ illustrates how the values of the $P_{1}$ and $P_{2}$ error integrals in Eqs. (\ref{Type One Error}) and (\ref{Type Two Error}) are determined by a decision threshold $\eta$ and the $Z_{1}$ and $Z_{2}$ decision regions.
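Continuing the same illustrative example, the error integrals in Eqs. (\ref{Type One Error}) and (\ref{Type Two Error}) reduce to Gaussian tail probabilities and can be evaluated directly; the densities and the threshold $\eta=0$ below are the assumed values of the previous sketch.

\begin{verbatim}
from scipy.stats import norm

eta = 0.0   # decision threshold for equal priors and zero-one costs

# P1 = integral over Z_2 = (-inf, eta) of p(x | omega_1) dx
P1_err = norm.cdf(eta, loc=+1.0, scale=1.0)
# P2 = integral over Z_1 = (eta, inf) of p(x | omega_2) dx
P2_err = 1.0 - norm.cdf(eta, loc=-1.0, scale=1.0)

print(P1_err, P2_err)   # both ~0.1587 for this symmetric example
\end{verbatim}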
Accordingly, the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ is determined by the class-conditional densities $p\left( \mathbf{x}|\omega_{2}\right) $ and $p\left( \mathbf{x}|\omega_{1}\right) $ and the corresponding $Z_{1}$ and $Z_{2}$ decision regions.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure2.png}}
\caption{The average risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $, i.e., the decision error $\mathcal{P}_{\min_{e}}\left( Z\right) $, is determined by class-conditional densities $p\left( \mathbf{x}|\omega_{2}\right) $ and $p\left( \mathbf{x}|\omega_{1}\right) $ and decision regions $Z_{1}$ and $Z_{2}$ that may or may not be contiguous, where region $Z_{1}$ or $Z_{2}$ has a conditional risk $\mathfrak{R}\left( Z_{1}\right) $ or $\mathfrak{R}\left( Z_{2}\right) $ that determines the conditional probability of error $\mathcal{P}_{\min_{e}}\left( Z_{1}\right) $ or $\mathcal{P}_{\min_{e}}\left( Z_{2}\right) $ for the region.}
\label{Bayes' Error and Decision Regions Overlapping Data}
\end{figure}

\subsubsection{Decision Regions for Overlapping Data Distributions}

For overlapping data distributions, decision regions are defined to be those regions that span regions of \emph{data distribution overlap}. Accordingly, the $Z_{1}$ decision region, which is associated with class $\omega_{1}$, spans a finite region between the decision threshold $\eta$ and the region of distribution overlap between $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, whereas the $Z_{2}$ decision region, which is associated with class $\omega_{2}$, spans a finite region between the region of distribution overlap between $p\left( \mathbf{x}|\omega_{2}\right) $ and $p\left( \mathbf{x}|\omega_{1}\right) $ and the decision threshold $\eta$ (see, e.g., Fig. $\ref{Bayes' Error and Decision Regions Overlapping Data}$). It follows that the risk $\mathfrak{R}\left( Z_{1}\right) $ in the $Z_{1}$ decision region involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $ contribute to the $P_{2}$ decision error in Eq. (\ref{Type Two Error}) and the total risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in Eq. (\ref{Bayes' Risk}). It also follows that the risk $\mathfrak{R}\left( Z_{2}\right) $ in the $Z_{2}$ decision region involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $ and $p\left( \mathbf{x}|\omega_{1}\right) $, where pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ contribute to the $P_{1}$ decision error in Eq. (\ref{Type One Error}) and the total risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in Eq. (\ref{Bayes' Risk}). Accordingly, the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in Eq. (\ref{Bayes' Risk}) is determined by the risks $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ and $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ in the corresponding $Z_{1}$ and $Z_{2}$ decision regions, which involve functionals of pattern vectors $\mathbf{x}$ that lie in overlapping regions of data distributions.
\subsubsection{Decision Regions for Non-overlapping Data Distributions}

For non-overlapping data distributions, the $Z_{1}$ decision region, which is associated with class $\omega_{1}$, spans a finite region between the decision threshold $\eta$ and the tail region of $p\left( \mathbf{x}|\omega_{1}\right) $, whereas the $Z_{2}$ decision region, which is associated with class $\omega_{2}$, spans a finite region between the tail region of $p\left( \mathbf{x}|\omega_{2}\right) $ and the decision threshold $\eta$. Because the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ in the $Z_{1}$ decision region only involves functionals of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ in region $Z_{1}$ is zero: $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) =0$. Likewise, because the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ in the $Z_{2}$ decision region only involves functionals of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $, the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ in region $Z_{2}$ is zero: $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) =0$. Accordingly, the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in the decision space $Z$ is zero because $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) +\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) =0$, where the risks $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ and $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ in the $Z_{1}$ and $Z_{2}$ decision regions involve functionals of pattern vectors $\mathbf{x}$ that lie in tail regions of data distributions.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure3.png}}
\caption{For non-overlapping data distributions, the average risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in the decision space $Z$ is zero because the conditional risks $\mathfrak{R}\left( Z_{1}\right) $ and $\mathfrak{R}\left( Z_{2}\right) $ in the $Z_{1}$ and $Z_{2}$ decision regions are zero: $\mathfrak{R}\left( Z_{1}\right) =\mathfrak{R}\left( Z_{2}\right) =0$.}
\label{Bayes' Error and Decision Regions Non-overlapping Data}
\end{figure}
For non-overlapping data distributions, Fig. $\ref{Bayes' Error and Decision Regions Non-overlapping Data}$ illustrates that the values of the $P_{1}$ and $P_{2}$ error integrals in Eqs (\ref{Type One Error}) and (\ref{Type Two Error}), which are determined by the risks $\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ and $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) $ in the corresponding $Z_{2}$ and $Z_{1}$ decision regions, are effectively zero, so that the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) $ in the decision space $Z$ is zero: $\mathfrak{R}_{\mathfrak{B}}\left( Z\right) =0$.

\subsection{Decision Error in Terms of the Likelihood Ratio}

The $Z_{1}$ and $Z_{2}$ decision regions consist of values of $\mathbf{x}$ for which the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ is, respectively, greater than or less than the decision threshold $\eta$. Accordingly, the error $P\left( D_{2}|\omega_{1}\right) $ in Eq.
(\ref{Type One Error}) can be written in terms of the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ as
\begin{equation}
P_{1}=\int_{Z_{2}}p\left( \Lambda\left( \mathbf{x}\right) |\omega_{1}\right) d\Lambda\text{,} \label{Type One Error Likelihood Ratio}
\end{equation}
and the error $P\left( D_{1}|\omega_{2}\right) $ in Eq. (\ref{Type Two Error}) can be written in terms of the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ as
\begin{equation}
P_{2}=\int_{Z_{1}}p\left( \Lambda\left( \mathbf{x}\right) |\omega_{2}\right) d\Lambda\text{,} \label{Type Two Error Likelihood Ratio}
\end{equation}
where $P_{1}$ and $P_{2}$ are conditional probabilities given the decision regions $Z_{1}$ and $Z_{2}$ and the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ \citep{VanTrees1968,Srinath1996}. Figure $\ref{Bayes' Decision Rule and Risk}$ illustrates how the decision error is a function of the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ and the $Z_{1}$ and $Z_{2}$ decision regions.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure4.png}}
\caption{Illustration of a decision rule that partitions a feature space $Z$ in a manner that minimizes the conditional probabilities of decision error $P_{1}$ and $P_{2}$ given the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ and the decision regions $Z_{1}$ and $Z_{2}$.}
\label{Bayes' Decision Rule and Risk}
\end{figure}

\subsection{Bayes' Minimax Criterion}

The Bayes' criterion requires that costs be assigned to the various decisions and that values be assigned to the prior probabilities of the pattern classes. If not enough is known about the sources or mechanisms generating a set of pattern or feature vectors, then the prior probabilities cannot be determined, and the Bayes' criterion cannot be applied. However, a decision rule can still be obtained by using a criterion known as the minimax risk, which minimizes the maximum risk. A decision rule that satisfies Bayes' minimax risk is called a minimax test. In some cases, the threshold for the minimax test is identical to the threshold for the minimum probability of error test \citep{VanTrees1968,Poor1996,Srinath1996}.

\subsubsection{Equalizer Rules}

A minimax test is also called an \emph{equalizer rule}, where any given equalizer rule performs well over a range of prior probabilities \citep{Poor1996}. Bayes' minimax risk $\mathfrak{R}_{mm}\left( Z\right) $, which is given by the integral equation
\begin{align}
\mathfrak{R}_{mm}\left( Z\right) & =C_{22}+\left( C_{12}-C_{22}\right) \int\nolimits_{Z_{1}}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}\label{Minimax Risk}\\
& =C_{11}+\left( C_{21}-C_{11}\right) \int\nolimits_{Z_{2}}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x}\text{,}\nonumber
\end{align}
determines a decision rule for which the $Z_{1}$ and $Z_{2}$ decision regions have equal Bayes' risks: $\mathfrak{R}_{\mathfrak{B}}\left( Z_{1}\right) =\mathfrak{R}_{\mathfrak{B}}\left( Z_{2}\right) $ \citep{VanTrees1968,Srinath1996}.

\subsection{Bayes' Minimax Theorem}

For the case when $C_{11}=C_{22}=0$ and $C_{21}=C_{12}=1$, the minimax integral equation reduces to $P_{2}=P_{1}$, where the minimax cost is the average probability of error.
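To make the equalizer condition $P_{2}=P_{1}$ concrete, the following minimal sketch numerically solves for the threshold at which the two error integrals balance, again assuming two hypothetical one-dimensional Gaussian class-conditional densities; the parameters and the root-finding bracket are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: solving the minimax (equalizer) condition P2(eta) = P1(eta)
# for *assumed* 1-D Gaussian class-conditional densities.
from scipy.optimize import brentq
from scipy.stats import norm

mu1, sigma1 = 1.0, 1.0    # assumed p(x|omega_1)
mu2, sigma2 = -1.0, 2.0   # assumed p(x|omega_2)

def P1(eta):   # mass of p(x|omega_1) in Z2 = (-inf, eta)
    return norm.cdf(eta, loc=mu1, scale=sigma1)

def P2(eta):   # mass of p(x|omega_2) in Z1 = (eta, inf)
    return 1.0 - norm.cdf(eta, loc=mu2, scale=sigma2)

# Equalizer rule: the threshold where the two error probabilities balance.
eta_mm = brentq(lambda eta: P2(eta) - P1(eta), -10.0, 10.0)
print(f"minimax threshold = {eta_mm:.4f}, P1 = P2 = {P1(eta_mm):.4f}")
\end{verbatim}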
It follows that the integral equation for Bayes' minimax risk $\mathfrak{R}_{mm}\left( Z\right) $ is given by
\begin{align}
\mathfrak{R}_{mm}\left( Z\right) & =\int\nolimits_{Z_{1}}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}\label{Minimax Criterion}\\
& =\int\nolimits_{Z_{2}}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x}\text{,}\nonumber
\end{align}
where the conditional probability of making an error $P_{2}$ for class $\omega_{2}$ is equal to the conditional probability of making an error $P_{1}$ for class $\omega_{1}$:
\[
\int\nolimits_{Z_{1}}p\left( \mathbf{x}|\omega_{2}\right) d\mathbf{x}=\int\nolimits_{Z_{2}}p\left( \mathbf{x}|\omega_{1}\right) d\mathbf{x}\text{.}
\]
Accordingly, Bayes' minimax risk $\mathfrak{R}_{mm}\left( Z\right) $ involves solving the integral equation in Eq. (\ref{Minimax Criterion}), where the conditional probabilities of decision error $P\left( D_{1}|\omega_{2}\right) $ and $P\left( D_{2}|\omega_{1}\right) $ for class $\omega_{2}$ and class $\omega_{1}$ are symmetrically balanced with each other
\[
P\left( D_{1}|\omega_{2}\right) \equiv P\left( D_{2}|\omega_{1}\right)
\]
about a decision threshold $\eta$.

\subsection{Bayes' Decision Rules for Gaussian Data}

For Gaussian data, Bayes' decision rule and decision boundary are defined by the likelihood ratio test
\begin{align}
\Lambda\left( \mathbf{x}\right) & =\frac{\left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\exp\left\{ -\frac{1}{2}\left( \mathbf{x}-\boldsymbol{\mu}_{1}\right) ^{T}\mathbf{\Sigma}_{1}^{-1}\left( \mathbf{x}-\boldsymbol{\mu}_{1}\right) \right\} }{\left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\exp\left\{ -\frac{1}{2}\left( \mathbf{x}-\boldsymbol{\mu}_{2}\right) ^{T}\mathbf{\Sigma}_{2}^{-1}\left( \mathbf{x}-\boldsymbol{\mu}_{2}\right) \right\} }\label{Likelihood Ratio Gaussian Data}\\
& \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\frac{P\left( \omega_{2}\right) \left( C_{12}-C_{22}\right) }{P\left( \omega_{1}\right) \left( C_{21}-C_{11}\right) }\text{,}\nonumber
\end{align}
where $\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}_{2}$ are $d$-component mean vectors, $\mathbf{\Sigma}_{1}$ and $\mathbf{\Sigma}_{2}$ are $d$-by-$d$ covariance matrices, $\mathbf{\Sigma}^{-1}$ and $\left\vert \mathbf{\Sigma}\right\vert $ denote the inverse and determinant of a covariance matrix, and $\omega_{1}$ or $\omega_{2}$ is the true data category. The transform $\ln\left( \Lambda\left( \mathbf{x}\right) \right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\left( \eta\right) $ of the likelihood ratio test in Eq.
(\ref{Likelihood Ratio Gaussian Data})
\begin{align}
\ln\left( \Lambda\left( \mathbf{x}\right) \right) & =\mathbf{x}^{T}\left( \mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}\right) +\frac{1}{2}\mathbf{x}^{T}\left( \mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\mathbf{\Sigma}_{1}^{-1}\mathbf{x}\right) \label{Likelihood Ratio General Gaussian}\\
& +\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}\nonumber\\
& +\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) -\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\left( \eta\right) \text{,}\nonumber
\end{align}
where the decision threshold $\ln\left( \eta\right) $ is defined by the algebraic expression
\[
\ln\left( \eta\right) \triangleq\ln\left( P\left( \omega_{2}\right) \right) -\ln\left( P\left( \omega_{1}\right) \right) \text{,}
\]
defines the general form of the discriminant function for the general Gaussian, binary classification problem, where no costs ($C_{11}=C_{22}=0$) are associated with correct decisions, and unit costs $C_{12}=C_{21}=1$ are associated with incorrect decisions \citep{VanTrees1968,Duda2001}. The likelihood ratio test in Eq. (\ref{Likelihood Ratio General Gaussian}) generates decision boundaries $D\left( \mathbf{x}\right) $ that are determined by the vector equation
\begin{align}
D\left( \mathbf{x}\right) & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \label{General Gaussian Decision Boundary}\\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \nonumber\\
& =\ln\left( P\left( \omega_{2}\right) \right) -\ln\left( P\left( \omega_{1}\right) \right) \nonumber
\end{align}
and are characterized by the class of hyperquadric decision surfaces, which includes hyperplanes, pairs of hyperplanes, hyperspheres, hyperellipsoids, hyperparaboloids, and hyperhyperboloids \citep{VanTrees1968,Duda2001}. Let $\widehat{\Lambda}\left( \mathbf{x}\right) $ denote the transform $\ln\left( \Lambda\left( \mathbf{x}\right) \right) $ of the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ in Eq. (\ref{Likelihood Ratio Gaussian Data}). I will now use Eqs (\ref{Likelihood Ratio General Gaussian}) and (\ref{General Gaussian Decision Boundary}) to derive statistical equilibrium laws that are satisfied by classification systems.

\subsection{Equilibrium Laws for Classification Systems}

It can be argued that the decision threshold $\eta$ of a likelihood ratio test $\Lambda\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\eta$ for a binary classification system does not depend on the prior probabilities $P\left( \omega_{1}\right) $ and $P\left( \omega_{2}\right) $ and the costs $C_{11}$, $C_{21}$, $C_{22}$, and $C_{12}$. It can also be argued that the expected risk $\mathfrak{R}_{\mathfrak{\min}}$ for a binary classification system is minimized by letting $\eta=1$. Therefore, let $\ln\left( \eta\right) =0$ in Eq. (\ref{Likelihood Ratio General Gaussian}).
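Before proceeding, the following minimal sketch is a direct numerical transcription of Eq. (\ref{Likelihood Ratio General Gaussian}) with $\ln\left( \eta\right) =0$; the mean vectors and covariance matrices are illustrative assumptions, and the function simply evaluates the quadratic discriminant and reports the sign test.
\begin{verbatim}
# Minimal sketch: the log-likelihood-ratio test of Eq. (Likelihood Ratio
# General Gaussian) with ln(eta) = 0, for *assumed* Gaussian parameters.
import numpy as np

mu1 = np.array([1.0, 0.0])                 # assumed mean, class omega_1
mu2 = np.array([-1.0, 0.0])                # assumed mean, class omega_2
S1 = np.array([[1.0, 0.2], [0.2, 1.0]])    # assumed covariance, omega_1
S2 = np.array([[2.0, 0.0], [0.0, 0.5]])    # assumed covariance, omega_2
S1inv, S2inv = np.linalg.inv(S1), np.linalg.inv(S2)

def lambda_hat(x):
    # x^T(S1^-1 mu1 - S2^-1 mu2) + (1/2) x^T(S2^-1 - S1^-1) x
    #   + (1/2) mu2^T S2^-1 mu2 - (1/2) mu1^T S1^-1 mu1
    #   + (1/2) ln|S2|^(1/2) - (1/2) ln|S1|^(1/2)
    lin = x @ (S1inv @ mu1 - S2inv @ mu2)
    quad = 0.5 * x @ ((S2inv - S1inv) @ x)
    const = (0.5 * mu2 @ S2inv @ mu2 - 0.5 * mu1 @ S1inv @ mu1
             + 0.25 * np.log(np.linalg.det(S2) / np.linalg.det(S1)))
    return lin + quad + const

x = np.array([0.5, -0.3])                  # an arbitrary test point
print("omega_1" if lambda_hat(x) > 0 else "omega_2")
\end{verbatim}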
It follows that the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generates decision boundaries $D\left( \mathbf{x}\right) $ that satisfy the vector equation
\begin{align}
D\left( \mathbf{x}\right) & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \label{Vector Equation Binary Decision Boundary}\\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \nonumber\\
& =0\text{,}\nonumber
\end{align}
where the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ satisfies the vector expression
\begin{align*}
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) & =\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& =p\left( \mathbf{x}|\omega_{1}\right) \text{,}
\end{align*}
and the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$ satisfies the vector expression
\begin{align*}
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) & =\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \\
& =p\left( \mathbf{x}|\omega_{2}\right) \text{.}
\end{align*}
Therefore, the class-conditional likelihood ratios $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ and $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ satisfy the vector equation
\begin{equation}
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,} \label{Equilibrium Equation for Class Conditional Densities}
\end{equation}
such that the probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ for class $\omega_{1}$ and class $\omega_{2}$ are balanced with each other:
\[
p\left( \mathbf{x}|\omega_{1}\right) =p\left( \mathbf{x}|\omega_{2}\right) \text{,}
\]
and the class-conditional likelihood ratios $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ and $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ satisfy the statistical equilibrium equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \text{.}
\]
Accordingly, the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$
\begin{align}
\widehat{\Lambda}\left( \mathbf{x}\right) & =\mathbf{x}^{T}\left( \mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}\right) \label{General Gaussian Equalizer Rule}\\
& +\frac{1}{2}\mathbf{x}^{T}\left( \mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\mathbf{\Sigma}_{1}^{-1}\mathbf{x}\right) \nonumber\\
& +\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}\nonumber\\
& +\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) -\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\nonumber
\end{align}
generates decision boundaries $D\left( \mathbf{x}\right) $
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right) & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \\
& =0
\end{align*}
that determine decision regions $Z_{1}$ and $Z_{2}$ for which \emph{the likelihood ratio} $\widehat{\Lambda}\left( \mathbf{x}\right) $ \emph{is in statistical equilibrium}:
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
and the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ over the decision space $Z$
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}
\]
is determined by the \emph{total allowed probability of error} that is given by the $P_{2}$ integral for the decision error $P\left( D_{1}|\omega_{2}\right) $:
\[
P_{2}=\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}=P\left( D_{1}|\omega_{2}\right) \text{,}
\]
over the $Z_{1}$ decision region, and the \emph{total allowed probability of error} that is given by the $P_{1}$ integral for the decision error $P\left( D_{2}|\omega_{1}\right) $:
\[
P_{1}=\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}=P\left( D_{2}|\omega_{1}\right) \text{,}
\]
over the $Z_{2}$ decision region, where the $Z_{1}$ and $Z_{2}$ decision regions involve either regions of data distribution overlap or tail regions of data distributions. Returning to Eq.
(\ref{Equilibrium Equation for Class Conditional Densities}), the vector equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
indicates that the total allowed probabilities of decision error $P\left( D_{1}|\omega_{2}\right) $ and $P\left( D_{2}|\omega_{1}\right) $ for a classification system involve \emph{opposing forces} that depend on the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and the corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$. This indicates that classification systems $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that are determined by the likelihood ratio test in Eq. (\ref{General Gaussian Equalizer Rule}) are in \emph{statistical equilibrium}. I will now motivate the concept of forces associated with binary classification systems in terms of forces that are associated with positions and potential locations of feature vectors. By way of motivation, I will define feature vectors in terms of samples of electromagnetic forces.

\subsection{Sampled Sources of Electromagnetic Forces}

Pattern vectors or feature vectors are generated by a wide variety of physical sources or mechanisms. Generally speaking, pattern vectors are generated by \emph{sampled sources} of electromagnetic radiation, which include radio waves, infrared radiation, visible light, ultraviolet radiation, X-rays, and gamma rays. For example, hydrophones and microphones are acoustic mechanisms that convert sound waves into electrical signals, whereas seismometers measure motion of the ground. Radar systems use radio waves to detect objects such as aircraft, ships, guided missiles, weather formations, and terrain. Lidar systems use light waves in the form of pulsed lasers to make high-resolution maps of the Earth. Multispectral or hyperspectral sensors use infrared and ultraviolet radiation to collect and process information to find objects, identify materials, and detect processes. Accordingly, feature vectors can be defined as samples of electromagnetic forces, where any given sample of an electromagnetic force is a pattern vector that has a magnitude and a direction. Pattern vectors may also be generated by responses to electromagnetic radiation. For example, electrophoresis is a method that sorts proteins according to their response to an electric field: electrophoresis may be used to determine DNA sequence genotypes, or genotypes that are based on the length of specific DNA fragments. Electromagnetic radiation sources are characterized in terms of energy, wavelength, or frequency, where any given source of electromagnetic radiation is determined by electromagnetic forces. Generally speaking, a force is anything that can change an object's speed or direction of motion. Therefore, let a feature vector be the locus of a directed, straight line segment that starts at the origin and terminates at the endpoint of the feature vector.
Accordingly, the endpoint of a feature vector is due to an electromagnetic force that changed an object's speed and direction of motion, where the magnitude of the feature vector represents the distance the object traveled from the origin. So, we can think about forces associated with positions and potential locations of endpoints of feature vectors, where the forces involve sampled sources of electromagnetic radiation: feature vectors that have magnitudes and directions. I will now derive a system of fundamental equations of binary classification for a classification system in statistical equilibrium.

\subsection{Fundamental Equations of Binary Classification}

Take any given collection of data that is drawn from class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ that have unknown statistical distributions. Let the decision space $Z$ and the corresponding $Z_{1}$ and $Z_{2}$ decision regions be determined by either overlapping or non-overlapping data distributions, as depicted in Figs $\ref{Bayes' Error and Decision Regions Overlapping Data}$ and $\ref{Bayes' Error and Decision Regions Non-overlapping Data}$. For overlapping data distributions, the $Z_{1}$ and $Z_{2}$ decision regions involve pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ \emph{and} $p\left( \mathbf{x}|\omega_{2}\right) $, where pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $ and are located in region $Z_{1}$ contribute to the $P_{2}$ decision error and the average risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\right) $ in the decision space $Z$, and pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and are located in region $Z_{2}$ contribute to the $P_{1}$ decision error and the average risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\right) $ in the decision space $Z$. For non-overlapping data distributions, the $Z_{1}$ and $Z_{2}$ decision regions involve pattern vectors $\mathbf{x}$ that are generated according to either $p\left( \mathbf{x}|\omega_{1}\right) $ \emph{or} $p\left( \mathbf{x}|\omega_{2}\right) $, so that the average risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\right) $ in the decision space $Z$ is zero.
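The distinction between the two cases can be checked empirically. The following minimal sketch is a hypothetical Monte Carlo experiment that draws samples from two assumed Gaussian pairs, one overlapping and one effectively non-overlapping, and estimates the error mass that falls on the wrong side of an assumed threshold; all parameters are illustrative.
\begin{verbatim}
# Minimal sketch: empirical error mass for overlapping vs. effectively
# non-overlapping class-conditional densities (*assumed* 1-D Gaussians).
import numpy as np

rng = np.random.default_rng(0)
n, eta = 100_000, 0.0    # assumed sample size and threshold;
                         # Z1 = (eta, inf), Z2 = (-inf, eta)

def error_rates(mu1, mu2, sigma=1.0):
    x1 = rng.normal(mu1, sigma, n)   # samples from p(x|omega_1)
    x2 = rng.normal(mu2, sigma, n)   # samples from p(x|omega_2)
    P1 = np.mean(x1 < eta)           # omega_1 samples landing in Z2
    P2 = np.mean(x2 > eta)           # omega_2 samples landing in Z1
    return P1, P2

print(error_rates(mu1=1.0, mu2=-1.0))   # overlapping: both errors > 0
print(error_rates(mu1=8.0, mu2=-8.0))   # non-overlapping: errors ~ 0
\end{verbatim}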
Therefore, let the $P_{2}$ integral for the probability of decision error $P\left( D_{1}|\omega_{2}\right) $ be given by the integral
\begin{align}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\label{Total Risk of Class Two}\\
& =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \nonumber\\
& =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
over the $Z_{1}$ and $Z_{2}$ decision regions, where the integral $\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}$ over the $Z_{2}$ decision region, which involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $ and are located in region $Z_{2}$, and is denoted by $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $, involves forces that \emph{oppose} forces associated with the average risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\right) $ in the decision space $Z$, because the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $ and are located in region $Z_{1}$. Thus, Eq. (\ref{Total Risk of Class Two}) involves opposing forces for pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $, where the opposing forces are associated with risks and counter risks that depend on positions and potential locations of pattern vectors $\mathbf{x}$ in the $Z_{1}$ and $Z_{2}$ decision regions. Therefore, Eq. (\ref{Total Risk of Class Two}) also represents the \emph{total allowed eigenenergy} of a classification system for class $\omega_{2}$, where the total allowed eigenenergy is the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
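Numerically, the decomposition in Eq. (\ref{Total Risk of Class Two}) expresses the additivity of the integral over $Z=Z_{1}\cup Z_{2}$: the class-$\omega_{2}$ risk in $Z_{1}$ and the class-$\omega_{2}$ counter risk in $Z_{2}$ sum to the total integral over $Z$. The following minimal sketch checks this for an assumed equal-covariance, one-dimensional Gaussian pair, using the log-likelihood ratio as the decision statistic; all parameters are illustrative.
\begin{verbatim}
# Minimal sketch: checking Eq. (Total Risk of Class Two) for *assumed*
# equal-covariance 1-D Gaussians, with Z1 = {llr > 0} and Z2 = {llr < 0}.
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, sigma, n = 1.0, -1.0, 1.0, 100_000   # assumed parameters

x2 = rng.normal(mu2, sigma, n)   # samples from p(x|omega_2)
llr = (mu1 - mu2) * x2 / sigma**2 + (mu2**2 - mu1**2) / (2 * sigma**2)

risk_Z1 = np.mean(llr > 0)       # class-2 error mass in Z1 (risk)
counter_Z2 = np.mean(llr < 0)    # class-2 mass in Z2 (counter risk)
print(risk_Z1, counter_Z2, risk_Z1 + counter_Z2)   # sum equals 1 over Z
\end{verbatim}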
Accordingly, let $E_{2}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ denote the total allowed eigenenergy of a classification system for class $\omega_{2}$:
\[
E_{2}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) =E_{2}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +E_{2}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\]
where $E_{2}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and $E_{2}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ are eigenenergies associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions for class $\omega_{2}$ that depend on positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $. It follows that the integral in Eq. (\ref{Total Risk of Class Two}) also represents the total allowed eigenenergy $E_{2}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ of a classification system for class $\omega_{2}$:
\begin{align}
E_{2}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\label{Energy of Class Two}\\
& =E_{2}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +E_{2}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \nonumber\\
& =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
which is given by the area under the class-conditional likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ over the decision space $Z$, where the total allowed eigenenergy is the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
Likewise, let the $P_{1}$ integral for the probability of decision error $P\left( D_{2}|\omega_{1}\right) $ be given by the integral
\begin{align}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) & =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\label{Total Risk of Class One}\\
& =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \nonumber\\
& =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
over the $Z_{1}$ and $Z_{2}$ decision regions, where the integral $\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}$ over the $Z_{1}$ decision region, which involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and are located in region $Z_{1}$, and is denoted by $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $, involves forces that \emph{oppose} forces associated with the average risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\right) $ in the decision space $Z$, because the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ involves pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and are located in region $Z_{2}$. Thus, Eq. (\ref{Total Risk of Class One}) involves opposing forces for pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, where the opposing forces are associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$, which depend on positions and potential locations of pattern vectors $\mathbf{x}$ in the $Z_{2}$ and $Z_{1}$ decision regions. Therefore, Eq. (\ref{Total Risk of Class One}) also represents the \emph{total allowed eigenenergy} of a classification system for class $\omega_{1}$, where the total allowed eigenenergy is the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$.
Accordingly, let $E_{1}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ denote the total allowed eigenenergy of a classification system for class $\omega_{1}$:
\[
E_{1}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) =E_{1}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +E_{1}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{,}
\]
where $E_{1}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $E_{1}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ are eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions for class $\omega_{1}$ that depend on positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $. It follows that the integral in Eq. (\ref{Total Risk of Class One}) also represents the total allowed eigenenergy $E_{1}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ of a classification system for class $\omega_{1}$:
\begin{align}
E_{1}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) & =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\label{Energy of Class One}\\
& =E_{1}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +E_{1}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \nonumber\\
& =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
which is given by the area under the class-conditional likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ over the decision space $Z$, where the total allowed eigenenergy is the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$. Using Eqs (\ref{Total Risk of Class Two}) and (\ref{Total Risk of Class One}), it follows that the expected risk $\mathfrak{R}_{\mathfrak{\min}}$ of a binary classification system is given by the integral
\begin{align}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\label{Total Allowed Risk of Classification System}\\
& =\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}\nonumber
\end{align}
over a finite decision space $Z$, where the expected risk $\mathfrak{R}_{\mathfrak{\min}}$ of a classification system is the probability of misclassification or decision error.
Using Eqs (\ref{Energy of Class Two}) and (\ref{Energy of Class One}), it follows that the total allowed eigenenergy of a classification system is given by the corresponding integral
\begin{align}
E\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\label{Total Allowed Energy of Classification System}\\
& =E_{1}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +E_{2}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}\nonumber
\end{align}
over a finite decision space $Z$, where the total allowed eigenenergy of a classification system is the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ and the corresponding decision boundary $D\left( \mathbf{x}\right) $: $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$. Figure $\ref{Regions of Risks and Counter Risks}$ provides an overview of the regions of risks and counter risks in the $Z_{1}$ and $Z_{2}$ decision regions, where the counter risks for class $\omega_{1}$ and class $\omega_{2}$,
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{ \ and \ }\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\]
are \emph{opposing forces} for the risks or decision errors for class $\omega_{1}$ and class $\omega_{2}$,
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{ \ and \ }\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{.}
\]
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure5.png}}
\caption{Counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \protect\widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \protect\widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions are \emph{opposing forces} for the risks or decision errors $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \protect\widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \protect\widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions.}
\label{Regions of Risks and Counter Risks}
\end{figure}
I will now show that classification systems seek a point of statistical equilibrium where the opposing forces and influences of a system are balanced with each other, and the eigenenergy and the corresponding expected risk of the classification system are minimized. As a result, I will devise a system of fundamental equations of binary classification that must be satisfied by classification systems in statistical equilibrium.
\subsection{Classification Systems in Statistical Equilibrium}

Take any given decision boundary $D\left( \mathbf{x}\right) $
\begin{align*}
D\left( \mathbf{x}\right) & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \\
& =0\text{,}
\end{align*}
that is generated according to the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right) & =\mathbf{x}^{T}\left( \mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}\right) \\
& +\frac{1}{2}\mathbf{x}^{T}\left( \mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\mathbf{\Sigma}_{1}^{-1}\mathbf{x}\right) \\
& +\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}\\
& +\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) -\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\end{align*}
where $\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}_{2}$ are $d$-component mean vectors, $\mathbf{\Sigma}_{1}$ and $\mathbf{\Sigma}_{2}$ are $d$-by-$d$ covariance matrices, $\mathbf{\Sigma}^{-1}$ and $\left\vert \mathbf{\Sigma}\right\vert $ denote the inverse and determinant of a covariance matrix, and $\omega_{1}$ or $\omega_{2}$ is the true data category. Given that the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ satisfy the vector equation
\begin{equation}
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,} \label{Vector Equation of Likelihood Ratio and Decision Boundary}
\end{equation}
where the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ are in statistical equilibrium
\begin{equation}
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,} \label{Equilibrium Equation of Likelihood Ratio and Decision Boundary}
\end{equation}
it follows that the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ satisfy the integral equation
\begin{equation}
\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}=\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,} \label{Integral Equation of Likelihood Ratio and Decision Boundary}
\end{equation}
over the decision space $Z$.
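As a numerical illustration of the equilibrium condition, the following minimal sketch estimates the two conditional error probabilities of the zero-threshold test for an assumed symmetric, equal-covariance Gaussian pair; for such a pair the zero-threshold test behaves as an equalizer rule, so $P_{1}$ and $P_{2}$ agree. All parameters are illustrative.
\begin{verbatim}
# Minimal sketch: for an *assumed* symmetric, equal-covariance Gaussian
# pair, the zero-threshold log-likelihood-ratio test equalizes the two
# conditional error probabilities (P1 = P2).
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2, sigma, n = 1.0, -1.0, 1.0, 200_000   # assumed parameters

def llr(x):   # log p(x|omega_1) - log p(x|omega_2) for the assumed pair
    return (mu1 - mu2) * x / sigma**2 + (mu2**2 - mu1**2) / (2 * sigma**2)

x1 = rng.normal(mu1, sigma, n)
x2 = rng.normal(mu2, sigma, n)
P1 = np.mean(llr(x1) < 0)   # omega_1 samples falling in Z2 = {llr < 0}
P2 = np.mean(llr(x2) > 0)   # omega_2 samples falling in Z1 = {llr > 0}
print(f"P1 = {P1:.4f}, P2 = {P2:.4f}")   # approximately equal
\end{verbatim}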
Therefore, the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ satisfy the fundamental integral equation of binary classification
\begin{align}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\label{Equalizer Rule}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
over the $Z_{1}$ and $Z_{2}$ decision regions, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system are minimized, and the classification system is in statistical equilibrium. Given Eqs (\ref{Total Allowed Risk of Classification System}) and (\ref{Equalizer Rule}), it follows that the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, must be \emph{equal} to the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $. Given Eqs (\ref{Total Allowed Energy of Classification System}) and (\ref{Equalizer Rule}), it follows that the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$ must be \emph{equal} to the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$. I will now use Eq.
(\ref{Equalizer Rule}) to develop an integral equation of binary classification, where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are \emph{balanced} with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region. Given that the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ are opposing forces for the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $, let the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{1}$ and class $\omega_{2}$ in the $Z_{1}$ and $Z_{2}$ decision regions be positive forces:
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) >0\text{ \ and \ }\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) >0\text{,}
\]
and let the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{2}$ and class $\omega_{1}$ in the $Z_{1}$ and $Z_{2}$ decision regions be negative forces:
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) <0\text{ \ and \ }\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) <0\text{.}
\]
Given these assumptions and using Eqs (\ref{Total Risk of Class Two}) - (\ref{Total Allowed Energy of Classification System}) and Eq.
(\ref{Equalizer Rule}), it follows that the \emph{expected risk} $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\
& +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\end{align*}
and the corresponding \emph{eigenenergy} $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $
\begin{align*}
E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) & =E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\
& +E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\end{align*}
of classification systems are minimized in the following manner:
\begin{align}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) & :\;\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\label{Balancing of Bayes' Risks and Counteracting Risks}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}\nonumber
\end{align}
where positive forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and negative forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are \emph{balanced} with positive forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and negative forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) : & \;\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{,}
\end{align*}
and the \emph{eigenenergies} associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are \emph{balanced} with the \emph{eigenenergies} associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) : & \;E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{.}
\end{align*}
Therefore, it is concluded that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the $Z_{1}$ and $Z_{2}$ decision regions, where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium. Figure $\ref{Balancing Eigenenergies Risks and Counter Risks}$ illustrates that the equilibrium point of the integral equation $f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) $ in Eq.
(\ref{Equalizer Rule}) determines a statistical fulcrum, which is located at a center of total allowed eigenenergy and a corresponding center of expected risk.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure6.png}}
\caption{The equilibrium point of a binary classification system determines a statistical fulcrum which is located at $\left( a\right) $ a center of total allowed eigenenergy $E_{\min}\left( Z|\protect\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and $\left( b\right) $ a corresponding center of expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\protect\widehat{\Lambda}\left( \mathbf{x}\right) \right) $.}
\label{Balancing Eigenenergies Risks and Counter Risks}
\end{figure}
Equations (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}) are the fundamental equations of binary classification for a classification system in statistical equilibrium. Because Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}) is derived from Eq. (\ref{Equalizer Rule}), I will refer to Eq. (\ref{Equalizer Rule}) as the fundamental integral equation of binary classification for a classification system in statistical equilibrium. Thus, it is concluded that classification systems seek a point of statistical equilibrium where the opposing forces and influences of any given system are balanced with each other, and the eigenenergy and the expected risk of the classification system are minimized. I will now consider likelihood ratio tests and decision boundaries within the mathematical framework of a \emph{geometric locus}. I will show that the \emph{point of statistical equilibrium} sought by classification systems is \emph{a locus of points} that jointly satisfies Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}).

\subsection{Geometric Locus}

The general idea of a curve or surface which at any point of it exhibits some uniform property is expressed in geometry by the term \emph{locus} \citep{Whitehead1911}. Generally speaking, a geometric locus is a curve or surface formed by points, \emph{all} of which \emph{possess} some \emph{uniform property}. Any given point on a geometric locus possesses a property which is common to all points on the locus and to \emph{no other points}. Any given geometric locus is determined by either an algebraic or a vector equation, where the locus of an algebraic or a vector equation is the location of all those points whose coordinates are solutions of the equation. Standard locus methods involve \emph{algebraic} equations of elliptic, parabolic, hyperbolic, circular, and linear curves \citep{Nichols1893,Tanner1898,Whitehead1911,Eisenhart1939}. I will now use the notion of a geometric locus to show that the \emph{equilibrium point} of a classification system involves a \emph{locus} of \emph{points} $\mathbf{x}$ that \emph{jointly} satisfy the likelihood ratio test in Eq. (\ref{General Gaussian Equalizer Rule}), the decision boundary in Eq. (\ref{Vector Equation Binary Decision Boundary}), and the fundamental equations of binary classification for a classification system in statistical equilibrium in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}).

\subsection{Loci of Likelihood Ratios and Decision Boundaries}

Any given decision boundary $D\left( \mathbf{x}\right) $ in Eq.
(\ref{Vector Equation Binary Decision Boundary}) that is determined by the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{General Gaussian Equalizer Rule}), where the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\left( \mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \right) \\
& -\left( \mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \right)
\end{align*}
and the decision boundary $D\left( \mathbf{x}\right) $ satisfy the vector equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,}
\]
the statistical equilibrium equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
the corresponding integral equation
\[
\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}=\int_{Z}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\]
and the fundamental integral equation of binary classification
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
for a classification system in statistical equilibrium
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\;\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}
\end{align*}
is the \emph{locus}, i.e., the position or location, of all of the \emph{endpoints} of the vectors $\mathbf{x}$ whose \emph{coordinates} are \emph{solutions} of the vector \emph{equation}
\begin{align*}
D\left( \mathbf{x}\right)  & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \\
& =0\text{.}
\end{align*}
It follows that any given point $\mathbf{x}$ that satisfies the above vector equation must also satisfy the vector equation in Eq. (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}), the statistical equilibrium and corresponding integral equations in Eqs (\ref{Equilibrium Equation of Likelihood Ratio and Decision Boundary}) and (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}), and the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) together with the corresponding balancing equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}), where endpoints of vectors $\mathbf{x}$ whose coordinates are solutions of Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}) are located in regions that are either $\left( 1\right) $ associated with decision errors due to overlapping distributions or $\left( 2\right) $ associated with no decision errors due to non-overlapping distributions.

This indicates that the \emph{equilibrium point} of a classification system involves \emph{a locus of points} that are determined by a nonlinear system of locus equations. Given this assumption, I\ will now define a dual locus of binary classifiers.

\subsection{Dual Locus of Binary Classifiers}

Let $\widehat{\Lambda}\left( \mathbf{x}\right) $ denote the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ for the binary classification system in Eq. (\ref{General Gaussian Equalizer Rule}). Using the general idea of a geometric locus, it follows that the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq.
(\ref{General Gaussian Equalizer Rule}) determines the \emph{dual locus} of a decision boundary $D\left( \mathbf{x}\right) $ and a likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $, where any given point $\mathbf{x}$ that satisfies the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ \emph{and} the decision boundary $D\left( \mathbf{x}\right) $ \emph{also} satisfies the fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
where
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right)
\]
and
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\;\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{.}
\end{align*}
I\ will show that Eq. (\ref{General Gaussian Equalizer Rule}) is the basis of a data-driven \emph{dual locus of binary classifiers} for Gaussian data
\[
\widehat{\Lambda}\left( \mathbf{x}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}\text{,}
\]
where $\mathbf{x}_{1i}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2i}\sim p\left( \mathbf{x}|\omega_{2}\right) $, $k_{\mathbf{x}_{1_{i}}}$ and $k_{\mathbf{x}_{2_{i}}}$ are reproducing kernels for respective data points $\mathbf{x}_{1_{i}}$ and $\mathbf{x}_{2_{i}}$, and $\psi_{1i}$ and $\psi_{2i}$ are scale factors that provide measures of likelihood for respective data points $\mathbf{x}_{1i}$ and $\mathbf{x}_{2i}$ which lie in either overlapping regions or tail regions of data distributions. The coefficients $\left\{ \psi_{1i}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2i}\right\} _{i=1}^{l_{2}}$ and the locus of data points ${\textstyle\sum\nolimits_{i=1}^{l_{1}}}\psi_{1i}\mathbf{x}_{1i}-{\textstyle\sum\nolimits_{i=1}^{l_{2}}}\psi_{2i}\mathbf{x}_{2i}$ are determined by finding the equilibrium point of Eq. (\ref{Equalizer Rule}).

Therefore, take any given likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ and decision boundary $D\left( \mathbf{x}\right) $ that are determined by the likelihood ratio test in Eq. (\ref{General Gaussian Equalizer Rule}).
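By way of a concrete illustration, such a likelihood ratio test can be evaluated numerically. The following minimal sketch (in Python) implements the decision function $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ written out above for two Gaussian distributions; the means, covariance matrices, and test point are illustrative assumptions, not values taken from my simulation studies.
\begin{verbatim}
# Minimal sketch: evaluate the Gaussian likelihood ratio decision
# function; points with lam_hat(x) = 0 trace the decision boundary D(x).
import numpy as np

mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # assumed means
S1 = np.array([[1.0, 0.0], [0.0, 1.0]])                # assumed covariances
S2 = np.array([[2.0, 0.3], [0.3, 1.5]])
S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)

def lam_hat(x):
    """Positive -> decide omega_1; negative -> decide omega_2."""
    g1 = (x @ S1i @ mu1 - 0.5 * x @ S1i @ x
          - 0.5 * mu1 @ S1i @ mu1
          - 0.5 * np.log(np.linalg.det(S1) ** 0.5))
    g2 = (x @ S2i @ mu2 - 0.5 * x @ S2i @ x
          - 0.5 * mu2 @ S2i @ mu2
          - 0.5 * np.log(np.linalg.det(S2) ** 0.5))
    return g1 - g2

x = np.array([0.2, 0.4])
print("omega_1" if lam_hat(x) > 0 else "omega_2", lam_hat(x))
\end{verbatim}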
It follows that all of the points $\mathbf{x}$ that satisfy the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ \emph{must} possess a geometric and statistical \emph{property} which is \emph{common} to all points $\mathbf{x}$ that jointly satisfy the fundamental equations of binary classification in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}). I\ will identify this essential property later on.

\subsection{Learning Dual Loci of Binary Classifiers}

I\ have conducted simulation studies which demonstrate that properly regularized, linear kernel SVMs learn optimal linear decision boundaries for \emph{any} two classes of Gaussian data, where $\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$ \citep[see ][]{Reeves2009,Reeves2011}, including \emph{completely overlapping} data distributions \citep[see ][]{Reeves2015resolving}. I\ have also conducted simulation studies which demonstrate that properly regularized, second-order, polynomial kernel SVMs learn optimal decision boundaries for data drawn from \emph{any} two Gaussian distributions, including completely overlapping data distributions \citep{Reeves2015resolving}. In addition, given an effective value for the hyperparameter $\gamma$ of a Gaussian reproducing kernel $\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $, I\ have conducted simulation studies which demonstrate that properly regularized, Gaussian kernel SVMs learn optimal decision boundaries for data drawn from \emph{any} two Gaussian distributions, including completely overlapping data distributions.

Suppose that the expression for the likelihood ratio test in Eq. (\ref{General Gaussian Equalizer Rule}) is not considered within the context of a binary classification system in statistical equilibrium. Then my findings are both unexpected and surprising. Using Eq. (\ref{General Gaussian Equalizer Rule}), for any given pair of homogeneous distributions, where $\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$ and $\boldsymbol{\mu}_{1}=\boldsymbol{\mu}_{2}=\boldsymbol{\mu}$, it follows that the likelihood ratio test
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =\mathbf{x}^{T}\mathbf{\Sigma}^{-1}\left( \boldsymbol{\mu}-\boldsymbol{\mu}\right) +\frac{1}{2}\mathbf{x}^{T}\left( \mathbf{\Sigma}^{-1}\mathbf{x}-\mathbf{\Sigma}^{-1}\mathbf{x}\right) \\
& +\frac{1}{2}\boldsymbol{\mu}^{T}\mathbf{\Sigma}^{-1}\boldsymbol{\mu}-\frac{1}{2}\boldsymbol{\mu}^{T}\mathbf{\Sigma}^{-1}\boldsymbol{\mu}\\
& +\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}\right\vert ^{1/2}\right) -\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}\right\vert ^{1/2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\\
& =0\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\end{align*}
reduces to the constant $0$ and \emph{is undefined}. Therefore, the decision rule \emph{and} the decision boundary \emph{are also undefined}. However, the decision rule and the decision boundary \emph{are both defined} in terms of an equilibrium point of a binary classification system for which \emph{a locus of points}, which is determined by a nonlinear system of locus equations, satisfies a statistical equilibrium equation.
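The kind of simulation study described above can be sketched in outline with standard tools. The following sketch, which assumes the scikit-learn library is available, fits a properly regularized, Gaussian kernel SVM to two overlapping classes of Gaussian data; the sample sizes, distribution parameters, and hyperparameter values ($\gamma$ and the regularization parameter $C$) are illustrative assumptions rather than the settings used in my studies.
\begin{verbatim}
# Sketch of a simulation study: fit a Gaussian (RBF) kernel SVM to two
# overlapping classes of Gaussian data (scikit-learn assumed available).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1 = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], 200)        # omega_1
X2 = rng.multivariate_normal([1, 1], [[2, 0.3], [0.3, 1.5]], 200)  # omega_2
X = np.vstack([X1, X2])
y = np.hstack([np.ones(200), -np.ones(200)])

# kernel exp(-gamma * ||x - s||^2); C controls the regularization
clf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
\end{verbatim}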
I\ have conducted simulation studies which show that an optimal decision function and decision boundary \emph{are both determined} by a system of data-driven, locus equations, where \emph{data points} satisfy a \emph{data-driven, dual locus} of a likelihood ratio \emph{and} a decision boundary. The system of data-driven, locus equations is based on the support-vector network algorithm developed by \citet*{Boser1992} and \citet*{Cortes1995}.

\subsubsection{Data-Driven Dual Locus of Optimal Linear Classifiers}

In this paper, I\ will devise data-driven, locus equations of likelihood ratio tests and linear decision boundaries that satisfy data-driven versions of the fundamental equations of binary classification in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}). The data-driven, locus equations are based on variants of the inequality constrained optimization problem for linear, polynomial, and Gaussian kernel SVMs. I will formulate a system of data-driven, locus equations that determines an optimal likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and a linear decision boundary $D\left( \mathbf{x}\right) $ for any two classes of data for which $\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$, where a \emph{data-driven, dual locus of points} determines the \emph{locus} of the decision boundary in Eq. (\ref{Vector Equation Binary Decision Boundary}) \emph{and} the likelihood ratio in Eq. (\ref{General Gaussian Equalizer Rule}). The data-driven, dual locus of points satisfies a data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}). Thereby, the decision rule determines the locus of a linear decision boundary for \emph{any} two classes of pattern vectors.

\subsubsection{Data-Driven Dual Locus of Optimal Quadratic Classifiers}

In this paper, I\ will also devise data-driven, locus equations of optimal likelihood ratio tests and quadratic decision boundaries that satisfy data-driven versions of the fundamental equations of binary classification in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) - (\ref{Balancing of Bayes' Risks and Counteracting Risks}). The data-driven, locus equations are based on variants of the inequality constrained optimization problem for polynomial and Gaussian kernel SVMs. I will formulate a system of data-driven, locus equations that determines an optimal likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and a quadratic decision boundary for any two classes of data for which $\mathbf{\Sigma}_{1}\neq\mathbf{\Sigma}_{2}$, where a \emph{data-driven, dual locus of points} determines the locus of the decision boundary in Eq. (\ref{Vector Equation Binary Decision Boundary}) and the likelihood ratio in Eq. (\ref{General Gaussian Equalizer Rule}). The data-driven, dual locus of points satisfies a data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}). Thereby, the decision rule determines the locus of a quadratic decision boundary for any two classes of pattern vectors that have dissimilar covariance matrices. Furthermore, the decision rule provides an estimate of the locus of a linear decision boundary for any two classes of pattern vectors that have similar covariance matrices and different mean vectors.
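For a fitted SVM, quantities analogous to the scale factors $\psi_{1i}$ and $\psi_{2i}$ and to the locus of data points $\sum\nolimits_{i}\psi_{1i}\mathbf{x}_{1i}-\sum\nolimits_{i}\psi_{2i}\mathbf{x}_{2i}$ can be read off directly. The sketch below does so for a linear kernel SVM in scikit-learn, whose \texttt{dual\_coef\_} attribute stores the signed multipliers of the support vectors; reading those multipliers as the scale factors $\psi$ is an interpretation made here for illustration only, and the data are assumed.
\begin{verbatim}
# Sketch: read off data-driven scale factors and the signed sum of
# support vectors from a linear kernel SVM (scikit-learn assumed).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),     # class omega_1
               rng.normal(1.5, 1.0, (100, 2))])    # class omega_2
y = np.hstack([np.ones(100), -np.ones(100)])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf.support_vectors_        # the data points x_{1i} and x_{2i}
psi = clf.dual_coef_.ravel()     # signed multipliers of the support vectors
locus = psi @ sv                 # signed sum over both classes
print(locus)                     # matches clf.coef_ for a linear kernel
\end{verbatim}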
By way of motivation, graphical illustrations of decision regions in optimal decision spaces are presented next.

\subsection{Decision Regions in Optimal Decision Spaces}

Take any two sets of data that have been drawn from any two Gaussian distributions, where both data sets are completely characterized by the mean vectors and the covariance matrices of the Gaussian distributions. I\ will demonstrate that decision regions in optimal decision spaces satisfy symmetrical border constraints in relation to circular, parabolic, elliptical, hyperbolic, or linear decision boundaries. Thus, each decision region is determined by a \emph{decision border} in relation to a \emph{decision boundary}. Given that decision boundaries and borders are both determined by a data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}), it follows that the decision boundaries and decision borders for a given classification system involve similar types of curves or surfaces.

\subsubsection{Decision Regions for Dissimilar Covariance Matrices}

For data distributions that have dissimilar covariance matrices, I\ will demonstrate that decision regions satisfy border constraints in relation to circular, parabolic, elliptical, or hyperbolic decision boundaries. Thus, each decision region is determined by a nonlinear decision border \emph{in relation to} a nonlinear decision boundary.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.2206in]{Figure7.png}}
\caption{For overlapping data distributions that have dissimilar covariance matrices and common means, optimal decision functions divide feature spaces $Z$ into decision regions $Z_{1}$ and $Z_{2}$ that have minimum expected risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}\right) $.}
\label{Overlapping Data Same Mean One}
\end{figure}

\paragraph{Example One}

Figure $\ref{Overlapping Data Same Mean One}$a illustrates how the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{General Gaussian Equalizer Rule}) determines decision regions $Z_{1}$ and $Z_{2}$ for overlapping data distributions that have dissimilar covariance matrices and common means, where the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ for class $\omega_{1}$ are determined by the statistical balancing feat
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) \text{,}
\]
over the $Z_{1}$ and $Z_{2}$ decision regions. Because the means are equal, $\boldsymbol{\mu}_{1}=\boldsymbol{\mu}_{2}$, the decision border of region $Z_{2}$ \emph{satisfies} the decision boundary, so that the decision border of region $Z_{2}$ \emph{contains} region $Z_{2}$. So, if the decision boundary is an elliptical curve or surface, then the decision border of region $Z_{1}$ is also an elliptical curve or surface, so that region $Z_{1}$ is constrained by the elliptical decision border of region $Z_{1}$ and the elliptical decision boundary. For example, Fig.
$\ref{Overlapping Data Same Mean One}$b depicts an overview of a decision space that has been generated in MATLAB, where the elliptical decision border of region $Z_{1}$ is red, and the elliptical decision border of region $Z_{2}$ and the corresponding elliptical decision boundary is dark blue.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.2015in]{Figure8.png}}
\caption{Illustration of a binary classification system of symmetrically balanced hyperbolic curves that divides a feature space $Z$ into decision regions $Z_{1}$ and $Z_{2}$ that have minimum expected risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}\right) $.}
\label{Overlapping Data Same Mean Two}
\end{figure}

If the decision boundary is a hyperbolic curve or surface, it follows that the $Z_{1}$ and $Z_{2}$ decision regions are each constrained by two hyperbolic curves or surfaces that satisfy symmetrical boundary conditions in relation to the decision boundary. For example, Fig. $\ref{Overlapping Data Same Mean Two}$b depicts an overview of a decision space that has been generated in MATLAB, where the hyperbolic decision borders of region $Z_{1}$ are red, the hyperbolic decision borders of region $Z_{2}$ are aqua blue, and the hyperbolic curves of the decision boundary are dark blue.

\paragraph{Example Two}

Figure $\ref{Equalizer Rule for Overlapping Data Two}$a illustrates how the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{General Gaussian Equalizer Rule}) determines decision regions $Z_{1}$ and $Z_{2}$ for overlapping data distributions that have dissimilar covariance matrices and different means, where the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ for class $\omega_{1}$ are determined by a statistical balancing feat
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) \text{,}
\]
for which the risks in the decision regions are similar
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) \sim\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) \text{.}
\]
Accordingly, the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized when the counter risks are similar in the $Z_{1}$ and $Z_{2}$ decision regions
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) \sim\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) \text{.}
\]
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.0995in]{Figure9.png}}
\caption{Optimal likelihood ratio tests divide feature spaces $Z$ into decision regions $Z_{1}$ and $Z_{2}$ that have minimum expected risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}\right) $.}
\label{Equalizer Rule for Overlapping Data Two}
\end{figure}

For example, Fig.
$\ref{Equalizer Rule for Overlapping Data Two}$b depicts an overview of a decision space that has been generated in MATLAB, where the parabolic decision border of region $Z_{1}$ is red, the parabolic decision border of region $Z_{2}$ is aqua blue, and the parabolic curve of the decision boundary is dark blue.

If the counter risks in the decision regions are different, then the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{General Gaussian Equalizer Rule}) determines decision regions $Z_{1}$ and $Z_{2}$ that satisfy border constraints in relation to parabolic, elliptical, or hyperbolic decision boundaries such that the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ and the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) $ are effectively balanced with each other.

\subsubsection{Decision Regions for Similar Covariance Matrices}

For data distributions that have similar covariance matrices, the risks in the decision regions are equal. Figure $\ref{Equalizer Rule for Overlapping Data Similar Covariance}$a illustrates how the decision rule $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{General Gaussian Equalizer Rule}) determines decision regions for overlapping data drawn from Gaussian distributions that have similar covariance matrices, where the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ in the $Z_{1}$ and $Z_{2}$ decision regions are equal. Because the risks in the decision regions are equal
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) \text{,}
\]
it follows that the counter risks in the $Z_{1}$ and $Z_{2}$ decision regions are also equal
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) \text{.}
\]
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.1955in]{Figure10.png}}
\caption{For overlapping distributions that have similar covariance matrices, optimal likelihood ratio tests $\protect\widehat{\Lambda}\left( \mathbf{x}\right) \protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ divide feature spaces $Z$ into decision regions $Z_{1}$ and $Z_{2}$ that have equal and minimum expected risks: $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}\right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}\right) $.}
\label{Equalizer Rule for Overlapping Data Similar Covariance}
\end{figure}

Figure $\ref{Equalizer Rule for Overlapping Data Similar Covariance}$b depicts an overview of a decision space that has been generated in MATLAB, where the linear decision border of region $Z_{1}$ is red, the linear decision border of region $Z_{2}$ is aqua blue, and the linear curve of the decision boundary is dark blue. The linear decision borders exhibit bilateral symmetry with respect to the linear decision boundary.

In order to develop data-driven locus equations of optimal, statistical classification systems, I\ need to devise fundamental locus equations for linear and quadratic loci.
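Before doing so, note that the equal-risk property for similar covariance matrices is easy to check numerically. The following Monte Carlo sketch estimates the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ for two Gaussian densities with a common covariance matrix and equal priors; the distribution parameters and sample size are illustrative assumptions.
\begin{verbatim}
# Monte Carlo sketch: for a common covariance matrix and equal priors,
# the two error-rate estimates should agree (up to sampling noise).
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2, S = np.array([0.0, 0.0]), np.array([1.5, 0.0]), np.eye(2)
p1, p2 = multivariate_normal(mu1, S), multivariate_normal(mu2, S)

rng = np.random.default_rng(2)
x1 = rng.multivariate_normal(mu1, S, 100000)   # draws from omega_1
x2 = rng.multivariate_normal(mu2, S, 100000)   # draws from omega_2

# Decide omega_1 wherever p(x|omega_1) >= p(x|omega_2)
risk_Z2_w1 = np.mean(p1.logpdf(x1) < p2.logpdf(x1))    # omega_1 lost to Z_2
risk_Z1_w2 = np.mean(p1.logpdf(x2) >= p2.logpdf(x2))   # omega_2 lost to Z_1
print(risk_Z1_w2, risk_Z2_w1)    # approximately equal
\end{verbatim}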
By way of motivation, an overview of locus methods will be followed by a summary of geometric methods in Hilbert spaces.

\section{Locus Methods}

The graph of an equation is the locus (the \emph{place}) of all points whose coordinates are solutions of the equation. Any given point on a locus possesses a geometric property which is common to all points of the locus and no other points. For example, a circle is a locus of points $\left( x,y\right) $, all of which are at the same distance, the radius $r$, from a fixed point $\left( x_{0},y_{0}\right) $, the center. The algebraic equation of the locus of a circle in Cartesian coordinates is
\begin{equation}
\left( x-x_{0}\right) ^{2}+\left( y-y_{0}\right) ^{2}=r^{2}\text{.}
\label{Coordinate Equation of Circle}
\end{equation}
Only those coordinates $\left( x,y\right) $ that satisfy Eq. (\ref{Coordinate Equation of Circle}) contribute to the geometric locus of a specified circle \citep{Eisenhart1939}. Classic examples of geometric loci include circles, ellipses, hyperbolas, parabolas, lines, and points. Locus problems for all of the second-order curves have been widely studied in analytic (coordinate) geometry \citep{Nichols1893,Tanner1898,Eisenhart1939}.

Solving a locus problem requires finding the equation of a curve defined by a given property and drawing the graph or locus of a given equation. Methods for solving locus problems are based on two fundamental problems in analytic geometry, both of which involve the graph or locus of an equation. The identification of the geometric property of a locus of points is a central problem in coordinate geometry. The inverse problem finds the algebraic form of an equation whose solutions give the coordinates of all of the points on a locus which has been defined geometrically.

A geometric figure is any set of points which exhibit a uniform property. Accordingly, any point, line, line segment, angle, polygon, curve, region, plane, surface, solid, etc. is a geometric figure. Geometric figures are defined in two ways: $(1)$ as a figure with certain known properties and $(2)$ as the path of a point which moves under known conditions \citep{Nichols1893,Tanner1898,Eisenhart1939}.

Finding the algebraic form of an equation for a given geometric figure or locus is usually a difficult problem. Solving locus problems involves identifying algebraic and geometric constraints for a given locus of points. The algebraic form of an equation of a locus \emph{hinges on} both the \emph{geometric property} and the frame of reference (the \emph{coordinate system}) of the locus. Moreover, changing the positions of the coordinate axes of any given locus changes the algebraic form of the locus that references the axes and the coordinates of any point on the locus. The equation of a locus and the identification of the geometric property of the locus can be greatly simplified by changing the positions of the axes to which the locus of points is referenced \citep{Nichols1893,Tanner1898,Eisenhart1939}.

\subsection{Locus of a Straight Line}

The equations of a straight line in the coordinate plane have been widely studied in analytical and coordinate geometry. Standard forms of the equation of a linear locus are outlined below.

\subsubsection{Standard Equations of the First Degree}

The geometric locus of every equation of the first degree is a straight line.
The general equation of the first degree in two coordinate variables $x$ and $y$ has the form
\[
Ax+By+C=0\text{,}
\]
where $A$, $B$, $C$ are constants which may have any real values, subject to the restriction that $A$ and $B$ cannot both be zero. Only two geometric conditions are needed to determine the equation of a particular line: either the line must pass through two given points, or it must pass through a given point with a given slope. Standard equations of a straight line include the point-slope, slope-intercept, two-point, intercept, and normal forms \citep{Nichols1893,Tanner1898,Eisenhart1939}.

Excluding the point, a straight line appears to be the simplest type of geometric locus. Yet, the locus of \emph{a point} is ill-defined because a locus has \emph{more} than \emph{one point}. Moreover, the uniform geometric property of a straight line remains unidentified. I will identify several, correlated, uniform properties exhibited by all of the points on a linear locus shortly. I will now define the locus of a point in terms of the locus of a position vector.

\subsection{Locus of a Position Vector}

The locus of a position vector will play a significant role in analyses that follow. A position vector $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}, & \cdots, & x_{d}\end{pmatrix}^{T}$ is defined to be the locus of a directed, straight line segment formed by two points $P_{\mathbf{0}}\begin{pmatrix} 0, & 0, & \cdots, & 0 \end{pmatrix}$ and $P_{\mathbf{x}}\begin{pmatrix} x_{1}, & x_{2}, & \cdots, & x_{d}\end{pmatrix}$ which are at a distance of
\[
\left\Vert \mathbf{x}\right\Vert =\left( x_{1}^{2}+x_{2}^{2}+\cdots+x_{d}^{2}\right) ^{1/2}
\]
from each other, where $\left\Vert \mathbf{x}\right\Vert $ denotes the length of a position vector $\mathbf{x}$, such that each point coordinate $x_{i}$ or vector component $x_{i}$ is at a signed distance of $\left\Vert \mathbf{x}\right\Vert \cos\alpha_{ij}$ from the origin $P_{\mathbf{0}}$, along the direction of an orthonormal coordinate axis $\mathbf{e}_{j}$, where $\cos\alpha_{ij}$ is the direction cosine between the vector component $x_{i}$ and the orthonormal coordinate axis $\mathbf{e}_{j}$.

It follows that the locus of a position vector $\mathbf{x}$ is specified by an ordered set of signed magnitudes
\begin{equation}
\mathbf{x}\triangleq\begin{pmatrix} \left\Vert \mathbf{x}\right\Vert \cos\alpha_{x_{1}1}, & \left\Vert \mathbf{x}\right\Vert \cos\alpha_{x_{2}2}, & \cdots, & \left\Vert \mathbf{x}\right\Vert \cos\alpha_{x_{d}d}\end{pmatrix}^{T} \label{Geometric Locus of Vector}
\end{equation}
along the axes of the standard set of basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right) ,\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\} \text{,}
\]
all of which describe a unique, ordered $d$-tuple of geometric locations on $d$ axes $\mathbf{e}_{j}$, where $\left\Vert \mathbf{x}\right\Vert $ is the length of the vector $\mathbf{x}$, $\left( \cos\alpha_{x_{1}1},\cdots,\cos\alpha_{x_{d}d}\right) $ are the direction cosines of the components $\begin{pmatrix} x_{1}, & \cdots, & x_{d}\end{pmatrix}$ of the vector $\mathbf{x}$ relative to the standard set of orthonormal coordinate axes $\left\{ \mathbf{e}_{j}\right\} _{j=1}^{d}$, and each vector component $x_{i}$ specifies a point coordinate $x_{i}$ of the endpoint $P_{\mathbf{x}}$ of the vector $\mathbf{x}$. Using Eq.
(\ref{Geometric Locus of Vector}), a point is the endpoint on the locus of a position vector, such that a correlated point $P_{\mathbf{x}}$ and position vector $\mathbf{x}$ both describe an ordered pair of real numbers in the real Euclidean plane or an ordered $d$-tuple of real numbers in real Euclidean space, all of which jointly determine a geometric location in $\mathbb{R}^{2}$ or $\mathbb{R}^{d}$. In the analyses that follow, the term vector refers to a position vector.

The definition of the locus of a vector is based on inner product statistics of vectors in a Hilbert space $\mathfrak{H}$. I will now demonstrate that the multiplication of any two vectors in Hilbert space determines a rich system of geometric and topological relationships between the loci of the two vectors. The inner product statistics defined next determine a Hilbert space $\mathfrak{H}$.

\subsection{Inner Product Statistics}

The inner product expression $\mathbf{x}^{T}\mathbf{x}$ defined by
\[
\mathbf{x}^{T}\mathbf{x}=x_{1}x_{1}+x_{2}x_{2}+\cdots+x_{d}x_{d}
\]
generates the norm $\left\Vert \mathbf{x}\right\Vert $ of the vector $\mathbf{x}$
\[
\left\Vert \mathbf{x}\right\Vert =\left( x_{1}^{2}+x_{2}^{2}+\cdots+x_{d}^{2}\right) ^{1/2}
\]
which determines the Euclidean distance between the endpoint $P_{\mathbf{x}}$ of $\mathbf{x}$ and the origin $P_{\mathbf{0}}$, where the norm $\left\Vert \mathbf{x}\right\Vert $ measures the length of the vector $\mathbf{x}$, which is also the magnitude of $\mathbf{x}$. For any given scalar $\zeta\in\mathbb{R}^{1}$, $\left\Vert \zeta\mathbf{x}\right\Vert =|\zeta|\left\Vert \mathbf{x}\right\Vert $ \citep{Naylor1971}.

The inner product function $\mathbf{x}^{T}\mathbf{y}$ also determines the angle between two vectors $\mathbf{x}$ and $\mathbf{y}$ in $\mathbb{R}^{d}$. Given any two vectors $\mathbf{x}$ and $\mathbf{y}$, the inner product expression
\begin{equation}
\mathbf{x}^{T}\mathbf{y}=x_{1}y_{1}+x_{2}y_{2}+\cdots+x_{d}y_{d} \label{Inner Product Expression1}
\end{equation}
is equivalent to the vector relationship
\begin{equation}
\mathbf{x}^{T}\mathbf{y}=\left\Vert \mathbf{x}\right\Vert \left\Vert \mathbf{y}\right\Vert \cos\theta\text{,} \label{Inner Product Expression2}
\end{equation}
where $\theta$ is the angle between the vectors $\mathbf{x}$ and $\mathbf{y}$. If $\theta=90^{\circ}$, then $\mathbf{x}^{T}\mathbf{y}=0$, and the vectors $\mathbf{x}$ and $\mathbf{y}$ are said to be orthogonal to each other. Accordingly, the inner product function $\mathbf{x}^{T}\mathbf{y}$ allows us to determine vectors which are orthogonal or perpendicular to each other; orthogonal vectors are denoted by $\mathbf{x}\perp\mathbf{y}$ \citep{Naylor1971}.

\subsubsection{Energy of a Vector}

The functional $\mathbf{x}^{T}\mathbf{x}=\left\Vert \mathbf{x}\right\Vert ^{2}$ determines the energy of the vector $\mathbf{x}$. Using Eq. (\ref{Geometric Locus of Vector}), a vector $\mathbf{x}$ exhibits an energy $\left\Vert \mathbf{x}\right\Vert ^{2}$ according to its locus, so that the scaled vector $\zeta\mathbf{x}$ exhibits the scaled energy $\zeta^{2}\left\Vert \mathbf{x}\right\Vert ^{2}$ of its scaled locus. Principal eigenaxes of quadratic curves and surfaces exhibit an eigenenergy according to the locus of a major axis.

The relationships in Eqs (\ref{Inner Product Expression1}) and (\ref{Inner Product Expression2}) are derived from second-order distance statistics.
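These inner product statistics are easy to exercise numerically. The following minimal sketch, with illustrative vectors, computes the norm, the energy, the cosine of the angle between two vectors, and the direction cosines of Eq. (\ref{Geometric Locus of Vector}):
\begin{verbatim}
# Sketch: inner product statistics of position vectors -- norm, energy,
# angle, orthogonality, and direction cosines (illustrative vectors).
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([-4.0, 3.0])

norm_x = np.sqrt(x @ x)       # length ||x||
energy = x @ x                # energy ||x||^2 exhibited by the locus of x
cos_theta = (x @ y) / (norm_x * np.sqrt(y @ y))
print(norm_x, energy, cos_theta)   # 5.0 25.0 0.0  (x is orthogonal to y)

dir_cos = x / norm_x          # direction cosines cos(alpha_i)
print(np.allclose(norm_x * dir_cos, x))  # each x_i = ||x|| cos(alpha_i)
\end{verbatim}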
I will now demonstrate that second-order distance statistics determine a rich system of geometric and topological relationships between the loci of two vectors.

\subsection{Second-order Distance Statistics}

The relationship $\boldsymbol{\upsilon}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\upsilon}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\varphi$ between two vectors $\boldsymbol{\upsilon}$ and $\boldsymbol{\nu}$ can be derived by using the law of cosines \citep{Lay2006}
\begin{equation}
\left\Vert \boldsymbol{\upsilon}-\boldsymbol{\nu}\right\Vert ^{2}=\left\Vert \boldsymbol{\upsilon}\right\Vert ^{2}+\left\Vert \boldsymbol{\nu}\right\Vert ^{2}-2\left\Vert \boldsymbol{\upsilon}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\varphi\label{Inner Product Statistic}
\end{equation}
which reduces to
\begin{align*}
\left\Vert \boldsymbol{\upsilon}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\varphi & =\upsilon_{1}\nu_{1}+\upsilon_{2}\nu_{2}+\cdots+\upsilon_{d}\nu_{d}\\
& =\boldsymbol{\upsilon}^{T}\boldsymbol{\nu}=\boldsymbol{\nu}^{T}\boldsymbol{\upsilon}\text{.}
\end{align*}
The vector relationships in Eq. (\ref{Inner Product Statistic}) indicate that the inner product statistic $\boldsymbol{\upsilon}^{T}\boldsymbol{\nu}$ determines the length $\left\Vert \boldsymbol{\upsilon}-\boldsymbol{\nu}\right\Vert $ of the vector from $\boldsymbol{\nu}$ to $\boldsymbol{\upsilon}$, i.e., the vector $\boldsymbol{\upsilon}-\boldsymbol{\nu}$, which is the distance between the endpoints of $\boldsymbol{\upsilon}$ and $\boldsymbol{\nu}$, so that
\[
\left\Vert \boldsymbol{\upsilon}-\boldsymbol{\nu}\right\Vert =\left( \left\Vert \boldsymbol{\upsilon}\right\Vert ^{2}+\left\Vert \boldsymbol{\nu}\right\Vert ^{2}-2\boldsymbol{\upsilon}^{T}\boldsymbol{\nu}\right) ^{1/2}\text{.}
\]
Because second-order distance statistics are symmetric, the law of cosines
\[
\left\Vert \boldsymbol{\nu}-\boldsymbol{\upsilon}\right\Vert ^{2}=\left\Vert \boldsymbol{\nu}\right\Vert ^{2}+\left\Vert \boldsymbol{\upsilon}\right\Vert ^{2}-2\left\Vert \boldsymbol{\nu}\right\Vert \left\Vert \boldsymbol{\upsilon}\right\Vert \cos\varphi
\]
also determines the length $\left\Vert \boldsymbol{\nu}-\boldsymbol{\upsilon}\right\Vert $ of the vector from $\boldsymbol{\upsilon}$ to $\boldsymbol{\nu}$ (the vector $\boldsymbol{\nu}-\boldsymbol{\upsilon}$), which is also the distance between the endpoints of $\boldsymbol{\upsilon}$ and $\boldsymbol{\nu}$.
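As a quick numerical check of the law of cosines relationships above (a minimal sketch with illustrative vectors):
\begin{verbatim}
# Sketch: the squared distance between vector endpoints agrees with
# ||u||^2 + ||v||^2 - 2 u^T v, so u^T v determines ||u - v||.
import numpy as np

u = np.array([2.0, 1.0, 0.5])
v = np.array([-1.0, 3.0, 2.0])

lhs = np.sum((u - v) ** 2)            # ||u - v||^2
rhs = u @ u + v @ v - 2 * (u @ v)     # law of cosines, via u^T v
print(np.isclose(lhs, rhs))           # True
\end{verbatim}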
Therefore, the inner product statistic $\boldsymbol{\upsilon}^{T}\boldsymbol{\nu}$ between two vectors $\boldsymbol{\upsilon}$ and $\boldsymbol{\nu}$ in Hilbert space $\mathfrak{H}$
\begin{align}
\boldsymbol{\upsilon}^{T}\boldsymbol{\nu} & =\upsilon_{1}\nu_{1}+\upsilon_{2}\nu_{2}+\cdots+\upsilon_{d}\nu_{d}\label{Locus Statistics in Hilbert Space}\\
& =\left\Vert \boldsymbol{\upsilon}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\varphi\nonumber
\end{align}
determines the distance $\left\Vert \boldsymbol{\upsilon}-\boldsymbol{\nu}\right\Vert $ between the geometric loci
\[
\begin{pmatrix} \left\Vert \boldsymbol{\upsilon}\right\Vert \cos\alpha_{\upsilon_{1}1}, & \left\Vert \boldsymbol{\upsilon}\right\Vert \cos\alpha_{\upsilon_{2}2}, & \cdots, & \left\Vert \boldsymbol{\upsilon}\right\Vert \cos\alpha_{\upsilon_{d}d}\end{pmatrix}
\]
and
\[
\begin{pmatrix} \left\Vert \boldsymbol{\nu}\right\Vert \cos\alpha_{\nu_{1}1}, & \left\Vert \boldsymbol{\nu}\right\Vert \cos\alpha_{\nu_{2}2}, & \cdots, & \left\Vert \boldsymbol{\nu}\right\Vert \cos\alpha_{\nu_{d}d}\end{pmatrix}
\]
of the given vectors. Thus, it is concluded that the vector relationships contained within Eq. (\ref{Inner Product Statistic}) determine a rich system of topological relationships between the loci of two vectors.

Figure $\ref{Second-order Distance Statisitcs}$ depicts correlated algebraic, geometric, and topological structures determined by an inner product statistic (see Fig. $\ref{Second-order Distance Statisitcs}$a). Inner product statistics include the component of a vector along another vector, which is also known as a scalar projection. Figure $\ref{Second-order Distance Statisitcs}$ illustrates the geometric nature of scalar projections for obtuse angles (see Fig. $\ref{Second-order Distance Statisitcs}$b) and acute angles (see Fig. $\ref{Second-order Distance Statisitcs}$c) between vectors.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4368in]{Figure11.png}}
\caption{$\left( a\right) $ Inner product statistics specify angles and corresponding distances between the geometric loci of vectors. Scalar projection statistics specify $\left( b\right) $ negative signed magnitudes or $\left( c\right) $ positive signed magnitudes along the axes of given vectors.}
\label{Second-order Distance Statisitcs}
\end{figure}

\subsubsection{Scalar Projection Statistics}

Scalar projection statistics specify signed magnitudes along the axes of given vectors. The inner product statistic $\mathbf{x}^{T}\mathbf{y}=\left\Vert \mathbf{x}\right\Vert \left\Vert \mathbf{y}\right\Vert \cos\theta$ can be interpreted as the length $\left\Vert \mathbf{x}\right\Vert $ of $\mathbf{x}$ times the scalar projection of $\mathbf{y}$ onto $\mathbf{x}$
\begin{equation}
\mathbf{x}^{T}\mathbf{y}=\left\Vert \mathbf{x}\right\Vert \times\left[ \left\Vert \mathbf{y}\right\Vert \cos\theta\right] \text{,} \label{Scalar Projection}
\end{equation}
where the scalar projection of $\mathbf{y}$ onto $\mathbf{x}$, also known as the component of $\mathbf{y}$ along $\mathbf{x}$, is defined to be the signed magnitude $\left\Vert \mathbf{y}\right\Vert \cos\theta$ of the vector projection, where $\theta$ is the angle between $\mathbf{x}$ and $\mathbf{y}$ \citep{Stewart2009}.
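A two-line numerical sketch of Eq. (\ref{Scalar Projection}), with illustrative vectors, shows a negative signed magnitude for an obtuse angle:
\begin{verbatim}
# Sketch: x^T y equals ||x|| times the signed magnitude ||y|| cos(theta).
import numpy as np

x, y = np.array([1.0, 0.0]), np.array([-2.0, 2.0])
signed_mag = (x @ y) / np.linalg.norm(x)   # ||y|| cos(theta) = -2.0 (obtuse)
print(np.isclose(x @ y, np.linalg.norm(x) * signed_mag))  # True
\end{verbatim}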
Scalar projections are denoted by $\operatorname{comp}_{\overrightarrow{\mathbf{x}}}\left( \overrightarrow{\mathbf{y}}\right) $, where $\operatorname{comp}_{\overrightarrow{\mathbf{x}}}\left( \overrightarrow{\mathbf{y}}\right) <0$ if $\pi/2<\theta\leq\pi$. The scalar projection statistic $\left\Vert \mathbf{y}\right\Vert \cos\theta$ satisfies the inner product relationship $\left\Vert \mathbf{y}\right\Vert \cos\theta=\frac{\mathbf{x}^{T}\mathbf{y}}{\left\Vert \mathbf{x}\right\Vert }$ between the unit vector $\frac{\mathbf{x}}{\left\Vert \mathbf{x}\right\Vert }$ and the vector $\mathbf{y}$.

\subsubsection{Locus Methods in Dual Hilbert Spaces}

\citet{Naylor1971} note that a truly amazing number of problems in engineering and science can be fruitfully treated with geometric methods in Hilbert space. In this paper, I will devise three systems of data-driven, locus equations, subject to statistical laws that satisfy a binary classification theorem, which involve geometric and statistical methods in dual Hilbert spaces. In the process, I will devise data-driven, mathematical laws that generate optimal statistical classification systems for digital data. In the next section, I\ will formulate equations and identify geometric properties of linear loci.

\section{Loci of Lines, Planes, and Hyperplanes}

I will now devise the fundamental locus equation that determines loci of lines, planes, and hyperplanes. I will use this equation to identify correlated, uniform properties exhibited by all of the points on a linear locus. I will also devise a general eigen-coordinate system that determines all forms of linear loci. The analysis that follows will denote both points and vectors by $\mathbf{x}$.

\subsection{Vector Equation of a Linear Locus}

Let $\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}\end{pmatrix}^{T}$ be a \emph{fixed vector} in the real Euclidean plane and consider the line $l$ at the endpoint of $\boldsymbol{\nu}$ that is perpendicular to $\boldsymbol{\nu}$. It follows that the endpoint of the vector $\boldsymbol{\nu}$ is a point on the line $l$. Thereby, the coordinates $\left( \nu_{1},\nu_{2}\right) $ of $\boldsymbol{\nu}$ delineate and satisfy $l$. In addition, consider an arbitrary vector $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}\end{pmatrix}^{T}$ whose endpoint is also on the line $l$. Thereby, the coordinates $\left( x_{1},x_{2}\right) $ of the point $\mathbf{x}$ also delineate and satisfy $l$. Finally, let $\phi$ be the acute angle between the vectors $\boldsymbol{\nu}$ and $\mathbf{x}$, satisfying the constraints $0\leq\phi\leq\pi/2$ and the relationship $\cos\phi=\frac{\left\Vert \boldsymbol{\nu}\right\Vert }{\left\Vert \mathbf{x}\right\Vert }$. Using all of the above assumptions, it follows that the locus of points $\left( x_{1},x_{2}\right) $ on a line $l$ is determined by the equation
\begin{equation}
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi\label{Linear Locus Functional}
\end{equation}
which is the vector equation of a line \citep{Davis1973}. By way of illustration, Fig. $\ref{Geometric Locus of Line}$ depicts the geometric locus of a line in the real Euclidean plane $\mathbb{R}^{2}$.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure12.png}}
\caption{An elegant, general eigen-coordinate system for lines that is readily generalized to planes and hyperplanes. Any vector $\mathbf{x}\triangleq\protect\begin{pmatrix} x_{1}, & x_{2}\protect\end{pmatrix}^{T}$ whose endpoint $\protect\begin{pmatrix} x_{1}, & x_{2}\protect\end{pmatrix}$ is on the line $l$ explicitly and exclusively references the fixed vector $\boldsymbol{\nu}\triangleq\protect\begin{pmatrix} \nu_{1}, & \nu_{2}\protect\end{pmatrix}^{T}$ which is shown to be the principal eigenaxis of the line $l$.}
\label{Geometric Locus of Line}
\end{figure}

\subsection{Fundamental Equation of a Linear Locus}

Take any fixed vector $\boldsymbol{\nu}$, and consider the line $l$ that is determined by Eq. (\ref{Linear Locus Functional}), where the axis of $\boldsymbol{\nu}$ is perpendicular to the specified line $l$, and the endpoint of $\boldsymbol{\nu}$ is on $l$. Given that any vector $\mathbf{x}$ with its endpoint on the given line $l$ satisfies the identity $\left\Vert \mathbf{x}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert $ with the fixed vector $\boldsymbol{\nu}$, it follows that the locus of a line $l$ is also determined by the vector equation
\begin{equation}
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert ^{2}\text{.} \label{Normal Eigenaxis Functional}
\end{equation}
Equations (\ref{Linear Locus Functional}) and (\ref{Normal Eigenaxis Functional}) are readily generalized to planes $p$ and hyperplanes $h$ in $\mathbb{R}^{d}$ by letting $\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}, & \cdots, & \nu_{d}\end{pmatrix}^{T}$ and $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}, & \cdots, & x_{d}\end{pmatrix}^{T}$. Because Eq. (\ref{Normal Eigenaxis Functional}) contains no constants or parameters, Eq. (\ref{Normal Eigenaxis Functional}) is the fundamental equation of a linear locus.

So, take any line, plane, or hyperplane in $\mathbb{R}^{d}$. It follows that the linear locus is delineated by a fixed vector $\boldsymbol{\nu}$, where the axis of $\boldsymbol{\nu}$ is perpendicular to the linear locus and the endpoint of $\boldsymbol{\nu}$ is on the linear locus. Assuming that $\left\Vert \boldsymbol{\nu}\right\Vert \neq0$, Eq. (\ref{Normal Eigenaxis Functional}) can also be written as
\begin{equation}
\frac{\mathbf{x}^{T}\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert }=\left\Vert \boldsymbol{\nu}\right\Vert \text{.} \label{Normal Form Normal Eigenaxis}
\end{equation}
The axis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert $ has length $1$ and points in the direction of the vector $\boldsymbol{\nu}$, such that $\left\Vert \boldsymbol{\nu}\right\Vert $ is the distance of a specified line $l$, plane $p$, or hyperplane $h$ to the origin. Using Eq. (\ref{Normal Form Normal Eigenaxis}), it follows that the distance $\mathbf{\Delta}$ of a line, plane, or hyperplane from the origin is specified by the magnitude $\left\Vert \boldsymbol{\nu}\right\Vert $ of the axis $\boldsymbol{\nu}$.

I will now argue that the fixed vector $\boldsymbol{\nu}$ is the \emph{principal eigenaxis} of linear loci.

\subsection{Principal Eigenaxes of Linear Loci}

Figure $\ref{Geometric Locus of Line}$ shows how the locus of a fixed vector $\boldsymbol{\nu}$ specifies the locus of a line $l$. I will show that such fixed vectors $\boldsymbol{\nu}$ provide exclusive, intrinsic reference axes for loci of lines, planes, and hyperplanes. Intrinsic reference axes, i.e., coordinate axes, of geometric loci are inherently specified by locus equations. Therefore, an intrinsic reference axis is an integral component of a locus of points.
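The fundamental equation of a linear locus can be verified directly: every point on a line constructed from a fixed vector $\boldsymbol{\nu}$ satisfies Eq. (\ref{Normal Eigenaxis Functional}). A minimal sketch, with an illustrative choice of $\boldsymbol{\nu}$:
\begin{verbatim}
# Sketch: points on the line through the endpoint of nu, perpendicular
# to nu, all satisfy x^T nu = ||nu||^2 (the fundamental equation).
import numpy as np

nu = np.array([2.0, 1.0])               # fixed vector (illustrative)
perp = np.array([-nu[1], nu[0]])        # direction perpendicular to nu
t = np.array([-2.0, 0.0, 1.0, 3.5])     # parameters along the line l
pts = nu + np.outer(t, perp)            # points x on the line l
print(np.allclose(pts @ nu, nu @ nu))   # True: x^T nu = ||nu||^2
\end{verbatim}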
Furthermore, intrinsic reference axes are principal eigenaxes of conic sections and quadratic surfaces \citep{Hewson2009}. I will now demonstrate that the axis denoted by $\boldsymbol{\nu}$ in Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{Normal Form Normal Eigenaxis}) is the principal eigenaxis of linear loci.

All of the major axes of conic sections and quadratic surfaces are major intrinsic axes which may also coincide as exclusive, fixed reference axes. Major axes have been referred to as ``the axis of the curve'' \citep{Nichols1893}, ``the principal axis,'' and ``the principal axis of the curve'' \citep{Tanner1898}. In order to demonstrate that the vector $\boldsymbol{\nu}$ is the principal eigenaxis of linear loci, it must be shown that $\boldsymbol{\nu}$ is a major intrinsic axis which is also an exclusive, fixed reference axis.

It will first be argued that $\boldsymbol{\nu}$ is a major intrinsic axis for a linear locus of points. Using the definitions of Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), or (\ref{Normal Form Normal Eigenaxis}), it follows that the axis $\boldsymbol{\nu}$ is a major intrinsic axis because all of the points $\mathbf{x}$ on a linear locus satisfy similar algebraic and geometric constraints related to the locus of $\boldsymbol{\nu}$ that are inherently specified by Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{Normal Form Normal Eigenaxis}). Therefore, $\boldsymbol{\nu}$ is a major intrinsic axis of a linear locus. The uniform algebraic and geometric constraints satisfied by all of the points on a linear locus specify the properties exhibited by each point on the linear locus.

Again, using the definitions of Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), or (\ref{Normal Form Normal Eigenaxis}), it follows that the axis $\boldsymbol{\nu}$ is an exclusive, fixed reference axis because the uniform properties possessed by all of the points $\mathbf{x}$ on a linear locus are defined solely by their relation to the axis of $\boldsymbol{\nu}$. Therefore, all of the points $\mathbf{x}$ on any given line, plane, or hyperplane explicitly and exclusively reference the major intrinsic axis $\boldsymbol{\nu}$ of the linear locus. It follows that the vector $\boldsymbol{\nu}$ provides an exclusive, fixed reference axis for a linear locus.

Thus, it is concluded that the vector $\boldsymbol{\nu}$ is a major intrinsic axis that coincides as an exclusive, fixed reference axis for a linear locus. It follows that the vector $\boldsymbol{\nu}$ is the major axis of linear loci. Therefore, the vector denoted by $\boldsymbol{\nu}$ in Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{Normal Form Normal Eigenaxis}) is the principal eigenaxis of linear curves and plane or hyperplane surfaces in $\mathbb{R}^{d}$.

I will now use Eq. (\ref{Normal Form Normal Eigenaxis}) to devise a coordinate form locus equation which I will use to identify a uniform property exhibited by any point on a linear curve or surface.

\subsubsection{Coordinate Form Equation of a Linear Locus}

Using Eq.
(\ref{Normal Form Normal Eigenaxis}), it follows that any line $l$ in the Euclidean plane $\mathbb{R}^{2}$ and any plane $p$ or hyperplane $h$ in Euclidean space $\mathbb{R}^{d}$ is determined by the vector equation
\begin{equation}
\mathbf{x}^{T}\mathbf{u}_{P_{e}}=\mathbf{\Delta}\text{,} \label{normal form linear functional}
\end{equation}
where $\mathbf{u}_{P_{e}}$ is a unit length principal eigenaxis that is perpendicular to $l$, $p$, or $h$ and $\mathbf{\Delta}$ denotes the distance of $l$, $p$, or $h$ to the origin. The unit eigenvector $\mathbf{u}_{P_{e}}$ specifies the direction of the principal eigenaxis $\boldsymbol{\nu}$ of a linear curve or surface, while the distance $\mathbf{\Delta}$ of a line, plane, or hyperplane from the origin is specified by the magnitude $\left\Vert \boldsymbol{\nu}\right\Vert $ of its principal eigenaxis $\boldsymbol{\nu}$.

Express $\mathbf{u}_{P_{e}}$ in terms of standard orthonormal basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right) ,\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\}
\]
so that
\[
\mathbf{u}_{P_{e}}=\cos\alpha_{1}\mathbf{e}_{1}+\cos\alpha_{2}\mathbf{e}_{2}+\cdots+\cos\alpha_{d}\mathbf{e}_{d}\text{,}
\]
where $\cos\alpha_{i}$ are the direction cosines between $\mathbf{u}_{P_{e}}$ and $\mathbf{e}_{i}$. The term $\cos\alpha_{i}$ is the $i^{\text{th}}$ component of the unit principal eigenaxis $\mathbf{u}_{P_{e}}$ along the coordinate axis $\mathbf{e}_{i}$, where each scale factor $\cos\alpha_{i}$ is said to be normalized. Substitution of the expression for $\mathbf{u}_{P_{e}}$ into Eq. (\ref{normal form linear functional}) produces a coordinate form locus equation
\begin{equation}
x_{1}\cos\alpha_{1}+x_{2}\cos\alpha_{2}+\cdots+x_{d}\cos\alpha_{d}=\mathbf{\Delta} \label{Unit Normal Coordinate Form Equation}
\end{equation}
which is satisfied by the transformed coordinates $\cos\alpha_{i}x_{i}$ of all of the points $\mathbf{x}$ on the locus of a line, plane, or hyperplane. Equation (\ref{Unit Normal Coordinate Form Equation}) is similar to the well-known coordinate equation version of a linear locus
\[
\alpha_{1}x_{1}+\alpha_{2}x_{2}+\ldots+\alpha_{n}x_{n}=p\text{,}
\]
which is a unique equation for a given linear locus if and only if it contains the components $\cos\alpha_{i}$ of the unit principal eigenaxis $\mathbf{u}_{P_{e}}$ of the linear locus.

I will now use Eq. (\ref{Unit Normal Coordinate Form Equation}) to define a uniform property which is exhibited by any point on a linear locus.

\subsubsection{Uniform Property of a Linear Locus}

Using Eq. (\ref{Unit Normal Coordinate Form Equation}), it follows that a line, plane, or hyperplane is a locus of points $\mathbf{x}$, all of which possess a set of transformed coordinates
\[
\mathbf{u}_{P_{e}}^{T}\mathbf{x}=\left( \cos\alpha_{1}x_{1},\cos\alpha_{2}x_{2},\ldots,\cos\alpha_{d}x_{d}\right) ^{T}\text{,}
\]
such that the sum of those transformed coordinates equals the distance $\mathbf{\Delta}$ that the line, plane, or hyperplane is from the origin $\left( 0,0,\ldots,0\right) $
\begin{equation}
\sum\nolimits_{i=1}^{d}\cos\alpha_{i}x_{i}=\mathbf{\Delta}\text{,} \label{Geometric Property of Linear Loci}
\end{equation}
where $x_{i}$ are point coordinates or vector components, and $\cos\alpha_{i}$ are the direction cosines between a unit principal eigenaxis $\mathbf{u}_{P_{e}}$ and the coordinate axes $\mathbf{e}_{i}:\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right) ,\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\} $.
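The coordinate form equation admits the same kind of direct check: normalizing $\boldsymbol{\nu}$ gives the direction cosines of $\mathbf{u}_{P_{e}}$, and the transformed coordinates of any point on the locus sum to the distance $\mathbf{\Delta}$. A minimal sketch, continuing the illustrative line used above:
\begin{verbatim}
# Sketch: sum of transformed coordinates cos(alpha_i) * x_i equals the
# distance Delta of the line from the origin (coordinate form equation).
import numpy as np

nu = np.array([2.0, 1.0])                  # principal eigenaxis
u_Pe = nu / np.linalg.norm(nu)             # unit principal eigenaxis
delta = np.linalg.norm(nu)                 # distance Delta to the origin

x = nu + 4.0 * np.array([-nu[1], nu[0]])   # a point on the line
print(np.isclose(np.sum(u_Pe * x), delta)) # True: sum cos(alpha_i) x_i = Delta
\end{verbatim}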
Therefore, a point $\mathbf{x}$ is on the locus of a line $l$, plane $p$, or hyperplane $h$ if and only if the transformed coordinates of $\mathbf{x}$ satisfy Eq. (\ref{Geometric Property of Linear Loci}); otherwise, the point $\mathbf{x}$ is not on the locus of points determined by Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{normal form linear functional}). Thereby, it is concluded that all of the points $\mathbf{x}$ on a linear locus possess a characteristic set of transformed coordinates, such that the inner product of each vector $\mathbf{x}$ with the unit principal eigenaxis $\mathbf{u}_{P_{e}}$ satisfies the distance $\mathbf{\Delta}$ of the linear locus from the origin. Likewise, it is concluded that the sum of transformed coordinates of any point on a linear locus satisfies the magnitude of the principal eigenaxis of the linear locus. Properties of principal eigenaxes are examined next.

\subsection{Properties of Principal Eigenaxes}

Take any line, plane, or hyperplane in $\mathbb{R}^{d}$. Given the line, plane, or hyperplane and Eqs (\ref{Linear Locus Functional}) or (\ref{Normal Eigenaxis Functional}), it follows that a principal eigenaxis $\boldsymbol{\nu}$ exists, such that the endpoint of $\boldsymbol{\nu}$ is on the line, plane, or hyperplane and the axis of $\boldsymbol{\nu}$ is perpendicular to the line, plane, or hyperplane. Using Eq. (\ref{Normal Form Normal Eigenaxis}), it follows that the length $\left\Vert \boldsymbol{\nu}\right\Vert $ of $\boldsymbol{\nu}$ is specified by the line, plane, or hyperplane. Using Eq. (\ref{Unit Normal Coordinate Form Equation}), it follows that the unit principal eigenaxis $\mathbf{u}_{P_{e}}$ of the linear curve or surface is characterized by a unique set of direction cosines $\left\{ \cos\alpha_{i}\right\} _{i=1}^{d}$ between $\mathbf{u}_{P_{e}}$ and the standard set of basis vectors $\left\{ \mathbf{e}_{i}\right\} _{i=1}^{d}$.

Next, take any principal eigenaxis $\boldsymbol{\nu}$ in $\mathbb{R}^{d}$. Given the principal eigenaxis $\boldsymbol{\nu}$ and Eqs (\ref{Linear Locus Functional}) or (\ref{Normal Eigenaxis Functional}), it follows that a line, plane, or hyperplane exists that is perpendicular to $\boldsymbol{\nu}$, such that the endpoint of the principal eigenaxis $\boldsymbol{\nu}$ is on the line, plane, or hyperplane. Using Eq. (\ref{Normal Form Normal Eigenaxis}), it follows that the distance of the line, plane, or hyperplane from the origin is specified by the magnitude $\left\Vert \boldsymbol{\nu}\right\Vert $ of the principal eigenaxis $\boldsymbol{\nu}$.

Thus, it is concluded that the principal eigenaxis of a linear locus is determined by a unique set of direction cosines, and that the principal eigenaxis of a linear locus determines a unique, linear locus. I will now show that the principal eigenaxis of any linear locus satisfies the linear locus in terms of its eigenenergy.

\subsubsection{Characteristic Eigenenergy}

Take the principal eigenaxis $\boldsymbol{\nu}$ of any line, plane, or hyperplane in $\mathbb{R}^{d}$. Therefore, the principal eigenaxis $\boldsymbol{\nu}$ satisfies Eqs (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{Normal Form Normal Eigenaxis}).
Using Eqs.\ (\ref{Linear Locus Functional}) or (\ref{Normal Eigenaxis Functional}), it follows that the principal eigenaxis $\boldsymbol{\nu}$ satisfies a linear locus in terms of its squared length $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$
\begin{equation}
\boldsymbol{\nu}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert^{2}=\sum\nolimits_{i=1}^{d}\nu_{i\ast}^{2}\text{,}
\label{Characteristic Eigenenergy}
\end{equation}
where $\nu_{i\ast}$ are the eigen-coordinates of $\boldsymbol{\nu}$ and $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ is the eigenenergy exhibited by the locus of $\boldsymbol{\nu}$. Thus, the principal eigenaxis $\boldsymbol{\nu}$ of any given line, plane, or hyperplane exhibits a characteristic eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ that is unique for the linear locus. It follows that the locus of any given line, plane, or hyperplane is determined by the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of its principal eigenaxis $\boldsymbol{\nu}$.

\subsection{Properties Possessed by Points on Linear Loci}

Take any point $\mathbf{x}$ on any linear locus. Given Eq. (\ref{Linear Locus Functional}) and the point $\mathbf{x}$ on the linear locus, it follows that the length of the component $\left\Vert \mathbf{x}\right\Vert \cos\phi$ of the vector $\mathbf{x}$ along the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus satisfies the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of $\boldsymbol{\nu}$, i.e., $\left\Vert \mathbf{x}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert$, where the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of $\boldsymbol{\nu}$ specifies the distance $\mathbf{\Delta}$ of the linear locus from the origin. Accordingly, the signed magnitude of any given vector $\mathbf{x}$ along the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus satisfies the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of $\boldsymbol{\nu}$.

Given Eq. (\ref{normal form linear functional}) and the point $\mathbf{x}$ on the linear locus, it follows that the inner product $\mathbf{x}^{T}\mathbf{u}_{P_{e}}$ of the vector $\mathbf{x}$ with the unit principal eigenaxis $\mathbf{u}_{P_{e}}$ of the linear locus satisfies the distance $\mathbf{\Delta}$ of the linear locus from the origin, i.e., $\mathbf{x}^{T}\mathbf{u}_{P_{e}}=\mathbf{\Delta}$. Likewise, using Eq. (\ref{Geometric Property of Linear Loci}), it follows that the sum of the normalized, transformed coordinates of the point $\mathbf{x}$ also satisfies the distance $\mathbf{\Delta}$ of the linear locus from the origin, i.e., $\sum\nolimits_{i=1}^{d}\cos\alpha_{i}x_{i}=\mathbf{\Delta}$. Finally, using Eq. (\ref{Normal Eigenaxis Functional}), it follows that the inner product $\mathbf{x}^{T}\boldsymbol{\nu}$ of the vector $\mathbf{x}$ with the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus
\[
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert^{2}
\]
satisfies the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ of the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus. Therefore, any given point $\mathbf{x}$ on any given linear locus satisfies the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus.

\subsection{Inherent Property of a Linear Locus}

It has been shown that the principal eigenaxis of any given linear locus satisfies the linear locus in terms of its eigenenergy.
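The point properties just derived can be checked in the same hypothetical setting. The sketch below (again an illustrative assumption, not part of the development) confirms that $\left\Vert \mathbf{x}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert$ and $\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ for a point on the plane:

\begin{verbatim}
import numpy as np

nu = np.array([2.0, 1.0, 2.0])    # hypothetical principal eigenaxis
u = nu / np.linalg.norm(nu)
rng = np.random.default_rng(1)
t = rng.normal(size=3)
x = nu + (t - (t @ u) * u)        # a point on the plane through nu

# Signed magnitude of x along nu equals ||nu||.
assert np.isclose((x @ nu) / np.linalg.norm(nu), np.linalg.norm(nu))
# x^T nu equals the eigenenergy ||nu||^2.
assert np.isclose(x @ nu, nu @ nu)
\end{verbatim}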
It has also been shown that any given point on any given linear locus satisfies the eigenenergy exhibited by the principal eigenaxis of the linear locus. Thereby, it has been demonstrated that the locus of any given line, plane, or hyperplane is determined by the eigenenergy of its principal eigenaxis. Therefore, the inherent property of a linear locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of the principal eigenaxis $\boldsymbol{\nu}$. In the next section, I will show that a principal eigenaxis $\boldsymbol{\nu}$ of a linear locus is the \emph{focus} of the linear locus.

\subsection{Overall Conclusions for Linear Loci}

It has been shown that the uniform properties exhibited by all of the points $\mathbf{x}$ on any given linear locus are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where each point $\mathbf{x}$ on the linear locus and the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus satisfy the linear locus in terms of the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by its principal eigenaxis $\boldsymbol{\nu}$. Thereby, it is concluded that the vector components of a principal eigenaxis specify all forms of linear curves and surfaces, and that all of the points $\mathbf{x}$ on any given line, plane, or hyperplane explicitly and exclusively reference the principal eigenaxis $\boldsymbol{\nu}$ in Eqs.\ (\ref{Linear Locus Functional}), (\ref{Normal Eigenaxis Functional}), and (\ref{Normal Form Normal Eigenaxis}). Therefore, the important generalizations for a linear locus are specified by the eigenenergy exhibited by the locus of its principal eigenaxis. Moreover, a principal eigenaxis is an exclusive and distinctive coordinate axis that specifies all of the points on a linear locus. Thereby, it is concluded that the principal eigenaxis of a linear locus provides an elegant, general eigen-coordinate system for a linear locus of points.

In the next section, I will formulate fundamental locus equations and identify geometric properties of quadratic loci.

\section{Loci of Quadratic Curves and Surfaces}

I will now devise fundamental locus equations that determine loci of conic sections and quadratic surfaces. I will use these equations to identify uniform properties exhibited by all of the points on a quadratic locus. I will also devise general eigen-coordinate systems that determine all forms of quadratic loci.

\subsection{Solving Quadratic Locus Problems}

Methods for solving geometric locus problems hinge on the identification of algebraic and geometric correlations for a given locus of points. Geometric correlations between a set of points which lie on a definite curve or surface correspond to geometric and algebraic constraints that are satisfied by the coordinates of any point on a given locus or geometric figure. Geometric figures are defined in two ways: $(1)$ as a figure with certain known properties and $(2)$ as the path of a point which moves under known conditions \citep{Nichols1893,Tanner1898}. The path of a point which moves under known conditions is called a \emph{generatrix}. Common generatrices involve curves or surfaces called quadratics, which include conic sections and quadratic surfaces. Quadratic surfaces are also called quadrics \citep{Hilbert1952}. I will now use the definition of a generatrix of a quadratic to devise fundamental locus equations for conic sections and quadratic surfaces.
\subsection{Generatrices of Quadratic Curves and Surfaces}

A generatrix is a point $P_{\mathbf{x}}$ which moves along a given path such that the path generates a curve or surface. Three of the quadratics are traced by a point $P_{\mathbf{x}}$ which moves so that its distance from a fixed point $P_{\mathbf{f}}$ always bears a constant ratio to its distance from a fixed line, plane, or hyperplane $D$. Quadratics which are generated in this manner include $d$-dimensional parabolas, hyperbolas, and ellipses. The geometric nature of the generatrix of these quadratics is considered next.

Take a fixed point $P_{\mathbf{f}}$ in the Euclidean plane, a line $D$ not going through $P_{\mathbf{f}}$, and a positive real number $e$. The set of points $P_{\mathbf{x}}$ such that the distance from $P_{\mathbf{x}}$ to $P_{\mathbf{f}}$ is $e$ times the shortest distance from $P_{\mathbf{x}}$ to $D$, where distance is measured along a perpendicular, is a locus of points termed a conic section. For any given conic section, the point $P_{\mathbf{f}}$ is called the focus, the line $D$ is called the directrix, and the term $e$ is called the eccentricity. If $e<1$, the conic is an ellipse; if $e=1$, the conic is a parabola; if $e>1$, the conic is a hyperbola. The quantities which determine the size and shape of a conic are its eccentricity $e$ and the distance of the focus $P_{\mathbf{f}}$ from the directrix $D$ \citep{Zwillinger1996}. Figure \ref{Parabolic Conic Section} depicts a parabolic conic section in the Euclidean plane.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure13.png}}
\caption{A parabolic conic section is the set of points $P_{\mathbf{x}}$ such that the distance from a point $P_{\mathbf{x}}$ to the focus $P_{\mathbf{f}}$ is $1$ times the shortest distance from a point $P_{\mathbf{x}}$ to the directrix $D$, where distance is measured along a perpendicular.}
\label{Parabolic Conic Section}
\end{figure}

The definition of a conic section is readily generalized to quadratic surfaces by taking a fixed point $P_{\mathbf{f}}$ in $\mathbb{R}^{d}$, a $\left( d-1\right)$-dimensional hyperplane not going through $P_{\mathbf{f}}$, and a positive real number $e$. I will now devise a vector equation of conic generatrices.

\subsection{Vector Equation of Conic Generatrices}

Consider the generatrix of a conic section $c$, where a point $P_{\mathbf{x}}$ moves so that its distance from a fixed point $P_{\mathbf{f}}$ bears a constant ratio to its distance from a fixed line $D$. Accordingly, take any given conic $c$.

\subsubsection{Assumptions}

Denote the major axis of the conic $c$ by $\boldsymbol{\nu}$. Let the focus $P_{\mathbf{f}}$ of the conic $c$ be the endpoint of the major axis $\boldsymbol{\nu}$ of the conic $c$. Accordingly, the focus $P_{\mathbf{f}}$ and the major axis $\boldsymbol{\nu}$ describe the same geometric location, where the distance between the focus $P_{\mathbf{f}}$ and the directrix $D$ is the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the major axis $\boldsymbol{\nu}$, i.e., $\boldsymbol{\nu}\perp D$. Any given point $P_{\mathbf{x}}$ on the conic $c$ is the endpoint of a vector $\mathbf{x}$. Therefore, any given point $P_{\mathbf{x}}$ and correlated vector $\mathbf{x}$ describe the same geometric location. The analysis that follows will denote both points and vectors by $\mathbf{x}$. Let $\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}\end{pmatrix}^{T}$ be the major axis of a conic $c$ in the real Euclidean plane.
Consider an arbitrary vector $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}\end{pmatrix}^{T}$ whose endpoint is on the conic $c$ and let $\theta$ be the angle between $\boldsymbol{\nu}$ and $\mathbf{x}$.

Given the above assumptions, it follows that a conic $c$ is a set of points $\mathbf{x}$ for which the distance $\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert$ between any given point $\mathbf{x}$ and the focus $\boldsymbol{\nu}$ is equal to $e$ times the shortest distance between the given point $\mathbf{x}$ and the directrix $D$. Now take any given point $\mathbf{x}$ on any given conic $c$. Using Eq. (\ref{Scalar Projection}), the inner product statistic $\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert \left\Vert \mathbf{x}\right\Vert \cos\theta$ can be interpreted as the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of $\boldsymbol{\nu}$ times the scalar projection of $\mathbf{x}$ onto $\boldsymbol{\nu}$
\[
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \boldsymbol{\nu}\right\Vert \times\left[ \left\Vert \mathbf{x}\right\Vert \cos\theta\right]\text{,}
\]
where the scalar projection of $\mathbf{x}$ onto $\boldsymbol{\nu}$, also known as the component of $\mathbf{x}$ along $\boldsymbol{\nu}$, is defined to be the signed magnitude $\left\Vert \mathbf{x}\right\Vert \cos\theta$ of the vector projection, where $\theta$ is the angle between $\boldsymbol{\nu}$ and $\mathbf{x}$. It follows that the shortest distance between the point $\mathbf{x}$ and the directrix $D$ is specified by the signed magnitude $\left\Vert \mathbf{x}\right\Vert \cos\theta$ of the component of the vector $\mathbf{x}$ along the major axis $\boldsymbol{\nu}$.

Accordingly, the geometric locus of a conic section $c$ involves a rich system of geometric and topological relationships between the locus of its major axis $\boldsymbol{\nu}$ and the loci of points $\mathbf{x}$ on the conic section $c$. By way of illustration, Fig. \ref{Geometric Locus of Conic Section} depicts the system of geometric and topological relationships between the locus of a major axis $\boldsymbol{\nu}$ and the loci of points $\mathbf{x}$ on a parabola.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure14.png}}
\caption{The geometric locus of a parabola involves a rich system of geometric and topological relationships between the locus of a major axis $\boldsymbol{\nu}$ and the loci of points $\mathbf{x}$ on a parabola.}
\label{Geometric Locus of Conic Section}
\end{figure}

\subsection{Locus Equation of Conic Generatrices}

Using the law of cosines, it follows that a locus of points $\mathbf{x}$ on an ellipse, hyperbola, or parabola is determined by the vector equation
\begin{equation}
\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert^{2}=\left\Vert \mathbf{x}\right\Vert^{2}+\left\Vert \boldsymbol{\nu}\right\Vert^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta\text{,}
\label{Equation of Generatrices}
\end{equation}
where $\boldsymbol{\nu}$ is the major eigenaxis of a conic section $c$, $\mathbf{x}$ is an arbitrary point on $c$, and $\theta$ is the angle between $\boldsymbol{\nu}$ and $\mathbf{x}$. Given the geometric and topological relationships depicted in Fig.
\ref{Geometric Locus of Conic Section}, it follows that the uniform geometric property exhibited by a locus of points on a conic section $c$ is determined by the vector equation
\begin{equation}
\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert =e\times\left\Vert \mathbf{x}\right\Vert \cos\theta\text{,}
\label{Geometric Property of Conics}
\end{equation}
where the signed magnitude $\left\Vert \mathbf{x}\right\Vert \cos\theta$ of the vector projection of $\mathbf{x}$ onto the major axis $\boldsymbol{\nu}$, scaled by $e$, specifies the distance between a point $\mathbf{x}$ on a given conic $c$ and its directrix $D$.

\subsection{Fundamental Equation of Conic Generatrices}

Substitution of Eq. (\ref{Geometric Property of Conics}) into Eq. (\ref{Equation of Generatrices}) produces the vector equation of a conic section $c$
\[
e^{2}\left\Vert \mathbf{x}\right\Vert^{2}\cos^{2}\theta=\left\Vert \mathbf{x}\right\Vert^{2}+\left\Vert \boldsymbol{\nu}\right\Vert^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta
\]
which reduces to
\begin{equation}
2\mathbf{x}^{T}\boldsymbol{\nu}+\left( e^{2}\cos^{2}\theta-1\right) \left\Vert \mathbf{x}\right\Vert^{2}=\left\Vert \boldsymbol{\nu}\right\Vert^{2}\text{,}
\label{Vector Equation of a Conic}
\end{equation}
where $\boldsymbol{\nu}$ is the major axis of a conic section $c$, $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ is the eigenenergy exhibited by the major axis $\boldsymbol{\nu}$, $\mathbf{x}$ is a point on a conic $c$, $\left\Vert \mathbf{x}\right\Vert^{2}$ is the energy exhibited by the vector $\mathbf{x}$, $e$ is the eccentricity of the conic $c$, and $\theta$ is the angle between the vector $\mathbf{x}$ and the major axis $\boldsymbol{\nu}$ of the conic $c$. It follows that any point $\mathbf{x}$ on a conic section $c$ is the endpoint of the locus of a vector $\mathbf{x}$, such that the energy $\left\Vert \mathbf{x}\right\Vert^{2}$ exhibited by $\mathbf{x}$ is scaled by $\left( e^{2}\cos^{2}\theta-1\right)$.

Equations (\ref{Equation of Generatrices}), (\ref{Geometric Property of Conics}), and (\ref{Vector Equation of a Conic}) are readily generalized to $d$-dimensional ellipses, hyperbolas, and parabolas by letting
\[
\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}, & \cdots, & \nu_{d}\end{pmatrix}^{T}
\]
and
\[
\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}, & \cdots, & x_{d}\end{pmatrix}^{T}\text{.}
\]
Equation (\ref{Vector Equation of a Conic}) and the system of vector and topological relationships depicted in Fig. \ref{Geometric Locus of Conic Section} jointly indicate that all of the points on a $d$-dimensional ellipse, hyperbola, or parabola exclusively reference the major axis $\boldsymbol{\nu}$ of the second degree locus. I will now demonstrate that all of the points $\mathbf{x}$ on a $d$-dimensional ellipse, hyperbola, or parabola are essentially characterized by the geometric locus of its major axis $\boldsymbol{\nu}$.

\subsubsection{Eigen-transformed Coordinates}

Given Eq.
(\ref{Vector Equation of a Conic}) and assuming that $\left\Vert \boldsymbol{\nu}\right\Vert \neq0$, it follows that a locus of points on a $d$-dimensional ellipse, hyperbola, or parabola is determined by the vector equation
\begin{equation}
\frac{2\mathbf{x}^{T}\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert}+\frac{\left( e^{2}\cos^{2}\theta-1\right) \left\Vert \mathbf{x}\right\Vert^{2}}{\left\Vert \boldsymbol{\nu}\right\Vert}=\left\Vert \boldsymbol{\nu}\right\Vert\text{,}
\label{Normal Form Second-order Locus}
\end{equation}
where the major axis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert$ has length $1$ and points in the direction of the principal eigenaxis $\boldsymbol{\nu}$. Given Eq. (\ref{Normal Form Second-order Locus}), it follows that all of the points on a $d$-dimensional ellipse, hyperbola, or parabola explicitly and exclusively reference the major axis $\boldsymbol{\nu}$ of the second degree locus.

I will now use Eq. (\ref{Normal Form Second-order Locus}) to devise a coordinate form locus equation which I will use to identify a uniform property exhibited by any point on a $d$-dimensional ellipse, hyperbola, or parabola. Let $\mathbf{u}_{\boldsymbol{\nu}}$ denote the unit major eigenaxis $\frac{\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert}$ in Eq. (\ref{Normal Form Second-order Locus}) and express $\mathbf{u}_{\boldsymbol{\nu}}$ in terms of the orthonormal basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right),\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\}
\]
so that
\[
\mathbf{u}_{\boldsymbol{\nu}}=\cos\alpha_{1}\mathbf{e}_{1}+\cos\alpha_{2}\mathbf{e}_{2}+\cdots+\cos\alpha_{d}\mathbf{e}_{d}\text{,}
\]
where $\cos\alpha_{i}$ are the direction cosines between $\mathbf{u}_{\boldsymbol{\nu}}$ and $\mathbf{e}_{i}$. Each $\cos\alpha_{i}$ is the $i^{\text{th}}$ component of the unit major eigenaxis $\mathbf{u}_{\boldsymbol{\nu}}$ along the coordinate axis $\mathbf{e}_{i}$. Projection of a vector $\mathbf{x}$ onto $\mathbf{u}_{\boldsymbol{\nu}}$ transforms the coordinates $\left( x_{1},\ldots,x_{d}\right)$ of $\mathbf{x}$ by $\left( \cos\alpha_{1}x_{1},\ldots,\cos\alpha_{d}x_{d}\right)$. Substitution of the expression for $\mathbf{u}_{\boldsymbol{\nu}}$ into Eq. (\ref{Normal Form Second-order Locus}) produces a coordinate form locus equation
\begin{equation}
2\begin{pmatrix} x_{1}, & \cdots, & x_{d}\end{pmatrix}\begin{pmatrix} \cos\alpha_{1}, & \cdots, & \cos\alpha_{d}\end{pmatrix}^{T}+\frac{\left( e^{2}\cos^{2}\theta-1\right)}{\left\Vert \boldsymbol{\nu}\right\Vert}\sum\nolimits_{i=1}^{d}x_{i}^{2}=\left\Vert \boldsymbol{\nu}\right\Vert
\label{Eigen-Coordinate Equation Conic Section}
\end{equation}
which is satisfied by the transformed coordinates $\left( \cos\alpha_{1}x_{1},\ldots,\cos\alpha_{d}x_{d}\right)$ of all of the points $\mathbf{x}$ on the geometric locus of a $d$-dimensional ellipse, hyperbola, or parabola.
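Before proceeding, Eq. (\ref{Vector Equation of a Conic}) can be spot-checked numerically. The following sketch (a hypothetical parabola with $e=1$, major axis $\boldsymbol{\nu}=(0,1)^{T}$, and directrix through the origin, chosen only for illustration) confirms that points generated by the focus-directrix property satisfy the vector equation:

\begin{verbatim}
import numpy as np

# Hypothetical parabola: focus at the endpoint of nu = (0, 1)^T and
# directrix through the origin perpendicular to nu (the x-axis), so that
# ||x - nu|| = e * ||x|| cos(theta) with e = 1.
nu = np.array([0.0, 1.0])
e = 1.0
for x1 in (-2.0, 0.5, 3.0):
    x2 = (x1 ** 2 + 1.0) / 2.0   # equidistant from focus and directrix
    x = np.array([x1, x2])
    cos_theta = (x @ nu) / (np.linalg.norm(x) * np.linalg.norm(nu))
    lhs = 2 * (x @ nu) + (e ** 2 * cos_theta ** 2 - 1) * (x @ x)
    assert np.isclose(lhs, nu @ nu)   # equals the eigenenergy ||nu||^2
\end{verbatim}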
It follows that all of the points $\mathbf{x}$ on a $d$-dimensional ellipse, hyperbola, or parabola possess a characteristic set of coordinates, such that the inner product of each vector $\mathbf{x}$ with the unit major eigenaxis $\mathbf{u}_{\boldsymbol{\nu}}$ of a given locus satisfies the equation
\[
\sum\nolimits_{i=1}^{d}\cos\alpha_{i}x_{i}=\frac{1}{2}\left\Vert \boldsymbol{\nu}\right\Vert -\frac{\left( e^{2}\cos^{2}\theta-1\right)}{2\left\Vert \boldsymbol{\nu}\right\Vert}\left\Vert \mathbf{x}\right\Vert^{2}\text{,}
\]
where the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of $\boldsymbol{\nu}$ is scaled by $\frac{1}{2}$ and the energy $\left\Vert \mathbf{x}\right\Vert^{2}$ of $\mathbf{x}$ is scaled by $\frac{\left( e^{2}\cos^{2}\theta-1\right)}{2\left\Vert \boldsymbol{\nu}\right\Vert}$. Therefore, all of the points $\mathbf{x}$ on any given $d$-dimensional ellipse, hyperbola, or parabola possess a characteristic set of transformed coordinates, such that the sum of those coordinates satisfies half the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the major eigenaxis $\boldsymbol{\nu}$ minus the energy $\left\Vert \mathbf{x}\right\Vert^{2}$ of $\mathbf{x}$ scaled by $\frac{\left( e^{2}\cos^{2}\theta-1\right)}{2\left\Vert \boldsymbol{\nu}\right\Vert}$. Thus, the uniform properties exhibited by all of the points $\mathbf{x}$ on a $d$-dimensional ellipse, hyperbola, or parabola are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where the statistic $\sum\nolimits_{i=1}^{d}\cos\alpha_{i}x_{i}$ varies with the eccentricity $e$ of the second degree locus.

\subsection{Properties of Major Axes}

All of the major axes of conic sections and quadratic surfaces are major intrinsic axes which may also coincide as exclusive, fixed reference axes. Moreover, major axes are principal eigenaxes of quadratic forms \citep{Hewson2009}. Recall that major axes have been referred to as ``the axis of the curve'' \citep{Nichols1893}, ``the principal axis,'' and ``the principal axis of the curve'' \citep{Tanner1898}. I will now show that the principal eigenaxis of any $d$-dimensional ellipse, hyperbola, or parabola satisfies the quadratic locus in terms of its eigenenergy. I will also demonstrate that any given point on a quadratic locus satisfies the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus.

\subsection{Characteristic Eigenenergy}

Take the principal eigenaxis $\boldsymbol{\nu}$ of any given $d$-dimensional ellipse, hyperbola, or parabola. Accordingly, the principal eigenaxis $\boldsymbol{\nu}$ satisfies Eq. (\ref{Vector Equation of a Conic}). Using Eq.
(\ref{Vector Equation of a Conic}), it follows that the principal eigenaxis $\boldsymbol{\nu}$ satisfies the quadratic locus in terms of its squared length $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$
\begin{equation}
2\mathbf{x}^{T}\boldsymbol{\nu}+\left( e^{2}\cos^{2}\theta-1\right) \left\Vert \mathbf{x}\right\Vert^{2}=\left\Vert \boldsymbol{\nu}\right\Vert^{2}
\label{Characteristic Eigenenergy of Quadratic}
\end{equation}
so that the principal eigenaxis $\boldsymbol{\nu}$ and any given vector $\mathbf{x}$ which satisfy the vector expression
\[
2\mathbf{x}^{T}\boldsymbol{\nu}+\left( e^{2}\cos^{2}\theta-1\right) \left\Vert \mathbf{x}\right\Vert^{2}
\]
equally satisfy the vector expression
\[
\left\Vert \boldsymbol{\nu}\right\Vert^{2}=\sum\nolimits_{i=1}^{d}\nu_{i\ast}^{2}\text{,}
\]
where $\nu_{i\ast}$ are the eigen-coordinates of $\boldsymbol{\nu}$ and $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ is the eigenenergy exhibited by $\boldsymbol{\nu}$. It follows that the principal eigenaxis $\boldsymbol{\nu}$ of any given $d$-dimensional ellipse, hyperbola, or parabola and any given point $\mathbf{x}$ on the quadratic locus satisfy the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus. Therefore, the principal eigenaxis $\boldsymbol{\nu}$ of any given $d$-dimensional ellipse, hyperbola, or parabola exhibits a characteristic eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ that is unique for the quadratic locus. It follows that the locus of any given $d$-dimensional ellipse, hyperbola, or parabola is determined by the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ of its principal eigenaxis $\boldsymbol{\nu}$.

\subsection{Inherent Property of $d$-Dimensional Conics}

Let a quadratic locus be a $d$-dimensional ellipse, hyperbola, or parabola. It has been shown that the principal eigenaxis $\boldsymbol{\nu}$ of any given quadratic locus satisfies the quadratic locus in terms of its eigenenergy. It has also been shown that any given point on any given quadratic locus satisfies the eigenenergy exhibited by the principal eigenaxis of the quadratic locus. Thereby, it has been demonstrated that the locus of any given $d$-dimensional ellipse, hyperbola, or parabola is determined by the eigenenergy of its principal eigenaxis. Therefore, the inherent property of a quadratic locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of the principal eigenaxis $\boldsymbol{\nu}$.

\subsection{Overall Conclusions for $d$-Dimensional Conics}

Let a quadratic locus be a $d$-dimensional ellipse, hyperbola, or parabola. It has been shown that the uniform properties exhibited by all of the points $\mathbf{x}$ on any given quadratic locus are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where each point $\mathbf{x}$ on the quadratic locus and the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus satisfy the quadratic locus in terms of the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by its principal eigenaxis $\boldsymbol{\nu}$.
Thereby, it is concluded that the vector components of a principal eigenaxis specify all forms of conic curves and quadratic surfaces, and that all of the points $\mathbf{x}$ on any given quadratic locus explicitly and exclusively reference the principal eigenaxis $\boldsymbol{\nu}$ in Eqs.\ (\ref{Vector Equation of a Conic}) and (\ref{Normal Form Second-order Locus}). Therefore, the important generalizations for a quadratic locus are specified by the eigenenergy exhibited by the locus of its principal eigenaxis. Moreover, a principal eigenaxis is an exclusive and distinctive coordinate axis that specifies all of the points on a quadratic locus. Accordingly, a principal eigenaxis is the focus of a quadratic locus. Thereby, it is concluded that the principal eigenaxis of a quadratic locus provides an elegant, general eigen-coordinate system for a quadratic locus of points.

Circles and lines are considered special cases of conic sections \citep{Zwillinger1996}. In the next section, I will devise fundamental locus equations that determine loci of circles and $d$-dimensional spheres. I will use these equations to identify uniform geometric properties exhibited by all of the points on a spherically symmetric, quadratic locus. I will also devise a general eigen-coordinate system that determines all forms of spherically symmetric, quadratic loci. By way of motivation, I will first define the eccentricity of lines, planes, and hyperplanes.

\subsection{Eccentricity of Lines, Planes, and Hyperplanes}

Let $e$ denote the eccentricity of any given conic section $c$. Returning to Eq. (\ref{Geometric Property of Conics}), it has been shown that the uniform geometric property exhibited by a locus of points on a conic section $c$ is determined by the vector equation
\[
\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert =e\times\left\Vert \mathbf{x}\right\Vert \cos\theta\text{,}
\]
where $e$ is the eccentricity of a conic section $c$, and the signed magnitude $\left\Vert \mathbf{x}\right\Vert \cos\theta$ of the vector projection of $\mathbf{x}$ onto the major axis $\boldsymbol{\nu}$ specifies the distance between a point $\mathbf{x}$ on a given conic $c$ and its directrix $D$. Given the vector equation of a line $l$ in Eq. (\ref{Linear Locus Functional})
\[
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi\text{,}
\]
where $\phi$ is the acute angle between the vectors $\boldsymbol{\nu}$ and $\mathbf{x}$, and using the inner product relationship in Eq. (\ref{Locus Statistics in Hilbert Space})
\[
\mathbf{x}^{T}\boldsymbol{\nu}=\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert\text{,}
\]
it follows that the eccentricity $e$ of a line $l$ is determined by the vector equation
\begin{align*}
\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert & =e\times\left\Vert \mathbf{x}\right\Vert \cos\phi\\
& =\left\Vert \boldsymbol{\nu}\right\Vert \left\Vert \mathbf{x}\right\Vert \cos\phi
\end{align*}
so that the eccentricity $e$ of a line $l$ is the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the principal eigenaxis $\boldsymbol{\nu}$ of the line $l$. Therefore, the eccentricity $e$ of a line, plane, or hyperplane is the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the principal eigenaxis $\boldsymbol{\nu}$ of the linear locus
\[
e=\left\Vert \boldsymbol{\nu}\right\Vert\text{.}
\]
It follows that the principal eigenaxis $\boldsymbol{\nu}$ of a linear locus is the \emph{focus} of the linear locus.
I will now devise fundamental locus equations that determine loci of circles and $d$-dimensional spheres.

\subsection{Eigen-Centric Equations of Circles and Spheres}

Circles and lines are special cases of parabolas, hyperbolas, and ellipses. A circle is a locus of points $\left( x,y\right)$, all of which are at the same distance, the radius $r$, from a fixed point $\left( x_{0},y_{0}\right)$, the center. Using Eq. (\ref{Coordinate Equation of Circle}), the algebraic equation for the geometric locus of a circle in Cartesian coordinates is
\[
\left( x-x_{0}\right)^{2}+\left( y-y_{0}\right)^{2}=r^{2}\text{.}
\]
I will now define the eccentricity of circles and spheres.

\subsection{Eccentricity of Circles and Spheres}

A circle or $d$-dimensional sphere is considered a special case of a $d$-dimensional ellipse, where the eccentricity $e\approx0$ in the limit $e\rightarrow0$ \citep{Zwillinger1996}. However, $e$ \emph{cannot be zero}. Indeed, if $e\approx0$, then
\begin{align*}
\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert & =e\times\left\Vert \mathbf{x}\right\Vert \cos\theta\\
& \approx0
\end{align*}
which indicates that the radius $r\approx0$, because $\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert \approx0$ specifies that $\left\Vert \mathbf{r}\right\Vert \approx0$. Instead, because the distance $\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert =\left\Vert \mathbf{r}\right\Vert$ is fixed, the eccentricity $e$ for a circle or a sphere \emph{varies} with $\left\Vert \mathbf{x}\right\Vert$ \emph{and} $\cos\theta$
\[
e=\frac{\left\Vert \mathbf{r}\right\Vert}{\left\Vert \mathbf{x}\right\Vert \cos\theta}\text{,}
\]
where the length $\left\Vert \mathbf{r}\right\Vert$ of the radius $\mathbf{r}$ is fixed.

Therefore, consider again the generatrix of a conic section $c$, where a point $P_{\mathbf{x}}$ moves so that its distance from a fixed point $P_{\mathbf{f}}$ bears a constant ratio to its distance from a fixed line $D$. The generatrix of a circle differs in the following manner. The generatrix of a circle involves a point $P_{\mathbf{x}}$ which moves so that its distance from a fixed point $P_{\mathbf{f}}$ is constant and its distance from a fixed line $D$ varies with the locus of the point $P_{\mathbf{x}}$.

So, take a fixed point $P_{\mathbf{f}}$ in the Euclidean plane and a line $D$ not going through $P_{\mathbf{f}}$. The set of points $P_{\mathbf{x}}$ such that the distance from $P_{\mathbf{x}}$ to $P_{\mathbf{f}}$ is \emph{constant} and the distance from $P_{\mathbf{x}}$ to the fixed line $D$ \emph{varies} with the \emph{locus} of $P_{\mathbf{x}}$ is a circle. For any given circle, the point $P_{\mathbf{f}}$ is called the focus, the line $D$ is called the directrix, and the fixed distance between the focus and a point on the circle is called the radius. The definition of a circle is readily generalized to $d$-dimensional spheres by taking a fixed point $P_{\mathbf{f}}$ in $\mathbb{R}^{d}$ and a $\left( d-1\right)$-dimensional hyperplane not going through $P_{\mathbf{f}}$. I will now devise the vector equation of circle generatrices.

\subsection{Vector Equation of Circle Generatrices}

Consider the generatrix of a circle, where a point $P_{\mathbf{x}}$ moves so that its distance from a fixed point $P_{\mathbf{f}}$ is constant and its distance from a fixed line $D$ varies with the locus of $P_{\mathbf{x}}$. Accordingly, take any given circle.

\subsubsection{Assumptions}

Denote the major axis of the circle by $\boldsymbol{\nu}$. Let the center $P_{\mathbf{f}}$ of the circle be the endpoint of the major axis $\boldsymbol{\nu}$ of the circle.
It follows that the center $P_{\mathbf{f}}$ and the major axis $\boldsymbol{\nu}$ of the circle describe the same geometric location, where the distance between the center $P_{\mathbf{f}}$ and the directrix $D$ is the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the major axis $\boldsymbol{\nu}$, i.e., $\boldsymbol{\nu}\perp D$. Let any given point $P_{\mathbf{x}}$ on the circle be the endpoint of a vector $\mathbf{x}$. It follows that any given point $P_{\mathbf{x}}$ and correlated vector $\mathbf{x}$ describe the same geometric location. The analysis that follows will denote both points and vectors by $\mathbf{x}$.

Let $\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}\end{pmatrix}^{T}$ be the major axis of a circle in the real Euclidean plane. Let the vector $\mathbf{r}\triangleq\begin{pmatrix} r_{1}, & r_{2}\end{pmatrix}^{T}$ be the radius of the circle. In addition, consider an arbitrary vector $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}\end{pmatrix}^{T}$ whose endpoint is on the circle. Let $\theta$ be the angle between $\boldsymbol{\nu}$ and $\mathbf{x}$; let $\phi$ be the angle between $\boldsymbol{\nu}$ and $\mathbf{r}$.

Using all of the above assumptions, it follows that a circle is a set of points $\mathbf{x}$ for which the distance $\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert$ between any given point $\mathbf{x}$ and the center $\boldsymbol{\nu}$ is equal to the length $\left\Vert \mathbf{r}\right\Vert$ of the radius $\mathbf{r}$ of the circle, where the radius $\mathbf{r}$ of the circle satisfies the expression $\mathbf{r}=\mathbf{x}-\boldsymbol{\nu}$. Thereby, for any given circle, the focus (center) of the circle is determined by the locus of its major axis $\boldsymbol{\nu}$, and the diameter of the circle is determined by its radius $\mathbf{r}$. Accordingly, the geometric locus of any given circle involves a rich system of vector and topological relationships between the locus of its major axis $\boldsymbol{\nu}$, the loci of its radius $\mathbf{r}$, and the loci of points $\mathbf{x}$ on the circle. Figure \ref{Geometric Locus of Circle} depicts the system of vector and topological relationships between the locus of a major axis $\boldsymbol{\nu}$, the loci of a radius $\mathbf{r}$, and the loci of points $\mathbf{x}$ on a circle.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure15.png}}
\caption{The geometric locus of a circle involves a rich system of vector and topological relationships between the locus of a major axis $\boldsymbol{\nu}$, the loci of a radius $\mathbf{r}$, and the loci of points $\mathbf{x}$ on a circle. The distance between the focus $\boldsymbol{\nu}$ of a circle and any point $\mathbf{x}$ on the circle is determined by the length $\left\Vert \mathbf{r}\right\Vert$ of the radius $\mathbf{r}$.}
\label{Geometric Locus of Circle}
\end{figure}
\subsection{Locus Equation of Circle Generatrices}

Using the law of cosines, it follows that the geometric locus of points $\mathbf{x}$ on a circle is determined by the vector equation
\[
\left\Vert \mathbf{r}\right\Vert^{2}=\left\Vert \mathbf{x}\right\Vert^{2}+\left\Vert \boldsymbol{\nu}\right\Vert^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta\text{,}
\]
where $\theta$ is the angle between $\mathbf{x}$ and $\boldsymbol{\nu}$, the radius $\mathbf{r}$ of the circle has a fixed length $\left\Vert \mathbf{r}\right\Vert$ and a constant energy $\left\Vert \mathbf{r}\right\Vert^{2}$, and the major axis $\boldsymbol{\nu}$ of the circle has a fixed length $\left\Vert \boldsymbol{\nu}\right\Vert$ and a constant energy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$. The above equation is readily generalized to $d$-dimensional spheres by letting $\boldsymbol{\nu}\triangleq\begin{pmatrix} \nu_{1}, & \nu_{2}, & \cdots, & \nu_{d}\end{pmatrix}^{T}$, $\mathbf{r}\triangleq\begin{pmatrix} r_{1}, & r_{2}, & \cdots, & r_{d}\end{pmatrix}^{T}$, and $\mathbf{x}\triangleq\begin{pmatrix} x_{1}, & x_{2}, & \cdots, & x_{d}\end{pmatrix}^{T}$.

\subsection{Fundamental Equation of Circle Generatrices}

Using the geometric properties of the circle depicted in Fig. \ref{Geometric Locus of Circle}, it follows that any vector $\mathbf{x}$ whose endpoint is on a circle or a $d$-dimensional sphere satisfies the equation $\mathbf{x}=\boldsymbol{\nu}+\mathbf{r}$. Substituting this equation for $\mathbf{x}$ into the above equation produces a vector equation that determines the geometric loci of circles or $d$-dimensional spheres
\begin{align*}
\left\Vert \mathbf{r}\right\Vert^{2} & =\left\Vert \boldsymbol{\nu}+\mathbf{r}\right\Vert^{2}+\left\Vert \boldsymbol{\nu}\right\Vert^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta\\
& =2\left\Vert \boldsymbol{\nu}\right\Vert^{2}+2\left\Vert \mathbf{r}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi+\left\Vert \mathbf{r}\right\Vert^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta
\end{align*}
which reduces to
\[
\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta-\left\Vert \mathbf{r}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert^{2}\text{.}
\]
Thus, it is concluded that the geometric locus of a circle denoted by $c$ or a $d$-dimensional sphere denoted by $S$ is determined by the vector equation
\begin{equation}
\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta-\left\Vert \mathbf{r}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert^{2}\text{,}
\label{Vector Equation of Circles and Spheres}
\end{equation}
where $\boldsymbol{\nu}$ is the major axis and $\mathbf{r}$ is the radius of $c$ or $S$, $\mathbf{x}$ is a point on $c$ or $S$, $\theta$ is the angle between $\mathbf{x}$ and $\boldsymbol{\nu}$, $\phi$ is the angle between $\mathbf{r}$ and $\boldsymbol{\nu}$, and $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ is the eigenenergy exhibited by the locus of the major axis $\boldsymbol{\nu}$. Figure \ref{Geometric Locus of Circle} and Eq.
(\ref{Vector Equation of Circles and Spheres}) jointly indicate that all of the points on a circle or a $d$-dimensional sphere explicitly reference the major axis $\boldsymbol{\nu}$. I will now demonstrate that all of the points $\mathbf{x}$ on a circle or a $d$-dimensional sphere are essentially characterized by the geometric locus of its major axis $\boldsymbol{\nu}$.

\subsubsection{Eigen-transformed Coordinates}

Given Eq. (\ref{Vector Equation of Circles and Spheres}) and assuming that $\left\Vert \boldsymbol{\nu}\right\Vert \neq0$, a locus of points on a circle or a $d$-dimensional sphere is determined by the vector equation
\begin{equation}
\frac{\left( \mathbf{x}-\mathbf{r}\right)^{T}\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert}=\left\Vert \boldsymbol{\nu}\right\Vert\text{,}
\label{Normal Form Circle and Sphere}
\end{equation}
where the major eigenaxis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert$ has length $1$ and points in the direction of the principal eigenaxis $\boldsymbol{\nu}$. I will now use Eq. (\ref{Normal Form Circle and Sphere}) to devise a coordinate form locus equation which I will use to identify a uniform property exhibited by any point on a circle or a $d$-dimensional sphere.

Let $\mathbf{u}_{\boldsymbol{\nu}}$ denote the unit major eigenaxis $\frac{\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert}$ in Eq. (\ref{Normal Form Circle and Sphere}) and express $\mathbf{u}_{\boldsymbol{\nu}}$ in terms of the orthonormal basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right),\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\}
\]
so that
\[
\mathbf{u}_{\boldsymbol{\nu}}=\cos\alpha_{1}\mathbf{e}_{1}+\cos\alpha_{2}\mathbf{e}_{2}+\cdots+\cos\alpha_{d}\mathbf{e}_{d}\text{,}
\]
where $\cos\alpha_{i}$ are the direction cosines between $\mathbf{u}_{\boldsymbol{\nu}}$ and $\mathbf{e}_{i}$. Each $\cos\alpha_{i}$ is the $i^{\text{th}}$ component of the unit major eigenaxis $\mathbf{u}_{\boldsymbol{\nu}}$ along the coordinate axis $\mathbf{e}_{i}$. Projection of a vector $\mathbf{x}$ onto $\mathbf{u}_{\boldsymbol{\nu}}$ transforms the coordinates $\left( x_{1},\ldots,x_{d}\right)$ of $\mathbf{x}$ by $\left( \cos\alpha_{1}x_{1},\ldots,\cos\alpha_{d}x_{d}\right)$. Substitution of the expression for $\mathbf{u}_{\boldsymbol{\nu}}$ into Eq. (\ref{Normal Form Circle and Sphere}) produces a coordinate form locus equation
\begin{equation}
\begin{pmatrix} x_{1}-r_{1}, & \cdots, & x_{d}-r_{d}\end{pmatrix}\begin{pmatrix} \cos\alpha_{1}, & \cdots, & \cos\alpha_{d}\end{pmatrix}^{T}=\left\Vert \boldsymbol{\nu}\right\Vert\text{,}
\label{Eigen-Coordinate Equation Circle}
\end{equation}
which is satisfied by the transformed coordinates $\left( \cos\alpha_{1}\left( x_{1}-r_{1}\right),\ldots,\cos\alpha_{d}\left( x_{d}-r_{d}\right) \right)$ of all of the points $\mathbf{x}$ on the geometric locus of a circle or a $d$-dimensional sphere.
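The circle equations above admit the same kind of numerical spot check. In the sketch below (a hypothetical circle, chosen only for illustration), every point $\mathbf{x}=\boldsymbol{\nu}+\mathbf{r}$ satisfies Eqs.\ (\ref{Vector Equation of Circles and Spheres}) and (\ref{Normal Form Circle and Sphere}):

\begin{verbatim}
import numpy as np

# Hypothetical circle: center at the endpoint of nu, radius length 2.
nu = np.array([3.0, 4.0])
radius = 2.0
u = nu / np.linalg.norm(nu)
for angle in (0.1, 1.3, 2.9):
    r = radius * np.array([np.cos(angle), np.sin(angle)])
    x = nu + r                    # a point on the circle
    # ||x||||nu||cos(theta) - ||r||||nu||cos(phi) = x.nu - r.nu = ||nu||^2
    assert np.isclose(x @ nu - r @ nu, nu @ nu)
    # (x - r)^T u = ||nu||  (normal form)
    assert np.isclose((x - r) @ u, np.linalg.norm(nu))
\end{verbatim}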
It follows that all of the points $\mathbf{x}$ on any given circle or $d$-dimensional sphere possess a characteristic set of transformed coordinates, such that the sum of those coordinates satisfies the length $\left\Vert \boldsymbol{\nu}\right\Vert$ of the principal eigenaxis $\boldsymbol{\nu}$ of a spherically symmetric, quadratic locus
\begin{equation}
\sum\nolimits_{i=1}^{d}\cos\alpha_{i}\left( x_{i}-r_{i}\right) =\left\Vert \boldsymbol{\nu}\right\Vert\text{.}
\label{Geometric Property Exhibited by Points on Circles}
\end{equation}
Therefore, it is concluded that the uniform properties exhibited by the transformed points $\mathbf{x}-\mathbf{r}$ on any given circle or $d$-dimensional sphere are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$. Thus, all of the points on any circle or $d$-dimensional sphere explicitly reference the major axis $\boldsymbol{\nu}$ of the spherically symmetric, quadratic locus. Thereby, the principal eigenaxis denoted by $\boldsymbol{\nu}$ in Eqs.\ (\ref{Vector Equation of Circles and Spheres}) and (\ref{Normal Form Circle and Sphere}) is an exclusive and distinctive coordinate axis that inherently characterizes all of the points on a circle or a $d$-dimensional sphere.

I will now show that the principal eigenaxis of any given circle or $d$-dimensional sphere satisfies the locus in terms of its eigenenergy. I will also demonstrate that any given point on a spherically symmetric, quadratic locus satisfies the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus.

\subsection{Characteristic Eigenenergy}

Take the principal eigenaxis $\boldsymbol{\nu}$ of any given circle or $d$-dimensional sphere. Accordingly, the principal eigenaxis $\boldsymbol{\nu}$ satisfies Eq. (\ref{Vector Equation of Circles and Spheres}). Using Eq.
(\ref{Vector Equation of Circles and Spheres})
\[
\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta-\left\Vert \mathbf{r}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert^{2}
\]
and the vector relationship $\mathbf{r}=\mathbf{x}-\boldsymbol{\nu}$, it follows that the principal eigenaxis $\boldsymbol{\nu}$ satisfies the circle or $d$-dimensional sphere in terms of its squared length $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$
\begin{equation}
\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta-\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi=\left\Vert \boldsymbol{\nu}\right\Vert^{2}
\label{Characteristic Eigenenergy of Quadratic 2}
\end{equation}
so that the principal eigenaxis $\boldsymbol{\nu}$ and any given vector $\mathbf{x}$ which satisfy the vector expression
\[
\left\Vert \mathbf{x}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\theta-\left\Vert \mathbf{x}-\boldsymbol{\nu}\right\Vert \left\Vert \boldsymbol{\nu}\right\Vert \cos\phi
\]
equally satisfy the vector expression
\[
\left\Vert \boldsymbol{\nu}\right\Vert^{2}=\sum\nolimits_{i=1}^{d}\nu_{i\ast}^{2}\text{,}
\]
where $\nu_{i\ast}$ are the eigen-coordinates of $\boldsymbol{\nu}$ and $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ is the eigenenergy exhibited by $\boldsymbol{\nu}$. It follows that the principal eigenaxis $\boldsymbol{\nu}$ of any given circle or $d$-dimensional sphere and any given point $\mathbf{x}$ on the spherically symmetric, quadratic locus satisfy the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus. Therefore, the principal eigenaxis $\boldsymbol{\nu}$ of any given circle or $d$-dimensional sphere exhibits a characteristic eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ that is unique for the spherically symmetric, quadratic locus. It follows that the locus of any given circle or $d$-dimensional sphere is determined by the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of its principal eigenaxis $\boldsymbol{\nu}$.

\subsection{Inherent Property of Circles and Spheres}

Let a quadratic locus be a circle or a $d$-dimensional sphere. It has been shown that the principal eigenaxis $\boldsymbol{\nu}$ of any given spherically symmetric, quadratic locus satisfies the quadratic locus in terms of its eigenenergy. It has also been shown that any given point on any given spherically symmetric, quadratic locus satisfies the eigenenergy exhibited by the principal eigenaxis of the quadratic locus. Thereby, it has been demonstrated that the locus of any given circle or $d$-dimensional sphere is determined by the eigenenergy of its principal eigenaxis. Therefore, the inherent property of a spherically symmetric, quadratic locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of the principal eigenaxis $\boldsymbol{\nu}$.

\subsection{Overall Conclusions for Circles and Spheres}

Let a quadratic locus be a circle or a $d$-dimensional sphere.
It has been shown that the uniform properties exhibited by all of the points $\mathbf{x}$ on any given spherically symmetric, quadratic locus are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where each point $\mathbf{x}$ on the quadratic locus and the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus satisfy the quadratic locus in terms of the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by its principal eigenaxis $\boldsymbol{\nu}$. Thereby, it is concluded that the vector components of a principal eigenaxis specify all forms of spherically symmetric, conic curves and quadratic surfaces, and that all of the points $\mathbf{x}$ on any given quadratic locus explicitly and exclusively reference the principal eigenaxis $\boldsymbol{\nu}$ in Eqs.\ (\ref{Vector Equation of Circles and Spheres}) and (\ref{Normal Form Circle and Sphere}). Therefore, the important generalizations for a spherically symmetric, quadratic locus are specified by the eigenenergy exhibited by the locus of its principal eigenaxis. Moreover, a principal eigenaxis is an exclusive and distinctive coordinate axis that specifies all of the points on a spherically symmetric, quadratic locus. Accordingly, a principal eigenaxis is the focus of a spherically symmetric, quadratic locus. Thereby, it is concluded that the principal eigenaxis of a spherically symmetric, quadratic locus provides an elegant, general eigen-coordinate system for a spherically symmetric, quadratic locus of points. I will now summarize my primary findings for quadratic loci.

\subsection{Inherent Property of Quadratic Loci}

It has been shown that the principal eigenaxis of any given quadratic locus satisfies the quadratic curve or surface in terms of its eigenenergy. It has also been shown that any given point on any given quadratic curve or surface satisfies the eigenenergy exhibited by the principal eigenaxis of the quadratic locus. Thereby, it has been demonstrated that the locus of any given quadratic curve or surface is determined by the eigenenergy of its principal eigenaxis. Therefore, the inherent property of a quadratic locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of the principal eigenaxis $\boldsymbol{\nu}$, where $\boldsymbol{\nu}$ is the focus of the quadratic locus.

\subsection{Overall Conclusions for Quadratic Loci}

It has been shown that the uniform properties exhibited by all of the points $\mathbf{x}$ on any given quadratic locus are specified by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where each point $\mathbf{x}$ on the quadratic locus and the principal eigenaxis $\boldsymbol{\nu}$ of the quadratic locus satisfy the quadratic locus in terms of the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by its principal eigenaxis $\boldsymbol{\nu}$. Thereby, it is concluded that the vector components of a principal eigenaxis specify all forms of quadratic curves and surfaces, and that all of the points $\mathbf{x}$ on any given quadratic curve or surface explicitly and exclusively reference the principal eigenaxis $\boldsymbol{\nu}$ in Eqs.\ (\ref{Vector Equation of a Conic}), (\ref{Normal Form Second-order Locus}), (\ref{Vector Equation of Circles and Spheres}), and (\ref{Normal Form Circle and Sphere}). Therefore, the important generalizations for a quadratic locus are specified by the eigenenergy exhibited by the locus of its principal eigenaxis.
Moreover, a principal eigenaxis is an exclusive and distinctive coordinate axis that specifies all of the points on a quadratic locus, where the principal eigenaxis is the focus of the quadratic locus. Thereby, it is concluded that the principal eigenaxis of a quadratic locus provides an elegant, general eigen-coordinate system for a quadratic locus of points.

Conic sections and quadratic surfaces involve \emph{first and second degree} point coordinates or vector components. A reproducing kernel Hilbert space is a Hilbert space that specifies reproducing kernels for points or vectors, where reproducing kernels determine first and second degree coordinates or components for points or vectors. In the next section, I will define how a reproducing kernel Hilbert space extends algebraic and topological properties for vectors and second-order distance statistics between the loci of two vectors.

\section{Reproducing Kernel Hilbert Spaces}

A Hilbert space $\mathfrak{H}$ is a reproducing kernel Hilbert space \emph{(}$\mathfrak{rkH}$\emph{)} if the members of $\mathfrak{H}$ are functions $f$ on some set $T$, and if there is a function called a reproducing kernel $K(\mathbf{s},\mathbf{t})$ defined on $T\times T$ such that
\[
K(\cdot,\mathbf{t})\in\mathfrak{H}\text{ for all }\mathbf{t}\in T
\]
and
\[
\left\langle f,K(\cdot,\mathbf{t})\right\rangle =f\left( \mathbf{t}\right) \text{ for all }\mathbf{t}\in T\text{ and }f\in\mathfrak{H}\text{.}
\]
A $\mathfrak{rkH}$ space defined on vectors $\mathbf{q}\in\mathbb{R}^{d}$ has a reproducing kernel $K(\mathbf{s},\mathbf{q})$ such that
\[
K\left( \cdot,\mathbf{q}\right) \in\mathfrak{H}\text{ for all }\mathbf{q}\in\mathbb{R}^{d}
\]
and
\[
\left\langle f\left( \cdot\right),k(\cdot,\mathbf{q})\right\rangle =f\left( \mathbf{q}\right) \text{ for any }\mathbf{q}\in\mathbb{R}^{d}\text{ and any }f\in\mathfrak{H}\left( k\right)\text{.}
\]
So, given a $\mathfrak{rkH}$ space, take any vector $\mathbf{y}\in\mathbb{R}^{d}$. Then there exists a unique vector $k_{\mathbf{y}}\in\mathfrak{H}$ such that for every $f\in\mathfrak{H}$, $f(\mathbf{y})=\left\langle f,k_{\mathbf{y}}\right\rangle$. The function $k_{\mathbf{y}}$ is called the reproducing kernel for the point $\mathbf{y}$. The $2$-variable function defined by $K(\mathbf{x},\mathbf{y})=k_{\mathbf{y}}(\mathbf{x})$ is called the reproducing kernel for $\mathfrak{H}$ \citep{Aronszajn1950,Small1994}. It follows that
\begin{align*}
K(\mathbf{x},\mathbf{y}) & =k_{\mathbf{y}}(\mathbf{x})\\
& =\left\langle k_{\mathbf{y}},k_{\mathbf{x}}\right\rangle\\
& =\left\langle K\left( \cdot,\mathbf{y}\right),K\left( \cdot,\mathbf{x}\right) \right\rangle\text{.}
\end{align*}
Accordingly, a reproducing kernel $K\left( \mathbf{t},\mathbf{s}\right)$ implements the inner product
\[
\left\langle K\left( \cdot,\mathbf{s}\right),K\left( \cdot,\mathbf{t}\right) \right\rangle =K\left( \mathbf{t},\mathbf{s}\right) =K\left( \mathbf{s},\mathbf{t}\right)
\]
of the vectors $\mathbf{s}$ and $\mathbf{t}$ \citep{Aronszajn1950,Small1994}.

\paragraph{Enriched Coordinate Systems}

Given Eq. (\ref{Geometric Locus of Vector}), it follows that the geometric locus of any given vector $\mathbf{y}$ contains first degree point coordinates $y_{i}$ or vector components $y_{i}$.
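To make the preceding definitions concrete, consider a minimal finite-dimensional illustration (my own, assuming the linear kernel $K(\mathbf{s},\mathbf{t})=\mathbf{s}^{T}\mathbf{t}$; this is a standard construction, not the one developed in this text): the functions $f(\mathbf{x})=\mathbf{w}^{T}\mathbf{x}$ on $\mathbb{R}^{d}$ form a $\mathfrak{rkH}$ space in which the representer of a point $\mathbf{y}$ is $k_{\mathbf{y}}(\mathbf{x})=\mathbf{y}^{T}\mathbf{x}$, so that $\left\langle f,k_{\mathbf{y}}\right\rangle =\mathbf{w}^{T}\mathbf{y}=f(\mathbf{y})$:

\begin{verbatim}
import numpy as np

# Linear-kernel rkH space on R^d: functions f(x) = w^T x, inner product
# <f, g> = w_f^T w_g, kernel K(s, t) = s^T t, representer k_y = y.
d = 4
rng = np.random.default_rng(2)
w = rng.normal(size=d)        # coefficient vector of f
y = rng.normal(size=d)        # evaluation point
k_y = y                       # representer of the point y
assert np.isclose(w @ k_y, w @ y)   # <f, k_y> = f(y), reproducing property
assert np.isclose(k_y @ y, y @ y)   # K(y, y) = <k_y, k_y>
\end{verbatim}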
A reproducing kernel $K\left( \cdot,\mathbf{y}\right)$ for a point $\mathbf{y}$ defines an enriched point $P_{k_{\mathbf{y}}}$ in terms of a vector $k_{\mathbf{y}}$ that contains first $y_{i}$, second $y_{i}^{2}$, third $y_{i}^{3}$, and up to $d$ degree point coordinates or vector components. Reproducing kernels extend algebraic and topological properties of the geometric loci of vectors in the manner outlined next.

\subsection{Sinuous Approximations of Vectors}

Topology is the study of geometric properties and spatial relations which are unaffected by the continuous change of shape or size of a figure. Thus, topology deals with geometric properties of figures which are unchanged by continuous transformations. Henri Poincar\'{e} noted that geometric properties of figures would remain true even if figures were copied by a draftsman who grossly changed proportions of figures and \emph{replaced} all \emph{straight lines} by lines more or less \emph{sinuous} \citep{Rapport1963}. It follows that geometric properties of vectors in Hilbert spaces $\mathfrak{H}$ are unchanged by continuous transformations. Therefore, topological properties exhibited by geometric loci of vectors are unchanged if directed line segments of vectors are replaced by sinuous curves. Given this assumption, it follows that directed, straight line segments of vectors are best approximated by second-order curves.

\subsection{Inner Product Statistics for Reproducing Kernels}

Reproducing kernels approximate vectors, which are directed line segments, with continuous curves. Therefore, reproducing kernels satisfy algebraic and topological relationships which are similar to those satisfied by vectors. Using the inner product expression $\mathbf{x}^{T}\mathbf{y}=\left\Vert \mathbf{x}\right\Vert \left\Vert \mathbf{y}\right\Vert \cos\theta$ in Eq. (\ref{Inner Product Expression2}) satisfied by any two vectors $\mathbf{x}$ and $\mathbf{y}$ in Hilbert space, it follows that any two reproducing kernels $k_{\mathbf{s}}(\mathbf{x})$ and $k_{\mathbf{x}}(\mathbf{s})$ for any two points $\mathbf{s}$ and $\mathbf{x}$ in a reproducing kernel Hilbert space satisfy the following relationship
\begin{equation}
K(\mathbf{x},\mathbf{s})=\left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert \cos\varphi\text{,}
\label{Inner Product Relationship for Reproducing Kernels}
\end{equation}
where $K(\mathbf{x},\mathbf{s})=k_{\mathbf{s}}(\mathbf{x})$ is the reproducing kernel for $\mathfrak{H}$, $k_{\mathbf{s}}(\mathbf{x})$ is the reproducing kernel for the point $\mathbf{s}$, $k_{\mathbf{x}}(\mathbf{s})$ is the reproducing kernel for the point $\mathbf{x}$, and $\varphi$ is the angle between the reproducing kernels $k_{\mathbf{s}}(\mathbf{x})$ and $k_{\mathbf{x}}(\mathbf{s})$.

\subsection{Why Reproducing Kernels Matter}

Practically speaking, reproducing kernels $K(\mathbf{x},\mathbf{s})$ replace directed, straight line segments of vectors $\mathbf{s}$ with curves, such that vectors and correlated points contain first degree vector components and point coordinates, \emph{as well as second} $x_{i}^{2}$\emph{, third} $x_{i}^{3}$\emph{, and up to} $d$ \emph{degree} vector components and point coordinates, where the highest degree $d$ depends on the reproducing kernel $K(\mathbf{x},\mathbf{s})$.
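For the Gaussian kernel, Eq. (\ref{Inner Product Relationship for Reproducing Kernels}) takes a particularly simple numerical form: the induced norm of a representer is $\left\Vert k_{\mathbf{s}}\right\Vert =\sqrt{K(\mathbf{s},\mathbf{s})}=1$, so that $\cos\varphi=K(\mathbf{x},\mathbf{s})$. The sketch below (illustrative values only, assuming a hyperparameter $\gamma=0.5$) computes the angle $\varphi$ between two Gaussian representers:

\begin{verbatim}
import numpy as np

gamma = 0.5                    # illustrative hyperparameter
K = lambda a, b: np.exp(-gamma * np.sum((a - b) ** 2))
x = np.array([1.0, 2.0])
s = np.array([0.0, 1.0])
norm_kx = np.sqrt(K(x, x))     # = 1 for the Gaussian kernel
norm_ks = np.sqrt(K(s, s))     # = 1
cos_phi = K(x, s) / (norm_kx * norm_ks)
print("angle between k_x and k_s (degrees):",
      np.degrees(np.arccos(cos_phi)))
\end{verbatim}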
\subsubsection{Replacements of Directed Straight Line Segments}

Polynomial reproducing kernels $k_{\mathbf{q}\left( d\right) }=\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{d}$ replace directed, straight line segments of vectors with $d$-order polynomial curves. Given that geometric and topological properties of Hilbert spaces remain true when straight lines are replaced by lines which are more or less \emph{sinuous}, it follows that the directed, straight line segment of any given vector $\mathbf{q}$ is best approximated by a second-order, polynomial curve. Therefore, second-order, polynomial reproducing kernels
\[
k_{\mathbf{q}\left( 2\right) }=\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}
\]
are vectors such that the relationships contained within Eq. (\ref{Inner Product Statistic Reproducing Kernels}) determine a rich system of topological relationships between the geometric loci of reproducing kernels, where topological and geometric relationships between vectors $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ and $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ and corresponding points include topological and geometric relationships between first and second degree point coordinates or vector components (see Fig. \ref{Second-order Distance Statistics RKHS}).

Second-order, polynomial reproducing kernels $k_{\mathbf{q}\left( 2\right) }=\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ replace straight lines of vectors $\mathbf{q}$ with second-order, polynomial curves. It will now be demonstrated that second-order, polynomial reproducing kernels $k_{\mathbf{q}\left( 2\right) }=\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ contain first $q_{i}$ and second $q_{i}^{2}$ degree point coordinates or vector components of vectors $\mathbf{q}$.

\subsection{Second-order Polynomial Reproducing Kernels}

A second-order, polynomial reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ for a point or vector $\mathbf{q}\in\mathbb{R}^{d}$ specifies a transformed point $P_{k_{\mathbf{q}}}\in\mathbb{R}^{d}$ or vector $k_{\mathbf{q}}\in\mathbb{R}^{d}$ that contains first and second degree point coordinates or vector components. The reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ replaces the straight line segment of the vector $\mathbf{q}$ with a second-order, polynomial curve
\begin{align*}
\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2} & =\left( \mathbf{x}^{T}\mathbf{q}+1\right) \left( \mathbf{x}^{T}\mathbf{q}+1\right) \\
& =\left( \left( \mathbf{\cdot}\right) ^{T}\left( \mathbf{\cdot}\right) \right) \left( \mathbf{q}^{T}\mathbf{q}\right) +2\left( \mathbf{\cdot}\right) ^{T}\mathbf{q}+1\text{,}
\end{align*}
where $\left( \mathbf{\cdot}\right) $ denotes the argument of the reproducing kernel $\left( \left( \mathbf{\cdot}\right) ^{T}\mathbf{q}+1\right) ^{2}$ of the point $\mathbf{q}$. Because the space $V$ of all vectors is closed under addition and scaling, such that for any $\mathbf{u},\mathbf{v}\in V$ and number $\lambda$, $\mathbf{u}+\mathbf{v}\in V$ and $\lambda\mathbf{v}\in V$, it follows that the reproducing kernel $\left( \left( \mathbf{\cdot}\right) ^{T}\mathbf{q}+1\right) ^{2}$ specifies first and second degree vector components and point coordinates of $\mathbf{q}$ in the following manner:
\begin{align*}
\left( \mathbf{q}+1\right) ^{2} & =\left( \mathbf{q}+1\right) \left( \mathbf{q}+1\right) =\mathbf{q}^{2}+2\mathbf{q}+\mathbf{1}\\
& =
\begin{pmatrix}
q_{1}^{2}+2q_{1}+1, & q_{2}^{2}+2q_{2}+1, & \cdots, & q_{d}^{2}+2q_{d}+1
\end{pmatrix}
^{T}\text{.}
\end{align*}
Accordingly, each point coordinate or vector component of the reproducing kernel $\left( \left( \mathbf{\cdot}\right) ^{T}\mathbf{q}+1\right) ^{2}$ contains first $q_{i}$ and second degree $q_{i}^{2}$ components. Let $P_{k_{\mathbf{q}}}$ denote the transformed point $\left( \mathbf{q}+1\right) ^{2}$ determined by $\mathbf{q}$. Because the square root of the inner product $\sqrt{\left( \mathbf{q}^{T}\mathbf{q}+1\right) ^{2}}$ describes the distance between the endpoint $P_{k_{\mathbf{q}}}$ of the reproducing kernel $\left( \left( \mathbf{\cdot}\right) ^{T}\mathbf{q}+1\right) ^{2}$ and the origin $P_{\mathbf{o}}$, it follows that the distance between the point $P_{k_{\mathbf{q}}}$ and the origin $P_{\mathbf{o}}$, which is the length $\left\Vert k_{\mathbf{q}}\right\Vert $ of the vector $k_{\mathbf{q}}$, is $\mathbf{q}^{T}\mathbf{q}+1=\left\Vert \mathbf{q}\right\Vert ^{2}+1$.

\subsection{Gaussian Reproducing Kernels}

Gaussian reproducing kernels $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ replace directed, straight line segments of vectors $\mathbf{s}$ with second-order curves $k_{\mathbf{s}}(\mathbf{x})$, where the hyperparameter $\gamma$ is a scale factor for the second-order distance statistic $\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}$. Given that geometric and topological properties of Hilbert spaces remain true when straight lines are replaced by lines which are more or less sinuous, it follows that the directed, straight line segment of any given vector $\mathbf{s}$ is naturally approximated by a second-order curve. Therefore, given an effective hyperparameter $\gamma$, it follows that a Gaussian reproducing kernel $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ naturally approximates any given vector $\mathbf{s}$.

Gaussian reproducing kernels $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $ of points $\mathbf{s}$ are also vectors $k_{\mathbf{s}}(\mathbf{x})$, where topological and geometric relationships between vectors $\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{x}\right\Vert ^{2}\right) $ and $\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ and corresponding reproducing kernels of points $\mathbf{x}$ and $\mathbf{s}$ include topological and geometric relationships between first and second degree point coordinates or vector components (see Fig. \ref{Second-order Distance Statistics RKHS}).

I will now develop an effective value for the hyperparameter $\gamma$ of a Gaussian reproducing kernel $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $.

\subsubsection{An Effective Value for the Gaussian Hyperparameter}

Later on, I will examine an elegant, statistical balancing feat which involves inner product statistics of feature vectors. Therefore, Gaussian reproducing kernels $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ need to specify effective inner product statistics of vectors.
By way of motivation, consider the expression for the inner product statistic $K\left( \mathbf{x},\mathbf{s}\right) $,
\begin{align}
K\left( \mathbf{x},\mathbf{s}\right)  & =\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \label{Inner Product Gaussian Reproducing Kernel}\\
& =\exp\left( -\gamma\left\{ \left\Vert \mathbf{x}\right\Vert ^{2}+\left\Vert \mathbf{s}\right\Vert ^{2}-2\mathbf{x}^{T}\mathbf{s}\right\} \right) \text{,}\nonumber
\end{align}
of a Gaussian reproducing kernel. I propose that an effective value for the hyperparameter $\gamma$ of a Gaussian reproducing kernel $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ is one that determines an effective scaling factor for the inner product statistics determined by $\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $.

Equation (\ref{Inner Product Gaussian Reproducing Kernel}) indicates that inner product statistics determined by Gaussian reproducing kernels
\begin{align*}
K\left( \mathbf{x},\mathbf{s}\right)  & =\exp\left( -\gamma\left\Vert \mathbf{x}\right\Vert ^{2}\right) \times\exp\left( -\gamma\left\Vert \mathbf{s}\right\Vert ^{2}\right) \times\exp\left( 2\gamma\mathbf{x}^{T}\mathbf{s}\right) \\
& =k_{\mathbf{s}}(\mathbf{x})\times k_{\mathbf{x}}(\mathbf{s})
\end{align*}
involve the multiplication of three exponential transforms
\[
\exp\left( -\gamma\left\Vert \mathbf{x}\right\Vert ^{2}\right) \text{, }\exp\left( -\gamma\left\Vert \mathbf{s}\right\Vert ^{2}\right) \text{, and }\exp\left( 2\gamma\mathbf{x}^{T}\mathbf{s}\right)
\]
of three corresponding \emph{scaled} inner product statistics
\[
-\gamma\left\Vert \mathbf{x}\right\Vert ^{2}\text{, }-\gamma\left\Vert \mathbf{s}\right\Vert ^{2}\text{, and }2\gamma\mathbf{x}^{T}\mathbf{s}\text{.}
\]
Therefore, we need to choose a value for $\gamma$ which specifies \emph{effective proportions} for the inner product statistics determined by inner products of Gaussian reproducing kernels $\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $. I will now determine such a value.

First, consider the statistics $\exp\left( -\gamma\left\Vert \mathbf{x}\right\Vert ^{2}\right) $ and $\exp\left( -\gamma\left\Vert \mathbf{s}\right\Vert ^{2}\right) $. Take the statistic $\exp\left( -\gamma\left\Vert \mathbf{z}\right\Vert ^{2}\right) $ and let $\gamma$ be greater than or equal to $0.1$. If $\gamma\geq0.1$, then $\exp\left( -\gamma\left\Vert \mathbf{z}\right\Vert ^{2}\right) \ll1$ for vectors of even moderate length, which indicates that the inner product statistics $\left\Vert \mathbf{x}\right\Vert ^{2}$ and $\left\Vert \mathbf{s}\right\Vert ^{2}$ are substantially diminished for any value of $\gamma\geq0.1$.

Next, consider the statistic $\exp\left( 2\gamma\mathbf{x}^{T}\mathbf{s}\right) $. If $\gamma\geq0.1$, then $\exp\left( 2\gamma\mathbf{x}^{T}\mathbf{s}\right) \gg1$, which indicates that the inner product statistic $2\mathbf{x}^{T}\mathbf{s}$ is substantially magnified for any value of $\gamma\geq0.1$. Therefore, it is concluded that values of $\gamma\geq0.1$ do not specify effective proportions for inner product statistics determined by the Gaussian reproducing kernel $\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $.

Now, suppose that we let $\gamma=1/100$.
Because
\[
\exp\left( -0.01\times0.001\right) \approx1\text{,}
\]
\[
\exp\left( -0.01\times1\right) \approx0.99\text{,}
\]
\[
\exp\left( -0.01\times100\right) \approx0.368\text{,}
\]
and
\[
\exp\left( -0.01\times100000\right) \approx0\text{,}
\]
it follows that
\[
0\leq\exp\left( -0.01\left\Vert \mathbf{x}\right\Vert ^{2}\right) \leq1\text{.}
\]
So, if $\gamma=1/100$, the inner product statistics $\exp\left( -0.01\left\Vert \mathbf{x}\right\Vert ^{2}\right) $ and $\exp\left( -0.01\left\Vert \mathbf{s}\right\Vert ^{2}\right) $ have reasonable proportions. Therefore, let $\gamma=1/100$ and consider the statistic $\exp\left( 0.02\times\mathbf{x}^{T}\mathbf{s}\right) $.

If $\gamma=1/100$, the inner product statistic $\exp\left( 0.02\times\mathbf{x}^{T}\mathbf{s}\right) $ is not substantially magnified. For example,
\[
\exp\left( 0.02\times100\right) \approx7.39\text{.}
\]
Moreover, the magnification of the inner product statistic $\exp\left( 0.02\times\mathbf{x}^{T}\mathbf{s}\right) $ is balanced with the diminished proportions of the inner product statistics $\exp\left( -0.01\left\Vert \mathbf{x}\right\Vert ^{2}\right) $ and $\exp\left( -0.01\left\Vert \mathbf{s}\right\Vert ^{2}\right) $. Therefore, the value $\gamma=1/100$ specifies effective proportions for inner product statistics determined by the Gaussian reproducing kernel $\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $.

Thus, it is concluded that an effective choice for the hyperparameter $\gamma$ of a Gaussian reproducing kernel $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ is $\gamma=1/100$, where inner products between Gaussian reproducing kernels $\exp\left( -0.01\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{x}\right\Vert ^{2}\right) $ and $\exp\left( -0.01\left\Vert \left( \mathbf{\cdot}\right) -\mathbf{s}\right\Vert ^{2}\right) $ satisfy the vector expression
\begin{align*}
\left\langle k_{\mathbf{x}},k_{\mathbf{s}}\right\rangle  & =\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \\
& =\exp\left( -0.01\times\left\{ \left\Vert \mathbf{x}\right\Vert ^{2}+\left\Vert \mathbf{s}\right\Vert ^{2}-2\mathbf{x}^{T}\mathbf{s}\right\} \right) \\
& =\exp\left( -0.01\times\left\{ \left\Vert \mathbf{x}\right\Vert ^{2}+\left\Vert \mathbf{s}\right\Vert ^{2}-2\left\Vert \mathbf{x}\right\Vert \left\Vert \mathbf{s}\right\Vert \cos\varphi\right\} \right) \\
& =\exp\left( -0.01\times\left\Vert \mathbf{x}\right\Vert ^{2}\right) \times\exp\left( -0.01\times\left\Vert \mathbf{s}\right\Vert ^{2}\right) \times\exp\left( 0.02\times\mathbf{x}^{T}\mathbf{s}\right) \text{.}
\end{align*}
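The comparison above is easy to reproduce. The following minimal Python sketch (an informal aside; the two test vectors are arbitrary) evaluates the three exponential factors for $\gamma=0.1$ and $\gamma=1/100$, and confirms that their product recovers the Gaussian kernel value in either case.

\begin{verbatim}
import numpy as np

x = np.array([8.0, 6.0])    # ||x||^2 = 100
s = np.array([6.0, 8.0])    # ||s||^2 = 100, x^T s = 96

for gamma in (0.1, 0.01):
    f_x  = np.exp(-gamma * (x @ x))      # exp(-gamma ||x||^2)
    f_s  = np.exp(-gamma * (s @ s))      # exp(-gamma ||s||^2)
    f_xs = np.exp(2 * gamma * (x @ s))   # exp(2 gamma x^T s)
    kernel = np.exp(-gamma * np.sum((x - s) ** 2))
    # The product of the three factors equals the Gaussian kernel value.
    assert np.isclose(f_x * f_s * f_xs, kernel)
    print(gamma, f_x, f_xs, kernel)
# gamma = 0.1 : f_x ~ 4.5e-05 (diminished), f_xs ~ 2.2e+08 (magnified)
# gamma = 0.01: f_x ~ 0.368,               f_xs ~ 6.82
\end{verbatim}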
I will now define the locus of a reproducing kernel. The notion of the locus of a reproducing kernel will play a significant role in the analyses that follow.

\subsection{Locus of a Reproducing Kernel}

A reproducing kernel $K(\mathbf{x},\mathbf{s})=k_{\mathbf{s}}(\mathbf{x})\in\mathbb{R}^{d}$ for a point $\mathbf{s}$ is defined to be the geometric locus of a curve formed by two points $P_{\mathbf{0}}$ and $P_{k_{\mathbf{s}}}$ which are at a distance of $\left\Vert k_{\mathbf{s}}\right\Vert =\left( K(\mathbf{s},\mathbf{s})\right) ^{1/2}$ from each other, such that each point coordinate or vector component $k\left( s_{i}\right) $ is at a signed distance of $\left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{i}\right) j}$ from the origin $P_{\mathbf{0}}$, along the direction of an orthonormal coordinate axis $\mathbf{e}_{j}$, where $\cos\alpha_{k\left( s_{i}\right) j}$ is the direction cosine between a vector component $k\left( s_{i}\right) $ and an orthonormal coordinate axis $\mathbf{e}_{j}$. It follows that the geometric locus of a reproducing kernel $k_{\mathbf{s}}$ is specified by an ordered set of signed magnitudes
\[
k_{\mathbf{s}}\triangleq
\begin{pmatrix}
\left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{1}\right) 1}, & \left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{2}\right) 2}, & \cdots, & \left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{d}\right) d}
\end{pmatrix}
^{T}
\]
along the axes of the standard set of basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right) ,\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\} \text{,}
\]
all of which describe a unique, ordered $d$-tuple of geometric locations on $d$ axes $\mathbf{e}_{j}$, where $\left\Vert k_{\mathbf{s}}\right\Vert $ is the length of the vector $k_{\mathbf{s}}$, $\left( \cos\alpha_{k\left( s_{1}\right) 1},\cdots,\cos\alpha_{k\left( s_{d}\right) d}\right) $ are the direction cosines of the components $k\left( s_{i}\right) $ of the vector $k_{\mathbf{s}}$ relative to the standard set of orthonormal coordinate axes $\left\{ \mathbf{e}_{j}\right\} _{j=1}^{d}$, and each vector component $k\left( s_{i}\right) $ specifies a point coordinate $k\left( s_{i}\right) $ of the endpoint $P_{k_{\mathbf{s}}}$ of the vector $k_{\mathbf{s}}$.

Given the above assumptions and notation, the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$,
\[
\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}\triangleq
\begin{pmatrix}
q_{1}^{2}+2q_{1}+1, & q_{2}^{2}+2q_{2}+1, & \cdots, & q_{d}^{2}+2q_{d}+1
\end{pmatrix}
^{T}\text{,}
\]
is defined to be the geometric locus of a second-order, polynomial curve formed by two points
\[
P_{\mathbf{0}}=
\begin{pmatrix}
0, & 0, & \cdots, & 0
\end{pmatrix}
\]
and
\[
P_{k_{\mathbf{q}}}=
\begin{pmatrix}
q_{1}^{2}+2q_{1}+1, & q_{2}^{2}+2q_{2}+1, & \cdots, & q_{d}^{2}+2q_{d}+1
\end{pmatrix}
\]
which are at a distance of $\left\Vert \mathbf{q}\right\Vert ^{2}+1$ from each other, such that each point coordinate $q_{i}^{2}+2q_{i}+1$ or vector component $q_{i}^{2}+2q_{i}+1$ is at a signed distance of $\left( \left\Vert \mathbf{q}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( q_{i}\right) j}$ from the origin $P_{\mathbf{0}}$, along the direction of an orthonormal coordinate axis $\mathbf{e}_{j}$, where $\cos\alpha_{k\left( q_{i}\right) j}$ is the direction cosine between the vector component $q_{i}^{2}+2q_{i}+1$ and the orthonormal coordinate axis $\mathbf{e}_{j}$.
It follows that the geometric locus of the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ is specified by an ordered set of signed magnitudes
\[
\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}\triangleq
\begin{pmatrix}
\left( \left\Vert \mathbf{q}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( q_{1}\right) 1}, & \cdots, & \left( \left\Vert \mathbf{q}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( q_{d}\right) d}
\end{pmatrix}
^{T}
\]
along the axes of the standard set of basis vectors
\[
\left\{ \mathbf{e}_{1}=\left( 1,0,\ldots,0\right) ,\ldots,\mathbf{e}_{d}=\left( 0,0,\ldots,1\right) \right\} \text{,}
\]
all of which describe a unique, ordered $d$-tuple of geometric locations on $d$ axes $\mathbf{e}_{j}$, where $\left\Vert \mathbf{q}\right\Vert ^{2}+1$ is the length of the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$, $\left( \cos\alpha_{k\left( q_{1}\right) 1},\cdots,\cos\alpha_{k\left( q_{d}\right) d}\right) $ are the direction cosines of the components $q_{i}^{2}+2q_{i}+1$ of the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$ relative to the standard set of orthonormal coordinate axes $\left\{ \mathbf{e}_{j}\right\} _{j=1}^{d}$, and each vector component $q_{i}^{2}+2q_{i}+1$ specifies a point coordinate $q_{i}^{2}+2q_{i}+1$ of the endpoint $P_{k_{\mathbf{q}}}$ of $\left( \mathbf{x}^{T}\mathbf{q}+1\right) ^{2}$.

It follows that the endpoint of the geometric locus of a reproducing kernel and the vector determined by the reproducing kernel both describe an ordered pair of real numbers in the real Euclidean plane or an ordered $d$-tuple of real numbers in real Euclidean space, all of which jointly determine a geometric location in $\mathbb{R}^{2}$ or $\mathbb{R}^{d}$. Therefore, it is concluded that the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ is a vector $k_{\mathbf{s}}\in\mathbb{R}^{d}$ that has a magnitude and a direction such that the endpoint of the reproducing kernel $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$,
\[
P_{k_{\mathbf{s}}}=
\begin{pmatrix}
s_{1}^{2}+2s_{1}+1, & s_{2}^{2}+2s_{2}+1, & \cdots, & s_{d}^{2}+2s_{d}+1
\end{pmatrix}
\text{,}
\]
is specified by the ordered set of signed magnitudes
\begin{equation}
\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\triangleq
\begin{pmatrix}
\left( \left\Vert \mathbf{s}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( s_{1}\right) 1}, & \cdots, & \left( \left\Vert \mathbf{s}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( s_{d}\right) d}
\end{pmatrix}
^{T}\text{.} \label{Geometric Locus of Second-order Reproducing Kernel}
\end{equation}
Depending on the geometric context, reproducing kernels will be referred to as points or vectors. Vectors $\mathbf{s}$ which are specified by reproducing kernels $K(\mathbf{x},\mathbf{s})$ will be denoted by $k_{\mathbf{s}}$.
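As a quick numerical check of the length claim above (an informal Python aside, not part of the formal argument), the norm of the reproducing kernel $k_{\mathbf{q}}$ obtained as $\sqrt{K(\mathbf{q},\mathbf{q})}$ agrees with $\left\Vert \mathbf{q}\right\Vert ^{2}+1$:

\begin{verbatim}
import numpy as np

def K(x, q):
    """Second-order polynomial reproducing kernel (x^T q + 1)^2."""
    return (x @ q + 1.0) ** 2

q = np.array([3.0, 4.0])      # ||q||^2 = 25
norm_kq = np.sqrt(K(q, q))    # length of the vector k_q
assert np.isclose(norm_kq, q @ q + 1.0)
print(norm_kq)                # 26.0
\end{verbatim}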
It will now be demonstrated that second-order distance statistics determine a rich system of algebraic and topological relationships between the geometric loci of two reproducing kernels.

\subsection{Second-order Distance Statistics for Kernels}

Returning to Eq. (\ref{Inner Product Relationship for Reproducing Kernels}), the vector relationship
\[
K(\mathbf{x},\mathbf{s})=\left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert \left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\varphi
\]
between two reproducing kernels $k_{\mathbf{x}}(\mathbf{s})$ and $k_{\mathbf{s}}(\mathbf{x})$ can be derived by using the law of cosines \citep[see, e.g.,][]{Lay2006}
\begin{equation}
\left\Vert k_{\mathbf{x}}-k_{\mathbf{s}}\right\Vert ^{2}=\left\Vert k_{\mathbf{x}}\right\Vert ^{2}+\left\Vert k_{\mathbf{s}}\right\Vert ^{2}-2\left\Vert k_{\mathbf{x}}\right\Vert \left\Vert k_{\mathbf{s}}\right\Vert \cos\varphi\text{,} \label{Inner Product Statistic Reproducing Kernels}
\end{equation}
which reduces to
\begin{align*}
\left\Vert k_{\mathbf{x}}\right\Vert \left\Vert k_{\mathbf{s}}\right\Vert \cos\varphi & =k\left( x_{1}\right) k\left( s_{1}\right) +k\left( x_{2}\right) k\left( s_{2}\right) +\cdots+k\left( x_{d}\right) k\left( s_{d}\right) \\
& =K(\mathbf{x},\mathbf{s})=K(\mathbf{s},\mathbf{x})\text{.}
\end{align*}
The vector relationships in Eq. (\ref{Inner Product Statistic Reproducing Kernels}) indicate that the inner product statistic $K(\mathbf{x},\mathbf{s})$ determines the length $\left\Vert k_{\mathbf{x}}-k_{\mathbf{s}}\right\Vert $ of the vector from $k_{\mathbf{s}}$ to $k_{\mathbf{x}}$, i.e., the vector $k_{\mathbf{x}}-k_{\mathbf{s}}$, which is the distance between the endpoints of $k_{\mathbf{x}}$ and $k_{\mathbf{s}}$, so that
\[
K(\mathbf{x},\mathbf{s})=\left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert \left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\varphi=\left\Vert k_{\mathbf{x}}-k_{\mathbf{s}}\right\Vert \text{.}
\]
Because second-order distance statistics are symmetric, the law of cosines
\[
\left\Vert k_{\mathbf{s}}-k_{\mathbf{x}}\right\Vert ^{2}=\left\Vert k_{\mathbf{s}}\right\Vert ^{2}+\left\Vert k_{\mathbf{x}}\right\Vert ^{2}-2\left\Vert k_{\mathbf{s}}\right\Vert \left\Vert k_{\mathbf{x}}\right\Vert \cos\varphi
\]
also determines the length $\left\Vert k_{\mathbf{s}}-k_{\mathbf{x}}\right\Vert $ of the vector from $k_{\mathbf{x}}$ to $k_{\mathbf{s}}$, i.e., the vector $k_{\mathbf{s}}-k_{\mathbf{x}}$, which is the distance between the endpoints of $k_{\mathbf{s}}$ and $k_{\mathbf{x}}$.

Therefore, the inner product statistic $K(\mathbf{x},\mathbf{s})$ between two reproducing kernels $k_{\mathbf{x}}(\mathbf{s})$ and $k_{\mathbf{s}}(\mathbf{x})$ in a $\mathfrak{rkH}$ space
\begin{align}
K(\mathbf{x},\mathbf{s}) & =k\left( x_{1}\right) k\left( s_{1}\right) +k\left( x_{2}\right) k\left( s_{2}\right) +\cdots+k\left( x_{d}\right) k\left( s_{d}\right) \label{Locus Statistics in RKHS}\\
& =\left\Vert k_{\mathbf{x}}\right\Vert \left\Vert k_{\mathbf{s}}\right\Vert \cos\varphi\nonumber\\
& =\left\Vert k_{\mathbf{x}}-k_{\mathbf{s}}\right\Vert \nonumber
\end{align}
determines the distance between the geometric loci
\[
\begin{pmatrix}
\left\Vert k_{\mathbf{x}}\right\Vert \cos\alpha_{k\left( x_{1}\right) 1}, & \left\Vert k_{\mathbf{x}}\right\Vert \cos\alpha_{k\left( x_{2}\right) 2}, & \cdots, & \left\Vert k_{\mathbf{x}}\right\Vert \cos\alpha_{k\left( x_{d}\right) d}
\end{pmatrix}
\]
and
\[
\begin{pmatrix}
\left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{1}\right) 1}, & \left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{2}\right) 2}, & \cdots, & \left\Vert k_{\mathbf{s}}\right\Vert \cos\alpha_{k\left( s_{d}\right) d}
\end{pmatrix}
\]
of the given reproducing kernels.
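The distance in Eq. (\ref{Locus Statistics in RKHS}) can be computed entirely from kernel evaluations, since $\left\Vert k_{\mathbf{x}}\right\Vert ^{2}=K(\mathbf{x},\mathbf{x})$. The following minimal Python sketch (an informal aside; the test vectors are arbitrary) evaluates the law of cosines of Eq. (\ref{Inner Product Statistic Reproducing Kernels}) in this way.

\begin{verbatim}
import numpy as np

def K(x, s):
    """Second-order polynomial reproducing kernel (x^T s + 1)^2."""
    return (x @ s + 1.0) ** 2

x = np.array([1.0, 2.0])
s = np.array([2.0, 0.5])

# Law of cosines in the rkH space:
# ||k_x - k_s||^2 = ||k_x||^2 + ||k_s||^2 - 2 ||k_x|| ||k_s|| cos(varphi),
# where ||k_x|| ||k_s|| cos(varphi) = K(x, s).
dist_sq = K(x, x) + K(s, s) - 2.0 * K(x, s)
print(np.sqrt(dist_sq))   # distance between the loci of k_x and k_s
\end{verbatim}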
Thus, it is concluded that the vector relationships contained within Eq. (\ref{Inner Product Statistic Reproducing Kernels}) determine a rich system of topological relationships between the loci of two reproducing kernels. Figure \ref{Second-order Distance Statistics RKHS} depicts correlated algebraic, geometric, and topological structures determined by an inner product statistic $K(\boldsymbol{\upsilon},\boldsymbol{\nu})$ in a $\mathfrak{rkH}$ space.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure16.png}}
\caption{Inner product statistics $K(\boldsymbol{\upsilon},\boldsymbol{\nu})$ in reproducing kernel Hilbert spaces determine angles and corresponding distances between the geometric loci of reproducing kernels $k_{\boldsymbol{\upsilon}}$ and $k_{\boldsymbol{\nu}}$.}
\label{Second-order Distance Statistics RKHS}
\end{figure}

\subsubsection{Scalar Projection Statistics for Kernels}

Scalar projection statistics specify signed magnitudes along the axes of given reproducing kernels. The inner product statistic $\left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert \left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\theta$ can be interpreted as the length $\left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert $ of $k_{\mathbf{x}}$ times the scalar projection of $k_{\mathbf{s}}$ onto $k_{\mathbf{x}}$:
\begin{equation}
K(\mathbf{x},\mathbf{s})=\left\Vert k_{\mathbf{x}}(\mathbf{s})\right\Vert \times\left[ \left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\theta\right] \text{,} \label{Scalar Projection Reproducing Kernels}
\end{equation}
where the scalar projection of $k_{\mathbf{s}}$ onto $k_{\mathbf{x}}$, also known as the component of $k_{\mathbf{s}}$ along $k_{\mathbf{x}}$, is defined to be the signed magnitude $\left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\theta$ of the vector projection, where $\theta$ is the angle between $k_{\mathbf{x}}$ and $k_{\mathbf{s}}$ \citep{Stewart2009}. Scalar projections are denoted by $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}}}}\left( \overrightarrow{k_{\mathbf{s}}}\right) $, where $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}}}}\left( \overrightarrow{k_{\mathbf{s}}}\right) <0$ if $\pi/2<\theta\leq\pi$. The scalar projection statistic $\left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\theta$ satisfies the inner product relationship $\left\Vert k_{\mathbf{s}}(\mathbf{x})\right\Vert \cos\theta=\frac{K(\mathbf{x},\mathbf{s})}{\left\Vert k_{\mathbf{x}}\right\Vert }$ between the unit vector $\frac{k_{\mathbf{x}}}{\left\Vert k_{\mathbf{x}}\right\Vert }$ and the vector $k_{\mathbf{s}}$.

In the next part of the paper, I will devise three systems of data-driven locus equations which generate optimal likelihood ratio tests, i.e., optimal binary classification systems, for digital data. The data-driven, mathematical laws provide a constructive solution to the fundamental integral equation of binary classification for a classification system in statistical equilibrium in Eq. (\ref{Equalizer Rule}). I have devised a theorem that states the essential aspects of the binary classification problem. The binary classification theorem is stated below.
\section*{Binary Classification Theorem}

Let $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a binary classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category, and $d$-component random vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

The discriminant function
\[
\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right)
\]
is the solution to the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation $f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) $.

Therefore, the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are equal to the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.
Furthermore, the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is equal to the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$:
\[
E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) =E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{.}
\]
Thus, the total eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is equal to the sum of the eigenenergies associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ and the locus of a corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$:
\[
E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) =E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{.}
\]
It follows that the binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}
\end{align*}
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized; and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region, such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized.
Thus, any given binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system, for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

Therefore, a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks a point of statistical equilibrium where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

\subsection*{Proof}

It can be shown that the general form of the decision function for a binary classification system is given by the likelihood ratio test
\begin{equation}
\Lambda\left( \mathbf{x}\right) \triangleq\frac{p\left( \mathbf{x}|\omega_{1}\right) }{p\left( \mathbf{x}|\omega_{2}\right) }\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}1\text{,} \label{General Form of Decision Function}
\end{equation}
where $\omega_{1}$ or $\omega_{2}$ is the true data category and $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are class-conditional probability density functions.

Now, take the transform $\ln\left( \Lambda\left( \mathbf{x}\right) \right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\left( 1\right) $ of the likelihood ratio test for a binary classification system
\[
\ln\Lambda\left( \mathbf{x}\right) \triangleq\ln p\left( \mathbf{x}|\omega_{1}\right) -\ln p\left( \mathbf{x}|\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln1\text{,}
\]
where $\ln\exp\left( x\right) =x$, and let $\widehat{\Lambda}\left( \mathbf{x}\right) $ denote the transform $\ln\left( \Lambda\left( \mathbf{x}\right) \right) $ of the likelihood ratio $\Lambda\left( \mathbf{x}\right) =\frac{p\left( \mathbf{x}|\omega_{1}\right) }{p\left( \mathbf{x}|\omega_{2}\right) }$. It follows that the general form of the decision function for a binary classification system can be written as
\begin{align}
\widehat{\Lambda}\left( \mathbf{x}\right)  & \triangleq\ln p\left( \mathbf{x}|\omega_{1}\right) -\ln p\left( \mathbf{x}|\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\label{General Form of Decision Function II}\\
& =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{.}\nonumber
\end{align}
Given Eq.
(\ref{General Form of Decision Function}), the general form of the decision function and the decision boundary for Gaussian data are completely defined by the likelihood ratio test
\[
\Lambda\left( \mathbf{x}\right) =\frac{\left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\exp\left\{ -\frac{1}{2}\left( \mathbf{x}-\boldsymbol{\mu}_{1}\right) ^{T}\mathbf{\Sigma}_{1}^{-1}\left( \mathbf{x}-\boldsymbol{\mu}_{1}\right) \right\} }{\left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\exp\left\{ -\frac{1}{2}\left( \mathbf{x}-\boldsymbol{\mu}_{2}\right) ^{T}\mathbf{\Sigma}_{2}^{-1}\left( \mathbf{x}-\boldsymbol{\mu}_{2}\right) \right\} }\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}1\text{,}
\]
where $\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}_{2}$ are $d$-component mean vectors, $\mathbf{\Sigma}_{1}$ and $\mathbf{\Sigma}_{2}$ are $d$-by-$d$ covariance matrices, $\mathbf{\Sigma}^{-1}$ and $\left\vert \mathbf{\Sigma}\right\vert $ denote the inverse and determinant of a covariance matrix, and $\omega_{1}$ or $\omega_{2}$ is the true data category.

So, take the transform $\ln\left( \Lambda\left( \mathbf{x}\right) \right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}\ln\left( 1\right) $ of the likelihood ratio test for Gaussian data. Accordingly, take any given decision boundary $D\left( \mathbf{x}\right) $,
\begin{align*}
D\left( \mathbf{x}\right)  & :\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \\
& =0\text{,}
\end{align*}
that is generated according to the likelihood ratio test
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right) \\
& -\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}+\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{.}
\end{align*}
It follows that the decision space $Z$ and the corresponding decision regions $Z_{1}$ and $Z_{2}$ of the classification system
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
are determined by either overlapping or non-overlapping data distributions.
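For concreteness, the following Python sketch (an informal aside; the means, covariance matrices, and test points are placeholders) evaluates the likelihood ratio test above for Gaussian data and reports the chosen class.

\begin{verbatim}
import numpy as np

def class_term(x, mu, cov):
    """x^T S^{-1} mu - (1/2) x^T S^{-1} x - (1/2) mu^T S^{-1} mu
    - (1/2) ln(|S|^{1/2}), with S a class covariance matrix."""
    S_inv = np.linalg.inv(cov)
    return (x @ S_inv @ mu
            - 0.5 * x @ S_inv @ x
            - 0.5 * mu @ S_inv @ mu
            - 0.5 * np.log(np.sqrt(np.linalg.det(cov))))

def decide(x, mu1, cov1, mu2, cov2):
    """Likelihood ratio test: class 1 if Lambda_hat(x) >= 0, else class 2."""
    lam = class_term(x, mu1, cov1) - class_term(x, mu2, cov2)
    return 1 if lam >= 0 else 2

# Placeholder means and covariances for two Gaussian classes.
mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([3.0, 3.0]), 2.0 * np.eye(2)
print(decide(np.array([0.5, 0.2]), mu1, cov1, mu2, cov2))  # -> 1
print(decide(np.array([2.8, 3.1]), mu1, cov1, mu2, cov2))  # -> 2
\end{verbatim}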
It also follows that the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ satisfy the vector equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,}
\]
where the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is given by the vector expression
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) =\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{1}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{1}^{T}\mathbf{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{1}\right\vert ^{1/2}\right)
\]
of a class-conditional probability density function $p\left( \mathbf{x}|\omega_{1}\right) $, and the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$ is given by the vector expression
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Sigma}_{2}^{-1}\mathbf{x}-\frac{1}{2}\boldsymbol{\mu}_{2}^{T}\mathbf{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-\frac{1}{2}\ln\left( \left\vert \mathbf{\Sigma}_{2}\right\vert ^{1/2}\right)
\]
of a class-conditional probability density function $p\left( \mathbf{x}|\omega_{2}\right) $.

Therefore, the loci of the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ are determined by a locus of principal eigenaxis components and likelihoods
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& \triangleq\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}\text{,}
\end{align*}
where $k_{\mathbf{x}_{1_{i}}}$ and $k_{\mathbf{x}_{2_{i}}}$ are reproducing kernels for respective data points $\mathbf{x}_{1_{i}}$ and $\mathbf{x}_{2_{i}}$: the reproducing kernel $K(\mathbf{x},\mathbf{s})=k_{\mathbf{s}}(\mathbf{x})$ is either $k_{\mathbf{s}}(\mathbf{x})\triangleq\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ or $k_{\mathbf{s}}(\mathbf{x})\triangleq\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, $\mathbf{x}_{1_{i}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, and $\psi_{1_{i}}$ and $\psi_{2_{i}}$ are scale factors that provide measures of likelihood for respective data points $\mathbf{x}_{1_{i}}$ and $\mathbf{x}_{2_{i}}$ which lie in either overlapping regions or tail regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and that is in statistical equilibrium
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right)
\]
such that the locus of principal eigenaxis components and likelihoods $\widehat{\Lambda}\left( \mathbf{x}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}$
satisfies an equilibrium equation
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}
\]
in an optimal manner.

Thus, the discriminant function
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& \triangleq\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}
\end{align*}
is the solution to the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$,
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}=0\text{,}
\]
is the focus of a decision boundary $D\left( \mathbf{x}\right) $, and $Z_{1}$ and $Z_{2}$ are decision regions that have respective risks for class $\omega_{2}$ and class $\omega_{1}$,
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{ \ and \ }\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{,}
\]
and respective counter risks for class $\omega_{1}$ and class $\omega_{2}$,
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{ \ and \ }\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\]
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{2}$ decision region:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\
&
=\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\end{align*}
and the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$:
\[
E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \rightleftharpoons E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\]
where $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \triangleq\left\Vert \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}\right\Vert ^{2}$ and $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \triangleq\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}\right\Vert ^{2}$, in such a manner that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation $f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) $.
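A minimal Python sketch of how such a discriminant function could be evaluated follows (an informal aside: the kernels match the two choices named above, while the training points and scale factors $\psi$ are placeholders, since the data-driven procedure that determines them is developed later).

\begin{verbatim}
import numpy as np

def K(x, s, kind="poly"):
    """The two reproducing kernels named above."""
    if kind == "poly":
        return (x @ s + 1.0) ** 2                 # (x^T s + 1)^2
    return np.exp(-0.01 * np.sum((x - s) ** 2))   # Gaussian, gamma = 1/100

def discriminant(x, X1, psi1, X2, psi2, kind="poly"):
    """Lambda_hat(x) = sum_i psi1_i k_{x1_i}(x) - sum_i psi2_i k_{x2_i}(x)."""
    pos = sum(p * K(x, xi, kind) for p, xi in zip(psi1, X1))
    neg = sum(p * K(x, xi, kind) for p, xi in zip(psi2, X2))
    return pos - neg

# Placeholder data points and scale factors (normally learned from data).
X1, psi1 = [np.array([1.0, 1.0])], [0.5]
X2, psi2 = [np.array([-1.0, -1.0])], [0.5]
x = np.array([0.8, 1.1])
print("omega_1" if discriminant(x, X1, psi1, X2, psi2) >= 0 else "omega_2")
\end{verbatim}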
It follows that the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}
\end{align*}
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -E_{\min}\left(
Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized.

Therefore, it is concluded that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

These results are readily generalized for any given class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

\subsection*{Generalization of Proof}

Returning to Eq. (\ref{General Form of Decision Function II}), take the general form of the decision function for a binary classification system
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & \triangleq\ln p\left( \mathbf{x}|\omega_{1}\right) -\ln p\left( \mathbf{x}|\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\\
& =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\end{align*}
where $\omega_{1}$ or $\omega_{2}$ is the true data category and $d$-component random vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.
Now take any given decision boundary $D\left( \mathbf{x}\right) $,
\[
D\left( \mathbf{x}\right) :p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,}
\]
that is generated according to the likelihood ratio test
\[
\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
where $d$-component random vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are generated according to the probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

It follows that the decision space $Z$ and the corresponding decision regions $Z_{1}$ and $Z_{2}$ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are determined by either overlapping or non-overlapping data distributions. It also follows that the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ satisfy the vector equation
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0\text{,}
\]
where the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is given by a vector expression $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ of the class-conditional probability density function $p\left( \mathbf{x}|\omega_{1}\right) $, and the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$ is given by a vector expression $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ of the class-conditional probability density function $p\left( \mathbf{x}|\omega_{2}\right) $.
Therefore, the loci of the decision boundary $D\left( \mathbf{x}\right) $ and the likelihood ratio $\widehat{\Lambda}\left( \mathbf{x}\right) $ are determined by a locus of principal eigenaxis components and likelihoods
\begin{align*}
\widehat{\Lambda}\left( \mathbf{x}\right)  & =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& \triangleq\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}\text{,}
\end{align*}
where $k_{\mathbf{x}_{1_{i}}}$ and $k_{\mathbf{x}_{2_{i}}}$ are reproducing kernels for respective data points $\mathbf{x}_{1_{i}}$ and $\mathbf{x}_{2_{i}}$: the reproducing kernel $K(\mathbf{x,s})=k_{\mathbf{s}}(\mathbf{x})$ is either $k_{\mathbf{s}}(\mathbf{x})\triangleq\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ or $k_{\mathbf{s}}(\mathbf{x})\triangleq\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, $\mathbf{x}_{1_{i}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, and $\psi_{1_{i}}$ and $\psi_{2_{i}}$ are scale factors that provide measures of likelihood for respective data points $\mathbf{x}_{1_{i}}$ and $\mathbf{x}_{2_{i}}$ which lie in either overlapping regions or tail regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $. The dual locus is in statistical equilibrium
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right)
\]
such that the locus of principal eigenaxis components and likelihoods $\widehat{\Lambda}\left( \mathbf{x}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}$ satisfies the equilibrium equation
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}
\]
in an optimal manner.
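To illustrate, the following minimal sketch implements the two reproducing kernels named above and evaluates the resulting discriminant; the sample extreme points and scale factors $\psi$ are made-up placeholders, not values produced by the method:
\begin{verbatim}
# Minimal sketch: the two reproducing kernels named above and the
# discriminant they induce; points and psi weights are placeholders.
import numpy as np

def poly_kernel(x, s):
    # k_s(x) = (x^T s + 1)^2
    return (x @ s + 1.0) ** 2

def rbf_kernel(x, s):
    # k_s(x) = exp(-0.01 * ||x - s||^2)
    return np.exp(-0.01 * np.sum((x - s) ** 2))

def discriminant(x, X1, psi1, X2, psi2, k=poly_kernel):
    # Lambda_hat(x) = sum_i psi1_i k_{x1_i}(x) - sum_i psi2_i k_{x2_i}(x)
    return (sum(p * k(x, s) for p, s in zip(psi1, X1))
            - sum(p * k(x, s) for p, s in zip(psi2, X2)))

X1, psi1 = [np.array([1.0, 0.0])], [0.7]   # assumed class-1 extreme point
X2, psi2 = [np.array([0.0, 1.0])], [0.7]   # assumed class-2 extreme point
print(discriminant(np.array([0.5, 0.5]), X1, psi1, X2, psi2))  # 0.0
\end{verbatim}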
Thus, the discriminant function $\widehat{\Lambda}\left( \mathbf{x}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}$ is the solution to the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$, i.e.,
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}=0\text{,}
\]
is the focus of a decision boundary $D\left( \mathbf{x}\right) $, and the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\
& =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\end{align*}
and the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$:
\[
E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \rightleftharpoons E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\]
where $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \triangleq\left\Vert
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i}}k_{\mathbf{x}_{1_{i}}}\right\Vert ^{2}$ and $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \triangleq\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i}}k_{\mathbf{x}_{2_{i}}}\right\Vert ^{2}$, in such a manner that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation $f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) $.

It follows that the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}-\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}-\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}
\end{align*}
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class
$\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system is minimized.

Therefore, it is concluded that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

I\ will now show that the eigenenergy of classification systems is conserved and remains relatively constant, so that the eigenenergy and the corresponding expected risk of a binary classification system cannot be created or destroyed, but only transferred from one classification system to another.
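Before proceeding, a brief numerical aside: for two assumed univariate Gaussian class-conditional densities with the decision space split at their equilibrium point, the statistical-equilibrium condition above can be checked directly. This is a minimal sketch with illustrative parameters only; none of these values come from the paper, and the probability masses over $Z_{1}$ and $Z_{2}$ stand in for the integrals over the likelihood ratio:
\begin{verbatim}
# Minimal sketch: numerical check of the statistical-equilibrium
# condition for two assumed Gaussians, with Z1/Z2 split at the
# equilibrium point where p(x|w1) = p(x|w2).
from scipy.stats import norm

mu1, mu2, sigma = 0.0, 2.0, 1.0
b = (mu1 + mu2) / 2.0      # equilibrium point for equal variances

P1_Z1 = norm.cdf(b, mu1, sigma)   # mass of p(x|w1) over Z1
P1_Z2 = 1.0 - P1_Z1               # mass of p(x|w1) over Z2
P2_Z1 = norm.cdf(b, mu2, sigma)   # mass of p(x|w2) over Z1
P2_Z2 = 1.0 - P2_Z1               # mass of p(x|w2) over Z2

# Both sides of the equilibrium equation agree (~0.6827):
print(P1_Z1 - P2_Z1, P2_Z2 - P1_Z2)
\end{verbatim}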
\section*{Law of Conservation of Eigenenergy:}

\subsection*{For Binary Classification Systems}

Let $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a binary classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category and $d$-component random vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

The expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{1}$ and $Z_{2}$ decision regions are equal to the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ and $Z_{2}$ decision regions, and the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is equal to the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
The eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ is the state of a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that is associated with the position or location of a likelihood ratio in statistical equilibrium, $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$, and the locus of a corresponding decision boundary $D\left( \mathbf{x}\right) :p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$.

Thus, any given binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of the classification system, for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

The total eigenenergy of a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is found by adding up contributions from characteristics of the classification system: the eigenenergies $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $E_{\min}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the positions or locations of the class-conditional likelihood ratios $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) $ and $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $, which are in turn related to eigenenergies associated with positions and potential locations of extreme points that lie in either overlapping regions or tail regions of statistical distributions related to the class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $.
Any given binary classification system that is determined by a likelihood ratio test
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
where the probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics, and the locus of a decision boundary
\[
D\left( \mathbf{x}\right) :p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
is governed by the locus of a likelihood ratio $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) $ in statistical equilibrium
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
is a closed classification system.

Thus, the total eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of any given binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is conserved and remains relatively constant.

Therefore, the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ cannot be created or destroyed, but only transferred from one classification system to another.

It follows that the corresponding expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ cannot be created or destroyed, but only transferred from one classification system to another.

I\ will now devise a system of data-driven, locus equations that generate computer-implemented, optimal linear classification systems. My discoveries are based on useful relations between geometric locus methods, geometric methods in Hilbert spaces, statistics, and the binary classification theorem that I\ have derived.

\section{Optimal Linear Classification Systems}

I will begin by outlining my design for computer-implemented optimal linear classification systems. Such computer-implemented systems are scalable modules for optimal statistical pattern recognition systems, all of which are capable of performing a wide variety of statistical pattern recognition tasks, where any given $M$-class statistical pattern recognition system exhibits optimal generalization performance for an $M$-class feature space.
\subsection*{Problem Formulation}

\begin{flushleft}
The formulation of a system of data-driven, locus equations that generates computer-implemented, optimal linear classification systems requires solving three fundamental problems:
\end{flushleft}

\paragraph{Problem $\mathbf{1}$}
\textit{Define the geometric figures in a linear classification system, where geometric figures involve points, vectors, lines, line segments, angles, regions, planes, and hyperplanes.}

\paragraph{Problem $\mathbf{2}$}
\textit{Define the geometric and statistical properties exhibited by each of the geometric figures.}

\paragraph{Problem $\mathbf{3}$}
\textit{Define the forms of the data-driven, locus equations that determine the geometric figures.}

\subsection*{The Solution}

\begin{flushleft}
I\ have formulated a solution that answers all three problems. My solution is based on three ideas:
\end{flushleft}

\paragraph{Idea $\mathbf{1}$}
Devise \emph{a dual locus of data points} that determines optimal linear decision boundaries \emph{and} likelihood ratios that achieve the lowest possible error rate.

\paragraph{Idea $\mathbf{2}$}
The dual locus of data points must \emph{determine} the \emph{coordinate system} of the \emph{linear decision boundary}.

\paragraph{Idea $\mathbf{3}$}
The dual locus of data points \emph{must satisfy discrete versions of the fundamental equations of binary classification for a classification system in statistical equilibrium.}

\subsection*{Key Elements of the Solution}

\begin{flushleft}
The essential elements of the solution are outlined below.
\end{flushleft}

\paragraph{Locus of Principal Eigenaxis Components}
Returning to Eqs (\ref{Normal Eigenaxis Functional}) and (\ref{Normal Form Normal Eigenaxis}), given that the vector components of a principal eigenaxis specify all forms of linear curves and surfaces, and all of the points on any given line, plane, or hyperplane explicitly and exclusively reference the principal eigenaxis of the linear locus, it follows that the principal eigenaxis of a linear decision boundary provides an elegant, statistical eigen-coordinate system for a linear classification system. Therefore, the dual locus of data points \emph{must} be a \emph{locus of principal eigenaxis components}.

\paragraph{Critical Minimum Eigenenergy Constraint}
Given Eq. (\ref{Characteristic Eigenenergy}), it follows that the principal eigenaxis of a linear decision boundary satisfies the linear decision boundary in terms of its eigenenergy. Accordingly, the principal eigenaxis of a linear decision boundary exhibits a characteristic eigenenergy that is unique for the linear decision boundary. Thereby, the \emph{important generalizations} for a linear decision boundary are \emph{determined by} the \emph{eigenenergy} exhibited by its \emph{principal eigenaxis}.

Therefore, the locus of principal eigenaxis components \emph{must satisfy} a critical minimum (i.e., total allowed) \emph{eigenenergy constraint}, such that the locus of principal eigenaxis components satisfies a \emph{linear decision boundary} in \emph{terms of} its critical minimum or total allowed \emph{eigenenergies}. Thus, the locus of principal eigenaxis components must satisfy \emph{discrete and data-driven versions} of the vector and equilibrium equations in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) and (\ref{Equilibrium Equation of Likelihood Ratio and Decision Boundary}), the integral equation in Eq.
(\ref{Integral Equation of Likelihood Ratio and Decision Boundary}), the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}), and the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}) \emph{in terms of its total allowed eigenenergies}.

\paragraph{Extreme Points}
In order for the locus of principal eigenaxis components to implement a likelihood ratio, the locus of principal eigenaxis components \emph{must} be \emph{formed by data points} that lie in either overlapping regions or tail regions of data distributions, thereby determining \emph{decision regions} based on forces associated with \emph{risks and counter risks}, which are related to positions and potential locations of data points that lie in either overlapping regions or tail regions of data distributions, where an unknown portion of the data points are the \emph{source of decision error}. Data points that lie in either overlapping regions or tail regions of data distributions will be called extreme points.

\paragraph{Parameter Vector of Class-conditional Densities}
Given that the locus of principal eigenaxis components must determine a likelihood ratio, it follows that the locus of principal eigenaxis components \emph{must also} be a \emph{parameter vector} that \emph{provides an estimate of class-conditional density functions}. Given Eq. (\ref{Equilibrium Equation of Likelihood Ratio and Decision Boundary}), it also follows that the parameter vector must be in statistical equilibrium.

\paragraph{Minimum Conditional Risk Constraint}
Given Eqs (\ref{Equalizer Rule}) and (\ref{Balancing of Bayes' Risks and Counteracting Risks}), it follows that the parameter vector must satisfy a discrete and data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) and the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}), where extreme data points that lie in decision regions involve forces associated with risks or counter risks, such that the parameter vector satisfies the linear decision boundary in terms of a minimum risk.

Thus, the dual locus of extreme data points must jointly satisfy a discrete and data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) in terms of forces associated with risks and counter risks which are related to positions and potential locations of extreme data points and corresponding total allowed eigenenergies of principal eigenaxis components. Moreover, the forces associated with risks and counter risks that are related to positions and potential locations of extreme data points and the corresponding total allowed eigenenergies of principal eigenaxis components must jointly satisfy a discrete and data-driven version of Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}) so that $\left( 1\right) $ the forces associated with risks and counter risks that are related to positions and potential locations of extreme data points are effectively balanced with each other, and $\left( 2\right) $ the total allowed eigenenergies of the principal eigenaxis components are effectively balanced with each other.

I will now show that distributions of extreme points determine decision regions for binary classification systems.
\subsection{Distributions of Extreme Points}

Take a collection of feature vectors for any two pattern classes, where the data distributions are either overlapping or non-overlapping with each other. Data points located in overlapping regions or tail regions between two data distributions specify directions for which a given collection of data is most variable or spread out. Call these data points ``extreme points,'' where any given extreme point is the endpoint of an extreme vector. Any given extreme point is characterized by an expected value (a central location) and a covariance (a spread).

Figure $\ref{Location Properties Extreme Data Points}$a depicts how overlapping intervals of probability density functions determine locations of extreme points for two overlapping regions, and Fig. $\ref{Location Properties Extreme Data Points}$b depicts how non-overlapping intervals of probability density functions determine locations of extreme points in two tail regions. Accordingly, distributions of extreme points determine decision regions for binary classification systems, where the forces associated with risks and counter risks are related to positions and potential locations of extreme data points.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure17.png}}
\caption{Location properties of extreme data points are determined by overlapping or non-overlapping intervals of probability density functions. $\left( a\right) $ For overlapping data distributions, overlapping intervals of likelihoods determine numbers and locations of extreme data points. $\left( b\right) $ Relatively few extreme data points are located within the tail regions of non-overlapping data distributions.}
\label{Location Properties Extreme Data Points}
\end{figure}

I\ will call a dual locus of principal eigenaxis components that determines an estimate of class-conditional densities for extreme points a ``linear eigenlocus.'' I\ will refer to the parameter vector that provides an estimate of class-conditional densities for extreme points as a ``locus of likelihoods'' or a ``parameter vector of likelihoods.''

A linear eigenlocus, which is formed by a dual locus of principal eigenaxis components and likelihoods, is a data-driven likelihood ratio and decision boundary that determines computer-implemented, optimal linear statistical classification systems that minimize the expected risk for data drawn from statistical distributions that have similar covariance matrices. I will call the system of data-driven, mathematical laws that generates a linear eigenlocus a ``linear eigenlocus transform.''

I\ will introduce the primal equation of a linear eigenlocus in the next section. I\ will begin the next section by defining important geometric and statistical properties exhibited by weighted extreme points on a linear eigenlocus. I\ will define these properties in terms of geometric and statistical criteria.

\subsection{Linear Eigenlocus Transforms}

A\ high level description of linear eigenlocus transforms is outlined below. The high level description specifies essential geometric and statistical properties exhibited by the weighted extreme points on a linear eigenlocus.
\textbf{Linear eigenlocus transforms generate a locus of weighted extreme points that is a dual locus of likelihoods and principal eigenaxis components, where each weight specifies a class membership statistic and conditional density for an extreme point, and each weight determines the magnitude and the total allowed eigenenergy of an extreme vector.}

\begin{flushleft}
\textbf{Linear eigenlocus transforms choose each weight in a manner which ensures that:}
\end{flushleft}

\paragraph{Criterion $\mathbf{1}$}
Each conditional density of an extreme point describes the central location (expected value) and the spread (covariance) of the extreme point.

\paragraph{Criterion $\mathbf{2}$}
The extreme points are distributed over the locus of likelihoods in a symmetrically balanced and well-proportioned manner.

\paragraph{Criterion $\mathbf{3}$}
The total allowed eigenenergy possessed by each weighted extreme vector specifies the probability of observing the extreme point within a localized region.

\paragraph{Criterion $\mathbf{4}$}
The total allowed eigenenergies of the weighted extreme vectors are symmetrically balanced with each other about a center of total allowed eigenenergy.

\paragraph{Criterion $\mathbf{5}$}
The forces associated with risks and counter risks related to the weighted extreme points are symmetrically balanced with each other about a center of minimum risk.

\paragraph{Criterion $\mathbf{6}$}
The locus of principal eigenaxis components formed by weighted extreme vectors partitions any given feature space into congruent decision regions which are symmetrically partitioned by a linear decision boundary.

\paragraph{Criterion $\mathbf{7}$}
The locus of principal eigenaxis components is the focus of a linear decision boundary.

\paragraph{Criterion $\mathbf{8}$}
The locus of principal eigenaxis components formed by weighted extreme vectors satisfies the linear decision boundary in terms of a critical minimum eigenenergy.

\paragraph{Criterion $\mathbf{9}$}
The locus of likelihoods formed by weighted extreme points satisfies the linear decision boundary in terms of a minimum probability of decision error.

\paragraph{Criterion $\mathbf{10}$}
For data distributions that have dissimilar covariance matrices, the forces associated with counter risks and risks, within each of the congruent decision regions, are balanced with each other. For data distributions that have similar covariance matrices, the forces associated with counter risks within each of the congruent decision regions are equal to each other, and the forces associated with risks within each of the congruent decision regions are equal to each other.

\paragraph{Criterion $\mathbf{11}$}
For data distributions that have dissimilar covariance matrices, the eigenenergies associated with counter risks and the eigenenergies associated with risks, within each of the congruent decision regions, are balanced with each other. For data distributions that have similar covariance matrices, the eigenenergies associated with counter risks within each of the congruent decision regions are equal to each other, and the eigenenergies associated with risks within each of the congruent decision regions are equal to each other.

I\ will devise a system of data-driven, locus equations that determines likelihood ratios and decision boundaries which satisfy all of the above criteria. Linear eigenlocus discriminant functions satisfy a fundamental statistical property that I\ will call ``symmetrical balance.''
The statistical property of symmetrical balance is introduced next.

\subsection{Design of Balanced Fittings}

Learning machine architectures with $N$ free parameters have a learning capacity to fit $N$ data points, where curves or surfaces can be made to pass through every data point. However, highly flexible architectures with indefinite parameter sets overfit training data \citep{Wahba1987,Breiman1991,Geman1992,Barron1998,Boser1992,Gershenfeld1999,Duda2001,Hastie2001,Haykin2009}, whereas architectures with too few parameters underfit training data \citep{Guyon1992,Ivanciuc2007applications}.

I will show that fitting learning machine architectures to unknown discriminant functions of data involves the design of \emph{balanced fittings} for given sets of data points. But how do we define balanced fittings for learning machines? If we think in terms of classical interpolation methods, Fig. $\ref{Interpolation of Random Data Points}$ depicts a cartoon of an underfitting, overfitting, and balanced fitting of a given set of data points; Fig. $\ref{Interpolation of Random Data Points}$ also illustrates that classical interpolation methods provide ill-suited fittings for unknown functions of random data points.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure18.png}}
\caption{Illustration of the difficulties associated with fitting an unknown function to a collection of random data points using classical interpolation methods.}
\label{Interpolation of Random Data Points}
\end{figure}

I\ will devise a system of data-driven, locus equations that determines balanced fittings for learning machine architectures, where an unknown function is a linear or quadratic discriminant function. The general idea is outlined below.

\subsection{Statistical Property of Symmetrical Balance}

In this paper, I\ will design balanced fittings for learning machine architectures, for any given set of data points, in terms of a fundamental statistical property that I have named ``symmetrical balance.'' Generally speaking, symmetrical balance can be described as having an even distribution of ``weight'' or a similar ``load'' on equal sides of a centrally placed fulcrum. When a set of elements is arranged equally on either side of a central axis, the result is bilateral symmetry \citep{Jirousek1995}. Objects which exhibit bilateral symmetry look the same on both sides of a central axis or a midline.

The physical property of symmetrical balance involves a physical system in equilibrium, whereby opposing forces or influences of the system are balanced with each other.

\subsubsection{Physical Property of Symmetrical Balance}

The physical property of symmetrical balance involves sets of elements which are evenly or equally distributed over either side of an axis or a lever, where a fulcrum is placed directly under the center of the axis or the lever. Accordingly, symmetrical balance involves an axis or a lever in equilibrium, where different elements are equal or in correct proportions, relative to the center of an axis or a lever, such that the opposing forces and influences of a system are balanced with each other.

\subsubsection{General Machinery of a Fulcrum and a Lever}

As a practical example, consider the general machinery of a fulcrum and a lever, where a lever is any rigid object capable of turning about some fixed point called a fulcrum. If a fulcrum is placed directly under a lever's center of gravity, the lever will remain balanced.
Accordingly, the center of gravity is the point at which the entire weight of a lever is considered to be concentrated, so that if a fulcrum is placed at this point, the lever will remain in equilibrium. If a lever is of uniform dimensions and density, then the center of gravity is at the geometric center of the lever. Consider, for example, the playground device known as a seesaw or teeter-totter. The center of gravity is at the geometric center of a teeter-totter, which is where the fulcrum of a seesaw is located \citep{Asimov1966}.

I\ will use the idea of symmetrical balance, in terms of the general machinery of a fulcrum and a lever, to devise a system of data-driven, locus equations which determines the elegant, statistical balancing feat that is outlined next.

\subsection{An Elegant Statistical Balancing Feat}

I\ will devise a system of data-driven, locus equations that determines a dual locus of principal eigenaxis components and likelihoods, all of which satisfy the statistical property of symmetrical balance described in the above set of criteria. The dual locus provides an estimate of a principal eigenaxis that has symmetrically balanced distributions of eigenenergies on equal sides of a centrally placed fulcrum, which is located at its center of total allowed eigenenergy. The dual locus also provides an estimate of a parameter vector of likelihoods that has symmetrically balanced distributions of forces associated with risks and counter risks on equal sides of a centrally placed fulcrum, which is located at the center of risk. Thereby, a dual locus is in \emph{statistical equilibrium}.

\subsubsection{Statistical Equilibrium}

I\ will show that the total allowed eigenenergies possessed by the principal eigenaxis components on a dual locus are distributed over its axis in a symmetrically balanced and well-proportioned manner, such that the total allowed eigenenergies of a dual locus are symmetrically balanced with each other about its center of total allowed eigenenergy, which is at the geometric center of the dual locus. I will also demonstrate that the utility of the statistical balancing feat involves \emph{balancing} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ in the $Z_{1}$ decision region \emph{with} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ in the $Z_{2}$ decision region
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) \text{,}
\]
where the forces associated with risks and counter risks are related to extreme point positions and potential locations, such that the eigenenergy and the corresponding expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system are both minimized.
Figure $\ref{Dual Statistical Balancing Feat}$ illustrates that linear eigenlocus transforms routinely accomplish an elegant, statistical balancing feat in eigenspace which facilitates a surprising, statistical balancing feat in decision space $Z$.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure19.png}}
\caption{Linear eigenlocus transforms generate linear discriminant functions and linear decision boundaries which possess the statistical property of symmetrical balance, whereby a dual locus determines decision regions $Z_{1}$ and $Z_{2}$ that have symmetrically balanced forces associated with risks and counter risks: $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\protect\widehat{\Lambda}\left( \mathbf{x}\right) \right) :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $}
\label{Dual Statistical Balancing Feat}
\end{figure}

Linear eigenlocus transforms are generated by solving the inequality constrained optimization problem that is introduced next.

\subsection{Primal Problem of a Linear Eigenlocus}

Take any given collection of training data for a binary classification problem of the form
\[
\left( \mathbf{x}_{1},y_{1}\right) ,\ldots,\left( \mathbf{x}_{N},y_{N}\right) \in\mathbb{R}^{d}\times Y,\ Y=\left\{ \pm1\right\} \text{,}
\]
where feature vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are drawn from unknown, class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ and are identically distributed.

A linear eigenlocus $\boldsymbol{\tau}$ is estimated by solving an inequality constrained optimization problem
\begin{align}
\min\Psi\left( \boldsymbol{\tau}\right)  & =\left\Vert \boldsymbol{\tau}\right\Vert ^{2}/2+C/2\sum\nolimits_{i=1}^{N}\xi_{i}^{2}\text{,}\label{Primal Normal Eigenlocus}\\
\text{s.t. }y_{i}\left( \mathbf{x}_{i}^{T}\boldsymbol{\tau}+\tau_{0}\right)  & \geq1-\xi_{i},\ \xi_{i}\geq0,\ i=1,...,N\text{,}\nonumber
\end{align}
where $\boldsymbol{\tau}$ is a $d\times1$ constrained, primal linear eigenlocus which is a dual locus of likelihoods and principal eigenaxis components $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $, $\left\Vert \boldsymbol{\tau}\right\Vert ^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\tau}$, $\tau_{0}$ is a functional of $\boldsymbol{\tau}$, $C$ and $\xi_{i}$ are regularization parameters, and $y_{i}$ are class membership statistics: if $\mathbf{x}_{i}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i}\in\omega_{2}$, assign $y_{i}=-1$.
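As an illustrative aside, the primal problem in Eq. (\ref{Primal Normal Eigenlocus}) is a convex quadratic program and can be prototyped with an off-the-shelf solver. The following minimal sketch uses CVXPY on a made-up four-point training set; the data, labels, and the value of $C$ are assumptions for illustration only:
\begin{verbatim}
# Minimal sketch: the primal problem of Eq. (Primal Normal Eigenlocus)
# solved with CVXPY; toy data and C are illustrative assumptions.
import cvxpy as cp
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 2.0], [3.0, 2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
N, d = X.shape
C = 10.0

tau = cp.Variable(d)    # constrained, primal linear eigenlocus
tau0 = cp.Variable()    # functional of tau
xi = cp.Variable(N)     # slack regularization variables

objective = cp.Minimize(cp.sum_squares(tau) / 2
                        + (C / 2) * cp.sum_squares(xi))
constraints = [cp.multiply(y, X @ tau + tau0) >= 1 - xi, xi >= 0]
cp.Problem(objective, constraints).solve()
print(tau.value, tau0.value)  # ||tau||^2 is the total allowed eigenenergy
\end{verbatim}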
Equation (\ref{Primal Normal Eigenlocus}) is the primal problem of a linear eigenlocus, where the system of $N$ inequalities must be satisfied
\[
y_{i}\left( \mathbf{x}_{i}^{T}\boldsymbol{\tau}+\tau_{0}\right) \geq1-\xi_{i},\ \xi_{i}\geq0,\ i=1,...,N\text{,}
\]
such that a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ satisfies a critical minimum eigenenergy constraint
\begin{equation}
\gamma\left( \boldsymbol{\tau}\right) =\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}\label{Minimum Total Eigenenergy Primal Normal Eigenlocus}
\end{equation}
where $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ determines the minimum risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ of a linear classification system.

Solving the inequality constrained optimization problem in Eq. (\ref{Primal Normal Eigenlocus}) involves solving a dual optimization problem that determines the fundamental unknowns of Eq. (\ref{Primal Normal Eigenlocus}). Denote a Wolfe dual linear eigenlocus by $\boldsymbol{\psi}$ and the Lagrangian dual problem of $\boldsymbol{\psi}$ by $\max\Xi\left( \boldsymbol{\psi}\right) $. Let $\boldsymbol{\psi}$ be a Wolfe dual of $\boldsymbol{\tau}$ such that proper and effective strong duality relationships exist between the algebraic systems of $\min\Psi\left( \boldsymbol{\tau}\right) $ and $\max\Xi\left( \boldsymbol{\psi}\right) $. Thereby, let $\boldsymbol{\psi}$ be related to $\boldsymbol{\tau}$ in a symmetrical manner that specifies the locations of the principal eigenaxis components on $\boldsymbol{\tau}$.

The Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ is important for the following reasons.

\subsection{Why the Wolfe Dual Linear Eigenlocus Matters}

Duality relationships for Lagrange multiplier problems are based on the premise that it is the Lagrange multipliers which are the fundamental unknowns associated with a constrained problem. Dual methods solve an alternate problem, termed the dual problem, whose unknowns are the Lagrange multipliers of the first problem, termed the primal problem. Once the Lagrange multipliers are known, the solution to a primal problem can be determined \citep{Luenberger2003}.

\subsubsection{The Real Unknowns}

A constrained, primal linear eigenlocus is a dual locus of principal eigenaxis components and likelihoods formed by weighted extreme points, where each weight is specified by a class membership statistic and a scale factor. Each scale factor specifies a conditional density for a weighted extreme point on a locus of likelihoods, and each scale factor determines the magnitude and the eigenenergy of a weighted extreme vector on a locus of principal eigenaxis components. The main issue concerns how the scale factors are determined.

\subsubsection{The Fundamental Unknowns}

The fundamental unknowns are the scale factors of the principal eigenaxis components on a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$.
\subsection{Strong Dual Linear Eigenlocus Transforms}

For the problem of linear eigenlocus transforms, the Lagrange multipliers method introduces a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ of principal eigenaxis components, for which the Lagrange multipliers $\left\{ \psi_{i}\right\} _{i=1}^{N}$ are the magnitudes or lengths of a set of Wolfe dual principal eigenaxis components $\left\{ \psi_{i}\overrightarrow{\mathbf{e}}_{i}\right\} _{i=1}^{N}$, where $\left\{ \overrightarrow{\mathbf{e}}_{i}\right\} _{i=1}^{N}$ are non-orthogonal unit vectors, and finds extrema for the restriction of a primal linear eigenlocus $\boldsymbol{\tau}$ to a Wolfe dual eigenspace. Accordingly, the fundamental unknowns associated with Eq. (\ref{Primal Normal Eigenlocus}) are the magnitudes or lengths of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$.

\subsubsection{Strong Duality}

Because Eq. (\ref{Primal Normal Eigenlocus}) is a convex programming problem, the theorem for convex duality guarantees an equivalence and corresponding symmetry between a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ and its Wolfe dual $\boldsymbol{\psi}$ \citep{Nash1996,Luenberger2003}. Strong duality holds between the systems of locus equations denoted by $\min\Psi\left( \boldsymbol{\tau}\right) $ and $\max\Xi\left( \boldsymbol{\psi}\right) $, so that the duality gap between the constrained primal and the Wolfe dual linear eigenlocus solution is zero \citep{Luenberger1969,Nash1996,Fletcher2000,Luenberger2003}.

The Lagrangian dual problem of a Wolfe dual linear eigenlocus will be derived by means of the Lagrangian functional that is introduced next.

\subsection{The Lagrangian of the Linear Eigenlocus}

The inequality constrained optimization problem in Eq. (\ref{Primal Normal Eigenlocus}) is solved by using Lagrange multipliers $\psi_{i}\geq0$ and the Lagrangian functional
\begin{align}
L_{\Psi\left( \boldsymbol{\tau}\right) }\left( \boldsymbol{\tau}\mathbf{,}\tau_{0},\mathbf{\xi},\boldsymbol{\psi}\right)  & =\left\Vert \boldsymbol{\tau}\right\Vert ^{2}/2\label{Lagrangian Normal Eigenlocus}\\
& +C/2\sum\nolimits_{i=1}^{N}\xi_{i}^{2}\nonumber\\
& -\sum\nolimits_{i=1}^{N}\psi_{i}\nonumber\\
& \times\left\{ y_{i}\left( \mathbf{x}_{i}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}\right\} \nonumber
\end{align}
which is minimized with respect to the primal variables $\boldsymbol{\tau}$ and $\tau_{0}$ and is maximized with respect to the dual variables $\psi_{i}$.

The Karush-Kuhn-Tucker (KKT) conditions on the Lagrangian functional $L_{\Psi\left( \boldsymbol{\tau}\right) }$
\begin{equation}
\boldsymbol{\tau}-\sum\nolimits_{i=1}^{N}\psi_{i}y_{i}\mathbf{x}_{i}=0,\text{ \ }i=1,...,N\text{,}\label{KKTE1}
\end{equation}
\begin{equation}
\sum\nolimits_{i=1}^{N}\psi_{i}y_{i}=0,\text{ \ }i=1,...,N\text{,}\label{KKTE2}
\end{equation}
\begin{equation}
C\sum\nolimits_{i=1}^{N}\xi_{i}-\sum\nolimits_{i=1}^{N}\psi_{i}=0\text{,}\label{KKTE3}
\end{equation}
\begin{equation}
\psi_{i}\geq0,\text{ \ }i=1,...,N\text{,}\label{KKTE4}
\end{equation}
\begin{equation}
\psi_{i}\left[ y_{i}\left( \mathbf{x}_{i}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}\right] \geq0,\ i=1,...,N\text{,}\label{KKTE5}
\end{equation}
which can be found in \citep{Cortes1995,Burges1998,Cristianini2000,Scholkopf2002}, determine a system of data-driven, locus equations which are jointly satisfied by a constrained primal and a Wolfe dual linear eigenlocus.
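For reference, a minimal sketch of the standard convex-duality algebra behind the substitution performed below: using the stationarity conditions in Eqs (\ref{KKTE1}) and (\ref{KKTE2}), together with the component form $C\xi_{i}=\psi_{i}$ that follows from stationarity of $L_{\Psi\left( \boldsymbol{\tau}\right) }$ in each $\xi_{i}$ (the summed version of which is Eq. (\ref{KKTE3})), the Lagrangian functional reduces to
\begin{align*}
L_{\Psi\left( \boldsymbol{\tau}\right) } & =\sum\nolimits_{i=1}^{N}\psi_{i}-\frac{1}{2}\sum\nolimits_{i,j=1}^{N}\psi_{i}\psi_{j}y_{i}y_{j}\mathbf{x}_{i}^{T}\mathbf{x}_{j}+\frac{C}{2}\sum\nolimits_{i=1}^{N}\xi_{i}^{2}-\sum\nolimits_{i=1}^{N}\psi_{i}\xi_{i}\\
& =\sum\nolimits_{i=1}^{N}\psi_{i}-\frac{1}{2}\sum\nolimits_{i,j=1}^{N}\psi_{i}\psi_{j}y_{i}y_{j}\left[ \mathbf{x}_{i}^{T}\mathbf{x}_{j}+\delta_{ij}/C\right] \text{,}
\end{align*}
since $y_{i}^{2}=1$, which is the objective function of the quadratic programming problem stated next.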
I will define the manner in which the KKT conditions determine geometric and statistical properties exhibited by weighted extreme points on a Wolfe dual $\boldsymbol{\psi}$ and a constrained primal $\boldsymbol{\tau}$ linear eigenlocus. Thereby, I\ will demonstrate the manner in which the KKT conditions ensure that $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ jointly satisfy discrete and data-driven versions of the fundamental equations of binary classification for a classification system in statistical equilibrium.

The Lagrangian dual problem of a Wolfe dual linear eigenlocus is introduced next.

\subsection{Lagrangian Dual Problem of a Linear Eigenlocus}

The resulting expressions for a primal linear eigenlocus $\boldsymbol{\tau}$ in Eq. (\ref{KKTE1}) and a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ in Eq. (\ref{KKTE2}) are substituted into the Lagrangian functional $L_{\Psi\left( \boldsymbol{\tau}\right) }$ of Eq. (\ref{Lagrangian Normal Eigenlocus}) and simplified. This produces the Lagrangian dual problem of a Wolfe dual linear eigenlocus: a quadratic programming problem
\begin{equation}
\max\Xi\left( \boldsymbol{\psi}\right) =\sum\nolimits_{i=1}^{N}\psi_{i}-\sum\nolimits_{i,j=1}^{N}\psi_{i}\psi_{j}y_{i}y_{j}\frac{\left[ \mathbf{x}_{i}^{T}\mathbf{x}_{j}+\delta_{ij}/C\right] }{2}\label{Wolfe Dual Normal Eigenlocus}
\end{equation}
which is subject to the algebraic constraints $\sum\nolimits_{i=1}^{N}y_{i}\psi_{i}=0$ and $\psi_{i}\geq0$, where $\delta_{ij}$ is the Kronecker $\delta$ defined as unity for $i=j$ and $0$ otherwise.

Equation (\ref{Wolfe Dual Normal Eigenlocus}) can be written in vector notation by letting $\mathbf{Q}\triangleq\epsilon\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$ and $\widetilde{\mathbf{X}}\triangleq\mathbf{D}_{y}\mathbf{X}$, where $\mathbf{D}_{y}$ is an $N\times N$ diagonal matrix of class membership statistics (labels) $y_{i}$, and the $N\times d$ data matrix is $\mathbf{X}=\begin{pmatrix} \mathbf{x}_{1}, & \mathbf{x}_{2}, & \ldots, & \mathbf{x}_{N}\end{pmatrix}^{T}$. This produces the matrix version of the Lagrangian dual problem of a primal linear eigenlocus within its Wolfe dual eigenspace
\begin{equation}
\max\Xi\left( \boldsymbol{\psi}\right) =\mathbf{1}^{T}\boldsymbol{\psi}-\frac{\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}}{2}\label{Vector Form Wolfe Dual}
\end{equation}
which is subject to the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$ \citep{Reeves2009}.

Given the theorem for convex duality, it follows that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ is a dual locus of likelihoods and principal eigenaxis components $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $, where $\boldsymbol{\psi}$ exhibits a total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ that is symmetrically related to the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$: $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\simeq\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$.

\subsection{Loci of Constrained Quadratic Forms}

The representation of a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ within its Wolfe dual eigenspace involves the eigensystem of the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}$ in Eq.
(\ref{Vector Form Wolfe Dual}), where $\boldsymbol{\psi}$ is the principal eigenvector of $\mathbf{Q}$, such that $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$. I will demonstrate how the eigensystem in Eq. (\ref{Vector Form Wolfe Dual}) determines the manner in which the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}$ are symmetrically balanced with each other.

The standard quadratic form
\[
\mathbf{x}^{T}\mathbf{Ax}=1
\]
is a locus equation that determines $d$-dimensional circles, ellipses, hyperbolas, parabolas, lines, or points, where a symmetric matrix $\mathbf{A}$ specifies algebraic constraints that are satisfied by the points $\mathbf{x}$ on a given locus. The eigenvalues of $\mathbf{A}$ determine the shape of a given locus and the principal eigenvector of $\mathbf{A}$ is the major (principal) axis of a given locus \citep{Hewson2009}. Alternatively, principal eigenvectors of sample correlation and covariance matrices determine principal axes that describe the direction in which the data is the most variable or spread out \citep{Duda2001,Hastie2001,Jolliffe2002}.

I\ will demonstrate that Eqs (\ref{Wolfe Dual Normal Eigenlocus}) and (\ref{Vector Form Wolfe Dual}) determine a dual linear eigenlocus $\boldsymbol{\psi}$ which is in statistical equilibrium such that the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}$ are symmetrically balanced with each other about a center of total allowed eigenenergy. I will also demonstrate that the utility of the statistical balancing feat involves \emph{balancing} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ in the $Z_{1}$ decision region \emph{with} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ in the $Z_{2}$ decision region, where the forces associated with risks and counter risks are related to positions and potential locations of extreme points, such that the eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ and the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ of a discrete, linear classification system are both minimized.

I\ will now use the KKT conditions in Eqs (\ref{KKTE1}) and (\ref{KKTE4}) to derive the locus equation of a constrained, primal linear eigenlocus $\boldsymbol{\tau}$.

\subsection{The Constrained Primal Linear Eigenlocus}

Using the KKT\ conditions in Eqs (\ref{KKTE1}) and (\ref{KKTE4}), it follows that an estimate for $\boldsymbol{\tau}$ satisfies the following vector expression:
\begin{equation}
\boldsymbol{\tau}=\sum\nolimits_{i=1}^{N}y_{i}\psi_{i}\mathbf{x}_{i}\text{,}\label{Normal Eigenlocus Estimate}
\end{equation}
where the $y_{i}$ terms are training set labels, i.e., class membership statistics (if $\mathbf{x}_{i}$ is a member of class $\omega_{1}$, assign $y_{i}=1$; otherwise, assign $y_{i}=-1$), $\psi_{i}$ is a scale factor for $\mathbf{x}_{i}$, and the magnitude $\psi_{i}$ of each principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ on $\boldsymbol{\psi}$ is greater than or equal to zero: $\psi_{i}\geq0$.

The KKT condition in Eq.
The KKT condition in Eq. (\ref{KKTE4}) requires that the length $\psi_{i}$ of each principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ on $\boldsymbol{\psi}$ be either zero or positive: $\psi_{i}\geq0$. Any principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ which has zero length, where $\psi_{i}=0$, coincides with the origin $P_{\mathbf{0}}=\begin{pmatrix} 0, & 0, & \cdots, & 0\end{pmatrix}$ and is not on the Wolfe dual linear eigenlocus $\boldsymbol{\psi}$. It follows that the corresponding constrained, primal principal eigenaxis component $\psi_{i}\mathbf{x}_{i}$ also has zero length, i.e., $\left\Vert \psi_{i}\mathbf{x}_{i}\right\Vert =0$, and is not on the constrained, primal linear eigenlocus $\boldsymbol{\tau}$.

Data points $\mathbf{x}_{i}$ correlated with Wolfe dual principal eigenaxis components $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ that have non-zero magnitudes $\psi_{i}>0$ are termed extreme vectors. Accordingly, extreme vectors are unscaled, primal principal eigenaxis components on $\boldsymbol{\tau}$. Geometric and statistical properties of extreme vectors are outlined below.

\subsubsection{Properties of Extreme Vectors}

Take a collection of training data drawn from any two statistical distributions. An extreme point is defined to be a data point which exhibits a high variability of geometric location, that is, possesses a large covariance, such that it is located $(1)$ relatively far from its distribution mean, $(2)$ relatively close to the mean of the other distribution, and $(3)$ relatively close to other extreme points. Therefore, an extreme point is located somewhere within either an overlapping region or a tail region between the two data distributions.

Given the geometric and statistical properties exhibited by the locus of an extreme point, it follows that a set of extreme vectors determines principal directions of large covariance for a given collection of training data. Thus, extreme vectors are discrete principal components that specify directions for which a given collection of training data is most variable or spread out. Accordingly, the loci of a set of extreme vectors span a region of large covariance between two distributions of training data. Decision regions and risks and counter risks for overlapping and non-overlapping data distributions are defined next.

\paragraph{Overlapping Data Distributions}

For overlapping data distributions, the loci of the extreme vectors from each pattern class are distributed within bipartite, joint geometric regions of large covariance, both of which span the region of data distribution overlap. Therefore, decision regions that have significant risks are functions of overlapping intervals of probability density functions that determine numbers and locations of extreme points. Figure $\ref{Location Properties Extreme Data Points}$a depicts how extreme points from two pattern classes are located within bipartite, joint geometric regions of large variance that are located between two overlapping data distributions.

\paragraph{Non-overlapping Data Distributions}

For non-overlapping data distributions, the loci of the extreme vectors are distributed within bipartite, disjoint geometric regions of large covariance, i.e., separate tail regions, that are located between the data distributions. Because tail regions of distributions are determined by non-overlapping intervals of low likelihood, relatively few extreme points are located within tail regions.
Thereby, relatively few extreme points are located between non-overlapping data distributions. Thus, decision regions that have \emph{negligible or no risk} are functions of non-overlapping intervals of probability density functions that determine \emph{tail regions}. Figure $\ref{Location Properties Extreme Data Points}$b illustrates how small numbers of extreme points are located within the tail regions of non-overlapping, Gaussian data distributions.

\subsection{Primal Linear Eigenlocus Components}

All of the principal eigenaxis components on a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ are labeled, scaled extreme points in $\mathbb{R}^{d}$. Denote the labeled, scaled extreme vectors that belong to class $\omega_{1}$ and $\omega_{2}$ by $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ and $-\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$, with scale factors $\psi_{1_{i\ast}}$ and $\psi_{2_{i\ast}}$, extreme vectors $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$, and class membership statistics $y_{i}=1$ and $y_{i}=-1$, respectively. Let there be $l_{1}$ labeled, scaled extreme points $\left\{ \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ and $l_{2}$ labeled, scaled extreme points $\left\{ -\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$.

Given Eq. (\ref{Normal Eigenlocus Estimate}) and the assumptions outlined above, it follows that an estimate for a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ is based on the vector difference between a pair of constrained, primal linear eigenlocus components
\begin{align}
\boldsymbol{\tau} & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\label{Pair of Normal Eigenlocus Components}\\
& =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\text{,}\nonumber
\end{align}
where the constrained, primal linear eigenlocus components $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ and $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ are denoted by $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$, respectively. The sets of scaled extreme points $\left\{ \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$ on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ determine the loci of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ and therefore determine the dual locus of $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$. Figure $\ref{Primal Linear Eigenlocus in Wolfe Dual Eigenspace}$ depicts how the loci of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ determine the dual locus of $\boldsymbol{\tau}$.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure20.png}}
\caption{$\left( a\right) $ A constrained, primal linear eigenlocus $\boldsymbol{\tau}$ is determined by the vector difference $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ between a pair of constrained, primal linear eigenlocus components $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$. $\left( b\right) $ The scaled extreme points on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are endpoints of scaled extreme vectors that possess unchanged directions and eigen-balanced lengths.
\label{Primal Linear Eigenlocus in Wolfe Dual Eigenspace}}
\end{figure}
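The decomposition in Eq. (\ref{Pair of Normal Eigenlocus Components}) can be computed directly from the class memberships of the extreme vectors. A minimal sketch, under the same illustrative conventions as the previous listings:
\begin{verbatim}
import numpy as np

def tau_components(X, y, psi, tol=1e-8):
    """Return tau1 and tau2, where tau = tau1 - tau2 and each component
    is a sum of scaled extreme vectors from one pattern class."""
    ex = psi > tol
    pos = ex & (y > 0)                       # class omega_1 extreme vectors
    neg = ex & (y < 0)                       # class omega_2 extreme vectors
    tau1 = psi[pos] @ X[pos]
    tau2 = psi[neg] @ X[neg]
    return tau1, tau2
\end{verbatim}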
I will now define how the regularization parameters $C$ and $\xi_{i}$ in Eqs (\ref{Primal Normal Eigenlocus}) and (\ref{Vector Form Wolfe Dual}) affect the dual locus of $\boldsymbol{\tau}$. Regularization components are essential numerical ingredients in algorithms that involve inversions of data matrices \citep{Linz1979,Groetsch1984,Wahba1987,Groetsch1993,Hansen1998,Engl2000,Zhdanov2002,Linz2003}. Because linear eigenlocus transforms involve an inversion of the Gram matrix $\mathbf{Q}$ in Eq. (\ref{Vector Form Wolfe Dual}), some type of regularization is required for low rank Gram matrices \citep{Reeves2009,Reeves2011}.

\subsection{Weak Dual Linear Eigenlocus Transforms}

It has been shown that the number and the locations of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are considerably affected by the rank and eigenspectrum of $\mathbf{Q}$. In particular, low rank Gram matrices $\mathbf{Q}$ generate "weak dual" linear eigenlocus transforms that produce irregular, linear partitions of decision spaces \citep{Reeves2015resolving}. For example, given non-overlapping data distributions and low rank Gram matrices, weak dual linear eigenlocus transforms produce asymmetric, linear partitions that exhibit optimal generalization performance at the expense of unnecessary principal eigenaxis components, where \emph{all} of the training data is transformed into constrained, primal principal eigenaxis components. For overlapping data distributions, incomplete eigenspectra of low rank Gram matrices $\mathbf{Q}$ result in weak dual linear eigenlocus transforms which determine ill-formed, linear decision boundaries that exhibit substandard generalization performance \citep{Reeves2009,Reeves2011,Reeves2015resolving}. All of these problems are solved by the regularization method that is described next.

\subsubsection{Regularization of Linear Eigenlocus Transforms}

For any collection of $N$ training vectors of dimension $d$, where $d<N$, the Gram matrix $\mathbf{Q}$ has low rank. It has been shown that the regularized form of $\mathbf{Q}$, for which $\epsilon\ll1$ and $\mathbf{Q}\triangleq\epsilon\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$, ensures that $\mathbf{Q}$ has full rank and a complete eigenvector set, so that $\mathbf{Q}$ has a complete eigenspectrum. The regularization constant $C$ is related to the regularization parameter $\epsilon$ by $\epsilon=\frac{1}{C}$ \citep{Reeves2011}.

For $N$ training vectors of dimension $d$, where $d<N$, all of the regularization parameters $\left\{ \xi_{i}\right\} _{i=1}^{N}$ in Eq. (\ref{Primal Normal Eigenlocus}) and all of its derivatives are set equal to a very small value: $\xi_{i}=\xi\ll1$. The regularization constant $C$ is set equal to $\frac{1}{\xi}$: $C=\frac{1}{\xi}$.

For $N$ training vectors of dimension $d$, where $N<d$, all of the regularization parameters $\left\{ \xi_{i}\right\} _{i=1}^{N}$ in Eq. (\ref{Primal Normal Eigenlocus}) and all of its derivatives are set equal to zero: $\xi_{i}=\xi=0$. The regularization constant $C$ is set equal to infinity: $C=\infty$.
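The regularization rule stated above reduces to a few lines of code. The sketch below builds the regularized Gram matrix for the $d<N$ case and leaves $\mathbf{Q}$ unregularized when $N\leq d$; the default $C=100$ is an illustrative value satisfying $\xi=1/C\ll1$, not a prescribed constant.
\begin{verbatim}
import numpy as np

def regularized_gram(X, y, C=100.0):
    """Q = epsilon*I + X~X~' with epsilon = 1/C when d < N (low rank
    Gram matrix); epsilon = 0 (C = infinity) when N <= d."""
    N, d = X.shape
    X_tilde = y[:, None] * X
    Q = X_tilde @ X_tilde.T
    if d < N:
        Q += np.eye(N) / C                   # ridge gives a full eigenspectrum
    return Q
\end{verbatim}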
In the next section, I will devise locus equations that determine the manner in which a constrained, primal linear eigenlocus partitions any given feature space into congruent decision regions.

\section{Equations of a Linear Discriminant Function}

For data distributions that have similar covariance matrices, a constrained, primal linear eigenlocus is the primary basis of a linear discriminant function that implements optimal likelihood ratio tests. The manner in which the dual locus of $\boldsymbol{\tau}$ partitions a feature space is specified by the KKT condition in Eq. (\ref{KKTE5}) and the KKT condition of complementary slackness.

\subsection{KKT Condition of Complementary Slackness}

The KKT condition of complementary slackness requires that, for all constraints that are not active, where locus equations are \emph{ill-defined}
\[
y_{i}\left( \mathbf{x}_{i}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}>0
\]
because they are not satisfied as equalities, the corresponding magnitudes $\psi_{i}$ of the Wolfe dual principal eigenaxis components $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ must be zero: $\psi_{i}=0$. Accordingly, for each training point, at most one of the paired constraints can be slack: if the primal constraint is slack, i.e., holds as a strict inequality, then the dual constraint $\psi_{i}\geq0$ cannot be slack, so that $\psi_{i}=0$ \citep{Sundaram1996}.

Therefore, let there be $l$ active constraints, where $l=l_{1}+l_{2}$. Let $\xi_{i}=\xi=0$ or $\xi_{i}=\xi\ll1$. The theorem of Karush, Kuhn, and Tucker provides the guarantee that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ exists such that the following constraints are satisfied
\[
\left\{ \psi_{i\ast}>0\right\} _{i=1}^{l}\text{,}
\]
and the following locus equations are satisfied
\[
\psi_{i\ast}\left[ y_{i}\left( \mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}\right] =0,\ i=1,...,l\text{,}
\]
where $l$ Wolfe dual principal eigenaxis components $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}$ have non-zero magnitudes $\left\{ \psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}|\psi_{i\ast}>0\right\} _{i=1}^{l}$ \citep{Sundaram1996}. The above condition is known as the \emph{condition of complementary slackness}. So, in order for the constraint $\psi_{i\ast}>0$ to hold, the following locus equation must be satisfied:
\[
y_{i}\left( \mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}=0\text{.}
\]

Accordingly, let there be $l_{1}$ locus equations
\[
\mathbf{x}_{1_{i\ast}}^{T}\boldsymbol{\tau}+\tau_{0}+\xi_{i}=1,\ i=1,...,l_{1}\text{,}
\]
where $y_{i}=+1$, and let there be $l_{2}$ locus equations
\[
\mathbf{x}_{2_{i\ast}}^{T}\boldsymbol{\tau}+\tau_{0}-\xi_{i}=-1,\ i=1,...,l_{2}\text{,}
\]
where $y_{i}=-1$.

It follows that the linear discriminant function
\begin{equation}
D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0} \label{Discriminant Function}
\end{equation}
satisfies the set of constraints
\[
D_{0}\left( \mathbf{x}\right) =0\text{, }D_{+1}\left( \mathbf{x}\right) =+1\text{, and }D_{-1}\left( \mathbf{x}\right) =-1\text{,}
\]
where $D_{0}\left( \mathbf{x}\right) $ denotes a linear decision boundary, $D_{+1}\left( \mathbf{x}\right) $ denotes a linear decision border for the $Z_{1}$ decision region, and $D_{-1}\left( \mathbf{x}\right) $ denotes a linear decision border for the $Z_{2}$ decision region.

I will now show that the constraints on the linear discriminant function $D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determine three equations of symmetrical, linear partitioning curves or surfaces, where all of the points on all three linear loci reference the constrained, primal linear eigenlocus $\boldsymbol{\tau}$.
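The constraints $D_{0}\left( \mathbf{x}\right) =0$ and $D_{\pm1}\left( \mathbf{x}\right) =\pm1$ are easy to check numerically once $\boldsymbol{\tau}$ and $\tau_{0}$ are in hand. A minimal sketch, with illustrative names:
\begin{verbatim}
import numpy as np

def discriminant(X_new, tau, tau0):
    """Evaluate D(x) = tau'x + tau0 row-wise; sign(D) assigns the class,
    and D = 0, +1, -1 locate the boundary and the two borders."""
    D = X_new @ tau + tau0
    labels = np.where(D >= 0.0, 1, -1)       # omega_1 if D(x) > 0
    return D, labels
\end{verbatim}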
Returning to Eq. (\ref{Normal Form Normal Eigenaxis}), recall that the equation of a linear locus can be written as
\[
\frac{\mathbf{x}^{T}\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert }=\left\Vert \boldsymbol{\nu}\right\Vert \text{,}
\]
where the principal eigenaxis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert $ has length $1$ and points in the direction of a principal eigenvector $\boldsymbol{\nu}$, and $\left\Vert \boldsymbol{\nu}\right\Vert $ is the distance of a line, plane, or hyperplane to the origin. Any point $\mathbf{x}$ that satisfies the above equation is on the linear locus of points specified by $\boldsymbol{\nu}$, where all of the points $\mathbf{x}$ on the linear locus exclusively reference the principal eigenaxis $\boldsymbol{\nu}$.

I will now use Eq. (\ref{Normal Form Normal Eigenaxis}) and the constraints on the linear discriminant function in Eq. (\ref{Discriminant Function}) to devise locus equations that determine the manner in which a constrained, primal linear eigenlocus partitions any given feature space into congruent decision regions.

\subsection{Linear Eigenlocus Partitions of Feature Spaces}

I will now derive the locus equation of a linear decision boundary $D_{0}\left( \mathbf{x}\right) $.

\subsubsection{Equation of a Linear Decision Boundary $D_{0}\left( \mathbf{x}\right) $}

Given Eq. (\ref{Normal Form Normal Eigenaxis}) and the assumption that $D\left( \mathbf{x}\right) =0$, it follows that the linear discriminant function
\[
D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}
\]
can be written as
\begin{equation}
\frac{\mathbf{x}^{T}\boldsymbol{\tau}}{\left\Vert \boldsymbol{\tau}\right\Vert }=-\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }\text{,} \label{Decision Boundary}
\end{equation}
where $\frac{\left\vert \tau_{0}\right\vert }{\left\Vert \boldsymbol{\tau}\right\Vert }$ is the distance of a linear decision boundary $D_{0}\left( \mathbf{x}\right) $ to the origin. Therefore, any point $\mathbf{x}$ that satisfies Eq. (\ref{Decision Boundary}) is on the linear decision boundary $D_{0}\left( \mathbf{x}\right) $, and all of the points $\mathbf{x}$ on the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ exclusively reference the constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Thereby, the constrained, linear discriminant function $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies the boundary value of a linear decision boundary $D_{0}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0$.

I will now derive the locus equation of the linear decision border $D_{+1}\left( \mathbf{x}\right) $.

\subsubsection{Equation of the $D_{+1}\left( \mathbf{x}\right) $ Decision Border}

Given Eq. (\ref{Normal Form Normal Eigenaxis}) and the assumption that $D\left( \mathbf{x}\right) =1$, it follows that the linear discriminant function can be written as
\begin{equation}
\frac{\mathbf{x}^{T}\boldsymbol{\tau}}{\left\Vert \boldsymbol{\tau}\right\Vert }=-\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\text{,} \label{Decision Border One}
\end{equation}
where $\frac{\left\vert 1-\tau_{0}\right\vert }{\left\Vert \boldsymbol{\tau}\right\Vert }$ is the distance of the linear decision border $D_{+1}\left( \mathbf{x}\right) $ to the origin. Therefore, any point $\mathbf{x}$ that satisfies Eq.
(\ref{Decision Border One}) is on the linear decision border $D_{+1}\left( \mathbf{x}\right) $, and all of the points $\mathbf{x}$ on the linear decision border $D_{+1}\left( \mathbf{x}\right) $ exclusively reference the constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Thereby, the constrained, linear discriminant function $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies the boundary value of a linear decision border $D_{+1}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=1$.

I will now derive the locus equation of the linear decision border $D_{-1}\left( \mathbf{x}\right) $.

\subsubsection{Equation of the $D_{-1}\left( \mathbf{x}\right) $ Decision Border}

Given Eq. (\ref{Normal Form Normal Eigenaxis}) and the assumption that $D\left( \mathbf{x}\right) =-1$, it follows that the linear discriminant function can be written as
\begin{equation}
\frac{\mathbf{x}^{T}\boldsymbol{\tau}}{\left\Vert \boldsymbol{\tau}\right\Vert }=-\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\text{,} \label{Decision Border Two}
\end{equation}
where $\frac{\left\vert -1-\tau_{0}\right\vert }{\left\Vert \boldsymbol{\tau}\right\Vert }$ is the distance of the linear decision border $D_{-1}\left( \mathbf{x}\right) $ to the origin. Therefore, any point $\mathbf{x}$ that satisfies Eq. (\ref{Decision Border Two}) is on the linear decision border $D_{-1}\left( \mathbf{x}\right) $, and all of the points $\mathbf{x}$ on the linear decision border $D_{-1}\left( \mathbf{x}\right) $ exclusively reference the constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Thereby, the constrained, linear discriminant function $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies the boundary value of a linear decision border $D_{-1}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=-1$.

Given Eqs (\ref{Decision Boundary}) - (\ref{Decision Border Two}), it is concluded that the constrained, linear discriminant function $D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determines three, symmetrical, linear curves or surfaces, where all of the points on $D_{0}\left( \mathbf{x}\right) $, $D_{+1}\left( \mathbf{x}\right) $, and $D_{-1}\left( \mathbf{x}\right) $ exclusively reference the constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Moreover, it is concluded that the constrained, linear discriminant function $D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies boundary values for a linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and two linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $.

I will now use the locus equations of the linear decision borders to derive an expression for the distance between the decision borders.
\subsubsection{Distance Between the Linear Decision Borders}

Using Eqs (\ref{Decision Border One}) and (\ref{Decision Border Two}), it follows that the distance between the linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $
\begin{align}
D_{\left( D_{+1}\left( \mathbf{x}\right) -D_{-1}\left( \mathbf{x}\right) \right) } & =\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \label{Distance Between Decision Borders}\\
& -\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \nonumber\\
& =\frac{2}{\left\Vert \boldsymbol{\tau}\right\Vert }\nonumber
\end{align}
is equal to twice the inverted length of the constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Therefore, it is concluded that the span of the constrained geometric region between the linear decision borders is regulated by the statistic $2\left\Vert \boldsymbol{\tau}\right\Vert ^{-1}$.

I will now derive expressions for distances between the linear decision borders and the linear decision boundary.

\subsubsection{Distances Between Decision Borders and Boundary}

Using Eqs (\ref{Decision Boundary}) and (\ref{Decision Border One}), it follows that the distance between the linear decision border $D_{+1}\left( \mathbf{x}\right) $ and the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ is $\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }$
\begin{align}
D_{\left( D_{+1}\left( \mathbf{x}\right) -D_{0}\left( \mathbf{x}\right) \right) } & =\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \label{Symmetrical Distance Between Border One and Boundary}\\
& -\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \nonumber\\
& =\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\text{,}\nonumber
\end{align}
where the linear decision border $D_{+1}\left( \mathbf{x}\right) $ and the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ delineate a constrained geometric region $R_{1}$ in $\mathbb{R}^{d}$.

Using Eqs (\ref{Decision Boundary}) and (\ref{Decision Border Two}), it follows that the distance between the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and the linear decision border $D_{-1}\left( \mathbf{x}\right) $ is also $\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }$
\begin{align}
D_{\left( D_{0}\left( \mathbf{x}\right) -D_{-1}\left( \mathbf{x}\right) \right) } & =\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \label{Symmetrical Distance Between Border Two and Boundary}\\
& -\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\right) \nonumber\\
& =\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\text{,}\nonumber
\end{align}
where the linear decision border $D_{-1}\left( \mathbf{x}\right) $ and the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ delineate a constrained geometric region $R_{2}$ in $\mathbb{R}^{d}$. It follows that the constrained geometric region $R_{1}$ between the linear decision border $D_{+1}\left( \mathbf{x}\right) $ and the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ is congruent to the constrained geometric region $R_{2}$ between the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and the linear decision border $D_{-1}\left( \mathbf{x}\right) $, i.e., $R_{1}\cong R_{2}$.
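These distances are immediate functions of $\left\Vert \boldsymbol{\tau}\right\Vert $ and $\tau_{0}$, as the following sketch records (illustrative names again):
\begin{verbatim}
import numpy as np

def partition_geometry(tau, tau0):
    """Distances of the boundary and borders to the origin, together with
    the span 2/||tau|| between the borders and the width 1/||tau|| of
    each congruent region R1 and R2."""
    norm = np.linalg.norm(tau)
    return {"boundary_to_origin": abs(tau0) / norm,
            "border_plus_to_origin": abs(1.0 - tau0) / norm,
            "border_minus_to_origin": abs(-1.0 - tau0) / norm,
            "border_to_border": 2.0 / norm,
            "border_to_boundary": 1.0 / norm}
\end{verbatim}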
The equal distance of $\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }$ between each linear decision border and the linear decision boundary reveals that the bilateral symmetry exhibited by the linear decision borders along the linear decision boundary is regulated by the inverted length $\left\Vert \boldsymbol{\tau}\right\Vert ^{-1}$ of $\boldsymbol{\tau}$. Therefore, it is concluded that the spans of the congruent geometric regions $R_{1}\cong R_{2}$ delineated by the linear decision boundary of Eq. (\ref{Decision Boundary}) and the linear decision borders of Eqs (\ref{Decision Border One}) and (\ref{Decision Border Two}) are controlled by the statistic $\left\Vert \boldsymbol{\tau}\right\Vert ^{-1}$.

\subsection{Eigenaxis of Symmetry}

It has been shown that a constrained, linear discriminant function $D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determines three, symmetrical, linear partitioning curves or surfaces, where all of the points on a linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $ exclusively reference a constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Using Eqs (\ref{Distance Between Decision Borders}) - (\ref{Symmetrical Distance Between Border Two and Boundary}), it follows that $\boldsymbol{\tau}$ is an eigenaxis of symmetry which delineates congruent decision regions $Z_{1}\cong Z_{2}$ that are symmetrically partitioned by a linear decision boundary, where the span of both decision regions is regulated by the inverted length $\left\Vert \boldsymbol{\tau}\right\Vert ^{-1}$ of $\boldsymbol{\tau}$.

\subsubsection{New Notation and Terminology}

I will show that the \emph{constrained}, linear eigenlocus discriminant function $D\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determines a discrete, linear \emph{classification system} $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is the \emph{likelihood ratio} of the classification system. Define the \emph{focus} of the linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to be an equilibrium \emph{point} that defines linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $ located at equal distances from a linear decision boundary $D_{0}\left( \mathbf{x}\right) $.

Therefore, let $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ denote a linear eigenlocus discriminant function and let $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ denote the likelihood ratio of the linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, which is a likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. The likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is said to be the primary \emph{focus} of the linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$.
Figure $\ref{Decision Space for Linear Eigenlocus Transforms}$ illustrates how a constrained, linear discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determines three, symmetrical, linear partitioning curves or surfaces which delineate congruent decision regions $Z_{1}\cong Z_{2}$ that will be shown to have symmetrically balanced forces associated with counter risks and risks: $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure21.png}}
\caption{Linear eigenlocus transforms generate a dual locus of principal eigenaxis components and likelihoods $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$: the basis of a linear classification system $\protect\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ which determines congruent decision regions $Z_{1}\protect\cong Z_{2}$ that will be shown to have symmetrically balanced forces associated with counter risks and risks: $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $.
\label{Decision Space for Linear Eigenlocus Transforms}}
\end{figure}

Let $Z$ denote the decision space determined by the decision regions $Z_{1}$ and $Z_{2}$, where $Z\subset\mathbb{R}^{d}$, $Z=Z_{1}+Z_{2}$, $Z_{1}\cong Z_{2}$, and $Z_{1}$ and $Z_{2}$ are contiguous.

I will now demonstrate that the distance between the loci of the constrained, primal linear eigenlocus components $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ regulates the span of the decision space $Z$ between the linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $.

\subsection{Regulation of the Decision Space $Z$}

Substitution of the expression for $\boldsymbol{\tau}$ in Eq. (\ref{Pair of Normal Eigenlocus Components}) into Eq. (\ref{Distance Between Decision Borders}) provides an expression for the span of the decision space $Z$ between the linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $
\begin{equation}
Z\propto\frac{2}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\text{,} \label{Width of Linear Decision Region}
\end{equation}
where the constrained width of the decision space $Z$ is equal to twice the inverted magnitude of the vector difference of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$. Thus, the span of the decision space $Z$ between the linear borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $ is inversely proportional to the distance between the loci of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$. Therefore, the distance between the linear decision borders is regulated by the magnitudes and the directions of the constrained, primal linear eigenlocus components $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ on $\boldsymbol{\tau}$. Using the algebraic and geometric relationships in Eq.
(\ref{Inner Product Statistic}) depicted in Fig. $\ref{Second-order Distance Statisitcs}$a, it follows that the span of the decision space $Z$ is regulated by the statistic $2\left( \left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\right) ^{-1}$, where $\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}$ is the angle between $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$.

\subsubsection{Regulation of the Decision Regions $Z_{1}$ and $Z_{2}$}

The distance between the loci of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ also regulates the span of the decision regions between the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and the linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $.

Substitution of the expression for $\boldsymbol{\tau}$ in Eq. (\ref{Pair of Normal Eigenlocus Components}) into Eq. (\ref{Symmetrical Distance Between Border One and Boundary}) provides an expression for the span of the decision region $Z_{1}$ between the linear decision border $D_{+1}\left( \mathbf{x}\right) $ and the linear decision boundary $D_{0}\left( \mathbf{x}\right) $
\begin{align}
Z_{1} & \propto\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\right) \label{Large Covariance Region One}\\
& -\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\right) \nonumber\\
& =\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\text{,}\nonumber
\end{align}
where the span of the decision region $Z_{1}$ satisfies the statistic $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert ^{-1}$. The span of the decision region $Z_{2}$ between the linear decision boundary $D_{0}\left( \mathbf{x}\right) $ and the linear decision border $D_{-1}\left( \mathbf{x}\right) $
\begin{align}
Z_{2} & \propto\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\right) \label{Large Covariance Region Two}\\
& -\left( -\frac{\tau_{0}}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\right) \nonumber\\
& =\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\nonumber
\end{align}
also satisfies the statistic $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert ^{-1}$.

Therefore, the span of the congruent decision regions $Z_{1}\cong Z_{2}$ between the linear decision boundary and the linear decision borders is inversely proportional to the magnitude of the vector difference of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$
\[
\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }\text{,}
\]
where $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert $ is the distance between the loci of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$. Thereby, the widths of the decision regions $Z_{1}$ and $Z_{2}$ are regulated by the statistic $\frac{1}{\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert }$.

\subsection{The Linear Eigenlocus Test}

I will now derive a statistic for the $\tau_{0}$ term in Eq. (\ref{Discriminant Function}). I will use the statistic to derive a likelihood statistic that is the basis of a linear eigenlocus decision rule.
\subsubsection{Estimate for the $\tau_{0}$ Term}

Using the KKT condition in Eq. (\ref{KKTE5}) and the KKT condition of complementary slackness, it follows that the following set of locus equations must be satisfied:
\[
y_{i}\left( \mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}=0,\ i=1,...,l\text{,}
\]
such that an estimate for $\tau_{0}$ satisfies the statistic
\begin{align}
\tau_{0} & =\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}\label{Normal Eigenlocus Projection Factor}\\
& =\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\left( \sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\boldsymbol{\tau}\text{.}\nonumber
\end{align}

I will now use the statistic for $\tau_{0}$ to derive a vector expression for a linear eigenlocus test that is used to classify unknown pattern vectors. Let $\widehat{\mathbf{x}}_{i\ast}\triangleq\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}$. Substitution of the statistic for $\tau_{0}$ in Eq. (\ref{Normal Eigenlocus Projection Factor}) into the expression for the discriminant function in Eq. (\ref{Discriminant Function}) provides a linear eigenlocus test $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ for classifying an unknown pattern vector $\mathbf{x}$
\begin{align}
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}\label{NormalEigenlocusTestStatistic}\\
& +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}\nonumber
\end{align}
where the statistic $\widehat{\mathbf{x}}_{i\ast}$ is the locus of the aggregate or cluster $\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}$ of a set of $l$ extreme points, and the statistic $\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $ accounts for the class membership of the primal principal eigenaxis components on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$.

\subsubsection{Locus of Aggregated Risk $\protect\widehat{\mathfrak{R}}$}

The cluster $\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}$ of a set of extreme points represents the aggregated risk $\widehat{\mathfrak{R}}$ for a decision space $Z$. Accordingly, the vector transform $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ accounts for the distance between the unknown vector $\mathbf{x}$ and the locus of aggregated risk $\widehat{\mathfrak{R}}$.

I will now derive a vector expression that provides geometric insight into how the linear test $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{NormalEigenlocusTestStatistic}) makes a decision.
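A numerical sketch of Eqs (\ref{Normal Eigenlocus Projection Factor}) and (\ref{NormalEigenlocusTestStatistic}), assuming $\xi_{i}=\xi$ for all $i$ as above; the names are illustrative:
\begin{verbatim}
import numpy as np

def eigenlocus_test(X, y, psi, tau, xi=0.0, tol=1e-8):
    """Estimate tau0 and return the test Lambda(x) = (x - x_hat)'tau + delta,
    where x_hat aggregates the l extreme points and delta = sum y_i(1 - xi)."""
    ex = psi > tol
    x_hat = X[ex].sum(axis=0)                # locus of aggregated risk
    delta = float(np.sum(y[ex]) * (1.0 - xi))
    tau0 = delta - x_hat @ tau               # projection-factor estimate
    return tau0, lambda x: (x - x_hat) @ tau + delta
\end{verbatim}
Note that $\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}+\delta\left( y\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$, so the returned test agrees with Eq. (\ref{Discriminant Function}).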
\subsection{Linear Decision Locus}

Denote a unit linear eigenlocus $\boldsymbol{\tau}/\left\Vert \boldsymbol{\tau}\right\Vert $ by $\widehat{\boldsymbol{\tau}}$. Letting $\boldsymbol{\tau}=\boldsymbol{\tau}/\left\Vert \boldsymbol{\tau}\right\Vert $ in Eq. (\ref{NormalEigenlocusTestStatistic}) provides an expression for a decision locus
\begin{align}
\widehat{D}\left( \mathbf{x}\right) & =\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}/\left\Vert \boldsymbol{\tau}\right\Vert \label{Statistical Locus of Category Decision}\\
& +\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \nonumber
\end{align}
which is determined by the scalar projection of $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ onto $\widehat{\boldsymbol{\tau}}$. Accordingly, the component of $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ along $\widehat{\boldsymbol{\tau}}$ specifies a signed magnitude $\left\Vert \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right\Vert \cos\theta$ along the axis of $\widehat{\boldsymbol{\tau}}$, where $\theta$ is the angle between the transformed vector $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ and $\widehat{\boldsymbol{\tau}}$. It follows that the component $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ of the vector transform $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ of an unknown pattern vector $\mathbf{x}$ along the axis of a unit linear eigenlocus $\widehat{\boldsymbol{\tau}}$
\[
P_{\widehat{D}\left( \mathbf{x}\right) }=\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) =\left\Vert \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right\Vert \cos\theta
\]
specifies a locus $P_{\widehat{D}\left( \mathbf{x}\right) }$ of a category decision, where $P_{\widehat{D}\left( \mathbf{x}\right) }$ is at a distance of $\left\Vert \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right\Vert \cos\theta$ from the origin, along the axis of a linear eigenlocus $\boldsymbol{\tau}$.

Figure $\ref{Statistical Decision Locus}$ depicts a decision locus generated by the linear eigenlocus test $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in Eq. (\ref{NormalEigenlocusTestStatistic}).
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure22.png}}
\caption{Illustration of a statistical decision locus $P_{\protect\widehat{D}\left( \mathbf{x}\right) }$ for an unknown, transformed pattern vector $\mathbf{x}-E[\mathbf{x}_{i\ast}]$ that is projected onto $\boldsymbol{\tau}/\left\Vert \boldsymbol{\tau}\right\Vert $.
\label{Statistical Decision Locus}}
\end{figure}

The above expression for a decision locus $P_{\widehat{D}\left( \mathbf{x}\right) }$ provides geometric insight into how the linear discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ in Eq. (\ref{Discriminant Function}) assigns an unknown pattern vector to a pattern class. Using Eqs (\ref{Decision Boundary}), (\ref{Decision Border One}), (\ref{Decision Border Two}), and (\ref{Statistical Locus of Category Decision}), it follows that the linear discriminant function in Eq. (\ref{Discriminant Function}) generates a decision locus $P_{\widehat{D}\left( \mathbf{x}\right) }$ which lies in a geometric region that is either $\left( 1\right) $ inside or bordering one of the decision regions $Z_{1}$ and $Z_{2}$ depicted in Fig.
$\ref{Decision Space for Linear Eigenlocus Transforms}$, $\left( 2\right) $ on the other side of the linear decision border $D_{+1}\left( \mathbf{x}\right) $, where $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=+1$, or $\left( 3\right) $ on the other side of the linear decision border $D_{-1}\left( \mathbf{x}\right) $, where $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=-1$.

Using Eqs (\ref{NormalEigenlocusTestStatistic}) and (\ref{Statistical Locus of Category Decision}), it follows that the linear discriminant function
\[
D\left( \mathbf{x}\right) =\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) +\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
where $\boldsymbol{\tau}=\boldsymbol{\tau}/\left\Vert \boldsymbol{\tau}\right\Vert $, generates an output based on the decision locus $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ and the class membership statistic $\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $.

\subsubsection{Linear Decision Threshold}

Returning to Eq. (\ref{General Form of Decision Function II}), recall that an optimal decision function computes the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ for a feature vector $\mathbf{x}$ and makes a decision by comparing the ratio $\Lambda\left( \mathbf{x}\right) $ to the threshold $\eta=0$. Given Eqs (\ref{Decision Boundary}) and (\ref{Statistical Locus of Category Decision}), it follows that a linear eigenlocus test $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ makes a decision by comparing the output
\[
\operatorname{sign}\left( \operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) +\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \right) \text{,}
\]
where $\operatorname{sign}\left( x\right) \equiv\frac{x}{\left\vert x\right\vert }$ for $x\neq0$, to a threshold $\eta$ along the axis of $\widehat{\boldsymbol{\tau}}$ in $\mathbb{R}^{d}$, where $\eta=0$.

\subsection{Linear Eigenlocus Decision Rules}

Substitution of the expression for $\boldsymbol{\tau}$ in Eq. (\ref{Pair of Normal Eigenlocus Components}) into Eq.
(\ref{NormalEigenlocusTestStatistic}) provides a linear eigenlocus test in terms of the primal eigenlocus components $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$
\begin{align}
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\left( \mathbf{x}-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\boldsymbol{\tau}_{1}\label{NormalEigenlocusTestStatistic2}\\
& -\left( \mathbf{x}-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\boldsymbol{\tau}_{2}\nonumber\\
& +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{.}\nonumber
\end{align}

I will show that a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ and its Wolfe dual $\boldsymbol{\psi}$ possess an essential statistical property which enables linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ to satisfy a discrete and data-driven version of the fundamental integral equation of binary classification
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ and risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ within the $Z_{1}$ and $Z_{2}$ decision regions are symmetrically balanced with each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& =\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}
\end{align*}
by means of an integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int\nolimits_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}\\
& =\int\nolimits_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\end{align*}
where $p\left(
\mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ and $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ are class-conditional densities for respective extreme points $\mathbf{x}_{2_{i\ast}}$ and $\mathbf{x}_{1_{i\ast}}$, and $C_{1}$ and $C_{2}$ are integration constants for $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$, respectively. I will identify this essential statistical property after I identify the fundamental properties possessed by a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$.

\section{The Wolfe Dual Eigenspace I}

Let there be $l$ principal eigenaxis components $\left\{ \psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}|\psi_{i\ast}>0\right\} _{i=1}^{l}$ on a constrained, primal linear eigenlocus within its Wolfe dual eigenspace
\[
\max\Xi\left( \boldsymbol{\psi}\right) =\mathbf{1}^{T}\boldsymbol{\psi}-\frac{\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}}{2}\text{,}
\]
where the Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ satisfies the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i\ast}>0$. Quadratic forms $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}$ determine five classes of quadratic surfaces that include $N$-dimensional circles, ellipses, hyperbolas, parabolas, and lines \citep{Hewson2009}. I will now use Rayleigh's principle \citep[see][]{Strang1986} and the theorem for convex duality to define the geometric essence of $\boldsymbol{\psi}$.

Rayleigh's principle guarantees that the Rayleigh quotient
\[
r\left( \mathbf{Q},\mathbf{x}\right) =\frac{\mathbf{x}^{T}\mathbf{Qx}}{\mathbf{x}^{T}\mathbf{x}}
\]
is maximized by the largest eigenvector $\mathbf{x}_{1}$, with its maximal value equal to the largest eigenvalue $\lambda_{1}=\underset{0\neq\mathbf{x}\in\mathbb{R}^{N}}{\max}r\left( \mathbf{Q},\mathbf{x}\right) $. Rayleigh's principle can be used to find principal eigenvectors $\mathbf{x}_{1}$ which satisfy additional constraints such as $a_{1}x_{1}+\cdots+a_{N}x_{N}=c$, for which
\[
\lambda_{1}=\underset{a_{1}x_{1}+\cdots+a_{N}x_{N}=c}{\max}r\left( \mathbf{Q},\mathbf{x}\right) \text{.}
\]
The theorem for convex duality guarantees an equivalence and corresponding symmetry between a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ and its Wolfe dual $\boldsymbol{\psi}$.

Rayleigh's principle and the theorem for convex duality jointly indicate that Eq. (\ref{Vector Form Wolfe Dual}) provides an estimate of the largest eigenvector $\boldsymbol{\psi}$ of a Gram matrix $\mathbf{Q}$ for which $\boldsymbol{\psi}$ satisfies the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$, such that $\boldsymbol{\psi}$ is a principal eigenaxis of three, symmetrical hyperplane partitioning surfaces associated with the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}$.

I will now show that maximization of the functional $\mathbf{1}^{T}\boldsymbol{\psi}-\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$ requires that $\boldsymbol{\psi}$ satisfy an eigenenergy constraint which is symmetrically related to the restriction of the primal linear eigenlocus $\boldsymbol{\tau}$ to its Wolfe dual eigenspace.
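Rayleigh's principle is straightforward to verify numerically for a symmetric Gram matrix: the eigenvector belonging to the largest eigenvalue maximizes the Rayleigh quotient. A minimal sketch:
\begin{verbatim}
import numpy as np

def rayleigh_max(Q):
    """Return the largest eigenvalue lambda_1 of the symmetric matrix Q and
    its eigenvector, which maximizes r(Q, x) = x'Qx / x'x."""
    eigvals, eigvecs = np.linalg.eigh(Q)     # eigenvalues in ascending order
    lam1, v1 = eigvals[-1], eigvecs[:, -1]
    assert np.isclose(v1 @ Q @ v1 / (v1 @ v1), lam1)
    return lam1, v1
\end{verbatim}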
\subsection{Eigenenergy Constraint on $\boldsymbol{\psi}$}

Equation (\ref{Minimum Total Eigenenergy Primal Normal Eigenlocus}) and the theorem for convex duality jointly indicate that $\boldsymbol{\psi}$ satisfies a critical minimum eigenenergy constraint that is symmetrically related to the critical minimum eigenenergy constraint on $\boldsymbol{\tau}$:
\[
\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\cong\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{.}
\]
Therefore, a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ satisfies a critical minimum eigenenergy constraint
\[
\max\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}
\]
for which the functional $\mathbf{1}^{T}\boldsymbol{\psi}-\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$ in Eq. (\ref{Vector Form Wolfe Dual}) is maximized by the largest eigenvector $\boldsymbol{\psi}$ of $\mathbf{Q}$, such that the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$, where $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$, reaches its smallest possible value. This indicates that principal eigenaxis components on $\boldsymbol{\psi}$ satisfy minimum length constraints. Principal eigenaxis components on a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ also satisfy an equilibrium constraint.

\subsection{Equilibrium Constraint on $\boldsymbol{\psi}$}

The KKT condition in Eq. (\ref{KKTE2}) requires that the magnitudes of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ satisfy the equation
\[
\left( y_{i}=1\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}+\left( y_{i}=-1\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}=0
\]
so that
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}=0\text{.} \label{Wolfe Dual Equilibrium Point}
\end{equation}
It follows that the \emph{integrated lengths} of the Wolfe dual principal eigenaxis components correlated with each pattern category must \emph{balance} each other:
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{.} \label{Equilibrium Constraint on Dual Eigen-components}
\end{equation}

Accordingly, let $l_{1}+l_{2}=l$ and express a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ in terms of $l$ non-orthogonal unit vectors $\left\{ \overrightarrow{\mathbf{e}}_{1\ast},\ldots,\overrightarrow{\mathbf{e}}_{l\ast}\right\} $
\begin{align}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\label{Wolfe Dual Vector Equation}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}\nonumber\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}\nonumber
\end{align}
where each scaled, non-orthogonal unit vector $\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}$ or $\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}$ is a \emph{displacement vector} that is correlated with an $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ extreme vector respectively, $\boldsymbol{\psi}_{1}$ denotes the Wolfe dual eigenlocus component $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}$, and $\boldsymbol{\psi}_{2}$ denotes the Wolfe dual eigenlocus component $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}$.
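The equilibrium constraint of Eq. (\ref{Equilibrium Constraint on Dual Eigen-components}) follows from $\boldsymbol{\psi}^{T}\mathbf{y}=0$ with $\psi_{i}\geq0$, and is easy to verify on a solved $\boldsymbol{\psi}$. A minimal check, with illustrative names:
\begin{verbatim}
import numpy as np

def equilibrium_check(y, psi, tol=1e-8):
    """Verify that the sum of psi over class omega_1 balances the sum
    over class omega_2, i.e., the Wolfe dual equilibrium point."""
    ex = psi > tol
    s1 = psi[ex & (y > 0)].sum()             # integrated length, class omega_1
    s2 = psi[ex & (y < 0)].sum()             # integrated length, class omega_2
    return s1, s2, np.isclose(s1, s2)
\end{verbatim}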
I will demonstrate that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ is a displacement vector that accounts for directions and magnitudes of $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ extreme vectors.

Given Eq. (\ref{Equilibrium Constraint on Dual Eigen-components}) and data distributions that have dissimilar covariance matrices, it follows that the forces associated with counter risks and risks, within each of the congruent decision regions, are balanced with each other. Given Eq. (\ref{Equilibrium Constraint on Dual Eigen-components}) and data distributions that have similar covariance matrices, it follows that the forces associated with counter risks within each of the congruent decision regions are equal to each other, and the forces associated with risks within each of the congruent decision regions are equal to each other.

Given Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}) and (\ref{Wolfe Dual Vector Equation}), it follows that the axis of a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ can be regarded as a lever that is formed by \emph{sets of principal eigenaxis components which are evenly or equally distributed over either side of the axis of $\boldsymbol{\psi}$, where a fulcrum is placed directly under the center of the axis of $\boldsymbol{\psi}$}. Thereby, the axis of $\boldsymbol{\psi}$ is in statistical equilibrium, where all of the principal eigenaxis components on $\boldsymbol{\psi}$ are equal or in correct proportions, relative to the center of $\boldsymbol{\psi}$, such that the opposing forces associated with risks and counter risks of a linear classification system are balanced with each other. Figure $\ref{Linear Dual Locus in Statistical Equilibrium}$ illustrates the axis of $\boldsymbol{\psi}$ in statistical equilibrium.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure23.png}}
\caption{All of the principal eigenaxis components on $\boldsymbol{\psi}$ have equal or correct proportions, relative to the center of $\boldsymbol{\psi}$, so that opposing forces associated with risks and counter risks are symmetrically balanced with each other.
\label{Linear Dual Locus in Statistical Equilibrium}}
\end{figure}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}) and (\ref{Wolfe Dual Vector Equation}), it follows that the length $\left\Vert \boldsymbol{\psi}_{1}\right\Vert $ of the vector $\boldsymbol{\psi}_{1}$ is balanced with the length $\left\Vert \boldsymbol{\psi}_{2}\right\Vert $ of the vector $\boldsymbol{\psi}_{2}$
\begin{equation}
\left\Vert \boldsymbol{\psi}_{1}\right\Vert \rightleftharpoons\left\Vert \boldsymbol{\psi}_{2}\right\Vert \text{,} \label{Equilibrium Constraint on Dual Component Lengths}
\end{equation}
and that the total allowed eigenenergies exhibited by $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are balanced with each other
\begin{equation}
\left\Vert \boldsymbol{\psi}_{1}\right\Vert _{\min_{c}}^{2}\rightleftharpoons\left\Vert \boldsymbol{\psi}_{2}\right\Vert _{\min_{c}}^{2}\text{.} \label{Symmetrical Balance of Wolf Dual Eigenenergies}
\end{equation}
Therefore, the equilibrium constraint on $\boldsymbol{\psi}$ in Eq.
(\ref{Equilibrium Constraint on Dual Eigen-components}) ensures that the total allowed eigenenergies exhibited by the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are symmetrically balanced with each other
\[
\left\Vert \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}\rightleftharpoons\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}
\]
about the center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$, which is located at the geometric center of $\boldsymbol{\psi}$ because $\left\Vert \boldsymbol{\psi}_{1}\right\Vert \equiv\left\Vert \boldsymbol{\psi}_{2}\right\Vert $. This indicates that the total allowed eigenenergies of $\boldsymbol{\psi}$ are distributed over its axis in a symmetrically balanced and well-proportioned manner.

\subsection{Symmetrical Balance Exhibited by the Axis of $\boldsymbol{\psi}$}

Given Eqs (\ref{Equilibrium Constraint on Dual Component Lengths}) and (\ref{Symmetrical Balance of Wolf Dual Eigenenergies}), it follows that the axis of a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ can be regarded as a lever that has \emph{equal weight on equal sides of a centrally placed fulcrum}. Thereby, the axis of $\boldsymbol{\psi}$ is a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum.

Later on, I will show that symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are symmetrically distributed over the axes of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and the unconstrained, correlated primal principal eigenaxis components (the extreme vectors) on $\boldsymbol{\tau}$. Figure $\ref{Symmetrical Balance of Wolfe Dual Linear Eigenlocus}$ depicts how the axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the geometric center, i.e., the critical minimum eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure24.png}}
\caption{The axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the center of the total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$.
\label{Symmetrical Balance of Wolfe Dual Linear Eigenlocus}}
\end{figure}

The eigenspectrum of $\mathbf{Q}$ plays a fundamental role in describing the hyperplane surfaces associated with $\boldsymbol{\psi}$. I will now demonstrate that the eigenspectrum of $\mathbf{Q}$ determines the shapes of the quadratic surfaces which are specified by the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}).

\subsection{Eigenspectrum Shaping of Quadratic Surfaces}

Take the standard equation of a quadratic form: $\mathbf{x}^{T}\mathbf{Qx}=1$. Write $\mathbf{x}$ in terms of an orthogonal basis of unit eigenvectors $\left\{ \mathbf{v}_{1},\ldots,\mathbf{v}_{N}\right\} $ so that $\mathbf{x}=\sum\nolimits_{i=1}^{N}x_{i}\mathbf{v}_{i}$.
Substitution of this expression into $\mathbf{x}^{T}\mathbf{Qx}$
\[
\mathbf{x}^{T}\mathbf{Qx}=\left( \sum\nolimits_{i=1}^{N}x_{i}\mathbf{v}_{i}\right)^{T}\mathbf{Q}\left( \sum\nolimits_{j=1}^{N}x_{j}\mathbf{v}_{j}\right)
\]
produces a simple coordinate-form equation of a quadratic surface
\begin{equation}
\lambda_{1}x_{1}^{2}+\lambda_{2}x_{2}^{2}+\ldots+\lambda_{N}x_{N}^{2}=1
\label{Eigenvalue Coordinate Form Second Order Loci}
\end{equation}
written \emph{solely} in terms of the eigenvalues $\lambda_{N}\leq\lambda_{N-1}\leq\ldots\leq\lambda_{1}$ of the matrix $\mathbf{Q}$ \citep{Hewson2009}. Equation (\ref{Eigenvalue Coordinate Form Second Order Loci}) reveals that the \emph{geometric shape} of a quadratic surface is completely determined by the \emph{eigenvalues} of the matrix associated with a quadratic form. This general property of quadratic forms will lead to far-reaching consequences for linear and quadratic eigenlocus transforms.

I will now show that the inner product statistics of a training data collection essentially determine the geometric shapes of the quadratic surfaces which are specified by the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}).

Consider a Gram or kernel matrix $\mathbf{Q}$ associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}). Denote the elements of $\mathbf{Q}$ by $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$, where $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$ denotes an inner product relationship between the training vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$. The eigenvalues $\left\{ \lambda_{i}\right\}_{i=1}^{N}\in\Re$ of $\mathbf{Q}$ are the roots of the characteristic equation
\[
\det\left( \mathbf{Q}-\lambda\mathbf{I}\right) =0\text{,}
\]
which is a polynomial equation of degree $N$. Accordingly, the roots of the characteristic polynomial $p\left( \lambda\right)$ of $\mathbf{Q}$
\[
\det\left(
\begin{bmatrix}
\varphi\left( \mathbf{x}_{1},\mathbf{x}_{1}\right) -\lambda & \cdots & \varphi\left( \mathbf{x}_{1},\mathbf{x}_{N}\right) \\
\varphi\left( \mathbf{x}_{2},\mathbf{x}_{1}\right) & \cdots & \varphi\left( \mathbf{x}_{2},\mathbf{x}_{N}\right) \\
\vdots & \ddots & \vdots\\
\varphi\left( \mathbf{x}_{N},\mathbf{x}_{1}\right) & \cdots & \varphi\left( \mathbf{x}_{N},\mathbf{x}_{N}\right) -\lambda
\end{bmatrix}
\right) =0
\]
are the eigenvalues $\lambda_{N}\leq\lambda_{N-1}\leq\ldots\leq\lambda_{1}$ of $\mathbf{Q}$ \citep{Lathi1998}. Therefore, given that $(1)$ the roots of a characteristic polynomial $p\left( \lambda\right)$ vary continuously with its coefficients and that $(2)$ the coefficients of $p\left( \lambda\right)$ can be expressed in terms of sums of principal minors \citep[see][]{Meyer2000}, it follows that the eigenvalues $\lambda_{i}$ of $\mathbf{Q}$ vary continuously with the inner product elements $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$ of $\mathbf{Q}$. Thereby, the eigenvalues $\lambda_{N}\leq\lambda_{N-1}\leq\ldots\leq\lambda_{1}$ of a Gram or kernel matrix $\mathbf{Q}$ are essentially determined by its inner product elements $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$.
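To make the eigenspectrum-shaping argument concrete, the following minimal numerical sketch, which I add here purely for illustration (it assumes the NumPy library, and the data values are hypothetical), diagonalizes a Gram matrix $\mathbf{Q}$ and verifies that the quadratic form $\mathbf{x}^{T}\mathbf{Qx}$ collapses to the coordinate form of Eq. (\ref{Eigenvalue Coordinate Form Second Order Loci}) when $\mathbf{x}$ is expressed in the eigenvector basis of $\mathbf{Q}$.

\begin{verbatim}
import numpy as np

# Hypothetical data: N = 4 training vectors in d = 2 dimensions.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.5, 0.5], [0.5, -1.0]])
Q = X @ X.T                    # Gram matrix of inner products x_i^T x_j

# Orthogonal eigen-decomposition Q = V diag(lam) V^T (Q is symmetric).
lam, V = np.linalg.eigh(Q)

# Express an arbitrary x in the eigenvector basis: coordinates c = V^T x.
x = np.random.default_rng(0).standard_normal(4)
c = V.T @ x

# x^T Q x reduces to the coordinate form sum_i lam_i * c_i^2.
assert np.isclose(x @ Q @ x, np.sum(lam * c**2))
\end{verbatim}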
\subsection{Statistics for Linear Partitions}

Given Eq. (\ref{Eigenvalue Coordinate Form Second Order Loci}) and the continuous functional relationship between the inner product elements and the eigenvalues of a Gram or kernel matrix, it follows that the geometric shapes of the three, symmetrical quadratic partitioning surfaces determined by Eqs (\ref{Wolfe Dual Normal Eigenlocus}) and (\ref{Vector Form Wolfe Dual}) are an inherent function of the inner product statistics $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$ between vectors. Therefore, it is concluded that the form of the inner product statistics contained within Gram or kernel matrices essentially determines the shapes of the three, symmetrical quadratic partitioning surfaces determined by Eqs (\ref{Wolfe Dual Normal Eigenlocus}) and (\ref{Vector Form Wolfe Dual}).

I have conducted simulation studies which demonstrate that the eigenvalues of a polynomial kernel matrix associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}), for which the matrix elements $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$ have the algebraic form $\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right)^{2}$, determine $(l-1)$-dimensional circles, ellipses, hyperbolas, or parabolas. Such second-order decision boundary estimates can be generated by solving an inequality constrained optimization problem that is similar in nature to Eq. (\ref{Primal Normal Eigenlocus}). I have also conducted simulation studies which demonstrate that the eigenvalues of a Gram matrix associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}), for which the matrix elements $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$ have the algebraic form $\mathbf{x}_{i}^{T}\mathbf{x}_{j}$, determine $(l-1)$-dimensional hyperplane surfaces \citep[see][]{Reeves2015resolving}.

So, let the Gram matrix $\mathbf{Q}$ in Eq. (\ref{Vector Form Wolfe Dual}) contain inner product statistics $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) =\mathbf{x}_{i}^{T}\mathbf{x}_{j}$ for a separating hyperplane $H_{0}\left( \mathbf{x}\right)$ and hyperplane decision borders $H_{+1}\left( \mathbf{x}\right)$ and $H_{-1}\left( \mathbf{x}\right)$ that have bilateral symmetry along $H_{0}\left( \mathbf{x}\right)$. Later on, I will show that linear eigenlocus transforms map the labeled ($\pm1$) inner product statistics $\mathbf{x}_{i}^{T}\mathbf{x}_{j}$ contained within $\mathbf{Q}$
\[
\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}
\]
into a Wolfe dual linear eigenlocus of principal eigenaxis components
\[
\mathbf{Q}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\text{,}
\]
formed by $l$ scaled, non-orthogonal unit vectors $\left\{ \overrightarrow{\mathbf{e}}_{1\ast},\ldots,\overrightarrow{\mathbf{e}}_{l\ast}\right\}$ with scale factors $\psi_{i\ast}$, where the locus of each Wolfe dual principal eigenaxis component $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}$ is determined by the direction and well-proportioned magnitude of a correlated extreme vector $\mathbf{x}_{i\ast}$.
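The mapping just described can be illustrated numerically. The following minimal sketch, which I add here (NumPy assumed; the labeled data are hypothetical), builds a labeled Gram matrix from linear inner product statistics and extracts its principal eigenvector, which satisfies the eigen-equation displayed above. Note that the constrained Wolfe dual solution is not, in general, the raw principal eigenvector; the sketch only illustrates the eigen-equation itself.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical labeled data: 10 points per class in d = 2 dimensions.
X = np.vstack([rng.normal(+1.0, 1.0, (10, 2)),
               rng.normal(-1.0, 1.0, (10, 2))])
y = np.hstack([np.ones(10), -np.ones(10)])

# Labeled Gram matrix Q = D_y X X^T D_y built from inner product statistics.
Q = np.outer(y, y) * (X @ X.T)

# Principal eigen-decomposition of Q: eigh returns eigenvalues ascending.
lam, V = np.linalg.eigh(Q)
lam_max, v_max = lam[-1], V[:, -1]

# The principal eigenvector satisfies the eigen-equation Q v = lam_max v.
assert np.allclose(Q @ v_max, lam_max * v_max)
\end{verbatim}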
By way of motivation, I will now identify the fundamental statistical property possessed by $\boldsymbol{\tau}$ that enables a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ to satisfy a discrete and data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}).

\section{Property of Symmetrical Balance I}

I have demonstrated that constrained, linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ determine contiguous and congruent decision regions $Z_{1}$ and $Z_{2}$, where $Z_{1}\cong Z_{2}$, that are delineated by linear decision borders $D_{+1}\left( \mathbf{x}\right)$ and $D_{-1}\left( \mathbf{x}\right)$ located at equal distances $\frac{2}{\left\Vert \boldsymbol{\tau}\right\Vert}$ from a linear decision boundary $D_{0}\left( \mathbf{x}\right)$, where all of the points $\mathbf{x}$ on $D_{+1}\left( \mathbf{x}\right)$, $D_{-1}\left( \mathbf{x}\right)$, and $D_{0}\left( \mathbf{x}\right)$ reference a linear eigenlocus $\boldsymbol{\tau}$. Therefore, I have shown that constrained, linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfy boundary values of the linear decision borders $D_{+1}\left( \mathbf{x}\right)$ and $D_{-1}\left( \mathbf{x}\right)$ and the linear decision boundary $D_{0}\left( \mathbf{x}\right)$, where the axis of $\boldsymbol{\tau}$ is an axis of symmetry for $D_{+1}\left( \mathbf{x}\right)$, $D_{-1}\left( \mathbf{x}\right)$, and $D_{0}\left( \mathbf{x}\right)$.

The bilateral symmetry exhibited by the linear decision borders $D_{+1}\left( \mathbf{x}\right)$ and $D_{-1}\left( \mathbf{x}\right)$ along a linear decision boundary $D_{0}\left( \mathbf{x}\right)$ is also known as symmetrical balance. Given that $\boldsymbol{\tau}$ is an axis of symmetry which satisfies boundary values of the linear decision borders and the linear decision boundary, it follows that $\boldsymbol{\tau}$ \emph{must possess} the statistical property of \emph{symmetrical balance}. Recall that the physical property of symmetrical balance involves an axis or lever in equilibrium, where different elements are equal or in correct proportions relative to the center of the axis or lever, such that the opposing forces or influences of a system are balanced with each other.
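Before turning to the axis of $\boldsymbol{\tau}$ itself, a minimal decision-rule sketch may help fix ideas. I add it here purely for illustration: it assumes NumPy, and the values of $\boldsymbol{\tau}$ and $\tau_{0}$ are hypothetical stand-ins for a trained linear eigenlocus discriminant function.

\begin{verbatim}
import numpy as np

def classify(x, tau, tau0):
    # Binary linear classification system: decide omega_1 if
    # x^T tau + tau0 > 0, otherwise decide omega_2.
    return 1 if x @ tau + tau0 > 0 else 2

# Hypothetical parameters; a trained eigenlocus would supply tau and tau0.
tau, tau0 = np.array([2.0, -1.0]), 0.5
x = np.array([0.3, 0.8])

# The signed distance of x from the decision boundary D0(x) is
# (x^T tau + tau0) / ||tau||; the borders D+1(x) and D-1(x) sit at equal
# distances on either side, which is the bilateral symmetry noted above.
print(classify(x, tau, tau0), (x @ tau + tau0) / np.linalg.norm(tau))
\end{verbatim}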
\subsection{Symmetrical Balance Exhibited by the Axis of $\boldsymbol{\tau}$}

Returning to Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}) and (\ref{Wolfe Dual Vector Equation}), recall that the axis of $\boldsymbol{\psi}$ can be regarded as a lever in statistical equilibrium, where different principal eigenaxis components are equal or in correct proportions relative to the center of $\boldsymbol{\psi}$, such that the opposing forces associated with the risks and the counter risks of a linear classification system are balanced with each other. Thus, the axis of $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ exhibits the statistical property of symmetrical balance, where $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}$.

Furthermore, given Eqs (\ref{Equilibrium Constraint on Dual Component Lengths}) and (\ref{Symmetrical Balance of Wolf Dual Eigenenergies}), the axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert_{\min_{c}}^{2}$ of $\boldsymbol{\psi}$. Accordingly, the total allowed eigenenergies possessed by the principal eigenaxis components on $\boldsymbol{\psi}$ are symmetrically balanced with each other about a center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert_{\min_{c}}^{2}$ which is located at the geometric center of $\boldsymbol{\psi}$. Thus, the axis of $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ exhibits the statistical property of symmetrical balance, where $\left\Vert \boldsymbol{\psi}_{1}\right\Vert \rightleftharpoons\left\Vert \boldsymbol{\psi}_{2}\right\Vert$ and $\left\Vert \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}\right\Vert_{\min_{c}}^{2}\rightleftharpoons\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}\right\Vert_{\min_{c}}^{2}$.

Returning to Eqs (\ref{Normal Eigenaxis Functional}) and (\ref{Characteristic Eigenenergy}), recall that the locus of any given line, plane, or hyperplane is determined by the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where any given principal eigenaxis $\boldsymbol{\nu}$ and any given point $\mathbf{x}$ on the linear locus satisfy the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ of $\boldsymbol{\nu}$. Thereby, the inherent property of a linear locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert^{2}$ exhibited by $\boldsymbol{\nu}$. Therefore, given Eqs (\ref{Normal Eigenaxis Functional}), (\ref{Characteristic Eigenenergy}), (\ref{Minimum Total Eigenenergy Primal Normal Eigenlocus}), and (\ref{Pair of Normal Eigenlocus Components}), it follows that a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ satisfies the linear decision boundary $D_{0}\left( \mathbf{x}\right)$ in Eq.
(\ref{Decision Boundary}) and the linear decision borders $D_{+1}\left( \mathbf{x}\right)$ and $D_{-1}\left( \mathbf{x}\right)$ in Eqs (\ref{Decision Border One}) and (\ref{Decision Border Two}) in terms of its total allowed eigenenergies
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2} & =\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}\\
& \cong\left[ \left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}-\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{2}\right] +\left[ \left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-\boldsymbol{\tau}_{2}^{T}\boldsymbol{\tau}_{1}\right] \text{,}
\end{align*}
where the functional $\left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}-\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{2}$ is associated with the $D_{+1}\left( \mathbf{x}\right)$ linear decision border, the functional $\left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-\boldsymbol{\tau}_{2}^{T}\boldsymbol{\tau}_{1}$ is associated with the $D_{-1}\left( \mathbf{x}\right)$ linear decision border, and the functional $\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2}$ is associated with the linear decision boundary $D_{0}\left( \mathbf{x}\right)$. Thus, the total allowed eigenenergies of the principal eigenaxis components on a linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ must satisfy the law of cosines in the symmetrically balanced manner depicted in Fig. \ref{Law of Cosines for Binary Classification Systems}.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure25.png}}
\caption{The likelihood ratio $\protect\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ of a linear eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies the law of cosines in a symmetrically balanced manner.}
\label{Law of Cosines for Binary Classification Systems}
\end{figure}

Given that $\boldsymbol{\tau}$ must possess the statistical property of symmetrical balance in terms of its principal eigenaxis components, it follows that the axis of $\boldsymbol{\tau}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}$ of $\boldsymbol{\tau}$.
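This law-of-cosines decomposition is an algebraic identity for any pair of vectors, as the following numerical sketch, added here for illustration (NumPy assumed, hypothetical vectors), confirms.

\begin{verbatim}
import numpy as np

# Hypothetical component vectors tau_1 and tau_2.
rng = np.random.default_rng(2)
tau1, tau2 = rng.standard_normal(3), rng.standard_normal(3)
tau = tau1 - tau2

# ||tau_1 - tau_2||^2 splits into the two border functionals:
# [||tau_1||^2 - tau_1^T tau_2] + [||tau_2||^2 - tau_2^T tau_1].
lhs = np.dot(tau, tau)
rhs = (np.dot(tau1, tau1) - np.dot(tau1, tau2)) \
    + (np.dot(tau2, tau2) - np.dot(tau2, tau1))
assert np.isclose(lhs, rhs)
\end{verbatim}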
Accordingly, the axis of $\boldsymbol{\tau}$ is said to be in statistical equilibrium, where the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}$
\begin{align*}
\boldsymbol{\tau} & =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}
\end{align*}
are equal or in correct proportions, relative to the center of $\boldsymbol{\tau}$, such that all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right)$ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right)$ for class $\omega_{2}$ and class $\omega_{1}$, and all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right)$ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right)$ for class $\omega_{1}$ and class $\omega_{2}$, of a binary, linear classification system $\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are symmetrically balanced with each other.

I will prove that a constrained, linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies a discrete and data-driven version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium in Eq. (\ref{Equalizer Rule}) because the axis of $\boldsymbol{\tau}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}$ of $\boldsymbol{\tau}$ in the following manner:
\[
\left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2}
\]
and
\[
\left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2}\text{,}
\]
where the equalizer statistic $\nabla_{eq}$
\[
\nabla_{eq}\triangleq\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\]
for which $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)$, equalizes the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$
\begin{align*}
& \left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\end{align*}
so that the dual locus of $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is in statistical equilibrium.
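As a quick consistency check, which I add here (it is pure algebra on the two balance equations above), summing the two equations cancels the equalizer statistic and recovers the law of cosines for $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$, while differencing them isolates the role of $\nabla_{eq}$:
\begin{align*}
\text{(sum):}\quad & \left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}+\left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-2\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\equiv\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2}\text{,}\\
\text{(difference):}\quad & \left\Vert \boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert_{\min_{c}}^{2}\equiv\delta\left( y\right) \sum\nolimits_{i=1}^{l}\psi_{i\ast}=2\nabla_{eq}\text{,}
\end{align*}
which agrees with the equalizing relation displayed above.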
Figure \ref{Symmetrical Balance of Constrained Primal Linear Eigenlocus} illustrates the property of symmetrical balance exhibited by the dual locus of $\boldsymbol{\tau}$.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure26.png}}
\caption{A constrained, linear eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies a fundamental integral equation of binary classification for a classification system in statistical equilibrium, where the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\mathbf{|}\boldsymbol{\tau}\right)$ and the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert_{\min_{c}}^{2}$ of the classification system are minimized, because the axis of $\boldsymbol{\tau}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert_{\min_{c}}^{2}$ of $\boldsymbol{\tau}$.}
\label{Symmetrical Balance of Constrained Primal Linear Eigenlocus}
\end{figure}

I will obtain the above equations for a linear eigenlocus $\boldsymbol{\tau}$ in statistical equilibrium by devising a chain of arguments which demonstrate that a constrained, linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies discrete and data-driven versions of the fundamental equations of binary classification for a classification system in statistical equilibrium in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary})--(\ref{Balancing of Bayes' Risks and Counteracting Risks}). The general course of my argument is outlined next.
\section{General Course of Argument I}

In order to prove that a constrained, linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfies $\left( 1\right)$ the vector equation
\[
\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0\text{,}
\]
$\left( 2\right)$ the statistical equilibrium equation
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
$\left( 3\right)$ the corresponding integral equation
\[
\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}=\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}\text{,}
\]
$\left( 4\right)$ a discrete and data-driven linear version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}+\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}+\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
and $\left( 5\right)$ the corresponding integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\;\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}-\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}-\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\tau}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
I will need to develop mathematical machinery for several systems of locus equations. The fundamental equations of a binary classification system involve mathematical machinery and systems of locus equations that determine the following mathematical objects:

\begin{enumerate}
\item Total allowed eigenenergies of extreme points on a Wolfe dual and a constrained primal linear eigenlocus.

\item Total allowed eigenenergies of Wolfe dual and constrained primal linear eigenlocus components.

\item Total allowed eigenenergy of a Wolfe dual and a constrained primal linear eigenlocus.

\item Class-conditional probability density functions for extreme points.

\item Conditional probability functions for extreme points.
\item Risks and counter risks of extreme points.

\item Conditional probability functions for the risks and the counter risks related to positions and potential locations of extreme points.

\item Integral equations of class-conditional probability density functions.
\end{enumerate}

A high level overview of the development of the mathematical machinery and systems of locus equations is outlined below.

I will develop class-conditional probability density functions and conditional probability functions for extreme points in the following manner: Using Eq. (\ref{Geometric Locus of Vector}), any given extreme point $\mathbf{x}_{i\ast}$ is the endpoint on a locus (a position vector) of random variables
\[
\begin{pmatrix}
\left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{i\ast1}1}, & \left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{i\ast2}2}, & \cdots, & \left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{i\ast d}d}
\end{pmatrix}
\text{,}
\]
where each random variable $\left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{x_{i\ast}j}$ is characterized by an expected value $E\left[ \left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{x_{i\ast}j}\right]$ and a variance $\operatorname{var}\left( \left\Vert \mathbf{x}_{i\ast}\right\Vert \cos\mathbb{\alpha}_{x_{i\ast}j}\right)$. Therefore, an extreme point $\mathbf{x}_{i\ast}$ is described by a central location (an expected value) and a covariance (a spread). The relative likelihood that an extreme point has a given location is described by a conditional probability density function. The cumulative probability of given locations for an extreme point, i.e., the probability of finding the extreme point within a localized region, is described by a conditional probability function \citep{Ash1993,Flury1997}.

So, take the Wolfe dual linear eigenlocus in Eq. (\ref{Wolfe Dual Vector Equation})
\begin{align*}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}
\end{align*}
where each scaled, non-orthogonal unit vector $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is a displacement vector that is correlated with an $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ extreme vector, respectively.
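To make the coordinate representation concrete, the following small numerical sketch, added here for illustration (NumPy assumed, values hypothetical), verifies that each coordinate of a vector equals its length times the corresponding direction cosine.

\begin{verbatim}
import numpy as np

# A hypothetical extreme point x in d = 3 dimensions.
x = np.array([2.0, -1.0, 2.0])
norm_x = np.linalg.norm(x)           # ||x|| = 3.0

# Direction cosines cos(alpha_j) of x with each coordinate axis e_j.
cos_alpha = x / norm_x

# Each coordinate of x is ||x|| * cos(alpha_j), as in the locus equation.
assert np.allclose(x, norm_x * cos_alpha)
print(norm_x, cos_alpha)
\end{verbatim}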
For a given set $\left\{ \left\{ \mathbf{x}_{1_{i\ast}}\right\}_{i=1}^{l_{1}},\;\left\{ \mathbf{x}_{2_{i\ast}}\right\}_{i=1}^{l_{2}}\right\}$ of $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ extreme points, I will show that each Wolfe dual principal eigenaxis component $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}$ on $\boldsymbol{\psi}$ specifies a class-conditional density $p\left( \mathbf{x}_{i\ast}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right)$ for a correlated extreme point $\mathbf{x}_{i\ast}$, such that $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are class-conditional probability density functions in Wolfe dual eigenspace, $\boldsymbol{\tau}_{1}$ is a parameter vector for a class-conditional probability density $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ for a given set $\left\{ \mathbf{x}_{1_{i\ast}}\right\}_{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\tau}_{1}=p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \text{,}
\]
and $\boldsymbol{\tau}_{2}$ is a parameter vector for a class-conditional probability density $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$ for a given set $\left\{ \mathbf{x}_{2_{i\ast}}\right\}_{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\tau}_{2}=p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) \text{,}
\]
where the area under each pointwise conditional density $p\left( \mathbf{x}_{i\ast}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right)$ is a conditional probability that an extreme point $\mathbf{x}_{i\ast}$ will be observed in the $Z_{1}$ or $Z_{2}$ decision region of a decision space $Z$.

In order to develop class-conditional probability densities for extreme points, I will devise a system of data-driven, locus equations in Wolfe dual eigenspace that provides tractable point and coordinate relationships between the weighted (labeled and scaled) extreme points on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ and the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$. I will use this system of equations to develop equations for the geometric and statistical properties possessed by the Wolfe dual and the constrained, primal principal eigenaxis components. Next, I will use these equations and the identified properties to define class-conditional probability densities for individual extreme points, along with class-conditional probability densities $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$ for labeled sets of extreme points.
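The following sketch, which I add for illustration (NumPy assumed; the extreme vectors and scale factors are hypothetical placeholders rather than the output of a Wolfe dual solution), shows how the parameter vectors $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ and the primal eigenlocus $\boldsymbol{\tau}$ are assembled from scaled extreme vectors.

\begin{verbatim}
import numpy as np

# Hypothetical extreme vectors (rows) and their scale factors per class.
X1_star = np.array([[1.5, 0.5], [1.0, 1.0]])      # x_{1_i*}, class omega_1
X2_star = np.array([[-1.0, 0.2], [-0.5, -0.8]])   # x_{2_i*}, class omega_2
psi1 = np.array([0.6, 0.4])                       # psi_{1_i*} > 0
psi2 = np.array([0.7, 0.3])                       # psi_{2_i*} > 0

# Parameter vectors: tau_1 and tau_2 are weighted sums of extreme vectors,
# and the primal eigenlocus is tau = tau_1 - tau_2.
tau1 = psi1 @ X1_star
tau2 = psi2 @ X2_star
tau = tau1 - tau2
print(tau1, tau2, tau)
\end{verbatim}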
Thereby, I will demonstrate that the conditional probability function $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ for a given set $\left\{ \mathbf{x}_{1_{i\ast}}\right\}_{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i_{\ast}}}$ extreme points is given by the area under the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$
\begin{align*}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) d\boldsymbol{\tau}_{1}=\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\\
& =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert^{2}+C_{1}\text{,}
\end{align*}
over the decision space $Z$, where $\left\Vert \boldsymbol{\tau}_{1}\right\Vert^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\tau}_{1}$ and $C_{1}$ is an integration constant. Likewise, I will demonstrate that the conditional probability function $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$ for a given set $\left\{ \mathbf{x}_{2_{i\ast}}\right\}_{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i_{\ast}}}$ extreme points is given by the area under the class-conditional probability density function $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$
\begin{align*}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) d\boldsymbol{\tau}_{2}=\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert^{2}+C_{2}\text{,}
\end{align*}
over the decision space $Z$, where $\left\Vert \boldsymbol{\tau}_{2}\right\Vert^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\tau}_{2}$ and $C_{2}$ is an integration constant.

In order to define the $C_{1}$ and $C_{2}$ integration constants, I will need to define the manner in which the total allowed eigenenergies possessed by the scaled extreme vectors on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other. I will use these results to define the manner in which the areas under the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ and $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right)$ and the corresponding conditional probability functions $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ and $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$ for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other.

I will define the $C_{1}$ and $C_{2}$ integration constants in the following manner: I will use the KKT condition in Eq. (\ref{KKTE5}) and the theorem of Karush, Kuhn, and Tucker to devise a system of data-driven, locus equations that determines the manner in which the total allowed eigenenergies of the scaled extreme vectors on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other.
I will use these results along with results obtained from the analysis of the Wolfe dual eigenspace to devise a system of data-driven, locus equations that determines the manner in which the class-conditional density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ and $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right)$ satisfy an integral equation
\[
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) :\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\nabla_{eq}=\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\nabla_{eq}\text{,}
\]
over the decision space $Z$, where $\nabla_{eq}$ is a symmetric equalizer statistic. Thereby, I will demonstrate that the statistical property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ ensures that the conditional probability functions $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right)$ and $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right)$ for class $\omega_{1}$ and class $\omega_{2}$ are equal to each other, so that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies a data-driven version of the integral equation in Eq. (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}).

I will use these results along with results obtained from the analysis of the Wolfe dual eigenspace to prove that linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfy a data-driven version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) along with the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}).

I will also devise an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right)$ that illustrates the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ enables linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ to effectively balance the forces associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z\mathbf{|}\boldsymbol{\tau}\right)$ of a classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right)$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right)$ within the $Z_{1}$ decision region are symmetrically balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right)$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right)$ within the $Z_{2}$ decision region.
Thereby, I will devise integral equations that are satisfied by linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$, by which the discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ is the solution to the fundamental integral equation of binary classification for a linear classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\nabla_{eq}\\
& =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\nabla_{eq}\text{,}
\end{align*}
where all of the forces associated with the counter risks and the risks for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\;\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\nabla_{eq}\\
& =\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\nabla_{eq}\text{,}
\end{align*}
over the $Z_{1}$ and $Z_{2}$ decision regions.

Linear eigenlocus transforms involve symmetrically balanced, first and second-order statistical moments of extreme data points. I will begin my analysis by defining first and second-order statistical moments of data points.

\section{First and Second-Order Statistical Moments}

Consider again the Gram matrix $\mathbf{Q}$ associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual})
\begin{equation}
\mathbf{Q}=
\begin{pmatrix}
\mathbf{x}_{1}^{T}\mathbf{x}_{1} & \mathbf{x}_{1}^{T}\mathbf{x}_{2} & \cdots & -\mathbf{x}_{1}^{T}\mathbf{x}_{N}\\
\mathbf{x}_{2}^{T}\mathbf{x}_{1} & \mathbf{x}_{2}^{T}\mathbf{x}_{2} & \cdots & -\mathbf{x}_{2}^{T}\mathbf{x}_{N}\\
\vdots & \vdots & \ddots & \vdots\\
-\mathbf{x}_{N}^{T}\mathbf{x}_{1} & -\mathbf{x}_{N}^{T}\mathbf{x}_{2} & \cdots & \mathbf{x}_{N}^{T}\mathbf{x}_{N}
\end{pmatrix}
\text{,} \label{Autocorrelation Matrix}
\end{equation}
where $\mathbf{Q}\triangleq\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$, $\widetilde{\mathbf{X}}\triangleq\mathbf{D}_{y}\mathbf{X}$, $\mathbf{D}_{y}$ is an $N\times N$ diagonal matrix of training labels $y_{i}$, and the $N\times d$ data matrix is $\mathbf{X}=
\begin{pmatrix}
\mathbf{x}_{1}, & \mathbf{x}_{2}, & \ldots, & \mathbf{x}_{N}
\end{pmatrix}^{T}$. Without loss of generality (WLOG), assume that $N$ is an even number, let the first $N/2$ vectors have the label $y_{i}=1$, and let the last $N/2$ vectors have the label $y_{i}=-1$. Note that the analysis which follows does not take label information into account. Using Eq.
(\ref{Scalar Projection}), let the inner product statistic $\mathbf{x}_{i}^{T}\mathbf{x}_{j}$ be interpreted as $\left\Vert \mathbf{x}_{i}\right\Vert$ times the scalar projection $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ of $\mathbf{x}_{j}$ onto $\mathbf{x}_{i}$. It follows that row $\mathbf{Q}\left( i,:\right)$ in Eq. (\ref{Autocorrelation Matrix}) contains uniformly weighted $\left\Vert \mathbf{x}_{i}\right\Vert$ scalar projections $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ for each of the $N$ vectors $\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}$ onto the vector $\mathbf{x}_{i}$
\begin{equation}
\widetilde{\mathbf{Q}}=
\begin{pmatrix}
\left\Vert \mathbf{x}_{1}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{1}\mathbf{x}_{1}} & \cdots & -\left\Vert \mathbf{x}_{1}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{1}\mathbf{x}_{N}}\\
\left\Vert \mathbf{x}_{2}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{2}\mathbf{x}_{1}} & \cdots & -\left\Vert \mathbf{x}_{2}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{2}\mathbf{x}_{N}}\\
\vdots & \ddots & \vdots\\
-\left\Vert \mathbf{x}_{N}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{N}\mathbf{x}_{1}} & \cdots & \left\Vert \mathbf{x}_{N}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{N}\mathbf{x}_{N}}
\end{pmatrix}
\text{,} \label{Inner Product Matrix}
\end{equation}
where $0<\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\leq\frac{\pi}{2}$ or $\frac{\pi}{2}<\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\leq\pi$. Alternatively, column $\mathbf{Q}\left( :,j\right)$ in Eq. (\ref{Autocorrelation Matrix}) contains weighted $\left\Vert \mathbf{x}_{i}\right\Vert$ scalar projections $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ for the vector $\mathbf{x}_{j}$ onto each of the $N$ vectors $\left\{ \mathbf{x}_{i}\right\}_{i=1}^{N}$.

Now consider the $i$th row $\widetilde{\mathbf{Q}}\left( i,:\right)$ of $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix}). Again, using Eq. (\ref{Scalar Projection}), it follows that element $\widetilde{\mathbf{Q}}\left( i,j\right)$ of row $\widetilde{\mathbf{Q}}\left( i,:\right)$ specifies the length $\left\Vert \mathbf{x}_{i}\right\Vert$ of the vector $\mathbf{x}_{i}$ multiplied by the scalar projection $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ of $\mathbf{x}_{j}$ onto $\mathbf{x}_{i}$
\[
\widetilde{\mathbf{Q}}\left( i,j\right) =\left\Vert \mathbf{x}_{i}\right\Vert \left[ \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\right] \text{,}
\]
where the signed magnitude of the vector projection of $\mathbf{x}_{j}$ along the axis of $\mathbf{x}_{i}$
\[
\operatorname{comp}_{\overrightarrow{\mathbf{x}}_{i}}\left( \overrightarrow{\mathbf{x}}_{j}\right) =\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}=\mathbf{x}_{j}^{T}\left( \frac{\mathbf{x}_{i}}{\left\Vert \mathbf{x}_{i}\right\Vert }\right)
\]
provides a measure $\widehat{\mathbf{x}}_{j}$ of the first degree components of the vector $\mathbf{x}_{j}=\left( x_{j_{1}},x_{j_{2}},\cdots,x_{j_{d}}\right)^{T}$ along the axis of the vector $\mathbf{x}_{i}=\left( x_{i_{1}},x_{i_{2}},\cdots,x_{i_{d}}\right)^{T}$.
Accordingly, the signed magnitude $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ provides an estimate $\widehat{\mathbf{x}}_{i}$ for the amount of first degree components of the vector $\mathbf{x}_{i}$ that are distributed over the axis of the vector $\mathbf{x}_{j}$. This indicates that the signed magnitudes $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ contained within $\widetilde{\mathbf{Q}}$ account for how the first degree coordinates of a data point $\mathbf{x}_{i}$ are distributed along the axes of a set of vectors $\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}$ within Euclidean space.

Using the above assumptions and notation, for any given row $\widetilde{\mathbf{Q}}\left( i,:\right)$ of Eq. (\ref{Inner Product Matrix}), it follows that the statistic denoted by $E_{\widehat{\mathbf{x}}_{i}}\left[ \mathbf{x}_{i}|\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}\right]$
\begin{align}
E_{\widehat{\mathbf{x}}_{i}}\left[ \mathbf{x}_{i}|\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}\right] & =\left\Vert \mathbf{x}_{i}\right\Vert {\displaystyle\sum\nolimits_{j}} \operatorname{comp}_{\overrightarrow{\mathbf{x}}_{i}}\left( \overrightarrow{\mathbf{x}}_{j}\right) \label{Row Distribution First Order Vector Coordinates}\\
& =\left\Vert \mathbf{x}_{i}\right\Vert {\displaystyle\sum\nolimits_{j}} \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\nonumber
\end{align}
provides an estimate for the amount of first degree components of a vector $\mathbf{x}_{i}$ that are distributed over the axes of a set of vectors $\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}$, where labels have not been taken into account. Thereby, Eq. (\ref{Row Distribution First Order Vector Coordinates}) describes a distribution of first degree coordinates for a vector $\mathbf{x}_{i}$ in a data collection.

Given that Eq. (\ref{Row Distribution First Order Vector Coordinates}) involves signed magnitudes of vector projections along the axis of a fixed vector $\mathbf{x}_{i}$, the distribution of first degree vector coordinates described by Eq. (\ref{Row Distribution First Order Vector Coordinates}) is said to determine a \emph{first-order statistical moment about the locus of a data point} $\mathbf{x}_{i}$. Because the statistic $E_{\widehat{\mathbf{x}}_{i}}\left[ \mathbf{x}_{i}|\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}\right]$ depends on the uniform direction of a fixed vector $\mathbf{x}_{i}$, the statistic is said to be unidirectional.

In the next section, I will devise pointwise covariance statistics that provide unidirectional estimates of covariance along a fixed reference axis.

\subsection{Unidirectional Covariance Statistics}

Classical covariance statistics provide omnidirectional estimates of covariance along the $N$ axes of $N$ vectors. I will now argue that such omnidirectional statistics provide non-coherent estimates of covariance.
\subsubsection{Omnidirectional Covariance Statistics}

Take the data matrix $\mathbf{X}=
\begin{pmatrix}
\mathbf{x}_{1}, & \mathbf{x}_{2}, & \ldots, & \mathbf{x}_{N}
\end{pmatrix}^{T}$ and consider the classical covariance statistic
\begin{align*}
\widehat{\operatorname{cov}}\left( \mathbf{X}\right) & =\frac{1}{N}{\displaystyle\sum\nolimits_{i}} \left( \mathbf{x}_{i}-\overline{\mathbf{x}}\right)^{2}\\
& =\frac{1}{N}{\displaystyle\sum\nolimits_{i}} \left( \mathbf{x}_{i}-\left( \frac{1}{N}{\displaystyle\sum\nolimits_{i}} \mathbf{x}_{i}\right) \right)^{2}\text{,}
\end{align*}
written in vector notation. The statistic $\widehat{\operatorname{cov}}\left( \mathbf{X}\right)$ measures the square of the Euclidean distance between a common mean vector $\overline{\mathbf{x}}$ and each of the vectors $\mathbf{x}_{i}$ in a collection of data $\left\{ \mathbf{x}_{i}\right\}_{i=1}^{N}$ \citep{Ash1993,Flury1997}. Because the statistic $\widehat{\operatorname{cov}}\left( \mathbf{X}\right)$ depends on the $N$ directions of the $N$ training vectors, it is said to be omnidirectional and is considered to be a non-coherent estimate of covariance. Accordingly, the statistic $\widehat{\operatorname{cov}}\left( \mathbf{X}\right)$ provides an omnidirectional, non-coherent estimate of the joint variations of the $d\times N$ random variables of a collection of $N$ random vectors $\left\{ \mathbf{x}_{i}\right\}_{i=1}^{N}$ about the $d$ random variables of the mean vector $\overline{\mathbf{x}}$.

I will now devise pointwise covariance statistics $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ for individual vectors. Pointwise covariance statistics are unidirectional statistics that provide coherent estimates of covariance along a fixed reference axis. For now, label information is not taken into consideration.

\subsubsection{Pointwise Covariance Statistics}

Take any row $\widetilde{\mathbf{Q}}\left( i,:\right)$ of the matrix $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix}) and consider the inner product statistic $\left\Vert \mathbf{x}_{i}\right\Vert \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ in element $\widetilde{\mathbf{Q}}\left( i,j\right)$.
Using Eqs (\ref{Geometric Locus of Vector}) and (\ref{Inner Product Statistic}), it follows that element $\widetilde{\mathbf{Q}}\left( i,j\right)$ in row $\widetilde{\mathbf{Q}}\left( i,:\right)$ specifies the joint variations $\operatorname{cov}\left( \mathbf{x}_{i},\mathbf{x}_{j}\right)$
\[
\operatorname{cov}\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) =\left\Vert \mathbf{x}_{i}\right\Vert \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}
\]
between the components of the vector $\mathbf{x}_{i}$
\[
\begin{pmatrix}
\left\Vert \mathbf{x}_{i}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{i1}1}, & \left\Vert \mathbf{x}_{i}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{i2}2}, & \cdots, & \left\Vert \mathbf{x}_{i}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{id}d}
\end{pmatrix}
\]
and the components of the vector $\mathbf{x}_{j}$
\[
\begin{pmatrix}
\left\Vert \mathbf{x}_{j}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{j1}1}, & \left\Vert \mathbf{x}_{j}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{j2}2}, & \cdots, & \left\Vert \mathbf{x}_{j}\right\Vert \cos\mathbb{\alpha}_{\mathbf{x}_{jd}d}
\end{pmatrix}
\text{,}
\]
where the $d$ components $\left\{ \left\Vert \mathbf{x}\right\Vert \cos\mathbb{\alpha}_{x_{i}i}\right\}_{i=1}^{d}$ of any given vector $\mathbf{x}$ are random variables, each of which is characterized by an expected value $E\left[ \left\Vert \mathbf{x}\right\Vert \cos\mathbb{\alpha}_{x_{i}i}\right]$ and a variance $\operatorname{var}\left( \left\Vert \mathbf{x}\right\Vert \cos\mathbb{\alpha}_{x_{i}i}\right)$. It follows that the $j$th element $\widetilde{\mathbf{Q}}\left( i,j\right)$ of row $\widetilde{\mathbf{Q}}\left( i,:\right)$ specifies the joint variations of the $d$ random variables of a vector $\mathbf{x}_{j}$ about the $d$ random variables of the vector $\mathbf{x}_{i}$. Thus, row $\widetilde{\mathbf{Q}}\left( i,:\right)$ specifies the joint variations between the random variables of a fixed vector $\mathbf{x}_{i}$ and the random variables of an entire collection of data.

Again, take any row $\widetilde{\mathbf{Q}}\left( i,:\right)$ of the matrix $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix}). Using the definition of the scalar projection statistic in Eq. (\ref{Scalar Projection}), it follows that the statistic $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$
\begin{align}
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right) & ={\displaystyle\sum\nolimits_{j=1}^{N}} \left\Vert \mathbf{x}_{i}\right\Vert \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\label{Pointwise Covariance Statistic}\\
& ={\displaystyle\sum\nolimits_{j=1}^{N}} \mathbf{x}_{i}^{T}\mathbf{x}_{j}=\mathbf{x}_{i}^{T}\left( {\displaystyle\sum\nolimits_{j=1}^{N}} \mathbf{x}_{j}\right) \nonumber\\
& =\left\Vert \mathbf{x}_{i}\right\Vert {\displaystyle\sum\nolimits_{j=1}^{N}} \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\nonumber
\end{align}
provides a unidirectional estimate of the joint variations of the $d$ random variables of each of the $N$ vectors of a data collection $\left\{ \mathbf{x}_{j}\right\}_{j=1}^{N}$, and a unidirectional estimate of the joint variations of the $d$ random variables of the common mean ${\displaystyle\sum\nolimits_{j=1}^{N}}\mathbf{x}_{j}$ of the data, about the $d$ random variables of a fixed vector $\mathbf{x}_{i}$, along the axis of the fixed vector $\mathbf{x}_{i}$.
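A minimal computational sketch of the pointwise covariance statistic, added here for illustration (NumPy assumed, hypothetical data), computes $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ as the inner product of $\mathbf{x}_{i}$ with the sum of the data collection.

\begin{verbatim}
import numpy as np

def cov_up(X, i):
    # Pointwise covariance statistic for vector x_i: the sum of inner
    # products of x_i with every vector in the collection, i.e. the
    # projection of the data sum onto the axis of x_i, scaled by ||x_i||.
    return X[i] @ X.sum(axis=0)

# Hypothetical data collection of N = 5 vectors in d = 2 dimensions.
X = np.array([[2.0, 1.0], [1.5, 0.5], [0.2, -0.3],
              [-1.0, 0.8], [0.6, 1.2]])

# cov_up(x_i) = sum_j x_i^T x_j for each i.
print([cov_up(X, i) for i in range(len(X))])
\end{verbatim}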
Thereby, the statistic $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ specifies the direction of a vector $\mathbf{x}_{i}$ and a signed magnitude along the axis of the vector $\mathbf{x}_{i}$.

The statistic $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ in Eq. (\ref{Pointwise Covariance Statistic}) is defined to be a pointwise covariance estimate for a data point $\mathbf{x}_{i}$, where the statistic $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ provides a unidirectional estimate of the joint variations between the random variables of each vector $\mathbf{x}_{j}$ in a data collection and the random variables of a fixed vector $\mathbf{x}_{i}$, along with a unidirectional estimate of the joint variations between the random variables of the mean vector ${\displaystyle\sum\nolimits_{j=1}^{N}}\mathbf{x}_{j}$ and the fixed vector $\mathbf{x}_{i}$. Given that the joint variations estimated by the statistic $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ are derived from second-order distance statistics $\left\Vert \mathbf{x}_{i}-\mathbf{x}_{j}\right\Vert^{2}$ which involve signed magnitudes of vector projections along the common axis of a fixed vector $\mathbf{x}_{i}$, a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{i}\right)$ is said to determine a \emph{second-order statistical moment about the locus of a data point} $\mathbf{x}_{i}$. Using Eq. (\ref{Row Distribution First Order Vector Coordinates}), Eq. (\ref{Pointwise Covariance Statistic}) also specifies a distribution of first order coordinates for a given vector $\mathbf{x}_{i}$, which determines a first-order statistical moment about the locus of the data point $\mathbf{x}_{i}$.

I will now demonstrate that pointwise covariance statistics can be used to discover extreme points.

\subsection{Discovery of Extreme Data Points}

The Gram matrix associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual}) contains inner product statistics for two labeled collections of data. Denote those data points that belong to class $\omega_{1}$ by $\mathbf{x}_{1_{i}}$ and those that belong to class $\omega_{2}$ by $\mathbf{x}_{2_{i}}$. Let $\overline{\mathbf{x}}_{1}$ and $\overline{\mathbf{x}}_{2}$ denote the mean vectors of class $\omega_{1}$ and class $\omega_{2}$. Let $i=1:n_{1}$ where the vector $\mathbf{x}_{1_{i}}$ has the label $y_{i}=1$, and let $i=n_{1}+1:n_{1}+n_{2}$ where the vector $\mathbf{x}_{2_{i}}$ has the label $y_{i}=-1$. Using label information, Eq. (\ref{Pointwise Covariance Statistic}) can be rewritten as
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right) =\mathbf{x}_{1_{i}}^{T}\left( \sum\nolimits_{j=1}^{n_{1}}\mathbf{x}_{1_{j}}-\sum\nolimits_{j=n_{1}+1}^{n_{1}+n_{2}}\mathbf{x}_{2_{j}}\right)
\]
and
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right) =\mathbf{x}_{2_{i}}^{T}\left( \sum\nolimits_{j=n_{1}+1}^{n_{1}+n_{2}}\mathbf{x}_{2_{j}}-\sum\nolimits_{j=1}^{n_{1}}\mathbf{x}_{1_{j}}\right) \text{.}
\]
I will now show that extreme points possess large pointwise covariances relative to the non-extreme points in each respective pattern class. Recall that an extreme point is located relatively far from its distribution mean, relatively close to the mean of the other distribution, and relatively close to other extreme points. Denote an extreme point by $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ and a non-extreme point by $\mathbf{x}_{1_{i}}$ or $\mathbf{x}_{2_{i}}$.
Take any extreme point $\mathbf{x}_{1_{i\ast}}$ and any non-extreme point $\mathbf{x}_{1_{i}}$ that belong to class $\omega_{1}$, and consider the pointwise covariance estimates (with the class sums normalized into the class means $\overline{\mathbf{x}}_{1}$ and $\overline{\mathbf{x}}_{2}$) for $\mathbf{x}_{1_{i\ast}}$
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right) =\mathbf{x}_{1_{i\ast}}^{T}\overline{\mathbf{x}}_{1}-\mathbf{x}_{1_{i\ast}}^{T}\overline{\mathbf{x}}_{2}
\]
and for $\mathbf{x}_{1_{i}}$
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right) =\mathbf{x}_{1_{i}}^{T}\overline{\mathbf{x}}_{1}-\mathbf{x}_{1_{i}}^{T}\overline{\mathbf{x}}_{2}\text{.}
\]
Because $\mathbf{x}_{1_{i\ast}}$ is an extreme point, it follows that $\mathbf{x}_{1_{i\ast}}^{T}\overline{\mathbf{x}}_{1}>\mathbf{x}_{1_{i}}^{T}\overline{\mathbf{x}}_{1}$ and that $\mathbf{x}_{1_{i\ast}}^{T}\overline{\mathbf{x}}_{2}<\mathbf{x}_{1_{i}}^{T}\overline{\mathbf{x}}_{2}$. Thus, $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right) >\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right)$. Therefore, each extreme point $\mathbf{x}_{1_{i\ast}}$ exhibits a pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right)$ that exceeds the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right)$ of every non-extreme point $\mathbf{x}_{1_{i}}$ in class $\omega_{1}$.

Now take any extreme point $\mathbf{x}_{2_{i\ast}}$ and any non-extreme point $\mathbf{x}_{2_{i}}$ that belong to class $\omega_{2}$, and consider the pointwise covariance estimates for $\mathbf{x}_{2_{i\ast}}$
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right) =\mathbf{x}_{2_{i\ast}}^{T}\overline{\mathbf{x}}_{2}-\mathbf{x}_{2_{i\ast}}^{T}\overline{\mathbf{x}}_{1}
\]
and for $\mathbf{x}_{2_{i}}$
\[
\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right) =\mathbf{x}_{2_{i}}^{T}\overline{\mathbf{x}}_{2}-\mathbf{x}_{2_{i}}^{T}\overline{\mathbf{x}}_{1}\text{.}
\]
Because $\mathbf{x}_{2_{i\ast}}$ is an extreme point, it follows that $\mathbf{x}_{2_{i\ast}}^{T}\overline{\mathbf{x}}_{2}>\mathbf{x}_{2_{i}}^{T}\overline{\mathbf{x}}_{2}$ and that $\mathbf{x}_{2_{i\ast}}^{T}\overline{\mathbf{x}}_{1}<\mathbf{x}_{2_{i}}^{T}\overline{\mathbf{x}}_{1}$. Thus, $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right) >\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right)$. Therefore, each extreme point $\mathbf{x}_{2_{i\ast}}$ exhibits a pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right)$ that exceeds the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right)$ of every non-extreme point $\mathbf{x}_{2_{i}}$ in class $\omega_{2}$.

Thereby, it is concluded that extreme points possess large pointwise covariances relative to the non-extreme points in their respective pattern class. It is also concluded that the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right)$ or $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right)$ exhibited by any given extreme point $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ may exceed the pointwise covariances of other extreme points in each respective pattern class.
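The following sketch, added for illustration (NumPy assumed; the Gaussian data are hypothetical), ranks the points of each class by the mean-normalized pointwise covariance statistic, so that the largest values flag extreme-point candidates in the sense just described.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical labeled data: two overlapping Gaussian classes.
X1 = rng.normal(+1.0, 1.0, (50, 2))    # class omega_1
X2 = rng.normal(-1.0, 1.0, (50, 2))    # class omega_2
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Class-aware pointwise covariance (mean-normalized form) per point:
# cov_up(x_{1_i}) = x_{1_i}^T m1 - x_{1_i}^T m2, and symmetrically.
cov1 = X1 @ (m1 - m2)
cov2 = X2 @ (m2 - m1)

# Points with the largest statistics are the extreme-point candidates.
print(np.argsort(cov1)[-5:], np.argsort(cov2)[-5:])
\end{verbatim}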
Therefore, it will be assumed that each extreme point $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ exhibits a critical first and second-order statistical moment $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right) $ that exceeds some threshold $\varrho$, for which each corresponding scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ exhibits a critical value that exceeds zero: $\psi_{1i\ast}>0$ or $\psi_{2i\ast}>0$. Accordingly, first and second-order statistical moments $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right) $ about the loci of non-extreme points $\mathbf{x}_{1_{i}}$ or $\mathbf{x}_{2_{i}}$ do not exceed the threshold $\varrho$, and their corresponding scale factors $\psi_{1i}$ or $\psi_{2i}$ are effectively zero: $\psi_{1i}=0$ or $\psi_{2i}=0$.

I will now devise a system of equations for a principal eigen-decomposition of the Gram matrix $\mathbf{Q}$ denoted in Eqs (\ref{Autocorrelation Matrix}) and (\ref{Inner Product Matrix}) that describes tractable point and coordinate relationships between the scaled extreme points on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ and the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$.

\section{Inside the Wolfe Dual Eigenspace I}

Take the Gram matrix $\mathbf{Q}$ associated with the quadratic form in Eq. (\ref{Vector Form Wolfe Dual}). Let $\mathbf{q}_{j}$ denote the $j$th column of $\mathbf{Q}$, which is an $N$-vector. Let $\lambda_{\max_{\boldsymbol{\psi}}}$ and $\boldsymbol{\psi}$ denote the largest eigenvalue of $\mathbf{Q}$ and its associated eigenvector, respectively. Using this notation \citep[see][]{Trefethen1998}, the principal eigen-decomposition of $\mathbf{Q}$
\[
\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}
\]
can be rewritten as
\[
\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}=\sum\nolimits_{j=1}^{N}\psi_{j}\mathbf{q}_{j}\text{,}
\]
so that the principal eigenaxis $\boldsymbol{\psi}$ of $\mathbf{Q}$ is expressed as a linear combination of the transformed vectors $\frac{\psi_{j}}{\lambda_{\max_{\boldsymbol{\psi}}}}\mathbf{q}_{j}$
\begin{equation}
\left[
\begin{array}[c]{c}
\\
\boldsymbol{\psi}\\
\\
\end{array}
\right] =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{1}\\
\\
\end{array}
\right] +\frac{\psi_{2}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{2}\\
\\
\end{array}
\right] +\cdots+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{N}\\
\\
\end{array}
\right] \text{,} \label{Alternate Eigendecomposition Equation}
\end{equation}
where the $i$th element of the vector $\mathbf{q}_{j}$ specifies an inner product statistic $\mathbf{x}_{i}^{T}\mathbf{x}_{j}$ between the vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$.
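The following sketch carries out this eigen-decomposition numerically. Since Eqs (\ref{Autocorrelation Matrix}) and (\ref{Inner Product Matrix}) are not reproduced here, the label-signed entries $Q_{ij}=y_{i}y_{j}\mathbf{x}_{i}^{T}\mathbf{x}_{j}$ are an assumption, chosen to be consistent with the sign pattern displayed in the next equation; the data are synthetic.
\begin{verbatim}
import numpy as np

# Assumed structure: Q_ij = y_i y_j x_i^T x_j (label-signed Gram matrix).
rng = np.random.default_rng(1)
n1, n2, d = 4, 4, 3
X = rng.normal(size=(n1 + n2, d))
y = np.concatenate([np.ones(n1), -np.ones(n2)])
Q = np.outer(y, y) * (X @ X.T)

evals, evecs = np.linalg.eigh(Q)   # symmetric eigen-decomposition
lam = evals[-1]                    # largest eigenvalue lambda_max
psi = evecs[:, -1]                 # principal eigenvector psi
psi *= np.sign(psi.sum())          # fix the overall sign

# psi equals the linear combination of the columns q_j scaled by psi_j/lam.
recon = sum(psi[j] / lam * Q[:, j] for j in range(n1 + n2))
assert np.allclose(recon, psi)
\end{verbatim}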
Using Eqs (\ref{Autocorrelation Matrix}) and (\ref{Alternate Eigendecomposition Equation}), a Wolfe dual linear eigenlocus $\left( \psi_{1},\cdots,\psi_{N}\right) ^{T}$ can be written as
\begin{align}
\boldsymbol{\psi} & =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
\mathbf{x}_{1}^{T}\mathbf{x}_{1}\\
\mathbf{x}_{2}^{T}\mathbf{x}_{1}\\
\vdots\\
-\mathbf{x}_{N}^{T}\mathbf{x}_{1}
\end{pmatrix}
+\frac{\psi_{2}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
\mathbf{x}_{1}^{T}\mathbf{x}_{2}\\
\mathbf{x}_{2}^{T}\mathbf{x}_{2}\\
\vdots\\
-\mathbf{x}_{N}^{T}\mathbf{x}_{2}
\end{pmatrix}
+\cdots\label{Dual Normal Eigenlocus Components}\\
\cdots & +\frac{\psi_{N-1}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
-\mathbf{x}_{1}^{T}\mathbf{x}_{N-1}\\
-\mathbf{x}_{2}^{T}\mathbf{x}_{N-1}\\
\vdots\\
\mathbf{x}_{N}^{T}\mathbf{x}_{N-1}
\end{pmatrix}
+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
-\mathbf{x}_{1}^{T}\mathbf{x}_{N}\\
-\mathbf{x}_{2}^{T}\mathbf{x}_{N}\\
\vdots\\
\mathbf{x}_{N}^{T}\mathbf{x}_{N}
\end{pmatrix}
\nonumber
\end{align}
which illustrates that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ is correlated with joint variations of the labeled vectors $\mathbf{x}_{i}$ about the vector $\mathbf{x}_{j}$.

Alternatively, using Eqs (\ref{Inner Product Matrix}) and (\ref{Alternate Eigendecomposition Equation}), a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ can be written as
\begin{align}
\boldsymbol{\psi} & =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left(
\begin{array}[c]{c}
\left\Vert \mathbf{x}_{1}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{1}^{T}\mathbf{x}_{1}}\\
\left\Vert \mathbf{x}_{2}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{2}^{T}\mathbf{x}_{1}}\\
\vdots\\
-\left\Vert \mathbf{x}_{N}\right\Vert \left\Vert \mathbf{x}_{1}\right\Vert \cos\theta_{\mathbf{x}_{N}^{T}\mathbf{x}_{1}}
\end{array}
\right) +\cdots\label{Dual Normal Eigenlocus Component Projections}\\
& \cdots+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left(
\begin{array}[c]{c}
-\left\Vert \mathbf{x}_{1}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{1}^{T}\mathbf{x}_{N}}\\
-\left\Vert \mathbf{x}_{2}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{2}^{T}\mathbf{x}_{N}}\\
\vdots\\
\left\Vert \mathbf{x}_{N}\right\Vert \left\Vert \mathbf{x}_{N}\right\Vert \cos\theta_{\mathbf{x}_{N}^{T}\mathbf{x}_{N}}
\end{array}
\right) \nonumber
\end{align}
which illustrates that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ on $\boldsymbol{\psi}$ is correlated with scalar projections $\left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$ of the vector $\mathbf{x}_{j}$ onto the labeled vectors $\mathbf{x}_{i}$.

Equations (\ref{Dual Normal Eigenlocus Components}) and (\ref{Dual Normal Eigenlocus Component Projections}) both indicate that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ on $\boldsymbol{\psi}$ is correlated with a first and second-order statistical moment about the locus of the data point $\mathbf{x}_{j}$.
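The passage from the inner product form to the norm-and-cosine form above is definitional, since $\mathbf{x}_{i}^{T}\mathbf{x}_{j}=\left\Vert \mathbf{x}_{i}\right\Vert \left\Vert \mathbf{x}_{j}\right\Vert \cos\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}$; the short sketch below spells the identity out entry-wise for an assumed label-signed Gram matrix.
\begin{verbatim}
import numpy as np

# Entry-wise identity behind the two column readings above.
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))
y = np.array([1.0, 1.0, -1.0, -1.0])
norms = np.linalg.norm(X, axis=1)
cosines = (X @ X.T) / np.outer(norms, norms)   # cos(theta_{x_i x_j})
Q_inner = np.outer(y, y) * (X @ X.T)           # signed inner products
Q_angle = np.outer(y, y) * np.outer(norms, norms) * cosines
assert np.allclose(Q_inner, Q_angle)
\end{verbatim}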
\subsection{Assumptions}

It will be assumed that each extreme point $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ exhibits a critical first and second-order statistical moment $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i\ast}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i\ast}}\right) $ that exceeds some threshold $\varrho$, for which each corresponding scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ exhibits a critical value that exceeds zero: $\psi_{1i\ast}>0$ or $\psi_{2i\ast}>0$. It will also be assumed that the first and second-order statistical moments $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{1_{i}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( \mathbf{x}_{2_{i}}\right) $ about the loci of non-extreme points $\mathbf{x}_{1_{i}}$ or $\mathbf{x}_{2_{i}}$ do not exceed the threshold $\varrho$, so that the corresponding scale factors $\psi_{1i}$ or $\psi_{2i}$ are effectively zero: $\psi_{1i}=0$ or $\psi_{2i}=0$.

Express a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ in terms of $l$ non-orthogonal unit vectors $\left\{ \overrightarrow{\mathbf{e}}_{1\ast},\ldots,\overrightarrow{\mathbf{e}}_{l\ast}\right\} $
\begin{align}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\label{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\text{,}\nonumber
\end{align}
where each scaled, non-orthogonal unit vector $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is correlated with an extreme vector $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$, respectively. Accordingly, each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is a scaled, non-orthogonal unit vector that contributes to the estimation of $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$. Without loss of generality, the indices of the extreme points do not indicate the locations of the corresponding inner product expressions in Eq. (\ref{Dual Normal Eigenlocus Component Projections}).

\subsubsection*{Extreme Point Notation}

Denote the extreme points that belong to class $\omega_{1}$ and class $\omega_{2}$ by $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$, with labels $y_{i}=1$ and $y_{i}=-1$ respectively. Let there be $l_{1}$ extreme points $\mathbf{x}_{1_{i\ast}}$ from class $\omega_{1}$ and $l_{2}$ extreme points $\mathbf{x}_{2_{i\ast}}$ from class $\omega_{2}$. Let there be $l_{1}$ principal eigenaxis components $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$, where each scale factor $\psi_{1i\ast}$ is correlated with an extreme vector $\mathbf{x}_{1_{i\ast}}$. Let there be $l_{2}$ principal eigenaxis components $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$, where each scale factor $\psi_{2i\ast}$ is correlated with an extreme vector $\mathbf{x}_{2_{i\ast}}$. Let $l_{1}+l_{2}=l$.
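In code, the thresholding assumption amounts to treating small components of $\boldsymbol{\psi}$ as exactly zero and letting the surviving indices identify the $l=l_{1}+l_{2}$ extreme points. The sketch below is a rough illustration on the synthetic data of the earlier sketch: in the constrained optimization of the text the non-extreme scale factors vanish identically, whereas in this unconstrained eigen-decomposition the tolerance merely partitions the components.
\begin{verbatim}
import numpy as np

# Reuse the assumed label-signed Gram matrix of the earlier sketch.
rng = np.random.default_rng(1)
n1, n2, d = 4, 4, 3
X = rng.normal(size=(n1 + n2, d))
y = np.concatenate([np.ones(n1), -np.ones(n2)])
Q = np.outer(y, y) * (X @ X.T)
psi = np.linalg.eigh(Q)[1][:, -1]
psi *= np.sign(psi.sum())

tol = 1e-8                            # plays the role of the threshold rho
extreme = np.flatnonzero(psi > tol)   # indices with psi_i > 0
idx1 = extreme[y[extreme] == +1]      # the l1 extreme points of class one
idx2 = extreme[y[extreme] == -1]      # the l2 extreme points of class two
l1, l2 = len(idx1), len(idx2)
\end{verbatim}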
Recall that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) \right) $
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) \right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\]
for a binary classification system involves \emph{opposing forces} that depend on the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and the corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$.

In particular, the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{2}$ decision region are forces associated with positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, and the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{2}$ decision region are forces associated with positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.

Linear eigenlocus transforms define the opposing forces of a classification system in terms of forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}\right) $ related to the scaled extreme points $\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}$ and $\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}$; these are forces associated with positions and potential locations of the extreme points $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ in the $Z_{1}$ and $Z_{2}$ decision regions of a decision space $Z$.
In particular, the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}\right) $ for class $\omega_{1}$ are determined by magnitudes and directions of the scaled extreme vectors $\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$, and the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}\right) $ for class $\omega_{2}$ are determined by magnitudes and directions of the scaled extreme vectors $\psi_{2i\ast}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{2}$. I will show that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ is a displacement vector that accounts for the magnitudes and the directions of all of the scaled extreme vectors on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$.

Linear eigenlocus transforms determine the opposing forces of a classification system by means of symmetrically balanced, pointwise covariance statistics. Symmetrically balanced, pointwise covariance statistics determine forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region that are balanced with forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{.}
\end{align*}
I will now define symmetrically balanced, pointwise covariance statistics. The geometric nature of the statistics is outlined first.

\subsection{Symmetrically Balanced Covariance Statistics I}

Take two labeled sets of extreme vectors, where each extreme vector is correlated with a scale factor that determines scaled, signed magnitudes, i.e., scaled components of the scaled extreme vector, along the axes of the extreme vectors in each pattern class, such that the integrated scale factors from each pattern class balance each other.
Generally speaking, for any given set of extreme vectors, all of the scaled, signed magnitudes along the axis of any given extreme vector from a given pattern class, which are determined by vector projections of the scaled extreme vectors from the \emph{other} pattern class, \emph{are distributed in opposite directions}. Thereby, for two labeled sets of extreme vectors, where each extreme vector is correlated with a scale factor and the integrated scale factors from each pattern class balance each other, it follows that scaled, signed magnitudes along the axis of any given extreme vector, which are determined by vector projections of the scaled extreme vectors from the \emph{other} pattern class, \emph{are distributed on the opposite side of the origin}. Accordingly, scaled, signed magnitudes along the axes of all of the extreme vectors are distributed in a symmetrically balanced manner, where each scale factor specifies a symmetrically balanced distribution for an extreme point which ensures that the \emph{components of} an extreme vector are \emph{distributed over} the axes of a given \emph{collection} of extreme vectors in a symmetrically balanced \emph{and} well-proportioned manner.

I will show that symmetrically balanced covariance statistics are the basis of linear eigenlocus transforms. For any given set of extreme points, I will demonstrate that linear eigenlocus transforms find a set of scale factors in Wolfe dual eigenspace, which are determined by the symmetrically balanced covariance statistics in Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class Two}), such that congruent decision regions $Z_{1}\cong Z_{2}$ are determined by symmetrically balanced forces associated with counter risks and risks
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z:Z_{1}\cong Z_{2}\right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) \\
& \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) \text{,}
\end{align*}
where forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are symmetrically balanced with forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region.
Figure $\ref{Balancing Feat in Wolfe Dual Eigenspace}$ illustrates that symmetrically balanced covariance statistics determine linear discriminant functions that satisfy a fundamental integral equation of binary classification for a linear classification system in statistical equilibrium, where the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ and the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of the linear classification system are minimized.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure27.png}}
\caption{Symmetrically balanced covariance statistics $\protect\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) $ and $\protect\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) $ for extreme points $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ are the basis of linear eigenlocus transforms. Note: $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $.}
\label{Balancing Feat in Wolfe Dual Eigenspace}
\end{figure}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}) and (\ref{Pointwise Covariance Statistic}), along with the notation and assumptions outlined above, it follows that summation over the $l$ components of $\boldsymbol{\psi}$ in Eq. (\ref{Dual Normal Eigenlocus Component Projections}) provides symmetrically balanced covariance statistics for the $\mathbf{x}_{1_{i\ast}}$ extreme vectors, where each extreme point $\mathbf{x}_{1_{i\ast}}$ exhibits a symmetrically balanced, first and second-order statistical moment $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) $
\begin{align}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) & =\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\label{Eigen-balanced Pointwise Covariance Estimate Class One}\\
& -\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\nonumber
\end{align}
relative to $l$ symmetrically balanced, scaled, signed magnitudes determined by vector projections of the scaled extreme vectors in each respective pattern class.

Likewise, summation over the $l$ components of $\boldsymbol{\psi}$ in Eq.
(\ref{Dual Normal Eigenlocus Component Projections}) provides symmetrically balanced covariance statistics for the $\mathbf{x}_{2_{i\ast}}$ extreme vectors, where each extreme point $\mathbf{x}_{2_{i\ast}}$ exhibits a symmetrically balanced, first and second-order statistical moment $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) $
\begin{align}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) & =\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\label{Eigen-balanced Pointwise Covariance Estimate Class Two}\\
& -\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\nonumber
\end{align}
relative to $l$ symmetrically balanced, scaled, signed magnitudes determined by vector projections of the scaled extreme vectors in each respective pattern class.

\subsection{Common Geometrical and Statistical Properties}

I will now use Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class Two}) to identify symmetrical, geometric and statistical properties possessed by the principal eigenaxis components on $\boldsymbol{\tau}$ and $\boldsymbol{\psi}$.

\subsubsection{Loci of the $\psi_{1i\ast}\protect\overrightarrow{\mathbf{e}}_{1i\ast}$ Components}

Let $i=1:l_{1}$, where each extreme vector $\mathbf{x}_{1_{i\ast}}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$. Using Eqs (\ref{Dual Normal Eigenlocus Component Projections}) and (\ref{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus}), it follows that the locus of the $i^{th}$ principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ is a function of the expression
\begin{align}
\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\label{Dual Eigen-coordinate Locations Component One}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\text{,}\nonumber
\end{align}
where $\psi_{1i\ast}$ provides a scale factor for the non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$.

Geometric and statistical explanations for the eigenlocus statistics
\begin{equation}
\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{ and }\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}} \label{Projection Statistics psi1}
\end{equation}
in Eq. (\ref{Dual Eigen-coordinate Locations Component One}) are considered next.

\subsubsection{Geometric Nature of Eigenlocus Statistics}

The first geometric interpretation of the eigenlocus statistics in Eq.
(\ref{Projection Statistics psi1}) defines $\psi_{1_{j\ast}}$ and $\psi_{2_{j\ast}}$ to be scale factors for the signed magnitudes of the vector projections
\[
\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{ and }\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}
\]
of the scaled extreme vectors $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ and $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$, where $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}$ and $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ specify the respective angles between the axes of the scaled extreme vectors $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ and $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ and the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$. Note that the signed magnitude $\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ is distributed in the opposite direction, so that the locus of $\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ is on the opposite side of the origin, along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$.

Figure $\ref{Wolfe Dual Linear Eigenlocus Statistics}$ illustrates the geometric and statistical nature of the eigenlocus statistics in Eq. (\ref{Projection Statistics psi1}), where any given scaled, signed magnitude $\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}$ or $\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ may be positive or negative (see Figs $\ref{Wolfe Dual Linear Eigenlocus Statistics}$a and $\ref{Wolfe Dual Linear Eigenlocus Statistics}$b).
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure28.png}}
\caption{Examples of positive and negative, scaled, signed magnitudes of vector projections of the scaled extreme vectors $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ and $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$, along the axis of an extreme vector $\mathbf{x}_{1_{i\ast}}$ which is correlated with a Wolfe dual principal eigenaxis component $\psi_{1i\ast}\protect\overrightarrow{\mathbf{e}}_{1i\ast}$.}
\label{Wolfe Dual Linear Eigenlocus Statistics}
\end{figure}

\subsubsection{An Alternative Geometric Interpretation}

An alternative geometric explanation for the eigenlocus statistics in Eq. (\ref{Projection Statistics psi1}) accounts for the representation of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ within the Wolfe dual eigenspace. Consider the vector relationships
\[
\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert =\left\Vert \psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right\Vert =\left\Vert \boldsymbol{\tau}_{1}(j)\right\Vert
\]
and
\[
\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert =\left\Vert \psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right\Vert =\left\Vert \boldsymbol{\tau}_{2}(j)\right\Vert \text{,}
\]
where $\boldsymbol{\tau}_{1}(j)$ and $\boldsymbol{\tau}_{2}(j)$ are the $j$th constrained, primal principal eigenaxis components on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$.
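The two readings above can be checked directly: a signed magnitude may be negative when the angle is obtuse, while the identity $\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert =\left\Vert \boldsymbol{\tau}_{2}(j)\right\Vert $ (and likewise for class one) holds for any positive scale factor. The sketch below uses hypothetical vectors and an illustrative scale factor.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
x_axis = rng.normal(size=3)  # plays the role of the extreme vector x_{1_i*}
x_j = rng.normal(size=3)     # a second (hypothetical) extreme vector
psi_j = 0.7                  # an illustrative positive scale factor

cos_theta = (x_axis @ x_j) / (np.linalg.norm(x_axis) * np.linalg.norm(x_j))
signed_mag = psi_j * np.linalg.norm(x_j) * cos_theta   # may be negative
tau_j = psi_j * x_j                                    # tau(j) = psi_j x_j
assert np.isclose(psi_j * np.linalg.norm(x_j), np.linalg.norm(tau_j))
\end{verbatim}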
It follows that the $\psi_{1_{j\ast}}$-scaled, signed magnitude $\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}$ of the vector projection of the scaled extreme vector $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$
\[
\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}
\]
determines the $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}$-scaled length of the $j$th constrained, primal principal eigenaxis component $\boldsymbol{\tau}_{1}(j)$ on $\boldsymbol{\tau}_{1}$
\[
\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\left\Vert \boldsymbol{\tau}_{1}(j)\right\Vert \text{,}
\]
where $\psi_{1_{j\ast}}$ is the length of the Wolfe dual principal eigenaxis component $\psi_{1j\ast}\overrightarrow{\mathbf{e}}_{1j\ast}$ and $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}$ specifies the angle between the extreme vectors $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{1_{j\ast}}$. Likewise, the $\psi_{2_{j\ast}}$-scaled, signed magnitude $\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ of the vector projection of the scaled extreme vector $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$
\[
\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}
\]
determines the $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$-scaled length of the $j$th constrained, primal principal eigenaxis component $\boldsymbol{\tau}_{2}(j)$ on $\boldsymbol{\tau}_{2}$
\[
\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\left\Vert \boldsymbol{\tau}_{2}(j)\right\Vert \text{,}
\]
where $\psi_{2_{j\ast}}$ is the length of the Wolfe dual principal eigenaxis component $\psi_{2j\ast}\overrightarrow{\mathbf{e}}_{2j\ast}$ and $\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}$ specifies the angle between the extreme vectors $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{j\ast}}$.

Therefore, each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is a function of the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$
\begin{align}
\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\left\Vert \boldsymbol{\tau}_{1}(j)\right\Vert \label{Constrained Primal Eigenlocus psi1}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\left\Vert \boldsymbol{\tau}_{2}(j)\right\Vert \text{,}\nonumber
\end{align}
where the angle between each principal eigenaxis component $\boldsymbol{\tau}_{1}(j)$ or $\boldsymbol{\tau}_{2}(j)$ and the extreme vector $\mathbf{x}_{1_{i\ast}}$ is fixed.

I will now define the significant geometric and statistical properties which are jointly exhibited by the Wolfe dual $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and constrained, primal $\psi_{1i\ast}\mathbf{x}_{1_{i\ast}}$ principal eigenaxis components that regulate the symmetric partitioning of a feature space $Z$.

\subsection{Significant Geometric and Statistical Properties}

Using the definition of Eq.
(\ref{Eigen-balanced Pointwise Covariance Estimate Class One}), Eq. (\ref{Dual Eigen-coordinate Locations Component One}) indicates that the locus of the principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by a symmetrically balanced, signed magnitude along the axis of an extreme vector $\mathbf{x}_{1_{i\ast}}$, relative to symmetrically balanced, scaled, signed magnitudes of extreme vector projections in each respective pattern class.

\subsubsection{Symmetrically Balanced Signed Magnitudes}

Let $\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) $ denote the symmetrically balanced, signed magnitude
\begin{align}
\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) & =\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\label{Unidirectional Scaling Term One1}\\
& \times\left[ \left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\right] \nonumber\\
& -\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\nonumber\\
& \times\left[ \left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\right] \nonumber
\end{align}
along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$ that is correlated with the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$.

\subsubsection{Symmetrically Balanced Distributions}

Using the definitions of Eqs (\ref{Pointwise Covariance Statistic}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}), it follows that Eq. (\ref{Unidirectional Scaling Term One1}) specifies a symmetrically balanced distribution of scaled, first degree coordinates of extreme vectors along the axis of $\mathbf{x}_{1_{i\ast}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies how an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$ is distributed along the axis of $\mathbf{x}_{1_{i\ast}}$, and each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of scaled, first degree coordinates of extreme vectors $\left\{ \psi_{j\ast}\mathbf{x}_{j\ast}\right\} _{j=1}^{l}$ along the axis of an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$. Therefore, each scaled, signed magnitude
\[
\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{ \ or \ }\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}
\]
provides an estimate for how the components of the extreme vector $\mathbf{x}_{1_{i\ast}}$ are symmetrically distributed over the axis of a scaled extreme vector $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ or $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$. Again, using Eqs (\ref{Pointwise Covariance Statistic}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}), it follows that Eq.
(\ref{Unidirectional Scaling Term One1}) determines a symmetrically balanced, first and second-order statistical moment about the locus of $\mathbf{x}_{1_{i\ast}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies how the components of an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$ are distributed along the axis of $\mathbf{x}_{1_{i\ast}}$, and each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution for an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$.

\subsubsection{Distributions of Eigenaxis Components}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}), (\ref{Dual Eigen-coordinate Locations Component One}), and (\ref{Unidirectional Scaling Term One1}), it follows that symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are distributed over the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$. Again, using Eq. (\ref{Dual Eigen-coordinate Locations Component One}), it follows that identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are distributed over the axis of the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$. Thereby, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are identically and symmetrically distributed over the respective axes of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and each correlated extreme vector $\mathbf{x}_{1_{i\ast}}$.

Alternatively, using Eq. (\ref{Constrained Primal Eigenlocus psi1}), the symmetrically balanced, signed magnitude in Eq. (\ref{Unidirectional Scaling Term One1}) depends upon the difference between the integrated cosine-scaled lengths of the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$
\begin{align}
\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) & =\sum\nolimits_{j=1}^{l_{1}}\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\left\Vert \boldsymbol{\tau}_{1}(j)\right\Vert \label{Unidirectional Scaling Term One2}\\
& -\sum\nolimits_{j=1}^{l_{2}}\cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\left\Vert \boldsymbol{\tau}_{2}(j)\right\Vert \nonumber
\end{align}
which also shows that symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are identically and symmetrically distributed along the respective axes of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and each correlated extreme vector $\mathbf{x}_{1_{i\ast}}$.
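The equality of the two expressions for the symmetrically balanced, signed magnitude, Eqs (\ref{Unidirectional Scaling Term One1}) and (\ref{Unidirectional Scaling Term One2}), rests on the identity $\psi_{j\ast}\left\Vert \mathbf{x}_{j\ast}\right\Vert =\left\Vert \boldsymbol{\tau}(j)\right\Vert $; the sketch below evaluates both forms for hypothetical extreme vectors and scale factors and confirms that they agree.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
X1s = rng.normal(size=(3, 3))    # hypothetical class-one extreme vectors
X2s = rng.normal(size=(2, 3))    # hypothetical class-two extreme vectors
p1 = np.array([0.5, 0.3, 0.2])   # hypothetical scale factors psi_{1_j*}
p2 = np.array([0.6, 0.4])        # hypothetical scale factors psi_{2_j*}
xi = X1s[0]                      # the axis x_{1_i*}

def cos(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Eq. (Term One1): scaled, signed magnitudes of vector projections.
one1 = sum(p1[j] * np.linalg.norm(X1s[j]) * cos(xi, X1s[j]) for j in range(3)) \
     - sum(p2[j] * np.linalg.norm(X2s[j]) * cos(xi, X2s[j]) for j in range(2))
# Eq. (Term One2): cosine-scaled lengths of the tau(j) components.
one2 = sum(cos(xi, X1s[j]) * np.linalg.norm(p1[j] * X1s[j]) for j in range(3)) \
     - sum(cos(xi, X2s[j]) * np.linalg.norm(p2[j] * X2s[j]) for j in range(2))
assert np.isclose(one1, one2)
\end{verbatim}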
Using Eqs (\ref{Dual Eigen-coordinate Locations Component One}) and (\ref{Unidirectional Scaling Term One1}), it follows that the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by the weighted length of a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$
\begin{equation}
\psi_{1i\ast}=\left[ \lambda_{\max_{\boldsymbol{\psi}}}^{-1}\times\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) \right] \left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \text{,} \label{Magnitude Dual Normal Eigenaxis Component Class One}
\end{equation}
where the weighting factor is determined by an eigenvalue $\lambda_{\max_{\boldsymbol{\psi}}}^{-1}$ scaling of the symmetrically balanced, signed magnitude $\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) $ along the axis of $\mathbf{x}_{1_{i\ast}}$.

\subsubsection{Symmetrically Balanced Lengths}

Given that $\psi_{1i\ast}>0$, $\lambda_{\max_{\boldsymbol{\psi}}}^{-1}>0$, and $\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert >0$, it follows that the symmetrically balanced, signed magnitude along the axis of each extreme vector $\mathbf{x}_{1_{i\ast}}$ is a positive number
\[
\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) >0\text{,}
\]
which indicates that the weighting factor in Eq. (\ref{Magnitude Dual Normal Eigenaxis Component Class One}) determines a well-proportioned length
\[
\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) \left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert
\]
for an extreme vector $\mathbf{x}_{1_{i\ast}}$. Thereby, the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by a well-proportioned length of a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$.

Returning to Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}) and (\ref{Dual Eigen-coordinate Locations Component One}), it follows that the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) \left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert
\]
is shaped by a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$.

Now, take any given correlated pair $\left\{ \psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast},\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\} $ of Wolfe dual and constrained, primal principal eigenaxis components. I will now show that the direction of $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is identical to the direction of $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$.
\subsubsection{Directional Symmetries}

The vector direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is implicitly specified by Eq. (\ref{Dual Eigen-coordinate Locations Component One}), where it has been assumed that $\psi_{1i\ast}$ provides a scale factor for a non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$. Using the definitions of Eqs (\ref{Pointwise Covariance Statistic}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One}), it follows that the symmetrically balanced, pointwise covariance statistic in Eq. (\ref{Dual Eigen-coordinate Locations Component One}) specifies the direction of a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$ and a well-proportioned magnitude along the axis of the extreme vector $\mathbf{x}_{1_{i\ast}}$.

Returning to Eqs (\ref{Unidirectional Scaling Term One1}), (\ref{Unidirectional Scaling Term One2}), and (\ref{Magnitude Dual Normal Eigenaxis Component Class One}), take any given Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ that is correlated with an extreme vector $\mathbf{x}_{1_{i\ast}}$. Given that the magnitude $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by a well-proportioned magnitude of a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) \left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \text{,}
\]
it follows that each non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$ has the same direction as an extreme vector $\mathbf{x}_{1_{i\ast}}$
\[
\overrightarrow{\mathbf{e}}_{1i\ast}\equiv\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\text{.}
\]
Thereby, the direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}$ is identical to the direction of a correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$, which is determined by the direction of a scaled $\psi_{1i\ast}$ extreme vector $\mathbf{x}_{1_{i\ast}}$. Each Wolfe dual and correlated, constrained primal principal eigenaxis component are said to exhibit \emph{directional symmetry}. Therefore, it is concluded that correlated principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\tau}_{1}$ exhibit directional symmetry.

\subsubsection{Directions of Large Covariance}

It is concluded that the uniform directions of the Wolfe dual and the correlated, constrained primal principal eigenaxis components specify directions of large covariance which contribute to a symmetric partitioning of a minimal geometric region of constant width that spans a region of large covariance between two data distributions. It is also concluded that each of the correlated principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\tau}_{1}$ possesses a well-proportioned magnitude for which the constrained, linear discriminant function $D\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ delineates centrally located, bipartite, congruent regions of large covariance between any two data distributions.
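For the unconstrained eigen-decomposition sketched earlier, the weighted-length expression of Eq. (\ref{Magnitude Dual Normal Eigenaxis Component Class One}) is exactly the componentwise form of $\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}$, with the unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$ read as $\mathbf{x}_{1_{i\ast}}/\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert $. The sketch below verifies the identity numerically; the vector $\mathbf{s}=\sum_{j}y_{j}\psi_{j}\mathbf{x}_{j}$ plays the role of $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$, and the class-two components pick up the opposite sign, as in the class-two counterpart of the expression.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n1, n2, d = 4, 4, 3
X = rng.normal(size=(n1 + n2, d))
y = np.concatenate([np.ones(n1), -np.ones(n2)])
Q = np.outer(y, y) * (X @ X.T)
evals, evecs = np.linalg.eigh(Q)
lam, psi = evals[-1], evecs[:, -1]
psi *= np.sign(psi.sum())

norms = np.linalg.norm(X, axis=1)
s = (y * psi) @ X            # sum_j y_j psi_j x_j, i.e. tau1 - tau2
comp = (X @ s) / norms       # signed magnitude along each axis x_i
check = norms * comp / lam   # lam^{-1} * comp_i * ||x_i||
assert np.allclose(check[:n1], psi[:n1])    # class one (y_i = +1)
assert np.allclose(-check[n1:], psi[n1:])   # class two flips the sign
\end{verbatim}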
\subsubsection{Loci of the $\psi_{2i\ast}\protect\overrightarrow{\mathbf{e}}_{2i\ast}$ Components}

Let $i=1:l_{2}$, where each extreme vector $\mathbf{x}_{2_{i\ast}}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$. Using Eqs (\ref{Dual Normal Eigenlocus Component Projections}) and (\ref{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus}), it follows that the locus of the $i^{th}$ Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ is a function of the expression
\begin{align}
\psi_{2i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\label{Dual Eigen-coordinate Locations Component Two}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{,}\nonumber
\end{align}
where $\psi_{2i\ast}$ provides a scale factor for the non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{2i\ast}$.

Results obtained from the previous analysis are readily generalized to the Wolfe dual $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ and the constrained, primal $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ principal eigenaxis components, so the analysis will not be replicated. However, the counterpart to Eq. (\ref{Unidirectional Scaling Term One1}) is necessary for a future argument. Let $i=l_{1}+1:l_{1}+l_{2}$, where each extreme vector $\mathbf{x}_{2_{i\ast}}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$. Accordingly, let $\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{2i\ast}}\right) $ denote the symmetrically balanced, signed magnitude
\begin{align}
\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{2i\ast}}\right) & =\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\label{Unidirectional Scaling Term Two1}\\
& \times\left[ \left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\right] \nonumber\\
& -\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\nonumber\\
& \times\left[ \left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\right] \nonumber
\end{align}
along the axis of the extreme vector $\mathbf{x}_{2_{i\ast}}$ that is correlated with the Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$.

\subsubsection{Similar Properties Exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$}

I will now identify similar geometric and statistical properties which are jointly exhibited by the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and the correlated, constrained, primal principal eigenaxis components on $\boldsymbol{\tau}$. The properties are summarized below.
\paragraph{Directional Symmetry}

\begin{enumerate}
\item The direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ is identical to the direction of a correlated, constrained, primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$.

\item The direction of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ is identical to the direction of a correlated, constrained, primal principal eigenaxis component $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{2}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Lengths}

\begin{enumerate}
\item The lengths of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ and each correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$ are shaped by identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.

\item The lengths of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ and each correlated, constrained primal principal eigenaxis component $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{2}$ are shaped by identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Pointwise Covariance Statistics}

\begin{enumerate}
\item The magnitude $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{1i\ast}}\right) \left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert
\]
is determined by a symmetrically balanced, pointwise covariance estimate
\begin{align*}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}
\end{align*}
for a correlated extreme vector $\mathbf{x}_{1_{i\ast}}$, such that the locus of each constrained, primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$ provides a maximum covariance estimate in a principal location $\mathbf{x}_{1_{i\ast}}$, in the form of a symmetrically balanced, first and second-order statistical moment about the locus of an extreme data point $\mathbf{x}_{1_{i\ast}}$.
\item The magnitude $\psi_{2i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$
\[
\psi_{2i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert \widetilde{\mathbf{x}}_{\ast}\right\Vert _{2i\ast}}\right) \left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert
\]
is determined by a symmetrically balanced, pointwise covariance estimate
\begin{align*}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}
\end{align*}
for a correlated extreme vector $\mathbf{x}_{2_{i\ast}}$, such that the locus of each constrained, primal principal eigenaxis component $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{2}$ provides a maximum covariance estimate in a principal location $\mathbf{x}_{2_{i\ast}}$, in the form of a symmetrically balanced, first and second-order statistical moment about the locus of an extreme point $\mathbf{x}_{2_{i\ast}}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Statistical Moments}

\begin{enumerate}
\item Each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ specifies a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme point $\mathbf{x}_{1_{i\ast}}$, relative to the loci of all of the scaled extreme points, which determines the locus of a constrained, primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ on $\boldsymbol{\tau}_{1}$.

\item Each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ specifies a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme point $\mathbf{x}_{2_{i\ast}}$, relative to the loci of all of the scaled extreme points, which determines the locus of a constrained, primal principal eigenaxis component $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{2}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Distributions of Extreme Points}

\begin{enumerate}
\item Any given maximum covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) $ describes how the components of the $l$ scaled extreme vectors $\left\{ \psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right\} _{j=1}^{l_{1}}$ and $\left\{ \psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right\} _{j=1}^{l_{2}}$ are distributed along the axis of an extreme vector $\mathbf{x}_{1_{i\ast}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of the $l$ scaled extreme vectors along the axis of an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$, such that a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) $ provides an estimate for how the components of the extreme vector $\mathbf{x}_{1_{i\ast}}$ are symmetrically distributed over the axes of the $l$ scaled extreme vectors. Thus, $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{1_{i\ast}}\right) $ describes a distribution of first degree coordinates for $\mathbf{x}_{1_{i\ast}}$.

\item Any given maximum covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) $ describes how the components of the $l$ scaled extreme vectors $\left\{ \psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right\} _{j=1}^{l_{1}}$ and $\left\{ \psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right\} _{j=1}^{l_{2}}$ are distributed along the axis of an extreme vector $\mathbf{x}_{2_{i\ast}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of the $l$ scaled extreme vectors along the axis of an extreme vector $\mathbf{x}_{1_{j\ast}}$ or $\mathbf{x}_{2_{j\ast}}$, such that a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) $ provides an estimate for how the components of the extreme vector $\mathbf{x}_{2_{i\ast}}$ are symmetrically distributed over the axes of the $l$ scaled extreme vectors. Thus, $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \mathbf{x}_{2_{i\ast}}\right) $ describes a distribution of first degree coordinates for $\mathbf{x}_{2_{i\ast}}$.
\end{enumerate}

I will now define the equivalence between the total allowed eigenenergies exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.
I\ will now define the equivalence between the total allowed eigenenergies exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.

\subsection{Equivalence Between Eigenenergies of $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$}

The inner product between the integrated Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$
\begin{align*}
\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2} & =\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}^{T}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}^{T}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\right) \\
& \times\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\right)
\end{align*}
determines the total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$, which is symmetrically equivalent to the critical minimum eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$ within its Wolfe dual eigenspace
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & =\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\right) \\
& \times\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) \text{.}
\end{align*}
I will now argue that the equivalence $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\simeq\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ between the total allowed eigenenergies exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ involves symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.

\paragraph{Symmetrical Equivalence of Eigenenergy Distributions}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components}), (\ref{Dual Eigen-coordinate Locations Component One}), (\ref{Unidirectional Scaling Term One1}), and (\ref{Unidirectional Scaling Term Two1}), it follows that identical, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are symmetrically distributed over the respective axes of each Wolfe dual principal eigenaxis component on $\boldsymbol{\psi}$ and each correlated and unconstrained, primal principal eigenaxis component (extreme vector) on $\boldsymbol{\tau}$. Therefore, constrained primal and Wolfe dual principal eigenaxis components that are correlated with each other are formed by equivalent, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.
Thereby, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are symmetrically distributed over the axes of all of the Wolfe dual principal eigenaxis components $\left\{ \psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\right\} _{i=1}^{l_{2}}$ on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ and all of the constrained, primal principal eigenaxis components $\left\{ \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$ on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$, where $\overrightarrow{\mathbf{e}}_{1i\ast}=\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ and $\overrightarrow{\mathbf{e}}_{2i\ast}=\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$.

Therefore, the distribution of eigenenergies with respect to the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ is symmetrically equivalent to the distribution of eigenenergies with respect to the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}$, such that the total allowed eigenenergies $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ satisfy symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$. Thus, all of the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ possess eigenenergies that satisfy symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.

Later on, I\ will show that the critical minimum eigenenergies exhibited by the scaled extreme vectors determine conditional probabilities of classification error for extreme points, where any given extreme point has a risk or counter risk that is determined by a measure of central location and a measure of spread, both of which are described by a conditional probability density. In the next section, I\ will show that each Wolfe dual principal eigenaxis component specifies a standardized, conditional probability density for an extreme point. By way of motivation, an overview of Bayesian estimates for parameter vectors $\mathbf{\theta}$ of unknown probability densities $p\left( \mathbf{x}\right) $ is presented next.

\subsection{Parameter Vectors $\mathbf{\theta}$ of Probability Densities}

Take any given class of feature vectors $\mathbf{x}$ that is described by an unknown probability density function $p\left( \mathbf{x}\right) $. Let the unknown density function $p\left( \mathbf{x}\right) $ have a known parametric form in terms of a parameter vector $\mathbf{\theta}$, where the unknowns are the components of $\mathbf{\theta}$. Any information about $\mathbf{\theta}$ prior to observing a set of data samples $D$ is assumed to be contained in a known prior density $p\left( \mathbf{\theta}\right) $, where observation of a set of data samples produces a posterior density $p\left( \mathbf{\theta}|D\right) $, which is sharply peaked about the true value of $\mathbf{\theta}$ if $\widehat{\mathbf{\theta}}\simeq\mathbf{\theta}$.
Given these assumptions, $p\left( \mathbf{x}|D\right) $ can be computed by integrating the joint density $p\left( \mathbf{x},\mathbf{\theta}|D\right) $ over $\mathbf{\theta}$
\[
p\left( \mathbf{x}|D\right) =\int p\left( \mathbf{x},\mathbf{\theta}|D\right) d\mathbf{\theta}\text{.}
\]
Write $p\left( \mathbf{x},\mathbf{\theta}|D\right) $ as $p\left( \mathbf{x}|\mathbf{\theta}\right) p\left( \mathbf{\theta}|D\right) $. If $p\left( \mathbf{\theta}|D\right) $ peaks sharply about some value $\widehat{\mathbf{\theta}}$, then the integral
\[
p\left( \mathbf{x}|D\right) =\int p\left( \mathbf{x}|\mathbf{\theta}\right) p\left( \mathbf{\theta}|D\right) d\mathbf{\theta}\simeq p\left( \mathbf{x}|\widehat{\mathbf{\theta}}\right)
\]
produces an estimate $\widehat{\mathbf{\theta}}$ for the desired parameter vector $\mathbf{\theta}$ \citep{Duda2001}. Accordingly, unknown probability density functions $p\left( \mathbf{x}\right) $ can be determined by Bayesian estimation of a parameter vector $\mathbf{\theta}$ with a known prior density $p\left( \mathbf{\theta}\right) $.

\subsubsection{Learning an Unknown Vector $\mathbf{\theta}$ of Two Conditional Densities}

Given the previous assumptions for Bayesian estimation of parameter vectors $\mathbf{\theta}$, it is reasonable to assume that information about an unknown probability density function $p\left( \mathbf{x}\right) $ is distributed over the components of a parameter vector $\widehat{\mathbf{\theta}}$.

It has been demonstrated that symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ are symmetrically distributed over the axes of all of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and all of the constrained, primal principal eigenaxis components on $\boldsymbol{\tau}$. In the next analysis, I\ will show that information for two unknown conditional density functions $p\left( \mathbf{x}_{1_{i\ast}}|\widehat{\mathbf{\theta}}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\widehat{\mathbf{\theta}}_{2}\right) $ is distributed over the scaled extreme points on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$, where $\boldsymbol{\tau}$ is an unknown parameter vector $\widehat{\mathbf{\theta}}$ that contains information about the unknown conditional densities, with $\widehat{\mathbf{\theta}}=\widehat{\mathbf{\theta}}_{1}-\widehat{\mathbf{\theta}}_{2}$.
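As a numerical check on the sharply peaked posterior approximation $p\left( \mathbf{x}|D\right) \simeq p\left( \mathbf{x}|\widehat{\mathbf{\theta}}\right) $ invoked above, the following sketch evaluates the predictive integral by quadrature for a one-dimensional Gaussian model; the model, the value of $\widehat{\theta}$, and the posterior variance are illustrative assumptions.

\begin{verbatim}
import numpy as np

def gauss(x, mean, var):
    """Normal density N(x; mean, var)."""
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Assumed model: p(x|theta) = N(x; theta, 1), with a sharply peaked
# posterior p(theta|D) = N(theta; theta_hat, 1e-3).
theta_hat, post_var = 1.5, 1e-3
x = 2.0

thetas = np.linspace(theta_hat - 0.5, theta_hat + 0.5, 20001)
integrand = gauss(x, thetas, 1.0) * gauss(thetas, theta_hat, post_var)

p_x_given_D = np.trapz(integrand, thetas)  # integral of p(x|t) p(t|D) dt
p_x_at_hat = gauss(x, theta_hat, 1.0)      # p(x|theta_hat)
print(p_x_given_D, p_x_at_hat)             # nearly identical values
\end{verbatim}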
I will now define pointwise conditional densities which are determined by the components of a constrained, primal linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$, where each conditional density $p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ or $p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for an $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ extreme point is given by the component $\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) $ or $\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) $ of $\boldsymbol{\tau}$ along the corresponding extreme vector $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$.

\subsection{Pointwise Conditional Densities}

Consider again the equations for the loci of the $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ Wolfe dual principal eigenaxis components in Eqs (\ref{Dual Eigen-coordinate Locations Component One}) and (\ref{Dual Eigen-coordinate Locations Component Two}). It has been demonstrated that any given Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ correlated with an $\mathbf{x}_{1_{i_{\ast}}}$ extreme point, and any given Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ correlated with an $\mathbf{x}_{2_{i\ast}}$ extreme point, provides an estimate for how the components of the $l$ scaled extreme vectors $\left\{ \psi_{j\ast}\mathbf{x}_{j\ast}\right\} _{j=1}^{l}$ are symmetrically distributed along the axis of a correlated extreme vector $\mathbf{x}_{1_{i_{\ast}}}$ or $\mathbf{x}_{2_{i\ast}}$, where the components of the scaled extreme vectors $\psi_{j\ast}\mathbf{x}_{j\ast}$ are symmetrically distributed according to the class labels $\pm1$, the signed magnitudes $\left\Vert \mathbf{x}_{j\ast}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{j\ast}}$ or $\left\Vert \mathbf{x}_{j\ast}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{j\ast}}$, and the symmetrically balanced distributions of the scaled extreme vectors $\left\{ \psi_{j\ast}\mathbf{x}_{j\ast}\right\} _{j=1}^{l}$ specified by the scale factors $\psi_{j\ast}$.

Thereby, symmetrically balanced distributions of first degree coordinates of all of the extreme points are symmetrically distributed along the axes of all of the extreme vectors, where all of the scale factors satisfy the equivalence relation $\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}=\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}$. Accordingly, principal eigenaxis components $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ describe distributions of first degree coordinates for extreme points $\mathbf{x}_{1_{i_{\ast}}}$ or $\mathbf{x}_{2_{i_{\ast}}}$.

Therefore, for any given extreme vector $\mathbf{x}_{1_{i\ast}}$, the relative likelihood that the extreme point $\mathbf{x}_{1_{i\ast}}$ has a given location is specified by the locus of the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$
\begin{align*}
\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\text{,}
\end{align*}
where $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ describes a conditional expectation (a measure of central location) and a conditional covariance (a measure of spread) for the extreme point $\mathbf{x}_{1_{i_{\ast}}}$.
Thereby, it is concluded that the principal eigenaxis component $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ specifies a conditional density $p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for the extreme point $\mathbf{x}_{1_{i_{\ast}}}$, where the scale factor $\psi_{1i\ast}$ is a \emph{unit} measure or estimate of density and likelihood for the extreme point $\mathbf{x}_{1_{i\ast}}$.

Likewise, for any given extreme vector $\mathbf{x}_{2_{i\ast}}$, the relative likelihood that the extreme point $\mathbf{x}_{2_{i\ast}}$ has a given location is specified by the locus of the Wolfe dual principal eigenaxis component $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$
\begin{align*}
\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{,}
\end{align*}
where $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ describes a conditional expectation (a measure of central location) and a conditional covariance (a measure of spread) for the extreme point $\mathbf{x}_{2_{i\ast}}$. Thereby, it is concluded that the principal eigenaxis component $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ specifies a conditional density $p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for the extreme point $\mathbf{x}_{2_{i\ast}}$, where the scale factor $\psi_{2i\ast}$ is a \emph{unit} measure or estimate of density and likelihood for the extreme point $\mathbf{x}_{2_{i\ast}}$.

It has been shown that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ is formed by a locus of scaled, normalized extreme vectors
\begin{align*}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}
\end{align*}
where $\boldsymbol{\psi}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ and $\boldsymbol{\psi}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$, and each $\psi_{1i\ast}$ or $\psi_{2i\ast}$ scale factor provides a unit measure or estimate of density and likelihood for an $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ extreme point.
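A minimal sketch of this construction assembles $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ from normalized extreme vectors and reports the class totals of the scale factors; the extreme vectors and scale factors are the same illustrative assumptions used earlier.

\begin{verbatim}
import numpy as np

# Assumed extreme vectors and Wolfe dual scale factors.
X1 = np.array([[1.0, 2.0], [2.0, 1.5]]);     psi1 = np.array([0.7, 0.5])
X2 = np.array([[-1.0, -1.5], [-2.0, -1.0]]); psi2 = np.array([0.6, 0.6])

def unit_rows(X):
    """Normalize each row to unit length: x_i / ||x_i||."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

psi_vec1 = psi1 @ unit_rows(X1)  # psi_1 = sum_i psi_1i* x_1i* / ||x_1i*||
psi_vec2 = psi2 @ unit_rows(X2)  # psi_2 = sum_i psi_2i* x_2i* / ||x_2i*||
psi_vec = psi_vec1 + psi_vec2    # psi = psi_1 + psi_2

# Each scale factor acts as a unit measure of likelihood for its extreme
# point; the equilibrium constraint equates the class totals.
print("psi =", psi_vec)
print("sum psi_1i* =", psi1.sum(), " sum psi_2i* =", psi2.sum())
\end{verbatim}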
Given that each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{1}$ specifies a conditional density $p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for a correlated extreme point $\mathbf{x}_{1_{i_{\ast}}}$, it follows that conditional densities for the $\mathbf{x}_{1_{i_{\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\psi}_{1}$
\begin{align}
\boldsymbol{\psi}_{1} & =\sum\nolimits_{i=1}^{l_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\label{Wolfe Dual Conditional Density Extreme Points 1}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\text{,}\nonumber
\end{align}
where $\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ specifies a conditional density for $\mathbf{x}_{1_{i_{\ast}}}$, such that $\boldsymbol{\psi}_{1}$ is a parameter vector for a class-conditional probability density $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ for a given set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\psi}_{1}=p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \text{.}
\]
Given that each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{2}$ specifies a conditional density $p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for a correlated extreme point $\mathbf{x}_{2_{i_{\ast}}}$, it follows that conditional densities for the $\mathbf{x}_{2_{i_{\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\psi}_{2}$
\begin{align}
\boldsymbol{\psi}_{2} & =\sum\nolimits_{i=1}^{l_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\label{Wolfe Dual Conditional Density Extreme Points 2}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\text{,}\nonumber
\end{align}
where $\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ specifies a conditional density for $\mathbf{x}_{2_{i\ast}}$, such that $\boldsymbol{\psi}_{2}$ is a parameter vector for a class-conditional probability density $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ for a given set $\left\{ \mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\psi}_{2}=p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{.}
\]
Therefore, it is concluded that $\boldsymbol{\psi}_{1}$ is a parameter vector for the class-conditional probability density function $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $\boldsymbol{\psi}_{2}$ is a parameter vector for the class-conditional probability density function $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $.

Returning to Eq. (\ref{Equilibrium Constraint on Dual Eigen-components}), it follows that the pointwise conditional densities $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ and $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ for all of the extreme points in class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }
\]
in the Wolfe dual eigenspace. Therefore, the class-conditional probability density functions $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ in the Wolfe dual eigenspace for class $\omega_{1}$ and class $\omega_{2}$ are \emph{symmetrically balanced with each other}
\[
p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \rightleftharpoons p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{.}
\]
I\ will now devise expressions for the class-conditional probability density functions in the decision space $Z$ for class $\omega_{1}$ and class $\omega_{2}$.

\subsection{Class-conditional Probability Densities}

I\ will now show that a linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is a parameter vector for class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\omega_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\omega_{2}\right) $.
\subsubsection{Class-Conditional Density for Class $\omega_{1}$}

Given that each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ specifies a conditional density $p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for a correlated extreme point $\mathbf{x}_{1_{i_{\ast}}}$, it follows that conditional densities for the $\mathbf{x}_{1_{i_{\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\tau}_{1}$
\begin{align}
\boldsymbol{\tau}_{1} & =\sum\nolimits_{i=1}^{l_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{1_{i\ast}}\label{Conditional Density Extreme Points 1}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\text{,}\nonumber
\end{align}
where $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ specifies a conditional density for $\mathbf{x}_{1_{i_{\ast}}}$, such that $\boldsymbol{\tau}_{1}$ is a parameter vector for a class-conditional probability density $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ for a given set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\tau}_{1}=p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \text{.}
\]

\subsubsection{Class-Conditional Density for Class $\omega_{2}$}

Given that each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ specifies a conditional density $p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) $ for a correlated extreme point $\mathbf{x}_{2_{i_{\ast}}}$, it follows that conditional densities for the $\mathbf{x}_{2_{i_{\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\tau}_{2}$
\begin{align}
\boldsymbol{\tau}_{2} & =\sum\nolimits_{i=1}^{l_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{2_{i\ast}}\label{Conditional Density Extreme Points 2}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\text{,}\nonumber
\end{align}
where $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ specifies a conditional density for $\mathbf{x}_{2_{i\ast}}$, such that $\boldsymbol{\tau}_{2}$ is a parameter vector for a class-conditional probability density $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $ for a given set $\left\{ \mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i_{\ast}}}$ extreme points
\[
\boldsymbol{\tau}_{2}=p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) \text{.}
\]
Therefore, it is concluded that $\boldsymbol{\tau}_{1}$ is a parameter vector for the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $\boldsymbol{\tau}_{2}$ is a parameter vector for the class-conditional probability density function $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $.
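Assembling the primal parameter vectors is equally direct: the sketch below forms $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ from the same assumed extreme points and scale factors, and lists the pointwise conditional densities $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ that $\boldsymbol{\tau}_{1}$ aggregates.

\begin{verbatim}
import numpy as np

# Assumed extreme points and scale factors (illustrative values).
X1 = np.array([[1.0, 2.0], [2.0, 1.5]]);     psi1 = np.array([0.7, 0.5])
X2 = np.array([[-1.0, -1.5], [-2.0, -1.0]]); psi2 = np.array([0.6, 0.6])

tau1 = psi1 @ X1  # tau_1 = sum_i psi_1i* x_1i*
tau2 = psi2 @ X2  # tau_2 = sum_i psi_2i* x_2i*

# Each scaled extreme vector psi_1i* x_1i* is one pointwise conditional
# density; tau_1 aggregates them into a class parameter vector.
for p, x in zip(psi1, X1):
    print("pointwise density for class omega_1:", p * x)
print("tau_1 =", tau1, " tau_2 =", tau2)
\end{verbatim}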
I will now devise integrals for the conditional probability functions for class $\omega_{1}$ and class $\omega_{2}$.

\subsection{Conditional Probability Functions}

I\ will now show that the conditional probability function $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ is given by the area under the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ over the decision space $Z$.

\subsubsection{Conditional Probability Function for Class $\omega_{1}$}

A linear eigenlocus $\boldsymbol{\tau}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ is the basis of a linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that partitions any given feature space into congruent decision regions $Z_{1}\cong Z_{2}$, whereby, for any two overlapping data distributions, an $\mathbf{x}_{1_{i_{\ast}}}$ or $\mathbf{x}_{2_{i_{\ast}}}$ extreme point lies in either region $Z_{1}$ or region $Z_{2}$, and, for any two non-overlapping data distributions, $\mathbf{x}_{1_{i_{\ast}}}$ extreme points lie in region $Z_{1}$ and $\mathbf{x}_{2_{i_{\ast}}}$ extreme points lie in region $Z_{2}$.

Therefore, the area under each pointwise conditional density in Eq. (\ref{Conditional Density Extreme Points 1})
\[
\int_{Z_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{1}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \text{ or }\int_{Z_{2}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{1}\left( \mathbf{x}_{1_{i_{\ast}}}\right)
\]
is a conditional probability that an $\mathbf{x}_{1_{i_{\ast}}}$ extreme point will be observed in either region $Z_{1}$ or region $Z_{2}$. Thus, the area $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ in Eq.
(\ref{Conditional Density Extreme Points 1})
\begin{align*}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{1_{i\ast}}\right) d\boldsymbol{\tau}_{1}\\
& =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) d\boldsymbol{\tau}_{1}=\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\\
& =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\frac{1}{2}\left\Vert \boldsymbol{\tau}_{1}\right\Vert ^{2}+C=\left\Vert \boldsymbol{\tau}_{1}\right\Vert ^{2}+C_{1}
\end{align*}
specifies the conditional probability of observing a set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i_{\ast}}}$ extreme points within \emph{localized regions} of the decision space $Z$, where conditional densities $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ for $\mathbf{x}_{1_{i_{\ast}}}$ extreme points that lie in the $Z_{2}$ decision region \emph{contribute} to the cost or risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}\mathbf{x}_{1_{i_{\ast}}}\right) $ of making a decision error, and conditional densities $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ for $\mathbf{x}_{1_{i_{\ast}}}$ extreme points that lie in the $Z_{1}$ decision region \emph{counteract} the cost or risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}\mathbf{x}_{1_{i_{\ast}}}\right) $ of making a decision error.

It follows that the area $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ is determined by regions of risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}\mathbf{x}_{1_{i_{\ast}}}\right) $ and regions of counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}\mathbf{x}_{1_{i_{\ast}}}\right) $ for the $\mathbf{x}_{1_{i_{\ast}}}$ extreme points, where these regions of risk and counter risk are localized regions in the decision space $Z$ that are determined by central locations (expected values) and spreads (covariances) of the $\mathbf{x}_{1_{i\ast}}$ extreme points.

Therefore, the conditional probability function $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ is given by the integral
\begin{align}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}=\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\label{Conditional Probability Function for Class One}\\
& =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert ^{2}+C_{1}\text{,}\nonumber
\end{align}
over the decision space $Z$, which has a solution in terms of the critical minimum eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$ and an integration constant $C_{1}$.
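In computational terms, the solution of this integral is the eigenenergy of $\boldsymbol{\tau}_{1}$ up to the integration constant $C_{1}$, and the risk or counter risk attribution of each pointwise density can be read off from the sign of $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ at the corresponding extreme point. The sketch below illustrates both steps; the extreme points, scale factors, and bias \texttt{tau0} are assumptions.

\begin{verbatim}
import numpy as np

# Assumed extreme points, scale factors, and bias term.
X1 = np.array([[1.0, 2.0], [2.0, 1.5]]);     psi1 = np.array([0.7, 0.5])
X2 = np.array([[-1.0, -1.5], [-2.0, -1.0]]); psi2 = np.array([0.6, 0.6])
tau1, tau2 = psi1 @ X1, psi2 @ X2
tau, tau0 = tau1 - tau2, -0.1

# P(x_1i*|tau_1) = ||tau_1||^2 + C_1, up to the integration constant.
print("||tau_1||^2 =", np.dot(tau1, tau1))

# Densities psi_1i* x_1i* whose extreme point falls in Z_2 contribute
# to the risk; those in Z_1 counteract it.
for p, x in zip(psi1, X1):
    region = "Z_1 (counter risk)" if tau @ x + tau0 >= 0 else "Z_2 (risk)"
    print("extreme point with psi =", p, "->", region)
\end{verbatim}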
I\ will now demonstrate that the conditional probability function $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ is given by the area under the class-conditional probability density function $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $ over the decision space $Z$.

\subsubsection{Conditional Probability Function for Class $\omega_{2}$}

The area under each pointwise conditional density in Eq. (\ref{Conditional Density Extreme Points 2})
\[
\int_{Z_{1}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{2}\left( \mathbf{x}_{2_{i\ast}}\right) \text{ or }\int_{Z_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{2}\left( \mathbf{x}_{2_{i\ast}}\right)
\]
is a conditional probability that an $\mathbf{x}_{2_{i_{\ast}}}$ extreme point will be observed in either region $Z_{1}$ or region $Z_{2}$. Thus, the area $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $ in Eq. (\ref{Conditional Density Extreme Points 2})
\begin{align*}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{2_{i\ast}}\right) d\boldsymbol{\tau}_{2}\\
& =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) d\boldsymbol{\tau}_{2}=\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\frac{1}{2}\left\Vert \boldsymbol{\tau}_{2}\right\Vert ^{2}+C=\left\Vert \boldsymbol{\tau}_{2}\right\Vert ^{2}+C_{2}
\end{align*}
specifies the conditional probability of observing a set $\left\{ \mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i_{\ast}}}$ extreme points within localized regions of the decision space $Z$, where conditional densities $\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}$ for $\mathbf{x}_{2_{i_{\ast}}}$ extreme points that lie in the $Z_{1}$ decision region \emph{contribute} to the cost or risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}\right) $ of making a decision error, and conditional densities $\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}$ for $\mathbf{x}_{2_{i_{\ast}}}$ extreme points that lie in the $Z_{2}$ decision region \emph{counteract} the cost or risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}\right) $ of making a decision error.
It follows that the area $P\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{2_{i_{\ast}}}|\boldsymbol{\tau}_{2}\right) $ is determined by regions of risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}\right) $ and regions of counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}\mathbf{x}_{2_{i_{\ast}}}\right) $ for the $\mathbf{x}_{2_{i_{\ast}}}$ extreme points, where these regions of risk and counter risk are localized regions in the decision space $Z$ that are determined by central locations (expected values) and spreads (covariances) of the $\mathbf{x}_{2_{i\ast}}$ extreme points.

Therefore, the conditional probability function $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ is given by the integral
\begin{align}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}=\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\label{Conditional Probability Function for Class Two}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert ^{2}+C_{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, which has a solution in terms of the critical minimum eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{2}$ and an integration constant $C_{2}$.

In order to precisely define the manner in which constrained, linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfy a data-driven version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium, I need to precisely define the manner in which the total allowed eigenenergies of the principal eigenaxis components on $\boldsymbol{\tau}$ are symmetrically balanced with each other. Furthermore, I need to identify the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ \emph{and} $\boldsymbol{\tau}$ enables linear eigenlocus classification systems $\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to \emph{effectively balance all of the forces associated with} the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ in the $Z_{1}$ decision region \emph{with all of the forces associated with} the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ in the $Z_{2}$ decision region.
Recall that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\]
involves opposing forces that depend on the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and the corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$.

It has been demonstrated that linear eigenlocus transforms define these opposing forces in terms of symmetrically balanced, pointwise covariance statistics
\begin{align*}
\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}
\end{align*}
and
\begin{align*}
\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\text{,}
\end{align*}
such that any given conditional density
\[
p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \text{ \ or \ }p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right)
\]
for a respective extreme point $\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$ is defined in terms of forces related to counter risks and risks associated with positions and potential locations of $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ extreme points within the $Z_{1}$ and $Z_{2}$ decision regions of a decision space $Z$.

Linear eigenlocus transforms routinely accomplish an elegant, statistical balancing feat that involves finding the right mix of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$.
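The opposing forces in this risk decomposition can be estimated by simulation. The following sketch draws samples from two overlapping Gaussian class densities, which are an illustrative assumption rather than the eigenlocus construction itself, and estimates the two components of $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ for a fixed linear decision rule.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Assumed overlapping Gaussian classes and a fixed linear rule:
# tau^T x + tau0 >= 0 -> omega_1, otherwise omega_2.
tau, tau0 = np.array([1.0, 1.0]), 0.0
X1 = rng.normal(loc=[+1.0, +1.0], scale=1.0, size=(100000, 2))  # omega_1
X2 = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(100000, 2))  # omega_2

# R(Z_2 | p(Lambda|omega_1)): omega_1 mass that lands in Z_2;
# R(Z_1 | p(Lambda|omega_2)): omega_2 mass that lands in Z_1.
risk_Z2 = np.mean(X1 @ tau + tau0 < 0)
risk_Z1 = np.mean(X2 @ tau + tau0 >= 0)
print("estimated R_min(Z|Lambda) =", risk_Z2 + risk_Z1)
\end{verbatim}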
I\ will now show that the scale factors $\left\{ \psi_{i\ast}\right\} _{i=1}^{l}$ of the Wolfe dual principal eigenaxis components $\left\{ \psi_{i\ast}\frac{\mathbf{x}_{i\ast}}{\left\Vert \mathbf{x}_{i\ast}\right\Vert }|\psi_{i\ast}>0\right\} _{i=1}^{l}$ on $\boldsymbol{\psi}$ play a fundamental role in this statistical balancing feat. I\ will develop an equation of statistical equilibrium for the axis of $\boldsymbol{\tau}$ that is determined by the equation of statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }
\]
for the axis of $\boldsymbol{\psi}$.

\subsection{Finding the Right Mix of Component Lengths}

It has been demonstrated that the directions of the constrained primal and the Wolfe dual principal eigenaxis components are fixed, along with the angles between all of the extreme vectors. I will now show that the lengths of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ must satisfy critical magnitude constraints.

Using Eq. (\ref{Dual Eigen-coordinate Locations Component One}), it follows that the integrated lengths $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}$ of the components $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{1}$ must satisfy the equation
\begin{align}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \label{integrated dual loci one1}\\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{1_{j\ast}}}\nonumber\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert \nonumber\\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{1_{i\ast}}\mathbf{x}_{2_{j\ast}}}\nonumber
\end{align}
which reduces to
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right) \text{.}
\]
Using Eq.
(\ref{Dual Eigen-coordinate Locations Component Two}), it follows that the integrated lengths $\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}$ of the components $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{2}$ must satisfy the equation
\begin{align}
\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \label{integrated dual loci two1}\\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{2_{j\ast}}}\nonumber\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert \nonumber\\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert \cos\theta_{\mathbf{x}_{2_{i\ast}}\mathbf{x}_{1_{j\ast}}}\nonumber
\end{align}
which reduces to
\[
\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right) \text{.}
\]
I will now show that Eqs (\ref{integrated dual loci one1}) and (\ref{integrated dual loci two1}) determine a balanced eigenlocus equation, in which the RHS of Eq. (\ref{integrated dual loci one1}) equals the RHS of Eq. (\ref{integrated dual loci two1}).

\subsection{Balanced Linear Eigenlocus Equations}

Returning to Eq. (\ref{Equilibrium Constraint on Dual Eigen-components})
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}
\]
where the axis of $\boldsymbol{\psi}$ is in statistical equilibrium, it follows that the RHS of Eq. (\ref{integrated dual loci one1}) must equal the RHS of Eq. (\ref{integrated dual loci two1})
\begin{align}
& \sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right) \label{Balanced Eigenlocus Equation Linear}\\
& =\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right) \text{.}\nonumber
\end{align}
Therefore, all of the $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ extreme points are distributed over the axes of $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ in the symmetrically balanced manner
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i\ast}}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) =\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i\ast}}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \text{,}\label{Balanced Eigenlocus Equation}
\end{equation}
where the components of the $\mathbf{x}_{1_{i\ast}}$ extreme vectors along the axis of $\boldsymbol{\tau}_{2}$ oppose the components of the $\mathbf{x}_{1_{i\ast}}$ extreme vectors along the axis of $\boldsymbol{\tau}_{1}$, and the components of the $\mathbf{x}_{2_{i\ast}}$ extreme vectors along the axis of $\boldsymbol{\tau}_{1}$ oppose the components of the $\mathbf{x}_{2_{i\ast}}$ extreme vectors along the axis of $\boldsymbol{\tau}_{2}$.
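Equation (\ref{Balanced Eigenlocus Equation}) can be checked directly. The sketch below uses a deliberately symmetric configuration, $\mathbf{x}_{2_{i\ast}}=-\mathbf{x}_{1_{i\ast}}$ with equal scale factors, for which the equilibrium constraint holds by construction, so the two sides of the balanced eigenlocus equation agree exactly; for arbitrary scale factors the equality is a property of an eigenlocus solution, not of the formulas themselves.

\begin{verbatim}
import numpy as np

# Symmetric, assumed configuration: X2 = -X1 and psi2 = psi1.
X1 = np.array([[1.0, 2.0], [2.0, 1.5]]);  psi1 = np.array([0.7, 0.5])
X2, psi2 = -X1, psi1.copy()

tau1, tau2 = psi1 @ X1, psi2 @ X2

lhs = np.sum(X1 @ (tau1 - tau2))  # sum_i x_1i*^T (tau_1 - tau_2)
rhs = np.sum(X2 @ (tau2 - tau1))  # sum_i x_2i*^T (tau_2 - tau_1)
print(lhs, rhs)
assert np.isclose(lhs, rhs)       # balanced eigenlocus equation holds
\end{verbatim}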
Let $\widehat{\mathbf{x}}_{i\ast}\triangleq\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}$, where $\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}=\sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i\ast}}+\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i\ast}}$. Using Eq. (\ref{Balanced Eigenlocus Equation}), it follows that the component of $\widehat{\mathbf{x}}_{i\ast}$ along $\boldsymbol{\tau}_{1}$ is symmetrically balanced with the component of $\widehat{\mathbf{x}}_{i\ast}$ along $\boldsymbol{\tau}_{2}$
\[
\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) =\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right)
\]
so that the components $\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) $ and $\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) $ of clusters or aggregates of the extreme vectors from both pattern classes have \emph{equal forces associated with risks and counter risks} on opposite sides of the axis of $\boldsymbol{\tau}$.

\subsubsection{Statistical Equilibrium of Risks and Counter Risks}

Given Eq. (\ref{Balanced Eigenlocus Equation}), it follows that the axis of $\boldsymbol{\tau}$ is a lever of uniform density, where the center of $\boldsymbol{\tau}$ is $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$, for which two equal weights $\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) $ and $\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) $ are placed on opposite sides of the fulcrum of $\boldsymbol{\tau}$, whereby the axis of $\boldsymbol{\tau}$ is in \emph{statistical equilibrium}. Figure \ref{Statistical Equilibrium of Primal Linear Eigenlocus} illustrates the axis of $\boldsymbol{\tau}$ in statistical equilibrium, where forces associated with counter risks and risks of aggregates of extreme points are balanced with each other.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure29.png}}
\caption{The axis of $\boldsymbol{\tau}$ is in statistical equilibrium, where two equal weights $\operatorname{comp}_{\protect\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \protect\overrightarrow{\protect\widehat{\mathbf{x}}_{i\ast}}\right) $ and $\operatorname{comp}_{\protect\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \protect\overrightarrow{\protect\widehat{\mathbf{x}}_{i\ast}}\right) $ are placed on opposite sides of the fulcrum of $\boldsymbol{\tau}$, which is located at the center of total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$.}
\label{Statistical Equilibrium of Primal Linear Eigenlocus}
\end{figure}

\subsection{Critical Magnitude Constraints}

Equation (\ref{Balanced Eigenlocus Equation}) indicates that the lengths
\[
\left\{ \psi_{1_{i\ast}}|\psi_{1_{i\ast}}>0\right\} _{i=1}^{l_{1}}\text{ and }\left\{ \psi_{2_{i\ast}}|\psi_{2_{i\ast}}>0\right\} _{i=1}^{l_{2}}
\]
of the $l$ Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ satisfy critical magnitude constraints, such that the Wolfe dual eigensystem in Eq.
(\ref{Dual Normal Eigenlocus Components}), which specifies highly interconnected, balanced sets of inner product relationships amongst the Wolfe dual and the constrained, primal principal eigenaxis components in Eqs (\ref{integrated dual loci one1}) and (\ref{integrated dual loci two1}), determines well-proportioned lengths $\psi_{1i\ast}$ or $\psi_{2i\ast}$ for each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ or $\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{1}$ or $\boldsymbol{\psi}_{2}$, where each scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ determines a well-proportioned length for a correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ or $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{1}$ or $\boldsymbol{\tau}_{2}$.

I will demonstrate that the axis of $\boldsymbol{\psi}$, which is constrained to be in statistical equilibrium $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$, determines an equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$ of an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $, such that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is the solution to a fundamental integral equation of binary classification for a linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in statistical equilibrium.

Let $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ denote the risk of a linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that is determined by a linear eigenlocus transform. Take any given set $\left\{ \left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}},\;\left\{ \mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}\right\} $ of extreme points and take the set of scale factors $\left\{ \left\{ \psi_{1i\ast}\right\} _{i=1}^{l_{1}},\left\{ \psi_{2i\ast}\right\} _{i=1}^{l_{2}}\right\} $ that are determined by a linear eigenlocus transform.

Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the scaled extreme vector $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative.
Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the scaled extreme vector $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative.

Take any given extreme point $\mathbf{x}_{1_{i\ast}}$ from class $\omega_{1}$. Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}\mathbf{x}_{1_{i\ast}}^{T}\mathbf{x}_{1_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $\mathbf{x}_{1_{i\ast}}$ along the scaled extreme vector $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative. Likewise, let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}\mathbf{x}_{1_{i\ast}}^{T}\mathbf{x}_{2_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $\mathbf{x}_{1_{i\ast}}$ along the scaled extreme vector $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative.

Take any given extreme point $\mathbf{x}_{2_{i_{\ast}}}$ from class $\omega_{2}$. Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}\mathbf{x}_{2_{i\ast}}^{T}\mathbf{x}_{2_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $\mathbf{x}_{2_{i\ast}}$ along the scaled extreme vector $\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative. Likewise, let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}\mathbf{x}_{2_{i\ast}}^{T}\mathbf{x}_{1_{j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $\mathbf{x}_{2_{i\ast}}$ along the scaled extreme vector $\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}$ in the decision space $Z$, where the force associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ may be positive or negative.

Returning to Eq.
(\ref{Balanced Eigenlocus Equation Linear})
\begin{align*}
& \sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}\right) \\
& =\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i\ast}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\right) \text{,}
\end{align*}
it follows that the collective forces associated with risks and counter risks, which are related to the positions and the potential locations of all of the extreme points, are balanced in the following manner:
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) & :\sum\nolimits_{i=1}^{l_{1}}\left[ \sum\nolimits_{j=1}^{l_{1}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}\mathbf{x}_{1_{i\ast}}^{T}\mathbf{x}_{1_{j\ast}}\right) -\sum\nolimits_{j=1}^{l_{2}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}\mathbf{x}_{1_{i\ast}}^{T}\mathbf{x}_{2_{j\ast}}\right) \right] \\
& =\sum\nolimits_{i=1}^{l_{2}}\left[ \sum\nolimits_{j=1}^{l_{2}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}\mathbf{x}_{2_{i\ast}}^{T}\mathbf{x}_{2_{j\ast}}\right) -\sum\nolimits_{j=1}^{l_{1}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}\mathbf{x}_{2_{i\ast}}^{T}\mathbf{x}_{1_{j\ast}}\right) \right] \text{.}
\end{align*}
So, take any given set $\left\{ \left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}},\;\left\{ \mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}\right\} $ of extreme points and take the set of scale factors $\left\{ \left\{ \psi_{1i\ast}\right\} _{i=1}^{l_{1}},\left\{ \psi_{2i\ast}\right\} _{i=1}^{l_{2}}\right\} $ that are determined by a linear eigenlocus transform. I\ will show that linear eigenlocus transforms choose magnitudes or scale factors for the Wolfe dual principal eigenaxis components
\[
\left\{ \psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\right\} _{i=1}^{l_{1}}\text{ and }\left\{ \psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\right\} _{i=1}^{l_{2}}
\]
on $\boldsymbol{\psi}$, which is constrained to satisfy the equation of statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\text{,}
\]
such that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium, and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ and the corresponding total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized.
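The double sums in this balance statement are Gram-matrix computations: entry $\left( i,j\right) $ of each matrix below is the inner product statistic $\psi_{j\ast}\mathbf{x}_{i\ast}^{T}\mathbf{x}_{j\ast}$ behind one force associated with a risk or counter risk. The same symmetric, assumed configuration as in the earlier sketch is used, so the two aggregated sides agree.

\begin{verbatim}
import numpy as np

# Symmetric, assumed configuration: X2 = -X1 and psi2 = psi1.
X1 = np.array([[1.0, 2.0], [2.0, 1.5]]);  psi1 = np.array([0.7, 0.5])
X2, psi2 = -X1, psi1.copy()

F11 = (X1 @ X1.T) * psi1  # entry (i,j): psi_1j* x_1i*^T x_1j*
F12 = (X1 @ X2.T) * psi2  # entry (i,j): psi_2j* x_1i*^T x_2j*
F22 = (X2 @ X2.T) * psi2
F21 = (X2 @ X1.T) * psi1

lhs = F11.sum() - F12.sum()  # class omega_1 side of the balance
rhs = F22.sum() - F21.sum()  # class omega_2 side of the balance
print(lhs, rhs)              # equal: the collective forces balance
\end{verbatim}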
In the next section, I will explicitly define the manner in which constrained, linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfy linear decision boundaries $D_{0}\left( \mathbf{x}\right) $ and linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $. I will use these results to show that the principal eigenaxis $\boldsymbol{\tau}$ of a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$, such that the total allowed eigenenergies exhibited by the scaled extreme vectors on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are symmetrically balanced about the fulcrum $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$. Thereby, I will show that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium. I will use all of these results to identify the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\tau}$ enables linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to effectively balance all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ in the $Z_{1}$ decision region with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and $Z_{1}$ and $Z_{2}$ are congruent decision regions $Z_{1}\cong Z_{2}$, given the equilibrium point $\boldsymbol{\psi}_{1}-\boldsymbol{\psi}_{2}=0$ and the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $, where the areas under the probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ are symmetrically
balanced with each other over the $Z_{1}$ and $Z_{2}$ decision regions.

\section{Risk Minimization for Linear Classifiers}

In the next two sections, I will show that the conditional probability function $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$, which is given by the integral
\[
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}\text{,}
\]
over the decision space $Z$, and the conditional probability function $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$, which is given by the integral
\[
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\]
over the decision space $Z$, satisfy an integral equation where the area under the probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ is \emph{symmetrically balanced with} the area under the probability density function $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$
\[
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) :\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\nabla_{eq}\equiv\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\nabla_{eq}\text{,}
\]
where $\nabla_{eq}$ is an equalizer statistic, such that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium. Accordingly, I will formulate a system of data-driven, locus equations that determines the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$, and I will derive values for the integration constants $C_{1}$ and $C_{2}$. I will use these results to devise an equalizer statistic $\nabla_{eq}$ for an integral equation that is satisfied by the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $.

I will now devise a system of data-driven, locus equations that determines the manner in which the total allowed eigenenergies of the scaled extreme points on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are symmetrically balanced about the fulcrum $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$. Accordingly, I will devise three systems of data-driven, locus equations that explicitly determine the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$, the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{2}$, and the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}$.

\subsection{Critical Minimum Eigenenergy Constraints I}

Let there be $l$ labeled, scaled extreme points on a linear eigenlocus $\boldsymbol{\tau}$.
Given the theorem of Karush, Kuhn, and Tucker and the KKT condition in Eq. (\ref{KKTE5}), it follows that a Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ exists, for which
\[
\left\{ \psi_{i\ast}>0\right\} _{i=1}^{l}\text{,}
\]
such that the $l$ constrained, primal principal eigenaxis components $\left\{ \psi_{i_{\ast}}\mathbf{x}_{i_{\ast}}\right\} _{i=1}^{l}$ on $\boldsymbol{\tau}$ satisfy a system of $l$ eigenlocus equations
\begin{equation}
\psi_{i_{\ast}}\left[ y_{i}\left( \mathbf{x}_{i_{\ast}}^{T}\boldsymbol{\tau}+\tau_{0}\right) -1+\xi_{i}\right] =0,\ i=1,...,l\text{.} \label{Minimum Eigenenergy Functional System}
\end{equation}
I will now use Eq. (\ref{Minimum Eigenenergy Functional System}) to define critical minimum eigenenergy constraints on $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$. The analysis begins with the critical minimum eigenenergy constraint on $\boldsymbol{\tau}_{1}$.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\tau}_{1}$}

Take any scaled extreme vector $\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}$ that belongs to class $\omega_{1}$. Using Eq. (\ref{Minimum Eigenenergy Functional System}) and letting $y_{i}=+1$, it follows that the constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}$ on $\boldsymbol{\tau}_{1}$ is specified by the equation
\[
\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
which is part of a system of $l_{1}$ eigenlocus equations. Therefore, each constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}$ on $\boldsymbol{\tau}_{1}$ satisfies the above locus equation.

Now take all of the $l_{1}$ scaled extreme vectors $\left\{ \psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\right\} _{i=1}^{l_{1}}$ that belong to class $\omega_{1}$. Again, using Eq. (\ref{Minimum Eigenenergy Functional System}) and letting $y_{i}=+1$, it follows that the complete set $\left\{ \psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\right\} _{i=1}^{l_{1}}$ of $l_{1}$ constrained, primal principal eigenaxis components $\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}$ on $\boldsymbol{\tau}_{1}$ is determined by the system of $l_{1}$ eigenlocus equations
\begin{equation}
\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) ,\ i=1,...,l_{1}\text{.} \label{Minimum Eigenenergy Class One}
\end{equation}
Using Eq. (\ref{Minimum Eigenenergy Class One}), it follows that the entire set $\left\{ \psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\right\} _{i=1}^{l_{1}}$ of $l_{1}\times d$ transformed, extreme vector coordinates satisfies the system of $l_{1}$ eigenlocus equations
\[
(1)\ \ \psi_{1_{1_{\ast}}}\mathbf{x}_{1_{1_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{1_{1_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
\[
(2)\ \ \psi_{1_{2_{\ast}}}\mathbf{x}_{1_{2_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{1_{2_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
\[
\vdots
\]
\[
(l_{1})\ \ \psi_{1_{l_{1}\ast}}\mathbf{x}_{1_{l_{1}\ast}}^{T}\boldsymbol{\tau}=\psi_{1_{l_{1}\ast}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
where each constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}$ on $\boldsymbol{\tau}_{1}$ satisfies the identity
\[
\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\boldsymbol{\tau}\equiv\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{.}
\]
I will now derive an identity for the total allowed eigenenergy of $\boldsymbol{\tau}_{1}$. Let $E_{\boldsymbol{\tau}_{1}}$ denote the functional of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{1}$ and let $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$. Summation over the above system of $l_{1}$ eigenlocus equations produces the following equation for the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{1}$
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}^{T}\right) \left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
which reduces to
\[
\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{2}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\]
so that the functional $E_{\boldsymbol{\tau}_{1}}$ satisfies the identity
\[
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{2}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{.}
\]
Therefore, the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by the constrained, primal principal eigenlocus component $\boldsymbol{\tau}_{1}$ is determined by the identity
\begin{equation}
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,} \label{TAE Eigenlocus Component One}
\end{equation}
where the functional $E_{\boldsymbol{\tau}_{1}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$
\[
E_{\boldsymbol{\tau}_{1}}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{1}}$
\[
E_{\boldsymbol{\psi}_{1}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right)
\]
of the integrated magnitudes $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}$ of the Wolfe dual principal eigenaxis components $\psi_{1_{i_{\ast}}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }$ and the $\tau_{0}$ statistic.

Returning to Eq. (\ref{Decision Border One}), it follows that the functionals $E_{\boldsymbol{\tau}_{1}}$ and $E_{\boldsymbol{\psi}_{1}}$ specify the manner in which linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfy the linear decision border $D_{+1}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=1$. Given Eq.
(\ref{TAE Eigenlocus Component One}), it is concluded that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies the linear decision border $D_{+1}\left( \mathbf{x}\right) $ in terms of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$, where the functional $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}$ is constrained by the functional $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) $.

The critical minimum eigenenergy constraint on $\boldsymbol{\tau}_{2}$ is examined next.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\tau}_{2}$}

Take any scaled extreme vector $\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ that belongs to class $\omega_{2}$. Using Eq. (\ref{Minimum Eigenenergy Functional System}) and letting $y_{i}=-1$, it follows that the constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ on $\boldsymbol{\tau}_{2}$ is specified by the equation
\[
-\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
which is part of a system of $l_{2}$ eigenlocus equations. Therefore, each constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ on $\boldsymbol{\tau}_{2}$ satisfies the above locus equation.

Now take all of the $l_{2}$ scaled extreme vectors $\left\{ \psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ that belong to class $\omega_{2}$. Again, using Eq. (\ref{Minimum Eigenenergy Functional System}) and letting $y_{i}=-1$, it follows that the complete set $\left\{ \psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ of $l_{2}$ constrained, primal principal eigenaxis components $\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ on $\boldsymbol{\tau}_{2}$ is determined by the system of $l_{2}$ eigenlocus equations
\begin{equation}
-\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) ,\ i=1,...,l_{2}\text{.} \label{Minimum Eigenenergy Class Two}
\end{equation}
Using Eq. (\ref{Minimum Eigenenergy Class Two}), it follows that the entire set $\left\{ \psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\right\} _{i=1}^{l_{2}}$ of $l_{2}\times d$ transformed, extreme vector coordinates satisfies the system of $l_{2}$ eigenlocus equations
\[
(1)\ \ -\psi_{2_{1_{\ast}}}\mathbf{x}_{2_{1_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{2_{1_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
\[
(2)\ \ -\psi_{2_{2_{\ast}}}\mathbf{x}_{2_{2_{\ast}}}^{T}\boldsymbol{\tau}=\psi_{2_{2_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
\[
\vdots
\]
\[
(l_{2})\ \ -\psi_{2_{l_{2}\ast}}\mathbf{x}_{2_{l_{2}\ast}}^{T}\boldsymbol{\tau}=\psi_{2_{l_{2}\ast}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
where each constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ on $\boldsymbol{\tau}_{2}$ satisfies the identity
\[
-\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}^{T}\boldsymbol{\tau}\equiv\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{.}
\]
I will now derive an identity for the total allowed eigenenergy of $\boldsymbol{\tau}_{2}$.
Let $E_{\boldsymbol{\tau}_{2}}$ denote the functional of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{2}$ and let $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$. Summation over the above system of $l_{2}$ eigenlocus equations produces the following equation for the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{2}$
\[
-\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}^{T}\right) \left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
which reduces to
\[
\boldsymbol{\tau}_{2}^{T}\boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{2}^{T}\boldsymbol{\tau}_{1}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\]
so that the functional $E_{\boldsymbol{\tau}_{2}}$ satisfies the identity
\[
\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\tau}_{2}^{T}\boldsymbol{\tau}_{1}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{.}
\]
Therefore, the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the constrained, primal eigenlocus component $\boldsymbol{\tau}_{2}$ is determined by the identity
\begin{equation}
\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,} \label{TAE Eigenlocus Component Two}
\end{equation}
where the functional $E_{\boldsymbol{\tau}_{2}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{2}$
\[
E_{\boldsymbol{\tau}_{2}}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{2}}$
\[
E_{\boldsymbol{\psi}_{2}}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right)
\]
of the integrated magnitudes $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}$ of the Wolfe dual principal eigenaxis components $\psi_{2_{i_{\ast}}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }$ and the $\tau_{0}$ statistic.

Returning to Eq. (\ref{Decision Border Two}), it follows that the functionals $E_{\boldsymbol{\tau}_{2}}$ and $E_{\boldsymbol{\psi}_{2}}$ specify the manner in which linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfy the linear decision border $D_{-1}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=-1$. Given Eq.
(\ref{TAE Eigenlocus Component Two}), it is concluded that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies the linear decision border $D_{-1}\left( \mathbf{x}\right) $ in terms of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{2}$, where the functional $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}$ is constrained by the functional $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) $.

The critical minimum eigenenergy constraint on $\boldsymbol{\tau}$ is examined next.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\tau}$}

I will now derive an identity for the total allowed eigenenergy of a constrained, primal linear eigenlocus $\boldsymbol{\tau}$. Let $E_{\boldsymbol{\tau}}$ denote the functional of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$. Summation over the complete system of eigenlocus equations satisfied by $\boldsymbol{\tau}_{1}$
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}^{T}\right) \boldsymbol{\tau}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right)
\]
and by $\boldsymbol{\tau}_{2}$
\[
\left( -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i\ast}}^{T}\right) \boldsymbol{\tau}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right)
\]
produces the following identity for the functional $E_{\boldsymbol{\tau}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}$
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & :\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}^{T}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i\ast}}^{T}\right) \boldsymbol{\tau}\\
& \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\end{align*}
which reduces to
\begin{align}
\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) ^{T}\boldsymbol{\tau} & \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( 1-\xi_{i}-\tau_{0}\right) \label{Symmetrical Balance of TAE SDE}\\
& +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \nonumber\\
& \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) \text{,}\nonumber
\end{align}
where I have used the equilibrium constraint on $\boldsymbol{\psi}$ in Eq. (\ref{Equilibrium Constraint on Dual Eigen-components}).
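The two component identities, Eqs. (\ref{TAE Eigenlocus Component One}) and (\ref{TAE Eigenlocus Component Two}), lend themselves to a direct numerical check. The following minimal sketch assumes the same stand-in solver and synthetic data as before, with negligible slack $\xi_{i}\approx0$; variable names are illustrative.
\begin{verbatim}
# Minimal sketch: check the eigenenergy identities for tau_1 and tau_2.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
               rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
psi = np.abs(clf.dual_coef_[0])
sv, y_sv = clf.support_vectors_, y[clf.support_]
tau0 = clf.intercept_[0]
tau1 = (psi[y_sv > 0, None] * sv[y_sv > 0]).sum(axis=0)  # class omega_1
tau2 = (psi[y_sv < 0, None] * sv[y_sv < 0]).sum(axis=0)  # class omega_2

# ||tau_1||^2 - tau_1^T tau_2 = sum_i psi_1i (1 - xi_i - tau_0), xi_i ~ 0
print(tau1 @ tau1 - tau1 @ tau2, psi[y_sv > 0].sum() * (1 - tau0))
# ||tau_2||^2 - tau_2^T tau_1 = sum_i psi_2i (1 - xi_i + tau_0), xi_i ~ 0
print(tau2 @ tau2 - tau2 @ tau1, psi[y_sv < 0].sum() * (1 + tau0))
\end{verbatim}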
Thereby, the functional $E_{\boldsymbol{\tau}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}$
\begin{align*}
E_{\boldsymbol{\tau}} & =\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) ^{T}\boldsymbol{\tau}\\
& =\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}
\end{align*}
is equivalent to the functional $E_{\boldsymbol{\psi}}$
\[
E_{\boldsymbol{\psi}}=\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right)
\]
solely in terms of the integrated magnitudes $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$.

Thus, the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ is specified by the integrated magnitudes $\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components $\psi_{i\ast}\frac{\mathbf{x}_{i\ast}}{\left\Vert \mathbf{x}_{i\ast}\right\Vert }$ on $\boldsymbol{\psi}$
\begin{align}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) \label{TAE SDE}\\
& \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}-\sum\nolimits_{i=1}^{l}\xi_{i}\psi_{i_{\ast}}\text{,}\nonumber
\end{align}
where the regularization parameters $\xi_{i}=\xi\ll1$ are seen to determine negligible constraints on $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$. Therefore, it is concluded that the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ exhibited by a constrained, primal linear eigenlocus $\boldsymbol{\tau}$ is determined by the integrated magnitudes $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components $\psi_{i\ast}\frac{\mathbf{x}_{i\ast}}{\left\Vert \mathbf{x}_{i\ast}\right\Vert }$ on $\boldsymbol{\psi}$.

Returning to Eq. (\ref{Decision Boundary}), it follows that the equilibrium constraint on $\boldsymbol{\psi}$ and the corresponding functionals $E_{\boldsymbol{\tau}}$ and $E_{\boldsymbol{\psi}}$ specify the manner in which linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfy linear decision boundaries $D_{0}\left( \mathbf{x}\right) $: $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0$. Given Eq. (\ref{TAE SDE}), it is concluded that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ satisfies a linear decision boundary $D_{0}\left( \mathbf{x}\right) $ in terms of its total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$, where the functional $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ is constrained by the functional $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) $.
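Eq. (\ref{TAE SDE}) admits the same kind of check: with negligible slack, the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ should agree with the integrated magnitudes $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}$. A minimal sketch, under the same stand-in assumptions as before:
\begin{verbatim}
# Minimal sketch: ||tau||^2 = sum_i psi_i (1 - xi_i), with xi_i ~ 0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
               rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
tau = clf.coef_[0]                 # tau = tau_1 - tau_2
psi = np.abs(clf.dual_coef_[0])    # scale factors psi_i

print(tau @ tau, psi.sum())        # equal up to tolerance
\end{verbatim}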
Using Eqs. (\ref{TAE Eigenlocus Component One}), (\ref{TAE Eigenlocus Component Two}), and (\ref{Symmetrical Balance of TAE SDE}), it follows that the symmetrically balanced constraints
\[
E_{\boldsymbol{\psi}_{1}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \text{ \ and \ }E_{\boldsymbol{\psi}_{2}}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right)
\]
satisfied by a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ on the respective linear decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $, and the corresponding constraint
\[
E_{\boldsymbol{\psi}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( 1-\xi_{i}-\tau_{0}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right)
\]
satisfied by a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\mathbf{x}^{T}\boldsymbol{\tau}+\tau_{0}$ on the linear decision boundary $D_{0}\left( \mathbf{x}\right) $, ensure that the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the scaled extreme points on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\\
& +\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}
\end{align*}
satisfy the law of cosines in the symmetrically balanced manner depicted in Fig. \ref{Law of Cosines for Binary Classification Systems}.

Given the binary classification theorem, it follows that linear eigenlocus likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and corresponding decision boundaries $D_{0}\left( \mathbf{x}\right) $ satisfy an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ where the areas $\int\nolimits_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ and $\int\nolimits_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ under the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ are balanced with each other. Furthermore, the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$ must be balanced with the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$.
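The law-of-cosines decomposition above is a purely vector-algebraic identity, so it can be checked directly for arbitrary $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$; the short sketch below uses illustrative random vectors:
\begin{verbatim}
# Minimal sketch: ||tau_1 - tau_2||^2 = E_tau1 + E_tau2, where each
# functional subtracts the projection term ||tau_1|| ||tau_2|| cos(theta).
import numpy as np

rng = np.random.default_rng(1)
tau1, tau2 = rng.normal(size=3), rng.normal(size=3)
tau = tau1 - tau2

cos = (tau1 @ tau2) / (np.linalg.norm(tau1) * np.linalg.norm(tau2))
E1 = tau1 @ tau1 - np.linalg.norm(tau1) * np.linalg.norm(tau2) * cos
E2 = tau2 @ tau2 - np.linalg.norm(tau2) * np.linalg.norm(tau1) * cos
print(tau @ tau, E1 + E2)          # equal: the law of cosines
\end{verbatim}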
Therefore, linear eigenlocus likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and corresponding decision boundaries $D_{0}\left( \mathbf{x}\right) $ also satisfy an integral equation where the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of a linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are balanced with each other.

I will show that the discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks an equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$ of an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $, where the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of the classification system are balanced with each other, such that the eigenenergy and the risk of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized, and the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium.

In the next section, I will develop an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ that is satisfied by linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$, where the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the principal eigenaxis components on a linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other. Thereby, I will show that the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ is in statistical equilibrium, and that the areas under the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $, over the decision space $Z=Z_{1}+Z_{2}$, are symmetrically balanced with each other. I will use these results to show that the linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is the solution to a fundamental integral equation of binary classification for a linear classification system in statistical equilibrium. The solution involves a surprising, statistical balancing feat in decision space $Z$ which hinges on an elegant, statistical balancing feat in eigenspace $\widetilde{Z}$.
\section{The Balancing Feat in Eigenspace I}

A linear eigenlocus
\[
\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\text{,}
\]
which is formed by a locus of labeled ($+1$ or $-1$), scaled ($\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$) extreme vectors ($\mathbf{x}_{1_{i\ast}}$ or $\mathbf{x}_{2_{i\ast}}$)
\[
\boldsymbol{\tau}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\text{,}
\]
has a \emph{dual nature} that is \emph{twofold}: Each $\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$ scale factor determines the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}
\]
of a principal eigenaxis component $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ or $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ in decision space $Z$, and each $\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$ scale factor determines the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\right\Vert _{\min_{c}}^{2}
\]
of a principal eigenaxis component $\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ or $\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ in Wolfe dual eigenspace $\widetilde{Z}$.

In addition, each $\psi_{1_{i\ast}}$ scale factor specifies dual conditional densities for an $\mathbf{x}_{1_{i\ast}}$ extreme point
\[
p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\text{ \ and \ }p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{1_{i\ast}}\text{,}
\]
and each $\psi_{2_{i\ast}}$ scale factor specifies dual conditional densities for an $\mathbf{x}_{2_{i\ast}}$ extreme point
\[
p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\text{ \ and \ }p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{2_{i\ast}}\text{.}
\]
Accordingly, a Wolfe dual linear eigenlocus $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ is a parameter vector of likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) & =\sum\nolimits_{i=1}^{l_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }\\
& +\sum\nolimits_{i=1}^{l_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }
\end{align*}
\emph{and} a locus of principal eigenaxis components in Wolfe dual eigenspace $\widetilde{Z}$, and a primal linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is a parameter vector of likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\sum\nolimits_{i=1}^{l_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{1_{i\ast}}\\
& -\sum\nolimits_{i=1}^{l_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) \mathbf{x}_{2_{i\ast}}
\end{align*}
\emph{and} a locus of principal eigenaxis components in decision space $Z$, that jointly determine the basis of a linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. Moreover, the Wolfe dual likelihood ratio
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) & =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}
\end{align*}
is constrained to satisfy the equilibrium equation
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
so that the Wolfe dual likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ is in statistical equilibrium
\[
\boldsymbol{\psi}_{1}=\boldsymbol{\psi}_{2}\text{.}
\]
I will demonstrate that the dual nature of $\boldsymbol{\tau}$ enables a linear eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}
\]
to be the solution to a fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
where all of the forces associated with the counter risks and the risks for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\int\nolimits_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int\nolimits_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
over the $Z_{1}$ and $Z_{2}$ decision regions, by means of an elegant, statistical balancing feat in Wolfe dual eigenspace $\widetilde{Z}$, where the functional $E_{\boldsymbol{\tau}_{1}}$ of $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ in Eq. (\ref{TAE Eigenlocus Component One}) and the functional $E_{\boldsymbol{\tau}_{2}}$ of $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ in Eq. (\ref{TAE Eigenlocus Component Two}) are constrained to be equal to each other by means of a symmetric equalizer statistic $\nabla_{eq}$: $\frac{\delta\left( y\right) }{2}\boldsymbol{\psi}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $.

I have shown that each of the constrained, primal principal eigenaxis components $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ or $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ on $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ has such a magnitude and direction that a constrained, linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ partitions any given feature space into congruent decision regions $Z_{1}\cong Z_{2}$, which are symmetrically partitioned by a linear decision boundary, by means of three, symmetrical linear loci, all of which reference $\boldsymbol{\tau}$.
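The symmetry of the three linear loci can also be seen numerically: for a separable problem, the decision borders $D_{+1}\left( \mathbf{x}\right) $ and $D_{-1}\left( \mathbf{x}\right) $ lie at equal distances $1/\left\Vert \boldsymbol{\tau}\right\Vert $ on either side of the decision boundary $D_{0}\left( \mathbf{x}\right) $. A minimal sketch, same stand-in assumptions as before:
\begin{verbatim}
# Minimal sketch: the extreme points lie on the borders D_{+1}, D_{-1},
# at signed distances +/- 1/||tau|| from the boundary D_0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
               rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
tau, tau0 = clf.coef_[0], clf.intercept_[0]

d = (clf.support_vectors_ @ tau + tau0) / np.linalg.norm(tau)
print(d.min(), d.max(), 1 / np.linalg.norm(tau))  # -1/||tau||, +1/||tau||
\end{verbatim}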
I will show that linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generate decision regions $Z_{1}$ and $Z_{2}$ for which the dual parameter vectors of likelihoods $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ and $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are in statistical equilibrium. Thereby, I will demonstrate that balancing the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ of the linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ hinges on balancing the eigenenergies associated with the positions or locations of the dual likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $ and $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) & =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }
\end{align*}
and
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\text{.}
\end{align*}

\subsection{Balancing the Eigenenergies of $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$}

I will now devise an equation that determines how the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other. Using Eq. (\ref{TAE Eigenlocus Component One}) and the equilibrium constraint on $\boldsymbol{\psi}$ in Eq. (\ref{Equilibrium Constraint on Dual Eigen-components})
\begin{align*}
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}} & \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\tau_{0}\right) \\
& \equiv\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}-\tau_{0}\right) \text{,}
\end{align*}
it follows that the functional $E_{\boldsymbol{\tau}_{1}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{1}$
\[
E_{\boldsymbol{\tau}_{1}}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{1}}$
\begin{equation}
E_{\boldsymbol{\psi}_{1}}=\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) -\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\text{.} \label{TAE Constraint COMP1}
\end{equation}
Using Eq. (\ref{TAE Eigenlocus Component Two}) and the equilibrium constraint on $\boldsymbol{\psi}$ in Eq.
(\ref{Equilibrium Constraint on Dual Eigen-components})
\begin{align*}
\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}} & \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\tau_{0}\right) \\
& \equiv\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}+\tau_{0}\right) \text{,}
\end{align*}
it follows that the functional $E_{\boldsymbol{\tau}_{2}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\tau}_{2}$
\[
E_{\boldsymbol{\tau}_{2}}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{2}}$
\begin{equation}
E_{\boldsymbol{\psi}_{2}}=\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) +\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{.} \label{TAE Constraint COMP2}
\end{equation}
Next, I will use the identity for $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ in Eq. (\ref{TAE SDE})
\[
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right)
\]
to rewrite $E_{\boldsymbol{\psi}_{1}}$
\begin{align*}
E_{\boldsymbol{\psi}_{1}} & =\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) -\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}-\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}
\end{align*}
and $E_{\boldsymbol{\psi}_{2}}$
\begin{align*}
E_{\boldsymbol{\psi}_{2}} & =\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) +\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}+\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}
\end{align*}
in terms of $\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ and a symmetric equalizer statistic.
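With negligible slack, the rewritten functional $E_{\boldsymbol{\psi}_{1}}$ can be computed both from Eq. (\ref{TAE Constraint COMP1}) and from $\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}-\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}$, and the two values should agree. A minimal sketch, same stand-in assumptions as before:
\begin{verbatim}
# Minimal sketch: E_psi1 = sum_i psi_1i (1 - tau0)
#                       = (1/2)||tau||^2 - tau0 * sum_i psi_1i, xi_i ~ 0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
               rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
tau, tau0 = clf.coef_[0], clf.intercept_[0]
psi = np.abs(clf.dual_coef_[0])
y_sv = y[clf.support_]

E_psi1 = psi[y_sv > 0].sum() * (1 - tau0)
print(E_psi1, 0.5 * (tau @ tau) - tau0 * psi[y_sv > 0].sum())
\end{verbatim}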
Substituting the rewritten expressions for $E_{\boldsymbol{\psi}_{1}}$ and $E_{\boldsymbol{\psi}_{2}}$ into Eqs. (\ref{TAE Eigenlocus Component One}) and (\ref{TAE Eigenlocus Component Two}) produces the equations
\[
\left( \left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\right) +\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}
\]
and
\[
\left( \left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}\right) -\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where the terms $\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}$ and $-\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}$ specify a symmetric equalizer statistic $\nabla_{eq}$ for integrals of class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ that determine conditional probability functions $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $.

Therefore, let $\nabla_{eq}$ denote $\tau_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}$ and $\tau_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}$, where
\[
\nabla_{eq}\triangleq\frac{\tau_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{.}
\]
It follows that the dual, class-conditional parameter vectors of likelihoods $\boldsymbol{\psi}_{1}$, $\boldsymbol{\psi}_{2}$, $\boldsymbol{\tau}_{1}$, and $\boldsymbol{\tau}_{2}$ satisfy the eigenlocus equations
\begin{equation}
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\nabla_{eq}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} \label{Balancing Feat SDEC1}
\end{equation}
and
\begin{equation}
\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\nabla_{eq}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,} \label{Balancing Feat SDEC2}
\end{equation}
where $\nabla_{eq}\triangleq\frac{\tau_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$. I will now examine the geometric and statistical properties of the equalizer statistic $\nabla_{eq}$ in eigenspace.

\subsubsection{Properties of $\nabla_{eq}$ in Eigenspace}

Substituting the vector expression $\boldsymbol{\tau}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}$ for $\boldsymbol{\tau}$ in Eq. (\ref{Pair of Normal Eigenlocus Components}) into the expression for $\tau_{0}$ in Eq.
(\ref{Normal Eigenlocus Projection Factor}) produces the statistic for $\tau_{0}$
\begin{align}
\tau_{0} & =-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j_{\ast}}}\mathbf{x}_{1_{j_{\ast}}}\label{Eigenlocus Projection Factor Two}\\
& +\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j_{\ast}}}\mathbf{x}_{2_{j_{\ast}}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \text{.}\nonumber
\end{align}
Substituting the statistic for $\tau_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two}) into the expression for $\nabla_{eq}$ produces the statistic for $\nabla_{eq}$
\begin{align*}
\nabla_{eq} & =\frac{\tau_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =-\left( \sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}_{1}\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& +\left( \sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}_{2}\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\end{align*}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $. Let $\widehat{\mathbf{x}}_{i\ast}\triangleq\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}$. It follows that $\tau_{0}$ regulates a symmetrical balancing act for components of $\widehat{\mathbf{x}}_{i\ast}$ along $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$, where the statistic $\nabla_{eq}$ is written as
\[
+\nabla_{eq}=\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) +\delta\left( y\right) \right] \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
and
\[
-\nabla_{eq}=\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{i\ast}}\right) -\delta\left( y\right) \right] \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{.}
\]
Returning to Eq. (\ref{Balanced Eigenlocus Equation})
\[
\widehat{\mathbf{x}}_{1i\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) =\widehat{\mathbf{x}}_{2i\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \text{,}
\]
given that the components of $\widehat{\mathbf{x}}_{1i\ast}$ and $\widehat{\mathbf{x}}_{2i\ast}$ along $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are in statistical equilibrium
\[
\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{1i\ast}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{1i\ast}}\right) \right] \rightleftharpoons\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{2}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{2i\ast}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\tau}_{1}}}\left( \overrightarrow{\widehat{\mathbf{x}}_{2i\ast}}\right) \right] \text{,}
\]
it follows that
\[
+\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\delta\left( y\right) \boldsymbol{\psi}_{1}
\]
and
\[
-\nabla_{eq}=-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{.}
\]
I will now demonstrate that the symmetric equalizer statistic $\nabla_{eq}$ ensures that the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ satisfy the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\end{align*}
over the decision space $Z$: $Z=Z_{1}+Z_{2}$ and $Z_{1}\cong Z_{2}$, whereby the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ of the classification system $\boldsymbol{\tau}^{T}\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium. In the process, I will formulate an equation which ensures that $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ are symmetrically balanced with each other.

\subsection{Linear Eigenlocus Integral Equation}

Let $\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$. Substituting the expression for $+\nabla_{eq}$ into Eq. (\ref{Balancing Feat SDEC1}) produces an equation that is satisfied by the conditional probabilities of locations for the set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i\ast}}$ extreme points within the decision space $Z$
\begin{align*}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
and substituting the expression for $-\nabla_{eq}$ into Eq.
(\ref{Balancing Feat SDEC2}) produces an equation that is satisfied by the conditional probabilities of locations for the set $\left\{ \mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i\ast}}$ extreme points within the decision space $Z$
\begin{align*}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
where the equalizer statistic $\nabla_{eq}$
\[
\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
\emph{equalizes} the conditional probabilities $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ of observing the set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i\ast}}$ extreme points and the set $\left\{ \mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i\ast}}$ extreme points within the $Z_{1}$ and $Z_{2}$ decision regions of the decision space $Z$.

Therefore, the equalizer statistic $\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ ensures that $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ are symmetrically balanced with each other in the following manner
\[
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}
\]
and
\[
\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{.}
\]
Thereby, the equalizer statistic
\[
\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
\emph{equalizes} the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ so that the total allowed eigenenergies $\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the scaled extreme points on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other about the fulcrum of $\boldsymbol{\tau}$
\begin{equation}
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,} \label{Symmetrical Balance of Total Allowed Eigenenergies}
\end{equation}
which is located at the center of eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$: the geometric center of $\boldsymbol{\tau}$.
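Eqs. (\ref{Balancing Feat SDEC1}) and (\ref{Balancing Feat SDEC2}) can be verified numerically in the $\tau_{0}$-form of the equalizer statistic, $\nabla_{eq}=\frac{\tau_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ (the $\delta\left( y\right) $-form rests on the equilibrium argument given above). A minimal sketch, same stand-in assumptions as before:
\begin{verbatim}
# Minimal sketch: E_tau1 + nabla_eq = (1/2)||tau||^2 = E_tau2 - nabla_eq,
# with nabla_eq = (tau0 / 2) * sum_i psi_i and xi_i ~ 0.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
               rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
psi = np.abs(clf.dual_coef_[0])
sv, y_sv = clf.support_vectors_, y[clf.support_]
tau0 = clf.intercept_[0]
tau1 = (psi[y_sv > 0, None] * sv[y_sv > 0]).sum(axis=0)
tau2 = (psi[y_sv < 0, None] * sv[y_sv < 0]).sum(axis=0)
tau = tau1 - tau2

nabla_eq = 0.5 * tau0 * psi.sum()
E_tau1 = tau1 @ tau1 - tau1 @ tau2   # ||tau_1||^2 - ||tau_1||||tau_2||cos
E_tau2 = tau2 @ tau2 - tau2 @ tau1
print(E_tau1 + nabla_eq, E_tau2 - nabla_eq, 0.5 * (tau @ tau))
\end{verbatim}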
Thus, the likelihood ratio
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}
\end{align*}
of the classification system $\boldsymbol{\tau}^{T}\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium. It follows that the eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is symmetrically balanced with the eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.

Returning to Eq. (\ref{Conditional Probability Function for Class One})
\[
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}
\]
and Eq. (\ref{Conditional Probability Function for Class Two})
\[
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\]
it follows that the value for the integration constant $C_{1}$ in Eq. (\ref{Conditional Probability Function for Class One}) is
\[
C_{1}=-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}
\]
and the value for the integration constant $C_{2}$ in Eq. (\ref{Conditional Probability Function for Class Two}) is
\[
C_{2}=-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}\text{.}
\]
Therefore, the area $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ in Eq.
(\ref{Conditional Density Extreme Points 1})
\begin{align}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Integral Equation Class One}\\
& =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\nonumber\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, is \emph{symmetrically balanced} with the area $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ under the class-conditional probability density function $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ in Eq. (\ref{Conditional Density Extreme Points 2})
\begin{align}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Integral Equation Class Two}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\nonumber\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, where the area $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ under $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and the area $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ under $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ are constrained to be equal to $\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$:
\[
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \equiv P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{.}
\]
It follows that the linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is the solution to the integral equation
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\label{Linear Eigenlocus Integral Equation}\\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}\nonumber
\end{align}
over the decision space $Z=Z_{1}+Z_{2}$, where the dual likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ and $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ are in statistical equilibrium. Accordingly, all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of extreme points $\mathbf{x}_{1_{i\ast}}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are equal to all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of extreme points $\mathbf{x}_{2_{i\ast}}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $; moreover, the eigenenergy associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.

So, let $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ denote the Wolfe dual parameter vectors of likelihoods $\boldsymbol{\psi}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ and $\boldsymbol{\psi}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$.
It follows that the class-conditional probability density functions $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ in the Wolfe dual eigenspace $\widetilde{Z}$ and the class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ in the decision space $Z$ satisfy the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \\
& =\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}
\end{align*}
over the decision space $Z$, where $Z=Z_{1}+Z_{2}$, $Z_{1}$ and $Z_{2}$ are contiguous decision regions, $Z_{1}\cong Z_{2}$, and $Z\subset\mathbb{R}^{d}$.

Thus, it is concluded that the linear eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\]
is the solution to the integral equation
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\label{Linear Eigenlocus Integral Equation I}\\
& +\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\nonumber\\
& -\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, the integral $\int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ accounts for all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $ that are related to positions and potential locations of corresponding $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, the integral $\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ accounts for all of the forces associated with risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $ that are related to positions and potential locations of corresponding $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region, the integral
$\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ accounts for all of the forces associated with risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $ that are related to positions and potential locations of corresponding $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, and the integral $\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ accounts for all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $ that are related to positions and potential locations of corresponding $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region.

The equalizer statistics $+\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $-\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ ensure that the collective forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}_{1}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{1}$ and class $\omega_{2}$, which are given by the respective integrals $\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ and $\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$, are symmetrically balanced with each other.

Therefore, the classification system
\[
\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
is in statistical equilibrium:
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) : & \int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}-\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\label{Linear Eigenlocus Integral Equation II}\\
& +\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\nonumber\\
& -\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
where all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are symmetrically balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region, such that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left(
\mathbf{x}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \text{,}
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system is minimized.

Thus, the locus of principal eigenaxis components on $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ satisfies the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ are components of a principal eigenaxis $\boldsymbol{\tau}$, and $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\tau}$.
The above integral equation can be written as
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Linear Eigenlocus Integral Equation III}\\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}\nonumber
\end{align}
where the integral $\int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}$ accounts for all of the eigenenergies $\left\Vert \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by all of the $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, the integral $\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}$ accounts for all of the eigenenergies $\left\Vert \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by all of the $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region, the integral $\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}$ accounts for all of the eigenenergies $\left\Vert \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by all of the $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, and the integral $\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}$ accounts for all of the eigenenergies $\left\Vert \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by all of the $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region. The equalizer statistics $+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ and $-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ ensure that the integrals $\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}$ and $\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}$ are symmetrically balanced with each other; a numerical sketch of this region-wise accounting follows.
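The region-wise accounting lends itself to direct computation. The following minimal sketch tallies the eigenenergies $\left\Vert \psi_{i\ast}\mathbf{x}_{i\ast}\right\Vert ^{2}$ of the extreme points by class and by decision region; membership of a point in $Z_{1}$ or $Z_{2}$ is decided here by the sign of a placeholder discriminant score, and all arrays are illustrative stand-ins for solver output, under the same assumptions as the earlier sketch.
\begin{verbatim}
# Tally the eigenenergies ||psi_i* x_i*||^2 of the extreme points by
# class and by decision region.  Region membership is decided by the
# sign of a placeholder discriminant score tau^T x (illustrative only).
import numpy as np

psi = np.array([0.6, 0.4, 0.7, 0.3])
X_sv = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [0.5, -0.2]])
y_sv = np.array([1.0, 1.0, -1.0, -1.0])

tau = ((psi * y_sv)[:, None] * X_sv).sum(axis=0)   # tau = tau_1 - tau_2
in_Z1 = (X_sv @ tau) >= 0.0                        # Z_1: decide omega_1

scaled = psi[:, None] * X_sv                       # psi_i* x_i*
energy = np.einsum("ij,ij->i", scaled, scaled)     # ||psi_i* x_i*||^2

for label, name in [(1.0, "omega_1"), (-1.0, "omega_2")]:
    m = y_sv == label
    print(f"{name}: energy in Z_1 = {energy[m & in_Z1].sum():.4f}, "
          f"energy in Z_2 = {energy[m & ~in_Z1].sum():.4f}")
\end{verbatim}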
Equation (\ref{Linear Eigenlocus Integral Equation III}) can be rewritten as
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}-\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Linear Eigenlocus Integral Equation IV}\\
& =\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}\nonumber
\end{align}
where all of the eigenenergies $\left\Vert \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}$ associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ in the $Z_{1}$ decision region are \emph{symmetrically balanced} with all of the eigenenergies $\left\Vert \psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}$ associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ in the $Z_{2}$ decision region.

Given Eqs (\ref{Linear Eigenlocus Integral Equation I})--(\ref{Linear Eigenlocus Integral Equation IV}), it is concluded that linear eigenlocus discriminant functions $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ satisfy discrete and data-driven versions of the integral equation of binary classification in Eq. (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}), the fundamental integral equation of binary classification for a classification system in statistical equilibrium in Eq. (\ref{Equalizer Rule}), and the corresponding integral equation for a classification system in statistical equilibrium in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}).
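Before turning to the equilibrium points of these integral equations, note that evaluating a linear eigenlocus discriminant function in practice reduces to a dot product and a threshold test. The sketch below is a minimal illustration of the rule $\boldsymbol{\tau}^{T}\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$; the values of \texttt{tau}, \texttt{x\_hat}, and \texttt{delta\_y} are illustrative placeholders for solver output, not the author's code.
\begin{verbatim}
# Sketch of the decision rule tau^T (x - x_hat) + delta(y) >< 0.
# tau, x_hat, delta_y are placeholders for solver output; x_hat plays
# the role of the extreme-point statistic hat{x}_i*.
import numpy as np

def eigenlocus_decide(x, tau, x_hat, delta_y):
    """Return +1 for class omega_1, -1 for class omega_2."""
    score = tau @ (x - x_hat) + delta_y
    return 1 if score >= 0.0 else -1

tau = np.array([2.1, 2.3])       # tau = tau_1 - tau_2 (placeholder)
x_hat = np.array([0.0, 0.0])     # hat{x}_i* (placeholder)
delta_y = 0.0                    # sum_i y_i (1 - xi_i) (placeholder)
print(eigenlocus_decide(np.array([1.5, 1.0]), tau, x_hat, delta_y))    # +1
print(eigenlocus_decide(np.array([-1.5, -1.0]), tau, x_hat, delta_y))  # -1
\end{verbatim}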
\subsection{Equilibrium Points of Integral Equations I}

Returning to the binary classification theorem, recall that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$ is the focus of a decision boundary $D\left( \mathbf{x}\right) $.

Returning to Eq. (\ref{Wolfe Dual Equilibrium Point})
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}=0\text{,}
\]
it follows that the Wolfe dual linear eigenlocus $\boldsymbol{\psi}$ of likelihoods and principal eigenaxis components $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $
\begin{align*}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\frac{\mathbf{x}_{i\ast}}{\left\Vert \mathbf{x}_{i\ast}\right\Vert }\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}
\end{align*}
is the \emph{equilibrium point} $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }=0
\]
of the integral equation in Eq. (\ref{Linear Eigenlocus Integral Equation}) and all of its derivatives in Eqs (\ref{Linear Eigenlocus Integral Equation I})--(\ref{Linear Eigenlocus Integral Equation IV}).
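The scalar constraint in Eq. (\ref{Wolfe Dual Equilibrium Point}) mirrors the familiar dual constraint $\sum_{i}\alpha_{i}y_{i}=0$ enforced by soft-margin solvers. As an independent numerical illustration (a computational stand-in, not the author's construction), one can fit scikit-learn's linear \texttt{SVC} and confirm that the class-wise sums of the dual variables coincide:
\begin{verbatim}
# Independent illustration: the class-wise sums of dual variables are
# equal, mirroring sum_i psi_1i* = sum_i psi_2i*.  scikit-learn's SVC
# is used here purely as a computational stand-in.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
coef = clf.dual_coef_.ravel()        # entries are y_i * alpha_i
sum_1 = coef[coef > 0].sum()         # sum of alpha_i with y_i = +1
sum_2 = -coef[coef < 0].sum()        # sum of alpha_i with y_i = -1
print(f"{sum_1:.6f} vs {sum_2:.6f}") # equal up to solver tolerance
\end{verbatim}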
Therefore, it is concluded that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system $\boldsymbol{\tau}^{T}\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }=0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\\
& +\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \\
& =\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\\
& -\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }=0$ is the dual focus of a linear decision boundary $D\left( \mathbf{x}\right) $.

I will now develop an integral equation that explicitly accounts for the primal focus $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $ of a linear decision boundary $D\left( \mathbf{x}\right) $ and the equilibrium point $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $ of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $.

\section{The Balancing Feat in Dual Space I}

Let $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and substitute the statistic for $\tau_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two})
\begin{align*}
\tau_{0} & =-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\\
& +\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
into Eq.
(\ref{Minimum Eigenenergy Class One})
\[
\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\boldsymbol{\tau}=\psi_{1_{i\ast}}\left( 1-\xi_{i}-\tau_{0}\right) ,\ i=1,\ldots,l_{1}\text{,}
\]
where each conditional density $\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ of an $\mathbf{x}_{1_{i\ast}}$ extreme point satisfies the identity
\[
\psi_{1_{i\ast}}\left( 1-\xi_{i}\right) \equiv\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\boldsymbol{\tau}+\psi_{1_{i\ast}}\tau_{0}\text{.}
\]
Accordingly, the above identity can be rewritten in terms of an eigenlocus equation that is satisfied by the conditional density $\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }$ of an $\mathbf{x}_{1_{i\ast}}$ extreme point:
\begin{align}
\psi_{1_{i\ast}} & =\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \label{Pointwise Conditional Density Constraint One}\\
& +\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} \nonumber\\
& +\xi_{i}\psi_{1_{i\ast}}+\delta\left( y\right) \psi_{1_{i\ast}}\text{,}\nonumber
\end{align}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, $\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}$ is a principal eigenaxis component on $\boldsymbol{\tau}_{1}$, and the set of scaled extreme vectors $\psi_{1_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}\right) $ is symmetrically distributed over $\boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}$:
\[
\psi_{1_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{2}-\psi_{1_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{1}\text{.}
\]
Again, let $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$. Substitute the statistic for $\tau_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two})
\begin{align*}
\tau_{0} & =-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\mathbf{x}_{1_{j\ast}}\\
& +\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\mathbf{x}_{2_{j\ast}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
into Eq. (\ref{Minimum Eigenenergy Class Two})
\[
-\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\boldsymbol{\tau}=\psi_{2_{i\ast}}\left( 1-\xi_{i}+\tau_{0}\right) ,\ i=1,\ldots,l_{2}\text{,}
\]
where each conditional density $\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ of an $\mathbf{x}_{2_{i\ast}}$ extreme point satisfies the identity
\[
\psi_{2_{i\ast}}\left( 1-\xi_{i}\right) =-\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\boldsymbol{\tau}-\psi_{2_{i\ast}}\tau_{0}\text{.}
\]
Accordingly, the above identity can be rewritten in terms of an eigenlocus equation that is satisfied by the conditional density $\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }$ of an $\mathbf{x}_{2_{i\ast}}$ extreme point:
\begin{align}
\psi_{2_{i\ast}} & =\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \label{Pointwise Conditional Density Constraint Two}\\
& +\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} \nonumber\\
& +\xi_{i}\psi_{2_{i\ast}}-\delta\left( y\right) \psi_{2_{i\ast}}\text{,}\nonumber
\end{align}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, $\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}$ is a principal eigenaxis component on $\boldsymbol{\tau}_{2}$, and the set of scaled extreme vectors $\psi_{2_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}\right) $ is symmetrically distributed over $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$:
\[
\psi_{2_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{1}-\psi_{2_{i\ast}}\left( \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{2}\text{.}
\]
Using Eqs (\ref{Integral Equation Class One}) and (\ref{Pointwise Conditional Density Constraint One}), it follows that the conditional probability $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ of observing the set $\left\{ \mathbf{x}_{1_{i\ast}}\right\} _{i=1}^{l_{1}}$ of $\mathbf{x}_{1_{i\ast}}$ extreme points within localized regions of the decision space $Z$ is determined by the eigenlocus equation
\begin{align}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \label{Bayes' Risk One}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} \nonumber\\
& +\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i\ast}}\nonumber\\
& \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\text{,}\nonumber
\end{align}
where $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ evaluates to $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}$, and scaled extreme vectors are symmetrically distributed over $\boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}$ in the following manner:
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{2}-\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{1}\text{.}
\]
Using Eqs (\ref{Integral Equation Class Two}) and (\ref{Pointwise Conditional Density Constraint Two}), it follows that the conditional probability $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ of observing the set $\left\{ \mathbf{x}_{2_{i\ast}}\right\} _{i=1}^{l_{2}}$ of $\mathbf{x}_{2_{i\ast}}$ extreme points within localized regions of the decision space $Z$ is determined by the eigenlocus equation
\begin{align}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \label{Bayes' Risk Two}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} \nonumber\\
& -\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i\ast}}\nonumber\\
& \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}\nonumber
\end{align}
where $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ evaluates to $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}$, and scaled extreme vectors are symmetrically distributed over $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ in the following manner:
\[
\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{1}-\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\right) \boldsymbol{\tau}_{2}\text{.}
\]
I will now use Eqs (\ref{Bayes' Risk One}) and (\ref{Bayes' Risk Two}) to devise an equilibrium equation that determines the overall manner in which linear eigenlocus discriminant functions $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ minimize the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ and the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ for a given decision space $Z$.

\subsection{Minimization of Risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ and Eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$}

Take the estimates in Eqs (\ref{Bayes' Risk One}) and (\ref{Bayes' Risk Two})
\[
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}
\]
and
\[
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}
\]
for the conditional probabilities $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ of observing the $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ extreme points within localized regions of the decision space $Z$.
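The identity $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}$ is a constraint that a solved eigenlocus system satisfies. The following sketch simply evaluates the right-hand side of Eq. (\ref{Bayes' Risk One}) for given arrays and reports the residual against $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}$, as a diagnostic one might run on solver output; all values are illustrative placeholders.
\begin{verbatim}
# Diagnostic for Eq. (Bayes' Risk One): evaluate its right-hand side
# and report the residual against sum_i psi_1i*.  A solved system
# drives the residual to zero; here it is simply printed.
import numpy as np

psi = np.array([0.6, 0.4, 0.7, 0.3])
X_sv = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y_sv = np.array([1.0, 1.0, -1.0, -1.0])
xi = np.zeros(4)

one = y_sv > 0
tau1 = (psi[one, None] * X_sv[one]).sum(axis=0)
tau2 = (psi[~one, None] * X_sv[~one]).sum(axis=0)
delta_y = np.sum(y_sv * (1.0 - xi))

term1 = np.sum(psi[one] * (X_sv[one] @ (tau1 - tau2)))       # x^T (tau1 - tau2)
term2 = psi[one].sum() * (X_sv.sum(axis=0) @ (tau2 - tau1))  # cross term
term3 = delta_y * psi[one].sum() + np.sum(xi[one] * psi[one])

residual = (term1 + term2 + term3) - psi[one].sum()
print(f"Eq. (Bayes' Risk One) residual: {residual:.6f}")
\end{verbatim}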
Given that the Wolfe dual eigenlocus of principal eigenaxis components and likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) & =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }
\end{align*}
satisfies the equilibrium equation
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }\text{,}
\]
it follows that the conditional probabilities of observing the $\mathbf{x}_{1_{i\ast}}$ and the $\mathbf{x}_{2_{i\ast}}$ extreme points within localized regions of the decision space $Z$ are equal to each other:
\[
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) =P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) \text{.}
\]
Accordingly, set the vector expressions in Eqs (\ref{Bayes' Risk One}) and (\ref{Bayes' Risk Two}) equal to each other:
\begin{align*}
& \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} \\
& +\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i\ast}}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} \\
& -\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i\ast}}\text{.}
\end{align*}
It follows that the equilibrium equation
\begin{align}
& \left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\label{Balancing Bayes' Risk Linear}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} +\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i\ast}}\nonumber\\
& =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\nonumber\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} +\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i\ast}}\nonumber
\end{align}
is satisfied by the equilibrium point
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
and the primal likelihood ratio
\[
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right)
\]
of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where the equilibrium equation in Eq. (\ref{Balancing Bayes' Risk Linear}) is constrained by the equilibrium point in Eq. (\ref{Wolfe Dual Equilibrium Point}).

I will now use Eq. (\ref{Balancing Bayes' Risk Linear}) to develop a fundamental linear eigenlocus integral equation of binary classification for a classification system in statistical equilibrium.

\subsection{Fundamental Balancing Feat in Dual Spaces I}

Returning to Eq. (\ref{Conditional Probability Function for Class One})
\begin{align*}
P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) & =\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}\\
& =\int_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}=\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}
\end{align*}
and Eq. (\ref{Conditional Probability Function for Class Two})
\begin{align*}
P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) & =\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}\\
& =\int_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}=\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\end{align*}
and using Eq.
(\ref{Balancing Bayes' Risk Linear})
\begin{align*}
& \left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} +\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i\ast}}\\
& =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} +\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i\ast}}\text{,}
\end{align*}
where $P\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) =P\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $, it follows that the value for the integration constant $C_{1}$ in Eq. (\ref{Conditional Probability Function for Class One}) is
\[
C_{1}=-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i\ast}}\text{,}
\]
and that the value for the integration constant $C_{2}$ in Eq. (\ref{Conditional Probability Function for Class Two}) is
\[
C_{2}=-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i\ast}}\text{.}
\]
Substituting the values for $C_{1}$ and $C_{2}$ into Eqs (\ref{Conditional Probability Function for Class One}) and (\ref{Conditional Probability Function for Class Two}) produces the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int\nolimits_{Z}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} \\
& =\int\nolimits_{Z}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} \text{,}
\end{align*}
over the decision space $Z$, where the equalizer statistics
\[
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) =\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}+\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\}
\]
and
\[
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) =-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\}
\]
ensure that the eigenenergy $\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is symmetrically balanced with the eigenenergy $\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
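The two equalizer statistics are themselves cheap to compute from solver-style outputs. A minimal sketch, again with illustrative placeholder arrays:
\begin{verbatim}
# Compute the equalizer statistics nabla_eq(.|omega_1) and
# nabla_eq(.|omega_2) displayed above from placeholder solver outputs.
import numpy as np

psi = np.array([0.6, 0.4, 0.7, 0.3])
X_sv = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y_sv = np.array([1.0, 1.0, -1.0, -1.0])
xi = np.zeros(4)

one = y_sv > 0
tau1 = (psi[one, None] * X_sv[one]).sum(axis=0)
tau2 = (psi[~one, None] * X_sv[~one]).sum(axis=0)
delta_y = np.sum(y_sv * (1.0 - xi))
cross = X_sv.sum(axis=0) @ (tau2 - tau1)   # sum_j x_j*^T (tau_2 - tau_1)

nabla_eq_1 = delta_y * psi[one].sum() + psi[one].sum() * cross
nabla_eq_2 = -delta_y * psi[~one].sum() + psi[~one].sum() * cross
print(f"nabla_eq(omega_1) = {nabla_eq_1:.4f}, "
      f"nabla_eq(omega_2) = {nabla_eq_2:.4f}")
\end{verbatim}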
The primal class-conditional probability density functions $p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) $ and $p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) $ and the Wolfe dual class-conditional probability density functions $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ satisfy the integral equation in the following manner:
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int\nolimits_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \label{Linear Eigenlocus Integral Equation V}\\
& +\widehat{\mathbf{x}}_{i\ast}\left[ p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) -p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \right] p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int\nolimits_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \nonumber\\
& +\widehat{\mathbf{x}}_{i\ast}\left[ p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) -p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) \right] p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
over the $Z_{1}$ and $Z_{2}$ decision regions.
The equalizer statistics
\begin{align}
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) & =\delta\left( y\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \label{Equalizer Statistic Class One}\\
& +\widehat{\mathbf{x}}_{i\ast}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& -\widehat{\mathbf{x}}_{i\ast}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber
\end{align}
and
\begin{align}
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) & =-\delta\left( y\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \label{Equalizer Statistic Class Two}\\
& +\widehat{\mathbf{x}}_{i\ast}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \nonumber\\
& -\widehat{\mathbf{x}}_{i\ast}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
where $p\left( \frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert \mathbf{x}_{1_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x}_{2_{i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ determine the equilibrium point
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ in Eq. (\ref{Linear Eigenlocus Integral Equation V}), ensure that all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $ and risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $ in the $Z_{1}$ decision region, which are related to positions and potential locations of corresponding $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ extreme points in the $Z_{1}$ decision region, are \emph{balanced with} all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $ and risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $ in the $Z_{2}$ decision region, which are related to positions and potential locations of corresponding $\mathbf{x}_{2_{i\ast}}$ and $\mathbf{x}_{1_{i\ast}}$ extreme points in the $Z_{2}$ decision region, such that the collective forces associated with the integrals $\int_{Z}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ and $\int_{Z}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ are \emph{symmetrically balanced with} each other.
The above integral equation can be written as
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{1}}\left( \mathbf{x}\right) \right) \\
& =\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{2}}\left( \mathbf{x}\right) \right) \text{,}
\end{align*}
where the integral $\int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ accounts for all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $, which are related to positions and potential locations of corresponding $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, the integral $\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}$ accounts for all of the forces associated with risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}\right) $, which are related to positions and potential locations of corresponding $\mathbf{x}_{1_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region, the integral $\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ accounts for all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $, which are related to positions and potential locations of corresponding $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{2}$ decision region, and the integral $\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}$ accounts for all of the forces associated with risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right) $, which are related to positions and potential locations of corresponding $\mathbf{x}_{2_{i\ast}}$ extreme points that lie in the $Z_{1}$ decision region.

It follows that the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks a point of statistical equilibrium $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$ where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the risk of the classification system are minimized, and the classification system is in statistical equilibrium.
Therefore, it is concluded that the linear eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\left( \mathbf{x}-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\]
is the solution to the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) = & \int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{2}-\boldsymbol{\tau}_{1}\right) \right\} \\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left\{ \sum\nolimits_{j=1}^{l}\mathbf{x}_{j\ast}^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) \right\} \text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are congruent decision regions $Z_{1}\cong Z_{2}$ that have respective counter risks
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) \text{ \ and \ }\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right)
\]
and respective risks
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) \text{ \ and \ }\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) \text{,}
\]
and where all of the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ for class $\omega_{2}$ in the $Z_{2}$ decision region are balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region:
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & :\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) \\
& \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) \text{,}
\end{align*}
such that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\frac{\mathbf{x}_{1_{i\ast}}}{\left\Vert
\mathbf{x}_{1_{i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2 }\psi_{2_{i\ast}}\frac{\mathbf{x}_{2_{i\ast}}}{\left\Vert \mathbf{x _{2_{i\ast}}\right\Vert }=0 \] of the integral equation $f\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $. It follows that the classification syste \[ \left( \mathbf{x}-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\left( \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right) +\sum \nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega _{1}}{\underset{\omega_{2}}{\gtrless}}0 \] is in statistical equilibrium \begin{align*} f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau }_{1}\right) d\boldsymbol{\tau}_{1}-\int_{Z_{1}}p\left( \mathbf{x _{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\nabla _{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\ & =\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}-\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast }|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{, \end{align*} where all of the forces associated with the counter risk $\overline {\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau }_{2}\right) $ in the $Z_{1}$ decision region are balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R} _{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ in the $Z_{2}$ decision region \begin{align*} f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1 |\boldsymbol{\tau}_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) \\ & =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau }_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau }_{1}\right) \text{, \end{align*} and the eigenenergies associated with the counter risk $\overline {\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau }_{2}\right) $ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R }_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $ in the $Z_{2}$ decision region \begin{align*} f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :E_{\min_{c}}\left( Z_{1}|\boldsymbol{\tau}_{1}\right) -E_{\min_{c}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) \\ & =E_{\min_{c}}\left( Z_{2}|\boldsymbol{\tau}_{2}\right) -E_{\min_{c }\left( Z_{2}|\boldsymbol{\tau}_{1}\right) \text{. 
Therefore, it is concluded that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\int_{Z_{2}}p\left( \mathbf{x}_{1_{i\ast}}|\boldsymbol{\tau}_{1}\right) d\boldsymbol{\tau}_{1}+\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) \\
& =\int_{Z_{1}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\int_{Z_{2}}p\left( \mathbf{x}_{2_{i\ast}}|\boldsymbol{\tau}_{2}\right) d\boldsymbol{\tau}_{2}+\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \text{,}
\end{align*}
where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the risk of the classification system are minimized, and the classification system is in statistical equilibrium.

Figure \ref{Symmetrical Balance of Bayes' Error Linear} illustrates the manner in which linear eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ minimize the total allowed eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $ of linear classification systems.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure30.png}}
\caption{Linear eigenlocus transforms generate linear classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ that satisfy a fundamental integral equation of binary classification for a classification system in statistical equilibrium.}
\label{Symmetrical Balance of Bayes' Error Linear}
\end{figure}

By way of illustration, Fig. \ref{Bayes' Decision Boundaries Linear} shows that linear eigenlocus transforms generate decision regions $Z_{1}$ and $Z_{2}$ that minimize the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\tau}\right) $, where $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\tau}_{2}\right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\tau}_{1}\right) $, for highly overlapping data distributions, completely overlapping data distributions and non-overlapping data distributions. Accordingly, given data distributions that have similar covariance matrices, linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generate optimal linear decision boundaries for highly overlapping data distributions (see Fig. \ref{Bayes' Decision Boundaries Linear}a and Fig. \ref{Bayes' Decision Boundaries Linear}b), completely overlapping data distributions (see Fig. \ref{Bayes' Decision Boundaries Linear}c and Fig. \ref{Bayes' Decision Boundaries Linear}d) and non-overlapping data distributions (see Fig. \ref{Bayes' Decision Boundaries Linear}e and Fig. \ref{Bayes' Decision Boundaries Linear}f), where unconstrained, primal principal eigenaxis components (extreme points) are enclosed in blue circles.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure31.png}}
\caption{Linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ generate optimal linear decision boundaries for $\left( 1\right) $ highly overlapping data distributions: see $\left( a\right) $ and $\left( b\right) $, $\left( 2\right) $ completely overlapping data distributions: see $\left( c\right) $ and $\left( d\right) $, and $\left( 3\right) $ non-overlapping data distributions: see $\left( e\right) $ and $\left( f\right) $.}
\label{Bayes' Decision Boundaries Linear}
\end{figure}

I\ am now in a position to formally state a \emph{discrete linear classification theorem}.

\section*{Discrete Linear Classification Theorem}

Take a collection of $d$-component random vectors $\mathbf{x}$ that are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics and similar covariance matrices, and let $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a discrete, linear classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category, $\boldsymbol{\tau}$ is a locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\tau} & \triangleq\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\text{,}
\end{align*}
where $\mathbf{x}_{1_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, $\psi_{1_{i_{\ast}}}$ and $\psi_{2_{i_{\ast}}}$ are scale factors that provide unit measures of likelihood for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ which lie in either overlapping regions or tails regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and $\tau_{0}$ is a functional of $\boldsymbol{\tau}$
\[
\tau_{0}=\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}\text{,}
\]
where $\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}=\sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i_{\ast}}}+\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i_{\ast}}}$ is a cluster of the data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ used to form $\boldsymbol{\tau}$, $y_{i}$ are class membership
statistics: if $\mathbf{x}_{i\ast}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i\ast}\in\omega_{2}$, assign $y_{i}=-1$, and $\xi_{i}$ are regularization parameters: $\xi_{i}=\xi=0$ for full rank Gram matrices or $\xi_{i}=\xi\ll1$ for low rank Gram matrices.

The linear discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}
\]
is the solution to the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are congruent decision regions: $Z_{1}\cong Z_{2}$ and $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $, where the equilibrium point is a dual locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\psi} & \triangleq\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }
\end{align*}
that is constrained to be in statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\text{.}
\]

Therefore, the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions: which are related to positions and potential locations of data points $\mathbf{x}_{1_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions: which are related to positions and potential locations of data points $\mathbf{x}_{2_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.

Furthermore, the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$
\[
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\equiv\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\]
where the total eigenenergy
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\\
& \qquad+\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right)
\end{align*}
of the discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is determined by the eigenenergies associated with the position or location of the likelihood ratio $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ and the locus of a corresponding, linear decision boundary $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0$.
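The expansion of the total eigenenergy above is the law of cosines for the vector difference $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$: the two cross terms are equal, and together they amount to $2\boldsymbol{\tau}_{1}^{T}\boldsymbol{\tau}_{2}$. A short numerical check of the vector identity, with random stand-ins for $\boldsymbol{\tau}_{1}$ and $\boldsymbol{\tau}_{2}$ that carry no statistical meaning, reads:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
tau1, tau2 = rng.normal(size=5), rng.normal(size=5)  # illustrative stand-ins

n1, n2 = np.linalg.norm(tau1), np.linalg.norm(tau2)
cos_theta = tau1 @ tau2 / (n1 * n2)   # theta_{tau1,tau2} = theta_{tau2,tau1}

lhs = np.linalg.norm(tau1 - tau2) ** 2
rhs = n1 ** 2 - n1 * n2 * cos_theta + n2 ** 2 - n2 * n1 * cos_theta

assert np.isclose(lhs, rhs)  # ||t1-t2||^2 = ||t1||^2 - 2 t1.t2 + ||t2||^2
\end{verbatim}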
It follows that the discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}-\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right)
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system is minimized.

Thus, any given discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system: for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have similar covariance matrices and constant or unchanging statistics.

Therefore, a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks a point of statistical equilibrium where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

I\ will now show that the eigenenergy of a discrete, linear classification system is conserved and remains relatively constant, so that the eigenenergy and the corresponding expected risk of a discrete, linear classification system cannot be created or destroyed, but only transferred from one classification system to another.
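Before stating the law, it may help to see the objects of the theorem instantiated concretely. The sketch below adopts one possible reading, which is an assumption made only for illustration and not a claim of the theorem itself: the extreme points are taken to be the support vectors of a linear support vector machine fit to separable data, the scale factors $\psi_{k_{i\ast}}$ are taken to be its dual coefficients, and $\xi_{i}=0$ because the data are separable. Under this reading the equilibrium point $\sum\nolimits_{i}\psi_{1i\ast}-\sum\nolimits_{i}\psi_{2i\ast}=0$ is the solver's dual constraint, and $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ reproduces its weight vector:

\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Separable toy data (an assumption of this reading); class omega_1
# carries label +1 and class omega_2 carries label -1.
X = np.vstack([rng.normal([+2.0, +2.0], 0.5, (50, 2)),
               rng.normal([-2.0, -2.0], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # hard-margin-like, xi_i ~ 0

# dual_coef_ holds y_i * psi_i for each support vector (extreme point).
signed_psi = clf.dual_coef_.ravel()
psi1 = signed_psi[signed_psi > 0]            # omega_1 scale factors
psi2 = -signed_psi[signed_psi < 0]           # omega_2 scale factors

# Equilibrium point: sum_i psi_{1,i*} - sum_i psi_{2,i*} = 0.
assert np.isclose(psi1.sum(), psi2.sum(), rtol=1e-4)

# tau = sum psi_1 x_1* - sum psi_2 x_2* matches the solver's weight vector.
tau = signed_psi @ clf.support_vectors_
assert np.allclose(tau, clf.coef_.ravel())
\end{verbatim}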
\section*{Law of Conservation of Eigenenergy:}
\subsection*{For Discrete Linear Classification Systems}

Take a collection of $N$ random vectors $\mathbf{x}$ of dimension $d$ that are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics and similar covariance matrices, where the number of random vectors $\mathbf{x}\sim p\left( \mathbf{x}|\omega_{1}\right) $ equals the number of random vectors $\mathbf{x}\sim p\left( \mathbf{x}|\omega_{2}\right) $, and let $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a discrete, linear classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category, $\boldsymbol{\tau}$ is a locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\tau} & \triangleq\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\text{,}
\end{align*}
where $\mathbf{x}_{1_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, $\psi_{1_{i_{\ast}}}$ and $\psi_{2_{i_{\ast}}}$ are scale factors that provide unit measures of likelihood for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ which lie in either overlapping regions or tails regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and $\tau_{0}$ is a functional of $\boldsymbol{\tau}$
\[
\tau_{0}=\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}^{T}\boldsymbol{\tau}\text{,}
\]
where $\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}=\sum\nolimits_{i=1}^{l_{1}}\mathbf{x}_{1_{i_{\ast}}}+\sum\nolimits_{i=1}^{l_{2}}\mathbf{x}_{2_{i_{\ast}}}$ is a cluster of the data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ used to form $\boldsymbol{\tau}$, $y_{i}$ are class membership statistics: if $\mathbf{x}_{i\ast}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i\ast}\in\omega_{2}$, assign $y_{i}=-1$, and $\xi_{i}$ are regularization parameters: $\xi_{i}=\xi=0$ for full rank Gram matrices or $\xi_{i}=\xi\ll1$ for low rank Gram matrices.
The expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=0
\]
of the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are congruent decision regions: $Z_{1}\cong Z_{2}$, $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions: which are related to positions and potential locations of data points $\mathbf{x}_{1_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions: which are related to positions and potential locations of data points $\mathbf{x}_{2_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.
Furthermore, the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $ given class $\omega_{2}$
\[
\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\equiv\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\]
where
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\\
& \qquad+\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right) \text{.}
\end{align*}

The eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}=\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}$ is the state of a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that is associated with the position or location of a dual likelihood ratio
\begin{align*}
\boldsymbol{\psi} & \triangleq\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\text{,}
\end{align*}
which is constrained to be in statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\text{,}
\]
and the locus of a corresponding linear decision boundary $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0$.
Thus, any given discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system: for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have similar covariance matrices and constant or unchanging statistics.

The total eigenenergy of a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is found by adding up contributions from characteristics of the classification system: the eigenenergies $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ and $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ associated with the positions or locations of the parameter vectors of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) $ and $p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) $, where
\[
E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{1}\right) \right) =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}\text{ \ and \ }E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) |\omega_{2}\right) \right) =\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}\text{,}
\]
are related to eigenenergies associated with positions and potential locations of extreme points that lie in either overlapping regions or tails regions of statistical distributions related to the class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and the total eigenenergy $\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}$ satisfies the vector equations
\begin{align*}
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\tau}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{1}\right\Vert \left\Vert \boldsymbol{\tau}_{2}\right\Vert \cos\theta_{\boldsymbol{\tau}_{1}\boldsymbol{\tau}_{2}}\\
& \qquad+\left\Vert \boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\tau}_{2}\right\Vert \left\Vert \boldsymbol{\tau}_{1}\right\Vert \cos\theta_{\boldsymbol{\tau}_{2}\boldsymbol{\tau}_{1}}
\end{align*}
and
\[
\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right) \text{.}
\]

Any given discrete, linear classification system that is determined by a likelihood ratio test
\[
\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
where the class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics and similar covariance matrices, and the locus of a linear decision boundary
\[
D\left( \mathbf{x}\right) :\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}=0
\]
is governed by the locus of a dual likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) $ in statistical equilibrium
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
is a closed classification system.

Thus, the total eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\begin{align*}
E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) & \triangleq\left\Vert \boldsymbol{\tau}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right)
\end{align*}
of any given discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is conserved and remains relatively constant.

Therefore, the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ cannot be created or destroyed, but only transferred from one classification system to another.

It follows that the corresponding expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of a discrete, linear classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ cannot be created or destroyed, but only transferred from one classification system to another.

I\ will now identify the fundamental property which is common to each of the scaled extreme points on any given likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $ and linear decision boundary $D_{0}\left( \mathbf{x}\right) $ that is determined by a linear eigenlocus classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$.
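Before doing so, the conservation identity can be checked in the same hypothetical support-vector-machine reading used earlier, which is again an illustrative assumption rather than a consequence of the law itself: for a separable training set the regularization terms vanish, $\xi_{i}=0$, and the claimed total eigenenergy $\sum\nolimits_{i}\psi_{i}\left( 1-\xi_{i}\right) $ coincides with the Karush-Kuhn-Tucker identity $\left\Vert \boldsymbol{\tau}\right\Vert ^{2}=\sum\nolimits_{i}\psi_{i}$:

\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Separable placeholder data, as in the earlier sketch.
X = np.vstack([rng.normal([+2.0, +2.0], 0.5, (50, 2)),
               rng.normal([-2.0, -2.0], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # separable => xi_i ~ 0
tau = clf.coef_.ravel()
psi = np.abs(clf.dual_coef_.ravel())         # scale factors psi_{k,i*}

# Conservation: ||tau||^2 = sum_i psi_i (1 - xi_i) = sum_i psi_i here.
assert np.isclose(tau @ tau, psi.sum(), rtol=1e-2)
\end{verbatim}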
\subsubsection{Inherent Property of Eigen-scaled Extreme Points on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$}

Given that a linear eigenlocus $\boldsymbol{\tau}=\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is a locus of likelihoods that determines a likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $ and a locus of principal eigenaxis components that determines the coordinate system of a linear decision boundary $D_{0}\left( \mathbf{x}\right) $, it follows that the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}
\]
exhibited by each scaled extreme vector
\[
\psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\text{ \ or \ }\psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}
\]
on $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$, and the corresponding class-conditional risk
\[
\int_{Z_{2}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{1}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \text{ or }\int_{Z_{1}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{2}\left( \mathbf{x}_{2_{i_{\ast}}}\right)
\]
or class-conditional counter risk
\[
\int_{Z_{1}}p\left( \mathbf{x}_{1_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{1i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{1}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \text{ or }\int_{Z_{2}}p\left( \mathbf{x}_{2_{i_{\ast}}}|\operatorname{comp}_{\overrightarrow{\mathbf{x}_{2i\ast}}}\left( \overrightarrow{\boldsymbol{\tau}}\right) \right) d\boldsymbol{\tau}_{2}\left( \mathbf{x}_{2_{i_{\ast}}}\right)
\]
possessed by each extreme point $\mathbf{x}_{1_{i_{\ast}}}$ or $\mathbf{x}_{2_{i_{\ast}}}$, which are determined by $\left\Vert \psi_{1_{i_{\ast}}}\mathbf{x}_{1_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}$ or $\left\Vert \psi_{2_{i_{\ast}}}\mathbf{x}_{2_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}$, \emph{jointly satisfy} the fundamental linear eigenlocus integral equation of binary classification in Eq. (\ref{Linear Eigenlocus Integral Equation V}).

Thereby, it is concluded that the \emph{fundamental property} possessed by each of the scaled extreme points on a linear eigenlocus $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ is the \emph{total allowed eigenenergy} exhibited by a corresponding, scaled extreme vector.

I will now devise an expression for a linear eigenlocus that is a locus of discrete conditional probabilities.
\section{Linear Eigenlocus of Probabilities}

Write a linear eigenlocus $\boldsymbol{\tau}$ in terms of
\begin{align*}
\boldsymbol{\tau} & =\lambda_{\max_{\psi}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert ^{2}\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \\
& \qquad-\lambda_{\max_{\psi}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert ^{2}\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \text{,}
\end{align*}
where $\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{1_{i_{\ast}}}\right) $ and $\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{2_{i_{\ast}}}\right) $ denote the symmetrically balanced, signed magnitudes in Eqs (\ref{Unidirectional Scaling Term One1}) and (\ref{Unidirectional Scaling Term Two1}): the terms $\frac{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }$ and $\frac{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }$ have been introduced and rearranged.

Next, rewrite $\boldsymbol{\tau}$ in terms of total allowed eigenenergies
\begin{align}
\boldsymbol{\tau} & =\sum\nolimits_{i=1}^{l_{1}}\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) ^{\frac{1}{2}}\mathbf{x}_{1_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }\label{Probabilisitc Expression for Normal Eigenlocus}\\
& \qquad-\sum\nolimits_{i=1}^{l_{2}}\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) ^{\frac{1}{2}}\mathbf{x}_{2_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\text{,}\nonumber
\end{align}
where the conditional probability $\mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ of observing an $\mathbf{x}_{1_{i_{\ast}}}$ extreme point within a localized region $\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) $ of a decision space $Z=Z_{1}+Z_{2}$ is given by the expression
\[
\mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) =\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) ^{\frac{1}{2}}\mathbf{x}_{1_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where $\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \subset Z_{1}$ or $\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \subset Z_{2}$, and the conditional probability $\mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ of observing an $\mathbf{x}_{2_{i_{\ast}}}$ extreme point within a localized region $\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) $ of a decision space $Z$ is given by the expression
\[
\mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) =\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) ^{\frac{1}{2}}\mathbf{x}_{2_{i_{\ast}}}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where $\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \subset Z_{1}$ or $\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \subset Z_{2}$.

Now rewrite Eq. (\ref{Probabilisitc Expression for Normal Eigenlocus}) as a locus of discrete conditional probabilities
\begin{align}
\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2} & =\sum\nolimits_{i=1}^{l_{1}}\mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }\label{SDNE Conditional Likelihood Ratio}\\
& \qquad-\sum\nolimits_{i=1}^{l_{2}}\mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) \frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }\text{,}\nonumber
\end{align}
which provides discrete measures for conditional probabilities of classification errors $\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{1_{i_{\ast}}}|Z_{2}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ and $\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{2_{i_{\ast}}}|Z_{1}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ for $\mathbf{x}_{1_{i_{\ast}}}$ extreme points that lie in the $Z_{2}$ decision region and $\mathbf{x}_{2_{i_{\ast}}}$ extreme points that lie in the $Z_{1}$ decision region.

I\ will now use Eq. (\ref{SDNE Conditional Likelihood Ratio}) to devise a probabilistic expression for a linear eigenlocus discriminant function.

\subsection{A Probabilistic Expression for $\boldsymbol{\tau}$}

Returning to Eq. (\ref{Statistical Locus of Category Decision}), consider the estimate $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $ that an unknown pattern vector $\mathbf{x}$ is located within some particular region of $\mathbb{R}^{d}$
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}\mathbf{/}\left\Vert \boldsymbol{\tau}\right\Vert \\
& \qquad\mathbf{+}\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
based on the value of the decision locus $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ and the class membership statistic $\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, where $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\tau}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ denotes a signed magnitude $\left\Vert \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right\Vert \cos\theta$ along the axis of $\widehat{\boldsymbol{\tau}}$, $\theta$ is the angle between the vector $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ and $\widehat{\boldsymbol{\tau}}$, and $\widehat{\boldsymbol{\tau}}$ denotes the unit linear eigenlocus $\boldsymbol{\tau}\mathbf{/}\left\Vert \boldsymbol{\tau}\right\Vert $.

I will now demonstrate that the signed magnitude expression $\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}\mathbf{/}\left\Vert \boldsymbol{\tau}\right\Vert $ is a locus of discrete conditional probabilities.
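The signed magnitude in question is the ordinary scalar projection of $\mathbf{x}-\widehat{\mathbf{x}}_{i\ast}$ onto the axis of $\widehat{\boldsymbol{\tau}}$. A small fragment, using arbitrary placeholder vectors only to pin down the convention, makes the definition concrete:

\begin{verbatim}
import numpy as np

def comp(a: np.ndarray, b: np.ndarray) -> float:
    """Signed magnitude of a along the axis of b: ||a|| cos(theta)."""
    return float(a @ b / np.linalg.norm(b))

rng = np.random.default_rng(4)
# Arbitrary stand-ins for x, x_hat_{i*}, and tau (illustrative only).
x, x_hat, tau = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

d = x - x_hat
cos_theta = d @ tau / (np.linalg.norm(d) * np.linalg.norm(tau))
# comp_{tau_hat}(x - x_hat) equals ||x - x_hat|| cos(theta):
assert np.isclose(comp(d, tau), np.linalg.norm(d) * cos_theta)
\end{verbatim}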
Substitute the expression for $\boldsymbol{\tau}_{1}-\boldsymbol{\tau}_{2}$ in Eq. (\ref{SDNE Conditional Likelihood Ratio}) into the expression for the linear eigenlocus test in Eq. (\ref{NormalEigenlocusTestStatistic}), and denote the unit primal principal eigenaxis components $\frac{\mathbf{x}_{1_{i_{\ast}}}}{\left\Vert \mathbf{x}_{1_{i_{\ast}}}\right\Vert }$ and $\frac{\mathbf{x}_{2_{i_{\ast}}}}{\left\Vert \mathbf{x}_{2_{i_{\ast}}}\right\Vert }$ by $\widehat{\mathbf{x}}_{1_{i_{\ast}}}$ and $\widehat{\mathbf{x}}_{2_{i_{\ast}}}$. It follows that the probability that the unknown pattern vector $\mathbf{x}$ is located within a specific region of $\mathbb{R}^{d}$ is provided by the expression
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\sum\nolimits_{i=1}^{l_{1}}\left[ \left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\widehat{\mathbf{x}}_{1_{i_{\ast}}}\right] \mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \\
& \qquad-\sum\nolimits_{i=1}^{l_{2}}\left[ \left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\widehat{\mathbf{x}}_{2_{i_{\ast}}}\right] \mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) \\
& \qquad\mathbf{+}\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \text{,}
\end{align*}
where $\mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ and $\mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ provide discrete measures for conditional probabilities of classification error
\[
\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{1_{i_{\ast}}}|Z_{2}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \text{ and }\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{2_{i_{\ast}}}|Z_{1}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right)
\]
for $\mathbf{x}_{1_{i_{\ast}}}$ extreme points that lie in the $Z_{2}$ decision region and $\mathbf{x}_{2_{i_{\ast}}}$ extreme points that lie in the $Z_{1}$ decision region, or conditional probabilities of counter risk
\[
\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{1_{i_{\ast}}}|Z_{1}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \text{ and }\mathcal{P}_{\min_{e}}\left( \mathbf{x}_{2_{i_{\ast}}}|Z_{2}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right)
\]
for $\mathbf{x}_{1_{i_{\ast}}}$ extreme points that lie in the $Z_{1}$ decision region and $\mathbf{x}_{2_{i_{\ast}}}$ extreme points that lie in the $Z_{2}$ decision region.
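The algebra behind this decomposition can be verified directly. Reading the conditional probability $\mathcal{P}\left( \mathbf{x}_{k_{i\ast}}|\tilde{Z}\left( \mathbf{x}_{k_{i\ast}}\right) \right) $ as the coefficient $\psi_{k_{i\ast}}\left\Vert \mathbf{x}_{k_{i\ast}}\right\Vert $ that multiplies the unit extreme vector in Eq. (\ref{SDNE Conditional Likelihood Ratio}), the per-extreme-point contributions recombine exactly into the unnormalized signed magnitude term; the sketch below uses arbitrary placeholder vectors and scale factors:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
# Placeholder extreme vectors and nonnegative scale factors psi.
X1, X2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
psi1 = rng.uniform(0.1, 1.0, size=3)
psi2 = rng.uniform(0.1, 1.0, size=2)

tau = psi1 @ X1 - psi2 @ X2                     # tau = tau_1 - tau_2

# P(x_{k,i*}|Z~) read as psi_{k,i*} ||x_{k,i*}||, so P * x/||x|| = psi * x.
P1 = psi1 * np.linalg.norm(X1, axis=1)
P2 = psi2 * np.linalg.norm(X2, axis=1)
U1 = X1 / np.linalg.norm(X1, axis=1)[:, None]   # unit eigenaxis components
U2 = X2 / np.linalg.norm(X2, axis=1)[:, None]

x = rng.normal(size=4)                          # unknown pattern vector
x_hat = np.vstack([X1, X2]).sum(axis=0)         # cluster x_hat_{i*}

lhs = (x - x_hat) @ tau
rhs = P1 @ (U1 @ (x - x_hat)) - P2 @ (U2 @ (x - x_hat))
assert np.isclose(lhs, rhs)
\end{verbatim}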
The expression for $\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) $ above reduces to a locus of discrete conditional probabilities
\begin{align}
\Lambda_{\boldsymbol{\tau}}\left( \mathbf{x}\right) & =\sum\nolimits_{i=1}^{l_{1}}\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{1_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) \mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \label{Normal Eigenlocus Likelihood Ratio}\\
& \qquad-\sum\nolimits_{i=1}^{l_{2}}\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{2_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) \mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) \nonumber\\
& \qquad\mathbf{+}\frac{1}{\left\Vert \boldsymbol{\tau}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \text{,}\nonumber
\end{align}
so that the conditional probability $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ of finding the unknown pattern vector $\mathbf{x}$ within the localized region $\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) $ of the decision space $Z$ is determined by the likelihood statistic
\begin{equation}
\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) =\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{1_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) \mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) \text{,}\label{Probability Estimate One}
\end{equation}
and the conditional probability $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ of finding the unknown pattern vector $\mathbf{x}$ within the localized region $\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) $ of the decision space $Z$ is determined by the likelihood statistic
\begin{equation}
\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) =\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{2_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) \mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) \text{.}\label{Probability Estimate Two}
\end{equation}

Accordingly, the likelihood statistic $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ in Eq. (\ref{Probability Estimate One}) is proportional, according to the signed magnitude $\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{1_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ along the axis of $\widehat{\mathbf{x}}_{1_{i_{\ast}}}$, to the conditional probability $\mathcal{P}\left( \mathbf{x}_{1_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) \right) $ of finding the extreme point $\mathbf{x}_{1_{i_{\ast}}}$ within a localized region $\tilde{Z}\left( \mathbf{x}_{1_{i_{\ast}}}\right) $ of the decision space $Z$. Similarly, the likelihood statistic $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ in Eq. (\ref{Probability Estimate Two}) is proportional, according to the signed magnitude $\operatorname{comp}_{\overrightarrow{\widehat{\mathbf{x}}_{2_{i_{\ast}}}}}\left( \overrightarrow{\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) }\right) $ along the axis of $\widehat{\mathbf{x}}_{2_{i_{\ast}}}$, to the conditional probability $\mathcal{P}\left( \mathbf{x}_{2_{i_{\ast}}}|\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) \right) $ of finding the extreme point $\mathbf{x}_{2_{i_{\ast}}}$ within a localized region $\tilde{Z}\left( \mathbf{x}_{2_{i_{\ast}}}\right) $ of the decision space $Z$.

Thus, it is concluded that the signed magnitude expression $\left( \mathbf{x}-\widehat{\mathbf{x}}_{i\ast}\right) ^{T}\boldsymbol{\tau}\mathbf{/}\left\Vert \boldsymbol{\tau}\right\Vert $ in Eq. (\ref{Statistical Locus of Category Decision}) is a locus of discrete conditional probabilities that satisfies the linear eigenlocus integral equation of binary classification in Eq. (\ref{Linear Eigenlocus Integral Equation V}).

I\ will now devise a system of data-driven, locus equations that generates computer-implemented, optimal quadratic classification systems. The general trend of my arguments is similar to that of my arguments for linear eigenlocus transforms. My discoveries are based on useful relations between geometric locus methods, geometric methods in reproducing kernel Hilbert spaces, statistics, and the binary classification theorem that I\ have derived.

\section{Optimal Quadratic Classification Systems}

I will begin by outlining my design for computer-implemented, optimal quadratic classification systems. Such computer-implemented systems are scalable modules for optimal statistical pattern recognition systems, all of which are capable of performing a wide variety of statistical pattern recognition tasks, where any given $M$-class statistical pattern recognition system achieves the lowest possible error rate and has the best generalization performance for its $M$-class feature space.

\subsection*{Problem Formulation}

\begin{flushleft}
The formulation of a system of data-driven, locus equations that generates computer-implemented, optimal quadratic classification systems requires solving three fundamental problems:
\end{flushleft}

\paragraph{Problem $\mathbf{1}$}
\textit{Define the geometric figures in a quadratic classification system, where geometric figures involve points, vectors, line segments, angles, regions, and quadratic curves or surfaces.}

\paragraph{Problem $\mathbf{2}$}
\textit{Define the geometric and statistical properties exhibited by each of the geometric figures.}

\paragraph{Problem $\mathbf{3}$}
\textit{Define the forms of the data-driven, locus equations that determine the geometric figures.}

\subsection*{The Solution}

\begin{flushleft}
I\ have formulated a solution that answers all three problems. My solution is based on three ideas:
\end{flushleft}

\paragraph{Idea $\mathbf{1}$}
Devise \emph{a dual locus of data points} that determines quadratic decision boundaries \emph{and} likelihood ratios that achieve the lowest possible error rate.

\paragraph{Idea $\mathbf{2}$}
The dual locus of data points must \emph{determine} the \emph{coordinate system} of the \emph{quadratic decision boundary}.
\paragraph{Idea $\mathbf{3}$}
The dual locus of data points \emph{must satisfy discrete versions of the fundamental equations of binary classification for a classification system in statistical equilibrium.}

\subsection*{Key Elements of the Solution}

\begin{flushleft}
The essential elements of the solution are outlined below.
\end{flushleft}

\paragraph{Locus of Principal Eigenaxis Components}
Returning to Eqs (\ref{Vector Equation of a Conic}) and (\ref{Vector Equation of Circles and Spheres}), given that the vector components of a principal eigenaxis specify all forms of quadratic curves and surfaces, and all of the points on any given quadratic curve or surface explicitly and exclusively reference the principal eigenaxis of the quadratic locus, it follows that the principal eigenaxis of a quadratic decision boundary provides an elegant, statistical eigen-coordinate system for a quadratic classification system. Therefore, the dual locus of data points \emph{must} be a \emph{locus of principal eigenaxis components}.

\paragraph{Critical Minimum Eigenenergy Constraint}
Given Eqs (\ref{Characteristic Eigenenergy of Quadratic}) and (\ref{Characteristic Eigenenergy of Quadratic 2}), it follows that the principal eigenaxis of a quadratic decision boundary satisfies the quadratic decision boundary in terms of its eigenenergy. Accordingly, the principal eigenaxis of a quadratic decision boundary exhibits a characteristic eigenenergy that is unique for the quadratic decision boundary. Thereby, the \emph{important generalizations} for a quadratic decision boundary are \emph{determined by} the \emph{eigenenergy} exhibited by its \emph{principal eigenaxis}. Therefore, the locus of principal eigenaxis components \emph{must satisfy} a critical minimum, i.e., a total allowed, \emph{eigenenergy constraint}, such that the locus of principal eigenaxis components satisfies a \emph{quadratic decision boundary} in \emph{terms of} its critical minimum or total allowed \emph{eigenenergies}. Thus, the locus of principal eigenaxis components must satisfy the vector and equilibrium equations in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary}) and (\ref{Equilibrium Equation of Likelihood Ratio and Decision Boundary}), the integral equation in Eq. (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}), the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}), and the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}) \emph{in terms of its total allowed eigenenergies}.

\paragraph{Extreme Points}
In order for the locus of principal eigenaxis components to implement a likelihood ratio, the locus of principal eigenaxis components \emph{must} be \emph{formed by data points} that lie in either overlapping regions or tails regions of data distributions, thereby determining \emph{decision regions} based on forces associated with \emph{risks and counter risks} which are related to positions and potential locations of data points that lie in either overlapping regions or tails regions of data distributions, where an unknown portion of the data points are the \emph{source of decision errors}. Data points that lie in either overlapping regions or tails regions of data distributions will be called extreme points.
\paragraph{Parameter Vector of Class-conditional Densities}
Given that the locus of principal eigenaxis components must determine a likelihood ratio for binary classification, it follows that the locus of principal eigenaxis components \emph{must also} be a \emph{parameter vector} that \emph{provides an estimate of class-conditional density functions}. Given Eq. (\ref{Equilibrium Equation of Likelihood Ratio and Decision Boundary}), it also follows that the parameter vector must be in statistical equilibrium.

\paragraph{Locus of Reproducing Kernels}
Descriptions of quadratic decision boundaries involve first and second degree point coordinates and first and second degree vector components. Therefore, the dual locus of data points \emph{must} be a \emph{locus of reproducing kernels}, for which either second-order, polynomial reproducing kernels $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ or Gaussian reproducing kernels $k_{\mathbf{s}}=\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $ with $\gamma=0.01$ are deemed sufficient.

\paragraph{Minimum Conditional Risk Constraint}
Given Eqs (\ref{Equalizer Rule}) and (\ref{Balancing of Bayes' Risks and Counteracting Risks}), it follows that the parameter vector must satisfy a discrete version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) and the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}), where extreme data points that lie in decision regions involve forces associated with risks or counter risks, such that the parameter vector satisfies the quadratic decision boundary in terms of minimum risk. Thus, the dual locus of extreme data points must jointly satisfy a discrete version of the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) in terms of forces associated with risks and counter risks, which are related to positions and potential locations of extreme data points, and corresponding total allowed eigenenergies of principal eigenaxis components. Moreover, the forces associated with risks and counter risks related to positions and potential locations of extreme data points and the corresponding total allowed eigenenergies of principal eigenaxis components must jointly satisfy a discrete version of Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}) so that $\left( 1\right) $ all of the forces associated with the risks and counter risks that are related to positions and potential locations of extreme data points are effectively balanced with each other, and $\left( 2\right) $ the total allowed eigenenergies of the principal eigenaxis components are effectively balanced with each other.

I\ will call a dual locus of principal eigenaxis components formed by weighted reproducing kernels of extreme points that determines an estimate of class-conditional densities for extreme points a "quadratic eigenlocus." I\ will refer to the parameter vector that provides an estimate of class-conditional densities for extreme points as a "locus of likelihoods" or a "parameter vector of likelihoods." A quadratic eigenlocus, which is formed by a dual locus of principal eigenaxis components and likelihoods, is a data-driven likelihood ratio and decision boundary that determines computer-implemented, quadratic classification systems that minimize the expected risk.
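For concreteness, both families of reproducing kernels named above are straightforward to evaluate in code. The following Python sketch is purely illustrative (the function names are mine, not part of the locus equations); it computes the second-order polynomial kernel and the Gaussian kernel with $\gamma=0.01$.
\begin{verbatim}
import numpy as np

def polynomial_kernel(x, s):
    # Second-order, polynomial reproducing kernel (x^T s + 1)^2.
    return (np.dot(x, s) + 1.0) ** 2

def gaussian_kernel(x, s, gamma=0.01):
    # Gaussian reproducing kernel exp(-gamma * ||x - s||^2).
    return np.exp(-gamma * np.dot(x - s, x - s))
\end{verbatim}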
In this paper, the dual locus is based on second-order, polynomial reproducing kernels of extreme points, where each reproducing kernel replaces a directed, straight line segment of an extreme vector with a second-order, polynomial curve. All of the locus equations are readily generalized for Gaussian reproducing kernels. I\ will refer to second-order, polynomial reproducing kernels as reproducing kernels, where any given reproducing kernel is an extreme vector.

I will call the system of data-driven, mathematical laws that generates a quadratic eigenlocus a "quadratic eigenlocus transform." I\ will introduce the primal equation of a quadratic eigenlocus in the next section. I\ will begin the next section by defining important geometric and statistical properties exhibited by weighted reproducing kernels of extreme points on a quadratic eigenlocus. I\ will define these properties in terms of geometric and statistical criteria.

\subsection{Quadratic Eigenlocus Transforms}

A high-level description of quadratic eigenlocus transforms is outlined below. The description specifies essential geometric and statistical properties exhibited by weighted reproducing kernels of extreme points on a quadratic eigenlocus.

\textbf{Quadratic eigenlocus transforms generate a locus of weighted reproducing kernels of extreme points that is a dual locus of likelihoods and principal eigenaxis components, where each weight specifies a class membership statistic and conditional density for an extreme point, and each weight determines the magnitude and the total allowed eigenenergy of an extreme vector.}

\begin{flushleft}
\textbf{Quadratic eigenlocus transforms choose each weight in a manner which ensures that:}
\end{flushleft}

\paragraph{Criterion $\mathbf{1}$}
Each conditional density of an extreme point describes the central location (expected value) and the spread (covariance) of the extreme point.

\paragraph{Criterion $\mathbf{2}$}
The extreme points are distributed over the locus of likelihoods in a symmetrically balanced and well-proportioned manner.

\paragraph{Criterion $\mathbf{3}$}
The total allowed eigenenergy possessed by each weighted extreme vector specifies the probability of observing the extreme point within a localized region.

\paragraph{Criterion $\mathbf{4}$}
The total allowed eigenenergies of the weighted extreme vectors are symmetrically balanced with each other about the center of total allowed eigenenergy.

\paragraph{Criterion $\mathbf{5}$}
The forces associated with risks and counter risks related to the weighted extreme points are symmetrically balanced with each other about a center of minimum risk.

\paragraph{Criterion $\mathbf{6}$}
The locus of principal eigenaxis components formed by weighted extreme vectors partitions any given feature space into symmetrical decision regions which are symmetrically partitioned by a quadratic decision boundary.

\paragraph{Criterion $\mathbf{7}$}
The locus of principal eigenaxis components is the focus of a quadratic decision boundary.

\paragraph{Criterion $\mathbf{8}$}
The locus of principal eigenaxis components formed by weighted extreme vectors satisfies the quadratic decision boundary in terms of a critical minimum eigenenergy.

\paragraph{Criterion $\mathbf{9}$}
The locus of likelihoods formed by weighted reproducing kernels of extreme points satisfies the quadratic decision boundary in terms of a minimum probability of decision error.
\paragraph{Criterion $\mathbf{10}$}
For data distributions that have dissimilar covariance matrices, the forces associated with counter risks and risks, within each of the symmetrical decision regions, are balanced with each other. For data distributions that have similar covariance matrices, the forces associated with counter risks within each of the symmetrical decision regions are equal to each other, and the forces associated with risks within each of the symmetrical decision regions are equal to each other.

\paragraph{Criterion $\mathbf{11}$}
For data distributions that have dissimilar covariance matrices, the eigenenergies associated with counter risks and the eigenenergies associated with risks, within each of the symmetrical decision regions, are balanced with each other. For data distributions that have similar covariance matrices, the eigenenergies associated with counter risks within each of the symmetrical decision regions are equal to each other, and the eigenenergies associated with risks within each of the symmetrical decision regions are equal to each other.

I\ will devise a system of data-driven, locus equations that determines likelihood ratios and decision boundaries which satisfy all of the above criteria. Accordingly, quadratic eigenlocus transforms generate a dual locus of principal eigenaxis components and likelihoods that exhibits the statistical property of symmetrical balance which is illustrated in Fig. $\ref{Dual Statistical Balancing Feat}$. Quadratic eigenlocus transforms are generated by solving the inequality constrained optimization problem that is introduced next.

\subsection{Primal Problem of a Quadratic Eigenlocus}

Take any given collection of training data for a binary classification problem of the form
\[
\left( \mathbf{x}_{1},y_{1}\right) ,\ldots,\left( \mathbf{x}_{N},y_{N}\right) \in\mathbb{R}^{d}\times Y,\ Y=\left\{ \pm1\right\} \text{,}
\]
where feature vectors $\mathbf{x}$ from class $\omega_{1}$ and class $\omega_{2}$ are drawn from unknown, class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ and are identically distributed.

A quadratic eigenlocus $\boldsymbol{\kappa}$ is estimated by solving an inequality constrained optimization problem
\begin{align}
\min\Psi\left( \boldsymbol{\kappa}\right)  & =\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}/2+C/2\sum\nolimits_{i=1}^{N}\xi_{i}^{2}\text{,}\label{Primal Normal Eigenlocus Q}\\
\text{s.t. }y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right)  & \geq1-\xi_{i},\ \xi_{i}\geq0,\ i=1,...,N\text{,}\nonumber
\end{align}
where $\boldsymbol{\kappa}$ is a $d\times1$ constrained, primal quadratic eigenlocus which is a dual locus of likelihoods and principal eigenaxis components, $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ is a reproducing kernel for the point $\mathbf{x}_{i}$, $\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\kappa}$, $\kappa_{0}$ is a functional of $\boldsymbol{\kappa}$, $C$ and $\xi_{i}$ are regularization parameters, and $y_{i}$ are class membership statistics: if $\mathbf{x}_{i}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i}\in\omega_{2}$, assign $y_{i}=-1$.
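To make Eq. (\ref{Primal Normal Eigenlocus Q}) computationally concrete, the following Python sketch poses it as a generic constrained program. Since $\boldsymbol{\kappa}$ lives in the feature space induced by the reproducing kernel, the sketch assumes a kernel expansion $\boldsymbol{\kappa}=\sum_{j}a_{j}k_{\mathbf{x}_{j}}$, so that $\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}=\mathbf{a}^{T}\mathbf{G}\mathbf{a}$ and $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}=\left( \mathbf{G}\mathbf{a}\right) _{i}$ with Gram matrix $G_{ij}=\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$; the expansion, the variable names, and the use of a general-purpose solver are assumptions of the sketch, not part of the eigenlocus construction.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_primal(X, y, C=1.0):
    # X: (N, d) training patterns; y: (N,) labels in {+1, -1}.
    N = X.shape[0]
    G = (X @ X.T + 1.0) ** 2  # Gram matrix of second-order reproducing kernels

    def objective(z):
        a, xi = z[:N], z[N + 1:]
        return 0.5 * a @ G @ a + 0.5 * C * np.sum(xi ** 2)

    constraints = [
        # y_i((x^T x_i + 1)^2 kappa + kappa_0) >= 1 - xi_i
        {'type': 'ineq',
         'fun': lambda z: y * (G @ z[:N] + z[N]) - 1.0 + z[N + 1:]},
        # xi_i >= 0
        {'type': 'ineq', 'fun': lambda z: z[N + 1:]},
    ]
    z0 = np.zeros(2 * N + 1)  # unknowns: a (N), kappa_0 (1), xi (N)
    result = minimize(objective, z0, method='SLSQP', constraints=constraints)
    a, kappa_0 = result.x[:N], result.x[N]
    return a, kappa_0
\end{verbatim}
In practice the problem is solved through its dual, as developed next; the primal sketch is included only to fix ideas.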
Equation (\ref{Primal Normal Eigenlocus Q}) is the primal problem of a quadratic eigenlocus, where the following system of $N$ inequalities must be satisfied:
\[
y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) \geq1-\xi_{i},\ \xi_{i}\geq0,\ i=1,...,N\text{,}
\]
such that a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ satisfies a critical minimum eigenenergy constraint
\begin{equation}
\gamma\left( \boldsymbol{\kappa}\right) =\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}\label{Minimum Total Eigenenergy Primal Normal Eigenlocus Q}
\end{equation}
where $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ determines the minimum risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ of a quadratic classification system.

Solving the inequality constrained optimization problem in Eq. (\ref{Primal Normal Eigenlocus Q}) involves solving a dual optimization problem that determines the fundamental unknowns of Eq. (\ref{Primal Normal Eigenlocus Q}). Denote a Wolfe dual quadratic eigenlocus by $\boldsymbol{\psi}$ and the Lagrangian dual problem of $\boldsymbol{\psi}$ by $\max\Xi\left( \boldsymbol{\psi}\right) $. Let $\boldsymbol{\psi}$ be a Wolfe dual of $\boldsymbol{\kappa}$ such that proper and effective strong duality relationships exist between the algebraic systems of $\min\Psi\left( \boldsymbol{\kappa}\right) $ and $\max\Xi\left( \boldsymbol{\psi}\right) $. Thereby, let $\boldsymbol{\psi}$ be related to $\boldsymbol{\kappa}$ in a symmetrical manner that specifies the locations of the principal eigenaxis components on $\boldsymbol{\kappa}$.

\subsubsection{The Real Unknowns}

A constrained, primal quadratic eigenlocus is a dual locus of principal eigenaxis components and likelihoods formed by weighted reproducing kernels of extreme points, where each weight is specified by a class membership statistic and a scale factor. Each scale factor specifies a conditional density for a weighted extreme point on a locus of likelihoods, and each scale factor determines the magnitude and the eigenenergy of a weighted extreme vector on a locus of principal eigenaxis components. The main issue concerns how the scale factors are determined.

\subsubsection{The Fundamental Unknowns}

The fundamental unknowns are the scale factors of the principal eigenaxis components on a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$.

\subsection{Strong Dual Quadratic Eigenlocus Transforms}

For the problem of quadratic eigenlocus transforms, the Lagrange multipliers method introduces a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ of principal eigenaxis components, for which the Lagrange multipliers $\left\{ \psi_{i}\right\} _{i=1}^{N}$ are the magnitudes or lengths of a set of Wolfe dual principal eigenaxis components $\left\{ \psi_{i}\overrightarrow{\mathbf{e}}_{i}\right\} _{i=1}^{N}$, where $\left\{ \overrightarrow{\mathbf{e}}_{i}\right\} _{i=1}^{N}$ are non-orthogonal unit vectors, and finds extrema for the restriction of a primal quadratic eigenlocus $\boldsymbol{\kappa}$ to a Wolfe dual eigenspace. Accordingly, the fundamental unknowns associated with Eq. (\ref{Primal Normal Eigenlocus Q}) are the magnitudes or lengths of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$.

\subsubsection{Strong Duality}

Because Eq.
(\ref{Primal Normal Eigenlocus Q}) is a convex programming problem, the theorem for convex duality guarantees an equivalence and corresponding symmetry between a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ and its Wolfe dual $\boldsymbol{\psi}$ \citep{Nash1996,Luenberger2003}. Strong duality holds between the systems of locus equations denoted by $\min\Psi\left( \boldsymbol{\kappa}\right) $ and $\max\Xi\left( \boldsymbol{\psi}\right) $, so that the duality gap between the constrained primal and the Wolfe dual quadratic eigenlocus solution is zero \citep{Luenberger1969,Nash1996,Fletcher2000,Luenberger2003}.

The Lagrangian dual problem of a Wolfe dual quadratic eigenlocus will be derived by means of the Lagrangian equation that is introduced next.

\subsection{The Lagrangian of the Quadratic Eigenlocus}

The inequality constrained optimization problem in Eq. (\ref{Primal Normal Eigenlocus Q}) is solved by using Lagrange multipliers $\psi_{i}\geq0$ and the Lagrangian
\begin{align}
L_{\Psi\left( \boldsymbol{\kappa}\right) }\left( \boldsymbol{\kappa}\mathbf{,}\kappa_{0},\mathbf{\xi},\boldsymbol{\psi}\right)  & =\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}/2\label{Lagrangian Normal Eigenlocus Q}\\
& +C/2\sum\nolimits_{i=1}^{N}\xi_{i}^{2}\nonumber\\
& -\sum\nolimits_{i=1}^{N}\psi_{i}\nonumber\\
& \times\left\{ y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}\right\} \nonumber
\end{align}
which is minimized with respect to the primal variables $\boldsymbol{\kappa}$ and $\kappa_{0}$ and is maximized with respect to the dual variables $\psi_{i}$.

The Karush-Kuhn-Tucker (KKT) conditions on the Lagrangian $L_{\Psi\left( \boldsymbol{\kappa}\right) }$
\begin{equation}
\boldsymbol{\kappa}-\sum\nolimits_{i=1}^{N}\psi_{i}y_{i}\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}=0,\text{ \ }i=1,...,N\text{,}\label{KKTE1 Q}
\end{equation}
\begin{equation}
\sum\nolimits_{i=1}^{N}\psi_{i}y_{i}=0,\text{ \ }i=1,...,N\text{,}\label{KKTE2 Q}
\end{equation}
\begin{equation}
C\sum\nolimits_{i=1}^{N}\xi_{i}-\sum\nolimits_{i=1}^{N}\psi_{i}=0\text{,}\label{KKTE3 Q}
\end{equation}
\begin{equation}
\psi_{i}\geq0,\text{ \ }i=1,...,N\text{,}\label{KKTE4 Q}
\end{equation}
\begin{equation}
\psi_{i}\left[ y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}\right] \geq0,\ i=1,...,N\text{,}\label{KKTE5 Q}
\end{equation}
which can be found in \citep{Cortes1995,Burges1998,Cristianini2000,Scholkopf2002}, determine a system of data-driven, locus equations which are jointly satisfied by a constrained primal and a Wolfe dual quadratic eigenlocus. I will define the manner in which the KKT conditions determine geometric and statistical properties exhibited by weighted reproducing kernels of extreme points on a Wolfe dual $\boldsymbol{\psi}$ and a constrained primal $\boldsymbol{\kappa}$ quadratic eigenlocus. Thereby, I\ will demonstrate the manner in which the KKT conditions ensure that $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ jointly satisfy discrete and data-driven versions of the fundamental equations of binary classification for a classification system in statistical equilibrium.

The Lagrangian dual problem of a Wolfe dual quadratic eigenlocus is introduced next.

\subsection{Lagrangian Dual Problem of a Quadratic Eigenlocus}

The resulting expressions for a primal quadratic eigenlocus $\boldsymbol{\kappa}$ in Eq. (\ref{KKTE1 Q}) and a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ in Eq.
(\ref{KKTE2 Q}) are substituted into the Lagrangian functional $L_{\Psi\left( \boldsymbol{\kappa}\right) }$ of Eq. (\ref{Lagrangian Normal Eigenlocus Q}) and simplified. This produces the Lagrangian dual problem of a Wolfe dual quadratic eigenlocus: a quadratic programming problem
\begin{equation}
\max\Xi\left( \boldsymbol{\psi}\right) =\sum\nolimits_{i=1}^{N}\psi_{i}-\sum\nolimits_{i,j=1}^{N}\psi_{i}\psi_{j}y_{i}y_{j}\frac{\left[ \left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}+\delta_{ij}/C\right] }{2}\label{Wolfe Dual Normal Eigenlocus Q}
\end{equation}
which is subject to the algebraic constraints $\sum\nolimits_{i=1}^{N}y_{i}\psi_{i}=0$ and $\psi_{i}\geq0$, where $\delta_{ij}$ is the Kronecker $\delta$ defined as unity for $i=j$ and $0$ otherwise.

Equation (\ref{Wolfe Dual Normal Eigenlocus Q}) can be written in vector notation by letting $\mathbf{Q}\triangleq\epsilon\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$ and $\widetilde{\mathbf{X}}\triangleq\mathbf{D}_{y}\mathbf{X}$, where $\mathbf{D}_{y}$ is an $N\times N$ diagonal matrix of training labels $y_{i}$ and the $N\times d$ data matrix is
\[
\mathbf{X}=
\begin{pmatrix}
\left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}, & \left( \mathbf{x}^{T}\mathbf{x}_{2}+1\right) ^{2}, & \ldots, & \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}
\end{pmatrix}^{T}\text{.}
\]
This produces the matrix version of the Lagrangian dual problem of a primal quadratic eigenlocus within its Wolfe dual eigenspace
\begin{equation}
\max\Xi\left( \boldsymbol{\psi}\right) =\mathbf{1}^{T}\boldsymbol{\psi}-\frac{\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}}{2}\label{Vector Form Wolfe Dual Q}
\end{equation}
which is subject to the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$ \citep{Reeves2009}.

Given the theorem for convex duality, it follows that a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ is a dual locus of likelihoods and principal eigenaxis components $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) $, where $\boldsymbol{\psi}$ exhibits a total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ that is symmetrically related to the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$: $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\simeq\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$.

\subsection{Loci of Constrained Quadratic Forms}

The representation of a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ within its Wolfe dual eigenspace involves the eigensystem of the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}$ in Eq. (\ref{Vector Form Wolfe Dual Q}), where $\boldsymbol{\psi}$ is the principal eigenvector of $\mathbf{Q}$, such that $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$. I\ will demonstrate that Eq. (\ref{Vector Form Wolfe Dual Q}) determines a dual quadratic eigenlocus $\boldsymbol{\psi}$ which is in statistical equilibrium such that the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}$ are symmetrically balanced with each other about a center of total allowed eigenenergy.
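Equation (\ref{Vector Form Wolfe Dual Q}) is a standard quadratic program, so it can be handed to any QP or general-purpose constrained solver. The Python sketch below is an illustrative implementation (the variable names and the choice of solver are mine); it builds $\mathbf{Q}=\epsilon\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$ in kernel form, $Q_{ij}=y_{i}y_{j}\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}+\epsilon\delta_{ij}$ with $\epsilon=1/C$, and maximizes $\mathbf{1}^{T}\boldsymbol{\psi}-\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$ subject to $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_wolfe_dual(X, y, C=100.0):
    # Maximize 1^T psi - psi^T Q psi / 2  s.t.  psi^T y = 0, psi_i >= 0.
    N = X.shape[0]
    K = (X @ X.T + 1.0) ** 2                        # (x_i^T x_j + 1)^2
    Q = np.outer(y, y) * K + (1.0 / C) * np.eye(N)  # eps = 1/C on the diagonal

    def neg_dual(psi):                              # minimize the negated objective
        return 0.5 * psi @ Q @ psi - np.sum(psi)

    constraints = [{'type': 'eq', 'fun': lambda psi: psi @ y}]
    bounds = [(0.0, None)] * N                      # psi_i >= 0
    result = minimize(neg_dual, np.zeros(N), method='SLSQP',
                      bounds=bounds, constraints=constraints)
    return result.x                                 # the Wolfe dual eigenlocus psi
\end{verbatim}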
I will also demonstrate that the utility of the statistical balancing feat involves \emph{balancing} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\omega_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\omega_{2}\right) $ in the $Z_{1}$ decision region \emph{with} all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\omega_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\omega_{1}\right) $ in the $Z_{2}$ decision region, where the forces associated with risks and counter risks are related to positions and potential locations of extreme points, such that the eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ of a discrete, quadratic classification system are both minimized.

I\ will now use the KKT conditions in Eqs (\ref{KKTE1 Q}) and (\ref{KKTE4 Q}) to derive the locus equation of a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.

\subsection{The Constrained Primal Quadratic Eigenlocus}

Using the KKT\ conditions in Eqs (\ref{KKTE1 Q}) and (\ref{KKTE4 Q}), it follows that an estimate for $\boldsymbol{\kappa}$ satisfies the following locus equation:
\begin{equation}
\boldsymbol{\kappa}=\sum\nolimits_{i=1}^{N}y_{i}\psi_{i}\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\text{,}\label{Normal Eigenlocus Estimate Q}
\end{equation}
where the $y_{i}$ terms are class membership statistics (if $\mathbf{x}_{i}$ is a member of class $\omega_{1}$, assign $y_{i}=1$; otherwise, assign $y_{i}=-1$) and the magnitude $\psi_{i}$ of each principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ on $\boldsymbol{\psi}$ is greater than or equal to zero: $\psi_{i}\geq0$.

The KKT condition in Eq. (\ref{KKTE4 Q}) requires that the length $\psi_{i}$ of each principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ on $\boldsymbol{\psi}$ be greater than or equal to zero: $\psi_{i}\geq0$. Any principal eigenaxis component $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ which has zero length ($\psi_{i}=0$) satisfies the origin $P_{\mathbf{0}}=\begin{pmatrix}0, & 0, & \cdots, & 0\end{pmatrix}$ and is not on the Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$. It follows that the corresponding constrained, primal principal eigenaxis component $\psi_{i}\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ also has zero length ($\psi_{i}\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}=0$) and is not on the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.

Reproducing kernels $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ of data points $\mathbf{x}_{i}$ correlated with Wolfe dual principal eigenaxis components $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ that have non-zero magnitudes $\psi_{i}>0$ are termed extreme vectors. Accordingly, extreme vectors are unscaled, primal principal eigenaxis components on $\boldsymbol{\kappa}$. Recall that a set of extreme vectors specifies principal directions of large covariance for a given collection of training data. Thus, extreme vectors are discrete principal components that determine directions for which a given collection of training data is most variable or spread out. Therefore, the loci of a set of extreme vectors span a region of large covariance between two distributions of training data. See Fig. $\ref{Location Properties Extreme Data Points}$.
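In computational terms, the extreme vectors fall directly out of the dual solution: they are the training points whose Wolfe dual magnitudes are nonzero. A minimal sketch, assuming the illustrative \texttt{solve\_wolfe\_dual} helper above and a small numerical tolerance standing in for exact positivity:
\begin{verbatim}
import numpy as np

TOL = 1e-8                          # numerical stand-in for psi_i > 0

psi = solve_wolfe_dual(X, y)        # X, y: training patterns and labels
is_extreme = psi > TOL              # mask selecting the extreme points
X_ext, y_ext, psi_ext = X[is_extreme], y[is_extreme], psi[is_extreme]
print(f"{is_extreme.sum()} of {len(y)} training points are extreme points")
\end{verbatim}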
\subsection{Primal Quadratic Eigenlocus Components}

All of the principal eigenaxis components on a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ are labeled, scaled reproducing kernels of extreme points in $\mathbb{R}^{d}$. Denote the labeled, scaled extreme vectors that belong to class $\omega_{1}$ and $\omega_{2}$ by $\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ and $-\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$, with scale factors $\psi_{1_{i\ast}}$ and $\psi_{2_{i\ast}}$, extreme vectors $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ and $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$, and labels $y_{i}=1$ and $y_{i}=-1$ respectively. Let there be $l_{1}$ labeled, scaled reproducing kernels $\left\{ \psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{1}}$ and $l_{2}$ labeled, scaled reproducing kernels $\left\{ -\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{2}}$.

Given Eq. (\ref{Normal Eigenlocus Estimate Q}) and the assumptions outlined above, it follows that an estimate for a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ is based on the vector difference between a pair of constrained, primal quadratic eigenlocus components
\begin{align}
\boldsymbol{\kappa} & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\label{Pair of Normal Eigenlocus Components Q}\\
& =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\text{,}\nonumber
\end{align}
where the constrained, primal quadratic eigenlocus components
\[
\boldsymbol{\kappa}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\text{ and }\boldsymbol{\kappa}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}
\]
are denoted by $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ respectively. The scaled reproducing kernels on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ determine the loci of $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ and therefore determine the dual locus of $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$. Figure $\ref{Primal Quadratic Eigenlocus in Wolfe Dual Eigenspace}$ depicts how the configurations of $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ determine the configuration of $\boldsymbol{\kappa}$.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure32.png}}
\caption{$\left( a\right) $ A constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ is determined by the vector difference $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ between a pair of constrained, primal quadratic eigenlocus components $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$. $\left( b\right) $ The scaled extreme points on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are endpoints of scaled extreme vectors that possess unchanged directions and eigen-balanced lengths.}
\label{Primal Quadratic Eigenlocus in Wolfe Dual Eigenspace}
\end{figure}

I\ will now define values for the regularization parameters $C$ and $\xi_{i}$ in Eqs (\ref{Primal Normal Eigenlocus Q}) and (\ref{Vector Form Wolfe Dual Q}).
\subsection{Weak Dual Quadratic Eigenlocus Transforms}

The number and the locations of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are considerably affected by the rank and eigenspectrum of $\mathbf{Q}$. Low rank kernel matrices $\mathbf{Q}$ generate "weak dual" quadratic eigenlocus transforms that produce irregular, quadratic partitions of decision spaces. Given non-overlapping data distributions and low rank kernel matrices $\mathbf{Q}$, weak dual quadratic eigenlocus transforms produce asymmetric, quadratic partitions that exhibit optimal generalization performance at the expense of unnecessary principal eigenaxis components, where \emph{all} of the training data are transformed into constrained, primal principal eigenaxis components. For overlapping data distributions, incomplete eigenspectra of low rank kernel matrices $\mathbf{Q}$ result in weak dual quadratic eigenlocus transforms which determine ill-formed, quadratic decision boundaries that exhibit substandard generalization performance. All of these problems are solved by the regularization method that is described next.

\subsubsection{Regularization of Quadratic Eigenlocus Transforms}

For any collection of $N$ training vectors of dimension $d$, where $d<N$, the kernel matrix $\mathbf{Q}$ has low rank. The results for low rank Gram matrices in \citet{Reeves2011} are readily extended to kernel matrices. Accordingly, the regularized form of $\mathbf{Q}$, for which $\epsilon\ll1$ and $\mathbf{Q}\triangleq\epsilon\mathbf{I}+\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$, ensures that $\mathbf{Q}$ has full rank and a complete eigenvector set, so that $\mathbf{Q}$ has a complete eigenspectrum. The regularization constant $C$ is related to the regularization parameter $\epsilon$ by $\epsilon=\frac{1}{C}$.

For $N$ training vectors of dimension $d$, where $d<N$, all of the regularization parameters $\left\{ \xi_{i}\right\} _{i=1}^{N}$ in Eq. (\ref{Primal Normal Eigenlocus Q}), and in all equations derived from it, are set equal to a very small value: $\xi_{i}=\xi\ll1$. The regularization constant $C$ is set equal to $\frac{1}{\xi}$: $C=\frac{1}{\xi}$.

For $N$ training vectors of dimension $d$, where $N<d$, all of the regularization parameters $\left\{ \xi_{i}\right\} _{i=1}^{N}$ in Eq. (\ref{Primal Normal Eigenlocus Q}), and in all equations derived from it, are set equal to zero: $\xi_{i}=\xi=0$. The regularization constant $C$ is set equal to infinity: $C=\infty$.

In the next section, I will devise locus equations that determine the manner in which a constrained, primal quadratic eigenlocus partitions any given feature space into symmetrical decision regions.

\section{Equations of a Quadratic Discriminant Function}

A constrained, primal quadratic eigenlocus is the primary basis of a quadratic discriminant function that implements an optimal likelihood ratio test. The manner in which the dual locus of $\boldsymbol{\kappa}$ partitions a feature space is specified by the KKT condition in Eq. (\ref{KKTE5 Q}) and the KKT condition of complementary slackness.
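Returning briefly to the regularization rules above, they can be collected in a small helper before proceeding. The sketch below is an illustrative reading of those rules ($\xi\ll1$ and $C=1/\xi$ when $d<N$; $\xi=0$ and $C=\infty$ when $N<d$, with $\epsilon=1/C$ regularizing $\mathbf{Q}$); the particular small value assigned to $\xi$ is an assumption.
\begin{verbatim}
import numpy as np

def regularization_parameters(N, d, xi_small=1e-2):
    # Returns (xi, C, eps) per the rules above; eps = 1/C regularizes Q.
    if d < N:            # low rank kernel matrix: regularization is required
        xi = xi_small    # xi_i = xi << 1
        C = 1.0 / xi     # C = 1/xi
    else:                # N < d: no regularization
        xi = 0.0
        C = np.inf
    eps = 1.0 / C        # eps << 1, and eps = 0 when C = infinity
    return xi, C, eps
\end{verbatim}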
\subsection{KKT Condition of Complementary Slackness}

The KKT condition of complementary slackness requires that, for all constraints that are not active, i.e., for all locus equations that are not satisfied as equalities because
\[
y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}>0\text{,}
\]
the corresponding magnitudes $\psi_{i}$ of the Wolfe dual principal eigenaxis components $\psi_{i}\overrightarrow{\mathbf{e}}_{i}$ must be zero: $\psi_{i}=0$ \citep{Sundaram1996}. Accordingly, if one inequality of a complementary pair is slack (not satisfied as an equality), the other one cannot be slack.

Therefore, let there be $l$ active constraints, where $l=l_{1}+l_{2}$. Let $\xi_{i}=\xi=0$ or $\xi_{i}=\xi\ll1$. The theorem of Karush, Kuhn, and Tucker provides the guarantee that a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ exists such that the following constraints are satisfied:
\[
\left\{ \psi_{i\ast}>0\right\} _{i=1}^{l}\text{,}
\]
and the following locus equations are satisfied:
\[
\psi_{i\ast}\left[ y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}\right] =0,\ i=1,...,l\text{,}
\]
where $l$ Wolfe dual principal eigenaxis components $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}$ have non-zero magnitudes $\left\{ \psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}|\psi_{i\ast}>0\right\} _{i=1}^{l}$ \citep{Sundaram1996}. The above condition is known as the condition of complementary slackness.

So, in order for the constraint $\psi_{i\ast}>0$ to hold, the following locus equation must be satisfied:
\[
y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}=0\text{.}
\]
Accordingly, let there be $l_{1}$ locus equations
\[
\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}+\xi_{i}=1,\ i=1,...,l_{1}\text{,}
\]
where $y_{i}=+1$, and $l_{2}$ locus equations
\[
\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}-\xi_{i}=-1,\ i=1,...,l_{2}\text{,}
\]
where $y_{i}=-1$.

It follows that the quadratic discriminant function
\begin{equation}
D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\label{Discriminant Function Q}
\end{equation}
satisfies the set of constraints
\[
D_{0}\left( \mathbf{s}\right) =0\text{, }D_{+1}\left( \mathbf{s}\right) =+1\text{, and }D_{-1}\left( \mathbf{s}\right) =-1\text{,}
\]
where $D_{0}\left( \mathbf{s}\right) $ denotes a quadratic decision boundary, $D_{+1}\left( \mathbf{s}\right) $ denotes a quadratic decision border for the $Z_{1}$ decision region, and $D_{-1}\left( \mathbf{s}\right) $ denotes a quadratic decision border for the $Z_{2}$ decision region.

I will now show that the constraints on the quadratic discriminant function $D\left( \mathbf{s}\right) $ determine three equations of symmetrical, quadratic partitioning curves or surfaces, where all of the points on all three quadratic loci reference the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Returning to Eq.
(\ref{Normal Form Second-order Locus}), recall that the equation of a quadratic locus can be written as
\[
\frac{2\mathbf{x}^{T}\boldsymbol{\nu}+\left( e^{2}\cos^{2}\theta-1\right) \left\Vert \mathbf{x}\right\Vert ^{2}}{\left\Vert \boldsymbol{\nu}\right\Vert }=\left\Vert \boldsymbol{\nu}\right\Vert \text{,}
\]
where $e$ is the eccentricity, $\theta$ is the angle between $\mathbf{x}$ and $\boldsymbol{\nu}$, and the principal eigenaxis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert $ has length $1$ and points in the direction of a principal eigenvector $\boldsymbol{\nu}$. Any point $\mathbf{x}$ that satisfies the above equation is on the quadratic locus of points specified by $\boldsymbol{\nu}$, where all of the points $\mathbf{x}$ on the quadratic locus exclusively reference the principal eigenaxis $\boldsymbol{\nu}$.

Returning to Eq. (\ref{Normal Form Circle and Sphere}), recall that the equation of a spherically symmetric, quadratic locus can be written as
\[
\frac{2\left( \mathbf{x}-\mathbf{r}\right) ^{T}\boldsymbol{\nu}}{\left\Vert \boldsymbol{\nu}\right\Vert }=\left\Vert \boldsymbol{\nu}\right\Vert \text{,}
\]
where $\mathbf{r}$ is the radius of a spherically symmetric, quadratic locus, and the principal eigenaxis $\boldsymbol{\nu}/\left\Vert \boldsymbol{\nu}\right\Vert $ has length $1$ and points in the direction of a principal eigenvector $\boldsymbol{\nu}$. Any point $\mathbf{x}$ that satisfies the above equation is on the spherically symmetric, quadratic locus of points specified by $\boldsymbol{\nu}$, where all of the points $\mathbf{x}$ on the spherically symmetric, quadratic locus exclusively reference the principal eigenaxis $\boldsymbol{\nu}$.

I\ will now use Eqs (\ref{Normal Form Second-order Locus}) and (\ref{Normal Form Circle and Sphere}) along with the constraints on the quadratic discriminant function in Eq. (\ref{Discriminant Function Q}) to devise locus equations that determine the manner in which a constrained, primal quadratic eigenlocus partitions any given feature space into symmetrical decision regions.

\subsection{Quadratic Partitions of Feature Spaces}

I\ will now derive the locus equation of a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $.

\subsubsection{Equation of a Quadratic Decision Boundary $D_{0}\left( \mathbf{s}\right) $}

Using Eqs (\ref{Normal Form Second-order Locus}) and (\ref{Normal Form Circle and Sphere}), along with the assumption that $D\left( \mathbf{s}\right) =0$, it follows that the quadratic discriminant function
\[
D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}
\]
can be rewritten as
\begin{equation}
\frac{\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}}{\left\Vert \boldsymbol{\kappa}\right\Vert }=-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }\text{.}\label{Decision Boundary Q}
\end{equation}
Therefore, any point $\mathbf{s}$ that satisfies Eq. (\ref{Decision Boundary Q}) is on the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $, and all of the points $\mathbf{s}$ on the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ exclusively reference the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Thereby, the constrained, quadratic discriminant function $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ satisfies the boundary value of a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $: $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=0$.
I will now derive the locus equation of the $D_{+1}\left( \mathbf{s}\right) $ quadratic decision border.

\subsubsection{Equation of the $D_{+1}\left( \mathbf{s}\right) $ Decision Border}

Using Eqs (\ref{Normal Form Second-order Locus}) and (\ref{Normal Form Circle and Sphere}), along with the assumption that $D\left( \mathbf{s}\right) =1$, it follows that the quadratic discriminant function in Eq. (\ref{Discriminant Function Q}) can be rewritten as
\begin{equation}
\frac{\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}}{\left\Vert \boldsymbol{\kappa}\right\Vert }=-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\text{.}\label{Decision Border One Q}
\end{equation}
Therefore, any point $\mathbf{s}$ that satisfies Eq. (\ref{Decision Border One Q}) is on the quadratic decision border $D_{+1}\left( \mathbf{s}\right) $, and all of the points $\mathbf{s}$ on the quadratic decision border $D_{+1}\left( \mathbf{s}\right) $ exclusively reference the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Thereby, the constrained, quadratic discriminant function $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ satisfies the boundary value of a quadratic decision border $D_{+1}\left( \mathbf{s}\right) $: $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=1$.

I will now derive the locus equation of the $D_{-1}\left( \mathbf{s}\right) $ quadratic decision border.

\subsubsection{Equation of the $D_{-1}\left( \mathbf{s}\right) $ Decision Border}

Using Eqs (\ref{Normal Form Second-order Locus}) and (\ref{Normal Form Circle and Sphere}), along with the assumption that $D\left( \mathbf{s}\right) =-1$, it follows that the quadratic discriminant function in Eq. (\ref{Discriminant Function Q}) can be rewritten as
\begin{equation}
\frac{\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}}{\left\Vert \boldsymbol{\kappa}\right\Vert }=-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\text{.}\label{Decision Border Two Q}
\end{equation}
Therefore, any point $\mathbf{s}$ that satisfies Eq. (\ref{Decision Border Two Q}) is on the quadratic decision border $D_{-1}\left( \mathbf{s}\right) $, and all of the points $\mathbf{s}$ on the quadratic decision border $D_{-1}\left( \mathbf{s}\right) $ exclusively reference the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Thereby, the constrained, quadratic discriminant function $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ satisfies the boundary value of a quadratic decision border $D_{-1}\left( \mathbf{s}\right) $: $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=-1$.

Given Eqs (\ref{Decision Boundary Q}), (\ref{Decision Border One Q}), and (\ref{Decision Border Two Q}), it is concluded that the constrained, quadratic discriminant function $D\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ determines three, symmetrical, quadratic curves or surfaces, where all of the points on $D_{0}\left( \mathbf{s}\right) $, $D_{+1}\left( \mathbf{s}\right) $, and $D_{-1}\left( \mathbf{s}\right) $ exclusively reference the constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.
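Since $D_{0}\left( \mathbf{s}\right) $, $D_{+1}\left( \mathbf{s}\right) $, and $D_{-1}\left( \mathbf{s}\right) $ are level sets of one discriminant function, they can be rendered by contouring $D\left( \mathbf{s}\right) $ at the levels $0$, $+1$, and $-1$. The Python sketch below is a visualization aid, not part of the locus equations; it assumes the kernel-expansion quantities from the earlier sketches (training patterns \texttt{X}, labels \texttt{y}, dual magnitudes \texttt{psi}, and a bias \texttt{kappa\_0}, whose estimate is derived in the sequel) and evaluates $D\left( \mathbf{s}\right) =\sum_{i}\psi_{i}y_{i}\left( \mathbf{x}_{i}^{T}\mathbf{s}+1\right) ^{2}+\kappa_{0}$ on a two-dimensional grid.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def discriminant(S, X, y, psi, kappa_0):
    # D(s) = sum_i psi_i y_i (x_i^T s + 1)^2 + kappa_0, for each row s of S.
    K = (S @ X.T + 1.0) ** 2
    return K @ (psi * y) + kappa_0

u = np.linspace(-4.0, 4.0, 200)
gx, gy = np.meshgrid(u, u)
grid = np.column_stack([gx.ravel(), gy.ravel()])
D = discriminant(grid, X, y, psi, kappa_0).reshape(gx.shape)
# Decision borders at -1 and +1, decision boundary at 0.
plt.contour(gx, gy, D, levels=[-1.0, 0.0, 1.0])
plt.show()
\end{verbatim}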
Moreover, it is concluded that the constrained, quadratic discriminant function $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ satisfies boundary values for a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ and two quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $.

The quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ in Eqs (\ref{Decision Border One Q}) and (\ref{Decision Border Two Q}) satisfy the symmetrically balanced constraints
\[
-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }+\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\text{ and }-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }-\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }
\]
with respect to the constraint satisfied by the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $
\[
-\frac{\kappa_{0}}{\left\Vert \boldsymbol{\kappa}\right\Vert }
\]
so that a constrained, quadratic discriminant function
\[
D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}
\]
delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ that are symmetrically partitioned by the quadratic decision boundary in Eq. (\ref{Decision Boundary Q}).

\subsection{Eigenaxis of Symmetry}

It has been shown that a constrained, quadratic discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ determines three, symmetrical quadratic partitioning curves or surfaces, where all of the points on a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ and quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ exclusively reference a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Thereby, $\boldsymbol{\kappa}$ is an eigenaxis of symmetry which delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ that are symmetrically partitioned by a quadratic decision boundary, where the span of both decision regions is regulated by the constraints in Eqs (\ref{Decision Boundary Q}), (\ref{Decision Border One Q}), and (\ref{Decision Border Two Q}).

\subsubsection{Illustrations of Eigenaxes of Symmetry}

Figures $\ref{Axis of Symmetry One}$, $\ref{Axis of Symmetry Two}$, and $\ref{Axis of Symmetry Three}$ show that $\boldsymbol{\kappa}$ is an eigenaxis of symmetry which delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ that are symmetrically partitioned by a quadratic decision boundary. The examples have been produced by simulation case studies for Gaussian data in MATLAB.

\paragraph{Eigenaxis of Symmetry for Parabolic Decision Boundary}
Figure $\ref{Axis of Symmetry One}$ illustrates a case where $\boldsymbol{\kappa}$ is an eigenaxis of symmetry which delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ determined by parabolic decision borders that are symmetrically partitioned by a parabolic decision boundary.
Accordingly, a constrained, quadratic discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the boundary value of a parabolic decision boundary and the boundary values of two parabolic decision borders.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.813in]{Figure33.png}}
\caption{Simulation example where a constrained, quadratic eigenlocus discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ describes three, symmetrical parabolic partitioning curves or surfaces, where all of the points on a parabolic decision boundary $D_{0}\left( \mathbf{s}\right) $ and parabolic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ exclusively reference a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.}
\label{Axis of Symmetry One}
\end{figure}

\paragraph{Eigenaxis of Symmetry for Hyperbolic Decision Boundary}
Figure $\ref{Axis of Symmetry Two}$ illustrates a case where $\boldsymbol{\kappa}$ is an eigenaxis of symmetry which delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ determined by hyperbolic decision borders that are symmetrically partitioned by a hyperbolic decision boundary. Accordingly, a constrained, quadratic discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the boundary value of a hyperbolic decision boundary and the boundary values of two hyperbolic decision borders.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5867in,width=3.8865in]{Figure34.png}}
\caption{Simulation example where a constrained, quadratic eigenlocus discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ describes three, symmetrical hyperbolic partitioning curves or surfaces, where all of the points on a hyperbolic decision boundary $D_{0}\left( \mathbf{s}\right) $ and hyperbolic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ exclusively reference a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.}
\label{Axis of Symmetry Two}
\end{figure}

\paragraph{Eigenaxis of Symmetry for Parabolic Decision Boundary}
Figure $\ref{Axis of Symmetry Three}$ illustrates another case where $\boldsymbol{\kappa}$ is an eigenaxis of symmetry which delineates symmetrical decision regions $Z_{1}\simeq Z_{2}$ determined by parabolic decision borders that are symmetrically partitioned by a parabolic decision boundary.
Accordingly, a constrained, quadratic discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the boundary value of a parabolic decision boundary and the boundary values of two parabolic decision borders.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=4.1485in]{Figure33.png}}
\caption{Simulation example where a constrained, quadratic eigenlocus discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ describes three, symmetrical parabolic partitioning curves or surfaces, where all of the points on a parabolic decision boundary $D_{0}\left( \mathbf{s}\right) $ and parabolic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ exclusively reference a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$.}
\label{Axis of Symmetry Three}
\end{figure}

\subsubsection{New Notation and Terminology}

I\ will show that the \emph{constrained}, quadratic eigenlocus discriminant function $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ determines a discrete, quadratic \emph{classification system} $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is the \emph{likelihood ratio} of the classification system.

Define the \emph{primal focus} of the quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to be an \emph{equilibrium point} that defines quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ located at symmetrically constrained distances from a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $.

Therefore, let $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ denote a quadratic eigenlocus discriminant function and let $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ denote the likelihood ratio of the quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, which is a likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. The likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is said to be the primal \emph{focus} of the quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$.

\subsection{The Quadratic Eigenlocus Test}

I\ will now derive a statistic for the $\kappa_{0}$ term in Eq. (\ref{Discriminant Function Q}). I\ will use the statistic to derive a likelihood statistic that is the basis of a quadratic eigenlocus decision rule.

\subsubsection{Locus Equation for the $\kappa_{0}$ Term}

Using the KKT condition in Eq.
(\ref{KKTE5 Q}) and the KKT condition of complementary slackness, it follows that the following set of locus equations must be satisfied:
\[
y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}=0,\ i=1,...,l\text{,}
\]
such that an estimate for $\kappa_{0}$ satisfies the locus equation
\begin{align}
\kappa_{0} & =\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}\label{Normal Eigenlocus Projection Factor Q}\\
& =\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\left( \sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}\text{.}\nonumber
\end{align}

I\ will now use the statistic for $\kappa_{0}$ to derive a vector expression for a quadratic eigenlocus test that is used to classify unknown pattern vectors. Substitution of the statistic for $\kappa_{0}$ in Eq. (\ref{Normal Eigenlocus Projection Factor Q}) into the expression for the quadratic discriminant function $D\left( \mathbf{s}\right) $ in Eq. (\ref{Discriminant Function Q}) provides a quadratic eigenlocus test $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{H_{1}}{\underset{H_{2}}{\gtrless}}0$ for classifying an unknown pattern vector $\mathbf{s}$:
\begin{align}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\right) \boldsymbol{\kappa}-\sum\nolimits_{i=1}^{l}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}\label{NormalEigenlocusTestStatistic Q}\\
& \mathbf{+}\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\nonumber\\
& =\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}\nonumber\\
& \mathbf{+}\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}\nonumber
\end{align}
where the statistic $\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}$ is the locus of an aggregate or cluster of a set of $l$ extreme points, and the statistic $\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $ accounts for the class memberships of the primal principal eigenaxis components on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$.

\subsubsection{Locus of Aggregated Risk $\protect\widehat{\mathfrak{R}}$}

The cluster $\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}$ of a set of extreme points represents the aggregated risk $\widehat{\mathfrak{R}}$ for a decision space $Z$. Accordingly, the vector transform
\[
\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}
\]
accounts for the distance between the unknown vector $\mathbf{s}$ and the locus of aggregated risk $\widehat{\mathfrak{R}}$. Let $\widehat{\mathbf{x}}_{i\ast}\triangleq\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}$.

\subsubsection{Quadratic Decision Locus}

Denote a unit quadratic eigenlocus $\boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $ by $\widehat{\boldsymbol{\kappa}}$. Substituting $\widehat{\boldsymbol{\kappa}}=\boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $ for $\boldsymbol{\kappa}$ in Eq.
(\ref{NormalEigenlocusTestStatistic Q}) provides an expression for a decision locus
\begin{align}
\widehat{D}\left( \mathbf{s}\right)  & =\left( k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert \label{Statistical Locus of Category Decision Q}\\
& \mathbf{+}\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \nonumber
\end{align}
which is determined by the scalar projection of $k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}$ onto $\widehat{\boldsymbol{\kappa}}$. Accordingly, the component of $k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}$ along $\widehat{\boldsymbol{\kappa}}$ specifies a signed magnitude $\left\Vert k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right\Vert \cos\theta$ along the axis of $\widehat{\boldsymbol{\kappa}}$, where $\theta$ is the angle between the transformed vector $k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}$ and $\widehat{\boldsymbol{\kappa}}$.

It follows that the component $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ of the vector transform $k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}$ of an unknown pattern vector $k_{\mathbf{s}}$ along the axis of a unit quadratic eigenlocus $\widehat{\boldsymbol{\kappa}}$
\[
P_{\widehat{D}\left( \mathbf{s}\right) }=\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) =\left\Vert k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right\Vert \cos\theta
\]
specifies a locus $P_{\widehat{D}\left( \mathbf{s}\right) }$ of a category decision, where $P_{\widehat{D}\left( \mathbf{s}\right) }$ is at a distance of $\left\Vert k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right\Vert \cos\theta$ from the origin, along the axis of a quadratic eigenlocus $\boldsymbol{\kappa}$.

Accordingly, the quadratic discriminant function $D\left( \mathbf{s}\right) $ in Eq. (\ref{Discriminant Function Q}) makes a decision based on the value of the decision locus $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ and the class membership statistic $\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $. Figure $\ref{Statistical Decision Locus Q}$ depicts a decision locus generated by the quadratic discriminant function $\widehat{D}\left( \mathbf{s}\right) $ in Eq. (\ref{Statistical Locus of Category Decision Q}).
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure36.png}}
\caption{A statistical decision locus $P_{\protect\widehat{D}\left( \mathbf{s}\right) }$ for an unknown, transformed pattern vector $k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}$ that is projected onto $\boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $.}
\label{Statistical Decision Locus Q}
\end{figure}

\subsubsection{Quadratic Decision Threshold}

Returning to Eq. (\ref{General Form of Decision Function II}), recall that an optimal decision function computes the likelihood ratio $\Lambda\left( \mathbf{x}\right) $ for a feature vector $\mathbf{x}$ and makes a decision by comparing the ratio $\Lambda\left( \mathbf{x}\right) $ to the threshold $\eta=0$.
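In code, the test of Eq. (\ref{NormalEigenlocusTestStatistic Q}) reduces to evaluating the discriminant in its kernel expansion and comparing to the zero threshold. The sketch below is illustrative, assuming the extreme-point quantities \texttt{X\_ext}, \texttt{y\_ext}, and \texttt{psi\_ext} from the earlier sketches, together with slack values \texttt{xi} for the extreme points (zero or a small $\xi$, per the regularization rules).
\begin{verbatim}
import numpy as np

def classify(s, X_ext, y_ext, psi_ext, xi):
    # omega_1 (return +1) if Lambda(s) > 0, omega_2 (return -1) otherwise.
    k_s = (X_ext @ s + 1.0) ** 2                          # kernels of the unknown s
    k_agg = np.sum((X_ext @ X_ext.T + 1.0) ** 2, axis=0)  # aggregated extreme-point locus
    delta_y = np.sum(y_ext * (1.0 - xi))                  # class membership statistic
    Lam = (k_s - k_agg) @ (psi_ext * y_ext) + delta_y
    return 1 if Lam > 0 else -1
\end{verbatim}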
Given Eqs (\ref{Decision Boundary Q}) and (\ref{Statistical Locus of Category Decision Q}), it follows that a quadratic eigenlocus test $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{H_{1}}{\underset{H_{2}}{\gtrless}}0$ makes a decision by comparing the output
\[
\operatorname{sign}\left( \operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{s}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) +\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \right) \text{,}
\]
where $\operatorname{sign}\left( x\right) \equiv\frac{x}{\left\vert x\right\vert }$ for $x\neq0$, to a threshold $\eta$ along the axis of $\widehat{\boldsymbol{\kappa}}$ in $\mathbb{R}^{d}$, where $\eta=0$.

\subsection{Quadratic Eigenlocus Decision Rules}

Substitution of the equation for $\boldsymbol{\kappa}$ in Eq. (\ref{Pair of Normal Eigenlocus Components Q}) into Eq. (\ref{NormalEigenlocusTestStatistic Q}) provides a quadratic eigenlocus test in terms of the primal eigenlocus components $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$:
\begin{align}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}_{1}\label{NormalEigenlocusTestStatistic2 Q}\\
& -\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}_{2}\nonumber\\
& \mathbf{+}\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{.}\nonumber
\end{align}

I will show that a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ and its Wolfe dual $\boldsymbol{\psi}$ possess an essential statistical property which enables quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ to satisfy a discrete version of the fundamental integral equation of binary classification
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and all of the forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ within the $Z_{1}$ and $Z_{2}$ decision regions are symmetrically balanced with
each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & :\;\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& =\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}
\end{align*}
by means of an integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & =\int\nolimits_{Z}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}\\
& =\int\nolimits_{Z}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\end{align*}
where $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ and $p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ are class-conditional densities for the respective extreme points $k_{\mathbf{x}_{2_{i\ast}}}$ and $k_{\mathbf{x}_{1_{i\ast}}}$, and $C_{1}$ and $C_{2}$ are integration constants for $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ respectively. I will define this property after I define the fundamental properties possessed by a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$.

\section{The Wolfe Dual Eigenspace II}

Let there be $l$ principal eigenaxis components $\left\{ \psi_{i\ast}\overrightarrow{\mathbf{e}}_{i}|\psi_{i\ast}>0\right\} _{i=1}^{l}$ on a constrained, primal quadratic eigenlocus within its Wolfe dual eigenspace
\[
\max\Xi\left( \boldsymbol{\psi}\right) =\mathbf{1}^{T}\boldsymbol{\psi}-\frac{\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}}{2}\text{,}
\]
where the Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ satisfies the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i\ast}>0$. The theorem for convex duality guarantees an equivalence and corresponding symmetry between a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ and its Wolfe dual $\boldsymbol{\psi}$.

Rayleigh's principle \citep[see][]{Strang1986} and the theorem for convex duality jointly indicate that Eq. (\ref{Vector Form Wolfe Dual Q}) provides an estimate of the largest eigenvector $\boldsymbol{\psi}$ of a kernel matrix $\mathbf{Q}$, where $\boldsymbol{\psi}$ satisfies the constraints $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$, such that $\boldsymbol{\psi}$ is a principal eigenaxis of three, symmetrical quadratic partitioning surfaces associated with the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}$.

I will now show that maximization of the functional $\mathbf{1}^{T}\boldsymbol{\psi}-\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$ requires that $\boldsymbol{\psi}$ satisfy an eigenenergy constraint which is symmetrically related to the restriction of the primal quadratic eigenlocus $\boldsymbol{\kappa}$ to its Wolfe dual eigenspace.
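As a purely illustrative numerical aside (not part of the formal development), the constrained maximization above can be carried out for synthetic data with an off-the-shelf solver; the data, the solver choice, and all names below are assumptions of the sketch. The printed class sums also preview the equilibrium constraint derived in the next subsections, since any feasible solution satisfies $\boldsymbol{\psi}^{T}\mathbf{y}=0$.
\begin{verbatim}
# A minimal sketch (illustrative assumptions throughout): estimate the Wolfe
# dual quadratic eigenlocus psi by maximizing 1'psi - psi'Q psi / 2 subject
# to psi'y = 0 and psi >= 0, with Q built from inner product statistics
# (x_i'x_j + 1)^2 for a synthetic two-class collection of data points.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (10, 2)),   # class omega_1
               rng.normal(+1.0, 0.5, (10, 2))])  # class omega_2
y = np.hstack([np.ones(10), -np.ones(10)])

K = (X @ X.T + 1.0) ** 2          # second-order polynomial kernel matrix
Q = np.outer(y, y) * K            # Q = D_y K D_y, as in the quadratic form

def neg_dual(psi):                # negate so a minimizer performs maximization
    return -(psi.sum() - 0.5 * psi @ Q @ psi)

res = minimize(neg_dual, x0=np.full(len(y), 0.1), method="SLSQP",
               bounds=[(0.0, None)] * len(y),
               constraints=[{"type": "eq", "fun": lambda p: p @ y}])
psi = res.x                       # psi_i > 0 only for extreme points
print("class sums balance:", psi[y > 0].sum(), psi[y < 0].sum())
\end{verbatim}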
\subsection{Eigenenergy Constraint on $\boldsymbol{\psi}$}

Equation (\ref{Minimum Total Eigenenergy Primal Normal Eigenlocus Q}) and the theorem for convex duality jointly indicate that $\boldsymbol{\psi}$ satisfies an eigenenergy constraint that is symmetrically related to the eigenenergy constraint on $\boldsymbol{\kappa}$ within its Wolfe dual eigenspace
\[
\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\cong\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{.}
\]
Therefore, a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ satisfies an eigenenergy constraint
\[
\max\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}
\]
for which the functional $\mathbf{1}^{T}\boldsymbol{\psi}-\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$ in Eq. (\ref{Vector Form Wolfe Dual Q}) is maximized by the largest eigenvector $\boldsymbol{\psi}$ of $\mathbf{Q}$, such that the constrained quadratic form $\boldsymbol{\psi}^{T}\mathbf{Q}\boldsymbol{\psi}/2$, where $\boldsymbol{\psi}^{T}\mathbf{y}=0$ and $\psi_{i}\geq0$, reaches its smallest possible value. This indicates that principal eigenaxis components on a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ satisfy minimum length constraints. Principal eigenaxis components on a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ also satisfy an equilibrium constraint.

\subsection{Equilibrium Constraint on $\boldsymbol{\psi}$}

The KKT condition in Eq. (\ref{KKTE2 Q}) requires that the magnitudes of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ satisfy the equation
\[
\left( y_{i}=1\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}+\left( y_{i}=-1\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}=0
\]
so that
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}=0\text{.} \label{Wolfe Dual Equilibrium Point Q}
\end{equation}
It follows that the integrated lengths of the Wolfe dual principal eigenaxis components correlated with each pattern category must \emph{balance} each other:
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\rightleftharpoons\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{.} \label{Equilibrium Constraint on Dual Eigen-components Q}
\end{equation}
Accordingly, let $l_{1}+l_{2}=l$ and express a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ in terms of $l$ non-orthogonal unit vectors $\left\{ \overrightarrow{\mathbf{e}}_{1\ast},\ldots,\overrightarrow{\mathbf{e}}_{l\ast}\right\} $:
\begin{align}
\boldsymbol{\psi}  & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\label{Wolfe Dual Vector Equation Q}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\nonumber\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}\nonumber
\end{align}
where each scaled, non-orthogonal unit vector $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is correlated with an extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$ respectively, $\boldsymbol{\psi}_{1}$ denotes the Wolfe dual eigenlocus component $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}$, and $\boldsymbol{\psi}_{2}$ denotes the Wolfe dual eigenlocus component
$\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}$.

Given Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q}) and data distributions that have dissimilar covariance matrices, it follows that the forces associated with counter risks and risks, within each of the symmetrical decision regions, are balanced with each other. Given Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q}) and data distributions that have similar covariance matrices, it follows that the forces associated with counter risks within each of the symmetrical decision regions are equal to each other, and the forces associated with risks within each of the symmetrical decision regions are equal to each other.

Given Eqs (\ref{Equilibrium Constraint on Dual Eigen-components Q}) and (\ref{Wolfe Dual Vector Equation Q}), it follows that the axis of a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ can be regarded as a lever that is formed by \emph{sets of principal eigenaxis components which are evenly or equally distributed over either side of the axis of $\boldsymbol{\psi}$, where a fulcrum is placed directly under the center of the axis of $\boldsymbol{\psi}$}. Thereby, the axis of $\boldsymbol{\psi}$ is in statistical equilibrium, where all of the principal eigenaxis components on $\boldsymbol{\psi}$ are equal or in correct proportions, relative to the center of $\boldsymbol{\psi}$, such that the opposing forces associated with risks and counter risks of a quadratic classification system are symmetrically balanced with each other. Figure \ref{Quadratic Dual Locus in Statistical Equilibrium} illustrates the axis of $\boldsymbol{\psi}$ in statistical equilibrium.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure37.png}}
\caption{All of the principal eigenaxis components on $\boldsymbol{\psi}$ have equal or correct proportions, relative to the center of $\boldsymbol{\psi}$, so that opposing forces associated with risks and counter risks are symmetrically balanced with each other.}
\label{Quadratic Dual Locus in Statistical Equilibrium}
\end{figure}

Using Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q}), it follows that the length $\left\Vert \boldsymbol{\psi}_{1}\right\Vert $ of $\boldsymbol{\psi}_{1}$ is balanced with the length $\left\Vert \boldsymbol{\psi}_{2}\right\Vert $ of $\boldsymbol{\psi}_{2}$
\begin{equation}
\left\Vert \boldsymbol{\psi}_{1}\right\Vert \rightleftharpoons\left\Vert \boldsymbol{\psi}_{2}\right\Vert \label{Equilibrium Constraint on Dual Component Lengths Q}
\end{equation}
and that the total allowed eigenenergies exhibited by $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are balanced with each other
\begin{equation}
\left\Vert \boldsymbol{\psi}_{1}\right\Vert _{\min_{c}}^{2}\rightleftharpoons\left\Vert \boldsymbol{\psi}_{2}\right\Vert _{\min_{c}}^{2}\text{.} \label{Symmetrical Balance of Wolf Dual Eigenenergies Q}
\end{equation}
Therefore, the equilibrium constraint on $\boldsymbol{\psi}$ in Eq.
(\ref{Equilibrium Constraint on Dual Eigen-components Q}) ensures that the total allowed eigenenergies exhibited by the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are symmetrically balanced with each other
\[
\left\Vert \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}\rightleftharpoons\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}
\]
about the center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$, which is located at the geometric center of $\boldsymbol{\psi}$ because $\left\Vert \boldsymbol{\psi}_{1}\right\Vert \equiv\left\Vert \boldsymbol{\psi}_{2}\right\Vert $. This indicates that the total allowed eigenenergies of $\boldsymbol{\psi}$ are distributed over its axis in a symmetrically balanced and well-proportioned manner.

\subsection{Symmetrical Balance Exhibited by the Axis of $\boldsymbol{\psi}$}

Given Eqs (\ref{Equilibrium Constraint on Dual Component Lengths Q}) and (\ref{Symmetrical Balance of Wolf Dual Eigenenergies Q}), it follows that the axis of a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ can be regarded as a lever that has \emph{equal weight on equal sides of a centrally placed fulcrum}. Thereby, the axis of $\boldsymbol{\psi}$ is a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum.

Later on, I will show that symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are symmetrically distributed over the axes of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and the unconstrained, correlated primal principal eigenaxis components (extreme vectors) on $\boldsymbol{\kappa}$. Figure \ref{Symmetrical Balance of Wolfe Dual Quadratic Eigenlocus} depicts how the axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the geometric center, i.e., the critical minimum eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$, of $\boldsymbol{\psi}$.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure38.png}}
\caption{The axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$.}
\label{Symmetrical Balance of Wolfe Dual Quadratic Eigenlocus}
\end{figure}

\subsection{Statistics for Quadratic Partitions}

Returning to Section 10.4, recall that the eigenspectrum of a kernel matrix $\mathbf{Q}$ determines the shapes of the quadratic surfaces which are specified by the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual Q}), where the eigenvalues $\lambda_{N}\leq\lambda_{N-1}\leq\ldots\leq\lambda_{1}$ of a kernel matrix $\mathbf{Q}$ are essentially determined by its inner product elements $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $. Therefore, let the kernel matrix $\mathbf{Q}$ in Eq.
(\ref{Vector Form Wolfe Dual Q}) contain inner product statistics $\varphi\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) =\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$ for a quadratic decision boundary $Q_{0}\left( \mathbf{x}\right) $ and quadratic decision borders $Q_{+1}\left( \mathbf{x}\right) $ and $Q_{-1}\left( \mathbf{x}\right) $ that delineate symmetrical decision regions $Z_{1}\simeq Z_{2}$.

Later on, I will show that quadratic eigenlocus transforms map the labeled $\pm1$, inner product statistics $\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$ contained within $\mathbf{Q}$
\[
\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}
\]
into a Wolfe dual quadratic eigenlocus of principal eigenaxis components
\[
\mathbf{Q}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\text{,}
\]
formed by $l$ scaled, non-orthogonal unit vectors $\left\{ \psi_{1\ast}\overrightarrow{\mathbf{e}}_{1\ast},\ldots,\psi_{l\ast}\overrightarrow{\mathbf{e}}_{l\ast}\right\} $, where the locus of each Wolfe dual principal eigenaxis component $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}$ is determined by the direction and well-proportioned magnitude of a correlated extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}$.

By way of motivation, I will now define the fundamental property possessed by a quadratic eigenlocus $\boldsymbol{\kappa}$ that enables a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ to satisfy the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}).

\section{The Property of Symmetrical Balance II}

I have demonstrated that constrained, quadratic eigenlocus discriminant functions $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ determine symmetrical decision regions $Z_{1}\simeq Z_{2}$ that are delineated by quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $, which satisfy symmetrically balanced constraints relative to the constraint on a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $, where all of the points $\mathbf{s}$ on $D_{+1}\left( \mathbf{s}\right) $, $D_{-1}\left( \mathbf{s}\right) $, and $D_{0}\left( \mathbf{s}\right) $ reference $\boldsymbol{\kappa}$. Therefore, I have shown that constrained, quadratic eigenlocus discriminant functions $D\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy boundary values of quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ and quadratic decision boundaries $D_{0}\left( \mathbf{s}\right) $, where the axis of $\boldsymbol{\kappa}$ is an axis of symmetry for $D_{+1}\left( \mathbf{s}\right) $, $D_{-1}\left( \mathbf{s}\right) $, and $D_{0}\left( \mathbf{s}\right) $.

Given that $\boldsymbol{\kappa}$ is an axis of symmetry which satisfies boundary values of quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ and quadratic decision boundaries $D_{0}\left( \mathbf{s}\right) $, it follows that $\boldsymbol{\kappa}$ \emph{must possess} the statistical property of \emph{symmetrical balance}.
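Before developing this property, a brief computational aside may be useful. Once $\boldsymbol{\psi}$ is known, a discriminant value $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ can be evaluated directly from the dual expansion of $\boldsymbol{\kappa}$ without constructing $\boldsymbol{\kappa}$ explicitly. In the sketch below, recovering the bias $\kappa_{0}$ by averaging margin conditions over the extreme points is an assumption of the sketch, not the formal definition used in this treatise.
\begin{verbatim}
# A minimal sketch: evaluate Lambda(s) = (x's + 1)^2 kappa + kappa_0 through
# the dual expansion kappa = sum_i y_i psi_i k_x_i (kernel trick), so the
# primal eigenlocus kappa is never formed explicitly.  The bias recovery from
# averaged margin conditions is an assumption of this sketch.
import numpy as np

def poly2(U, V):
    return (U @ V.T + 1.0) ** 2           # (u'v + 1)^2 inner product statistics

def decision_values(S, X, y, psi, tol=1e-8):
    ext = psi > tol                       # extreme points carry psi_i > 0
    w = psi[ext] * y[ext]
    kappa0 = np.mean(y[ext] - poly2(X[ext], X[ext]) @ w)
    return poly2(S, X[ext]) @ w + kappa0  # sign(...) selects omega_1 or omega_2
\end{verbatim}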
Recall that the physical property of symmetrical balance involves an axis or lever in equilibrium, where different elements are equal or in correct proportions relative to the center of the axis or lever, such that the opposing forces or influences of a system are balanced with each other.

\subsection{Symmetrical Balance Exhibited by the Axis of $\boldsymbol{\kappa}$}

Returning to Eqs (\ref{Equilibrium Constraint on Dual Eigen-components Q}) and (\ref{Wolfe Dual Vector Equation Q}), recall that the axis of $\boldsymbol{\psi}$ can be regarded as a lever in statistical equilibrium, where different principal eigenaxis components are equal or in correct proportions relative to the center of $\boldsymbol{\psi}$, such that the opposing forces associated with the risks and the counter risks of a quadratic classification system are balanced with each other. Thus, the axis of $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ exhibits the statistical property of symmetrical balance, where $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}$.

Furthermore, given Eqs (\ref{Equilibrium Constraint on Dual Component Lengths Q}) and (\ref{Symmetrical Balance of Wolf Dual Eigenenergies Q}), the axis of $\boldsymbol{\psi}$ can be regarded as a lever that has an equal distribution of eigenenergies on equal sides of a centrally placed fulcrum which is located at the center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$. Accordingly, the total allowed eigenenergies possessed by the principal eigenaxis components on $\boldsymbol{\psi}$ are symmetrically balanced with each other about a center of total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ which is located at the geometric center of $\boldsymbol{\psi}$. Thus, the axis of $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ exhibits the statistical property of symmetrical balance, where $\left\Vert \boldsymbol{\psi}_{1}\right\Vert \equiv\left\Vert \boldsymbol{\psi}_{2}\right\Vert $ and $\left\Vert \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\overrightarrow{\mathbf{e}}_{1_{i\ast}}\right\Vert _{\min_{c}}^{2}\equiv\left\Vert \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\overrightarrow{\mathbf{e}}_{2_{i\ast}}\right\Vert _{\min_{c}}^{2}$.

Returning to Eqs (\ref{Characteristic Eigenenergy of Quadratic}) and (\ref{Characteristic Eigenenergy of Quadratic 2}), recall that the locus of any quadratic curve or surface is determined by the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert ^{2}$ exhibited by the locus of its principal eigenaxis $\boldsymbol{\nu}$, where any given principal eigenaxis $\boldsymbol{\nu}$ and any given point $\mathbf{x}$ on a quadratic locus satisfy the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert ^{2}$ of $\boldsymbol{\nu}$. Accordingly, the inherent property of a quadratic locus and its principal eigenaxis $\boldsymbol{\nu}$ is the eigenenergy $\left\Vert \boldsymbol{\nu}\right\Vert ^{2}$ exhibited by $\boldsymbol{\nu}$.
Therefore, Eqs (\ref{Characteristic Eigenenergy of Quadratic}), (\ref{Characteristic Eigenenergy of Quadratic 2}), (\ref{Minimum Total Eigenenergy Primal Normal Eigenlocus Q}), and (\ref{Pair of Normal Eigenlocus Components Q}) jointly indicate that a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ satisfies the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ in Eq. (\ref{Decision Boundary Q}) and the quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $ in Eqs (\ref{Decision Border One Q}) and (\ref{Decision Border Two Q}) in terms of its total allowed eigenenergies
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}  & =\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}\\
& \cong\left[ \left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{1}^{T}\boldsymbol{\kappa}_{2}\right] +\left[ \left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{2}^{T}\boldsymbol{\kappa}_{1}\right] \text{,}
\end{align*}
where the functional $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{1}^{T}\boldsymbol{\kappa}_{2}$ is associated with the $D_{+1}\left( \mathbf{s}\right) $ quadratic decision border, the functional $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{2}^{T}\boldsymbol{\kappa}_{1}$ is associated with the $D_{-1}\left( \mathbf{s}\right) $ quadratic decision border, and the functional $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ is associated with the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $. Thus, the total allowed eigenenergies of the principal eigenaxis components on a quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ must satisfy the law of cosines in the symmetrically balanced manner depicted in Fig. \ref{Law of Cosines for Quadratic Classification Systems}.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=2.6809in]{Figure39.png}}
\caption{The likelihood ratio $\protect\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ of a quadratic eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the law of cosines in a symmetrically balanced manner.}
\label{Law of Cosines for Quadratic Classification Systems}
\end{figure}

Given that $\boldsymbol{\kappa}$ must possess the statistical property of symmetrical balance in terms of its principal eigenaxis components, it follows that the axis of $\boldsymbol{\kappa}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$.
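The eigenenergy decomposition above can be evaluated numerically with the kernel trick, since every inner product between the dual loci reduces to a weighted sum of inner product statistics. The following sketch, with assumed names, computes the three functionals and verifies the law of cosines.
\begin{verbatim}
# A minimal sketch: compute ||kappa_1||^2, ||kappa_2||^2 and kappa_1'kappa_2
# from dual weights p1, p2 over class extreme points X1, X2, and verify
# ||kappa_1 - kappa_2||^2 = (||kappa_1||^2 - kappa_1'kappa_2)
#                         + (||kappa_2||^2 - kappa_2'kappa_1).
import numpy as np

def poly2(U, V):
    return (U @ V.T + 1.0) ** 2

def eigenenergies(X1, p1, X2, p2):
    e11 = p1 @ poly2(X1, X1) @ p1        # ||kappa_1||^2
    e22 = p2 @ poly2(X2, X2) @ p2        # ||kappa_2||^2
    e12 = p1 @ poly2(X1, X2) @ p2        # kappa_1' kappa_2
    total = e11 - 2.0 * e12 + e22        # ||kappa_1 - kappa_2||^2
    return e11, e22, e12, total          # total == (e11 - e12) + (e22 - e12)
\end{verbatim}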
Accordingly, the axis of $\boldsymbol{\kappa}$ is said to be in statistical equilibrium, where the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}$
\begin{align*}
\boldsymbol{\kappa}  & =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}
\end{align*}
are equal or in correct proportions, relative to the center of $\boldsymbol{\kappa}$, such that all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ and all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ of a binary, quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are symmetrically balanced with each other.

I will prove that a constrained, quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies a discrete and data-driven version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium in Eq. (\ref{Equalizer Rule}) because the axis of $\boldsymbol{\kappa}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$ in the following manner:
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}
\]
and
\[
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where the equalizer statistic $\nabla_{eq}$
\[
\nabla_{eq}\triangleq\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\]
for which $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, equalizes the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
so that the dual locus of $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is in statistical equilibrium.
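As a hedged numerical illustration of this claimed balance (assuming a trained $\boldsymbol{\psi}$ and, for simplicity, $\xi_{i}=0$), the equalizer statistic can be computed directly from the dual components:
\begin{verbatim}
# A hedged numerical check of the balance claimed above: compute the equalizer
# statistic nabla_eq = delta(y) * (1/2) * sum_i psi_i* and test whether
# ||kappa_1||^2 + nabla_eq ~ ||kappa_2||^2 - nabla_eq for a trained psi.
# Taking xi_i = 0 is a simplifying assumption of this sketch.
import numpy as np

def equalizer_statistic(psi, y, xi=None, tol=1e-8):
    ext = psi > tol                        # the l extreme points only
    xi = np.zeros(ext.sum()) if xi is None else xi
    delta_y = np.sum(y[ext] * (1.0 - xi))  # delta(y)
    return 0.5 * delta_y * psi[ext].sum()  # nabla_eq
\end{verbatim}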
Figure \ref{Symmetrical Balance of Constrained Primal Quadratic Eigenlocus} illustrates the property of symmetrical balance exhibited by the dual locus of $\boldsymbol{\kappa}$.

\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure40.png}}
\caption{A constrained, quadratic eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies a fundamental integral equation of binary classification for a quadratic classification system in statistical equilibrium, where the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ and the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of the system are minimized, because the axis of $\boldsymbol{\kappa}$ is essentially a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$.}
\label{Symmetrical Balance of Constrained Primal Quadratic Eigenlocus}
\end{figure}

I will obtain the above equations for a quadratic eigenlocus $\boldsymbol{\kappa}$ in statistical equilibrium by devising a chain of arguments which demonstrate that a constrained quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies discrete and data-driven versions of the fundamental equations of binary classification for a classification system in statistical equilibrium in Eqs (\ref{Vector Equation of Likelihood Ratio and Decision Boundary})--(\ref{Balancing of Bayes' Risks and Counteracting Risks}). The general course of my argument, which is outlined below, follows the general course of my argument for linear eigenlocus transforms.
\section{General Course of Argument II}

In order to prove that a constrained quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies $\left( 1\right) $ the vector equation
\[
\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}=0\text{,}
\]
$\left( 2\right) $ the statistical equilibrium equation
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \text{,}
\]
$\left( 3\right) $ the corresponding integral equation
\[
\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}=\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}\text{,}
\]
$\left( 4\right) $ a discrete, quadratic version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)   & =\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}+\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}+\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}
\end{align*}
and $\left( 5\right) $ the corresponding integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)   & :\;\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}-\int_{Z_{1}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}-\int_{Z_{2}}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\text{,}
\end{align*}
I will need to develop mathematical machinery for several systems of locus equations. The fundamental equations of a binary classification system involve mathematical machinery and systems of locus equations that determine the following mathematical objects:

\begin{enumerate}
\item Total allowed eigenenergies of extreme points on a Wolfe dual and a constrained primal quadratic eigenlocus.

\item Total allowed eigenenergies of Wolfe dual and constrained, primal quadratic eigenlocus components.

\item Total allowed eigenenergy of a Wolfe dual and a constrained primal quadratic eigenlocus.
\item Class-conditional probability density functions for extreme points.

\item Conditional probability functions for extreme points.

\item Risks and counter risks of extreme points.

\item Conditional probability functions for the risks and the counter risks related to positions and potential locations of extreme points.

\item Integral equations of class-conditional probability density functions.
\end{enumerate}

A high level overview of the development of the mathematical machinery and systems of locus equations is outlined below.

I will develop class-conditional probability density functions and conditional probability functions for extreme points in the following manner: Let $k_{\mathbf{x}_{i\ast}}$ denote an extreme point, where $k_{\mathbf{x}_{i\ast}}$ is a reproducing kernel of an extreme point $\mathbf{x}_{i\ast}$. Using Eq. (\ref{Geometric Locus of Second-order Reproducing Kernel}), any given extreme point $k_{\mathbf{x}_{i\ast}}$ is the endpoint on a locus of \emph{random variables}
\[
\begin{pmatrix}
\left( \left\Vert \mathbf{x}_{i\ast}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i\ast1}\right) 1}, & \left( \left\Vert \mathbf{x}_{i\ast}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i\ast2}\right) 2}, & \cdots, & \left( \left\Vert \mathbf{x}_{i\ast}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i\ast d}\right) d}
\end{pmatrix}
\text{,}
\]
where each random variable is characterized by an expected value
\[
E\left[ \left( \left\Vert \mathbf{x}_{i\ast}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i\ast}\right) j}\right]
\]
and a variance
\[
\operatorname{var}\left( \left( \left\Vert \mathbf{x}_{i\ast}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i\ast}\right) j}\right) \text{.}
\]
Therefore, an extreme point $k_{\mathbf{x}_{i\ast}}$ is described by a central location (an expected value) and a covariance (a spread). The relative likelihood that an extreme point has a given location is described by a conditional probability density function. The cumulative probability of given locations for an extreme point, i.e., the probability of finding the extreme point within a localized region, is described by a conditional probability function \citep{Ash1993,Flury1997}.

So, take the Wolfe dual quadratic eigenlocus in Eq. (\ref{Wolfe Dual Vector Equation Q})
\begin{align*}
\boldsymbol{\psi}  & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}
\end{align*}
where each scaled, non-orthogonal unit vector $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is a displacement vector that is correlated with an extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$ respectively.
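The first and second-order descriptors of an extreme point just described can be estimated by simulation. The sketch below assumes Gaussian class data and uses the per-coordinate form $x_{k}^{2}+2x_{k}+1$ of a reproducing kernel vector that appears later in this section; everything in it is synthetic and illustrative.
\begin{verbatim}
# An illustrative sketch (synthetic data assumed): estimate the expected value
# and the variance of the reproducing-kernel coordinates of a data point,
# using the per-coordinate form x_k^2 + 2 x_k + 1 given later in this section.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.5, scale=0.25, size=(5000, 2))  # draws of x

coords = samples ** 2 + 2.0 * samples + 1.0   # coordinates of k_x per draw
print("expected value per coordinate:", coords.mean(axis=0))
print("variance per coordinate:      ", coords.var(axis=0))
\end{verbatim}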
For a given set
\[
\left\{ \left\{ \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{1}},\;\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{2}}\right\}
\]
of $k_{\mathbf{x}_{1i\ast}}$ and $k_{\mathbf{x}_{2i\ast}}$ reproducing kernels, I will show that each Wolfe dual principal eigenaxis component $\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}$ on $\boldsymbol{\psi}$ specifies a class-conditional density $p\left( k_{\mathbf{x}_{i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a correlated reproducing kernel $k_{\mathbf{x}_{i\ast}}$ of an extreme point $\mathbf{x}_{i\ast}$, such that $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are class-conditional probability density functions in Wolfe dual eigenspace, $\boldsymbol{\kappa}_{1}$ is a parameter vector for a class-conditional probability density $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ for a given set $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ reproducing kernels
\[
\boldsymbol{\kappa}_{1}=p\left( \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{1}\right) \text{,}
\]
and $\boldsymbol{\kappa}_{2}$ is a parameter vector for a class-conditional probability density $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ for a given set $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ reproducing kernels
\[
\boldsymbol{\kappa}_{2}=p\left( \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{2}\right) \text{,}
\]
where the area under each pointwise conditional density $p\left( k_{\mathbf{x}_{i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ is a conditional probability that an extreme point $k_{\mathbf{x}_{i\ast}}$ will be observed in a $Z_{1}$ or $Z_{2}$ decision region of a decision space $Z$.

In order to develop class-conditional probability densities for extreme points, I will devise a system of data-driven, locus equations in Wolfe dual eigenspace that provides tractable point and coordinate relationships between the weighted (labeled and scaled) extreme points on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ and the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$. I will use this system of equations to develop equations for geometric and statistical properties possessed by the Wolfe dual and the constrained, primal principal eigenaxis components. Next, I will use these equations and identified properties to define class-conditional probability densities for individual extreme points and class-conditional probability densities $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{2}\right) $ for labeled sets of extreme points.
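One hedged computational reading of this statement: since the components of $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ are nonnegative, normalizing each class component yields a discrete density over the extreme points of that class. The normalization step is an assumption of the sketch, not part of the formal development.
\begin{verbatim}
# A hedged sketch: treat the normalized class components of psi as discrete
# class-conditional densities over the extreme points (the normalization is
# an assumption of this sketch).
import numpy as np

def discrete_densities(psi, y, tol=1e-8):
    p1 = psi[(y > 0) & (psi > tol)]
    p2 = psi[(y < 0) & (psi > tol)]
    return p1 / p1.sum(), p2 / p2.sum()   # each sums to one over its class
\end{verbatim}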
Thereby, I will demonstrate that the conditional probability function $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ for a given set $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ reproducing kernels is given by the area under the class-conditional probability density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $
\begin{align*}
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right)   & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right) d\boldsymbol{\kappa}_{1}\\
& =\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\\
& =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert ^{2}+C_{1}\text{,}
\end{align*}
over the decision space $Z$, where $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert ^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\kappa}_{1}$ and $C_{1}$ is an integration constant.

Likewise, I will demonstrate that the conditional probability function $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ for a given set $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ reproducing kernels is given by the area under the class-conditional probability density function $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $
\begin{align*}
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right)   & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right) d\boldsymbol{\kappa}_{2}\\
& =\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert ^{2}+C_{2}\text{,}
\end{align*}
over the decision space $Z$, where $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert ^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\kappa}_{2}$ and $C_{2}$ is an integration constant.

In order to define the $C_{1}$ and $C_{2}$ integration constants, I will need to define the manner in which the total allowed eigenenergies possessed by the scaled extreme vectors on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other. I will use these results to define the manner in which the area under the class-conditional probability density functions $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{2}\right) $ and the corresponding conditional probability functions $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other.

I will define the $C_{1}$ and $C_{2}$ integration constants in the following manner: I will use the KKT condition in Eq. (\ref{KKTE5 Q}) and the theorem of Karush, Kuhn, and Tucker to devise a system of data-driven, locus equations that determines the manner in which the total allowed eigenenergies of the scaled extreme vectors on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other.
I will use these results along with results obtained from the analysis of the Wolfe dual eigenspace to devise a system of data-driven, locus equations that determines the manner in which the class-conditional density functions $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}|\boldsymbol{\kappa}_{2}\right) $ satisfy an integral equation
\[
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) :\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\nabla_{eq}=\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\nabla_{eq}\text{,}
\]
over the decision space $Z$, where $\nabla_{eq}$ is an equalizer statistic.

Thereby, I will demonstrate that the statistical property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ ensures that the conditional probability functions $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{1}$ and class $\omega_{2}$ are equal to each other, so that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the integral equation in Eq. (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}).

I will use these results along with results obtained from the analysis of the Wolfe dual eigenspace to prove that quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy the fundamental integral equation of binary classification in Eq. (\ref{Equalizer Rule}) along with the corresponding integral equation in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}).

I will also devise an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ that illustrates the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ enables quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ to effectively balance all of the forces associated with the risk $\mathfrak{R}_{\mathfrak{B}}\left( Z|\boldsymbol{\kappa}\right) $ of a quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ within the $Z_{1}$ decision region are symmetrically balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ within the $Z_{2}$ decision region.
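Combining the earlier sketches (and assuming the illustrative helpers \texttt{eigenenergies} and \texttt{equalizer\_statistic} defined above, together with the synthetic \texttt{X}, \texttt{y}, \texttt{psi}), the claimed equality can be probed numerically; this is a usage example under those assumptions, not a derivation.
\begin{verbatim}
# A hedged end-to-end check, reusing the illustrative helpers defined above:
# comparing C1 and C2 aside, the integral equation above reduces to
# ||kappa_1||^2 + nabla_eq ~ ||kappa_2||^2 - nabla_eq.
e11, e22, e12, total = eigenenergies(X[y > 0], psi[y > 0],
                                     X[y < 0], psi[y < 0])
nabla_eq = equalizer_statistic(psi, y)
print("lhs:", e11 + nabla_eq, "  rhs:", e22 - nabla_eq)
\end{verbatim}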
Thereby, I will devise integral equations that are satisfied by quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$, by which the discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is the solution to a discrete version of the fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)   & =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\nabla_{eq}\\
& =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\nabla_{eq}\text{,}
\end{align*}
where all of the forces associated with counter risks and risks for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)   & :\;\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\nabla_{eq}\\
& =\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\nabla_{eq}\text{,}
\end{align*}
over the $Z_{1}$ and $Z_{2}$ decision regions.

Quadratic eigenlocus transforms involve symmetrically balanced, first and second-order statistical moments of reproducing kernels of extreme data points. I will begin my analysis by defining first and second-order statistical moments of reproducing kernels of data points.

\section{Statistical Moments II}

Consider again the matrix $\mathbf{Q}$ associated with the constrained quadratic form in Eq.
(\ref{Vector Form Wolfe Dual Q})
\begin{equation}
\mathbf{Q}=
\begin{pmatrix}
\left( \mathbf{x}_{1}^{T}\mathbf{x}_{1}+1\right) ^{2} & \left( \mathbf{x}_{1}^{T}\mathbf{x}_{2}+1\right) ^{2} & \cdots & -\left( \mathbf{x}_{1}^{T}\mathbf{x}_{N}+1\right) ^{2}\\
\left( \mathbf{x}_{2}^{T}\mathbf{x}_{1}+1\right) ^{2} & \left( \mathbf{x}_{2}^{T}\mathbf{x}_{2}+1\right) ^{2} & \cdots & -\left( \mathbf{x}_{2}^{T}\mathbf{x}_{N}+1\right) ^{2}\\
\vdots & \vdots & \ddots & \vdots\\
-\left( \mathbf{x}_{N}^{T}\mathbf{x}_{1}+1\right) ^{2} & -\left( \mathbf{x}_{N}^{T}\mathbf{x}_{2}+1\right) ^{2} & \cdots & \left( \mathbf{x}_{N}^{T}\mathbf{x}_{N}+1\right) ^{2}
\end{pmatrix}
\text{,} \label{Autocorrelation Matrix Q}
\end{equation}
where $\mathbf{Q}\triangleq\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{T}$, $\widetilde{\mathbf{X}}\triangleq\mathbf{D}_{y}\mathbf{X}$, $\mathbf{D}_{y}$ is an $N\times N$ diagonal matrix of training labels $y_{i}$, and the $N\times d$ reproducing kernel matrix is
\[
\mathbf{X}=
\begin{pmatrix}
\left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}, & \left( \mathbf{x}^{T}\mathbf{x}_{2}+1\right) ^{2}, & \ldots, & \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}
\end{pmatrix}
^{T}\text{.}
\]
Without loss of generality (WLOG), assume that $N$ is an even number, the first $N/2$ vectors have the training label $y_{i}=1$, and the last $N/2$ vectors have the training label $y_{i}=-1$. The analysis that follows does not take training label information into account.

Recall that reproducing kernels $k_{\mathbf{x}_{i}}=\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ replace straight lines of vectors with second-order polynomial curves. Using Eq. (\ref{Scalar Projection Reproducing Kernels}), let the inner product statistic $\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$ be interpreted as $\left\Vert k_{\mathbf{x}_{i}}\right\Vert $ times the scalar projection $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ of $k_{\mathbf{x}_{j}}$ onto $k_{\mathbf{x}_{i}}$:
\begin{align*}
\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}  & =\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\\
& =\left\Vert k_{\mathbf{x}_{i}}\right\Vert \times\left[ \left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\right] \text{.}
\end{align*}
It follows that row $\mathbf{Q}\left( i,:\right) $ in Eq.
(\ref{Autocorrelation Matrix Q}) contains uniformly weighted $\left\Vert k_{\mathbf{x}_{i}}\right\Vert $ scalar projections $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ for each of the $N$ vectors $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\} _{j=1}^{N}$ onto the vector $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$:
\begin{equation}
\widetilde{\mathbf{Q}}=
\begin{pmatrix}
\left\Vert f\left( \mathbf{x}_{1}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{1}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{1}}k_{\mathbf{x}_{1}}} & \cdots & -\left\Vert f\left( \mathbf{x}_{1}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{N}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{1}}k_{\mathbf{x}_{N}}}\\
\left\Vert f\left( \mathbf{x}_{2}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{1}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{2}}k_{\mathbf{x}_{1}}} & \cdots & -\left\Vert f\left( \mathbf{x}_{2}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{N}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{2}}k_{\mathbf{x}_{N}}}\\
\vdots & \ddots & \vdots\\
-\left\Vert f\left( \mathbf{x}_{N}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{1}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{N}}k_{\mathbf{x}_{1}}} & \cdots & \left\Vert f\left( \mathbf{x}_{N}\right) \right\Vert \left\Vert f\left( \mathbf{x}_{N}\right) \right\Vert \cos\theta_{k_{\mathbf{x}_{N}}k_{\mathbf{x}_{N}}}
\end{pmatrix}
\text{,} \label{Inner Product Matrix Q}
\end{equation}
where $f\left( \mathbf{x}_{i}\right) =k_{\mathbf{x}_{i}}$ and $0<\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\leq\frac{\pi}{2}$ or $\frac{\pi}{2}<\theta_{\mathbf{x}_{i}\mathbf{x}_{j}}\leq\pi$. Alternatively, column $\mathbf{Q}\left( :,j\right) $ in Eq. (\ref{Autocorrelation Matrix Q}) contains weighted $\left\Vert k_{\mathbf{x}_{i}}\right\Vert $ scalar projections $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ for the vector $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$ onto each of the $N$ vectors $\left\{ \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\} _{i=1}^{N}$.

Now consider the $i$th row $\widetilde{\mathbf{Q}}\left( i,:\right) $ of $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix Q}). Again, using Eq.
(\ref{Scalar Projection Reproducing Kernels}), it follows that element $\widetilde{\mathbf{Q}}\left( i,j\right) $ of row $\widetilde{\mathbf{Q}}\left( i,:\right) $ specifies the length $\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\Vert $ of the vector $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ multiplied by the scalar projection $\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ of $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$ onto $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$:
\[
\widetilde{\mathbf{Q}}\left( i,j\right) =\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\Vert \left[ \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\right] \text{,}
\]
where the signed magnitude of the vector projection of $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$ along the axis of $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$
\begin{align*}
\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{i}}}}\left( \overrightarrow{k_{\mathbf{x}_{j}}}\right)   & =\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\\
& =\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\left( \frac{\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}}{\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\Vert }\right) \text{,}
\end{align*}
provides a measure of the first and second degree components of the vector $k_{\mathbf{x}_{j}}$
\[
k_{\mathbf{x}_{j}}=
\begin{pmatrix}
x_{j1}^{2}+2x_{j1}+1, & x_{j2}^{2}+2x_{j2}+1, & \cdots, & x_{jd}^{2}+2x_{jd}+1
\end{pmatrix}
^{T}
\]
along the axis of the vector $k_{\mathbf{x}_{i}}$
\[
k_{\mathbf{x}_{i}}=
\begin{pmatrix}
x_{i1}^{2}+2x_{i1}+1, & x_{i2}^{2}+2x_{i2}+1, & \cdots, & x_{id}^{2}+2x_{id}+1
\end{pmatrix}
^{T}\text{.}
\]
Accordingly, the signed magnitude $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ provides an estimate of the amount of first degree and second degree components of the vector $k_{\mathbf{x}_{j}}$ that are distributed over the axis of the vector $k_{\mathbf{x}_{i}}$. This indicates that the signed magnitudes $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ contained within $\widetilde{\mathbf{Q}}$ account for how the first degree and second degree coordinates of a data point $k_{\mathbf{x}_{j}}$ are distributed along the axes of a set of vectors $\left\{ k_{\mathbf{x}_{i}}\right\} _{i=1}^{N}$ within Euclidean space.

Using the above assumptions and notation, for any given row $\widetilde{\mathbf{Q}}\left( i,:\right) $ of Eq.
(\ref{Inner Product Matrix Q}), it follows that the statistic denoted by $E_{k_{\mathbf{x}_{i}}}\left[ k_{\mathbf{x}_{i}}|\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}\right] $
\begin{align}
E_{k_{\mathbf{x}_{i}}}\left[ k_{\mathbf{x}_{i}}|\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}\right]   & =\left\Vert k_{\mathbf{x}_{i}}\right\Vert {\displaystyle\sum\nolimits_{j}} \operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{i}}}}\left( \overrightarrow{k_{\mathbf{x}_{j}}}\right) \label{Row Distribution First Order Vector Coordinates Q}\\
& =\left\Vert k_{\mathbf{x}_{i}}\right\Vert {\displaystyle\sum\nolimits_{j}} \left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\nonumber
\end{align}
provides an estimate $E_{k_{\mathbf{x}_{i}}}\left[ k_{\mathbf{x}_{i}}|\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}\right] $ for the amount of first and second degree components of a vector $k_{\mathbf{x}_{i}}$ that are distributed over the axes of a set of vectors $\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}$, where labels have not been taken into account. Thereby, Eq. (\ref{Row Distribution First Order Vector Coordinates Q}) describes a distribution of first and second degree coordinates for a vector $k_{\mathbf{x}_{i}}$ in a data collection.

Given that Eq. (\ref{Row Distribution First Order Vector Coordinates Q}) involves signed magnitudes of vector projections along the axis of a fixed vector $k_{\mathbf{x}_{i}}$, the distribution of first and second degree vector coordinates described by Eq. (\ref{Row Distribution First Order Vector Coordinates Q}) is said to determine a \emph{first-order statistical moment about the locus of a reproducing kernel} $k_{\mathbf{x}_{i}}$. Because the statistic $E_{k_{\mathbf{x}_{i}}}\left[ k_{\mathbf{x}_{i}}|\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}\right] $ depends on the uniform direction of a fixed vector $k_{\mathbf{x}_{i}}$, the statistic $E_{k_{\mathbf{x}_{i}}}\left[ k_{\mathbf{x}_{i}}|\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}\right] $ is said to be unidirectional.

In the next section, I will define pointwise covariance statistics that provide unidirectional estimates of covariance along a fixed reference axis.

\subsection{Unidirectional Covariance Statistics}

Recall that classical covariance statistics provide omnidirectional estimates of covariance along $N$ axes of $N$ vectors (see Section 13.1). I will now devise pointwise covariance statistics $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{i}}\right) $ for individual vectors $k_{\mathbf{x}_{i}}$. Pointwise covariance statistics for reproducing kernels are unidirectional statistics that provide coherent estimates of covariance along a fixed reference axis. WLOG, label information is not taken into consideration.

\subsubsection{Pointwise Covariance Statistics for Reproducing Kernels}

Take any row $\widetilde{\mathbf{Q}}\left( i,:\right) $ of the reproducing kernel matrix $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix Q}) and consider the inner product statistic $\left\Vert k_{\mathbf{x}_{i}}\right\Vert \left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ in element $\widetilde{\mathbf{Q}}\left( i,j\right) $.
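Before deriving the covariance interpretation of this element, note that, numerically, $\left\Vert k_{\mathbf{x}_{i}}\right\Vert \left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}=\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$, so the first-order statistical moment in Eq. (\ref{Row Distribution First Order Vector Coordinates Q}) is simply the $i$th row sum of the unlabeled kernel matrix; the pointwise covariance statistic derived below reduces to the same row sum. A minimal sketch:
\begin{verbatim}
# A minimal sketch: the unidirectional statistic about the locus of k_x_i is
# the i-th row sum of the kernel matrix of inner product statistics.
import numpy as np

def first_order_moments(X):
    K = (X @ X.T + 1.0) ** 2    # ||k_x_i|| ||k_x_j|| cos(theta_ij)
    return K.sum(axis=1)        # one unidirectional estimate per vector
\end{verbatim}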
Using Eqs (\ref{Geometric Locus of Second-order Reproducing Kernel}) and (\ref{Inner Product Statistic Reproducing Kernels}), it follows that element $\widetilde{\mathbf{Q}}\left( i,j\right) $ in row $\widetilde{\mathbf{Q}}\left( i,:\right) $ specifies the joint variations $\operatorname{cov}\left( k_{\mathbf{x}_{i}},k_{\mathbf{x}_{j}}\right) $
\[
\operatorname{cov}\left( k_{\mathbf{x}_{i}},k_{\mathbf{x}_{j}}\right) =\left\Vert k_{\mathbf{x}_{i}}\right\Vert \left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}
\]
between the components of the vector $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$
\[
\begin{pmatrix}
\left( \left\Vert \mathbf{x}_{i}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i1}\right) 1}, & \left( \left\Vert \mathbf{x}_{i}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{i2}\right) 2}, & \cdots, & \left( \left\Vert \mathbf{x}_{i}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{id}\right) d}
\end{pmatrix}
\]
and the components of the vector $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$
\[
\begin{pmatrix}
\left( \left\Vert \mathbf{x}_{j}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{j1}\right) 1}, & \left( \left\Vert \mathbf{x}_{j}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{j2}\right) 2}, & \cdots, & \left( \left\Vert \mathbf{x}_{j}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( \mathbf{x}_{jd}\right) d}
\end{pmatrix}
\text{,}
\]
where the $d$ components $\left\{ \left( \left\Vert \mathbf{s}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( s_{i}\right) i}\right\} _{i=1}^{d}$ of any given vector $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ are random variables, each of which is characterized by an expected value and a variance
\[
E\left[ \left( \left\Vert \mathbf{s}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( s_{i}\right) i}\right] \text{ and }\operatorname{var}\left( \left( \left\Vert \mathbf{s}\right\Vert ^{2}+1\right) \cos\alpha_{k\left( s_{i}\right) i}\right) \text{.}
\]
It follows that the $j$th element $\widetilde{\mathbf{Q}}\left( i,j\right) $ of row $\widetilde{\mathbf{Q}}\left( i,:\right) $ specifies the joint variations of the $d$ random variables of a vector $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$ about the $d$ random variables of the vector $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$. Thus, row $\widetilde{\mathbf{Q}}\left( i,:\right) $ specifies the joint variations between the random variables of a fixed vector $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ and the random variables of an entire collection of vectors $\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}$ of data.
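The component representation above can be checked numerically. The sketch below (NumPy assumed; \texttt{phi} is one standard explicit map satisfying $\varphi(\mathbf{a})^{T}\varphi(\mathbf{b})=\left( \mathbf{a}^{T}\mathbf{b}+1\right) ^{2}$, not necessarily the text's coordinate ordering) confirms that $\left\Vert k_{\mathbf{x}}\right\Vert =\left\Vert \mathbf{x}\right\Vert ^{2}+1$ and that each component of $k_{\mathbf{x}}$ is this length times a direction cosine:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=4)

def phi(v):
    # One explicit map with phi(a) @ phi(b) = (a @ b + 1)**2
    return np.concatenate([np.outer(v, v).ravel(),
                           np.sqrt(2.0) * v, [1.0]])

k_x = phi(x)
length = np.linalg.norm(k_x)
assert np.isclose(length, x @ x + 1.0)   # ||k_x|| = ||x||^2 + 1
cosines = k_x / length                   # direction cosines cos(alpha_i)
assert np.allclose(k_x, length * cosines)
\end{verbatim}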
Again, take any row $\widetilde{\mathbf{Q}}\left( i,:\right) $ of the matrix $\widetilde{\mathbf{Q}}$ in Eq. (\ref{Inner Product Matrix Q}). Using Eq. (\ref{Scalar Projection Reproducing Kernels}), it follows that the statistic $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{i}}\right) $
\begin{align}
\widehat{\operatorname{cov}}_{up}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right)  & =\sum\nolimits_{j=1}^{N}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\label{Pointwise Covariance Statistic Q}\\
& =\sum\nolimits_{j=1}^{N}\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\nonumber\\
& =\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\sum\nolimits_{j=1}^{N}\left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\nonumber\\
& =\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right\Vert \sum\nolimits_{j=1}^{N}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{j}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}\nonumber
\end{align}
provides a unidirectional estimate of the joint variations of the $d$ random variables of each of the $N$ vectors of a training data collection $\left\{ k_{\mathbf{x}_{j}}\right\} _{j=1}^{N}$, and a unidirectional estimate of the joint variations of the $d$ random variables of the common mean $\sum\nolimits_{j=1}^{N}k_{\mathbf{x}_{j}}$ of the $N$ vectors, about the $d$ random variables of a fixed vector $k_{\mathbf{x}_{i}}$, along the axis of the fixed vector $k_{\mathbf{x}_{i}}=\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$. Thereby, the statistic $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{i}}\right) $ specifies the direction of the vector $k_{\mathbf{x}_{i}}=\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$ and a signed magnitude along the axis of the vector $k_{\mathbf{x}_{i}}$.

The statistic $\widehat{\operatorname{cov}}_{up}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right) $ in Eq. (\ref{Pointwise Covariance Statistic Q}) is defined to be a pointwise covariance estimate for a reproducing kernel $\left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}$, where the statistic $\widehat{\operatorname{cov}}_{up}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right) $ provides a unidirectional estimate of the joint variations between the random variables of each training vector $k_{\mathbf{x}_{j}}$ in a training data collection and the random variables of a fixed vector $k_{\mathbf{x}_{i}}$, and a unidirectional estimate of the joint variations between the random variables of the mean vector $\sum\nolimits_{j=1}^{N}k_{\mathbf{x}_{j}}$ and the fixed vector $k_{\mathbf{x}_{i}}$. The statistic $\widehat{\operatorname{cov}}_{up}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i}+1\right) ^{2}\right) $ also accounts for first and second degree vector components.

Given that the joint variations estimated by the statistic $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{i}}\right) $ are derived from second-order distance statistics $\left\Vert k_{\mathbf{x}_{i}}-k_{\mathbf{x}_{j}}\right\Vert ^{2}$ which involve signed magnitudes of vector projections along the common axis of a fixed vector $k_{\mathbf{x}_{i}}$, a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{i}}\right) $ is said to determine a \emph{second-order statistical moment about the locus of a reproducing kernel} $k_{\mathbf{x}_{i}}$.
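Numerically, the pointwise covariance statistic is the row sum of the kernel matrix and, equivalently, the inner product of $\varphi(\mathbf{x}_{i})$ with the summed vector $\sum_{j}\varphi(\mathbf{x}_{j})$. A minimal sketch under the same NumPy and feature-map assumptions as above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(7, 3))

def phi(v):
    # Explicit map with phi(a) @ phi(b) = (a @ b + 1)**2
    return np.concatenate([np.outer(v, v).ravel(),
                           np.sqrt(2.0) * v, [1.0]])

Q = (X @ X.T + 1.0) ** 2
Phi = np.stack([phi(x) for x in X])

i = 0
cov_up = Q[i, :].sum()              # row-sum form of cov_up(k_xi)
assert np.isclose(cov_up, Phi[i] @ Phi.sum(axis=0))
\end{verbatim}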
Using Eq. (\ref{Row Distribution First Order Vector Coordinates Q}), Eq. (\ref{Pointwise Covariance Statistic Q}) also specifies a distribution of first and second degree coordinates for a given reproducing kernel $k_{\mathbf{x}_{i}}$, which determines a first-order statistical moment about the locus of the data point $k_{\mathbf{x}_{i}}$.

I will now demonstrate that pointwise covariance statistics can be used to discover reproducing kernels of extreme points.

\subsection{Discovery of Extreme Reproducing Kernels}

The kernel matrix associated with the constrained quadratic form in Eq. (\ref{Vector Form Wolfe Dual Q}) contains inner product statistics for two labeled collections of data. Denote those data points that belong to class $\omega_{1}$ by $k_{\mathbf{x}_{1_{i}}}$ and those that belong to class $\omega_{2}$ by $k_{\mathbf{x}_{2_{i}}}$. Let $\overline{k}_{\mathbf{x}_{1}}$ and $\overline{k}_{\mathbf{x}_{2}}$ denote the mean vectors of class $\omega_{1}$ and class $\omega_{2}$. Let $i=1:n_{1}$, where the vector $k_{\mathbf{x}_{1_{i}}}$ has the label $y_{i}=1$, and let $i=n_{1}+1:n_{1}+n_{2}$, where the vector $k_{\mathbf{x}_{2_{i}}}$ has the label $y_{i}=-1$. Using label information, Eq. (\ref{Pointwise Covariance Statistic Q}) can be rewritten as
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) =k_{\mathbf{x}_{1_{i}}}\left( \sum\nolimits_{j=1}^{n_{1}}k_{\mathbf{x}_{1_{j}}}-\sum\nolimits_{j=n_{1}+1}^{n_{1}+n_{2}}k_{\mathbf{x}_{2_{j}}}\right)
\]
and
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) =k_{\mathbf{x}_{2_{i}}}\left( \sum\nolimits_{j=n_{1}+1}^{n_{1}+n_{2}}k_{\mathbf{x}_{2_{j}}}-\sum\nolimits_{j=1}^{n_{1}}k_{\mathbf{x}_{1_{j}}}\right) \text{.}
\]
I will now show that extreme reproducing kernels possess large pointwise covariances relative to the non-extreme reproducing kernels in each respective pattern class. Recall that an extreme point is located relatively far from its distribution mean, relatively close to the mean of the other distribution, and relatively close to other extreme points. Denote an extreme reproducing kernel by $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ and a non-extreme reproducing kernel by $k_{\mathbf{x}_{1_{i}}}$ or $k_{\mathbf{x}_{2_{i}}}$.

Take any extreme reproducing kernel $k_{\mathbf{x}_{1_{i\ast}}}$ and any non-extreme reproducing kernel $k_{\mathbf{x}_{1_{i}}}$ that belong to class $\omega_{1}$ and consider the pointwise covariance estimates for $k_{\mathbf{x}_{1_{i\ast}}}$
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) =k\left( \mathbf{x}_{1_{i\ast}},\overline{\mathbf{x}}_{1}\right) -k\left( \mathbf{x}_{1_{i\ast}},\overline{\mathbf{x}}_{2}\right)
\]
and for $k_{\mathbf{x}_{1_{i}}}$
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) =k\left( \mathbf{x}_{1_{i}},\overline{\mathbf{x}}_{1}\right) -k\left( \mathbf{x}_{1_{i}},\overline{\mathbf{x}}_{2}\right) \text{.}
\]
Because $k_{\mathbf{x}_{1_{i\ast}}}$ is an extreme reproducing kernel, it follows that $k\left( \mathbf{x}_{1_{i\ast}},\overline{\mathbf{x}}_{1}\right) >k\left( \mathbf{x}_{1_{i}},\overline{\mathbf{x}}_{1}\right) $ and that $k\left( \mathbf{x}_{1_{i\ast}},\overline{\mathbf{x}}_{2}\right) <k\left( \mathbf{x}_{1_{i}},\overline{\mathbf{x}}_{2}\right) $. Thus, $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) >\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) $.
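A minimal sketch of this discovery procedure, under illustrative assumptions (NumPy; two synthetic Gaussian classes; ranking points by the label-dependent pointwise covariance), which is one plausible reading of the two displays above rather than the text's exact algorithm:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
X1 = rng.normal(loc=-1.0, size=(10, 2))   # class omega_1 (y = +1)
X2 = rng.normal(loc=+1.0, size=(10, 2))   # class omega_2 (y = -1)

def K(A, B):
    # Gram matrix of the kernel k(a, b) = (a^T b + 1)^2
    return (A @ B.T + 1.0) ** 2

# Label-dependent pointwise covariances, one per training point
cov_up_1 = K(X1, X1).sum(axis=1) - K(X1, X2).sum(axis=1)
cov_up_2 = K(X2, X2).sum(axis=1) - K(X2, X1).sum(axis=1)

# Candidate extreme points: largest pointwise covariances per class
print(np.argsort(cov_up_1)[::-1][:3])
print(np.argsort(cov_up_2)[::-1][:3])
\end{verbatim}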
Therefore, each extreme reproducing kernel $k_{\mathbf{x}_{1_{i\ast}}}$ exhibits a pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ that exceeds the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) $ of all of the non-extreme reproducing kernels $k_{\mathbf{x}_{1_{i}}}$ in class $\omega_{1}$.

Now take any extreme reproducing kernel $k_{\mathbf{x}_{2_{i\ast}}}$ and any non-extreme reproducing kernel $k_{\mathbf{x}_{2_{i}}}$ that belong to class $\omega_{2}$ and consider the pointwise covariance estimates for $k_{\mathbf{x}_{2_{i\ast}}}$
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) =k\left( \mathbf{x}_{2_{i\ast}},\overline{\mathbf{x}}_{2}\right) -k\left( \mathbf{x}_{2_{i\ast}},\overline{\mathbf{x}}_{1}\right)
\]
and for $k_{\mathbf{x}_{2_{i}}}$
\[
\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) =k\left( \mathbf{x}_{2_{i}},\overline{\mathbf{x}}_{2}\right) -k\left( \mathbf{x}_{2_{i}},\overline{\mathbf{x}}_{1}\right) \text{.}
\]
Because $k_{\mathbf{x}_{2_{i\ast}}}$ is an extreme reproducing kernel, it follows that $k\left( \mathbf{x}_{2_{i\ast}},\overline{\mathbf{x}}_{2}\right) >k\left( \mathbf{x}_{2_{i}},\overline{\mathbf{x}}_{2}\right) $ and that $k\left( \mathbf{x}_{2_{i\ast}},\overline{\mathbf{x}}_{1}\right) <k\left( \mathbf{x}_{2_{i}},\overline{\mathbf{x}}_{1}\right) $. Thus, $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) >\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) $. Therefore, each extreme reproducing kernel $k_{\mathbf{x}_{2_{i\ast}}}$ exhibits a pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ that exceeds the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) $ of all of the non-extreme reproducing kernels $k_{\mathbf{x}_{2_{i}}}$ in class $\omega_{2}$.

Thereby, it is concluded that extreme reproducing kernels possess large pointwise covariances relative to non-extreme reproducing kernels in their respective pattern class. It is also concluded that the pointwise covariance $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ exhibited by any given extreme reproducing kernel $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ may exceed the pointwise covariances of other extreme reproducing kernels in each respective pattern class.

Therefore, it will be assumed that each extreme reproducing kernel $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ exhibits a critical first and second-order statistical moment $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ that exceeds some threshold $\varrho$, for which each corresponding scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ exhibits a critical value that exceeds zero: $\psi_{1i\ast}>0$ or $\psi_{2i\ast}>0$.
Accordingly, first and second-order statistical moments $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) $ about the loci of non-extreme reproducing kernels $k_{\mathbf{x}_{1_{i}}}$ or $k_{\mathbf{x}_{2_{i}}}$ do not exceed the threshold $\varrho$, and their corresponding scale factors $\psi_{1i}$ or $\psi_{2i}$ are effectively zero: $\psi_{1i}=0$ or $\psi_{2i}=0$.

I will now devise a system of equations for a principal eigen-decomposition of the kernel matrix $\mathbf{Q}$ denoted in Eqs (\ref{Autocorrelation Matrix Q}) and (\ref{Inner Product Matrix Q}) that describes tractable point and coordinate relationships between the scaled reproducing kernels on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ and the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$.

\section{Inside the Wolfe Dual Eigenspace II}

Take the kernel matrix $\mathbf{Q}$ associated with the quadratic form in Eq. (\ref{Vector Form Wolfe Dual Q}). Let $\mathbf{q}_{j}$ denote the $j$th column of $\mathbf{Q}$, which is an $N$-vector. Let $\lambda_{\max_{\boldsymbol{\psi}}}$ and $\boldsymbol{\psi}$ denote the largest eigenvalue of $\mathbf{Q}$ and its corresponding eigenvector, respectively. Using this notation \citep[see][]{Trefethen1998}, the principal eigen-decomposition of $\mathbf{Q}$
\[
\mathbf{Q}\boldsymbol{\psi}=\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}
\]
can be rewritten as
\[
\lambda_{\max_{\boldsymbol{\psi}}}\boldsymbol{\psi}=\sum\nolimits_{j=1}^{N}\psi_{j}\mathbf{q}_{j}\text{,}
\]
so that the Wolfe dual principal eigenaxis $\boldsymbol{\psi}$ of $\mathbf{Q}$ is expressed as a linear combination of transformed vectors $\frac{\psi_{j}}{\lambda_{\max_{\boldsymbol{\psi}}}}\mathbf{q}_{j}$
\begin{equation}
\left[
\begin{array}[c]{c}
\\
\boldsymbol{\psi}\\
\\
\end{array}
\right] =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{1}\\
\\
\end{array}
\right] +\frac{\psi_{2}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{2}\\
\\
\end{array}
\right] +\cdots+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left[
\begin{array}[c]{c}
\\
\mathbf{q}_{N}\\
\\
\end{array}
\right] \text{,}\label{Alternate Eigendecomposition Equation Q}
\end{equation}
where the $i$th element of the vector $\mathbf{q}_{j}$ specifies an inner product statistic $K\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $ between the reproducing kernels $k_{\mathbf{x}_{i}}$ and $k_{\mathbf{x}_{j}}$.
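A minimal numerical check of this eigen-decomposition (NumPy assumed; $\mathbf{Q}$ built from random data) confirms that $\boldsymbol{\psi}$ is reproduced by the $\frac{\psi_{j}}{\lambda_{\max_{\boldsymbol{\psi}}}}$-scaled columns of $\mathbf{Q}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(8, 3))
Q = (X @ X.T + 1.0) ** 2            # symmetric kernel matrix

w, V = np.linalg.eigh(Q)            # eigenvalues in ascending order
lam, psi = w[-1], V[:, -1]          # largest eigenvalue and eigenvector
recon = Q @ psi / lam               # sum_j (psi_j / lam) q_j over columns
assert np.allclose(psi, recon)
\end{verbatim}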
Using Eqs (\ref{Autocorrelation Matrix Q}) and (\ref{Alternate Eigendecomposition Equation Q}), a Wolfe dual quadratic eigenlocus $\left( \psi_{1},\cdots,\psi_{N}\right) ^{T}$ can be written as
\begin{align}
\boldsymbol{\psi} & =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
\left( \mathbf{x}_{1}^{T}\mathbf{x}_{1}+1\right) ^{2}\\
\left( \mathbf{x}_{2}^{T}\mathbf{x}_{1}+1\right) ^{2}\\
\vdots\\
-\left( \mathbf{x}_{N}^{T}\mathbf{x}_{1}+1\right) ^{2}
\end{pmatrix}
+\frac{\psi_{2}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
\left( \mathbf{x}_{1}^{T}\mathbf{x}_{2}+1\right) ^{2}\\
\left( \mathbf{x}_{2}^{T}\mathbf{x}_{2}+1\right) ^{2}\\
\vdots\\
-\left( \mathbf{x}_{N}^{T}\mathbf{x}_{2}+1\right) ^{2}
\end{pmatrix}
+\cdots\label{Dual Normal Eigenlocus Components Q}\\
\cdots & +\frac{\psi_{N-1}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
-\left( \mathbf{x}_{1}^{T}\mathbf{x}_{N-1}+1\right) ^{2}\\
-\left( \mathbf{x}_{2}^{T}\mathbf{x}_{N-1}+1\right) ^{2}\\
\vdots\\
\left( \mathbf{x}_{N}^{T}\mathbf{x}_{N-1}+1\right) ^{2}
\end{pmatrix}
+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}
\begin{pmatrix}
-\left( \mathbf{x}_{1}^{T}\mathbf{x}_{N}+1\right) ^{2}\\
-\left( \mathbf{x}_{2}^{T}\mathbf{x}_{N}+1\right) ^{2}\\
\vdots\\
\left( \mathbf{x}_{N}^{T}\mathbf{x}_{N}+1\right) ^{2}
\end{pmatrix}
\nonumber
\end{align}
which illustrates that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ is correlated with joint variations of labeled reproducing kernels about the reproducing kernel $k_{\mathbf{x}_{j}}$.

Alternatively, using Eqs (\ref{Inner Product Matrix Q}) and (\ref{Alternate Eigendecomposition Equation Q}), a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ can be written as
\begin{align}
\boldsymbol{\psi} & =\frac{\psi_{1}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left(
\begin{array}[c]{c}
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1}}k_{\mathbf{x}_{1}}}\\
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{2}}k_{\mathbf{x}_{1}}}\\
\vdots\\
-\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{N}}k_{\mathbf{x}_{1}}}
\end{array}
\right) +\cdots\label{Dual Normal Eigenlocus Component Projections Q}\\
& \cdots+\frac{\psi_{N}}{\lambda_{\max_{\boldsymbol{\psi}}}}\left(
\begin{array}[c]{c}
-\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1}}k_{\mathbf{x}_{N}}}\\
-\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{2}}k_{\mathbf{x}_{N}}}\\
\vdots\\
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}\right\Vert \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{N}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{N}}k_{\mathbf{x}_{N}}}
\end{array}
\right) \nonumber
\end{align}
which illustrates that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ on $\boldsymbol{\psi}$ is correlated with scalar projections $\left\Vert k_{\mathbf{x}_{j}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i}}k_{\mathbf{x}_{j}}}$ of the vector $k_{\mathbf{x}_{j}}$ onto labeled vectors $k_{\mathbf{x}_{i}}$.
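The sign pattern displayed in Eq. (\ref{Dual Normal Eigenlocus Components Q}) appears to correspond to a label-adjusted kernel matrix with entries $y_{i}y_{j}\left( \mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right) ^{2}$; a minimal sketch under that assumption (NumPy; the label ordering is illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])   # assumed labels for two classes

# Entry (i, j) is y_i y_j (x_i^T x_j + 1)^2, reproducing the sign
# pattern of the displayed column vectors q_j
Q_signed = np.outer(y, y) * (X @ X.T + 1.0) ** 2
print(Q_signed[:, 0])                 # first column q_1
\end{verbatim}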
Equations (\ref{Dual Normal Eigenlocus Components Q}) and (\ref{Dual Normal Eigenlocus Component Projections Q}) both indicate that the magnitude $\psi_{j}$ of the $j^{th}$ Wolfe dual principal eigenaxis component $\psi_{j}\overrightarrow{\mathbf{e}}_{j}$ on $\boldsymbol{\psi}$ is correlated with a first and second-order statistical moment about the locus of the reproducing kernel $k_{\mathbf{x}_{j}}$.

\subsection{Assumptions}

It will be assumed that each extreme reproducing kernel $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$ exhibits a critical first and second-order statistical moment $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ that exceeds some threshold $\varrho$, for which each corresponding scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ exhibits a critical value that exceeds zero: $\psi_{1i\ast}>0$ or $\psi_{2i\ast}>0$. It will also be assumed that first and second-order statistical moments $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{1_{i}}}\right) $ or $\widehat{\operatorname{cov}}_{up}\left( k_{\mathbf{x}_{2_{i}}}\right) $ about the loci of non-extreme reproducing kernels $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i}}+1\right) ^{2}$ do not exceed the threshold $\varrho$, and that the corresponding scale factors $\psi_{1i}$ or $\psi_{2i}$ are effectively zero: $\psi_{1i}=0$ or $\psi_{2i}=0$.

Express a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ in terms of $l$ non-orthogonal unit vectors $\left\{ \overrightarrow{\mathbf{e}}_{1\ast},\ldots,\overrightarrow{\mathbf{e}}_{l\ast}\right\} $
\begin{align}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\overrightarrow{\mathbf{e}}_{i\ast}\label{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus Q}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\text{,}\nonumber
\end{align}
where each scaled, non-orthogonal unit vector denoted by $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is correlated with an extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$, respectively. Accordingly, each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ is a scaled, non-orthogonal unit vector that contributes to the estimation of $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$. WLOG, indices do not indicate locations of inner product expressions in Eq. (\ref{Dual Normal Eigenlocus Component Projections Q}).

\subsubsection*{Notation}

Denote the extreme reproducing kernels $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ that belong to class $\omega_{1}$ or $\omega_{2}$ by $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ or $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$, with labels $y_{i}=1$ and $y_{i}=-1$, respectively. Let there be $l_{1}$ extreme reproducing kernels from class $\omega_{1}$ and $l_{2}$ extreme reproducing kernels from class $\omega_{2}$.
Let there be $l_{1}$ principal eigenaxis components $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$, where each scale factor $\psi_{1i\ast}$ is correlated with an extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$. Let there be $l_{2}$ principal eigenaxis components $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$, where each scale factor $\psi_{2i\ast}$ is correlated with an extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$. Let $l_{1}+l_{2}=l$.

Recall that the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) \right) $
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) \right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\]
for a binary classification system involves \emph{opposing forces} that depend on the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and the corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$.

In particular, the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) $ in the $Z_{2}$ decision region are forces associated with positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, and the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right) $ in the $Z_{2}$ decision region are forces associated with positions and potential locations of pattern vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.

Quadratic eigenlocus transforms define the opposing forces of a classification system in terms of forces associated with counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $ and $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ and risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $ related to scaled reproducing kernels of extreme points $\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}$ and $\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}$, which are forces associated with positions and potential locations of reproducing kernels of extreme points $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ in the $Z_{1}$ and $Z_{2}$ decision regions of a decision space $Z$.
In particular, the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $ for class $\omega_{1}$ are determined by magnitudes and directions of scaled reproducing kernels $\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$, and the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ for class $\omega_{2}$ are determined by magnitudes and directions of scaled reproducing kernels $\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{2}$. I will show that a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ is a displacement vector that accounts for the magnitudes and the directions of all of the scaled extreme vectors on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$.

Quadratic eigenlocus transforms determine the opposing forces of a classification system by means of symmetrically balanced, pointwise covariance statistics. Symmetrically balanced, pointwise covariance statistics determine forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region that are balanced with forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ for class $\omega_{2}$ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) \\
& \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) \text{.}
\end{align*}
I will now define symmetrically balanced, pointwise covariance statistics. The geometric nature of the statistics is outlined first.

\subsection{Symmetrically Balanced Covariance Statistics II}

Take two labeled sets of extreme vectors, where each extreme vector is correlated with a scale factor that determines scaled, signed magnitudes, i.e., scaled components of the scaled extreme vector, along the axes of the extreme vectors in each pattern class, such that the integrated scale factors from each pattern class balance each other.
Generally speaking, for any given set of extreme vectors, all of the scaled, signed magnitudes along the axis of any given extreme vector from a given pattern class, which are determined by vector projections of scaled extreme vectors from the \emph{other} pattern class, \emph{are distributed in opposite directions}. Thereby, for two labeled sets of extreme vectors, where each extreme vector is correlated with a scale factor and the integrated scale factors from each pattern class balance each other, it follows that scaled, signed magnitudes along the axis of any given extreme vector, which are determined by vector projections of scaled extreme vectors from the \emph{other} pattern class, \emph{are distributed on the opposite side of the origin}. Accordingly, scaled, signed magnitudes along the axes of all of the extreme vectors are distributed in a symmetrically balanced manner, where each scale factor specifies a symmetrically balanced distribution for an extreme point which ensures that the \emph{components of} an extreme vector are \emph{distributed over} the axes of a given \emph{collection} of extreme vectors in a symmetrically balanced \emph{and} well-proportioned manner.

I will show that symmetrically balanced covariance statistics are the basis of quadratic eigenlocus transforms. For any given set of extreme points, I will demonstrate that quadratic eigenlocus transforms find a set of scale factors in Wolfe dual eigenspace, which are determined by the symmetrically balanced covariance statistics in Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class Two Q}), such that symmetrical decision regions $Z_{1}\simeq Z_{2}$ are determined by symmetrically balanced forces associated with counter risks and risks
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z:Z_{1}\simeq Z_{2}\right)  & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) \\
& \rightleftharpoons\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) \text{.}
\end{align*}
Figure $\ref{Balancing Feat in Wolfe Dual Eigenspace Q}$ illustrates that symmetrically balanced covariance statistics determine quadratic discriminant functions that satisfy a fundamental integral equation of binary classification for a classification system in statistical equilibrium, where the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ and the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of the classification system are minimized.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure41.png}}
\caption{Symmetrically balanced covariance statistics $\protect\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ and $\protect\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ for extreme points $\mathbf{x}_{1_{i\ast}}$ and $\mathbf{x}_{2_{i\ast}}$ are the basis of quadratic eigenlocus transforms. Note: $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $.
\label{Balancing Feat in Wolfe Dual Eigenspace Q}}
\end{figure}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components Q}) and (\ref{Pointwise Covariance Statistic Q}), along with the notation and assumptions outlined above, it follows that summation over the $l$ components of $\boldsymbol{\psi}$ in Eq. (\ref{Dual Normal Eigenlocus Component Projections Q}) provides symmetrically balanced covariance statistics for the $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ extreme vectors, where each extreme point $k_{\mathbf{x}_{1_{i\ast}}}$ exhibits a symmetrically balanced, first and second-order statistical moment $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $
\begin{align}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}\right)  & =\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\label{Eigen-balanced Pointwise Covariance Estimate Class One Q}\\
& -\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\nonumber
\end{align}
relative to $l$ symmetrically balanced, scaled, signed magnitudes determined by vector projections of scaled extreme vectors in each respective pattern class.

Likewise, summation over the $l$ components of $\boldsymbol{\psi}$ in Eq. (\ref{Dual Normal Eigenlocus Component Projections Q}) provides symmetrically balanced covariance statistics for the $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$ extreme vectors, where each extreme point $k_{\mathbf{x}_{2_{i\ast}}}$ exhibits a symmetrically balanced, first and second-order statistical moment $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $
\begin{align}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( \left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right)  & =\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\label{Eigen-balanced Pointwise Covariance Estimate Class Two Q}\\
& -\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\nonumber
\end{align}
relative to $l$ symmetrically balanced, scaled, signed magnitudes determined by vector projections of scaled extreme vectors in each respective pattern class.
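A minimal sketch of Eq. (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}) (NumPy assumed; the extreme points and scale factors are hypothetical, chosen so that the integrated scale factors of the two classes balance):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
X1s = rng.normal(size=(3, 2))        # hypothetical extreme points of omega_1
X2s = rng.normal(size=(2, 2))        # hypothetical extreme points of omega_2
psi1 = np.array([0.4, 0.3, 0.3])     # hypothetical scale factors; the
psi2 = np.array([0.5, 0.5])          # class sums balance (1.0 each)

def K(A, B):
    return (A @ B.T + 1.0) ** 2

i = 0                                # moment about the locus of x_1i*
cov_bal = K(X1s[[i]], X1s)[0] @ psi1 - K(X1s[[i]], X2s)[0] @ psi2
print(cov_bal)
\end{verbatim}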
\subsection{Common Geometrical and Statistical Properties}

I will now use Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class Two Q}) to identify symmetrical, geometric and statistical properties possessed by the principal eigenaxis components on $\boldsymbol{\kappa}$ and $\boldsymbol{\psi}$.

\subsection{Loci of the $\psi_{1i\ast}\protect\overrightarrow{\mathbf{e}}_{1i\ast}$ Components}

Let $i=1:l_{1}$, where each extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$. Using Eqs (\ref{Dual Normal Eigenlocus Component Projections Q}) and (\ref{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus Q}), it follows that the locus of the $i^{th}$ principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}$ is a function of the expression
\begin{align}
\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\label{Dual Eigen-coordinate Locations Component One Q}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\text{,}\nonumber
\end{align}
where $\psi_{1i\ast}$ provides a scale factor for the non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$.
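Since Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}) is the componentwise form of the eigenvalue equation, it can be checked directly; a minimal sketch (NumPy assumed; the signed kernel matrix and label ordering are illustrative, and here all components of $\boldsymbol{\psi}$ play the role of scale factors):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])           # assumed labels
Q = np.outer(y, y) * (X @ X.T + 1.0) ** 2     # signed kernel matrix

w, V = np.linalg.eigh(Q)
lam, psi = w[-1], V[:, -1]                    # principal eigenpair
i = 0
# psi_i equals lambda^{-1} times the psi-weighted, signed row sum
assert np.isclose(psi[i], (Q[i, :] @ psi) / lam)
\end{verbatim}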
Geometric and statistical explanations for the eigenlocus statistics
\begin{equation}
\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{ and }\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\label{Projection Statistics psi1 Q}
\end{equation}
in Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}) are considered next.

\subsubsection{Geometric Nature of Eigenlocus Statistics}

The first geometric interpretation of the eigenlocus statistics in Eq. (\ref{Projection Statistics psi1 Q}) defines $\psi_{1_{j\ast}}$ and $\psi_{2_{j\ast}}$ to be scale factors for the signed magnitudes of the vector projections
\[
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{ and }\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}
\]
of the scaled extreme vectors $\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}$ and $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$, where $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}$ and $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$ specify the respective angles between the axes of the scaled extreme vectors $\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}$ and $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ and the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$. Note that the signed magnitude $-\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$ is distributed in the opposite direction, so that the locus of the signed magnitude is on the opposite side of the origin, along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.

Figure $\ref{Wolfe Dual Quadratic Eigenlocus Statistics}$ illustrates the geometric and statistical nature of the eigenlocus statistics in Eq. (\ref{Projection Statistics psi1 Q}), where any given scaled, signed magnitude $\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}$ or $\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$ may be positive or negative (see Figs $\ref{Wolfe Dual Quadratic Eigenlocus Statistics}$a and $\ref{Wolfe Dual Quadratic Eigenlocus Statistics}$b).
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure42.png}}
\caption{Examples of positive and negative, eigen-scaled, signed magnitudes of vector projections of eigen-scaled extreme vectors $\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}$ and $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$, along the axis of an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ which is correlated with a Wolfe dual principal eigenaxis component $\psi_{1i\ast}\protect\overrightarrow{\mathbf{e}}_{1i\ast}$.\label{Wolfe Dual Quadratic Eigenlocus Statistics}}
\end{figure}

\subsubsection{An Alternative Geometric Interpretation}

An alternative geometric explanation for the eigenlocus statistics in Eq. (\ref{Projection Statistics psi1 Q}) accounts for the representation of the $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ primal principal eigenlocus components within the Wolfe dual eigenspace. Consider the relationships
\[
\psi_{1_{j\ast}}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \psi_{1_{j\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \boldsymbol{\kappa}_{1}(j)\right\Vert
\]
and
\[
\psi_{2_{j\ast}}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \psi_{2_{j\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \boldsymbol{\kappa}_{2}(j)\right\Vert \text{,}
\]
where $\boldsymbol{\kappa}_{1}(j)$ and $\boldsymbol{\kappa}_{2}(j)$ are the $j$th constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$. Given the above relationships, it follows that the $\psi_{1_{j\ast}}$-scaled, signed magnitude $\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}$ of the vector projection of the scaled extreme vector $\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}$ along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$
\[
\psi_{1_{j\ast}}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}
\]
determines the $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}$-scaled length of the $j$th constrained, primal principal eigenaxis component $\boldsymbol{\kappa}_{1}(j)$ on $\boldsymbol{\kappa}_{1}$
\[
\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{1}(j)\right\Vert \text{,}
\]
where $\psi_{1_{j\ast}}$ is the length of the Wolfe dual principal eigenaxis component $\psi_{1j\ast}\overrightarrow{\mathbf{e}}_{1j\ast}$ and $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}$ specifies the angle between the extreme vectors $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{1_{j\ast}}}$.
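Both identities above are elementary consequences of the homogeneity of the norm; a minimal sketch (NumPy assumed; the point and scale factor are hypothetical):
\begin{verbatim}
import numpy as np

x_1j = np.array([0.5, -1.0, 2.0])    # hypothetical extreme point
psi_1j = 0.37                        # hypothetical scale factor

def phi(v):
    return np.concatenate([np.outer(v, v).ravel(),
                           np.sqrt(2.0) * v, [1.0]])

k = phi(x_1j)
# psi_j ||k_xj|| = ||psi_j k_xj||, the length of kappa_1(j)
assert np.isclose(psi_1j * np.linalg.norm(k),
                  np.linalg.norm(psi_1j * k))
# and ||k_xj|| = ||x_j||^2 + 1, the norm identity used below
assert np.isclose(np.linalg.norm(k), x_1j @ x_1j + 1.0)
\end{verbatim}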
Likewise, the $\psi_{2_{j\ast}}$-scaled, signed magnitude $\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$ of the vector projection of the scaled extreme vector $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$
\[
\psi_{2_{j\ast}}\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}
\]
determines the $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$-scaled length of the $j$th constrained, primal principal eigenaxis component $\boldsymbol{\kappa}_{2}(j)$ on $\boldsymbol{\kappa}_{2}$
\[
\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{2}(j)\right\Vert \text{,}
\]
where $\psi_{2_{j\ast}}$ is the length of the Wolfe dual principal eigenaxis component $\psi_{2j\ast}\overrightarrow{\mathbf{e}}_{2j\ast}$ and $\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}$ specifies the angle between the extreme vectors $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{j\ast}}}$.

Therefore, the locus of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is a function of the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$
\begin{align}
\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{1}(j)\right\Vert \label{Constrained Primal Eigenlocus psi1 Q}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{2}(j)\right\Vert \text{,}\nonumber
\end{align}
where the angle between each principal eigenaxis component $\boldsymbol{\kappa}_{1}(j)$ and $\boldsymbol{\kappa}_{2}(j)$ and the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ is fixed.

I will now define the significant geometric and statistical properties which are jointly exhibited by the Wolfe dual $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and constrained, primal $\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}$ principal eigenaxis components that regulate the symmetric partitioning of a feature space $Z$.

\subsection{Significant Geometric and Statistical Properties}

Using the definition of Eq. (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}), Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}) indicates that the locus of the principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by a symmetrically balanced, signed magnitude along the axis of an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$, relative to symmetrically balanced, scaled, signed magnitudes of extreme vector projections in each respective pattern class.
\subsubsection{Symmetrically Balanced Signed Magnitudes}

Let $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) $ denote the symmetrically balanced, signed magnitude
\begin{align}
\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right)  & =\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\label{Unidirectional Scaling Term One1 Q}\\
& \times\left[ \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\right] \nonumber\\
& -\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\nonumber\\
& \times\left[ \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\right] \nonumber
\end{align}
along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ that is correlated with the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$, where
\[
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert ^{2}+1
\]
and
\[
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert ^{2}+1\text{.}
\]

\subsubsection{Symmetrically Balanced Distributions}

Using the definitions of Eqs (\ref{Pointwise Covariance Statistic Q}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}), it follows that Eq. (\ref{Unidirectional Scaling Term One1 Q}) determines a symmetrically balanced distribution of scaled, first and second degree coordinates of extreme vectors along the axis of $k_{\mathbf{x}_{1_{i\ast}}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies how an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$ is distributed along the axis of $k_{\mathbf{x}_{1_{i\ast}}}$, and each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of scaled, first and second degree coordinates of extreme vectors $\left\{ \psi_{j\ast}k_{\mathbf{x}_{j\ast}}\right\} _{j=1}^{l}$ along the axis of an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$.

Therefore, each scaled, signed magnitude
\[
\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{ \ or \ }\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}
\]
provides an estimate for how the components of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ are symmetrically distributed over the axis of a scaled extreme vector $\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}$ or $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$.
Again, using Eqs (\ref{Pointwise Covariance Statistic Q}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}), it follows that Eq. (\ref{Unidirectional Scaling Term One1 Q}) determines a symmetrically balanced, first and second-order statistical moment about the locus of $k_{\mathbf{x}_{1_{i\ast}}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies how the components of an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$ are distributed along the axis of $k_{\mathbf{x}_{1_{i\ast}}}$, and each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution for an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$.

\subsubsection{Distributions of Eigenaxis Components}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components Q}), (\ref{Dual Eigen-coordinate Locations Component One Q}), and (\ref{Unidirectional Scaling Term One1 Q}), it follows that symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are distributed over the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$. Again, using Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}), it follows that identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are distributed over the axis of the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$. Thereby, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are identically and symmetrically distributed over the respective axes of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and each correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.

Alternatively, using Eq. (\ref{Constrained Primal Eigenlocus psi1 Q}), the symmetrically balanced, signed magnitude in Eq. (\ref{Unidirectional Scaling Term One1 Q}) depends upon the difference between integrated, cosine-scaled lengths of the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$
\begin{align}
\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right)  & =\sum\nolimits_{j=1}^{l_{1}}\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{1}(j)\right\Vert \label{Unidirectional Scaling Term One2 Q}\\
& -\sum\nolimits_{j=1}^{l_{2}}\cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\left\Vert \boldsymbol{\kappa}_{2}(j)\right\Vert \text{,}\nonumber
\end{align}
which also shows that symmetrically balanced, joint distributions of the eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are identically distributed along the respective axes of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and each correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.
Using Eqs (\ref{Dual Eigen-coordinate Locations Component One Q}) and (\ref{Unidirectional Scaling Term One1 Q}), it follows that the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by the weighted length of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$
\begin{equation}
\psi_{1i\ast}=\left[ \lambda_{\max_{\boldsymbol{\psi}}}^{-1}\times\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) \right] \left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \text{,}\label{Magnitude Dual Normal Eigenaxis Component Class One Q}
\end{equation}
where the weighting factor specifies an eigenvalue $\lambda_{\max_{\boldsymbol{\psi}}}^{-1}$ scaling of a symmetrically balanced, signed magnitude $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) $ along the axis of $k_{\mathbf{x}_{1_{i\ast}}}$.

\subsubsection{Symmetrically Balanced Lengths}

Given that $\psi_{1i\ast}>0$, $\lambda_{\max_{\boldsymbol{\psi}}}^{-1}>0$, and $\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert >0$, it follows that the symmetrically balanced, signed magnitude along the axis of each extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ is a positive number
\[
\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) >0\text{,}
\]
which indicates that the weighting factor in Eq. (\ref{Magnitude Dual Normal Eigenaxis Component Class One Q}) determines a well-proportioned length
\[
\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) \left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert
\]
for an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$. Thereby, the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by a well-proportioned length of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.

Returning to Eqs (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}) and (\ref{Dual Eigen-coordinate Locations Component One Q}), it follows that the length $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) \left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert
\]
is shaped by a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.

Now, take any given correlated pair $\left\{ \psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast},\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}\right\} $ of Wolfe dual and constrained, primal principal eigenaxis components.
I will now show that the direction of $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is identical to the direction of $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$.

\subsubsection{Directional Symmetries}

The vector direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is implicitly specified by Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}), where it has been assumed that $\psi_{1i\ast}$ provides a scale factor for a non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$. Using the definitions of Eqs (\ref{Pointwise Covariance Statistic Q}) and (\ref{Eigen-balanced Pointwise Covariance Estimate Class One Q}), it follows that the symmetrically balanced, pointwise covariance statistic in Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}) specifies the direction of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ and a well-proportioned magnitude along the axis of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$.

Returning to Eqs (\ref{Unidirectional Scaling Term One1 Q}), (\ref{Unidirectional Scaling Term One2 Q}), and (\ref{Magnitude Dual Normal Eigenaxis Component Class One Q}), take any given Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ that is correlated with an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$. Given that the magnitude $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ is determined by the well-proportioned magnitude of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) \left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \text{,}
\]
it follows that each non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{1i\ast}$ has the same direction as an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$
\[
\overrightarrow{\mathbf{e}}_{1i\ast}\equiv\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\text{.}
\]
Thereby, the direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}$ is identical to the direction of a correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$, which is determined by the direction of the extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$. Each Wolfe dual and correlated, constrained primal principal eigenaxis component are said to exhibit directional symmetry. Therefore, it is concluded that correlated principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\kappa}_{1}$ exhibit directional symmetry.
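Directional symmetry admits a one-line numerical check (NumPy assumed; the point and positive scale factor are hypothetical): scaling by $\psi_{1i\ast}>0$ leaves the unit vector unchanged.
\begin{verbatim}
import numpy as np

x_1i = np.array([1.0, -0.5])
psi_1i = 0.8                         # hypothetical positive scale factor

def phi(v):
    return np.concatenate([np.outer(v, v).ravel(),
                           np.sqrt(2.0) * v, [1.0]])

k = phi(x_1i)
e = k / np.linalg.norm(k)            # unit vector of k_x1i*
scaled = psi_1i * k                  # primal component psi_1i* k_x1i*
assert np.allclose(scaled / np.linalg.norm(scaled), e)
\end{verbatim}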
It is also concluded that each of the correlated principal eigenaxis components on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\kappa}_{1}$ possesses a well-proportioned magnitude, for which the constrained, quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ delineates symmetric regions of large covariance between any two data distributions.

\subsection{Loci of the $\psi_{2i\ast}\protect\overrightarrow{\mathbf{e}}_{2i\ast}$ Components}

Let $i=1:l_{2}$, where each extreme vector $\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$. Using Eqs (\ref{Dual Normal Eigenlocus Component Projections Q}) and (\ref{Non-orthogonal Eigenaxes of Dual Normal Eigenlocus Q}), it follows that the locus of the $i^{th}$ Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}$ is a function of the expression
\begin{align}
\psi_{2i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\label{Dual Eigen-coordinate Locations Component Two Q}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{,}\nonumber
\end{align}
where $\psi_{2i\ast}$ provides a scale factor for the non-orthogonal unit vector $\overrightarrow{\mathbf{e}}_{2i\ast}$.

Results obtained from the previous analysis are readily generalized to the Wolfe dual $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ and the constrained, primal $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ principal eigenaxis components, so the analysis will not be replicated. However, the counterpart to Eq. (\ref{Unidirectional Scaling Term One1 Q}) is necessary for a future argument. Let $i=1:l_{2}$, where each extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$ is correlated with a Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$.
Accordingly, let $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{2_{i\ast}}}\right) $ denote the symmetrically balanced, signed magnitude
\begin{align}
\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{2_{i\ast}}}\right) & =\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\label{Unidirectional Scaling Term Two1 Q}\\
& \times\left[ \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\right] \nonumber\\
& -\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\nonumber\\
& \times\left[ \left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\right] \nonumber
\end{align}
along the axis of the extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$ that is correlated with the Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$, where
\[
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{1_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \mathbf{x}_{1_{j\ast}}\right\Vert ^{2}+1
\]
and
\[
\left\Vert \left( \mathbf{x}^{T}\mathbf{x}_{2_{j\ast}}+1\right) ^{2}\right\Vert =\left\Vert \mathbf{x}_{2_{j\ast}}\right\Vert ^{2}+1\text{.}
\]
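The norm identities above can be checked directly with the explicit feature map of the quadratic kernel; the test point below is a hypothetical placeholder.
\begin{verbatim}
import numpy as np

def phi(x):  # explicit feature map of the quadratic kernel in 2-D
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

x = np.array([1.5, -0.4])
assert np.isclose(np.linalg.norm(phi(x)), x @ x + 1.0)   # ||k_x|| = ||x||^2 + 1
assert np.isclose(phi(x) @ phi(x), (x @ x + 1.0) ** 2)   # k(x, x) = (x^T x + 1)^2
\end{verbatim}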
\subsection{Similar Properties Exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$}

I\ will now identify similar geometric and statistical properties which are jointly exhibited by the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and the correlated, constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}$. The properties are summarized below.

\paragraph{Directional Symmetry}

\begin{enumerate}
\item The direction of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ is identical to the direction of a correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$.

\item The direction of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ is identical to the direction of a correlated, constrained primal principal eigenaxis component $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{2}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Lengths}

\begin{enumerate}
\item The lengths of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ and each correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$ are shaped by identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.

\item The lengths of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ and each correlated, constrained primal principal eigenaxis component $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{2}$ are shaped by identical, symmetrically balanced, joint distributions of the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Pointwise Covariance Statistics}

\begin{enumerate}
\item The magnitude $\psi_{1i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$
\[
\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{1i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{1_{i\ast}}}\right) \left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert
\]
is determined by a symmetrically balanced, pointwise covariance estimate
\begin{align*}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}
\end{align*}
for a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$, such that the locus of each constrained, primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$ provides a maximum covariance estimate in a principal location $k_{\mathbf{x}_{1_{i\ast}}}$, in the form of a symmetrically balanced, first and second-order statistical moment about the locus of an extreme point $k_{\mathbf{x}_{1_{i\ast}}}$.
\item The magnitude $\psi_{2i\ast}$ of each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$
\[
\psi_{2i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i\ast}}}}}\left( \overrightarrow{\widetilde{\psi}_{2i\ast}\left\Vert k_{\widetilde{\mathbf{x}}_{\ast}}\right\Vert _{2_{i\ast}}}\right) \left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert
\]
is determined by a symmetrically balanced, pointwise covariance estimate
\begin{align*}
\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}
\end{align*}
for a correlated extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$, such that the locus of each constrained, primal principal eigenaxis component $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{2}$ provides a maximum covariance estimate in a principal location $k_{\mathbf{x}_{2_{i\ast}}}$, in the form of a symmetrically balanced, first and second-order statistical moment about the locus of an extreme point $k_{\mathbf{x}_{2_{i\ast}}}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Statistical Moments}

\begin{enumerate}
\item Each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ on $\boldsymbol{\psi}_{1}$ specifies a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme point $k_{\mathbf{x}_{1_{i\ast}}}$, relative to the loci of all of the scaled extreme points, which determines the locus of a constrained, primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$.

\item Each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ on $\boldsymbol{\psi}_{2}$ specifies a symmetrically balanced, first and second-order statistical moment about the locus of a correlated extreme point $k_{\mathbf{x}_{2_{i\ast}}}$, relative to the loci of all of the scaled extreme points, which determines the locus of a constrained, primal principal eigenaxis component $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{2}$.
\end{enumerate}

\paragraph{Symmetrically Balanced Distributions of Extreme Points}

\begin{enumerate}
\item Any given maximum covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ describes how the components of $l$ scaled extreme vectors $\left\{ \psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}\right\} _{j=1}^{l_{1}}$ and $\left\{ \psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right\} _{j=1}^{l_{2}}$ are distributed along the axis of an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of $l$ scaled extreme vectors along the axis of an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$, such that a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ provides an estimate for how the components of an extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ are symmetrically distributed over the axes of the $l$ scaled extreme vectors. Thus, $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i\ast}}}\right) $ describes a distribution of first and second degree coordinates for $k_{\mathbf{x}_{1_{i\ast}}}$.

\item Any given maximum covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ describes how the components of $l$ scaled extreme vectors $\left\{ \psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}\right\} _{j=1}^{l_{1}}$ and $\left\{ \psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right\} _{j=1}^{l_{2}}$ are distributed along the axis of an extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$, where each scale factor $\psi_{1_{j\ast}}$ or $\psi_{2_{j\ast}}$ specifies a symmetrically balanced distribution of $l$ scaled extreme vectors along the axis of an extreme vector $k_{\mathbf{x}_{1_{j\ast}}}$ or $k_{\mathbf{x}_{2_{j\ast}}}$, such that a pointwise covariance estimate $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ provides an estimate for how the components of an extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$ are symmetrically distributed over the axes of the $l$ scaled extreme vectors. Thus, $\widehat{\operatorname{cov}}_{up_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) $ describes a distribution of first and second degree coordinates for $k_{\mathbf{x}_{2_{i\ast}}}$.
\end{enumerate}

I\ will now define the equivalence between the total allowed eigenenergies exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.
\subsection{Equivalence Between Eigenenergies of $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$}

The inner product between the integrated Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$
\begin{align*}
\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2} & =\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\right) \\
& \times\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\right)
\end{align*}
determines the total allowed eigenenergy $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\psi}$, which is symmetrically equivalent with the critical minimum eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$ within its Wolfe dual eigenspace
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & =\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right) \\
& \times\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\right) \text{.}
\end{align*}
I will now argue that the equivalence $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}\simeq\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ between the total allowed eigenenergies exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ involves symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.

\paragraph{Symmetrical Equivalence of Eigenenergy Distributions}

Using Eqs (\ref{Equilibrium Constraint on Dual Eigen-components Q}), (\ref{Dual Eigen-coordinate Locations Component One Q}), (\ref{Unidirectional Scaling Term One1 Q}), and (\ref{Unidirectional Scaling Term Two1 Q}), it follows that identical, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are symmetrically distributed over the respective axes of each Wolfe dual principal eigenaxis component on $\boldsymbol{\psi}$ and each correlated and unconstrained primal principal eigenaxis component (extreme vector) on $\boldsymbol{\kappa}$. Therefore, constrained primal and Wolfe dual principal eigenaxis components that are correlated with each other are formed by equivalent, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.
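The following sketch shows how the two quadratic forms are assembled from a kernel Gram matrix: $\left\Vert \boldsymbol{\psi}\right\Vert ^{2}$ from the cosine (normalized) Gram matrix and $\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}$ from the signed Gram matrix. The scale factors here are arbitrary placeholders, so the two printed energies need not agree; the symmetric equivalence asserted above is a property of the constrained eigenlocus solution, not of arbitrary coefficients.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])
psi = rng.uniform(0.1, 1.0, size=6)      # hypothetical scale factors

K = (X @ X.T + 1.0) ** 2
norms = np.sqrt(np.diag(K))
cosines = K / np.outer(norms, norms)

# ||psi||^2: Gram form of the sum of scaled, *normalized* extreme vectors
psi_energy = psi @ cosines @ psi
# ||kappa||^2: Gram form of the signed sum of scaled extreme vectors
kappa_energy = (y * psi) @ K @ (y * psi)
print(psi_energy, kappa_energy)
\end{verbatim}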
Thereby, symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are symmetrically distributed over the axes of all of the Wolfe dual principal eigenaxis components $\left\{ \psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}\right\} _{i=1}^{l_{2}}$ on $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ and all of the constrained, primal principal eigenaxis components $\left\{ \psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}\right\} _{i=1}^{l_{1}}$ and $\left\{ \psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}\right\} _{i=1}^{l_{2}}$ on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$, where $\overrightarrow{\mathbf{e}}_{1i\ast}=\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ and $\overrightarrow{\mathbf{e}}_{2i\ast}=\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$.

Therefore, the distribution of eigenenergies with respect to the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ is symmetrically equivalent to the distribution of eigenenergies with respect to the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}$, such that the total allowed eigenenergies $\left\Vert \boldsymbol{\psi}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ satisfy symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$. Thus, all of the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ possess eigenenergies that satisfy symmetrically balanced, joint eigenenergy distributions with respect to the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$.

Later on, I\ will show that the critical minimum eigenenergies exhibited by the scaled extreme vectors determine conditional probabilities of classification error for extreme points (which are reproducing kernels), where any given extreme point has a risk or a counter risk that is determined by a measure of central location and a measure of spread, both of which are described by a conditional probability density. In the next section, I\ will show that each Wolfe dual principal eigenaxis component specifies a conditional probability density for an extreme point $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$. Recall that it is reasonable to assume that information about an unknown probability density function $p\left( \mathbf{x}\right) $ is distributed over the components of a parameter vector $\widehat{\mathbf{\theta}}$ \citep{Duda2001}. It has been demonstrated that symmetrically balanced, joint distributions of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ are symmetrically distributed over the axes of all of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ and all of the constrained, primal principal eigenaxis components on $\boldsymbol{\kappa}$.
In the next analysis, I\ will show that information for two unknown conditional density functions $p\left( k_{\mathbf{x}_{1_{i\ast}}}|\widehat{\mathbf{\theta}}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\widehat{\mathbf{\theta}}_{2}\right) $ is distributed over the scaled reproducing kernels of extreme points on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$, where $\boldsymbol{\kappa}$ is an unknown parameter vector $\widehat{\mathbf{\theta}}=\widehat{\mathbf{\theta}}_{1}-\widehat{\mathbf{\theta}}_{2}$ that contains information about the unknown conditional densities.

I will now define pointwise conditional densities which are determined by the components of a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$, where each conditional density $p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ or $p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ extreme point is given by the component $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) $ or $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) $ of $\boldsymbol{\kappa}$ along the corresponding extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$.

\subsection{Pointwise Conditional Densities}

Consider again the equations for the loci of the $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ and $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ Wolfe dual principal eigenaxis components in Eqs (\ref{Dual Eigen-coordinate Locations Component One Q}) and (\ref{Dual Eigen-coordinate Locations Component Two Q}). It has been demonstrated that any given Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ correlated with a reproducing kernel $k_{\mathbf{x}_{1_{i\ast}}}$ of an $\mathbf{x}_{1_{i\ast}}$ extreme point, and any given Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ correlated with a reproducing kernel $k_{\mathbf{x}_{2_{i\ast}}}$ of an $\mathbf{x}_{2_{i\ast}}$ extreme point, provides an estimate for how the components of $l$ scaled extreme vectors $\left\{ \psi_{j\ast}k_{\mathbf{x}_{j\ast}}\right\} _{j=1}^{l}$ are symmetrically distributed along the axis of a correlated extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$. The components of the scaled extreme vectors $\psi_{j\ast}k_{\mathbf{x}_{j\ast}}$ are symmetrically distributed according to the class labels $\pm1$, the signed magnitudes $\left\Vert k_{\mathbf{x}_{j\ast}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{j\ast}}}$ or $\left\Vert k_{\mathbf{x}_{j\ast}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{j\ast}}}$, and the symmetrically balanced distributions of scaled extreme vectors $\left\{ \psi_{j\ast}k_{\mathbf{x}_{j\ast}}\right\} _{j=1}^{l}$ specified by the scale factors $\psi_{j\ast}$.
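Under the kernel trick, each component $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) =k_{\mathbf{x}_{i\ast}}^{T}\boldsymbol{\kappa}/\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert $ reduces to a weighted row sum of the kernel Gram matrix. A sketch, with hypothetical scale factors standing in for the eigenlocus solution, is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])
psi = rng.uniform(0.1, 1.0, size=6)      # hypothetical scale factors on kappa

K = (X @ X.T + 1.0) ** 2                 # kernel Gram matrix
norms = np.sqrt(np.diag(K))

# comp_{k_xi}(kappa) = k_xi^T kappa / ||k_xi||, via the kernel trick:
# k_xi^T kappa = sum_j y_j psi_j K_ij
comp_kappa = (K @ (y * psi)) / norms
print(comp_kappa)                        # one scalar projection per extreme point
\end{verbatim}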
Thereby, symmetrically balanced distributions of first and second degree coordinates of all of the extreme points are symmetrically distributed along the axes of all of the extreme vectors, where all of the scale factors satisfy the equivalence relation $\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}=\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}$. Accordingly, principal eigenaxis components $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ or $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ describe distributions of first and second degree coordinates for extreme points $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$, where any given extreme point is the endpoint of a directed line segment estimate.

Therefore, for any given extreme vector $k_{\mathbf{x}_{1_{i\ast}}}$, the relative likelihood that the extreme point $k_{\mathbf{x}_{1_{i\ast}}}$ has a given location is specified by the locus of the Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$
\begin{align*}
\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\text{,}
\end{align*}
where $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ describes a conditional expectation (a measure of central location) and a conditional covariance (a measure of spread) for the extreme point $k_{\mathbf{x}_{1_{i\ast}}}$. Thereby, it is concluded that the principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ specifies a conditional density $p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for the extreme point $k_{\mathbf{x}_{1_{i\ast}}}$, where the scale factor $\psi_{1i\ast}$ is a \emph{unit} measure or estimate of density and likelihood for the extreme point $k_{\mathbf{x}_{1_{i\ast}}}$.

Likewise, for any given extreme vector $k_{\mathbf{x}_{2_{i\ast}}}$, the relative likelihood that the extreme point $k_{\mathbf{x}_{2_{i\ast}}}$ has a given location is specified by the locus of the Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$
\begin{align*}
\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{,}
\end{align*}
where $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ describes a conditional expectation (a measure of central location) and a conditional covariance (a measure of spread) for the extreme point $k_{\mathbf{x}_{2_{i\ast}}}$.
Thereby, it is concluded that the principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ specifies a conditional density $p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for the extreme point $k_{\mathbf{x}_{2_{i\ast}}}$, where the scale factor $\psi_{2i\ast}$ is a \emph{unit} measure or estimate of density and likelihood for the extreme point $k_{\mathbf{x}_{2_{i\ast}}}$.

It has been shown that a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ is formed by a locus of scaled, normalized extreme vectors
\begin{align*}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\text{,}
\end{align*}
where $\boldsymbol{\psi}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ and $\boldsymbol{\psi}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$, and each $\psi_{1i\ast}$ or $\psi_{2i\ast}$ scale factor provides a unit measure or estimate of density and likelihood for a $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ extreme point.

Given that each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ on $\boldsymbol{\psi}_{1}$ specifies a conditional density $p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a correlated extreme point $k_{\mathbf{x}_{1_{i\ast}}}$, it follows that conditional densities for the $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\psi}_{1}$
\begin{align}
\boldsymbol{\psi}_{1} & =\sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\label{Wolfe Dual Conditional Density Extreme Points 1 Q}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\text{,}\nonumber
\end{align}
where $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ specifies a conditional density for $k_{\mathbf{x}_{1i\ast}}$, such that $\boldsymbol{\psi}_{1}$ is a parameter vector for a class-conditional probability density $p\left( \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ for a given set $\left\{ k_{\mathbf{x}_{1_{i\ast}}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points
\[
\boldsymbol{\psi}_{1}=p\left( \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \text{.}
\]
Given that each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$ on $\boldsymbol{\psi}_{2}$ specifies a conditional density $p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a correlated extreme point $k_{\mathbf{x}_{2_{i\ast}}}$, it follows that conditional densities for the $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\psi}_{2}$
\begin{align}
\boldsymbol{\psi}_{2} & =\sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\label{Wolfe Dual Conditional Density Extreme Points 2 Q}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\text{,}\nonumber
\end{align}
where $\psi_{2_{i\ast}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$ specifies a conditional density for $k_{\mathbf{x}_{2_{i\ast}}}$, such that $\boldsymbol{\psi}_{2}$ is a parameter vector for a class-conditional probability density $p\left( \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ for a given set $\left\{ k_{\mathbf{x}_{2_{i\ast}}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points
\[
\boldsymbol{\psi}_{2}=p\left( \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{.}
\]
Therefore, it is concluded that $\boldsymbol{\psi}_{1}$ is a parameter vector for the class-conditional probability density function $p\left( \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $\boldsymbol{\psi}_{2}$ is a parameter vector for the class-conditional probability density function $p\left( \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $.

Returning to Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q}), it follows that the pointwise conditional densities $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ and $\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$ for all of the extreme points in class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }
\]
in the Wolfe dual eigenspace.
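The symmetric balance of the two mixtures can be probed numerically: the squared norm of $\boldsymbol{\psi}_{1}-\boldsymbol{\psi}_{2}$, computed from the cosine Gram matrix, vanishes exactly when the equivalence above holds. With arbitrary (hypothetical) scale factors the residual is nonzero, which is the point: the balance is a property of the equilibrium solution, not of generic coefficients.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])
psi = rng.uniform(0.1, 1.0, size=6)      # hypothetical scale factors

K = (X @ X.T + 1.0) ** 2
norms = np.sqrt(np.diag(K))
cosines = K / np.outer(norms, norms)

# squared norm of sum_1 psi_i e_1i - sum_2 psi_i e_2i in the dual eigenspace
s = y * psi
print(s @ cosines @ s)                   # ~0 only at the equilibrium solution
\end{verbatim}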
Therefore, the class-conditional probability density functions $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ in the Wolfe dual eigenspace for class $\omega_{1}$ and class $\omega_{2}$ are \emph{symmetrically balanced with each other}
\[
p\left( \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{1}\right) =p\left( \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{.}
\]
I\ will now devise expressions for the class-conditional probability density functions in the decision space $Z$ for class $\omega_{1}$ and class $\omega_{2}$.

\subsection{Class-conditional Probability Densities}

I\ will now show that a quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is a parameter vector for class-conditional probability density functions $p\left( k_{\mathbf{x}_{1_{i\ast}}}|\omega_{1}\right) $ and $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\omega_{2}\right) $.

\subsubsection{Class-Conditional Density for Class $\omega_{1}$}

Given that each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ specifies a conditional density $p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a correlated extreme point $k_{\mathbf{x}_{1i\ast}}$, it follows that conditional densities for the $k_{\mathbf{x}_{1i\ast}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\kappa}_{1}$
\begin{align}
\boldsymbol{\kappa}_{1} & =\sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\label{Conditional Density Extreme Points 1 Q}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}\text{,}\nonumber
\end{align}
where $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ specifies a conditional density for $k_{\mathbf{x}_{1_{i\ast}}}$, such that $\boldsymbol{\kappa}_{1}$ is a parameter vector for a class-conditional probability density $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ for a given set $\left\{ k_{\mathbf{x}_{1_{i\ast}}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points
\[
\boldsymbol{\kappa}_{1}=p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) \text{.}
\]

\subsubsection{Class-Conditional Density for Class $\omega_{2}$}

Given that each Wolfe dual principal eigenaxis component $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ specifies a conditional density $p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) $ for a correlated extreme point $k_{\mathbf{x}_{2i\ast}}$, it follows that conditional densities for the $k_{\mathbf{x}_{2i\ast}}$ extreme points are distributed over the principal eigenaxis components of $\boldsymbol{\kappa}_{2}$
\begin{align}
\boldsymbol{\kappa}_{2} & =\sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}\label{Conditional Density Extreme Points 2 Q}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}\text{,}\nonumber
\end{align}
where $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ specifies a conditional density for $k_{\mathbf{x}_{2i\ast}}$, such that $\boldsymbol{\kappa}_{2}$ is a parameter vector for a class-conditional probability density $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ for a given set $\left\{ k_{\mathbf{x}_{2i\ast}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ extreme points
\[
\boldsymbol{\kappa}_{2}=p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) \text{.}
\]
Therefore, it is concluded that $\boldsymbol{\kappa}_{1}$ is a parameter vector for the class-conditional probability density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $\boldsymbol{\kappa}_{2}$ is a parameter vector for the class-conditional probability density function $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $. I will now devise integrals for the conditional probability functions for class $\omega_{1}$ and class $\omega_{2}$.

\subsection{Conditional Probability Functions}

I\ will now show that the conditional probability function $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$ is given by the area under the class-conditional probability density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ over the decision space $Z$.

\subsubsection{Conditional Probability Function for Class $\omega_{1}$}

A quadratic eigenlocus $\boldsymbol{\kappa}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ is the basis of a quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that partitions any given feature space into symmetrical decision regions $Z_{1}\simeq Z_{2}$, whereby, for any two overlapping data distributions, a $k_{\mathbf{x}_{1i\ast}}$ or $k_{\mathbf{x}_{2i\ast}}$ extreme point lies in either region $Z_{1}$ or region $Z_{2}$, and, for any two non-overlapping data distributions, $k_{\mathbf{x}_{1i\ast}}$ extreme points lie in region $Z_{1}$ and $k_{\mathbf{x}_{2i\ast}}$ extreme points lie in region $Z_{2}$.
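A sketch of the resulting decision rule follows, with hypothetical extreme points, scale factors, and offset $\kappa_{0}$ standing in for the quantities that a quadratic eigenlocus transform would actually produce.
\begin{verbatim}
import numpy as np

def discriminant(s, X1, psi1, X2, psi2, kappa0):
    """Sketch: sum_i psi1_i (x_1i^T s + 1)^2
    - sum_i psi2_i (x_2i^T s + 1)^2 + kappa_0 (parameters assumed given)."""
    return psi1 @ (X1 @ s + 1.0) ** 2 - psi2 @ (X2 @ s + 1.0) ** 2 + kappa0

rng = np.random.default_rng(4)
X1 = rng.normal(0.0, 1.0, size=(3, 2))   # hypothetical class-one extreme points
X2 = rng.normal(2.0, 1.0, size=(3, 2))   # hypothetical class-two extreme points
psi1 = np.array([0.4, 0.3, 0.3])
psi2 = np.array([0.5, 0.2, 0.3])

s = np.array([0.5, 0.2])
omega = 1 if discriminant(s, X1, psi1, X2, psi2, kappa0=-0.1) >= 0 else 2
print(omega)
\end{verbatim}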
Therefore, the area under each pointwise conditional density in Eq. (\ref{Conditional Density Extreme Points 1 Q})
\[
\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{1}\left( k_{\mathbf{x}_{1i\ast}}\right) \text{ or }\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{1}\left( k_{\mathbf{x}_{1i\ast}}\right)
\]
is a conditional probability that a $k_{\mathbf{x}_{1i\ast}}$ extreme point will be observed in either region $Z_{1}$ or region $Z_{2}$.

Thus, the area $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ under the class-conditional density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ in Eq. (\ref{Conditional Density Extreme Points 1 Q})
\begin{align*}
P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\right) d\boldsymbol{\kappa}_{1}\\
& =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}\right) d\boldsymbol{\kappa}_{1}=\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\\
& =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}=\frac{1}{2}\left\Vert \boldsymbol{\kappa}_{1}\right\Vert ^{2}+C=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert ^{2}+C_{1}
\end{align*}
specifies the conditional probability of observing a set $\left\{ k_{\mathbf{x}_{1i\ast}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ extreme points within \emph{localized regions} of the decision space $Z$, where conditional densities $\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}$ for $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{2}$ decision region \emph{contribute} to the cost or risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}\right) $ of making a decision error, and conditional densities $\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}$ for $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{1}$ decision region \emph{counteract} the cost or risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}\right) $ of making a decision error.

It follows that the area $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ under the class-conditional probability density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ is determined by regions of risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}\right) $ and regions of counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1_{i\ast}}k_{\mathbf{x}_{1i\ast}}\right) $ for the $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points, where these are localized regions in decision space $Z$ that are determined by central locations (expected values) and spreads (covariances) of $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points.
Therefore, the conditional probability function $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$ is given by the integral
\begin{align}
P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) & =\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}=\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\label{Conditional Probability Function for Class One Q}\\
& =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert ^{2}+C_{1}\text{,}\nonumber
\end{align}
over the decision space $Z$, which has a solution in terms of the critical minimum eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$ and an integration constant $C_{1}$.

I\ will now demonstrate that the conditional probability function $P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$ is given by the area under the class-conditional probability density function $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ over the decision space $Z$.

\subsubsection{Conditional Probability Function for Class $\omega_{2}$}

The area under each pointwise conditional density in Eq. (\ref{Conditional Density Extreme Points 2 Q})
\[
\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{2}\left( k_{\mathbf{x}_{2_{i\ast}}}\right) \text{ or }\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{2}\left( k_{\mathbf{x}_{2_{i\ast}}}\right)
\]
is a conditional probability that a $k_{\mathbf{x}_{2_{i\ast}}}$ extreme point will be observed in either region $Z_{1}$ or region $Z_{2}$. Thus, the area $P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ under the class-conditional density $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ in Eq. (\ref{Conditional Density Extreme Points 2 Q})
\begin{align*}
P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) & =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}\right) d\boldsymbol{\kappa}_{2}\\
& =\int_{Z}\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}\right) d\boldsymbol{\kappa}_{2}=\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}=\frac{1}{2}\left\Vert \boldsymbol{\kappa}_{2}\right\Vert ^{2}+C=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert ^{2}+C_{2}
\end{align*}
specifies the conditional probability of observing a set $\left\{ k_{\mathbf{x}_{2_{i\ast}}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points within localized regions of the decision space $Z$, where conditional densities $\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}$ for $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points that lie in the $Z_{1}$ decision region \emph{contribute} to the cost or risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ of making a decision error, and conditional densities $\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}$ for $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points that lie in the $Z_{2}$ decision region \emph{counteract} the cost or risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ of making a decision error.

It follows that the area $P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ under the class-conditional probability density function $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ is determined by regions of risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ and regions of counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $ for the $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points, where these are localized regions in decision space $Z$ that are determined by central locations (expected values) and spreads (covariances) of $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points.

Therefore, the conditional probability function $P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$ is given by the integral
\begin{align}
P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) & =\int_{Z}p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\boldsymbol{\kappa}}=\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\label{Conditional Probability Function for Class Two Q}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert ^{2}+C_{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, which has a solution in terms of the critical minimum eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{2}$ and an integration constant $C_{2}$.
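Both conditional probabilities are therefore determined, up to integration constants, by eigenenergies that can be evaluated directly from class-wise kernel Gram matrices; a sketch with hypothetical parameters follows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
X1 = rng.normal(0.0, 1.0, size=(3, 2))   # hypothetical extreme points
X2 = rng.normal(2.0, 1.0, size=(3, 2))
psi1 = np.array([0.4, 0.3, 0.3])         # hypothetical scale factors
psi2 = np.array([0.5, 0.2, 0.3])

K11 = (X1 @ X1.T + 1.0) ** 2             # class-one kernel Gram matrix
K22 = (X2 @ X2.T + 1.0) ** 2             # class-two kernel Gram matrix
E1 = psi1 @ K11 @ psi1                   # ||kappa_1||^2
E2 = psi2 @ K22 @ psi2                   # ||kappa_2||^2
# P(k_x1 | kappa_1) = E1 + C_1 and P(k_x2 | kappa_2) = E2 + C_2, with the
# integration constants fixed later in the argument
print(E1, E2)
\end{verbatim}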
In order to precisely define the manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy the fundamental integral equation of binary classification for a classification system in statistical equilibrium, I need to precisely define the manner in which the total allowed eigenenergies of the principal eigenaxis components on $\boldsymbol{\kappa}$ are symmetrically balanced with each other. Furthermore, I need to identify the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ \emph{and} $\boldsymbol{\kappa}$ enables quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to \emph{effectively balance} all of the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{1}$ decision region with all of the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{2}$ decision region.

Recall that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a binary classification system
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) =\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) \right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \right)
\]
involves opposing forces that depend on the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ and the corresponding decision boundary $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$.
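As a standalone illustration of this risk decomposition (not of the eigenlocus method itself), consider two overlapping univariate Gaussians with equal priors, for which the two error masses can be evaluated in closed form; the distributions are assumptions made purely for illustration.
\begin{verbatim}
from math import erf, sqrt

# Two overlapping 1-D Gaussians N(0,1) and N(1,1) with equal priors; the
# likelihood ratio test p(x|w1) - p(x|w2) >< 0 puts the boundary at x = 0.5.
mu1, mu2, sigma, b = 0.0, 1.0, 1.0, 0.5

def tail(mu):                            # P(x > b) under N(mu, sigma^2)
    return 0.5 * (1.0 - erf((b - mu) / (sigma * sqrt(2.0))))

# R(Z|Lambda) = R(Z2 | p(x|w1)) + R(Z1 | p(x|w2)): each class's probability
# mass on the wrong side of the decision boundary, weighted by its prior
risk = 0.5 * tail(mu1) + 0.5 * (1.0 - tail(mu2))
print(risk)                              # about 0.309
\end{verbatim}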
It has been demonstrated that quadratic eigenlocus transforms define these opposing forces in terms of symmetrically balanced, pointwise covariance statistics
\begin{align*}
\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}
\end{align*}
and
\begin{align*}
\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert } & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\text{,}
\end{align*}
such that any given conditional density
\[
p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \text{ \ or \ }p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right)
\]
for a respective extreme point $k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$ is defined in terms of related counter risks and risks associated with positions and potential locations of $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points within the $Z_{1}$ and $Z_{2}$ decision regions of a decision space $Z$.

Quadratic eigenlocus transforms routinely accomplish an elegant, statistical balancing feat that involves finding the right mix of principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$. I\ will now show that the scale factors $\left\{ \psi_{i\ast}\right\} _{i=1}^{l}$ of the Wolfe dual principal eigenaxis components $\left\{ \psi_{i\ast}\frac{k_{\mathbf{x}_{i\ast}}}{\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert }|\psi_{i\ast}>0\right\} _{i=1}^{l}$ on $\boldsymbol{\psi}$ play a fundamental role in this statistical balancing feat. I\ will develop an equation of statistical equilibrium for the axis of $\boldsymbol{\kappa}$ that is determined by the equation of statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }
\]
for the axis of $\boldsymbol{\psi}$.

\subsection{Finding the Right Mix of Component Lengths}

It has been demonstrated that the directions of the constrained primal and the Wolfe dual principal eigenaxis components are fixed, along with the angles between all of the extreme vectors.
I will now argue that the lengths of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ must satisfy critical magnitude constraints. Using Eq. (\ref{Dual Eigen-coordinate Locations Component One Q}), it follows that the integrated lengths $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}$ of the $\psi_{1i\ast}\overrightarrow{\mathbf{e}}_{1i\ast}$ components on $\boldsymbol{\psi}_{1}$ must satisfy the equation
\begin{align}
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \label{integrated dual loci one1 Q}\\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\nonumber\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert \nonumber\\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{1_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\nonumber
\end{align}
which reduces to
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right) \text{.}
\]
Using Eq. (\ref{Dual Eigen-coordinate Locations Component Two Q}), it follows that the integrated lengths $\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}$ of the $\psi_{2i\ast}\overrightarrow{\mathbf{e}}_{2i\ast}$ components on $\boldsymbol{\psi}_{2}$ must satisfy the equation
\begin{align}
\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast} & =\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \label{integrated dual loci two1 Q}\\
& \times\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}\left\Vert k_{\mathbf{x}_{2_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{2_{j\ast}}}}\nonumber\\
& -\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert \nonumber\\
& \times\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}\left\Vert k_{\mathbf{x}_{1_{j\ast}}}\right\Vert \cos\theta_{k_{\mathbf{x}_{2_{i\ast}}}k_{\mathbf{x}_{1_{j\ast}}}}\nonumber
\end{align}
which reduces to
\[
\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=\lambda_{\max_{\boldsymbol{\psi}}}^{-1}\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}\right) \text{.}
\]
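The reductions above rest on the identity $\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert \left\Vert k_{\mathbf{x}_{j\ast}}\right\Vert \cos\theta_{k_{\mathbf{x}_{i\ast}}k_{\mathbf{x}_{j\ast}}}=k_{\mathbf{x}_{i\ast}}^{T}k_{\mathbf{x}_{j\ast}}$, and can be checked numerically for any coefficients; the sample and scale factors below are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(6, 2))
y = np.array([1, 1, 1, -1, -1, -1])
psi = rng.uniform(0.1, 1.0, size=6)      # hypothetical scale factors

K = (X @ X.T + 1.0) ** 2
norms = np.sqrt(np.diag(K))
cosines = K / np.outer(norms, norms)
c1 = y > 0                               # class-one extreme points

# double-sum form: sum_i ||k_i|| * sum_j (+/-) psi_j ||k_j|| cos(theta_ij)
double_sum = norms[c1] @ (cosines[c1] @ (y * psi * norms))
# compact form: sum_i k_i^T (kappa_1 - kappa_2), evaluated with Gram entries
compact = (K[c1] @ (y * psi)).sum()
assert np.isclose(double_sum, compact)
\end{verbatim}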
(\ref{integrated dual loci two1 Q})
\begin{align}
& \sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right) \label{Balanced Eigenlocus Equation Quadratic}\\
& =\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}\right) \text{.}\nonumber
\end{align}
Therefore, all of the $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points are distributed over the axes of $\boldsymbol{\kappa}_{1}$\textbf{ }and $\boldsymbol{\kappa}_{2}$ in the symmetrically balanced manner
\begin{equation}
\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\left( \boldsymbol{\kappa}_{1}\mathbf{-}\boldsymbol{\kappa}_{2}\right) =\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\left( \boldsymbol{\kappa}_{2}\mathbf{-}\boldsymbol{\kappa}_{1}\right) \text{,}\label{Balanced Eigenlocus Equation Q}
\end{equation}
where the components of the $k_{\mathbf{x}_{1_{i\ast}}}$ extreme vectors along the axis of $\boldsymbol{\kappa}_{2}$ oppose the components of the $k_{\mathbf{x}_{1_{i\ast}}}$ extreme vectors along the axis of $\boldsymbol{\kappa}_{1}$, and the components of the $k_{\mathbf{x}_{2_{i\ast}}}$ extreme vectors along the axis of $\boldsymbol{\kappa}_{1}$ oppose the components of the $k_{\mathbf{x}_{2_{i\ast}}}$ extreme vectors along the axis of $\boldsymbol{\kappa}_{2}$.

Rewrite Eq. (\ref{Balanced Eigenlocus Equation Q}) as
\[
\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\boldsymbol{\kappa}_{1}+\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\boldsymbol{\kappa}_{1}=\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\boldsymbol{\kappa}_{2}+\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\boldsymbol{\kappa}_{2}\text{,}
\]
where the components of the $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ extreme vectors along the axes of $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ have forces associated with risks and counter risks that are determined by expected values and spreads of the $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points located in the $Z_{1}$ and $Z_{2}$ decision regions. It follows that, for any given collection of extreme points drawn from any given statistical distribution, the aggregate forces associated with the risks and the counter risks on the axis of $\boldsymbol{\kappa}_{1}$ are balanced with the aggregate forces associated with the risks and the counter risks on the axis of $\boldsymbol{\kappa}_{2}$.

Let $\widehat{k}_{\mathbf{x}_{i\ast}}\triangleq\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}$, where $\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}=\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}+\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}$. Using Eq.
(\ref{Balanced Eigenlocus Equation Q}), it follows that the component of $\widehat{k}_{\mathbf{x}_{i\ast}}$ along $\boldsymbol{\kappa}_{1}$ is symmetrically balanced with the component of $\widehat{k}_{\mathbf{x}_{i\ast}}$ along $\boldsymbol{\kappa}_{2}$
\[
\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right) \rightleftharpoons\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right)
\]
so that the components $\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ and $\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ of clusters or aggregates of the extreme vectors from both pattern classes have \emph{equal forces associated with risks and counter risks} on opposite sides of the axis of $\boldsymbol{\kappa}$.

\subsubsection{Statistical Equilibrium of Risks and Counter Risks}

Given Eq. (\ref{Balanced Eigenlocus Equation Q}), it follows that the axis of $\boldsymbol{\kappa}$ is a lever of uniform density, where the center of $\boldsymbol{\kappa}$ is $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$, for which two equal weights $\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ and $\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ are placed on opposite sides of the fulcrum of $\boldsymbol{\kappa}$, whereby the axis of $\boldsymbol{\kappa}$ is in \emph{statistical equilibrium}. Figure \ref{Statistical Equilibrium of Primal Quadratic Eigenlocus} illustrates the axis of $\boldsymbol{\kappa}$ in statistical equilibrium, where all of the forces associated with the counter risks and the risks of aggregates of extreme points are symmetrically balanced with each other.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure43.png}}
\caption{The axis of $\boldsymbol{\kappa}$ is in statistical equilibrium, where two equal weights $\operatorname{comp}_{\protect\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \protect\overrightarrow{\protect\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ and $\operatorname{comp}_{\protect\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \protect\overrightarrow{\protect\widehat{k}_{\mathbf{x}_{i\ast}}}\right) $ are placed on opposite sides of the fulcrum of $\boldsymbol{\kappa}$, which is located at the center of total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$.}
\label{Statistical Equilibrium of Primal Quadratic Eigenlocus}
\end{figure}

\subsubsection{Critical Magnitude Constraints}

Equation (\ref{Balanced Eigenlocus Equation Q}) indicates that the lengths
\[
\left\{ \psi_{1_{i\ast}}|\psi_{1_{i\ast}}>0\right\} _{i=1}^{l_{1}}\text{ and }\left\{ \psi_{2_{i\ast}}|\psi_{2_{i\ast}}>0\right\} _{i=1}^{l_{2}}
\]
of the $l$ Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$ satisfy critical magnitude constraints, such that the Wolfe dual eigensystem in Eq.
(\ref{Dual Normal Eigenlocus Components Q}), which specifies highly interconnected, balanced sets of inner product relationships amongst the Wolfe dual and the constrained, primal principal eigenaxis components in Eqs (\ref{integrated dual loci one1 Q}) and (\ref{integrated dual loci two1 Q}), determines well-proportioned lengths $\psi_{1i\ast}$ or $\psi_{2i\ast}$ for each Wolfe dual principal eigenaxis component $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }$ or $\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }$ on $\boldsymbol{\psi}_{1}$ or $\boldsymbol{\psi}_{2}$, where each scale factor $\psi_{1i\ast}$ or $\psi_{2i\ast}$ determines a well-proportioned length for a correlated, constrained primal principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ or $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}$ or $\boldsymbol{\kappa}_{2}$.

I will demonstrate that the axis of $\boldsymbol{\psi}$, which is constrained to be in statistical equilibrium $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }$, determines an equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$ of an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ such that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is the solution to a fundamental integral equation of binary classification for a classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in statistical equilibrium.

Let $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ denote the expected risk of a quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that is determined by a quadratic eigenlocus transform. Take any given set $\left\{ \left\{ k_{\mathbf{x}_{1_{i_{\ast}}}}\right\} _{i=1}^{l_{1}},\;\left\{ k_{\mathbf{x}_{2_{i_{\ast}}}}\right\} _{i=1}^{l_{2}}\right\} $ of extreme points and take the set of scale factors $\left\{ \left\{ \psi_{1i\ast}\right\} _{i=1}^{l_{1}},\left\{ \psi_{2i\ast}\right\} _{i=1}^{l_{2}}\right\} $ that are determined by a quadratic eigenlocus transform.

Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}k_{\mathbf{x}_{1j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the scaled extreme vector $\psi_{1_{j\ast}}k_{\mathbf{x}_{1j\ast}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative.
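Before cataloguing the forces term by term, it is worth fixing concretely what the classification system computes. The sketch below evaluates the discriminant function under the kernel-expansion reading of $\boldsymbol{\kappa}$, namely $\boldsymbol{\kappa}=\sum\nolimits_{i}\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}-\sum\nolimits_{i}\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ with $k_{\mathbf{x}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$; the data and scale factors are synthetic placeholders rather than outputs of an actual quadratic eigenlocus transform.
\begin{verbatim}
import numpy as np

def discriminant(s, X1, psi1, X2, psi2, kappa0):
    """(x^T s + 1)^2 kappa + kappa_0 under the kernel-expansion
    reading of kappa; sign > 0 decides omega_1, sign < 0 omega_2."""
    k1 = (X1 @ s + 1.0) ** 2   # kernel evaluations, class-one extreme points
    k2 = (X2 @ s + 1.0) ** 2   # kernel evaluations, class-two extreme points
    return psi1 @ k1 - psi2 @ k2 + kappa0

# Toy usage with synthetic extreme points and scale factors
rng = np.random.default_rng(1)
X1, X2 = rng.normal(1, 1, (4, 3)), rng.normal(-1, 1, (3, 3))
psi1, psi2 = rng.uniform(0.1, 1, 4), rng.uniform(0.1, 1, 3)
print(discriminant(rng.normal(size=3), X1, psi1, X2, psi2, kappa0=0.0))
\end{verbatim}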
Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the scaled extreme vector $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative.

Take any given extreme point $k_{\mathbf{x}_{1_{i_{\ast}}}}$ from class $\omega_{1}$. Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}k_{\mathbf{x}_{1j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $k_{\mathbf{x}_{1_{i_{\ast}}}}$ along the scaled extreme vector $\psi_{1_{j\ast}}k_{\mathbf{x}_{1j\ast}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative. Likewise, let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}k_{\mathbf{x}_{2_{j\ast}}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $k_{\mathbf{x}_{1_{i_{\ast}}}}$ along the scaled extreme vector $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative.

Take any given extreme point $k_{\mathbf{x}_{2_{i_{\ast}}}}$ from class $\omega_{2}$. Let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}k_{\mathbf{x}_{2_{j\ast}}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $k_{\mathbf{x}_{2_{i_{\ast}}}}$ along the scaled extreme vector $\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative. Likewise, let $\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}k_{\mathbf{x}_{1j\ast}}\right) $ denote the force associated with either the counter risk or the risk that is related to the locus of the component of the extreme vector $k_{\mathbf{x}_{2_{i_{\ast}}}}$ along the scaled extreme vector $\psi_{1_{j\ast}}k_{\mathbf{x}_{1j\ast}}$ in the decision space $Z$, where the force associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ may be positive or negative.

Returning to Eq. (\ref{Balanced Eigenlocus Equation Quadratic})
\begin{align*}
& \sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}-\sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}\right) \\
& =\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}^{T}\left( \sum\nolimits_{j=1}^{l_{2}}\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{j\ast}}}-\sum\nolimits_{j=1}^{l_{1}}\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{j\ast}}}\right) \text{,}
\end{align*}
it follows that the collective forces associated with the risks and the counter risks that are related to the positions and the potential locations of all of the extreme points are balanced in the following manner
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) & :\sum\nolimits_{i=1}^{l_{1}}\left[ \sum\nolimits_{j=1}^{l_{1}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}k_{\mathbf{x}_{1j\ast}}\right) -\sum\nolimits_{j=1}^{l_{2}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}k_{\mathbf{x}_{2_{j\ast}}}\right) \right] \\
& =\sum\nolimits_{i=1}^{l_{2}}\left[ \sum\nolimits_{j=1}^{l_{2}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{2_{j\ast}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}k_{\mathbf{x}_{2_{j\ast}}}\right) -\sum\nolimits_{j=1}^{l_{1}}\overleftrightarrow{\mathfrak{R}}_{\mathfrak{\min}}\left( Z|\psi_{1_{j\ast}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}k_{\mathbf{x}_{1j\ast}}\right) \right] \text{.}
\end{align*}
So, take any given set $\left\{ \left\{ k_{\mathbf{x}_{1_{i_{\ast}}}}\right\} _{i=1}^{l_{1}},\;\left\{ k_{\mathbf{x}_{2_{i_{\ast}}}}\right\} _{i=1}^{l_{2}}\right\} $ of extreme points and take the set of scale factors $\left\{ \left\{ \psi_{1i\ast}\right\} _{i=1}^{l_{1}},\left\{ \psi_{2i\ast}\right\} _{i=1}^{l_{2}}\right\} $ that are determined by a quadratic eigenlocus transform.

I\ will show that quadratic eigenlocus transforms choose magnitudes or scale factors for the Wolfe dual principal eigenaxis components
\[
\left\{ \psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }\right\} _{i=1}^{l_{1}}\text{ and }\left\{ \psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }\right\} _{i=1}^{l_{2}}
\]
on $\boldsymbol{\psi}$, which is constrained to satisfy the equation of statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }\text{,}
\]
such that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium, and the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ and the corresponding total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized.

In the next section, I\ will explicitly define the manner in which constrained, quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy quadratic decision boundaries $D_{0}\left( \mathbf{s}\right) $ and quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $.
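The balanced eigenlocus equation can be exhibited numerically whenever the equilibrium constraint holds exactly. The sketch below uses a deliberately symmetric toy configuration, with class two taken as the reflection of class one through the origin, so that the principal eigenvector of the signed Gram matrix satisfies $\sum\nolimits_{i}\psi_{1_{i\ast}}=\sum\nolimits_{i}\psi_{2_{i\ast}}$ by symmetry; the two sides of Eq. (\ref{Balanced Eigenlocus Equation Q}) then agree to machine precision. This is an illustration under a stated symmetry assumption, not a reproduction of the general transform.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
l, d = 5, 6
K1 = rng.normal(size=(l, d))    # class-one extreme vectors (stand-ins)
K2 = -K1                        # class two: mirror image (toy symmetry)
K = np.vstack([K1, K2])
y = np.array([1.0] * l + [-1.0] * l)

Q = np.outer(y, y) * (K @ K.T)  # signed Gram matrix
w, V = np.linalg.eigh(Q)
psi = V[:, -1] * np.sign(V[:, -1].sum())
psi1, psi2 = psi[:l], psi[l:]

kappa1, kappa2 = psi1 @ K1, psi2 @ K2
lhs = (K1 @ (kappa1 - kappa2)).sum()  # sum_i k_x1i^T (kappa1 - kappa2)
rhs = (K2 @ (kappa2 - kappa1)).sum()  # sum_i k_x2i^T (kappa2 - kappa1)
print(abs(psi1.sum() - psi2.sum()) < 1e-9, abs(lhs - rhs) < 1e-9)
\end{verbatim}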
I will use these results to show that the principal eigenaxis $\boldsymbol{\kappa}$ of a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is a lever that is symmetrically balanced with respect to the center of eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$, such that the total allowed eigenenergies of the scaled extreme vectors on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are symmetrically balanced about the fulcrum $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$. Thereby, I\ will show that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium.

I\ will use all of these results to identify the manner in which the property of symmetrical balance exhibited by the principal eigenaxis components on $\boldsymbol{\psi}$ and $\boldsymbol{\kappa}$ enables quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ to effectively balance all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ decision region with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and $Z_{1}$ and $Z_{2}$ are symmetrical decision regions $Z_{1}\simeq Z_{2}$, given the equilibrium point $\boldsymbol{\psi}_{1}-\boldsymbol{\psi}_{2}=0$ and the class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $, where the areas under the probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ are symmetrically balanced with each other over the $Z_{1}$ and $Z_{2}$ decision regions.
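For reference below, and using only the stated definition of $\delta\left( y\right) $ together with the labeling convention $y_{i}=1$ for $\omega_{1}$ and $y_{i}=-1$ for $\omega_{2}$, the equalizer term splits across the two classes as
\[
\delta\left( y\right) =\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) =\sum\nolimits_{i=1}^{l_{1}}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l_{2}}\left( 1-\xi_{i}\right) \text{,}
\]
so that $\delta\left( y\right) $ vanishes whenever the two classes of extreme points contribute matching counts and regularization terms, in which case the integral equation above reduces to a direct balance of the four areas.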
\section{Risk Minimization for Quadratic Classifiers}

In the next two sections, I will show that the conditional probability function $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$, which is given by the integral
\[
P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) =\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}\text{,}
\]
over the decision space $Z$, and the conditional probability function $P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$, which is given by the integral
\[
P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) =\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\]
over the decision space $Z$, satisfy an integral equation where the area under the probability density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$ is \emph{symmetrically balanced with} the area under the probability density function $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$
\[
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) :\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\nabla_{eq}\equiv\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\nabla_{eq}\text{,}
\]
where $\nabla_{eq}$ is an equalizer statistic, such that the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium.

Accordingly, I\ will formulate a system of data-driven, locus equations that determines the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$, and I will derive values for the integration constants $C_{1}$ and $C_{2}$. I will use these results to devise an equalizer statistic $\nabla_{eq}$ for an integral equation that is satisfied by the class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $.

I\ will now devise a system of data-driven, locus equations that determines the manner in which the total allowed eigenenergies of the scaled extreme points on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are symmetrically balanced about the fulcrum $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$. Accordingly, I will devise three systems of data-driven, locus equations that explicitly determine the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$, the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{2}$, and the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}$.
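The eigenenergy terms that the next subsections constrain, $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$, $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$, and $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$, are simple quadratic forms in the scale factors, so their bookkeeping can be sketched directly. The code below is a hedged illustration with synthetic extreme vectors under the kernel-expansion reading of $\boldsymbol{\kappa}$; it also confirms the law-of-cosines split of $\left\Vert \boldsymbol{\kappa}\right\Vert ^{2}$ that reappears later.
\begin{verbatim}
import numpy as np

def eigenenergy_terms(K1, psi1, K2, psi2):
    """||kappa1||^2, ||kappa2||^2, kappa1^T kappa2 and ||kappa||^2
    for kappa = kappa1 - kappa2 (kernel-expansion reading)."""
    kappa1, kappa2 = psi1 @ K1, psi2 @ K2
    e1, e2 = kappa1 @ kappa1, kappa2 @ kappa2
    cross = kappa1 @ kappa2   # ||kappa1|| ||kappa2|| cos(theta_12)
    total = (kappa1 - kappa2) @ (kappa1 - kappa2)
    return e1, e2, cross, total

rng = np.random.default_rng(3)
K1, K2 = rng.normal(size=(4, 5)), rng.normal(size=(3, 5))
psi1, psi2 = rng.uniform(0.1, 1, 4), rng.uniform(0.1, 1, 3)
e1, e2, cross, total = eigenenergy_terms(K1, psi1, K2, psi2)
print(np.isclose(total, (e1 - cross) + (e2 - cross)))   # True
\end{verbatim}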
\subsection{Critical Minimum Eigenenergy Constraints II}

Let there be $l$ labeled, scaled reproducing kernels of extreme points on a quadratic eigenlocus $\boldsymbol{\kappa}$. Given the theorem of Karush, Kuhn, and Tucker and the KKT condition in Eq. (\ref{KKTE5 Q}), it follows that a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ exists, for which
\[
\left\{ \psi_{i\ast}>0\right\} _{i=1}^{l}\text{,}
\]
such that the $l$ constrained, primal principal eigenaxis components $\left\{ \psi_{i_{\ast}}k_{\mathbf{x}_{i_{\ast}}}\right\} _{i=1}^{l}$ on $\boldsymbol{\kappa}$ satisfy a system of $l$ eigenlocus equations
\begin{equation}
\psi_{i\ast}\left[ y_{i}\left( \left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) -1+\xi_{i}\right] =0,\ i=1,...,l\text{.}\label{Minimum Eigenenergy Functional System Q}
\end{equation}
I\ will now use Eq. (\ref{Minimum Eigenenergy Functional System Q}) to define critical minimum eigenenergy constraints on $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$. The analysis begins with the critical minimum eigenenergy constraint on $\boldsymbol{\kappa}_{1}$.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\kappa}_{1}$}

Take any scaled extreme vector $\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}$ that belongs to class $\omega_{1}$. Using Eq. (\ref{Minimum Eigenenergy Functional System Q}) and letting $y_{i}=+1$, it follows that the constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{1}$ is specified by the equation
\[
\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}
\]
which is part of a system of $l_{1}$ eigenlocus equations. Therefore, each constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{1}$ satisfies the above locus equation.

Now take all of the $l_{1}$ scaled extreme vectors $\left\{ \psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\} _{i=1}^{l_{1}}$ that belong to class $\omega_{1}$. Again, using Eq. (\ref{Minimum Eigenenergy Functional System Q}) and letting $y_{i}=+1$, it follows that the complete set $\left\{ \psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\} _{i=1}^{l_{1}}$ of $l_{1}$ constrained, primal principal eigenaxis components $\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{1}$ is determined by the system of $l_{1}$ equations
\begin{equation}
\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) ,\ i=1,...,l_{1}\text{.}\label{Minimum Eigenenergy Class One Q}
\end{equation}
Using Eq.
(\ref{Minimum Eigenenergy Class One Q}), it follows that the entire set $\left\{ \psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\} _{i=1}^{l_{1}}$ of $l_{1}\times d$ transformed, extreme vector coordinates satisfies the system of $l_{1}$ eigenlocus equations
\[
(1)\ \ \psi_{1_{1_{\ast}}}k_{\mathbf{x}_{1_{1_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{1_{1_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}
\]
\[
(2)\ \ \psi_{1_{2_{\ast}}}k_{\mathbf{x}_{1_{2_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{1_{2_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}
\]
\[
\vdots
\]
\[
(l_{1})\ \ \psi_{1_{l_{1}\ast}}k_{\mathbf{x}_{1_{l_{1}\ast}}}^{T}\boldsymbol{\kappa}=\psi_{1_{l_{1}\ast}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}
\]
where each constrained, primal principal eigenaxis component $\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{1}$ satisfies the identity
\[
\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{.}
\]
I\ will now formulate an identity for the total allowed eigenenergy of $\boldsymbol{\kappa}_{1}$. Let $E_{\boldsymbol{\kappa}_{1}}$ denote the functional of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{1}$ and let $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$. Summation over the above system of $l_{1}$ eigenlocus equations produces the following equation for the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{1}$
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right) ^{T}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right)
\]
which reduces to
\[
\boldsymbol{\kappa}_{1}^{T}\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{1}^{T}\boldsymbol{\kappa}_{2}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right)
\]
so that the functional $E_{\boldsymbol{\kappa}_{1}}$ satisfies the identity
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{1}^{T}\boldsymbol{\kappa}_{2}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{.}
\]
Therefore, the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by the constrained, primal principal eigenlocus component $\boldsymbol{\kappa}_{1}$ is determined by the identity
\begin{equation}
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}\label{TAE Eigenlocus Component One Q}
\end{equation}
where the functional $E_{\boldsymbol{\kappa}_{1}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$
\[
E_{\boldsymbol{\kappa}_{1}}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{1}}$
\[
E_{\boldsymbol{\psi}_{1}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right)
\]
of the integrated magnitudes $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}$ of the Wolfe dual principal eigenaxis components $\psi_{1_{i_{\ast}}}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }$ and the $\kappa_{0}$ statistic.

Returning to Eq. (\ref{Decision Border One Q}), it follows that the functionals $E_{\boldsymbol{\kappa}_{1}}$ and $E_{\boldsymbol{\psi}_{1}}$ specify the manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy the quadratic decision border $D_{+1}\left( \mathbf{s}\right) $: $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}=1$.

Given Eq. (\ref{TAE Eigenlocus Component One Q}), it is concluded that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the quadratic decision border $D_{+1}\left( \mathbf{s}\right) $ in terms of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$, where the functional $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}$ is constrained by the functional $\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) $. The critical minimum eigenenergy constraint on $\boldsymbol{\kappa}_{2}$ is examined next.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\kappa}_{2}$}

Take any scaled extreme vector $\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}$ that belongs to class $\omega_{2}$. Using Eq.
(\ref{Minimum Eigenenergy Functional System Q}) and letting $y_{i}=-1$, it follows that the constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{2}$ is specified by the equation
\[
-\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}
\]
which is part of a system of $l_{2}$ eigenlocus equations. Therefore, each constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{2}$ satisfies the above locus equation.

Now take all of the $l_{2}$ scaled extreme vectors $\left\{ \psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right\} _{i=1}^{l_{2}}$ that belong to class $\omega_{2}$. Again, using Eq. (\ref{Minimum Eigenenergy Functional System Q}) and letting $y_{i}=-1$, it follows that the complete set $\left\{ \psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right\} _{i=1}^{l_{2}}$ of $l_{2}$ constrained, primal principal eigenaxis components $\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{2}$ is determined by the system of $l_{2}$ eigenlocus equations
\begin{equation}
-\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) ,\ i=1,...,l_{2}\text{.}\label{Minimum Eigenenergy Class Two Q}
\end{equation}
Using the above equation, it follows that the entire set $\left\{ \psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right\} _{i=1}^{l_{2}}$ of $l_{2}\times d$ transformed, extreme vector coordinates satisfies the system of $l_{2}$ eigenlocus equations
\[
(1)\ \ -\psi_{2_{1_{\ast}}}k_{\mathbf{x}_{2_{1_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{2_{1_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}
\]
\[
(2)\ \ -\psi_{2_{2_{\ast}}}k_{\mathbf{x}_{2_{2_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{2_{2_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}
\]
\[
\vdots
\]
\[
(l_{2})\ \ -\psi_{2_{l_{2}\ast}}k_{\mathbf{x}_{2_{l_{2}\ast}}}^{T}\boldsymbol{\kappa}=\psi_{2_{l_{2}\ast}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}
\]
where each constrained, primal principal eigenaxis component $\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}$ on $\boldsymbol{\kappa}_{2}$ satisfies the identity
\[
-\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}^{T}\boldsymbol{\kappa}=\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{.}
\]
I will now formulate an identity for the total allowed eigenenergy of $\boldsymbol{\kappa}_{2}$. Let $E_{\boldsymbol{\kappa}_{2}}$ denote the functional of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{2}$ and let $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$.
Summation over the above system of $l_{2}$ eigenlocus equations produces the following equation for the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{2}$
\[
-\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right) ^{T}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
which reduces to
\[
\boldsymbol{\kappa}_{2}^{T}\boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{2}^{T}\boldsymbol{\kappa}_{1}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
so that the functional $E_{\boldsymbol{\kappa}_{2}}$ satisfies the identity
\[
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\boldsymbol{\kappa}_{2}^{T}\boldsymbol{\kappa}_{1}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{.}
\]
Therefore, the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the constrained, primal eigenlocus component $\boldsymbol{\kappa}_{2}$ is determined by the identity
\begin{equation}
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}\label{TAE Eigenlocus Component Two Q}
\end{equation}
where the functional $E_{\boldsymbol{\kappa}_{2}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{2}$
\[
E_{\boldsymbol{\kappa}_{2}}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{2}}$
\[
E_{\boldsymbol{\psi}_{2}}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
of the integrated magnitudes $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}$ of the Wolfe dual principal eigenaxis components $\psi_{2_{i_{\ast}}}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }$ and the $\kappa_{0}$ statistic.

Returning to Eq. (\ref{Decision Border Two Q}), it follows that the functionals $E_{\boldsymbol{\kappa}_{2}}$ and $E_{\boldsymbol{\psi}_{2}}$ specify the manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy the quadratic decision border $D_{-1}\left( \mathbf{s}\right) $: $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}=-1$.

Given Eq.
(\ref{TAE Eigenlocus Component Two Q}), it is concluded that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies the quadratic decision border $D_{-1}\left( \mathbf{s}\right) $ in terms of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{2}$, where the functional $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}$ is constrained by the functional $\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) $. The critical minimum eigenenergy constraint on $\boldsymbol{\kappa}$ is examined next.

\subsubsection{Total Allowed Eigenenergy of $\boldsymbol{\kappa}$}

I\ will now formulate an identity for the total allowed eigenenergy of a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$. Let $E_{\boldsymbol{\kappa}}$ denote the functional satisfied by the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$. Summation over the complete system of eigenlocus equations satisfied by $\boldsymbol{\kappa}_{1}$
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right) ^{T}\boldsymbol{\kappa}\equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right)
\]
and by $\boldsymbol{\kappa}_{2}$
\[
\left( -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right) ^{T}\boldsymbol{\kappa}\equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
produces the following identity for the functional $E_{\boldsymbol{\kappa}}$ satisfied by the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}$
\begin{align*}
& \left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right) ^{T}\boldsymbol{\kappa}\\
& \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\end{align*}
which reduces to
\begin{align}
\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) ^{T}\boldsymbol{\kappa} & \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}-\kappa_{0}\right) \label{Symmetrical Balance of TAE SDE Q}\\
& +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right) \nonumber\\
& \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) \text{,}\nonumber
\end{align}
where I\ have used the equilibrium constraint on $\boldsymbol{\psi}$ in Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q}).
Thereby, the functional $E_{\boldsymbol{\kappa}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}$
\begin{align*}
E_{\boldsymbol{\kappa}} & =\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) ^{T}\boldsymbol{\kappa}\\
& =\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}
\end{align*}
is equivalent to the functional $E_{\boldsymbol{\psi}}$
\[
E_{\boldsymbol{\psi}}=\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right)
\]
solely in terms of the integrated magnitudes $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components on $\boldsymbol{\psi}$. Thus, the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ is specified by the integrated magnitudes $\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components $\psi_{i\ast}\frac{k_{\mathbf{x}_{i\ast}}}{\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}$
\begin{align}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) \label{TAE SDE Q}\\
& \equiv\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}-\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\xi_{i}\text{,}\nonumber
\end{align}
where the regularization parameters $\xi_{i}=\xi\ll1$ are seen to determine negligible constraints on $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$. Therefore, it is concluded that the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ exhibited by a constrained, primal quadratic eigenlocus $\boldsymbol{\kappa}$ is determined by the integrated magnitudes $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}$ of the Wolfe dual principal eigenaxis components $\psi_{i\ast}\frac{k_{\mathbf{x}_{i\ast}}}{\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert }$ on $\boldsymbol{\psi}$.

Returning to Eq. (\ref{Decision Boundary Q}), it follows that the equilibrium constraint on $\boldsymbol{\psi}$ and the corresponding functionals $E_{\boldsymbol{\kappa}}$ and $E_{\boldsymbol{\psi}}$ specify the manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy quadratic decision boundaries $D_{0}\left( \mathbf{s}\right) $: $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}=0$.

Given Eq. (\ref{TAE SDE Q}), it is concluded that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfies a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ in terms of its total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$, where the functional $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ is constrained by the functional $\sum\nolimits_{i=1}^{l}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) $.
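The identity $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\equiv\sum\nolimits_{i}\psi_{i_{\ast}}\left( 1-\xi_{i}\right) $ can be checked against an off-the-shelf solver for the same constrained optimization problem. The sketch below trains a soft-margin classifier with the second-order polynomial kernel $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ and reads the dual variables $\alpha_{i}$ as stand-ins for the scale factors $\psi_{i\ast}$, an identification that matches the document's construction only in spirit and is flagged as an assumption:
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(1.5, 1, (40, 2)), rng.normal(-1.5, 1, (40, 2))])
y = np.array([1] * 40 + [-1] * 40)

# (gamma x^T s + coef0)^degree with gamma=1, coef0=1, degree=2
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=1.0).fit(X, y)

sv = clf.support_                      # indices of the extreme vectors
alpha = np.abs(clf.dual_coef_[0])      # alpha_i, read as psi_i* stand-ins
f = clf.decision_function(X[sv])
xi = np.maximum(0.0, 1.0 - y[sv] * f)  # slack values xi_i at the solution

Ksv = (X[sv] @ X[sv].T + 1.0) ** 2     # Gram matrix of the extreme vectors
wTw = clf.dual_coef_[0] @ Ksv @ clf.dual_coef_[0]   # ||kappa||^2
print(np.isclose(wTw, np.sum(alpha * (1.0 - xi)), rtol=1e-2))
\end{verbatim}
The check passes, up to solver tolerance, because complementary slackness gives $y_{i}f\left( \mathbf{x}_{i\ast}\right) =1-\xi_{i}$ for every extreme vector while the equilibrium constraint $\sum\nolimits_{i}y_{i}\alpha_{i}=0$ removes the $\kappa_{0}$ term, which is precisely the cancellation used in Eq. (\ref{TAE SDE Q}).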
Using Eqs (\ref{TAE Eigenlocus Component One Q}), (\ref{TAE Eigenlocus Component Two Q}), and (\ref{Symmetrical Balance of TAE SDE Q}), it follows that the symmetrically balanced constraints
\[
E_{\boldsymbol{\psi}_{1}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left( 1-\xi_{i}-\kappa_{0}\right) \text{ \ and \ }E_{\boldsymbol{\psi}_{2}}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
satisfied by a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ on the respective quadratic decision borders $D_{+1}\left( \mathbf{s}\right) $ and $D_{-1}\left( \mathbf{s}\right) $, and the corresponding constraint
\[
E_{\boldsymbol{\psi}}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}-\kappa_{0}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left( 1-\xi_{i}+\kappa_{0}\right)
\]
satisfied by a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ on the quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $, ensure that the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the scaled extreme points on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\\
& +\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}
\end{align*}
satisfy the law of cosines in the symmetrically balanced manner depicted in Fig. \ref{Law of Cosines for Quadratic Classification Systems}.

Given the binary classification theorem, it follows that quadratic eigenlocus likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and corresponding decision boundaries $D_{0}\left( \mathbf{s}\right) $ satisfy an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ where the areas $\int\nolimits_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ and $\int\nolimits_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ under the class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ are balanced with each other. Furthermore, the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$ must be balanced with the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$.
Therefore, quadratic eigenlocus likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and corresponding decision boundaries $D_{0}\left( \mathbf{s}\right) $ also satisfy an integral equation where the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of a quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are balanced with each other.

I\ will show that the discrete, quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks an equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$ of an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $, where the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized, and the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium.

In the next section, I will develop an integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ that is satisfied by quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$, where the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the principal eigenaxis components on a quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other. Thereby, I will show that the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ is in statistical equilibrium, and that the areas under the class-conditional density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $, over the decision space $Z=Z_{1}+Z_{2}$, are symmetrically balanced with each other. I will use these results to show that the discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is the solution to a fundamental integral equation of binary classification for a classification system in statistical equilibrium. The solution involves a surprising, statistical balancing feat in decision space $Z$ which hinges on an elegant, statistical balancing feat in eigenspace $\widetilde{Z}$.
\section{The Balancing Feat in Eigenspace II}

A quadratic eigenlocus
\[
\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\text{,}
\]
which is formed by a locus of labeled ($+1$ or $-1$), scaled ($\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$) extreme vectors ($k_{\mathbf{x}_{1_{i\ast}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$)
\[
\boldsymbol{\kappa}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}\text{,}
\]
has a \emph{dual nature} that is \emph{twofold}: Each $\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$ scale factor determines the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}\right\Vert _{\min_{c}}^{2}
\]
of a principal eigenaxis component $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ or $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ in decision space $Z$, and each $\psi_{1_{i\ast}}$ or $\psi_{2_{i\ast}}$ scale factor determines the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i\ast}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i\ast}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\right\Vert _{\min_{c}}^{2}
\]
of a principal eigenaxis component $\psi_{1_{i\ast}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }$ or $\psi_{2_{i\ast}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }$ on $\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ in Wolfe dual eigenspace $\widetilde{Z}$.

In addition, each $\psi_{1_{i\ast}}$ scale factor specifies dual conditional densities for a $k_{\mathbf{x}_{1_{i\ast}}}$ extreme point
\[
p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\text{ \ and \ }p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\text{,}
\]
and each $\psi_{2_{i\ast}}$ scale factor specifies dual conditional densities for a $k_{\mathbf{x}_{2_{i\ast}}}$ extreme point
\[
p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\text{ \ and \ }p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}\text{.}
\]
Accordingly, a Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}=\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ is a parameter vector of likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) & =\sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }\\
& +\sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) \frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }
\end{align*}
\emph{and} a locus of principal eigenaxis components in Wolfe dual eigenspace $\widetilde{Z}$, and a primal quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is a parameter vector of likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) & =\sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\\
& -\sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}
\end{align*}
\emph{and} a locus of principal eigenaxis components in decision space $Z$, that jointly determine the basis of a quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. Moreover, the Wolfe dual likelihood ratio
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) & =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}
\end{align*}
is constrained to satisfy the equilibrium equation
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right)
\]
so that the Wolfe dual likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ is in statistical equilibrium
\[
\boldsymbol{\psi}_{1}=\boldsymbol{\psi}_{2}\text{.}
\]
I will demonstrate that the dual nature of $\boldsymbol{\kappa}$ enables a quadratic eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}
\]
to be the solution to a fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
where all of the forces associated with the counter risks and the risks for class $\omega_{1}$ and class $\omega_{2}$ are symmetrically balanced with each other
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int\nolimits_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\delta\left( y\right) \boldsymbol{\psi}_{1}\\
& =\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int\nolimits_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{,}
\end{align*}
over the $Z_{1}$ and $Z_{2}$ decision regions, by means of an elegant, statistical balancing feat in Wolfe dual eigenspace $\widetilde{Z}$, where the functional $E_{\boldsymbol{\kappa}_{1}}$ of $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ in Eq. (\ref{TAE Eigenlocus Component One Q}) and the functional $E_{\boldsymbol{\kappa}_{2}}$ of $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ in Eq. (\ref{TAE Eigenlocus Component Two Q}) are constrained to be equal to each other by means of a symmetric equalizer statistic $\nabla_{eq}$: $\frac{\delta\left( y\right) }{2}\boldsymbol{\psi}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $.

I have shown that each of the constrained, primal principal eigenaxis components $\psi_{1_{i\ast}}k_{\mathbf{x}_{1_{i\ast}}}$ or $\psi_{2_{i\ast}}k_{\mathbf{x}_{2_{i\ast}}}$ on $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ has such a magnitude and direction that a constrained, quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ partitions any given feature space into symmetrical decision regions $Z_{1}\simeq Z_{2}$, which are symmetrically partitioned by a quadratic decision boundary, by means of three, symmetrical quadratic loci, all of which reference $\boldsymbol{\kappa}$.
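The equalizer statistic can likewise be sketched numerically. Assuming, as above, that the dual variables of a soft-margin solver with the kernel $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ stand in for the scale factors and that the solver's intercept stands in for $\kappa_{0}$, the class functionals $E_{\boldsymbol{\psi}_{1}}$ and $E_{\boldsymbol{\psi}_{2}}$ and the statistic $\delta\left( y\right) $ can be computed directly; their sum recovers $\sum\nolimits_{i}\psi_{i\ast}\left( 1-\xi_{i}\right) $ because the $\kappa_{0}$ contributions cancel under the equilibrium constraint:
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(1.5, 1, (40, 2)), rng.normal(-1.5, 1, (40, 2))])
y = np.array([1] * 40 + [-1] * 40)
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=1.0).fit(X, y)

sv, ysv = clf.support_, y[clf.support_]
alpha = np.abs(clf.dual_coef_[0])                # psi_i* stand-ins
xi = np.maximum(0.0, 1.0 - ysv * clf.decision_function(X[sv]))
kappa0 = clf.intercept_[0]                       # kappa_0 stand-in

delta_y = np.sum(ysv * (1.0 - xi))               # equalizer numerator delta(y)
E_psi1 = np.sum((alpha * (1.0 - xi - kappa0))[ysv > 0])
E_psi2 = np.sum((alpha * (1.0 - xi + kappa0))[ysv < 0])
# kappa_0 terms cancel since sum(alpha * y) = 0 at the solution
print(delta_y, np.isclose(E_psi1 + E_psi2,
                          np.sum(alpha * (1.0 - xi)), rtol=1e-2))
\end{verbatim}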
I will show that quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generate decision regions $Z_{1}$ and $Z_{2}$ for which the dual parameter vectors of likelihoods $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ and $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are in statistical equilibrium. Thereby, I will demonstrate that balancing the forces associated with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ of the quadratic classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ hinges on balancing the eigenenergies associated with the positions or locations of the dual likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) $ and $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $:
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) & =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }
\end{align*}
and
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) & =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\text{.}
\end{align*}

\subsection{Balancing the Eigenenergies of $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$}

I will now devise an equation that determines how the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other. Using Eq. (\ref{TAE Eigenlocus Component One Q}) and the equilibrium constraint on $\boldsymbol{\psi}$ in Eq.
(\ref{Equilibrium Constraint on Dual Eigen-components Q})
\begin{align*}
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}} & \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}-\kappa_{0}\right) \\
& \equiv\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}-\kappa_{0}\right) \text{,}
\end{align*}
it follows that the functional $E_{\boldsymbol{\kappa}_{1}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{1}$
\[
E_{\boldsymbol{\kappa}_{1}}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{1}}$
\begin{equation}
E_{\boldsymbol{\psi}_{1}}=\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}\right) -\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\text{.} \label{TAE Constraint COMP1 Q}
\end{equation}
Using Eq. (\ref{TAE Eigenlocus Component Two Q}) and the equilibrium constraint on $\boldsymbol{\psi}$ in Eq. (\ref{Equilibrium Constraint on Dual Eigen-components Q})
\begin{align*}
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}} & \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}+\kappa_{0}\right) \\
& \equiv\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}+\kappa_{0}\right) \text{,}
\end{align*}
it follows that the functional $E_{\boldsymbol{\kappa}_{2}}$ of the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ of $\boldsymbol{\kappa}_{2}$
\[
E_{\boldsymbol{\kappa}_{2}}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}
\]
is equivalent to the functional $E_{\boldsymbol{\psi}_{2}}$
\begin{equation}
E_{\boldsymbol{\psi}_{2}}=\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}\right) +\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\text{.} \label{TAE Constraint COMP2 Q}
\end{equation}
Next, I will use the identity for $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ in Eq.
(\ref{TAE SDE Q})
\[
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\equiv\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}\right)
\]
to rewrite $E_{\boldsymbol{\psi}_{1}}$
\begin{align*}
E_{\boldsymbol{\psi}_{1}} & =\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}\right) -\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}-\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}
\end{align*}
and $E_{\boldsymbol{\psi}_{2}}$
\begin{align*}
E_{\boldsymbol{\psi}_{2}} & =\frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\left( 1-\xi_{i}\right) +\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}+\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}
\end{align*}
in terms of $\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ and a symmetric equalizer statistic. Substituting the rewritten expressions for $E_{\boldsymbol{\psi}_{1}}$ and $E_{\boldsymbol{\psi}_{2}}$ into Eqs (\ref{TAE Eigenlocus Component One Q}) and (\ref{TAE Eigenlocus Component Two Q}) produces the equation
\[
\left( \left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\right) +\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}
\]
and
\[
\left( \left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}\right) -\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where the terms $\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}$ and $-\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}$ specify a symmetric equalizer statistic $\nabla_{eq}$ for integrals of class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ that determine conditional probability functions $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $. Therefore, let $\nabla_{eq}$ denote both $\kappa_{0}\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}$ and $\kappa_{0}\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}$, where
\[
\nabla_{eq}\triangleq\frac{\kappa_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{.}
\]
It follows that the dual, class-conditional parameter vectors of likelihoods $\boldsymbol{\psi}_{1}$, $\boldsymbol{\psi}_{2}$, $\boldsymbol{\kappa}_{1}$, and $\boldsymbol{\kappa}_{2}$ satisfy the eigenlocus equation
\begin{equation}
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\nabla_{eq}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} \label{Balancing Feat SDEC1 Q}
\end{equation}
and
\begin{equation}
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\nabla_{eq}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,} \label{Balancing Feat SDEC2 Q}
\end{equation}
where $\nabla_{eq}\triangleq\frac{\kappa_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$. I will now examine the statistical and geometric properties of the equalizer statistic $\nabla_{eq}$ in eigenspace.

\subsubsection{Properties of $\nabla_{eq}$ in Eigenspace}

Substituting the vector expression $\boldsymbol{\kappa}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}$ for $\boldsymbol{\kappa}$ in Eq. (\ref{Pair of Normal Eigenlocus Components Q}) into the expression for $\kappa_{0}$ in Eq. (\ref{Normal Eigenlocus Projection Factor Q}) produces the statistic for $\kappa_{0}$
\begin{align}
\kappa_{0} & =-\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{1}}\psi_{1j\ast}k_{\mathbf{x}_{1j\ast}}\label{Eigenlocus Projection Factor Two Q}\\
& +\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{2}}\psi_{2j\ast}k_{\mathbf{x}_{2j\ast}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \text{.}\nonumber
\end{align}
Substituting the statistic for $\kappa_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two Q}) into the expression for $\nabla_{eq}$ produces the statistic for $\nabla_{eq}$
\begin{align*}
\nabla_{eq} & =\frac{\kappa_{0}}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =-\left( \sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\boldsymbol{\kappa}_{1}\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& +\left( \sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\boldsymbol{\kappa}_{2}\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\end{align*}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $. Let $k_{\widehat{\mathbf{x}}_{i\ast}}\triangleq\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}$.
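Continuing the toy sketch above, the statistic for $\kappa_{0}$ and the equalizer statistic $\nabla_{eq}$ can be estimated from the fitted dual solution. The snippet below is again only illustrative: it takes the slack variables $\xi_{i}$ to be zero and estimates $\kappa_{0}$ by averaging over the extreme points, a common convention that is assumed here for the example rather than taken from the text.
\begin{verbatim}
# Illustrative continuation: estimate kappa_0 and nabla_eq from the fitted
# dual solution; xi_i = 0 and the averaging convention are assumptions.
sv = psi > 1e-6                         # extreme vectors: nonzero psi_i
f_sv = K[sv] @ (psi * y)                # k_x_{i*} . kappa per extreme point
kappa_0 = np.mean(y[sv] - f_sv)         # offset, averaged over extreme points
delta_y = y[sv].sum()                   # delta(y) = sum_i y_i (1 - xi_i)
nabla_eq = 0.5 * kappa_0 * psi.sum()    # the symmetric equalizer statistic
print(kappa_0, delta_y, nabla_eq)
\end{verbatim}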
It follows that $\kappa_{0}$ regulates a symmetrical balancing act for components of $k_{\widehat{\mathbf{x}}_{i\ast}}$ along $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$, where the statistic $\nabla_{eq}$ is written as
\[
+\nabla_{eq}=\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{i\ast}}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{i\ast}}}\right) +\delta\left( y\right) \right] \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
and
\[
-\nabla_{eq}=\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{i\ast}}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{i\ast}}}\right) -\delta\left( y\right) \right] \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{.}
\]
Returning to Eq. (\ref{Balanced Eigenlocus Equation Q})
\[
\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1i\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) =\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2i\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \text{,}
\]
given that components of $k_{\widehat{\mathbf{x}}_{1i\ast}}$ and $k_{\widehat{\mathbf{x}}_{2i\ast}}$ along $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ satisfy the state of statistical equilibrium
\[
\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{1i\ast}}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{1i\ast}}}\right) \right] \equiv\left[ \operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{2}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{2i\ast}}}\right) -\operatorname{comp}_{\overrightarrow{\boldsymbol{\kappa}_{1}}}\left( \overrightarrow{k_{\widehat{\mathbf{x}}_{2i\ast}}}\right) \right] \text{,}
\]
where $k_{\widehat{\mathbf{x}}_{1i\ast}}\triangleq\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1i\ast}}$ and $k_{\widehat{\mathbf{x}}_{2i\ast}}\triangleq\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2i\ast}}$, it follows that
\[
+\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\delta\left( y\right) \boldsymbol{\psi}_{1}
\]
and
\[
-\nabla_{eq}\equiv-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv-\delta\left( y\right) \boldsymbol{\psi}_{2}\text{.}
\]
I will now demonstrate that the equalizer statistic $\nabla_{eq}$ ensures that the class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ satisfy the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}
\end{align*}
over the decision space $Z$, where $Z=Z_{1}+Z_{2}$ and $Z_{1}\simeq Z_{2}$, whereby the likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium. In the process, I will formulate an equation which ensures that $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ are symmetrically balanced with each other.

\subsection{Quadratic Eigenlocus Integral Equation}

Let $\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$. Substituting the expression for $+\nabla_{eq}$ into Eq. (\ref{Balancing Feat SDEC1 Q}) produces an equation that is satisfied by the conditional probabilities of locations for the set $\left\{ k_{\mathbf{x}_{1i\ast}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ extreme points within the decision space $Z$
\begin{align*}
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) & =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
and substituting the expression for $-\nabla_{eq}$ into Eq.
(\ref{Balancing Feat SDEC2 Q}) produces an equation that is satisfied by the conditional probabilities of locations for the set $\left\{ k_{\mathbf{x}_{2i\ast}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ extreme points within the decision space $Z$
\begin{align*}
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) & =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
where the equalizer statistic $\nabla_{eq}$
\[
\nabla_{eq}=\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
\emph{equalizes} the conditional probabilities $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ of observing the set $\left\{ k_{\mathbf{x}_{1i\ast}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ extreme points and the set $\left\{ k_{\mathbf{x}_{2i\ast}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ extreme points within the $Z_{1}$ and $Z_{2}$ decision regions of the decision space $Z$.

Therefore, the equalizer statistic $\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ ensures that $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ are symmetrically balanced with each other in the following manner:
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}
\]
and
\[
\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{.}
\]
Thereby, the equalizer statistic
\[
\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}
\]
\emph{equalizes} the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$, so that the total allowed eigenenergies $\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ exhibited by the scaled extreme points on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other about the fulcrum of $\boldsymbol{\kappa}$
\begin{equation}
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\equiv\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,} \label{Symmetrical Balance of Total Allowed Eigenenergies Q}
\end{equation}
which is located at the center of eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$: the geometric center of $\boldsymbol{\kappa}$. Thus, the likelihood ratio
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) & =p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \\
& =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}
\end{align*}
of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium. It follows that the eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is symmetrically balanced with the eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the parameter vector of likelihoods $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$.

Returning to Eq. (\ref{Conditional Probability Function for Class One Q})
\[
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}
\]
and Eq. (\ref{Conditional Probability Function for Class Two Q})
\[
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\]
it follows that the value for the integration constant $C_{1}$ in Eq. (\ref{Conditional Probability Function for Class One Q}) is
\[
C_{1}=-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}
\]
and the value for the integration constant $C_{2}$ in Eq. (\ref{Conditional Probability Function for Class Two Q}) is
\[
C_{2}=-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}\text{.}
\]
Therefore, the area $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ under the class-conditional density function $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ in Eq. (\ref{Conditional Probability Function for Class One Q})
\begin{align}
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) & =\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Integral Equation Class One Q}\\
& =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\nonumber\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, is \emph{symmetrically balanced} with the area $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ under the class-conditional density function $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ in Eq. (\ref{Conditional Probability Function for Class Two Q})
\begin{align}
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) & =\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Integral Equation Class Two Q}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\nonumber\\
& =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\nonumber\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}\nonumber
\end{align}
over the decision space $Z$, where the area $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ under $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and the area $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ under $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ are constrained to be equal to $\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$:
\[
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) =P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{.}
\]
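Both sides of Eqs (\ref{Integral Equation Class One Q}) and (\ref{Integral Equation Class Two Q}) can be evaluated numerically on the toy example above. The following sketch is illustrative only, with $\xi_{i}=0$ assumed as before; both class-wise areas should approximate one half of the total allowed eigenenergy.
\begin{verbatim}
# Illustrative check of the balanced areas, continuing the sketch above.
p1, p2 = psi * (y > 0), psi * (y < 0)   # coefficients of kappa_1, kappa_2
e11 = p1 @ K @ p1                       # ||kappa_1||^2
e22 = p2 @ K @ p2                       # ||kappa_2||^2
e12 = p1 @ K @ p2                       # ||kappa_1|| ||kappa_2|| cos(theta)
total = e11 - 2.0 * e12 + e22           # ||kappa||^2 = ||kappa_1-kappa_2||^2
P1 = e11 - e12 + kappa_0 * p1.sum()     # area for class omega_1
P2 = e22 - e12 - kappa_0 * p2.sum()     # area for class omega_2
print(P1, P2, 0.5 * total)              # all three approximately equal
\end{verbatim}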
It follows that the quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is the solution to the integral equation
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\label{Quadratic Eigenlocus Integral Equation}\\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\text{,}\nonumber
\end{align}
over the decision space $Z=Z_{1}+Z_{2}$, where the dual likelihood ratios $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}$ and $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ are in statistical equilibrium. In this state, all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{1i\ast}}$ of extreme points $\mathbf{x}_{1i\ast}$ generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are equal to all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ and the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{2i\ast}}$ of extreme points $\mathbf{x}_{2i\ast}$ generated according to $p\left( \mathbf{x}|\omega_{2}\right) $; moreover, the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is equal to the eigenenergy associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
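In practice, the discriminant function above reduces to a kernel expansion over the extreme points. A minimal sketch of the resulting decision rule follows, continuing the toy example; the test point is an assumption of the example.
\begin{verbatim}
# Minimal sketch of the decision rule (x^T s + 1)^2 kappa + kappa_0 >=< 0,
# continuing the toy example above.
def discriminant(s):
    k_s = (X @ s + 1.0) ** 2            # reproducing kernel of test point s
    return k_s @ (psi * y) + kappa_0    # kappa = kappa_1 - kappa_2 expansion

s_test = np.array([-0.8, -0.9])
print("omega_1" if discriminant(s_test) > 0 else "omega_2")
\end{verbatim}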
So, let $p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ denote the Wolfe dual parameter vectors of likelihoods $\boldsymbol{\psi}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }$ and $\boldsymbol{\psi}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }$. It follows that the class-conditional probability density functions $p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ in the Wolfe dual eigenspace $\widetilde{Z}$ and the class-conditional probability density functions $p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ in the decision space $Z$ satisfy the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \\
& =\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}
\end{align*}
over the decision space $Z$, where $Z=Z_{1}+Z_{2}$, $Z_{1}\simeq Z_{2}$, and $Z\subset\mathbb{R}^{d}$. Thus, it is concluded that the quadratic eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\]
is the solution to the integral equation
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\label{Quadratic Eigenlocus Integral Equation I}\\
& +\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\nonumber\\
& -\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, the integral $\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ accounts for all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, and the integral $\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ accounts for all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{2}$ decision region; similarly,
the integral $\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ accounts for all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{2i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, and the integral $\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ accounts for all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{2i\ast}}$ extreme points that lie in the $Z_{2}$ decision region.

The equalizer statistics $+\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $-\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ ensure that the collective forces associated with the expected risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}_{1}\right) $ and $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{1}$ and class $\omega_{2}$, which are given by the respective integrals $\int_{Z}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ and $\int_{Z}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$, are symmetrically balanced with each other. Therefore, the classification system
\[
\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) +\delta\left( y\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
is in statistical equilibrium
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\label{Quadratic Eigenlocus Integral Equation II}\\
& +\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\nonumber\\
& -\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
where all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ decision region are symmetrically balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) &
:\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right)
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system is minimized.

Thus, the locus of principal eigenaxis components on $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ satisfies the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\\
& \equiv\frac{1}{2}\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are components of a principal eigenaxis $\boldsymbol{\kappa}$, and $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ is the total allowed eigenenergy exhibited by $\boldsymbol{\kappa}$.
The above integral equation can be written as
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Quadratic Eigenlocus Integral Equation III}\\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}\nonumber
\end{align}
where the integral $\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}$ accounts for all of the eigenenergies $\left\Vert \psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by the $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, the integral $\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}$ accounts for all of the eigenenergies $\left\Vert \psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by the $k_{\mathbf{x}_{1i\ast}}$ extreme points that lie in the $Z_{2}$ decision region, the integral $\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}$ accounts for all of the eigenenergies $\left\Vert \psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by the $k_{\mathbf{x}_{2i\ast}}$ extreme points that lie in the $Z_{1}$ decision region, and the integral $\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}$ accounts for all of the eigenenergies $\left\Vert \psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right\Vert _{\min_{c}}^{2}$ exhibited by the $k_{\mathbf{x}_{2i\ast}}$ extreme points that lie in the $Z_{2}$ decision region. The equalizer statistics $+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ and $-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}$ ensure that the integrals $\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}$ and $\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}$ are symmetrically balanced with each other.
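The region-by-region bookkeeping above can be mimicked numerically: each extreme point contributes an eigenenergy term, and the decision rule assigns it to $Z_{1}$ or $Z_{2}$. The sketch below is illustrative only; it reuses the toy fit and the \texttt{discriminant} helper defined earlier and tallies the per-point eigenenergies $\left\Vert \psi_{i\ast}k_{\mathbf{x}_{i\ast}}\right\Vert ^{2}$ by class and by decision region.
\begin{verbatim}
# Illustrative tally of per-point eigenenergies by class and decision
# region, reusing the toy fit and the discriminant helper defined earlier.
in_Z1 = np.array([discriminant(x) > 0 for x in X])  # region membership
E = psi ** 2 * np.diag(K)               # ||psi_i k_x_i||^2 per point
for label, cls in (("omega_1", y > 0), ("omega_2", y < 0)):
    print(label,
          "Z1:", E[cls & in_Z1].sum(),  # counter-risk / risk contributions
          "Z2:", E[cls & ~in_Z1].sum())
\end{verbatim}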
Equation (\ref{Quadratic Eigenlocus Integral Equation III}) can be rewritten as
\begin{align}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}-\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\label{Quadratic Eigenlocus Integral Equation IV}\\
& =\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}-\delta\left( y\right) \frac{1}{2}\sum\nolimits_{i=1}^{l}\psi_{i\ast}\text{,}\nonumber
\end{align}
where all of the eigenenergies $\left\Vert \psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right\Vert _{\min_{c}}^{2}$ associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ decision region are \emph{symmetrically balanced} with all of the eigenenergies $\left\Vert \psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right\Vert _{\min_{c}}^{2}$ and $\left\Vert \psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right\Vert _{\min_{c}}^{2}$ associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{2}$ decision region.

Given Eqs (\ref{Quadratic Eigenlocus Integral Equation I})--(\ref{Quadratic Eigenlocus Integral Equation IV}), it is concluded that quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ satisfy discrete and data-driven versions of the integral equation of binary classification in Eq. (\ref{Integral Equation of Likelihood Ratio and Decision Boundary}), the fundamental integral equation of binary classification for a classification system in statistical equilibrium in Eq. (\ref{Equalizer Rule}), and the corresponding integral equation for a classification system in statistical equilibrium in Eq. (\ref{Balancing of Bayes' Risks and Counteracting Risks}).
\subsection{Equilibrium Points of Integral Equations II}

Returning to the binary classification theorem, recall that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}\left( \mathbf{x}\right) \right) $ of a classification system $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}\\
& =\int_{Z_{1}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}+\int_{Z_{2}}p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}\left( \mathbf{x}\right) |\omega_{2}\right) =0$ is the focus of a decision boundary $D\left( \mathbf{x}\right) $. Returning to Eq. (\ref{Wolfe Dual Equilibrium Point Q})
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=0\text{,}
\]
it follows that the Wolfe dual quadratic eigenlocus $\boldsymbol{\psi}$ of likelihoods and principal eigenaxis components $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) $
\begin{align*}
\boldsymbol{\psi} & =\sum\nolimits_{i=1}^{l}\psi_{i\ast}\frac{k_{\mathbf{x}_{i\ast}}}{\left\Vert k_{\mathbf{x}_{i\ast}}\right\Vert }\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }\\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}
\end{align*}
is the \emph{equilibrium point} $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }=0
\]
of the integral equation in Eq. (\ref{Quadratic Eigenlocus Integral Equation}) and all of its derivatives in Eqs (\ref{Quadratic Eigenlocus Integral Equation I})--(\ref{Quadratic Eigenlocus Integral Equation IV}).
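For a concrete finite-sample picture of this equilibrium point, each reproducing kernel $k_{\mathbf{x}}$ can be identified with its column of the Gram matrix and normalized by $\left\Vert k_{\mathbf{x}}\right\Vert =\sqrt{K\left( \mathbf{x},\mathbf{x}\right) }$. The sketch below is an assumed sample-basis stand-in, not a representation prescribed by the text: it builds $\boldsymbol{\psi}_{1}$ and $\boldsymbol{\psi}_{2}$ this way and reports their equal scalar magnitudes $\sum\nolimits_{i}\psi_{1i\ast}$ and $\sum\nolimits_{i}\psi_{2i\ast}$.
\begin{verbatim}
# Sample-basis sketch of psi_1 and psi_2: identify each kernel k_x with its
# Gram-matrix column, normalized by ||k_x|| = sqrt(K(x, x)).  This finite
# representation is an assumption of the example.
norms = np.sqrt(np.diag(K))
psi1_vec = K[:, y > 0] @ (psi[y > 0] / norms[y > 0])
psi2_vec = K[:, y < 0] @ (psi[y < 0] / norms[y < 0])
print(psi[y > 0].sum(), psi[y < 0].sum())   # equal scalar magnitudes
\end{verbatim}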
Therefore, it is concluded that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }=0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\\
& +\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \\
& =\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\\
& -\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where the equilibrium point $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }=0$ is the dual focus of a quadratic decision boundary $D\left( \mathbf{s}\right) $.

I will now develop an integral equation that explicitly accounts for the primal focus $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $ of a quadratic decision boundary $D\left( \mathbf{s}\right) $ and the equilibrium point $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) $ of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $.

\section{The Balancing Feat in Dual Space II}

Let $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and substitute the statistic for $\kappa_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two Q})
\begin{align*}
\kappa_{0} & =-\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{1}}\psi_{1j\ast}k_{\mathbf{x}_{1j\ast}}\\
& +\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{2}}\psi_{2j\ast}k_{\mathbf{x}_{2j\ast}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
into Eq.
(\ref{Minimum Eigenenergy Class One Q})
\[
\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\boldsymbol{\kappa}=\psi_{1i\ast}\left( 1-\xi_{i}-\kappa_{0}\right) ,\ i=1,\ldots,l_{1}\text{,}
\]
where each conditional density $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }$ of a $k_{\mathbf{x}_{1i\ast}}$ extreme point satisfies the identity
\[
\psi_{1i\ast}\left( 1-\xi_{i}\right) \equiv\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\boldsymbol{\kappa}+\psi_{1i\ast}\kappa_{0}\text{.}
\]
Accordingly, the above identity can be rewritten in terms of an eigenlocus equation that is satisfied by the conditional density $\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }$ of a $k_{\mathbf{x}_{1i\ast}}$ extreme point
\begin{align}
\psi_{1i\ast} & =\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \label{Pointwise Conditional Density Constraint One Q}\\
& +\psi_{1i\ast}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} \nonumber\\
& +\xi_{i}\psi_{1i\ast}+\delta\left( y\right) \psi_{1i\ast}\text{,}\nonumber
\end{align}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, $\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}$ is a principal eigenaxis component on $\boldsymbol{\kappa}_{1}$, and the set of scaled extreme vectors $\psi_{1i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) $ is symmetrically distributed over $\boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}$:
\[
\psi_{1i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{2}-\psi_{1i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{1}\text{.}
\]
Again, let $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$. Substitute the statistic for $\kappa_{0}$ in Eq. (\ref{Eigenlocus Projection Factor Two Q})
\begin{align*}
\kappa_{0} & =-\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{1}}\psi_{1j\ast}k_{\mathbf{x}_{1j\ast}}\\
& +\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\sum\nolimits_{j=1}^{l_{2}}\psi_{2j\ast}k_{\mathbf{x}_{2j\ast}}+\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
into Eq. (\ref{Minimum Eigenenergy Class Two Q})
\[
-\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\boldsymbol{\kappa}=\psi_{2i\ast}\left( 1-\xi_{i}+\kappa_{0}\right) ,\ i=1,\ldots,l_{2}\text{,}
\]
where each conditional density $\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }$ of a $k_{\mathbf{x}_{2i\ast}}$ extreme point satisfies the identity
\[
\psi_{2i\ast}\left( 1-\xi_{i}\right) =-\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\boldsymbol{\kappa}-\psi_{2i\ast}\kappa_{0}\text{.}
\]
Accordingly, the above identity can be rewritten in terms of an eigenlocus equation that is satisfied by the conditional density $\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }$ of a $k_{\mathbf{x}_{2i\ast}}$ extreme point
\begin{align}
\psi_{2i\ast} & =\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \label{Pointwise Conditional Density Constraint Two Q}\\
& +\psi_{2i\ast}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} \nonumber\\
& +\xi_{i}\psi_{2i\ast}-\delta\left( y\right) \psi_{2i\ast}\text{,}\nonumber
\end{align}
where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, $\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}$ is a principal eigenaxis component on $\boldsymbol{\kappa}_{2}$, and the set of scaled extreme vectors $\psi_{2i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) $ is symmetrically distributed over $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$:
\[
\psi_{2i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{1}-\psi_{2i\ast}\left( \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{2}\text{.}
\]
Using Eqs (\ref{Integral Equation Class One Q}) and (\ref{Pointwise Conditional Density Constraint One Q}), it follows that the conditional probability $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ of observing the set $\left\{ k_{\mathbf{x}_{1i\ast}}\right\} _{i=1}^{l_{1}}$ of $k_{\mathbf{x}_{1i\ast}}$ extreme points within localized regions of the decision space $Z$ is determined by the eigenlocus equation
\begin{align}
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) & =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \label{Bayes' Risk One Q}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} \nonumber\\
& +\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1i\ast}\nonumber\\
& \equiv\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\text{,}\nonumber
\end{align}
where $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ evaluates to $\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}$, and scaled extreme vectors are symmetrically distributed over $\boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}$ in the following manner:
\[
\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{2}-\left( \sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{1}\text{.}
\]
Using Eqs (\ref{Integral Equation Class Two Q}) and (\ref{Pointwise Conditional Density Constraint Two Q}), it follows that the conditional probability $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ of observing the set $\left\{ k_{\mathbf{x}_{2i\ast}}\right\} _{i=1}^{l_{2}}$ of $k_{\mathbf{x}_{2i\ast}}$ extreme points within localized regions of the decision space $Z$ is determined by the eigenlocus equation
\begin{align}
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) & =\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \label{Bayes' Risk Two Q}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} \nonumber\\
& -\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2i\ast}\nonumber\\
& \equiv\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\text{,}\nonumber
\end{align}
where $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ evaluates to $\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}$, and scaled extreme vectors are symmetrically distributed over $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ in the following manner:
\[
\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{1}-\left( \sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\right) \boldsymbol{\kappa}_{2}\text{.}
\]
I will now use Eqs (\ref{Bayes' Risk One Q}) and (\ref{Bayes' Risk Two Q}) to devise an equilibrium equation that determines the overall manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ minimize the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ and the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ for a given decision space $Z$.

\subsection{Minimization of Risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ and Eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$}

Take the estimates in Eqs (\ref{Bayes' Risk One Q}) and (\ref{Bayes' Risk Two Q})
\[
P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}
\]
and
\[
P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) =\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}
\]
for the conditional probabilities $P\left( k_{\mathbf{x}_{1i\ast}}|\boldsymbol{\kappa}_{1}\right) $ and $P\left( k_{\mathbf{x}_{2i\ast}}|\boldsymbol{\kappa}_{2}\right) $ of observing the $k_{\mathbf{x}_{1i\ast}}$ and $k_{\mathbf{x}_{2i\ast}}$ extreme points within localized regions of the decision space $Z$.
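On the toy example, these two estimates are simply the class-wise sums of the fitted dual coefficients, and the equilibrium constraint makes them equal; normalizing by $\sum\nolimits_{i}\psi_{i\ast}$ gives probability one half for each class. A brief, purely illustrative check:
\begin{verbatim}
# Illustrative check: the class-wise sums of psi estimate the two
# conditional probabilities, and they agree by the equilibrium constraint.
P1_hat, P2_hat = psi[y > 0].sum(), psi[y < 0].sum()
print(P1_hat / psi.sum(), P2_hat / psi.sum())   # both approximately 0.5
\end{verbatim}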
Given that the Wolfe dual eigenlocus of principal eigenaxis components and likelihoods
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) & =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }
\end{align*}
satisfies the equilibrium equation
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }\text{,}
\]
it follows that the conditional probabilities of observing the $k_{\mathbf{x}_{1_{i\ast}}}$ and the $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points within localized regions of the decision space $Z$ are equal to each other
\[
P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) =P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) \text{.}
\]
Accordingly, set the vector expressions in Eqs (\ref{Bayes' Risk One Q}) and (\ref{Bayes' Risk Two Q}) equal to each other
\begin{align*}
& \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} \\
& +\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i_{\ast}}}\\
& =\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i_{\ast}}}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} \\
& -\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i_{\ast}}}\text{.}
\end{align*}
It follows that the equilibrium equation
\begin{align}
& \left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\label{Balancing Bayes' Risk Quadratic}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} +\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i_{\ast}}}\nonumber\\
& =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\nonumber\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} +\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i_{\ast}}}\nonumber
\end{align}
is satisfied by the equilibrium point
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0
\]
and the likelihood ratio
\[
\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right)
\]
of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where the equilibrium equation in Eq. (\ref{Balancing Bayes' Risk Quadratic}) is constrained by the equilibrium point in Eq. (\ref{Wolfe Dual Equilibrium Point Q}).

I will now use Eq. (\ref{Balancing Bayes' Risk Quadratic}) to develop a fundamental quadratic eigenlocus integral equation of binary classification for a classification system in statistical equilibrium.

\subsection{Fundamental Balancing Feat in Dual Spaces II}

Returning to Eq. (\ref{Conditional Probability Function for Class One Q})
\begin{align*}
P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) & =\int_{Z}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\\
& =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}=\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+C_{1}
\end{align*}
and Eq. (\ref{Conditional Probability Function for Class Two Q})
\begin{align*}
P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) & =\int_{Z}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}=\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}+C_{2}\text{,}
\end{align*}
and using Eq.
(\ref{Balancing Bayes' Risk Quadratic})
\begin{align*}
& \left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} +\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i_{\ast}}}\\
& =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} +\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
where $P\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) =P\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $, it follows that the value for the integration constant $C_{1}$ in Eq. (\ref{Conditional Probability Function for Class One Q}) is
\[
C_{1}=-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}+\sum\nolimits_{i=1}^{l_{1}}\xi_{i}\psi_{1_{i_{\ast}}}\text{,}
\]
and that the value for the integration constant $C_{2}$ in Eq. (\ref{Conditional Probability Function for Class Two Q}) is
\[
C_{2}=-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}+\sum\nolimits_{i=1}^{l_{2}}\xi_{i}\psi_{2_{i_{\ast}}}\text{.}
\]
Substituting the values for $C_{1}$ and $C_{2}$ into Eqs (\ref{Conditional Probability Function for Class One Q}) and (\ref{Conditional Probability Function for Class Two Q}) produces the integral equation
\begin{align*}
& f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) =\int_{Z}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} \\
& =\int_{Z}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} \text{,}
\end{align*}
over the decision space $Z$, where the equalizer statistics
\[
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) =\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}+\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\}
\]
and
\[
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) =-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\}
\]
ensure that the eigenenergy $\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is symmetrically balanced with the eigenenergy $\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$.
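For $\xi_{i}=0$, the integration constants above reduce to the negative inner product of $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$. A minimal sketch of this reduction, assuming $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are available as explicit coordinate vectors (the numeric values are hypothetical, chosen only for illustration):
\begin{verbatim}
# Sketch of the integration constants C1 and C2 (assumptions: kappa1 and
# kappa2 are explicit coordinate vectors, and xi_i = 0 for a full rank
# kernel matrix, so the slack terms vanish).
import numpy as np

kappa1 = np.array([1.5, 0.5, -0.2])   # hypothetical values
kappa2 = np.array([0.9, 1.1, 0.3])

n1, n2 = np.linalg.norm(kappa1), np.linalg.norm(kappa2)
cos_theta = (kappa1 @ kappa2) / (n1 * n2)

C1 = -n1 * n2 * cos_theta             # equals -kappa1 . kappa2
C2 = -n2 * n1 * cos_theta             # identical: symmetric in kappa1, kappa2
print(C1, C2)
\end{verbatim}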
The primal class-conditional probability density functions $p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ and the Wolfe dual class-conditional probability density functions $p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ satisfy the integral equation in the following manner
\begin{align}
& f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) =\int\nolimits_{Z}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \label{Quadratic Eigenlocus Integral Equation V}\\
& +k_{\widehat{\mathbf{x}}_{i\ast}}\left[ p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) -p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) \right] p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& =\int\nolimits_{Z}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \nonumber\\
& +k_{\widehat{\mathbf{x}}_{i\ast}}\left[ p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) -p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) \right] p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
over the $Z_{1}$ and $Z_{2}$ decision regions, where $k_{\widehat{\mathbf{x}}_{i\ast}}\triangleq\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}$.
The equalizer statistics
\begin{align}
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) & =\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \label{Equalizer Statistic Class One Q}\\
& +k_{\widehat{\mathbf{x}}_{i\ast}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber\\
& -k_{\widehat{\mathbf{x}}_{i\ast}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) \nonumber
\end{align}
and
\begin{align}
\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) & =-\delta\left( y\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \label{Equalizer Statistic Class Two Q}\\
& +k_{\widehat{\mathbf{x}}_{i\ast}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \nonumber\\
& -k_{\widehat{\mathbf{x}}_{i\ast}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) \text{,}\nonumber
\end{align}
where $p\left( \frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }|\boldsymbol{\psi}_{1}\right) $ and $p\left( \frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }|\boldsymbol{\psi}_{2}\right) $ determine the equilibrium point
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ in Eq. (\ref{Quadratic Eigenlocus Integral Equation V}), ensure that all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right) $ in the $Z_{1}$ decision region, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{1i\ast}}$ and $k_{\mathbf{x}_{2i\ast}}$ extreme points in the $Z_{1}$ decision region, are \emph{symmetrically balanced with} all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2i\ast}}\right) $ and the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}k_{\mathbf{x}_{1i\ast}}\right) $ in the $Z_{2}$ decision region, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{2i\ast}}$ and $k_{\mathbf{x}_{1i\ast}}$ extreme points in the $Z_{2}$ decision region, such that the collective forces associated with the integrals $\int_{Z}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ and $\int_{Z}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ are \emph{symmetrically balanced with} each other.
The above integral equation can be written as
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{1}}\left( \mathbf{s}\right) \right) \\
& =\int_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{2}}\left( \mathbf{s}\right) \right) \text{,}
\end{align*}
where the integral $\int_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ accounts for all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points that lie in the $Z_{1}$ decision region; the integral $\int_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}$ accounts for all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\psi_{1i\ast}k_{\mathbf{x}_{1_{i\ast}}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{1_{i\ast}}}$ extreme points that lie in the $Z_{2}$ decision region; the integral $\int_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ accounts for all of the forces associated with the counter risks $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points that lie in the $Z_{2}$ decision region; and the integral $\int_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}$ accounts for all of the forces associated with the risks $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\psi_{2i\ast}k_{\mathbf{x}_{2_{i\ast}}}\right) $, which are related to positions and potential locations of corresponding $k_{\mathbf{x}_{2_{i\ast}}}$ extreme points that lie in the $Z_{1}$ decision region.

It follows that the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks a point of statistical equilibrium $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$ where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.
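The four integrals can be read as class-conditional region masses, which suggests a direct numerical check. The sketch below is illustrative only and rests on two assumptions: scikit-learn's \texttt{SVC} with a second-order polynomial kernel stands in for the quadratic eigenlocus transform, and each risk or counter risk is estimated as the Monte Carlo mass of one class-conditional distribution falling in the $Z_{1}$ or $Z_{2}$ decision region.
\begin{verbatim}
# Monte Carlo sketch of the four forces (assumptions: SVC with a quadratic
# polynomial kernel induces the decision regions Z1, Z2, and each risk or
# counter risk is estimated as a class-conditional region mass).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
cov2 = [[1.0, 0.5], [0.5, 2.0]]
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 500),   # omega_1
               rng.multivariate_normal([2, 2], cov2, 500)])       # omega_2
y = np.hstack([np.ones(500), -np.ones(500)])
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0).fit(X, y)

S1 = rng.multivariate_normal([0, 0], np.eye(2), 100000)  # fresh omega_1 draws
S2 = rng.multivariate_normal([2, 2], cov2, 100000)       # fresh omega_2 draws
in_Z1 = lambda S: clf.decision_function(S) > 0

counter_risk_Z1_w1 = in_Z1(S1).mean()     # omega_1 mass in Z1
risk_Z2_w1 = 1.0 - counter_risk_Z1_w1     # omega_1 mass in Z2
risk_Z1_w2 = in_Z1(S2).mean()             # omega_2 mass in Z1
counter_risk_Z2_w2 = 1.0 - risk_Z1_w2     # omega_2 mass in Z2
print(counter_risk_Z1_w1, risk_Z2_w1, risk_Z1_w2, counter_risk_Z2_w2)
\end{verbatim}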
Therefore, it is concluded that the quadratic eigenlocus discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\]
is the solution to the integral equation
\begin{align*}
& f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& +\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{2}-\boldsymbol{\kappa}_{1}\right) \right\} \\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\\
& -\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\left\{ \sum\nolimits_{j=1}^{l}k_{\mathbf{x}_{j\ast}}\left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) \right\} \text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are symmetrical decision regions $Z_{1}\simeq Z_{2}$ that have respective counter risks
\[
\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) \text{ \ and \ }\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right)
\]
and respective risks
\[
\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) \text{ \ and \ }\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) \text{,}
\]
where all of the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$ in the $Z_{1}$ decision region and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ for class $\omega_{2}$ in the $Z_{2}$ decision region are balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$ in the $Z_{1}$ decision region and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ for class $\omega_{1}$ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widehat{\Lambda}\left( \mathbf{x}\right) \right) & :\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) +\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) +\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0$
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\frac{k_{\mathbf{x}_{1i\ast}}}{\left\Vert k_{\mathbf{x}_{1i\ast}}\right\Vert }-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\frac{k_{\mathbf{x}_{2i\ast}}}{\left\Vert k_{\mathbf{x}_{2i\ast}}\right\Vert }=0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $.

It follows that the classification system
\[
\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \left( \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right) +\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
is in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\int_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}-\int_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{1}}\left( \mathbf{s}\right) \right) \\
& =\int_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}-\int_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\nabla_{eq}\left( \widehat{\Lambda}_{\boldsymbol{\psi}_{2}}\left( \mathbf{s}\right) \right) \text{,}
\end{align*}
where all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ decision region are balanced with all of the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) \text{,}
\end{align*}
and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) $ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :E_{\min_{c}}\left( Z_{1}|\boldsymbol{\kappa}_{1}\right) -E_{\min_{c}}\left( Z_{1}|\boldsymbol{\kappa}_{2}\right) \\
& =E_{\min_{c}}\left( Z_{2}|\boldsymbol{\kappa}_{2}\right) -E_{\min_{c}}\left( Z_{2}|\boldsymbol{\kappa}_{1}\right) \text{.}
\end{align*}
Therefore, it is concluded that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point $\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) $
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) =0
\]
of the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) d\boldsymbol{\kappa}_{1}\\
& +\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) \\
& =\int_{Z_{2}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}+\int_{Z_{1}}p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) d\boldsymbol{\kappa}_{2}\\
& +\nabla_{eq}\left( p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) \text{,}
\end{align*}
where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

Figure \ref{Symmetrical Balance of Bayes' Error Quadratic} illustrates the manner in which quadratic eigenlocus discriminant functions $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ minimize the total allowed eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ and the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ of quadratic classification systems.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.6063in]{Figure44.png}}
\caption{Quadratic eigenlocus transforms generate quadratic classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ that satisfy a fundamental integral equation of binary classification for a classification system in statistical equilibrium.}
\label{Symmetrical Balance of Bayes' Error Quadratic}
\end{figure}

By way of illustration, Fig. \ref{Bayes' Decision Boundaries Quadratic} shows that quadratic eigenlocus transforms generate decision regions $Z_{1}$ and $Z_{2}$ that minimize the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\boldsymbol{\kappa}\right) $ for overlapping data distributions that have dissimilar covariance matrices, completely overlapping data distributions, and non-overlapping and overlapping data distributions that have similar covariance matrices.
Accordingly, quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generate optimal quadratic decision boundaries for overlapping data distributions that have dissimilar covariance matrices (see Fig. \ref{Bayes' Decision Boundaries Quadratic}a and Fig. \ref{Bayes' Decision Boundaries Quadratic}b) and completely overlapping data distributions (see Fig. \ref{Bayes' Decision Boundaries Quadratic}c and Fig. \ref{Bayes' Decision Boundaries Quadratic}d), where unconstrained, primal principal eigenaxis components (extreme points) are enclosed in blue circles. Moreover, quadratic eigenlocus discriminant functions provide estimates of linear decision boundaries for non-overlapping data distributions that have similar covariance matrices (see Fig. \ref{Bayes' Decision Boundaries Quadratic}e) and overlapping data distributions that have similar covariance matrices (see Fig. \ref{Bayes' Decision Boundaries Quadratic}f).
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure45.png}}
\caption{Quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ generate optimal quadratic decision boundaries for $\left( 1\right) $ overlapping data distributions that have dissimilar covariance matrices: see $\left( a\right) $ and $\left( b\right) $, $\left( 2\right) $ completely overlapping data distributions: see $\left( c\right) $ and $\left( d\right) $, and $\left( 3\right) $ linear decision boundaries for data distributions that have similar covariance matrices: see $\left( e\right) $ and $\left( f\right) $.}
\label{Bayes' Decision Boundaries Quadratic}
\end{figure}

I will now show that quadratic eigenlocus transforms generate linear decision boundaries that are approximated by second-order curves.

\subsection{Approximations of Linear Decision Boundaries}

Consider the decision rule in Eq. (\ref{General Gaussian Equalizer Rule}) for two classes of Gaussian data that have similar covariance matrices ($\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$), where the likelihood ratio test $\widehat{\Lambda}\left( \mathbf{x}\right) \overset{H_{1}}{\underset{H_{2}}{\gtrless}}0$ generates linear decision boundaries. So, take any given quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ that is determined by a quadratic eigenlocus transform for any two sets of Gaussian data that have similar covariance matrices: $\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$.

Recall that, for any given linear or quadratic eigenlocus transform, the inner product statistics contained within a Gram or kernel matrix determine the shapes of three symmetrical quadratic partitioning surfaces. Recall also that topological properties exhibited by loci of vectors are unchanged if directed line segments of vectors are replaced by sinuous curves.
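This near-linearity lends itself to a quick numerical illustration. The sketch below is not the author's construction; it assumes that prediction agreement on a dense grid between a quadratic-kernel machine and a linear one is an adequate proxy for boundary shape, and it uses scikit-learn's \texttt{SVC} as the solver. One would expect similar behavior from the Gaussian reproducing kernel rule with $\gamma=1/100$ discussed below, since at that length scale the kernel varies slowly.
\begin{verbatim}
# Sketch (assumption: grid prediction agreement between a quadratic-kernel
# machine and a linear one is a proxy for boundary shape): for equal
# covariance matrices Sigma_1 = Sigma_2 the two boundaries nearly coincide.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
cov = [[1.0, 0.3], [0.3, 1.0]]               # Sigma_1 = Sigma_2 = Sigma
X = np.vstack([rng.multivariate_normal([0, 0], cov, 200),
               rng.multivariate_normal([3, 3], cov, 200)])
y = np.hstack([np.ones(200), -np.ones(200)])

quad = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0).fit(X, y)
lin = SVC(kernel="linear").fit(X, y)

gx, gy = np.meshgrid(np.linspace(-3, 6, 200), np.linspace(-3, 6, 200))
G = np.c_[gx.ravel(), gy.ravel()]
print((quad.predict(G) == lin.predict(G)).mean())  # close to 1.0
\end{verbatim}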
Given that second-order polynomial reproducing kernels approximate vectors with continuous, second-order curves, it follows that the inner product elements of the kernel matrix determine eigenvalues which determine three symmetrical hyperplane partitioning surfaces for the two given sets of Gaussian data that have similar covariance matrices: $\mathbf{\Sigma}_{1}=\mathbf{\Sigma}_{2}=\mathbf{\Sigma}$.

Therefore, the class-conditional probability densities $p\left( k_{\mathbf{x}_{1_{i\ast}}}|\boldsymbol{\kappa}_{1}\right) $ and $p\left( k_{\mathbf{x}_{2_{i\ast}}}|\boldsymbol{\kappa}_{2}\right) $ specified by the parameter vector of likelihoods $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$
\begin{align*}
\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) & =\sum\nolimits_{i=1}^{l_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\\
& -\sum\nolimits_{i=1}^{l_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}
\end{align*}
describe similar covariance matrices $\mathbf{\Sigma}_{1}$ and $\mathbf{\Sigma}_{2}$ for class $\omega_{1}$ and class $\omega_{2}$
\[
\mathbf{\Sigma}_{1}\approx\mathbf{\Sigma}_{2}\text{,}
\]
where each pointwise conditional density
\[
p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{1_{i\ast}}}\text{ or }p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) k_{\mathbf{x}_{2_{i\ast}}}
\]
describes a distribution of first and second degree coordinates for an extreme point $\mathbf{x}_{1_{i_{\ast}}}$ or $\mathbf{x}_{2_{i_{\ast}}}$. It follows that quadratic eigenlocus transforms generate linear decision boundaries that are approximated by second-order curves.

Therefore, it is concluded that linear decision boundaries are generated by quadratic eigenlocus classification systems $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. It is also concluded that linear decision boundaries are generated by quadratic eigenlocus classification systems $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{H_{1}}{\underset{H_{2}}{\gtrless}}0$ based on Gaussian reproducing kernels $\exp\left( -\gamma\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, where the hyperparameter $\gamma=1/100$ and the quadratic eigenlocus decision rule is
\begin{equation}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\text{.} \label{Quadratic Equalizer Rule Based on Gaussian RK}
\end{equation}
I am now in a position to formally state a \emph{discrete, quadratic classification theorem}.
\section*{Discrete Quadratic Classification Theorem}

Take a collection of $d$-component random vectors $\mathbf{x}$ that are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics, and let $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a discrete, quadratic classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category, and $\boldsymbol{\kappa}$ is a locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\kappa} & \triangleq\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \\
& =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i\ast}}}\text{,}
\end{align*}
where $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ are reproducing kernels for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$: the reproducing kernel $K(\mathbf{x},\mathbf{s})=k_{\mathbf{s}}(\mathbf{x})$ is either $k_{\mathbf{s}}(\mathbf{x})\triangleq\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ or $k_{\mathbf{s}}(\mathbf{x})\triangleq\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, $\mathbf{x}_{1_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, $\psi_{1_{i_{\ast}}}$ and $\psi_{2_{i_{\ast}}}$ are scale factors that provide unit measures of likelihood for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ which lie in either overlapping regions or tail regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and $\kappa_{0}$ is a functional of $\boldsymbol{\kappa}$
\[
\kappa_{0}=\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\boldsymbol{\kappa}\text{,}
\]
where $\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}=\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}+\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}$ is a cluster of reproducing kernels of the data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ used to form $\boldsymbol{\kappa}$, $y_{i}$ are class membership statistics (if $\mathbf{x}_{i\ast}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i\ast}\in\omega_{2}$, assign $y_{i}=-1$), and $\xi_{i}$ are regularization parameters: $\xi_{i}=\xi=0$ for full rank kernel matrices or $\xi_{i}=\xi\ll1$ for low rank kernel matrices.
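The bias term $\kappa_{0}$ above can be evaluated directly once the scale factors are known. A minimal sketch under stated assumptions: the extreme points, labels, and scale factors are hypothetical values chosen only to satisfy the equilibrium constraint $\sum_{i}y_{i}\psi_{i}=0$, $\xi_{i}=0$ so that the first term reduces to $\sum_{i}y_{i}$, and $k_{\mathbf{x}_{i\ast}}\boldsymbol{\kappa}$ is read as the evaluation $\boldsymbol{\kappa}\left( \mathbf{x}_{i\ast}\right) $ via the reproducing property.
\begin{verbatim}
# Sketch of the bias term kappa_0 (assumptions: hypothetical extreme points
# and scale factors satisfying sum_i y_i psi_i = 0; xi_i = 0; K is the
# second-order polynomial kernel matrix over the extreme points).
import numpy as np

def poly2(A, B):
    # k(x, s) = (x^T s + 1)^2, evaluated for all pairs of rows of A and B
    return (A @ B.T + 1.0) ** 2

Xs = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 1.0]])
ys = np.array([1.0, 1.0, -1.0, -1.0])
psi = np.array([0.4, 0.6, 0.5, 0.5])      # sum_i y_i psi_i = 0 holds

K = poly2(Xs, Xs)
kappa_at_extremes = K @ (psi * ys)        # kappa(x_i*) at each extreme point
kappa0 = ys.sum() - kappa_at_extremes.sum()
print(kappa0)
\end{verbatim}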
The quadratic discriminant function
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}
\]
is the solution to the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are symmetrical decision regions: $Z_{1}\simeq Z_{2}$ and $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=0
\]
of the integral equation $f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $, where the equilibrium point is a dual locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\psi} & \triangleq\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{s}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }
\end{align*}
that is constrained to be in statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\text{.}
\]
Therefore, the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{1_{i\ast}}}$ of data points $\mathbf{x}_{1_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{2_{i\ast}}}$ of data points $\mathbf{x}_{2_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.

Furthermore, the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\equiv\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\]
where the total eigenenergy
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\\
& +\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right)
\end{align*}
of the discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is determined by the eigenenergies associated with the position or location of the likelihood ratio $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and the locus of a corresponding quadratic decision boundary $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=0$.
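The expansion of the total eigenenergy is the law of cosines for the difference of two vectors, which can be verified directly. A minimal sketch, assuming $\boldsymbol{\kappa}_{1}$ and $\boldsymbol{\kappa}_{2}$ are explicit coordinate vectors and using $\cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}=\cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}$:
\begin{verbatim}
# Numerical check of the eigenenergy decomposition (assumption: kappa1 and
# kappa2 are explicit coordinate vectors; the two cosine terms coincide).
import numpy as np

rng = np.random.default_rng(3)
kappa1, kappa2 = rng.normal(size=5), rng.normal(size=5)

n1, n2 = np.linalg.norm(kappa1), np.linalg.norm(kappa2)
cos_t = (kappa1 @ kappa2) / (n1 * n2)

lhs = np.linalg.norm(kappa1 - kappa2) ** 2
rhs = (n1 ** 2 - n1 * n2 * cos_t) + (n2 ** 2 - n2 * n1 * cos_t)
print(np.isclose(lhs, rhs))  # True: ||k1-k2||^2 = ||k1||^2 - 2 k1.k2 + ||k2||^2
\end{verbatim}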
It follows that the discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is in statistical equilibrium
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) : & \int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}-\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
where the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ decision region are balanced with the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) \\
& =\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) -\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right)
\end{align*}
such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system is minimized, and the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ decision region are balanced with the eigenenergies associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ in the $Z_{2}$ decision region
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & :E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) -E_{\min}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)
|\omega_{2}\right) \right) \\
& =E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) -E_{\min}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right)
\end{align*}
such that the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system is minimized.

Thus, any given discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system: for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

Therefore, a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ seeks a point of statistical equilibrium where the opposing forces and influences of the classification system are balanced with each other, such that the eigenenergy and the expected risk of the classification system are minimized, and the classification system is in statistical equilibrium.

I will now show that the eigenenergy of a discrete, quadratic classification system is conserved and remains relatively constant, so that the eigenenergy and the expected risk of a discrete, quadratic classification system cannot be created or destroyed, but only transferred from one classification system to another.
\section*{Law of Conservation of Eigenenergy:}

\subsection*{For Discrete Quadratic Classification Systems}

Take a collection of $N$ random vectors $\mathbf{x}$ of dimension $d$ that are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics, where the number of random vectors $\mathbf{x}\sim p\left( \mathbf{x}|\omega_{1}\right) $ equals the number of random vectors $\mathbf{x}\sim p\left( \mathbf{x}|\omega_{2}\right) $, and let $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote the likelihood ratio test for a discrete, quadratic classification system, where $\omega_{1}$ or $\omega_{2}$ is the true data category, and $\boldsymbol{\kappa}$ is a locus of principal eigenaxis components and likelihoods
\begin{align*}
\boldsymbol{\kappa} & \triangleq\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) -p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \\
& =\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i\ast}}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i\ast}}}\text{,}
\end{align*}
where $k_{\mathbf{x}_{1_{i\ast}}}$ and $k_{\mathbf{x}_{2_{i\ast}}}$ are reproducing kernels for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$: the reproducing kernel $K(\mathbf{x},\mathbf{s})=k_{\mathbf{s}}(\mathbf{x})$ is either $k_{\mathbf{s}}(\mathbf{x})\triangleq\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$ or $k_{\mathbf{s}}(\mathbf{x})\triangleq\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, $\mathbf{x}_{1_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{1}\right) $, $\mathbf{x}_{2_{i_{\ast}}}\sim p\left( \mathbf{x}|\omega_{2}\right) $, $\psi_{1_{i_{\ast}}}$ and $\psi_{2_{i_{\ast}}}$ are scale factors that provide unit measures of likelihood for respective data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ which lie in either overlapping regions or tail regions of data distributions related to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and $\kappa_{0}$ is a functional of $\boldsymbol{\kappa}$
\[
\kappa_{0}=\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) -\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}\boldsymbol{\kappa}\text{,}
\]
where $\sum\nolimits_{i=1}^{l}k_{\mathbf{x}_{i\ast}}=\sum\nolimits_{i=1}^{l_{1}}k_{\mathbf{x}_{1_{i\ast}}}+\sum\nolimits_{i=1}^{l_{2}}k_{\mathbf{x}_{2_{i\ast}}}$ is a cluster of reproducing kernels of the data points $\mathbf{x}_{1_{i_{\ast}}}$ and $\mathbf{x}_{2_{i_{\ast}}}$ used to form $\boldsymbol{\kappa}$, $y_{i}$ are class membership statistics (if $\mathbf{x}_{i\ast}\in\omega_{1}$, assign $y_{i}=1$; if $\mathbf{x}_{i\ast}\in\omega_{2}$, assign $y_{i}=-1$), and $\xi_{i}$ are regularization parameters: $\xi_{i}=\xi=0$ for full rank kernel matrices or $\xi_{i}=\xi\ll1$ for low rank kernel matrices.
The expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are governed by the equilibrium point
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}=0
\]
of the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $Z_{1}$ and $Z_{2}$ are symmetrical decision regions: $Z_{1}\simeq Z_{2}$, $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, and the forces associated with the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{1_{i\ast}}}$ of data points $\mathbf{x}_{1_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $, are balanced with the forces associated with the risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z_{1}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ and the counter risk $\overline{\mathfrak{R}}_{\mathfrak{\min}}\left( Z_{2}|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ in the $Z_{1}$ and $Z_{2}$ decision regions, which are related to positions and potential locations of reproducing kernels $k_{\mathbf{x}_{2_{i\ast}}}$ of data points $\mathbf{x}_{2_{i_{\ast}}}$ that are generated according to $p\left( \mathbf{x}|\omega_{2}\right) $.
Furthermore, the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ given class $\omega_{1}$ is balanced with the eigenenergy $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ associated with the position or location of the likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $ given class $\omega_{2}$
\[
\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\equiv\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\]
where
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\\
& +\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right) \text{.}
\end{align*}
The eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}=\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}$ is the state of a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ that is associated with the position or location of a dual likelihood ratio
\begin{align*}
\boldsymbol{\psi} & \triangleq\widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) =p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \\
& =\boldsymbol{\psi}_{1}+\boldsymbol{\psi}_{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }+\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\text{,}
\end{align*}
which is constrained to be in statistical equilibrium
\[
\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\frac{k_{\mathbf{x}_{1_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{1_{i\ast}}}\right\Vert }=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\frac{k_{\mathbf{x}_{2_{i\ast}}}}{\left\Vert k_{\mathbf{x}_{2_{i\ast}}}\right\Vert }\text{,}
\]
and the locus of a corresponding quadratic decision boundary $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=0$.
Thus, any given discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibits an error rate that is consistent with the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the corresponding eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system: for all random vectors $\mathbf{x}$ that are generated according to $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, where $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics.

The total eigenenergy of a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is found by adding up contributions from characteristics of the classification system: the eigenenergies $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) $ and $E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) $ associated with the positions or locations of the class-conditional likelihood ratios $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) $ and $p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) $, where
\[
E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{1}\right) \right) =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}\text{ \ and \ }E_{\min}\left( Z|p\left( \widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) |\omega_{2}\right) \right) =\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}\text{,}
\]
are related to eigenenergies associated with positions and potential locations of extreme points that lie in either overlapping regions or tail regions of statistical distributions related to the class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $, and the total eigenenergy $\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}$ satisfies the vector equations
\begin{align*}
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2} & =\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\kappa}_{1}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{1}\right\Vert \left\Vert \boldsymbol{\kappa}_{2}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{1}\boldsymbol{\kappa}_{2}}\\
& +\left\Vert \boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}-\left\Vert \boldsymbol{\kappa}_{2}\right\Vert \left\Vert \boldsymbol{\kappa}_{1}\right\Vert \cos\theta_{\boldsymbol{\kappa}_{2}\boldsymbol{\kappa}_{1}}
\end{align*}
and
\[
\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right) \text{.}
\]

Any given discrete, quadratic classification system that is determined by a likelihood ratio test
\[
\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
where the class-conditional probability density functions $p\left( \mathbf{x}|\omega_{1}\right) $ and $p\left( \mathbf{x}|\omega_{2}\right) $ are related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics, and the locus of a quadratic decision boundary
\[
D\left( \mathbf{x}\right) :\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}=0
\]
is governed by the locus of a dual likelihood ratio $p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) +p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) $ in statistical equilibrium
\[
p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{1}\right) \rightleftharpoons p\left( \widehat{\Lambda}_{\boldsymbol{\psi}}\left( \mathbf{x}\right) |\omega_{2}\right) \text{,}
\]
is a closed classification system.

Thus, the total eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $
\begin{align*}
E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & \triangleq\left\Vert \boldsymbol{\kappa}\right\Vert _{\min_{c}}^{2}\\
& =\left\Vert \boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}\right\Vert _{\min_{c}}^{2}\\
& =\sum\nolimits_{i=1}^{l_{1}}\psi_{1i\ast}\left( 1-\xi_{i}\right) +\sum\nolimits_{i=1}^{l_{2}}\psi_{2i\ast}\left( 1-\xi_{i}\right)
\end{align*}
of any given discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is conserved and remains relatively constant. Therefore, the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ cannot be created or destroyed, but only transferred from one classification system to another. It follows that the corresponding expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of a discrete, quadratic classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ likewise cannot be created or destroyed, but only transferred from one classification system to another.

I will now identify the fundamental property which is common to each of the scaled extreme points on any given likelihood ratio $\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $ and quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $ that is determined by a quadratic eigenlocus decision rule $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{H_{1}}{\underset{H_{2}}{\gtrless}}0$.
\subsubsection{Inherent Property of Eigen-scaled Extreme Points on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$}

Given that a quadratic eigenlocus $\boldsymbol{\kappa}=\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is a locus of likelihoods that determines a likelihood ratio $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $ and a locus of principal eigenaxis components that determines the coordinate system of a quadratic decision boundary $D_{0}\left( \mathbf{s}\right) $, it follows that the total allowed eigenenergy
\[
\left\Vert \psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}\text{ \ or \ }\left\Vert \psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i\ast}}}\right\Vert _{\min_{c}}^{2}
\]
exhibited by each scaled extreme vector
\[
\psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\text{ \ or \ }\psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i\ast}}}
\]
on $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ and the corresponding class-conditional risk
\[
\int_{Z_{2}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{1}\left( k_{\mathbf{x}_{1i\ast}}\right) \text{ \ or \ }\int_{Z_{1}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{2}\left( k_{\mathbf{x}_{2_{i\ast}}}\right)
\]
or class-conditional counter risk
\[
\int_{Z_{1}}p\left( k_{\mathbf{x}_{1i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{1}\left( k_{\mathbf{x}_{1i\ast}}\right) \text{ \ or \ }\int_{Z_{2}}p\left( k_{\mathbf{x}_{2i\ast}}|\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2i\ast}}}}\left( \overrightarrow{\boldsymbol{\kappa}}\right) \right) d\boldsymbol{\kappa}_{2}\left( k_{\mathbf{x}_{2_{i\ast}}}\right)
\]
possessed by each extreme point $k_{\mathbf{x}_{1_{i_{\ast}}}}$ or $k_{\mathbf{x}_{2_{i\ast}}}$, which are determined by $\left\Vert \psi_{1_{i_{\ast}}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}$ or $\left\Vert \psi_{2_{i_{\ast}}}k_{\mathbf{x}_{2_{i\ast}}}\right\Vert _{\min_{c}}^{2}$, \emph{jointly satisfy} the fundamental quadratic eigenlocus integral equation of binary classification in Eq. (\ref{Quadratic Eigenlocus Integral Equation V}). Thereby, it is concluded that the \emph{fundamental property} possessed by each of the scaled extreme points on a quadratic eigenlocus $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ is the \emph{total allowed eigenenergy} exhibited by a corresponding, scaled extreme vector.

I will now devise an expression for a quadratic eigenlocus that is a locus of discrete conditional probabilities.
\section{Quadratic Eigenlocus of Probabilities}

Write a quadratic eigenlocus $\boldsymbol{\kappa}$ in terms of
\begin{align*}
\boldsymbol{\kappa} & =\lambda_{\max_{\psi}}^{-1}\sum\nolimits_{i=1}^{l_{1}}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert ^{2}\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \\
& -\lambda_{\max_{\psi}}^{-1}\sum\nolimits_{i=1}^{l_{2}}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert ^{2}\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \text{,}
\end{align*}
where $\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) $ and $\widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) $ denote the symmetrically balanced, signed magnitudes in Eqs. (\ref{Unidirectional Scaling Term One1 Q}) and (\ref{Unidirectional Scaling Term Two1 Q}): the terms $\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }$ and $\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }$ have been introduced and rearranged.

Next, rewrite $\boldsymbol{\kappa}$ in terms of total allowed eigenenergies
\begin{align}
\boldsymbol{\kappa} & =\sum\nolimits_{i=1}^{l_{1}}\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) ^{\frac{1}{2}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }\label{Probabilisitc Expression for Normal Eigenlocus Q}\\
& -\sum\nolimits_{i=1}^{l_{2}}\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) ^{\frac{1}{2}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }\text{,}\nonumber
\end{align}
where the conditional probability $\mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ of observing a $k_{\mathbf{x}_{1_{i_{\ast}}}}$ extreme point within a localized region $\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) $ of a decision space $Z=Z_{1}+Z_{2}$ is given by the expression
\[
\mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) =\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) ^{\frac{1}{2}}k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where $\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \subset Z_{1}$ or $\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \subset Z_{2}$, and the conditional probability $\mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ of observing a $k_{\mathbf{x}_{2_{i_{\ast}}}}$ extreme point within a localized region $\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) $ of a decision space $Z$ is given by the expression
\[
\mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) =\left\Vert \lambda_{\max_{\psi}}^{-1}\left( \widehat{\operatorname{cov}}_{sm_{\updownarrow}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) ^{\frac{1}{2}}k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert _{\min_{c}}^{2}\text{,}
\]
where $\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \subset Z_{1}$ or $\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \subset Z_{2}$.

Now rewrite Eq. (\ref{Probabilisitc Expression for Normal Eigenlocus Q}) as a locus of discrete conditional probabilities
\begin{align}
\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2} & =\sum\nolimits_{i=1}^{l_{1}}\mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) \frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }\label{SDNE Conditional Likelihood Ratio Q}\\
& -\sum\nolimits_{i=1}^{l_{2}}\mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) \frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }\nonumber
\end{align}
which provides discrete measures for conditional probabilities of classification error
\[
\mathcal{P}_{\min_{e}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|Z_{2}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) \text{ and }\mathcal{P}_{\min_{e}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|Z_{1}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right)
\]
for $k_{\mathbf{x}_{1_{i_{\ast}}}}$ extreme points that lie in the $Z_{2}$ decision region and $k_{\mathbf{x}_{2_{i_{\ast}}}}$ extreme points that lie in the $Z_{1}$ decision region.

I will now use Eq. (\ref{SDNE Conditional Likelihood Ratio Q}) to devise a probabilistic expression for a quadratic eigenlocus discriminant function.

\subsection{A Probabilistic Expression for $\boldsymbol{\kappa}$}

Returning to Eq. (\ref{Statistical Locus of Category Decision Q}), consider the estimate $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $ that an unknown pattern vector $\mathbf{x}$ is located within some particular region of $\mathbb{R}^{d}$
\begin{align*}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert \\
& \mathbf{+}\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right)
\end{align*}
based on the value of the decision locus $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ and the class membership statistic $\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, where $\operatorname{comp}_{\overrightarrow{\widehat{\boldsymbol{\kappa}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ denotes a signed magnitude $\left\Vert k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right\Vert \cos\theta$ along the axis of $\widehat{\boldsymbol{\kappa}}$, $\theta$ is the angle between the vector $\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) $ and $\widehat{\boldsymbol{\kappa}}$, and $\widehat{\boldsymbol{\kappa}}$ denotes the unit quadratic eigenlocus $\boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $.
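The decision statistic above is a scalar projection plus a bias. The following sketch evaluates it with finite-dimensional numpy arrays standing in for the reproducing kernels $k_{\mathbf{x}}$ and $k_{\widehat{\mathbf{x}}_{i\ast}}$ (an illustrative simplification of the kernel representation; the vectors, labels, and slack values are assumptions, not fitted quantities).
\begin{verbatim}
import numpy as np

def decision_statistic(k_x, k_xhat, kappa, y, xi):
    """Signed magnitude of (k_x - k_xhat) along kappa/||kappa||,
    plus the class membership statistic (1/||kappa||) sum y_i (1 - xi_i)."""
    norm = np.linalg.norm(kappa)
    comp = (k_x - k_xhat) @ (kappa / norm)   # ||k_x - k_xhat|| cos(theta)
    return comp + np.sum(y * (1.0 - xi)) / norm

rng = np.random.default_rng(1)
kappa = rng.normal(size=4)                   # stand-in quadratic eigenlocus
score = decision_statistic(rng.normal(size=4), rng.normal(size=4),
                           kappa, y=np.array([1.0, -1.0]), xi=np.zeros(2))
print("omega_1" if score > 0 else "omega_2")
\end{verbatim}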
I will now demonstrate that the signed magnitude expression $\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $ is a locus of discrete conditional probabilities.

Substitute the expression for $\boldsymbol{\kappa}_{1}-\boldsymbol{\kappa}_{2}$ in Eq. (\ref{SDNE Conditional Likelihood Ratio Q}) into the expression for the quadratic eigenlocus test in Eq. (\ref{NormalEigenlocusTestStatistic Q}). Denote the unit primal principal eigenaxis components $\frac{k_{\mathbf{x}_{1_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{1_{i_{\ast}}}}\right\Vert }$ and $\frac{k_{\mathbf{x}_{2_{i_{\ast}}}}}{\left\Vert k_{\mathbf{x}_{2_{i_{\ast}}}}\right\Vert }$ by $\widehat{k}_{\mathbf{x}_{1_{i_{\ast}}}}$ and $\widehat{k}_{\mathbf{x}_{2_{i_{\ast}}}}$. It follows that the probability that the unknown pattern vector $\mathbf{x}$ is located within a specific region of $\mathbb{R}^{d}$ is provided by the expression
\begin{align*}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\sum\nolimits_{i=1}^{l_{1}}\left[ \left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \widehat{k}_{\mathbf{x}_{1_{i_{\ast}}}}\right] \mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) \\
& -\sum\nolimits_{i=1}^{l_{2}}\left[ \left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \widehat{k}_{\mathbf{x}_{2_{i_{\ast}}}}\right] \mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) \\
& \mathbf{+}\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \text{,}
\end{align*}
where $\mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ and $\mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ provide discrete measures for conditional probabilities of classification errors $\mathcal{P}_{\min_{e}}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|Z_{2}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ and $\mathcal{P}_{\min_{e}}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|Z_{1}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ for $k_{\mathbf{x}_{1_{i_{\ast}}}}$ extreme points that lie in the $Z_{2}$ decision region and $k_{\mathbf{x}_{2_{i_{\ast}}}}$ extreme points that lie in the $Z_{1}$ decision region.
The above expression reduces to a locus of discrete conditional probabilities
\begin{align}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\sum\nolimits_{i=1}^{l_{1}}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) \mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) \label{Normal Eigenlocus Likelihood Ratio Q}\\
& -\sum\nolimits_{i=1}^{l_{2}}\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) \mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) \nonumber\\
& \mathbf{+}\frac{1}{\left\Vert \boldsymbol{\kappa}\right\Vert }\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \nonumber
\end{align}
so that the conditional probability $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ of finding the unknown pattern vector $\mathbf{x}$ within the localized region $\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) $ of the decision space $Z$ is determined by the likelihood statistic
\begin{equation}
\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) =\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) \mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) \text{,} \label{Probability Estimate One Q}
\end{equation}
and the conditional probability $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ of finding the unknown pattern vector $\mathbf{x}$ within the localized region $\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) $ of the decision space $Z$ is determined by the likelihood statistic
\begin{equation}
\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) =\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) \mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) \text{.} \label{Probability Estimate Two Q}
\end{equation}
Therefore, the likelihood statistic $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ in Eq. (\ref{Probability Estimate One Q}) is proportional, according to the signed magnitude $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{1_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ along the axis of $k_{\mathbf{x}_{1_{i_{\ast}}}}$, to the conditional probability $\mathcal{P}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) \right) $ of finding the extreme point $k_{\mathbf{x}_{1_{i_{\ast}}}}$ within a localized region $\tilde{Z}\left( k_{\mathbf{x}_{1_{i_{\ast}}}}\right) $ of the decision space $Z$. Similarly, the likelihood statistic $\mathcal{P}\left( \mathbf{x}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ in Eq. (\ref{Probability Estimate Two Q}) is proportional, according to the signed magnitude $\operatorname{comp}_{\overrightarrow{k_{\mathbf{x}_{2_{i_{\ast}}}}}}\left( \overrightarrow{\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) }\right) $ along the axis of $k_{\mathbf{x}_{2_{i_{\ast}}}}$, to the conditional probability $\mathcal{P}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}|\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) \right) $ of finding the extreme point $k_{\mathbf{x}_{2_{i_{\ast}}}}$ within a localized region $\tilde{Z}\left( k_{\mathbf{x}_{2_{i_{\ast}}}}\right) $ of the decision space $Z$.

Thus, it is concluded that the signed magnitude expression $\left( k_{\mathbf{x}}-k_{\widehat{\mathbf{x}}_{i\ast}}\right) \boldsymbol{\kappa}\mathbf{/}\left\Vert \boldsymbol{\kappa}\right\Vert $ in Eq. (\ref{Normal Eigenlocus Likelihood Ratio Q}) is a locus of discrete conditional probabilities that satisfies the quadratic eigenlocus integral equation of binary classification in Eq. (\ref{Quadratic Eigenlocus Integral Equation V}).

\section{Design of Optimal Decision Systems}

I will now consider the design and development of optimal, statistical pattern recognition or classification systems using either linear eigenlocus or quadratic eigenlocus discriminant and decision functions. I will show that both classes of discriminant functions are scalable modules for optimal, statistical classification systems where the eigenenergy and the expected risk of the classification system are minimized. Accordingly, I will show that both classes of discriminant functions are scalable, individual components of optimal ensemble systems, where any given ensemble of linear or quadratic binary classifiers exhibits optimal generalization performance for an $M$-class feature space. I will also show that both classes of decision functions provide a practical statistical gauge for measuring data distribution overlap and the decision error rate for two given sets of feature or pattern vectors. The statistical gauge can also be used to identify homogeneous data distributions.

I will begin by demonstrating that linear and quadratic eigenlocus discriminant functions are characteristic functions for a given class $\omega_{i}$ of data. Given a subset $A$ of a larger set, the characteristic function $\chi_{A}$, sometimes also called the indicator function, is the function that is identically one on $A$ and zero elsewhere.

\subsection{Characteristic Functions}

Let $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ denote a linear or quadratic discriminant function for two given pattern classes $\omega_{i}$ and $\omega_{j}$, where the feature vectors in class $\omega_{i}$ have the training label $+1$, and the feature vectors in class $\omega_{j}$ have the training label $-1$. I will now demonstrate that $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ is a characteristic function or indicator function for feature vectors $\mathbf{x}$ that belong to class $\omega_{i}$: $\mathbf{x}\in\omega_{i}$. Let $A$ be the event that an unseen feature vector $\mathbf{x}\in\omega_{i}$ lies in the decision region $Z_{1}$, so that $\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1$.
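Before formalizing this, here is a quick Monte Carlo illustration of the identity derived next, $P\left( A\right) =E\left[ \chi_{A}\right] $: the sample mean of an indicator over repeated draws estimates the probability of the event. The scalar Gaussian event below is an illustrative assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)        # draws of a scalar statistic
chi_A = (x > 1.0).astype(float)     # indicator of the event A = {x > 1}
print(chi_A.mean())                 # approximates P(x > 1) ~ 0.1587
\end{verbatim}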
Returning to the event $A$: the probability $P\left( A\right) $ of $A$ can be written as an expectation as follows. Define the characteristic function
\[
\chi_{A}=\left\{
\begin{array}[c]{cc}
1\text{,} & \text{if event }A\text{ occurs}\\
0\text{,} & \text{otherwise.}
\end{array}
\right.
\]
Therefore, $\chi_{A}$ is a random variable and
\begin{align*}
E\left[ \chi_{A}\right]  & =\sum\nolimits_{r=0}^{1}rP\left( \chi_{A}=r\right) \\
& =P\left( A\right) \text{.}
\end{align*}
Thus,
\[
P\left( A\right) =E\left[ \chi_{A}\right] \text{.}
\]
Thereby, the discriminant function $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ is an indicator function $\chi_{\omega_{i}}$ for feature vectors $\mathbf{x}$ that belong to class $\omega_{i}$, where $\chi_{\omega_{i}}$ denotes the event that an unseen feature vector $\mathbf{x}\in\omega_{i}$ lies in the decision region $Z_{1}$, so that $\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1$
\begin{align*}
E\left[ \chi_{\omega_{i}}\right]  & =\sum\nolimits_{r=0}^{1}rP\left( \chi_{\omega_{i}}=r\right) \\
& =P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) \text{.}
\end{align*}
It follows that, for any given $M$-class feature space $\left\{ \omega_{i}\right\} _{i=1}^{M}$, an ensemble of $M-1$ discriminant functions $\sum\nolimits_{j=1}^{M-1}\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $, for which the discriminant function $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ is an indicator function $\chi_{\omega_{i}}$ for class $\omega_{i}$, provides $M-1$ characteristic functions $\chi_{\omega_{i}}$ for feature vectors $\mathbf{x}$ that belong to class $\omega_{i}$
\begin{align*}
E\left[ \chi_{\omega_{i}}\right]  & =\sum\nolimits_{j=1}^{M-1}\sum\nolimits_{r=0}^{1}rP\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) \\
& =\sum\nolimits_{j=1}^{M-1}P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) \\
& =\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\text{,}
\end{align*}
where the probability $E\left[ \chi_{\omega_{i}}\right] $ that an unseen feature vector $\mathbf{x}$ belongs to class $\omega_{i}$ satisfies a fundamental integral equation of binary classification for a classification system in statistical equilibrium
\begin{align*}
\mathfrak{R}_{\mathfrak{B}}\left( Z|\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}p\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\mathfrak{B}_{ij}}+\int_{Z_{2}}p\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) |\omega_{2}\right) d\widehat{\Lambda}_{\mathfrak{B}_{ij}}+\nabla_{eq}\\
& =\int_{Z_{2}}p\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\mathfrak{B}_{ij}}+\int_{Z_{1}}p\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) |\omega_{1}\right) d\widehat{\Lambda}_{\mathfrak{B}_{ij}}-\nabla_{eq}\text{,}
\end{align*}
over the decision regions $Z_{1}$ and $Z_{2}$, where $\nabla_{eq}$ is an equalizer statistic, for each discriminant function $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ in the ensemble $\sum\nolimits_{j=1}^{M-1}\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $.

Thus, $E\left[ \chi_{\omega_{i}}\right] $ is determined by an ensemble of $M-1$ discriminant functions $\sum\nolimits_{j=1}^{M-1}\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $, where the output of each characteristic function $\chi_{\omega_{i}}$ is determined by the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|Z_{2}\right) $ for class $\omega_{i}$, given the discriminant function $\widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) $ and the decision region $Z_{2}$.

I will now show that linear eigenlocus decision rules are scalable modules for optimal linear classification systems.

\subsection{Linear Eigenlocus Decision Rules}

I have devised a system of data-driven, locus equations which determines unknown, linear discriminant functions
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}
\]
that are the basis of likelihood ratio tests
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
which generate linear decision boundaries that satisfy a fundamental integral equation of binary classification for a classification system in statistical equilibrium (see Fig. $\ref{Symetrically Balanced Eigenaxis of Linear Eigenlocus}$), whereby two-class feature spaces are divided into congruent decision regions such that, for all data distributions, the forces associated with counter risks and risks within each of the congruent decision regions are balanced with each other; and, for data distributions that have similar covariance matrices, the forces associated with counter risks within each of the congruent decision regions are equal to each other, and the forces associated with risks within each of the congruent decision regions are equal to each other.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure46.png}}
\caption{Linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium because the axis of a linear eigenlocus $\boldsymbol{\tau}$ is in statistical equilibrium.
\label{Symetrically Balanced Eigenaxis of Linear Eigenlocus}}
\end{figure}

I have demonstrated that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is the solution to the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right)  & =\int_{Z_{1}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\int_{Z_{2}}\boldsymbol{\tau}_{1}d\boldsymbol{\tau}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}+\int_{Z_{2}}\boldsymbol{\tau}_{2}d\boldsymbol{\tau}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $ of the classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized for data drawn from statistical distributions that have similar covariance matrices.

Thereby, a linear eigenlocus classification system $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generates the locus of a linear decision boundary for any two classes of feature vectors, including completely overlapping data distributions. For data distributions that have constant or unchanging statistics and similar covariance matrices, linear eigenlocus classification systems $\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibit optimal generalization performance, where the generalization error is the lowest possible decision error.

I will now argue that linear eigenlocus decision rules $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are scalable, individual components of superior or optimal ensemble systems, where any given ensemble of linear eigenlocus decision rules exhibits superior or optimal generalization performance for its $M$-class feature space.

\subsection{Ensemble Systems of Eigenlocus Decision Rules I}

Because linear eigenlocus decision rules involve linear combinations of extreme points and scaled extreme points
\begin{align*}
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right)  & =\left( \mathbf{x}-\sum\nolimits_{i=1}^{l}\mathbf{x}_{i\ast}\right) ^{T}\left[ \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\mathbf{x}_{1_{i\ast}}-\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\mathbf{x}_{2_{i\ast}}\right] \\
& \mathbf{+}\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\end{align*}
it follows that linear combinations of linear eigenlocus discriminant functions can be used to build optimal pattern recognition systems, where the \emph{overall system complexity is scale-invariant} for the feature space dimension and the number of pattern classes. Thus, linear eigenlocus decision rules $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are scalable modules for optimal linear classification systems.
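Since the expanded decision rule above is just sums of scaled extreme points, it can be evaluated directly. The following sketch does so with synthetic extreme points, $\psi$ coefficients, labels, and slack values (all illustrative assumptions; a fitted linear eigenlocus would supply these quantities).
\begin{verbatim}
import numpy as np

def linear_eigenlocus_rule(x, X1, psi1, X2, psi2, y, xi):
    """(x - sum_i x_i*)^T [sum psi1 x1* - sum psi2 x2*] + sum y_i (1 - xi_i)."""
    tau = psi1 @ X1 - psi2 @ X2              # scaled extreme-point combination
    x_sum = X1.sum(axis=0) + X2.sum(axis=0)  # sum over all extreme points
    return (x - x_sum) @ tau + np.sum(y * (1.0 - xi))

rng = np.random.default_rng(3)
X1, X2 = rng.normal(1, 1, (3, 2)), rng.normal(-1, 1, (2, 2))
psi1, psi2 = np.full(3, 0.4), np.full(2, 0.6)
y, xi = np.array([1, 1, 1, -1, -1.0]), np.zeros(5)
score = linear_eigenlocus_rule(rng.normal(size=2), X1, psi1, X2, psi2, y, xi)
print("omega_1" if score > 0 else "omega_2")
\end{verbatim}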
Denote an optimal pattern recognition system by $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $. I will now outline an architecture for an ensemble system of linear eigenlocus decision rules.

Given that a linear eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ is an indicator function $\chi_{\omega_{i}}$ for any given class of feature vectors $\omega_{i}$ that have the training label $+1$, it follows that the decision function $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\[
\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) =\operatorname{sign}\left( \boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\right) \text{,}
\]
where $\operatorname{sign}\left( x\right) \equiv\frac{x}{\left\vert x\right\vert }$ for $x\neq0$, provides a natural means for discriminating between multiple classes of data, where decisions can be made that are based on the \emph{largest probabilistic output} of decision banks $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ formed by linear combinations of linear eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\[
\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) =\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) \text{,}
\]
where the decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ for a pattern class $\omega_{i}$ is an ensemble
\[
\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right)
\]
of $M-1$ decision functions $\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) \right\} _{j=1}^{M-1}$ for which the pattern vectors in the given class $\omega_{i}$ have the training label $+1$, and the pattern vectors in all of the other pattern classes have the training label $-1$, and where the probabilistic output of any given decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ is given by a set of $M-1$ characteristic functions
\begin{align*}
E\left[ \chi_{\omega_{i}}\right]  & =\sum\nolimits_{j=1}^{M-1}P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) \\
& =\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\text{.}
\end{align*}
Decision banks $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ are formed by the system of scalable modules depicted in Fig. $\ref{System of Scalable Modules Linear Eigenlocus}$, where linear combinations of optimal binary linear classification systems can be used to build an optimal statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $ which distinguishes between the objects in $M$ different pattern classes: $\left\{ \omega_{i}\right\} _{i=1}^{M}$.
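The decision-bank architecture just described reduces to summing the signs of $M-1$ binary decision functions per class and then taking the largest bank output (the maximum value selector formalized below). A minimal sketch, with hand-written linear scorers standing in for trained eigenlocus discriminants (hypothetical stand-ins, not fitted rules):
\begin{verbatim}
import numpy as np

def decision_bank(x, scorers):
    """DB_omega_i(x) = sum_j sign(Lambda_j(x)) over the class's M-1 rules."""
    return sum(np.sign(f(x)) for f in scorers)

# Three classes, so each bank holds M-1 = 2 binary discriminants.
banks = {
    "omega_1": [lambda x: x[0] - x[1], lambda x: x[0] + 1.0],
    "omega_2": [lambda x: x[1] - x[0], lambda x: x[1] + 1.0],
    "omega_3": [lambda x: -x[0] - 1.0, lambda x: -x[1] - 1.0],
}
x = np.array([2.0, 0.5])
outputs = {name: decision_bank(x, fs) for name, fs in banks.items()}
print(max(outputs, key=outputs.get))   # maximum value selector over banks
\end{verbatim}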
Objects in pattern classes can involve any type of distinguishing features that have been extracted from collections of: $\left( 1\right) $ networks formed by interconnected systems of people and/or things, material or biological, $\left( 2\right) $ documents, $\left( 3\right) $ images, or $\left( 4\right) $ waveforms, signals, or sequences, including stationary random processes.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure47.png}}
\caption{Illustration of a system of scalable modules used to build optimal statistical pattern recognition machines. The system includes a feature extractor, a linear eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) =\boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}$ and a decision function $\operatorname{sign}\left( \boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\right) $.
\label{System of Scalable Modules Linear Eigenlocus}}
\end{figure}

I will devise an optimal, linear statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $ formed by $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right\} _{i=1}^{M}$ of linear eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\begin{equation}
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( \sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) \right) \right\} _{i=1}^{M}\text{,} \label{Ensemble of Linear Eigenlocus Decision Functions}
\end{equation}
where
\[
\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) =\operatorname{sign}\left( \boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\right) \text{,}
\]
that provides a set of $M\times(M-1)$ decision statistics
\[
\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) \right\} _{j=1}^{M\times(M-1)}
\]
for $M$ pattern classes $\left\{ \omega_{i}\right\} _{i=1}^{M}$, where each decision statistic is a characteristic function $\chi_{\omega_{i}}\mapsto P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) $ that is determined by an optimal likelihood ratio test for a two-class feature space
\[
\widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
such that the maximum value selector of the pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $ chooses the pattern class $\omega_{i}$ for which a decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ has the maximum probabilistic output
\[
D_{\mathfrak{B}}\left( \mathbf{x}\right) =\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right) \text{,}
\]
where the probabilistic output of each decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ is given by a set of $M-1$ characteristic functions
\begin{align*}
E\left[ \chi_{\omega_{i}}\right]  & =\sum\nolimits_{j=1}^{M-1}P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right) \\
& =\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\text{.}
\end{align*}

For data distributions that have similar covariance matrices, I will now show that the ensemble system of linear eigenlocus decision functions in Eq. (\ref{Ensemble of Linear Eigenlocus Decision Functions}) generates a set of linear decision boundaries and decision statistics that minimize the probability of misclassification or decision error for an $M$-class feature space.

\subsection{Expected Risk for Eigenlocus Ensemble Systems I}

Take any given $M$-class feature space, where all $M$ data distributions have similar covariance matrices. Now take any given ensemble system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $ formed by $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right\} _{i=1}^{M}$ of linear eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) $
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( \sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}}\left( \mathbf{x}\right) \right) \right) \right\} _{i=1}^{M}\text{,}
\]
where each likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in an ensemble $\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \right) $ minimizes the total probability of error and achieves the minimum possible error rate for two given pattern classes.

Next, take the decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) =\sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \right) $ for any given pattern class $\omega_{i}$, and let $\mathfrak{R}_{\mathfrak{\min}_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\right) $ denote the expected risk for class $\omega_{i}$ for any given likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$
\[
\mathfrak{R}_{\mathfrak{\min}_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\right) =\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}
\]
which determines the conditional probability of classification error for class $\omega_{i}$ in a two-class decision space, where $Z=Z_{1}+Z_{2}$. It follows that each classification system $\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ for class $\omega_{i}$ minimizes the expected risk $\mathfrak{R}_{\mathfrak{\min}_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\right) $ in that two-class decision space.
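In practice the risk integral above can be approximated by sampling: the expected risk for class $\omega_{i}$ under one binary rule is the fraction of $\omega_{i}$ draws whose decision statistic lands in $Z_{2}$. A Monte Carlo sketch under an assumed scalar Gaussian class model (illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
x_w1 = rng.normal(loc=1.0, scale=1.0, size=100_000)  # x ~ p(x|omega_1)
scores = x_w1                     # stand-in decision statistic Lambda(x)
risk_w1 = np.mean(scores < 0.0)   # omega_1 mass falling in Z_2 = {score < 0}
print(risk_w1)                    # approximates Phi(-1) ~ 0.1587
\end{verbatim}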
Now let $\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right) $ denote the expected risk for class $\omega_{i}$ for any given ensemble of linear eigenlocus decision rules
\[
\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) =\sum\nolimits_{j=1}^{M-1}\left[ \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\right]
\]
which determines the conditional probability of classification error for class $\omega_{i}$ in $M-1$ two-class decision spaces $\left\{ Z_{j}\right\} _{j=1}^{M-1}$, where the decision space $\overline{Z}_{i}$ for the ensemble of linear classifiers is the union of the decision spaces $Z_{1},Z_{2},\ldots,Z_{M-1}$
\[
\overline{Z}_{i}=\sum\nolimits_{j=1}^{M-1}Z_{j}=\cup_{j=1}^{M-1}\;Z_{j}\text{.}
\]
Therefore, given an ensemble of linear eigenlocus decision rules $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $, it follows that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right) $ for class $\omega_{i}$ in an $M$-class feature space is determined by
\begin{align*}
\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right)  & =\sum\nolimits_{j=1}^{M-1}\mathfrak{R}_{\mathfrak{\min}_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \right) \\
& =\sum\nolimits_{j=1}^{M-1}\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\text{,}
\end{align*}
where minimization of the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right) $ is equivalent to minimizing the total probability of decision error for class $\omega_{i}$ in the $M$-class decision space $\overline{Z}_{i}$.

Finally, let $\widehat{Z}=\cup_{i=1}^{M}\;\overline{Z}_{i}$ denote the decision space that is determined by the ensemble system
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( \sum\nolimits_{j=1}^{M-1}\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) \right) \right) \right\} _{i=1}^{M}
\]
of $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right\} _{i=1}^{M}$. It follows that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( \widehat{Z}\right) $ for all $M$ pattern classes $\left\{ \omega_{i}\right\} _{i=1}^{M}$ is
\begin{align*}
\mathfrak{R}_{\min}\left( \widehat{Z}\right)  & =\sum\nolimits_{i=1}^{M}\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right) =\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{M-1}\mathfrak{R}_{\mathfrak{\min}_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\right) \\
& =\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{M-1}\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\left( \mathbf{x}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\tau}_{j}}\text{,}
\end{align*}
where the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( \omega_{i}|\overline{Z}_{i}\right) $ for each pattern class $\omega_{i}$ is determined by an ensemble of linear classifiers $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) $ that minimize the total probability of error for class $\omega_{i}$ in the $M$-class decision space $\widehat{Z}$.
Thus, it is concluded that the ensemble system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) $ in Eq. (\ref{Ensemble of Linear Eigenlocus Decision Functions}) generates a set of linear decision boundaries and decision statistics that minimize the probability of decision error for an $M$-class feature space, where all $M$ data distributions have similar covariance matrices.

\subsection{Quadratic Eigenlocus Decision Rules}

I have devised a system of data-driven, locus equations which determines unknown, quadratic discriminant functions
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\text{,}
\]
based on second-order, polynomial reproducing kernels $\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}$, that are the basis of likelihood ratio tests
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
which generate quadratic decision boundaries that satisfy a fundamental integral equation of binary classification for a classification system in statistical equilibrium (see Fig. $\ref{Symetrically Balanced Eigenaxis of Quadratic Eigenlocus}$), whereby two-class feature spaces are divided into symmetrical decision regions such that, for data distributions that have dissimilar covariance matrices, the forces associated with the counter risks and the risks within each of the symmetrical decision regions are balanced with each other; and, for data distributions that have similar covariance matrices, the forces associated with the counter risks within each of the symmetrical decision regions are equal to each other, and the forces associated with the risks within each of the symmetrical decision regions are equal to each other.

The system of data-driven, locus equations is readily extended for Gaussian reproducing kernels $\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) $, where unknown, quadratic discriminant functions are defined by
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\text{.}
\]
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in,width=3.4411in]{Figure48.png}}
\caption{Quadratic eigenlocus classification systems $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\protect\overset{\omega_{1}}{\protect\underset{\omega_{2}}{\gtrless}}0$ are in statistical equilibrium because the axis of a quadratic eigenlocus $\boldsymbol{\kappa}$ is in statistical equilibrium.
\label{Symetrically Balanced Eigenaxis of Quadratic Eigenlocus}}
\end{figure}

I have demonstrated that a quadratic eigenlocus discriminant function $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$, where the reproducing kernel $k_{\mathbf{s}}$ for the data point $\mathbf{s}$ is either
\[
k_{\mathbf{s}}=\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\text{ \ or \ }k_{\mathbf{s}}=\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \text{,}
\]
is the solution to the integral equation
\begin{align*}
f\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right)  & =\int_{Z_{1}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\int_{Z_{2}}\boldsymbol{\kappa}_{1}d\boldsymbol{\kappa}_{1}+\delta\left( y\right) \sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i_{\ast}}}\\
& =\int_{Z_{1}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}+\int_{Z_{2}}\boldsymbol{\kappa}_{2}d\boldsymbol{\kappa}_{2}-\delta\left( y\right) \sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i_{\ast}}}\text{,}
\end{align*}
over the decision space $Z=Z_{1}+Z_{2}$, where $\delta\left( y\right) \triangleq\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) $, such that the expected risk $\mathfrak{R}_{\mathfrak{\min}}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ and the eigenenergy $E_{\min}\left( Z|\widehat{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ of the classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are minimized.

Thereby, a quadratic eigenlocus classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generates the locus of a quadratic decision boundary for any two classes of feature vectors, including completely overlapping data distributions. For any two classes of feature vectors $\mathbf{x}$ that have common covariance matrices, quadratic eigenlocus classification systems $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ generate the locus of a quadratic decision boundary that provides a robust estimate of a linear decision boundary.

Thus, for any given data distributions that have constant or unchanging statistics and similar or dissimilar covariance matrices, quadratic eigenlocus classification systems $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ exhibit optimal generalization performance, where the generalization error is the lowest possible decision error. Therefore, quadratic eigenlocus classification systems $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ \emph{automatically} generate the \emph{best decision boundary} for a wide variety of statistical pattern recognition tasks. Accordingly, any given quadratic eigenlocus classification system $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ achieves the lowest error rate that can be achieved by a discriminant function and the best generalization error that can be achieved by a learning machine.
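The two reproducing kernels named above are straightforward to implement; the Gaussian width $0.01$ follows the text, while the test vectors below are illustrative assumptions.
\begin{verbatim}
import numpy as np

def poly_kernel(x, s):
    """Second-order polynomial reproducing kernel (x^T s + 1)^2."""
    return (x @ s + 1.0) ** 2

def gaussian_kernel(x, s):
    """Gaussian reproducing kernel exp(-0.01 ||x - s||^2)."""
    return np.exp(-0.01 * np.sum((x - s) ** 2))

x, s = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_kernel(x, s), gaussian_kernel(x, s))
\end{verbatim}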
I will now argue that quadratic eigenlocus decision rules $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ are scalable, individual components of optimal ensemble systems, where any given ensemble of quadratic eigenlocus decision rules exhibits optimal generalization performance for its $M$-class feature space. The argument is developed for the class of quadratic eigenlocus decision rules defined by
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}
\]
and is applicable to the class of quadratic eigenlocus decision rules defined by
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert ^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\text{.}
\]

\subsection{Ensemble Systems of Eigenlocus Decision Rules II}

Because quadratic eigenlocus decision rules involve linear combinations of extreme vectors, scaled reproducing kernels of extreme points, class membership statistics, and regularization parameters
\begin{align*}
\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right)  & =\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}_{1}\\
& -\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right) ^{2}-\sum\nolimits_{i=1}^{l}\left( \mathbf{x}^{T}\mathbf{x}_{i\ast}+1\right) ^{2}\right) \boldsymbol{\kappa}_{2}\\
& \mathbf{+}\sum\nolimits_{i=1}^{l}y_{i}\left( 1-\xi_{i}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\end{align*}
where
\[
\boldsymbol{\kappa}_{1}=\sum\nolimits_{i=1}^{l_{1}}\psi_{1_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{1_{i\ast}}+1\right) ^{2}
\]
and
\[
\boldsymbol{\kappa}_{2}=\sum\nolimits_{i=1}^{l_{2}}\psi_{2_{i\ast}}\left( \mathbf{x}^{T}\mathbf{x}_{2_{i\ast}}+1\right) ^{2}\text{,}
\]
it follows that linear combinations of quadratic eigenlocus discriminant functions can be used to build optimal statistical pattern recognition systems $P_{\mathfrak{B}}\left( \mathbf{s}\right) $, where the \emph{overall system complexity is scale-invariant} for the feature space dimension and the number of pattern classes. Thus, quadratic eigenlocus decision rules $\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) $ are scalable modules for optimal quadratic classification systems. I will now outline an architecture for optimal ensemble systems of quadratic eigenlocus decision rules.
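Before turning to the architecture, one way to read the expansion above is as a kernel expansion over the extreme points: the statistic $\boldsymbol{\kappa}^{T}k_{\mathbf{s}}$ evaluates to $\sum_{i}\psi_{1_{i\ast}}k\left( \mathbf{x}_{1_{i\ast}},\mathbf{s}\right) -\sum_{i}\psi_{2_{i\ast}}k\left( \mathbf{x}_{2_{i\ast}},\mathbf{s}\right) $ plus the class membership statistic. The sketch below evaluates that reading with the polynomial kernel and synthetic $\psi$ values, extreme points, and labels (illustrative assumptions, not a fitted system):
\begin{verbatim}
import numpy as np

def k(x, s):
    return (x @ s + 1.0) ** 2                       # (x^T s + 1)^2 kernel

def quadratic_eigenlocus_rule(s, X1, psi1, X2, psi2, y, xi):
    score = psi1 @ np.array([k(x, s) for x in X1])  # kappa_1 contribution
    score -= psi2 @ np.array([k(x, s) for x in X2]) # kappa_2 contribution
    return score + np.sum(y * (1.0 - xi))           # class membership statistic

rng = np.random.default_rng(5)
X1, X2 = rng.normal(1, 1, (3, 2)), rng.normal(-1, 1, (3, 2))
psi1 = psi2 = np.full(3, 1 / 3)
y, xi = np.array([1, 1, 1, -1, -1, -1.0]), np.zeros(6)
s = np.array([0.8, 0.2])
score = quadratic_eigenlocus_rule(s, X1, psi1, X2, psi2, y, xi)
print("omega_1" if score > 0 else "omega_2")
\end{verbatim}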
Given that a quadratic eigenlocus discriminant function $\widetilde{\Lambda }_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\left( \mathbf{x ^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}$ is an indicator\emph{\ }function $\chi_{\omega_{i}}$ for any given class of feature vectors $\mathcal{\omega}_{i}$ that have the training label $+1$, it follows that the decision function $\operatorname{sign}\left( \widetilde{\Lambda }_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) \[ \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) =\operatorname{sign}\left( \left( \mathbf{x ^{T}\mathbf{s}+1\right) ^{2}\boldsymbol{\kappa}+\kappa_{0}\right) \text{, \] where $\operatorname{sign}\left( x\right) \equiv\frac{x}{\left\vert x\right\vert }$ for $x\neq0$, provides a natural means for discriminating between multiple classes of data, where decisions can be made that are based on the largest probabilistic output of decision banks $\mathcal{DB _{\omega_{i}}\left( \mathbf{s}\right) $ formed by linear combinations of quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) \right) $ \[ \mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j }\left( \mathbf{s}\right) \right) \text{, \] where the decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) $ for a pattern class $\mathcal{\omega}_{i}$ is an ensembl \ {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j }\left( \mathbf{s}\right) \right) \] of $M-1$ decision functions $\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) \right\} _{j=1}^{M-1}$ for which the pattern vectors in the given class $\mathcal{\omega}_{i}$ have the training label $+1$, and the pattern vectors in all of the other pattern classes have the training label $-1$. Accordingly, the probabilistic output of any given decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) $ is given by a set of $M-1$ characteristic functions \begin{align*} E\left[ \chi_{\omega_{i}}\right] & {\displaystyle\sum\nolimits_{j=1}^{M-1}} P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij }\left( \widetilde{\mathbf{x}}\right) \right) =1\right) \\ & {\displaystyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \widetilde{\mathbf{x}}\right) \right) =1\text{. \end{align*} Decision banks $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) $ are formed by the system of scalable modules depicted in Fig. 
$\ref{System of Scalable Modules Quadratic Eigenlocus}$, where linear combinations of optimal binary quadratic classification systems can be used to build an optimal statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ which distinguishes between the objects in $M$ different pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$: objects in pattern classes can involve any type of distinguishing features that have been extracted from collections of $\left( 1\right)$ networks formed by interconnected systems of people and/or things, material or biological, $\left( 2\right)$ documents, $\left( 3\right)$ images, or $\left( 4\right)$ waveforms, signals or sequences, including stationary random processes.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4402in]{Figure49.png}}
\caption{Illustration of a system of scalable modules used to build optimal statistical pattern recognition systems. The system includes a feature extractor, a quadratic eigenlocus discriminant function $\protect\widetilde{\Lambda}_{\boldsymbol{\kappa}}\left( \mathbf{s}\right) =\boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}$ and a decision function $\operatorname{sign}\left( \boldsymbol{\kappa}^{T}k_{\mathbf{s}}+\kappa_{0}\right)$.}
\label{System of Scalable Modules Quadratic Eigenlocus}
\end{figure}
I will devise an optimal, statistical pattern recognition system $P_{\mathfrak{B}}\left( \mathbf{s}\right)$ formed by $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) \right\}_{i=1}^{M}$ of quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right)$
\begin{equation}
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) \right) \right\}_{i=1}^{M}\text{,} \label{Ensemble of Quadratic Eigenlocus Decision Functions}
\end{equation}
where
\[
\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) =\operatorname{sign}\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right)^{2}\boldsymbol{\kappa}+\kappa_{0}\right)
\]
or
\[
\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) =\operatorname{sign}\left( \exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\right) \text{,}
\]
that provides a set of $M\times(M-1)$ decision statistics
\[
\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) \right\}_{j=1}^{M\times(M-1)}\text{,}
\]
for $M$ pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$, where each decision statistic is a characteristic function $\chi_{\omega_{i}}\mapsto P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right)$ that is determined by an optimal likelihood ratio test for a two-class feature space
\[
\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
such that the maximum value selector of the pattern recognition system $P_{\mathfrak{B}}\left( \mathbf{s}\right)$ chooses the pattern class $\omega_{i}$ for which a decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right)$ has
the maximum probabilistic output
\[
D_{\mathfrak{B}}\left( \mathbf{s}\right) =\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( \mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) \right) \text{,}
\]
where the probabilistic output of each decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right)$ is given by a set of $M-1$ characteristic functions
\begin{align*}
E\left[ \chi_{\omega_{i}}\right] & =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{s}\right) \right) =1\right) \\
& =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{s}\right) \right) =1\text{.}
\end{align*}
For data distributions that have unchanging mean and covariance functions, I\ will now show that the ensemble of quadratic eigenlocus decision functions in Eq. (\ref{Ensemble of Quadratic Eigenlocus Decision Functions}) generates a set of quadratic decision boundaries and decision statistics that minimize the probability of classification error for $M$ given pattern classes.

\subsection{Expected Risk for Eigenlocus Ensemble Systems II}

Take any given $M$-class feature space, where all $M$ data distributions have unchanging mean and covariance functions. Now take any given ensemble system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ formed by $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) \right\}_{i=1}^{M}$ of quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right)$
\[
P_{\mathfrak{B}}\left( \mathbf{s}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) \right) \right\}_{i=1}^{M}\text{,}
\]
where each likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ in an ensemble ${\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right)$ minimizes the total probability of error and achieves the lowest possible error rate for two given pattern classes. Next, take the decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) ={\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right)$ for any given pattern class $\omega_{i}$, and let $\mathfrak{R}_{\min_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\right)$ denote the expected risk for class $\omega_{i}$ for any given likelihood ratio test $\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$
\[
\mathfrak{R}_{\min_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\right) =\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}
\]
which determines the conditional probability of classification error for class $\omega_{i}$ in a two-class decision space, where $Z=Z_{1}+Z_{2}$.
It follows that each classification system $\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ for class $\omega_{i}$ minimizes the expected risk $\mathfrak{R}_{\min_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\right)$ in a two-class decision space, where $Z=Z_{1}+Z_{2}$. So, let $\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right)$ denote the expected risk for class $\omega_{i}$ for any given ensemble of quadratic eigenlocus decision rules
\[
\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\left[ \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\right]
\]
which determines the conditional probability of classification error for class $\omega_{i}$ in $M-1$ two-class decision spaces $\left\{ Z_{j}\right\}_{j=1}^{M-1}$, where the decision space $\overline{Z}_{i}$ for the ensemble of quadratic classifiers is the union of the decision spaces $Z_{1},Z_{2},\ldots,Z_{M-1}$
\[
\overline{Z}_{i}=
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
Z_{j}=\cup_{j=1}^{M-1}\;Z_{j}\text{.}
\]
Therefore, given an ensemble of quadratic eigenlocus decision rules $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right)$, it follows that the expected risk $\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right)$ for class $\omega_{i}$ in an $M$-class feature space is
\begin{align*}
\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right) & =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\mathfrak{R}_{\min_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\right) \\
& =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\text{,}
\end{align*}
where minimization of the expected risk $\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right)$ is equivalent to minimizing the total probability of decision error for class $\omega_{i}$ in the $M$-class decision space $\overline{Z}_{i}$. Finally, let $\widehat{Z}=\cup_{i=1}^{M}\;\overline{Z}_{i}$ denote the decision space that is determined by the ensemble system
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) \right) \right) \right\}_{i=1}^{M}
\]
of $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right) \right\}_{i=1}^{M}$.
It follows that the expected risk for all $M$ pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$ is
\begin{align*}
{\displaystyle\sum\nolimits_{i=1}^{M}}
\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right) & =
{\displaystyle\sum\nolimits_{i=1}^{M}}
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\mathfrak{R}_{\min_{ij}}\left( Z_{2}|\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\right) \\
& =
{\displaystyle\sum\nolimits_{i=1}^{M}}
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\int_{Z_{2}}p\left( \widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\left( \mathbf{s}\right) |\omega_{i}\right) d\widetilde{\Lambda}_{\boldsymbol{\kappa}_{j}}\text{,}
\end{align*}
where the expected risk $\mathfrak{R}_{\min}\left( \omega_{i}|\overline{Z}_{i}\right)$ for each pattern class $\omega_{i}$ is determined by an ensemble of quadratic classifiers $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right)$ that minimizes the total probability of misclassification or decision error for class $\omega_{i}$ in the $M$-class decision space $\widehat{Z}$. Thus, it is concluded that the ensemble system in Eq. (\ref{Ensemble of Quadratic Eigenlocus Decision Functions}) generates a set of quadratic decision boundaries and decision statistics that minimize the probability of classification error for an $M$-class feature space, where all $M$ data distributions have unchanging mean and covariance functions.

\subsection{Design of Decision Banks}

WLOG, let $\widetilde{\Lambda}\left( \mathbf{x}\right)$ denote a linear or a quadratic eigenlocus discriminant function. The design of optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ involves designing $M$ decision banks, where each decision bank contains an ensemble of $M-1$ decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$, and each decision function is determined by a feature extractor and a linear or a quadratic eigenlocus discriminant function $\widetilde{\Lambda}\left( \mathbf{x}\right)$. A\ feature extractor generates $d$-dimensional feature vectors from collections of networks, documents, images, \emph{or} signals for \emph{all} of the $M$ pattern classes. Alternatively, feature vectors from different data sources can be fused by means of inductive matrix completion methods.

\paragraph{Fusion of Feature Vectors}

Inductive matrix completion methods enable the fusion of feature vectors that have been extracted from collections of networks, documents, images, and signals, where the dimension of the feature vectors for different collections may differ \citep[see][]{Jain2013}. Fusion of feature vectors enables the production of optimal, statistical pattern recognition systems based on complex feature spaces. I\ will now outline the process for producing optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$. Suppose that $M$ sets of $d$-dimensional feature vectors have been extracted from either fused or non-fused collections of networks, documents, images, or signals for $M$ pattern classes. Optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ are produced in the following manner.

\subsubsection{Production of Decision Banks}

Let there be $M$ pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$ of $d$-dimensional feature vectors.
Produce a decision bank
\[
\mathcal{DB}_{\omega_{i}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right) \right)
\]
for each pattern class $\omega_{i}$ that consists of a bank or ensemble ${\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ of $M-1$ decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$. Accordingly, build $M-1$ linear or quadratic eigenlocus discriminant functions $\widetilde{\Lambda}\left( \mathbf{x}\right)$, where the feature vectors in the given class $\omega_{i}$ have the training label $+1$ and the feature vectors in all of the other pattern classes have the training label $-1$. It has been demonstrated that the decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right)$ for each pattern class $\omega_{i}$
\[
\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) =
{\textstyle\sum\nolimits_{j=1}^{M-1}}
\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)
\]
generates a set of $M-1$ decision statistics $\left\{ \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right) \right\}_{j=1}^{M-1}$, where each decision statistic $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ is a characteristic function $\chi_{\omega_{i}}\mapsto P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right)$ that is determined by a likelihood ratio test
\[
\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0
\]
for a two-class feature space. Accordingly, an optimal, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right) =\left\{ \mathcal{DB}_{\omega_{i}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right) \right) \right\}_{i=1}^{M}\text{,}
\]
contains $M$ decision banks $\left\{ \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right\}_{i=1}^{M}$, i.e., $M$ ensembles ${\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ of optimal decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$, all of which provide a set of $M\times(M-1)$ decision statistics $\left\{ \operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right) \right\}_{j=1}^{M\times(M-1)}$ that minimize the probability of decision error for an $M$-class feature space, such that the maximum value selector of the pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ chooses the pattern class $\omega_{i}$ for which a decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right)$ has the maximum probabilistic output
\[
D_{\mathfrak{B}}\left( \mathbf{x}\right) =\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( \mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right) \right) \text{,}
\]
where the probabilistic output of each decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{x}\right)$ is determined by a set of $M-1$ characteristic functions
\begin{align*}
E\left[ \chi_{\omega_{i}}\right] & =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right)
=1\right) \\
& =
{\displaystyle\sum\nolimits_{j=1}^{M-1}}
\operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\text{.}
\end{align*}
For data distributions that have unchanging mean and covariance functions, optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ that are formed by ensembles of quadratic eigenlocus decision functions generate a set of quadratic decision boundaries and decision statistics that minimize the probability of decision error. Moreover, for data distributions that have common covariance functions, ensembles of quadratic eigenlocus decision functions generate a set of linear decision boundary estimates and decision statistics that minimize the probability of decision error. It follows that ensembles of quadratic eigenlocus decision functions generate a set of decision boundaries and decision statistics that minimize the probability of decision error for any given sets of pattern or feature vectors which have been extracted from digital signals, images, documents, or networks and are generated according to probability density functions related to statistical distributions of random vectors that have constant or unchanging statistics.

For data distributions that have similar covariance matrices, optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ that are formed by ensembles of linear eigenlocus decision functions generate a set of linear decision boundaries and decision statistics that minimize the probability of decision error.

Because an optimal, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ is a linear combination of linear or quadratic eigenlocus discriminant functions $\widetilde{\Lambda}\left( \mathbf{x}\right)$, it follows that the overall network complexity is scale-invariant for the feature space dimension and the number of pattern classes. Moreover, optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ that achieve minimum error rates are optimal ensembles of binary classifiers, where each binary classifier in an ensemble based system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ achieves the lowest possible error rate and exhibits optimal generalization performance for its two-class feature space.

\subsection{Optimal Ensembles of Binary Classifiers}

Let $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ denote an optimal, statistical classification system that is formed by ensembles of quadratic eigenlocus discriminant functions. Optimal, statistical classification systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ are ensemble based systems that \emph{outperform} other ensembles of classifiers. I\ have demonstrated that optimal, statistical decision systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ can be formed by \emph{optimal ensembles} of individual binary classifiers, where each binary classifier is a high-performance learning machine that achieves the lowest possible error rate and exhibits optimal generalization performance for its two-class feature space.
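To summarize the architecture described above, the following is a minimal sketch of the decision-bank production procedure and the maximum value selector; it is an illustration under stated assumptions, not the author's implementation. The helper \texttt{fit\_binary} is hypothetical and stands in for whatever procedure solves a two-class eigenlocus system; the sketch also adopts one plausible reading of the $M-1$ count, namely that each member of a bank pairs class $\omega_{i}$ (label $+1$) against one of the other $M-1$ classes (label $-1$):
\begin{verbatim}
import numpy as np

def build_decision_banks(X, y, classes, fit_binary):
    """For each class, fit M-1 binary decision functions.

    fit_binary(X, labels) -> callable s -> sign(Lambda(s)); hypothetical.
    """
    banks = {}
    for ci in classes:
        members = []
        for cj in classes:
            if cj == ci:
                continue
            mask = (y == ci) | (y == cj)
            labels = np.where(y[mask] == ci, 1, -1)
            members.append(fit_binary(X[mask], labels))
        banks[ci] = members   # the M-1 decision functions of one bank
    return banks

def classify(s, banks):
    # maximum value selector: ArgMax over the summed signs of each bank
    outputs = {ci: sum(f(s) for f in members)
               for ci, members in banks.items()}
    return max(outputs, key=outputs.get)
\end{verbatim}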
In particular, optimal, statistical decision systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ formed by ensembles of quadratic eigenlocus decision functions $\operatorname{sign}\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right)^{2}\boldsymbol{\kappa}+\kappa_{0}\right)$ or $\operatorname{sign}\left( \exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\right)$ achieve minimum error rates and exhibit optimal generalization performance for $d$-component random vectors $\mathbf{x}$ that are generated according to probability density functions $p\left( \mathbf{x}|\omega_{1}\right)$ and $p\left( \mathbf{x}|\omega_{2}\right)$ related to statistical distributions of random vectors $\mathbf{x}$ that have constant or unchanging statistics and similar or dissimilar covariance matrices.

Figure \ref{Ensembles of Bayes' Tests} illustrates ensemble based systems of optimal likelihood ratio tests that generate optimal decision rules.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure50.png}}
\caption{Optimal, statistical decision systems $P_{\mathfrak{B}}\left( \mathbf{s}\right)$ are formed by $M$ decision banks or ensembles of optimal likelihood ratio tests, where each decision bank $\mathcal{DB}_{\omega_{i}}\left( \mathbf{s}\right)$ consists of an ensemble of $M-1$ optimal decision functions: ${\textstyle\protect\sum\nolimits_{j=1}^{M-1}}\operatorname{sign}\left( \protect\widetilde{\Lambda}_{j}\left( \mathbf{s}\right) \right)$.}
\label{Ensembles of Bayes' Tests}
\end{figure}
I will now outline a method for fusing feature vectors from different data sources that enables the production of optimal, statistical pattern recognition systems based on complex feature spaces.

\subsection{Fusion of Decision Banks}

Feature vectors that have been extracted from collections of networks, documents, images, or signals can be fused with each other by designing decision banks for data obtained from different sources and combining the outputs of the decision banks. I will outline the method for two different data sources. The method is readily extended to $L$ sources of data. Suppose that $M$ sets of $d$-dimensional and $n$-dimensional feature vectors have been extracted from two different collections of networks, documents, images, or signals for $M$ pattern classes. Optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ are produced in the following manner.

\subsubsection{Production of Fused Decision Banks}

Let there be $M$ pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$. Let $\mathcal{DB}_{\omega_{i1}}$ and $\mathcal{DB}_{\omega_{i2}}$ denote decision banks for $d$-dimensional and $n$-dimensional feature vectors respectively, where feature vectors in class $\omega_{i}$ have the training label $+1$ and feature vectors in all of the other pattern classes have the training label $-1$.
Produce the decision banks
\[
\mathcal{DB}_{\omega_{i1}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right) \right)
\]
and
\[
\mathcal{DB}_{\omega_{i2}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right) \right)
\]
for each pattern class $\omega_{i}$, where $\mathcal{DB}_{\omega_{i1}}$ and $\mathcal{DB}_{\omega_{i2}}$ consist of a bank or ensemble ${\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right)$ of $M-1$ quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right)$. Accordingly, for each decision bank, build $M-1$ quadratic eigenlocus discriminant functions $\widetilde{\Lambda}\left( \mathbf{x}\right)$, where the pattern vectors in the given class $\omega_{i}$ have the training label $+1$ and the pattern vectors in all of the other pattern classes have the training label $-1$. So, for each pattern class $\omega_{i}$, the decision banks $\mathcal{DB}_{\omega_{i1}}$ and $\mathcal{DB}_{\omega_{i2}}$
\[
\mathcal{DB}_{\omega_{i1}}\left( \mathbf{x}\right) =
{\textstyle\sum\nolimits_{j=1}^{M-1}}
\operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right)
\]
and
\[
\mathcal{DB}_{\omega_{i2}}\left( \mathbf{x}\right) =
{\textstyle\sum\nolimits_{j=1}^{M-1}}
\operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right)
\]
generate two sets of $M-1$ decision statistics
\[
\mathcal{DB}_{\omega_{i1}}\left( \mathbf{x}\right) =\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right) \right\}_{j=1}^{M-1}
\]
and
\[
\mathcal{DB}_{\omega_{i2}}\left( \mathbf{x}\right) =\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right) \right\}_{j=1}^{M-1}\text{,}
\]
where each decision statistic $\operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \right)$ is a characteristic function $\chi_{\omega_{i}}\mapsto P\left( \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{B}_{ij}}\left( \mathbf{x}\right) \right) =1\right)$ that is determined by an optimal likelihood ratio test for a two-class feature space
\[
\widetilde{\Lambda}_{j}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0\text{,}
\]
such that the maximum value selector of the statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ chooses the pattern class $\omega_{i}$ for which the fused decision banks ${\displaystyle\sum\nolimits_{j=1}^{2}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{x}\right)$ have the maximum probabilistic output
\[
D_{\mathfrak{B}}\left( \mathbf{s}\right) =\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( {\displaystyle\sum\nolimits_{j=1}^{2}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{s}\right) \right) \text{.}
\]
Accordingly, an optimal, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ formed by the fused decision banks $\mathcal{DB}_{\omega_{i1}}$ and $\mathcal{DB}_{\omega_{i2}}$
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right) =\left\{
\begin{array}[c]{c}
\mathcal{DB}_{\omega_{i1}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{BE}_{j}}\left( \mathbf{s}\right) \right) \right) \\
+\mathcal{DB}_{\omega_{i2}}\left( {\textstyle\sum\nolimits_{j=1}^{M-1}} \operatorname{sign}\left( \widehat{\Lambda}_{\mathfrak{BE}_{j}}\left( \mathbf{s}\right) \right) \right)
\end{array}
\right\}_{i=1}^{M}
\]
contains $2\times M$ decision banks, i.e., $2\times M$ ensembles of optimal, statistical decision functions, all of which provide a set of $2\times M\times(M-1)$ decision statistics
\[
\left\{ \operatorname{sign}\left( \widetilde{\Lambda}_{j}\left( \mathbf{s}\right) \right) \right\}_{j=1}^{2\times M\times(M-1)}
\]
that minimize the probability of decision error for two sources of data in an $M$-class feature space, such that the maximum value selector of the statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right) =\left\{
{\displaystyle\sum\limits_{j=1}^{2}}
\mathcal{DB}_{\omega_{ij}}\left( {\textstyle\sum\nolimits_{k=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{k}\left( \mathbf{s}\right) \right) \right) \right\}_{i=1}^{M}
\]
chooses the pattern class $\omega_{i}$ for which two fused decision banks ${\displaystyle\sum\nolimits_{j=1}^{2}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{s}\right)$ have the maximum probabilistic output
\[
D_{\mathfrak{B}}\left( \mathbf{s}\right) =\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( {\displaystyle\sum\nolimits_{j=1}^{2}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{s}\right) \right) \text{.}
\]
Thus, feature vectors from two different data sources can be fused by forming fused ensembles of quadratic eigenlocus decision functions. Feature vectors from two different data sources can also be fused by forming fused ensembles of linear eigenlocus decision functions. The ensemble method is readily extended to $L$ different data sources. Given that fusion of decision banks based on different data sources involves linear combinations of decision banks, it follows that optimal, statistical pattern recognition systems $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ can be designed for data sets that have been collected from $L$ different data sources
\[
\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right) =\left\{
{\displaystyle\sum\limits_{j=1}^{L}}
\mathcal{DB}_{\omega_{ij}}\left( {\textstyle\sum\nolimits_{k=1}^{M-1}} \operatorname{sign}\left( \widetilde{\Lambda}_{k}\left( \mathbf{s}\right) \right) \right) \right\}_{i=1}^{M}
\]
such that the maximum value selector of the optimal, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ chooses the pattern class $\omega_{i}$ for which $L$ fused decision banks ${\displaystyle\sum\nolimits_{j=1}^{L}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{s}\right)$ have the maximum probabilistic output
\[
D_{\mathfrak{B}}=\underset{i\in1,\cdots,M}{\operatorname{ArgMax}}\left( {\displaystyle\sum\nolimits_{j=1}^{L}} \mathcal{DB}_{\omega_{ij}}\left( \mathbf{s}\right) \right) \text{.}
\]
Figure \ref{Fused Ensembles of Bayes' Tests} illustrates how feature vectors from $L$ different data sources can be fused with each other by forming fused ensembles of linear or quadratic eigenlocus decision rules.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure51.png}}
\caption{Feature vectors from different data sources can be fused by forming fused ensembles of linear or quadratic eigenlocus decision functions.}
\label{Fused Ensembles of Bayes' Tests}
\end{figure}

\subsection{Perfect Generalization Performance}

All classes of feature vectors drawn from non-overlapping statistical or data distributions exhibit perfect discrimination capacity, where the expected risk and the decision error are zero. Therefore, given $M$ pattern classes $\left\{ \omega_{i}\right\}_{i=1}^{M}$, if a feature extractor can be developed for which negligible or no overlap exists between all $M$ data distributions, then the scale-invariance of the optimal, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ described above ensures low estimation variance and \emph{perfect} generalization performance. Moreover, $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{s}\right)$ can be replaced by an optimal, \emph{linear}, statistical pattern recognition system $\mathfrak{P}_{\boldsymbol{o}}\left( \mathbf{x}\right)$ formed by ensembles of linear eigenlocus decision functions.

The probability of error is the key parameter of all statistical pattern recognition systems \citep{Fukunaga1990,Jain2000}. The amount of overlap between data distributions determines the classification error rate, which is the lowest error rate that can be achieved by any statistical classifier. In general, the classification error rate is \emph{difficult to evaluate} \citep{Fukunaga1990}.

\subsection{Design of Feature Vectors}

\citet{Geman1992} suggested that some important biases or generalizations can be achieved through proper data representations. For the problem of learning discriminant functions and decision boundaries, an important form of proper data representation involves the identification and exploitation of distinguishing features that are simple to extract, invariant to irrelevant transformations, insensitive to noise, and useful for discriminating between objects in different categories \citep{Duda2001}.

\subsubsection{Sufficient Class Separability}

Useful sets of distinguishing features for discrimination tasks must exhibit sufficient class separability, i.e., a negligible overlap exists between all data distributions. In general, the design of distinguishing feature vectors which exhibit sufficient class separability is the most fundamental and difficult problem in the overall design of a statistical pattern recognition system \citep{Fukunaga1990,Jain2000,Duda2001}.

\subsection{Measures of Feature Vector Effectiveness}

The criteria used to evaluate the effectiveness of feature vectors \emph{must be} a measure of the \emph{overlap or class separability among data distributions} and \emph{not} a \emph{measure of fit} such as the mean-square error of a statistical model \citep{Fukunaga1990}. For example, the Bhattacharyya distance provides a convenient measure of class separability for two pattern classes. The measure provides an upper bound on the Bayes' error if training data are drawn from Gaussian distributions. However, the upper bound may be significantly higher than the Bayes' error.
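As a concrete reference point, for two Gaussian class-conditional densities $\mathcal{N}\left( \boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}\right)$ and $\mathcal{N}\left( \boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2}\right)$ the Bhattacharyya distance has the standard closed form found in texts such as \citep{Fukunaga1990}; the sketch below is textbook material, not part of the eigenlocus machinery itself:
\begin{verbatim}
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian densities:
    D_B = (1/8)(mu2-mu1)^T S^-1 (mu2-mu1)
          + (1/2) ln( det S / sqrt(det S1 * det S2) ),  S = (S1+S2)/2.
    """
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    # Mahalanobis-like mean-separation term
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # covariance-mismatch (log-determinant) term
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

# For priors P1, P2, the Bayes' error is bounded above by
#   P_err <= sqrt(P1 * P2) * exp(-D_B).
\end{verbatim}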
Furthermore, the Bhattacharyya distance is difficult to evaluate because the trace and the determinant of matrices are combined in the criterion \citep{Fukunaga1990}.

\subsubsection{Predicting Generalization Performance}

Because linear or quadratic eigenlocus classification systems optimize trade-offs between \emph{decision errors}, based on trade-offs between counter risks and risks, they can be used to \emph{predict} how well they will \emph{generalize to new patterns}. Thereby, linear or quadratic eigenlocus classification systems provide a robust measure of class separability and the expected error rate for two given sets of feature vectors whose mean and covariance functions remain constant over time. In addition, because quadratic eigenlocus classification systems optimize trade-offs between counter risks and risks for \emph{any two data distributions}, quadratic eigenlocus classification systems provide \emph{accurate and precise} measures of data distribution overlap and the expected error rate for any two given sets of feature vectors.

\subsection{A Practical Statistical Multimeter}

Linear eigenlocus or quadratic eigenlocus decision functions provide a practical statistical gauge for measuring data distribution overlap and the expected error rate for two given sets of feature or pattern vectors. Moreover, it has been demonstrated that quadratic eigenlocus classification systems generate estimates of linear decision boundaries. WLOG, let $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ denote a linear or quadratic eigenlocus classification system, where $\widetilde{\Lambda}\left( \mathbf{x}\right)$ denotes a discriminant function. Recall that linear and quadratic eigenlocus discriminant functions are likelihood ratio tests $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$, where any given likelihood ratio test $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is based on trade-offs between counter risks and risks (decision errors) for two given pattern classes. Thereby, linear or quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ can be used to measure data distribution overlap and the expected error rate for any two given sets of feature vectors whose mean and covariance functions are constant over time.

To measure the expected error rate and data distribution overlap, build a linear or quadratic eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ using feature vectors which have been extracted from any given collections of networks, documents, images, or signals for two pattern classes. While equal numbers of training examples are not absolutely necessary, the number of training examples from each of the pattern classes should be reasonably balanced with each other. Apply the decision function $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ to a collection of feature vectors which have not been used to build the classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$.
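Concretely, the measurement just described reduces to a train/held-out split and an empirical error count. The sketch below assumes a hypothetical helper \texttt{fit\_binary(X, y)} that solves a two-class eigenlocus system and returns the decision function $\operatorname{sign}\left( \widetilde{\Lambda}\left( \cdot\right) \right)$; it illustrates the procedure rather than the author's code:
\begin{verbatim}
import numpy as np

def measure_error_rate(X, y, fit_binary, holdout_frac=0.3, seed=0):
    """Train on one split; estimate the expected error rate on the rest.

    X : (n, d) feature vectors; y : (n,) labels in {+1, -1}.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_hold = int(holdout_frac * len(X))
    hold, train = idx[:n_hold], idx[n_hold:]
    decide = fit_binary(X[train], y[train])   # s -> sign(Lambda(s))
    errors = sum(decide(s) != label
                 for s, label in zip(X[hold], y[hold]))
    # empirical error rate: plug-in estimate of overlap / expected error
    return errors / n_hold
\end{verbatim}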
In general, if $\boldsymbol{\tau}$ or $\boldsymbol{\kappa}$ is based on a small number of principal eigenaxis components, e.g., $2-10$, the data distributions are not overlapping with each other and the expected error rate is negligible or zero. However, a large number of principal eigenaxis components does not necessarily indicate a large expected risk. Also, because quadratic eigenlocus classification systems optimize trade-offs between counter risks and risks (decision errors) for \emph{any two data distributions}, quadratic eigenlocus decision functions $\operatorname{sign}\left( \left( \mathbf{x}^{T}\mathbf{s}+1\right)^{2}\boldsymbol{\kappa}+\kappa_{0}\right)$ or $\operatorname{sign}\left( \exp\left( -0.01\left\Vert \mathbf{x}-\mathbf{s}\right\Vert^{2}\right) \boldsymbol{\kappa}+\kappa_{0}\right)$ provide \emph{accurate and precise} measures of data distribution overlap and the expected error rate for any two data distributions. Alternatively, linear eigenlocus decision functions $\operatorname{sign}\left( \boldsymbol{\tau}^{T}\mathbf{x}+\tau_{0}\right)$ provide accurate and precise measures of data distribution overlap and the expected error rate for any two data distributions that have similar covariance matrices.

If data collection is cost prohibitive, use $k$-fold cross validation, where a collection of feature vectors is split randomly into $k$ partitions. Build a linear or quadratic eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ with a data set consisting of $k-1$ of the original $k$ parts and use the remaining portion for testing. Repeat this process $k$ times. The expected error rate and data distribution overlap are given by the average over the $k$ test runs.

\paragraph{Caveats:}

Because linear eigenlocus classification systems optimize trade-offs between counter risks and risks (decision errors) for any two data distributions that have similar covariance matrices, a\ statistical multimeter based on a linear eigenlocus decision function \emph{may not} provide accurate and precise measures of distribution overlap and the expected error rate for data distributions that have different covariance matrices. Alternatively, because quadratic eigenlocus classification systems optimize trade-offs between counter risks and risks (decision errors) for any two data distributions, a statistical multimeter based on a quadratic eigenlocus decision function provides accurate and precise measures of data distribution overlap and the expected error rate for any two data distributions.

Figure \ref{Statistical Multimeter} illustrates how linear or quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ provide a practical statistical multimeter for measuring data distribution overlap and the expected error rate for two given sets of feature vectors.
\begin{figure}[ptb]
\centering
\fbox{\includegraphics[height=2.5875in, width=3.4411in]{Figure52.png}}
\caption{Linear or quadratic eigenlocus decision functions $\operatorname{sign}\left( \protect\widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ provide a practical statistical multimeter for measuring data distribution overlap and the expected error rate for two given sets of feature vectors.}
\label{Statistical Multimeter}
\end{figure}

\subsection{A Practical Statistical Gauge}

Linear or quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ can also be used to identify homogeneous data distributions. Build a linear or quadratic eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ using samples drawn from two distributions. Apply the decision function $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ to samples which have not been used to build the classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$. Given homogeneous data distributions, essentially all of the training data are transformed into constrained, primal principal eigenaxis components, such that the error rate of the classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is $\approx50\%$.

If data collection is cost prohibitive, use $k$-fold cross validation, where a collection of feature vectors is split randomly into $k$ partitions. Build a linear or quadratic eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ with a data set consisting of $k-1$ of the original $k$ parts and use the remaining portion for testing. Repeat this process $k$ times. The expected error rate and data distribution overlap are given by the average over the $k$ test runs.

\subsubsection{The Two Sample Problem}

Alternatively, linear or quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ can be used to determine if two samples are from different distributions. Build a linear or quadratic eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ using samples drawn from two distributions. Apply the decision function $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ to samples which have not been used to build the classification system. Given different data distributions, the error rate of the classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ is less than $50\%$.

\paragraph{Caveats:}

Because linear eigenlocus classification systems optimize trade-offs between counter risks and risks (decision errors) for any two data distributions that have similar covariance matrices, a\ statistical gauge based on a linear eigenlocus decision function may \emph{not} provide accurate and precise measures of error rates for samples drawn from distributions that have different covariance matrices. Indeed, given two data distributions that have different covariance matrices, the error rate of a linear eigenlocus classification system $\widetilde{\Lambda}\left( \mathbf{x}\right) \overset{\omega_{1}}{\underset{\omega_{2}}{\gtrless}}0$ may be $50\%$. Alternatively, because quadratic eigenlocus classification systems optimize trade-offs between counter risks and risks (decision errors) for any two data distributions, a statistical gauge based on a quadratic eigenlocus decision function provides accurate and precise measures of error rates for samples drawn from any two distributions.
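The homogeneity check and the two-sample test described above are both instances of what is commonly called a classifier two-sample test: label one sample $+1$ and the other $-1$, cross-validate, and compare the average held-out error to the $50\%$ chance level. A minimal $k$-fold sketch follows, again with \texttt{fit\_binary} as a hypothetical stand-in for the eigenlocus solver:
\begin{verbatim}
import numpy as np

def two_sample_gauge(X1, X2, fit_binary, k=5, seed=0):
    """Classifier two-sample test: error near 50% suggests one
    distribution; error clearly below 50% suggests different ones."""
    X = np.vstack([X1, X2])
    y = np.concatenate([np.ones(len(X1)), -np.ones(len(X2))])
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errs = []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        decide = fit_binary(X[train], y[train])
        errs.append(np.mean([decide(s) != t
                             for s, t in zip(X[test], y[test])]))
    return float(np.mean(errs))   # average error over the k test runs
\end{verbatim}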
\subsubsection{Primary Statistical Gauge}

Because quadratic eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$ provide accurate and precise measures of data distribution overlap and the expected error rate for any two data distributions, in addition to accurate and precise statistical indicators of homogeneous and different data distributions, quadratic eigenlocus decision functions constitute a primary statistical gauge. Therefore, quadratic eigenlocus decision functions should always be used to verify results that have been obtained using linear eigenlocus decision functions $\operatorname{sign}\left( \widetilde{\Lambda}\left( \mathbf{x}\right) \right)$.

I have covered a lot of ground in this working paper. The main ideas of the paper are summarized in my final remarks.

\section{Final Remarks}

I have devised data-driven, mathematical laws that generate optimal, statistical classification systems which achieve minimum error rates for data distributions with unchanging statistics. The data-driven, mathematical laws involve finding a solution of locus equations, subject to fundamental statistical laws for a classification system in statistical equilibrium. I\ have also introduced new ways of thinking about learning machines, in terms of fundamental statistical laws that model-based architectures of learning machines are subject to. I have devised equations of statistical equilibrium along with equations of minimization of eigenenergy and expected risk that learning machine architectures are subject to. In addition, I\ have introduced new ways of thinking about linear, polynomial, and Gaussian kernel support vector machines (SVMs): I have defined effective hyperparameters for polynomial and Gaussian kernel SVMs, and I\ have defined regularization methods for linear, polynomial, and Gaussian kernel SVMs; I have also resolved the geometric locus dilemma for all three classes of SVMs.

I\ have devised a system of fundamental equations of binary classification for a classification system in statistical equilibrium that must be satisfied by likelihood ratios and decision boundaries of binary classification systems. I have demonstrated that classification systems seek a point of statistical equilibrium where the opposing forces and influences of a classification system are balanced with each other, and the eigenenergy and the expected risk of a classification system are minimized. I\ have also demonstrated that the total eigenenergy of any given binary classification system is conserved and remains relatively constant, so that the eigenenergy and the corresponding expected risk of a binary classification system cannot be created or destroyed, but only transferred from one classification system to another. In addition, I have demonstrated that the eigenenergy and the corresponding expected risk of discrete, linear and quadratic classification systems cannot be created or destroyed, but only transferred from one classification system to another. I\ have used these results to rigorously define three classes of learning machines that are scalable modules for optimal, statistical classification or pattern recognition systems, where each class of learning machines exhibits optimal generalization performance for a category of statistical distributions.
One class of learning machines achieves minimum error rates for data sets drawn from statistical distributions that have unchanging statistics and similar covariance matrices. The other two classes of learning machines achieve minimum error rates for any given data sets drawn from statistical distributions that have either similar or dissimilar covariance matrices and unchanging statistics. All three classes of learning machines are solutions to fundamental integral equations of likelihood ratios and corresponding decision boundaries, so that each class of learning machines finds a point of statistical equilibrium where the opposing forces and influences of a statistical classification system are balanced with each other, and the eigenenergy and the corresponding expected risk of the learning machine are minimized. The generalization error of each class of learning machines is determined by the minimum probability of classification error, which is the lowest error rate that can be achieved by a discriminant function and the best generalization error that can be achieved by a learning machine. I\ have also defined optimal ensemble systems for each class of learning machines, so that the generalization error of any given ensemble system is determined by the minimum probability of classification error.

In this paper, I\ have formulated the problem of learning unknown, linear and quadratic discriminant functions from data as a locus problem, thereby formulating geometric locus methods within a statistical framework. Accordingly, I\ have devised general eigen-coordinate systems for all forms of linear and quadratic loci, and I\ have identified similar geometric properties exhibited by the points on all forms of linear and quadratic loci. I have devised fundamental, data-driven, locus equations of binary classification for linear and quadratic classification systems in statistical equilibrium, where the opposing forces and influences of a system are balanced with each other, and the eigenenergy and the corresponding expected risk of a classification system are minimized. Accordingly, I have devised three systems of data-driven, vector-based locus equations that generate optimal discriminant functions and decision boundaries. The three systems of locus equations involve solving variants of the inequality constrained optimization problem for linear, polynomial, and Gaussian kernel SVMs.

All three classes of learning machines are capable of performing a wide variety of statistical pattern recognition tasks, where any given learning machine exhibits optimal generalization performance for a two-class feature space. For each class of learning machines, I have demonstrated that any given learning machine is a scalable, individual component of an optimal ensemble system, where any given ensemble system of learning machines exhibits optimal generalization performance for an $M$-class feature space. I have named the system of data-driven, mathematical laws that generates optimal, linear classification systems a linear eigenlocus transform. Linear eigenlocus transforms generate linear discriminant functions that are scalable modules for optimal, binary and multiclass, linear classification systems. Because any given statistical pattern recognition system is a linear combination of linear eigenlocus discriminant functions, the overall network complexity is scale-invariant for the feature space dimension and the number of pattern classes.
I have named the system of data-driven, mathematical laws that generates optimal, quadratic classification systems a quadratic eigenlocus transform. I\ have demonstrated that quadratic eigenlocus classification systems automatically generate the best decision boundary for a given set of data distributions, so that any given quadratic eigenlocus classification system achieves the lowest possible error rate that can be achieved by a discriminant function and the best generalization error that can be achieved by a learning machine. Quadratic eigenlocus transforms generate quadratic discriminant functions that are scalable modules for optimal, binary and multiclass, linear and quadratic classification systems. Because any given statistical pattern recognition system is a linear combination of quadratic eigenlocus discriminant functions, the overall network complexity is scale-invariant for the feature space dimension and the number of pattern classes.

Because the generalization error of quadratic eigenlocus classification systems is determined by the minimum probability of classification error for any given data distributions that have unchanging statistics, optimal, statistical classification systems can be designed for applications where ``black box'' classification methods are too risky. For example, homeland security, medical, and military decisions can be based on data representations that have been designed to minimize expected error rates.

I\ have defined innovative and practical ways to use linear and quadratic eigenlocus decision functions as statistical multimeters and statistical gauges. Linear and quadratic eigenlocus decision functions provide a robust measure of class separability and expected error rates for two given sets of pattern or feature vectors whose mean and covariance functions remain constant over time. Quadratic eigenlocus decision functions provide the most reliable statistical multimeter, based on accurate and precise measures of data distribution overlap and expected error rates for any two data distributions that have unchanging statistics. Linear and quadratic eigenlocus decision functions can also be used to identify homogeneous data distributions. Alternatively, linear and quadratic eigenlocus decision functions can be used to determine whether two samples are from different distributions. Again, quadratic eigenlocus decision functions provide the most reliable statistical gauge, based on accurate and precise measures of data distribution overlap and expected error rates for any two data distributions that have unchanging statistics.

Finally, I have defined novel applications using linear or quadratic eigenlocus transforms, where feature vectors from different data sources are fused by forming fused ensembles of linear or quadratic eigenlocus decision functions. Because quadratic eigenlocus transforms generate optimal statistical classification systems that achieve minimum error rates for any given data distributions that have unchanging statistics, quadratic eigenlocus transforms are a gold standard for fusing feature vectors from different data sources.

\subsection{Acknowledgments}

My master's thesis \citep{Reeves1995} was the primary impetus for this work. I am indebted to Oscar Gonzalez, who was my master's thesis advisor. The counsel of Oscar Gonzalez has sustained the trailblazer within me. The discoveries that I\ have presented in this paper were also motivated by my Ph.D. dissertation \citep{Reeves2009}.
I am grateful to Garry Jacyna for illuminating conversations regarding my dissertation research. The guidance of Garry Jacyna enabled me to navigate the Ph.D. pipeline. \bibliographystyle{abbrvnat}
\section{Introduction}

An ultraluminous X-ray source (ULX) is an off-nuclear point-like source whose X-ray luminosity exceeds $10^{39}\,\rm erg\,s^{-1}$ \citep[see][for a review]{kfr17}. It is generally believed that such a source is an X-ray binary (XRB) powered by accretion onto a compact object. An intermediate-mass ($10^{2}-10^{5}M_{\odot}$) black hole (BH) with sub-Eddington accretion was first suggested by \citet{cm99} to account for the observed X-ray luminosities. Several ULXs with extremely high luminosities, peaking at $\gtrsim 10^{41}\,\rm erg\,s^{-1}$, are thought to be promising intermediate-mass BH candidates \citep{f09,p14}. Alternatively, the accretor is a stellar-mass compact object accreting at a super-Eddington rate, as suggested by theoretical models \citep{k01,b02,p07} and observational features \citep{g09,s13,w18}. The mass of the compact object in M101 ULX-1 was dynamically measured to be in the stellar-mass BH range \citep{liu13}.

\begin{table*}
\begin{center}
\caption{Basic properties of the ULX systems with an NS accretor, including peak X-ray luminosity $L_{\rm peak}$, pulsar's spin period $P_{\rm spin}$, binary orbital period $P_{\rm orb}$, donor's mass function $f_M$, and estimated donor mass $M_{\rm d}$. \label{tbl-1}}
\begin{tabular}{lccccc}
\hline
Name & $L_{\rm peak}$ & $P_{\rm spin}$ & $P_{\rm orb}$ & $f_M$ & $M_{\rm d}$ \\
 & ($\rm erg\,s^{-1}$) & (s) & (days) & ($M_{\odot}$) & ($M_{\odot}$) \\
\hline
M82 X-2 (1) & $2\times 10^{40}$ & 1.37 & 2.51 (?) & 2.1 & $\gtrsim 5.2$ \\
NGC 7793 P13 (2) & $10^{40}$ & 0.42 & 63.9 & & 18$-$23 \\
NGC 5907 ULX-1 (3) & $10^{41}$ & 1.13 & 5.3 (?) & $6\times 10^{-4}$ & \\
NGC 300 ULX-1 (4) & $5\times 10^{39}$ & $\sim 31.5$ & & $\lesssim 8\times 10^{-4}$ & \\
M51 ULX-8$^{*}$ (5) & $2\times 10^{39}$ & & & & \\
NGC 1313 X-2 (6) & $\sim 10^{40}$ & $\sim 1.5$ & $<4$ (?) & & $\lesssim 12$ \\
M51 ULX-7 (7) & $7\times 10^{39}$ & 2.8 & 2 & 6.1 & $\gtrsim 8$ \\
\hline
\end{tabular}
\end{center}
$^{*}$ The compact object in the source M51 ULX-8 is believed to be an NS, owing to the detection of a likely cyclotron resonance scattering feature produced by the NS's surface magnetic field. \\
References. (1) \citet{b14}. (2) \citet{f16,i17a,m14}. (3) \citet{i17b}. (4) \citet{c18,bl18}. (5) \citet{bh18}. (6) \citet{sr19}. (7) \citet{rc19}.
\end{table*}

M82 X-2 is the first confirmed ULX hosting an accreting neutron star (NS), owing to the discovery of X-ray pulsations \citep{b14}. To date, quite a few ULXs have been identified to host an NS accretor (see Table 1). In some cases, Be XRBs containing an NS can appear as ULX systems, as their peak X-ray luminosities\footnote{Throughout this paper, the X-ray luminosity means the inferred luminosity for an assumed isotropic emission, even though the emission is not actually isotropic.} reach above $10^{39}\,\rm erg\,s^{-1}$ during outbursts \citep[e.g.,][]{td17,w17,dts18}. Theoretical models indicate that many unpulsed ULXs must actually contain an NS, because ULX systems can be observed as pulsars only under rather special conditions (e.g., high spin-up rates) for the rotating NSs \citep{kl16,klk17}. Binary population synthesis (BPS) calculations show that a large fraction of ULXs are likely to host an NS rather than a BH accretor \citep[e.g.][]{sl15,f15,ws17}. To date, however, the properties of NS ULXs remain unclear.
The detected X-ray luminosities can reach $ \sim 10^{40}-10^{41}\rm \,erg$ $\rm s^{-1} $ \citep[e.g.,][]{b14,f16,i17a,i17b}, meaning that the NS is accreting material at a rate $ 2-3 $ orders of magnitude higher than its Eddington limit. These ULXs are highly variable sources; in the faint phase their luminosities can drop to $ \lesssim 10^{37}-10^{38}\rm \,erg$ $\rm s^{-1} $. This significant variability has been proposed to be related to the interaction between the accreting material and the NS's magnetic field, whose strength has been estimated to be $ \sim 10^{9}-10^{15} $ G \citep{b14,e15,d15,kl15,t15,km16,tm16,c17,xl17}. The nature of the donor stars in NS ULXs is not very clear either. Based on optical observations, the donor of NGC 7793 P13 was determined to be a B9Ia star with a mass of $ 18-23M_{\odot} $ \citep{m14}. The donor mass of M82 X-2 was estimated to be greater than $ 5.2M_{\odot} $, assuming a $ 1.4M_{\odot} $ NS accretor \citep{b14}. For the donor of NGC 5907 ULX-1, \citet{i17b} suggested that it is likely to be a less-evolved massive star or a less-massive (super)giant. Optical observations with \textit{HST} still cannot confirm the nature of the donor of this ULX system \citep{h19}. The donor mass of NGC 1313 X-2 was limited to $ \lesssim 12M_\odot $, provided that it is associated with a young star cluster in its vicinity \citep{sr19}. For M51 ULX-7, \citet{rc19} suggested that the donor star is an OB giant with mass $ \gtrsim 8M_\odot $. Several investigations have been performed to search for ULX donors, but only a handful have been detected among hundreds of known ULXs \citep{r08,g13,h14}. As a result of the observational bias that favors bright stars, the detected donors in NS ULX systems tend to be luminous massive stars. It is known that NS ULXs are binary systems in which the NS is being fed by the donor via Roche lobe overflow (RLOF) \citep[e.g.,][]{sl15,f15}. Modeling the evolution of NS X-ray binaries reveals that the mass transfer is dynamically unstable when the initial mass ratio of the donor star to the NS is larger than $ \sim 3.5 $ \citep{kolb00,pr00,t00,prp02,sl12}. Such a binary will go into a common envelope \citep[CE, see a review by][]{i13} phase, and the remnant system may be an NS$-$helium star binary if the progenitor donor has evolved off the main sequence prior to mass transfer \citep{bv91}. The subsequent evolution of this binary can lead to the formation of a close system with a massive white dwarf (WD) orbited by a (partially) recycled pulsar \citep[e.g.][]{vt84,dewi02,t11}. Binary evolution simulations reveal that NS$-$helium star binaries are potential ULXs, since the mass transfer can proceed at super-Eddington rates via RLOF \citep{ws15,tlp15}. However, little attention has been paid in the literature to the formation and evolution of these ULX binaries. In this paper, we attempt to explore the properties of the NS ULXs with a helium star companion in Milky Way-like galaxies, including the parameter distributions of the binary systems and the size of this ULX population. For comparison, we also provide the corresponding information for the NS ULXs with a normal star companion \citep[see also][]{sl15}. The remainder of this paper is organized as follows. In Section 2 we introduce the adopted methods, using the BPS code \textit{BSE} \citep{h02} to obtain the birthrate distribution of incipient NS binaries and the stellar evolution code \textit{MESA} \citep{p11} to model the binary evolutionary tracks.
In Section 3, we present the calculated results and give some discussion. Finally, we give a brief summary in Section~4. \section{Methods and Calculations} \subsection{Generation of incipient NS binaries} An incipient NS binary is defined as a binary system containing either a normal star (NS$-$normal star), just after the NS formation, or a helium star (NS$-$helium star), just after the CE evolution during which its hydrogen envelope is stripped by the NS. Incipient NS binaries may subsequently become ULX systems if the donor supplies its material to the NS at a super-Eddington RLOF rate. To obtain the birthrate distributions of the incipient NS binaries, we adopt the BPS code \textit{BSE} originally developed by \citet{h02}. With \textit{BSE} we can simulate the evolution of a large number of binary stars with different initial parameters, i.e. the component masses and the orbital parameters. The evolution is assumed to begin from a primordial binary containing two zero-age main-sequence stars. Modeling the evolution of a binary system is then subject to many factors, e.g. tides, stellar winds, mass and angular momentum transfer, asymmetric supernova (SN) explosions and natal kicks, and CE evolution. Some modifications to the code were made by \citet{sl14}; in the following we summarize the key points. During the evolution of a primordial binary, the primary star first evolves to fill its Roche lobe and supplies its envelope matter to the secondary star. If the secondary star accretes so rapidly that it gets out of thermal equilibrium and expands significantly to fill its own Roche lobe, then the binary goes into a contact phase \citep{ne01}. Therefore the mass transfer efficiency (the fraction of the transferred matter that is accreted by the secondary star) is an important factor in determining whether the primordial binary goes into a contact phase. When dealing with the evolution of the primordial binaries, \citet{sl14} built three mass transfer models with significantly different efficiencies. The rotation-dependent mass transfer model (in which the efficiency is assumed to depend on the rotational velocity of the secondary star) appears to be consistent with the observed parameter distributions of Galactic binaries, including Be$-$BH systems \citep{sl14}, Wolf-Rayet$-$O systems \citep{sl16} and NS$-$NS systems \citep{sl18}. We therefore employ the rotation-dependent mass transfer model in our calculations. In this model, the accretion rate onto the rotating secondary is assumed to be the mass transfer rate multiplied by a factor of ($ 1-\Omega/\Omega_{\rm cr} $), where $ \Omega $ is the angular velocity of the secondary star and $ \Omega_{\rm cr} $ is its critical value \citep{plv05,dm09,se09}. Since accretion of a small amount of mass can accelerate the secondary to its critical rotation \citep{p81}, the mass transfer efficiency can be as low as $ < 0.2 $ (see the toy calculation below). As a consequence, the maximal initial mass ratio of the primary to the secondary for avoiding the contact phase can reach $ \sim 6 $, so that a large number of primordial binaries can experience stable mass transfer phases until the primary's envelope is completely exhausted. The contact binaries are assumed to enter a CE phase if the primary star has evolved off the main sequence; otherwise they are assumed to merge into a single star \citep[see e.g.,][]{d13,sl14}.
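The following toy calculation (ours; the spin-up prescription is an illustrative assumption, not the \textit{BSE} implementation) shows why rotation-dependent accretion naturally yields a low time-averaged efficiency: if the secondary accretes at $\dot{M}_{\rm tr}(1-\Omega/\Omega_{\rm cr})$ and spins up in proportion to the accreted mass, the mean efficiency quickly drops to $\sim0.2$.
\begin{verbatim}
def mean_efficiency(spinup_per_msun=5.0, m_transferred=1.0, n_steps=10000):
    # Toy model: Omega is measured in units of Omega_cr; each accreted
    # solar mass is assumed to raise Omega by 'spinup_per_msun'.
    omega, accreted = 0.0, 0.0
    dm = m_transferred / n_steps
    for _ in range(n_steps):
        eff = max(0.0, 1.0 - omega)   # instantaneous efficiency
        accreted += eff * dm
        omega = min(1.0, omega + spinup_per_msun * eff * dm)
    return accreted / m_transferred

print(mean_efficiency())   # ~0.2 for the assumed rapid spin-up
\end{verbatim}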
If a binary system goes into CE evolution, we use the standard energy conservation equation \citep{w84} to calculate the orbital decay during the spiral-in phase. The orbital energy of the embedded binary is used to expel the envelope. After CE evolution, the binary system is assumed to merge into a single star if the final separation leads to contact between the binary components; such systems will not contribute to the ULX population. In the code, we use the results of \citet{xl10} and \citet{w16} for the binding energy parameter of the donor envelope and take the CE efficiency to be 1.0\footnote{\citet{dt00} indicated that observations of the NS$-$WD binaries originating from CE evolution are consistent with a CE efficiency of $ \lesssim 1$. When dealing with post-CE binaries containing a WD and a main-sequence star, \citet{zs10} suggested that the CE efficiency should be in the range of $ 0.2-0.3 $. If we adopt a low efficiency of 0.3 instead of 1.0 in the BPS calculations, the obtained birthrates of incipient NS$-$helium star (NS$-$normal star) binaries are decreased by a factor of $\sim 0.6 $ ($\sim 0.8 $), and the corresponding ULX numbers are reduced by a factor of $\sim 0.3 $ ($\sim 0.6 $).}. Alternatively, if the mass transfer in a binary proceeds stably without involving a CE phase, the donor envelope is gradually stripped via RLOF. In the case of stable mass transfer, we simulate the binary orbital evolution by assuming that the ejected matter takes away the specific orbital angular momentum of the accretor. During the whole evolution, we use the prescription of \citet{h00} to treat stellar wind mass losses, except for hot OB stars, for which we adopt the mass loss rates of \citet{v01}. \begin{figure*}[hbtp] \centering \includegraphics[width=0.6\textwidth]{f1.pdf} \caption{The birthrate distributions of the incipient NS binaries with a normal star (top panels) and a helium star (bottom panels) companion. The left and right panels depict the distributions of the donor mass and the orbital period, respectively. In each panel, the black and grey curves correspond to the differential and cumulative distributions, respectively. \label{figure1}} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[width=0.6\textwidth]{f2.pdf} \caption{Example evolution of the mass transfer rate (left panels) and the orbital period (right panels) for two typical NS$-$helium star binaries, as a function of time and of the donor mass. The initial masses of the binary components are $ M_{\rm NS} = 1.4M_{\odot}$ and $ M_{\rm d} = 1.0M_{\odot}$. The top and bottom panels correspond to initial orbital periods of 0.1 and 0.8 day, respectively. \label{figure2}} \end{figure*} We assume that NSs are formed through either electron-capture or core-collapse SNe; the criterion suggested by \citet{f12} is used to distinguish them. In the \textit{BSE} code, the helium core mass at the base of the AGB is used to set the limits for the formation of the various CO cores \citep{h02}. If the helium core mass is smaller than $ 1.83M_\odot $, the star forms a degenerate CO core and eventually leaves a CO WD. If the core is more massive than $ 2.25M_\odot $, the star forms a non-degenerate CO core, and stable nuclear burning continues until the occurrence of a core-collapse SN. Stars with core masses between $ 1.83M_\odot $ and $ 2.25M_\odot $ form partially degenerate CO cores. If such a core reaches a critical mass of $ 1.08M_\odot $, it will non-explosively burn into an ONe core. If in the subsequent evolution the ONe core can increase its mass to $ 1.38M_\odot $, the core is believed to collapse into an NS through an electron-capture SN.
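These core-mass thresholds translate directly into a classification of the remnant-forming channel; a minimal sketch (ours), encoding the \textit{BSE} limits quoted above:
\begin{verbatim}
def remnant_channel(m_he_core):
    # Helium core mass (Msun) at the base of the AGB (BSE thresholds).
    if m_he_core < 1.83:
        return "degenerate CO core -> CO WD"
    if m_he_core > 2.25:
        return "non-degenerate CO core -> core-collapse SN"
    # Partially degenerate CO core: burns into an ONe core if it reaches
    # 1.08 Msun; an ONe core that later grows to 1.38 Msun collapses to
    # an NS through an electron-capture SN.
    return "partially degenerate CO core -> ONe core (possible ECSN)"
\end{verbatim}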
It should be noted that the trigger of SN explosions is actually subject to large uncertainties, and binary evolution makes it more complicated \citep[e.g.,][]{plp04}. During an SN explosion, the newborn NS is imparted a natal kick, resulting in an eccentric orbit or even the disruption of the binary system. The kick velocities are assumed to obey Maxwellian distributions, with a dispersion of $\sigma = 40 \rm\, km\, s^{-1} $ \citep{d06} for NSs formed from electron-capture SNe and $\sigma = 265 \rm\, km\, s^{-1} $ \citep{h05} for NSs formed from core-collapse SNe. The initial parameters of the primordial binaries are taken as follows. The primary stars obey the initial mass function suggested by \citet{k93}, and the mass ratios of the secondary to the primary are drawn from a flat distribution between 0 and 1. The orbital separations are assumed to be uniform in the logarithm \citep{a83}. We assume the initial orbits of all binaries are circular; as shown by \citet{h02}, the outcome of the interactions of systems with the same semilatus rectum is almost independent of eccentricity. We adopt a binary fraction of 0.5 for stars with initial masses below $ 10M_{\odot} $ and assume that all more massive stars are in binaries. The initial metallicity of the stars is set to 0.02. We take the star formation history of Milky Way-like galaxies into account, assuming a constant star formation rate of $ 3 M_{\odot}\,\rm yr^{-1} $ over the past 10 Gyr. In Figure~1 we plot the birthrate distributions of incipient NS$-$normal star (top panels) and NS$-$helium star (bottom panels) binaries. The left and right panels correspond to the distributions of the donor mass and the orbital period, respectively. In each panel, the black and grey curves represent the differential and cumulative distributions, respectively. We can see that the total birthrate of incipient NS$-$normal star systems is $\sim 1.2\times 10^{-4} \,\rm yr^{-1}$. Such incipient binaries tend to have eccentric orbits, due to the mass loss and the kick during the SN. For simplicity, we assume that they are quickly circularized by tidal torques with the orbital angular momentum conserved\footnote{There is a caveat that this assumption is not valid for long-period (e.g., $ > 20 $ days) systems. Under this assumption, however, we only need to take into account the donor masses and the orbital periods when simulating the subsequent evolution of the incipient NS$-$normal star binaries (see Section 2.2 below).}. The corresponding orbital separation is then reduced by a factor of ($ 1- e^{2} $), where $ e $ is the eccentricity (see the sketch below). It can be seen that the orbital period distribution has two peaks, at $ \sim 2 $ and $ \sim 50 $ days, which reflects whether or not the evolution of the primordial binaries passed through a CE phase. The incipient NS$-$helium star binaries, as the descendants of long-period NS$-$normal star systems that subsequently go through a CE phase\footnote{It has been proposed that an ONe WD may collapse into an NS through mass accretion in originally WD$-$helium star binaries \citep{chen11,liu18}. This channel may also lead to the formation of NS$-$helium star binaries, but it is not considered in our calculations.}, have a lower birthrate of $ \sim 4 \times 10^{-5} \,\rm yr^{-1}$. These binaries are mainly close systems in a circular orbit; most of them have orbital periods of $ \lesssim 1 $ day.
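A compact sketch (ours; the distributions and factors are those quoted above) of two Monte Carlo ingredients of this subsection, the natal kick draw and the tidal circularization of post-SN orbits:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def natal_kick(ecsn):
    # Maxwellian kick speed: the magnitude of an isotropic 3-D Gaussian
    # velocity; sigma = 40 km/s (ECSN) or 265 km/s (CCSN).
    sigma = 40.0 if ecsn else 265.0
    return np.linalg.norm(rng.normal(0.0, sigma, size=3))

def circularized_separation(a, e):
    # Circularization at constant orbital angular momentum shrinks the
    # separation by a factor (1 - e^2).
    return a * (1.0 - e**2)
\end{verbatim}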
\subsection{Evolution of NS XRBs} \begin{figure*}[hbtp] \centering \includegraphics[width=0.70\textwidth]{f3.pdf} \caption{The number distributions of NS XRBs in the orbital period $ P_{\rm orb} $ vs. mass transfer rate $ \dot{M} _{\rm tr}$ plane, under the assumption of a constant star formation rate of $ 3 M_{\odot}\,\rm yr^{-1} $ over a period of 10 Gyr. The left and right panels correspond to the binaries containing a helium star and a normal star donor, respectively. Only the systems with mass transfer rates larger than $10^{-8}\,\rm M_{\odot}\,yr^{-1}$ are presented, and the colors are scaled according to the number of XRBs. The five black circles denote the observed NS ULXs with known orbital periods (see Table 1). \label{figure3}} \end{figure*} Based on the results in Figure~1, we track the evolutionary paths of the incipient NS binaries with the stellar evolution code \textit{MESA} \citep[version number 10398,][]{p11,p13,p15}. The NS is treated as a point mass and its initial mass is set to $ 1.4M_{\odot} $. The initial chemical compositions are taken to be $ X = 0.7 $, $ Y = 0.28 $, $ Z = 0.02 $ for normal stars and $ Y = 0.98 $, $ Z = 0.02 $ for helium stars. Each incipient binary is characterized by the donor mass $ M_{\rm d} $ and the orbital period $ P_{\rm orb} $. The incipient NS binaries from our BPS calculations are used to guide the limits of the \textit{MESA} grid of initial binary parameters; in this way we have evolved thousands of binary systems with different donor masses and orbital periods. For the NS$-$normal star systems, we vary the normal star masses from 1 to $ 10M_{\odot} $\footnote{Note that NS binaries with donor masses larger than $ 10M_{\odot} $ would make only a very small contribution to the ULX population in Milky Way-like galaxies \citep{sl15}.} in steps of $ 0.1M_{\odot} $ and the orbital periods (in units of days) logarithmically from $ -0.5 $ to 3 in steps of 0.1. For the NS$-$helium star systems, we increase the helium star masses from 0.6 to $ 4M_{\odot} $ in steps of $ 0.1M_{\odot} $ and the orbital periods (in units of days) logarithmically from $ -0.6 $ to 2 in steps of 0.1. These binaries are used to represent all incipient NS binaries; the birthrate of a specific binary is obtained by summing those of the incipient binaries residing in the corresponding grid cell of size $ \Delta M_{\rm d}\times \Delta (\log P_{\rm orb}) $ (see the sketch below). During the evolution, we adopt the scheme of \citet{r88} to compute the mass transfer rate via RLOF. We assume that the mass growth of the NS is limited by the Eddington accretion rate ($ \sim1.5\times10^{-8} \, M_{\odot}\rm\,yr^{-1}$ for hydrogen accretion and $ \sim 4\times10^{-8} \, M_{\odot}\rm\,yr^{-1}$ for helium accretion). The matter that is not accreted by the NS is assumed to escape the binary system in the form of an isotropic wind, taking away the NS's specific orbital angular momentum. In some cases, the mass transfer rate rapidly increases to $ \gtrsim 10^{-2} \,M_{\odot}\rm \,yr^{-1}$ and the code fails to converge. Such systems are expected to go into CE evolution soon, so we use this rate as the condition for terminating the code.
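The grid bookkeeping described above can be sketched as follows (ours; the variable names are illustrative, and the incipient binaries are assumed to be given as records carrying a donor mass, an orbital period and a birthrate):
\begin{verbatim}
import numpy as np

# Initial donor masses (Msun) and log10(P_orb/day), as described above.
normal_grid = [(round(m, 1), round(lp, 1))
               for m in np.arange(1.0, 10.05, 0.1)
               for lp in np.arange(-0.5, 3.05, 0.1)]
helium_grid = [(round(m, 1), round(lp, 1))
               for m in np.arange(0.6, 4.05, 0.1)
               for lp in np.arange(-0.6, 2.05, 0.1)]

def cell_birthrate(cell, incipient, dm=0.1, dlp=0.1):
    # Sum the birthrates of the incipient binaries that fall into the
    # grid cell of width dm x dlp centred on 'cell'.
    m0, lp0 = cell
    return sum(b["rate"] for b in incipient
               if abs(b["m_d"] - m0) <= dm / 2
               and abs(np.log10(b["p_orb"]) - lp0) <= dlp / 2)
\end{verbatim}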
The ULX population is actually dominated by binaries undergoing dynamically stable mass transfer at modest rates of $ \sim 10^{-7} - 10^{-6}\, M_{\odot}\rm \,yr^{-1}$; systems with extremely high mass transfer rates cannot contribute significantly to the ULX population \citep[e.g.,][]{sl15}. For the NS$-$helium star binaries, the mass transfer takes place on the nuclear timescale if the initial helium stars are less massive than $ \sim2.0-2.5M_\odot $; otherwise the mass transfer proceeds rapidly on the thermal timescale \citep[see also][]{dewi02}. Hence the majority of the NS$-$helium star binaries obtained from our BPS calculations will experience a mass transfer phase driven by the nuclear evolutionary expansion of the helium stars. In Figure~2 we present the evolutionary tracks of two typical NS$-$helium star binaries as examples. The initial systems contain a $ 1 M_{\odot} $ helium star orbiting the NS, with orbital periods of 0.1 (top panels) and 0.8 day (bottom panels). In the top panels, RLOF begins at a time of about $ 14.2\,\rm Myr $. The mass transfer can persist for over $ 0.7 \,\rm Myr $ at a rate of $\gtrsim 10^{-7}\, M_{\odot}\rm\,yr^{-1}$. After $ 0.26M_{\odot} $ of the envelope matter is transferred, the binary is left as an NS$-$WD system in a 0.15 day orbit. The final merger of this binary will occur $ \sim 40 \,\rm Myr$ later. In the bottom panels, the helium star evolves to fill its Roche lobe at a time of about $ 14.9\,\rm Myr $. The mass transfer rate rapidly increases to $ \sim 10^{-6}\, M_{\odot}\rm\,yr^{-1}$ and then gradually decreases to $ \sim 10^{-7}\, M_{\odot}\rm\,yr^{-1}$ within a span of about $ 0.2\,\rm Myr$. About $ 0.2M_{\odot} $ of material is stripped during the mass transfer phase, and the helium star turns into a massive WD of mass $\sim 0.8M_{\odot} $. The binary eventually becomes an NS$-$WD binary with an orbital period of 1.08 days, similar to the system PSR B0655+64 \citep{vt84}. It is clear that NS$-$helium star binaries with typical initial parameters can spend a few tenths of a Myr in the ULX phase. \begin{figure*}[hbtp] \centering \includegraphics[width=0.7\textwidth]{f4.pdf} \caption{Expected number distributions of the NS ULXs containing a helium star (left panel) or a normal star (right panel) in the $M_{\rm d}-P_{\rm orb} $ plane. The colors are scaled according to the number of the ULX systems. \label{figure4}} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[width=0.7\textwidth]{f5.pdf} \caption{Expected number distributions of the NS ULXs as a function of the donor mass (left panel) and the orbital period (right panel). The solid and dashed curves denote the ULX systems containing a helium star and a normal star donor, respectively. \label{figure5}} \end{figure*} Given the mass transfer rate $\dot{M} $ in an XRB, the X-ray luminosity can be simply estimated with the traditional formula \begin{equation} L_{\rm X}=0.1\dot{M}c^{2}, \end{equation} where $ c $ is the speed of light in vacuum. However, when $\dot{M} $ is greater than the Eddington rate $ \dot{M}_{\rm E} $, the accretion disk becomes geometrically thick, which affects the emergent X-ray luminosity. An NS that is fed at a super-Eddington rate can shed more and more of the inflowing material as it approaches the NS along the disk, so that the Eddington limit is never violated locally. In this case, we follow \citet{kl16} to convert the mass transfer rate into the X-ray luminosity.
The total accretion luminosity, obtained by integrating the local disk emission \citep{ss73}, is \begin{equation} L_{\rm acc} \simeq L_{\rm E}\left[1+\ln \left( \frac{\dot{M}}{\dot{M}_{\rm E}} \right) \right], \end{equation} where $ L_{\rm E} $ is the Eddington luminosity. According to this equation, the binary system can emit an X-ray luminosity of at most a few times the Eddington limit. Due to the geometric collimation, one can see the source in directions within one of the radiation cones, with the apparent (isotropic) X-ray luminosity \begin{equation} L_{\rm X} \simeq \frac{L_{\rm E}}{b}\left[1+\ln \left( \frac{\dot{M}}{\dot{M}_{\rm E}} \right) \right], \end{equation} where $ b $ is the beaming factor. \citet{k09} proposed the approximate formula \begin{equation} b \simeq \frac{73}{\dot{m}^2}, \end{equation} where $ \dot{m} = \dot{M} / \dot{M}_{\rm E}$. This formula is valid for $ \dot{m} \gtrsim 8.5$; otherwise the beaming effect does not operate (i.e., $ b = 1$). Accordingly, we assume that the probability of detecting a source along the beam is reduced by a factor of $ b $.
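A minimal numerical sketch of Equations (1)--(4) (ours; for consistency with Equation (1) we take $L_{\rm E}=0.1\dot{M}_{\rm E}c^{2}$, with the hydrogen Eddington rate of Section 2.2, which is an assumption of the sketch):
\begin{verbatim}
import numpy as np

C = 2.998e10                        # speed of light (cm/s)
G_PER_MSUN_YR = 1.989e33 / 3.156e7  # converts Msun/yr to g/s

def apparent_lx(mdot, mdot_edd=1.5e-8):
    # mdot and mdot_edd in Msun/yr (hydrogen Eddington rate assumed).
    mdot_ratio = mdot / mdot_edd
    if mdot_ratio <= 1.0:                      # sub-Eddington: Eq. (1)
        return 0.1 * mdot * G_PER_MSUN_YR * C**2
    l_edd = 0.1 * mdot_edd * G_PER_MSUN_YR * C**2
    b = 73.0 / mdot_ratio**2 if mdot_ratio >= 8.5 else 1.0   # Eq. (4)
    return (l_edd / b) * (1.0 + np.log(mdot_ratio))          # Eq. (3)

# e.g. mdot = 1e-6 Msun/yr: mdot_ratio ~ 67, so b ~ 0.016 and the
# apparent luminosity is a few times 10^40 erg/s.
print(f"{apparent_lx(1e-6):.1e}")
\end{verbatim}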
\section{Results and discussions} In Figure~3 we plot the number distributions of the NS XRBs with a helium star (left panel) and a normal star donor (right panel) in the orbital period$-$mass transfer rate ($ P_{\rm orb}-\dot{M}_{\rm tr} $) plane. Only the systems with mass transfer rates larger than $10^{-8}\, M_{\odot}\rm\,yr^{-1}$ are presented. From the mass transfer rates labelled on the left-hand axis of each panel, we calculate the corresponding apparent X-ray luminosities, taking possible beaming into account; these are labelled on the right-hand axis for comparison. Note that the calculated X-ray luminosities in the left and right panels differ slightly, since different Eddington limits are adopted for helium and hydrogen accretion. The five black circles show the positions of the observed NS ULXs with known orbital periods (see Table 1). Each panel contains a $ 50\times 50 $ image matrix, and the colors are scaled according to the number of XRBs. After recording the evolutionary tracks of all NS binaries, we can calculate the number of binary systems passing through a specific matrix element by accumulating the product of their birthrates and durations. We find that there are $ \sim23 $ NS XRBs with a helium star companion in a Milky Way-like galaxy; the ages of such systems are typically $\sim100 \,\rm Myr $. Only a small group ($ \sim 9 $) of them can appear as ULXs, because a high mass transfer rate of $\gtrsim 10^{-7}\, M_{\odot}\rm\,yr^{-1}$ is required \citep{klk17}. Comparing the two diagrams, the calculated number distribution of the NS$-$normal star binaries seems to match the observations much better than that of the NS$-$helium star binaries. It should be noted, however, that the observed sample of NS ULXs is still very small and is subject to the observational bias that favors luminous massive stars. In addition, the NSs in many ULX systems are likely to remain unpulsed unless they have high spin-up rates \citep{klk17}. It is unclear whether the NS$-$helium star ULXs can be easily identified through the emission of X-ray pulsations. \begin{figure*}[hbtp] \centering \includegraphics[width=0.7\textwidth]{f6.pdf} \caption{Color-magnitude diagrams for the donors in the NS ULXs. The left and right panels correspond to the distributions of the helium star and the normal star donors, respectively. The colors are scaled according to the number of the ULX systems. \label{figure6}} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[width=0.7\textwidth]{f7.pdf} \caption{Expected numbers of NS ULXs at the present epoch as a function of X-ray luminosity in Milky Way-like galaxies with a constant star formation rate of $ 3 M_{\odot}\,\rm yr^{-1} $. The left and right panels correspond to the cases of adopting Equations (3) and (1), respectively, to calculate the X-ray luminosity. In each panel, the black solid and black dashed curves respectively correspond to the ULX systems with a helium star and a normal star companion, and the grey solid curve corresponds to the whole population of the NS ULXs. Based on the observed ULX sample in nearby galaxies, the luminosity functions are fitted with two models, a power law with an exponential cut-off (red solid curve) and a pure power law (red dashed curve). Here the fit parameters are taken from \citet{s11}. \label{figure7}} \end{figure*} Figure~4 depicts the number distributions of the NS ULXs with X-ray luminosities greater than $ 10^{39} \rm\, erg\,s^{-1} $ in the $M_{\rm d} - P_{\rm orb}$ plane. The effect of geometric beaming on the detection probability is taken into account. The left and right panels correspond to the NS ULXs with a helium star and a normal star \citep[see also][]{sl15} companion, respectively. Also plotted, in Figure~5, are the histogram distributions of the ULX numbers as a function of the donor mass and the orbital period. The solid and dashed curves respectively correspond to the NS ULXs with a helium star and a normal star donor. We find that the majority of the ULX systems containing a helium star donor are close systems, with orbital periods peaking at $ \sim 0.1 $ day (with a tail extending up to $ \sim100 $ days), and that the mass distribution of the helium stars has its maximum probability at $ \sim1M_{\odot} $ within the whole range of $\sim 0.6-2M_{\odot} $. This is because the helium stars in short-period systems are less evolved and have longer-lasting mass transfer phases. Systems with a massive ($ \gtrsim2M_{\odot} $) helium star are hardly produced, because of a combination of relatively low birthrates, short mass transfer durations and low detection probabilities (due to the beaming effect). Figure 6 presents the color-magnitude diagrams for the donors of the NS ULXs. The left and right panels correspond to the distributions of the helium star and the normal star donors, respectively. In both cases, the majority of the ULX systems tend to have donors with absolute magnitude $ M_{\rm V} $ fainter than $ -1 $, since the donors are predominantly low-mass ($ \lesssim 2M_{\odot} $) stars, as mentioned above. If located in external galaxies, such donor counterparts are probably too dim to be detected. Furthermore, the optical emission from the donor star in a ULX system may be confused with that from the accretion disk around the compact star \citep{tf11}. Our results suggest that very short orbital periods are a characteristic that can be used to identify NS ULXs with a helium star donor. In Figure~7 we show the X-ray luminosity function of the NS ULX population in Milky Way-like galaxies. The black solid and black dashed curves respectively correspond to the ULXs with a helium star and a normal star companion, and the grey solid curve corresponds to the whole population of the NS ULXs.
The red curves show the fitted luminosity functions of the observed ULX sample in nearby galaxies \citep[for details see][]{s11}. Note that these fitted luminosity functions have been normalized to a star formation rate of $ 3 M_{\odot}\,\rm yr^{-1} $. In the left panel, the luminosity function of the ULX systems with a helium star donor has an obvious break at $L_{\rm X} \sim 2\times 10^{39} \rm\, erg\,s^{-1} $, which corresponds to the point where the beaming effect starts to operate. For the NS ULXs with a normal star donor, the luminosity function has a similar break, but at $L_{\rm X} \sim 6\times 10^{38} \rm \,erg\,s^{-1} $, which is outside the range of this diagram. In the right panel, we show for comparison the X-ray luminosity function without the beaming effect, using Equation~(1) to calculate the X-ray luminosity. We expect that the NS$-$helium star systems can contribute several ULXs in a Milky Way-like galaxy, which is comparable to the contribution from the NS$-$normal star binaries. The expected number of extremely luminous sources with $L_{\rm X} \geq 10^{40} \rm\, erg\,s^{-1} $ is only $ \sim 0.3 $. Although ULX systems with a BH accretor are not included, our predicted number of NS ULXs seems to match the observations. Recently, \citet{ws17} performed a BPS study of the origin of ULX systems in which the NS$-$helium star binaries were also included. In their calculations the NS$-$helium star binaries could only appear as extremely luminous sources, and they were expected to be very rare. At solar metallicity, the corresponding masses of the helium stars were in the range $ 1.7 - 2.6M_{\odot}$ \citep{ws17}. This mass range is significantly higher than the one ($\sim 0.6-2.0M_{\odot}$) obtained by us, since we adopt the rotation-dependent (highly non-conservative) mass transfer model for the primordial binary evolution. This model allows a large fraction of the primordial binaries to experience stable mass transfer and then evolve into relatively wide systems containing an NS and an intermediate-mass ($ \sim3-8M_{\odot} $) normal star \citep{sl14}. The subsequent evolution of these wide binaries is expected to involve a CE phase when the intermediate-mass donor starts transferring mass to the NS. After the CE evolution, the NS's companion may be a relatively low-mass helium star. It has been pointed out that a fraction of the close binaries in the Milky Way containing a (partially) recycled pulsar and a CO WD (with mass of $ \sim0.6-1.3 M_{\odot} $) are likely formed through this channel involving a CE phase \citep{t11}. If so, our results, which involve relatively low-mass helium stars, can better reproduce the observed binaries containing a relatively light ($ \lesssim 1 M_\odot $) CO WD. \section{Summary} With a population synthesis study, we have shown that NS XRBs containing a helium star companion make a significant contribution to the ULX population in Milky Way-like galaxies. Assuming a constant star formation rate of $ 3 M_{\odot}\,\rm yr^{-1} $, we predict that there are several NS$-$helium star ULX systems in a Milky Way-like galaxy, whose ages are typically $\sim 100\,\rm Myr $. These ULX systems favor short orbital periods, so their subsequent evolution will lead to the formation of close NS$-$WD binaries, which are important gravitational wave sources. \acknowledgements We thank the referee for useful suggestions to improve this paper. This work was supported by the Natural Science Foundation of China (Nos.
11973026, 11603010, 11773015, and 11563003) and Project U1838201 supported by NSFC and CAS, and the National Program on Key Research and Development Project (Grant No. 2016YFA0400803).
\section*{Introduction} Let $R$ be a $d$-dimensional standard graded $K$-domain over a perfect field $K$ of characteristic $p>0$ which is $F$-finite. For every finitely generated $R$-module $M$ and every natural number $e$ we denote by $F^{e*}(M)=M\otimes_R\! ^eR$ the $e$-th iteration of the Frobenius functor, given by base change along the Frobenius homomorphism. In particular, if $I$ is an ideal of $R$ we have $F^{e*}(R/I)\cong R/I^{[p^e]}$. We denote by $q=p^e$ a power of the characteristic. Let $M$ be a graded $R$-module; the function \begin{equation*} gHK(M,q):=l_R(H^0_{R_{+}}(F^{e*}(M))) \end{equation*} and the limit \begin{equation*} e_{gHK}(M):=\lim_{e\rightarrow+\infty}\frac{l_R\left(H^0_{R_+}\left(F^{e*}(M)\right)\right)}{q^d}, \end{equation*} are called the \emph{generalized Hilbert-Kunz function} and the \emph{generalized Hilbert-Kunz multiplicity} of $M$, respectively. If $I$ is an $R_+$-primary ideal, then $gHK(R/I,q)$ and $e_{gHK}(R/I)$ coincide with the classical Hilbert-Kunz function and multiplicity. For a survey on the classical Hilbert-Kunz function and multiplicity see \cite{Hun13}. \par The generalized Hilbert-Kunz function and multiplicity were first introduced, under a different name and notation, by Epstein and Yao in \cite{EY11}, and studied in detail by Dao and Smirnov in \cite{DS13}, where they prove the existence of $e_{gHK}(M)$ under some assumptions, for example if $M$ is a module over a Cohen-Macaulay isolated singularity. In the same paper, they study the behaviour of the function $gHK(M,q)$ and compare it with the classical Hilbert-Kunz function. Further studies of the generalized Hilbert-Kunz function and multiplicity were carried out by Dao and Watanabe in \cite{DW15}, where they compute $e_{gHK}(M)$ when $M$ is a module over a ring of finite Cohen-Macaulay type, or an ideal of a normal toric singularity. \par In this paper we study the function $gHK(M,q)$ for a graded module $M$ over a two-dimensional standard graded normal domain over an algebraically closed field. In \cite{Bre07} Brenner proves that if $I$ is a homogeneous $R_+$-primary ideal, then the Hilbert-Kunz function of $I$ has the following form \begin{equation*} HK(I,q)=e_{HK}(I)q^2+\gamma(q), \end{equation*} where $e_{HK}(I)$ is a rational number and $\gamma(q)$ is a bounded function, which is eventually periodic if $K$ is the algebraic closure of a finite field. In \cite[Example 6.2]{DS13}, Dao and Smirnov exhibit numerical evidence that in this setting the generalized Hilbert-Kunz function also has the same form. Using an extension of the methods of Brenner, we are able to prove their claim. In fact, we obtain in Theorem \ref{TheoremgHKgradedcase} that the generalized Hilbert-Kunz function of a graded module $M$ has the form \begin{equation*} gHK(M,q)=e_{gHK}(M)q^2+\gamma(q), \end{equation*} where $\gamma(q)$ is a bounded function, which is eventually periodic if $K$ is the algebraic closure of a finite field. Moreover, we give an explicit formula for $e_{gHK}(M)$ in terms of the Hilbert-Kunz slopes of certain locally free sheaves on the projective curve $Y=\mathrm{Proj}R$. As a consequence we obtain that the generalized Hilbert-Kunz multiplicity $e_{gHK}(M)$ exists and is a rational number. \par Furthermore, in the last section of the paper we consider the following problem. Assume that $R$ is a standard graded $\mathbb{Z}$-domain of relative dimension two and $M$ a graded $R$-module.
For each prime number $p$, we may consider the reduction $R_p$ of $R$ mod $p$ and the extended module $M_p:=M\otimes_RR_p$. For this module we compute the generalized Hilbert-Kunz multiplicity $e_{gHK}^{R_p}(M_p)$, and we ask whether the limit \begin{equation*} \lim_{p\rightarrow+\infty}e_{gHK}^{R_p}(M_p) \end{equation*} exists. Using a result of Trivedi (\cite{Tri07}) we are able to prove (Theorem \ref{limitHKtheorem}) that the previous limit exists, and is in fact a rational number, provided that the rings $R_p$ are normal two-dimensional domains for almost all prime numbers. \par After the submission of the first version of this paper, the referee and Asgharzadeh pointed us to a recent paper of Vraciu \cite{Vra16}. There, she provides another method to prove Theorem \ref{TheoremgHKgradedcase} for ideals, by showing that under suitable conditions, which are fulfilled in our situation, the generalized Hilbert-Kunz function of a homogeneous ideal can be expressed as a $\mathbb{Z}$-linear combination of classical Hilbert-Kunz functions of $R_+$-primary ideals. The relevant condition is called the $(LC)$ property and was introduced by Hochster and Huneke in \cite{HH90}. This condition is known to hold in some special cases, but it is an open problem whether it holds in a more general setting; see \cite{Asg15}. \section{Reflexive modules} We recall some preliminary facts concerning reflexive modules. Let $R$ be a two-dimensional normal domain with homogeneous maximal ideal $\mathfrak{m}$ and let $U$ be the punctured spectrum of $R$, that is, $U=\mathrm{Spec}R\setminus\{\mathfrak{m}\}$. \par We denote by $(-)^*$ the functor $\mathrm{Hom}_R(-,R)$. If $M$ is an $R$-module, then the module $M^{**}$ is called the \emph{reflexive hull} of $M$. There is a canonical map \begin{equation*} \lambda:M\rightarrow M^{**}. \end{equation*} If $\lambda$ is injective, $M$ is said to be \emph{torsionless}; if $\lambda$ is an isomorphism, then $M$ is called \emph{reflexive}. Finitely generated projective modules are reflexive, but the converse does not hold in general. We recall the following geometric characterization of the reflexive hull in the normal situation (cf. \cite[Proposition 3.10]{BD08}): \begin{equation}\label{geometriccharacterization} M^{**}\cong\Gamma(U,\widetilde{M}), \end{equation} where $\widetilde{M}$ denotes the coherent sheaf associated to the module $M$. It follows that the restriction $\widetilde{M}|_U$ of this sheaf to the punctured spectrum coincides with the sheaf $\widetilde{M^{**}}|_U$ on $U$. Moreover, if $M$ is reflexive, the sheaf $\widetilde{M}|_{U}$ is locally free. \par The following lemma is a well known fact, see \cite[Proposition 1.4.1]{BH98}; we give a proof here for the sake of completeness. \begin{Lemma}\label{reflexivecohomologyzero} Let $R$ be a normal domain of dimension at least $2$ with homogeneous maximal ideal $\mathfrak{m}$ and let $I$ be a reflexive submodule of $R^n$. Then \begin{equation*} H^0_{\mathfrak{m}}(R^n/I)=0. \end{equation*} \end{Lemma} \begin{proof} We consider the short exact sequence $0\rightarrow I\rightarrow R^n\rightarrow R^n/I\rightarrow0$ and apply the local cohomology functor $H_{\mathfrak{m}}^0(-)$. We obtain a long exact sequence \begin{equation*} \cdots\rightarrow H_{\mathfrak{m}}^0(R^n)\rightarrow H_{\mathfrak{m}}^0(R^n/I)\rightarrow H_{\mathfrak{m}}^1(I)\rightarrow \cdots. \end{equation*} Since $R^n$ and $I$ are reflexive modules over a normal domain, they have depth at least $2$.
It follows that $H_{\mathfrak{m}}^0(R^n)=H_{\mathfrak{m}}^1(I)=0$, hence $H_{\mathfrak{m}}^0(R^n/I)=0$ too. \end{proof} \par We also mention the following result (cf. \cite[Proposition 2.2]{DW15}) concerning the generalized Hilbert-Kunz multiplicity of reflexive ideals. \begin{Prop}\label{reflexiveproposition} Let $R$ be a standard graded domain of dimension $2$, and let $I$ be a homogeneous reflexive ideal of $R$. Then $e_{gHK}(R/I)=0$ if and only if $I$ is principal. \end{Prop} The fact that principal reflexive ideals have generalized Hilbert-Kunz multiplicity $0$ also holds in dimension $\geq2$ and is a consequence of Lemma \ref{reflexivecohomologyzero}. In fact, if $I$ is a principal ideal, then $I^{[p^e]}$ is again principal, and in particular reflexive. It follows that $H^0_{R_{+}}(F^{e*}(R/I))\cong H^0_{R_{+}}(R/I^{[p^e]})=0$, so $e_{gHK}(R/I)=0$. \begin{Lemma}\label{HKclassgroup} Let $R$ be a normal $K$-domain of dimension $d\geq2$ over an algebraically closed field $K$ of prime characteristic $p$. Let $I$ be a non-zero homogeneous ideal of $R$ such that $e_{gHK}(R/I)$ exists and let $f\neq0$ be a homogeneous element of $R$. Then \begin{equation*} e_{gHK}(R/fI)=e_{gHK}(R/I). \end{equation*} \end{Lemma} \begin{proof} From the short exact sequence $0\rightarrow I\rightarrow R\rightarrow R/I\rightarrow0$ and the corresponding long exact sequence of local cohomology modules with support in $\mathfrak{m}:=R_+$ we obtain that $H_{\mathfrak{m}}^0(R/I)\cong H_{\mathfrak{m}}^1(I)$. It follows that the generalized Hilbert-Kunz multiplicity can be written as \begin{equation*} e_{gHK}(R/I)=\lim_{e\rightarrow+\infty}\frac{l_R\left(H_{\mathfrak{m}}^1(I^{[q]})\right)}{q^d}. \end{equation*} Then the $R$-module isomorphism $f^qI^{[q]}\cong I^{[q]}$ implies the claim. \end{proof} \begin{Rem} Let $[I]$ be an element of the divisor class group $\mathrm{Cl}(R)$ of $R$, and let $I$ be a homogeneous reflexive ideal representing this element. If $R$ is a standard graded normal $K$-domain of dimension $2$, with $K$ algebraically closed and of positive characteristic, we obtain a function $e_{gHK}(-):\mathrm{Cl}(R)\rightarrow\mathbb{Q}$, $[I]\mapsto e_{gHK}(R/I)$. Thanks to Theorem \ref{TheoremgHKgradedcase} and Lemma \ref{HKclassgroup}, this function is well-defined, and we have $e_{gHK}([R])=0$. This does not mean that the generalized Hilbert-Kunz multiplicity of an arbitrary ideal $I$ which is invertible on the punctured spectrum depends only on $[I]$. For example, the homogeneous maximal ideal $\mathfrak{m}$ and its reflexive hull $\mathfrak{m}^{**}=R$ define the same element in the class group, but in general $e_{gHK}(R/\mathfrak{m})=e_{HK}(\mathfrak{m})\neq0$, while $e_{gHK}(R/R)=0$. Moreover, Proposition \ref{reflexiveproposition} implies that the preimage of $0$ is trivial. This does not mean that the function $e_{gHK}(-)$ is injective, since in general it is not a group homomorphism, as Example \ref{exampleidealofapoint} shows. \end{Rem} \par The following question therefore makes sense. \begin{Que} Given two homogeneous reflexive ideals $I$ and $J$, is there a formula for $e_{gHK}([IJ])$ in terms of $e_{gHK}([I])$ and $e_{gHK}([J])$? \end{Que} \section{The Hilbert-Kunz slope} \par Let $Y$ be a smooth projective curve over an algebraically closed field with a very ample invertible sheaf of degree $\deg\mathcal{O}_Y(1)=\deg Y$. We recall some classical notions concerning vector bundles and some definitions from \cite{Bre06} and \cite{Bre07}. We refer to those papers for further details and explanations.
\par Let $\mathcal{S}$ be a locally free sheaf of rank $r$ over $Y$. The degree of $\mathcal{S}$ is defined as the degree of the corresponding determinant line bundle, $\mathrm{deg}\,\mathcal{S}=\deg\bigwedge^r\mathcal{S}$. The slope of $\mathcal{S}$ is $\mu(\mathcal{S})=\deg\mathcal{S}/r$. The degree is additive on short exact sequences, and moreover $\mu(\mathcal{S}\otimes\mathcal{T})=\mu(\mathcal{S})+\mu(\mathcal{T})$. \par The sheaf $\mathcal{S}$ is called \emph{semistable} if for every locally free subsheaf $\mathcal{T}\subseteq\mathcal{S}$ the inequality $\mu(\mathcal{T})\leq\mu(\mathcal{S})$ holds. If the strict inequality $\mu(\mathcal{T})<\mu(\mathcal{S})$ holds for every proper subsheaf $\mathcal{T}\subset\mathcal{S}$, then $\mathcal{S}$ is called \emph{stable}. \par For any locally free sheaf $\mathcal{S}$ on $Y$ there exists a unique filtration, called the \emph{Harder-Narasimhan filtration}, $\mathcal{S}_1\subseteq\dots\subseteq\mathcal{S}_t=\mathcal{S}$, with the following properties: \begin{compactitem} \item $\mathcal{S}_k$ is locally free, \item $\mathcal{S}_k/\mathcal{S}_{k-1}$ is semistable, \item $\mu(\mathcal{S}_k/\mathcal{S}_{k-1})>\mu(\mathcal{S}_{k+1}/\mathcal{S}_{k})$. \end{compactitem} \par If the base field has positive characteristic, we can consider the absolute Frobenius morphism $F:Y\rightarrow Y$ on the curve and its iterates $F^e$. In general the pull-back via $F^e$ of the Harder-Narasimhan filtration of $\mathcal{S}$ is not the Harder-Narasimhan filtration of $F^{e*}\mathcal{S}$, since the quotients $F^{e*}(\mathcal{S}_k)/F^{e*}(\mathcal{S}_{k-1})$ need not be semistable. \par In \cite{Lan04}, Langer proved that for $q\gg0$ there exists a so-called \emph{strong Harder-Narasimhan} filtration of $F^{e*}(\mathcal{S})$. In fact, there exists a natural number $e_0$ such that the Harder-Narasimhan filtration of $F^{e_0*}(\mathcal{S})$ \begin{equation*} 0\subseteq \mathcal{S}_{e_0,1}\subseteq\dots\subseteq\mathcal{S}_{e_0,t}=F^{e_0*}(\mathcal{S}) \end{equation*} has the property that the quotients $F^{e*}(\mathcal{S}_{e_0,k})/F^{e*}(\mathcal{S}_{e_0,k-1})$ of the pullback along $F^{e}$ are semistable. Thus for $e\geq e_0$ we have $F^{e*}(\mathcal{S})=F^{(e-e_0)*}(F^{e_0*}(\mathcal{S}))$, and the Harder-Narasimhan filtration of $F^{e*}(\mathcal{S})$ is given by \begin{equation*} F^{(e-e_0)*}(\mathcal{S}_{e_0,1}) \subseteq\dots\subseteq F^{(e-e_0)*}(\mathcal{S}_{e_0,t})=F^{e*}(\mathcal{S}). \end{equation*} For ease of notation we put $\mathcal{S}_{e,k}:=F^{(e-e_0)*}(\mathcal{S}_{e_0,k})$ for every $e\geq e_0$ and $0\leq k\leq t$. The length $t$ of such a filtration and the ranks of the quotients $\mathcal{S}_{e,k}/\mathcal{S}_{e,k-1}$ are independent of $e$, while the degrees are not. We define the following rational numbers: \begin{itemize} \item $\bar{\mu}_k=\bar{\mu}_k(\mathcal{S})=\displaystyle\frac{\mu(\mathcal{S}_{e,k}/\mathcal{S}_{e,k-1})}{p^e}$, where $\mu(-)$ denotes the usual slope of the bundle, \item $r_k=\mathrm{rank}(\mathcal{S}_{e,k}/\mathcal{S}_{e,k-1})$, \item $\nu_k=-\displaystyle\frac{\bar{\mu}_k}{\mathrm{deg}Y}$. \end{itemize} \begin{Rem} We point out that the numbers $\bar{\mu}_k$, $r_k$ and $\nu_k$ are rational and independent of $e$ for $e\gg0$. In fact, we have $\sum_{k=1}^tr_k\mu(\mathcal{S}_{e,k}/\mathcal{S}_{e,k-1})=\deg(F^{e*}\mathcal{S})=p^e\deg\mathcal{S}$, which implies the relation \begin{equation*} \sum_{k=1}^tr_k\bar{\mu}_k=\deg\mathcal{S}.
\end{equation*} \end{Rem} \begin{Def} Let $\mathcal{S}$ be a locally free sheaf over a projective curve over an algebraically closed field of prime characteristic and let $\bar{\mu}_k$ and $r_k$ be as above. The \emph{Hilbert-Kunz slope} of $\mathcal{S}$ is the rational number \begin{equation*} \mu_{HK}(\mathcal{S})=\sum_{k=1}^tr_k\bar{\mu}_k^2. \end{equation*} \end{Def} This notion was introduced by the first author in \cite{Bre07}, where he also proved Theorem \ref{theoremalternatingsum} below. \begin{Ex}\label{slopelinebundle} Let $\mathcal{L}$ be a line bundle; then $\mathcal{L}$ is semistable of slope $\mu(\mathcal{L})=\deg{\mathcal{L}}$. The pullback along Frobenius is again a line bundle: $F^{e*}\mathcal{L}=\mathcal{L}^{q}=\mathcal{L}^{\otimes q}$, with $q=p^e$. It follows that $0\subseteq\mathcal{L}$ is the strong Harder-Narasimhan filtration of $\mathcal{L}$, and the Hilbert-Kunz slope is simply \begin{equation*} \mu_{HK}(\mathcal{L})=(\deg\mathcal{L})^2. \end{equation*} \end{Ex} \begin{Ex}\label{slopesum} Let $d_1<d_2<\cdots<d_m$ be non-negative integers and let $\mathcal{T}:=\bigoplus_{i=1}^m\mathcal{O}(-d_i)^{\oplus r_i}$, where $\mathcal{O}:=\mathcal{O}_Y$ and $r_i\in\mathbb{N}$. The Harder-Narasimhan filtration of $\mathcal{T}$ is \begin{equation*} 0\subseteq\mathcal{O}(-d_1)^{\oplus r_1}\subseteq\mathcal{O}(-d_1)^{\oplus r_1}\oplus\mathcal{O}(-d_2)^{\oplus r_2}\subseteq\dots\subseteq\bigoplus_{i=1}^m\mathcal{O}(-d_i)^{\oplus r_i}. \end{equation*} The quotients are direct sums of line bundles of the same degree, so their pullbacks under Frobenius are semistable. Hence this is also the strong Harder-Narasimhan filtration of $\mathcal{T}$, with invariants $r_k$ and $\bar{\mu}_k=\deg\mathcal{O}(-d_k)=-d_k\deg\mathcal{O}_Y(1)=-d_k\deg Y$. The Hilbert-Kunz slope of $\mathcal{T}$ is then \begin{equation*} \mu_{HK}(\mathcal{T})=(\deg Y)^2\sum_{k=1}^mr_kd_k^2. \end{equation*} \end{Ex} \begin{Theo}[Brenner \cite{Bre07}]\label{theoremalternatingsum} Let $Y$ denote a smooth projective curve of genus $g$ over an algebraically closed field of positive characteristic $p$ and let $q=p^e$ for a non-negative integer $e$. Let $0\rightarrow\mathcal{S}\rightarrow\mathcal{T}\rightarrow\mathcal{Q}\rightarrow0$ denote a short exact sequence of locally free sheaves on $Y$. Then the following hold. \begin{enumerate} \item For every non-negative integer $e$ the alternating sum of the dimensions of the global sections is \begin{equation*} \begin{split} \sum_{m\in\mathbb{Z}}\left(h^0(F^{e*}\mathcal{S}(m))-h^0(F^{e*}\mathcal{T}(m))+h^0(F^{e*}\mathcal{Q}(m))\right)\\ =\frac{q^2}{2\deg Y}\left(\mu_{HK}(\mathcal{S})-\mu_{HK}(\mathcal{T})+\mu_{HK}(\mathcal{Q})\right)+O(q^0). \end{split} \end{equation*} \item If the field is the algebraic closure of a finite field, then the $O(q^0)$-term is eventually periodic. \end{enumerate} \end{Theo} The alternating sum in Theorem \ref{theoremalternatingsum} is in fact a finite sum for every $q$: for $m\ll 0$ the locally free sheaves have no global sections, so all the terms vanish, and for $m\gg0$ we have $H^1(Y,F^{e*}\mathcal{S}(m))=0$, so the sum is $0$. Moreover, the sum equals the total dimension of the cokernels, \begin{equation*} \sum_{m\in\mathbb{Z}}\dim\left(\Gamma(Y,F^{e*}\mathcal{Q}(m))/\mathrm{im}(\Gamma(Y,F^{e*}\mathcal{T}(m)))\right).
\end{equation*} \par In \cite{Bre07}, Brenner used Theorem \ref{theoremalternatingsum} to prove that the Hilbert-Kunz function of a homogeneous $R_+$-primary ideal $I$ in a normal two-dimensional standard-graded $K$-domain $R$ has the following form: \begin{equation*} HK(I,q)=e_{HK}(I)q^2+\gamma(q), \end{equation*} where $e_{HK}(I)$ is a rational number and $\gamma(q)$ is a bounded function, which is eventually periodic if $K$ is the algebraic closure of a finite field. In particular, if $I$ is generated by homogeneous elements $f_1,\dots,f_n$ of degrees $d_1,\dots,d_n$, and $r_k$, $\bar{\mu}_k$ denote the numerical invariants of the strong Harder-Narasimhan filtration of the syzygy bundle $\mathrm{Syz}(f_1,\dots,f_n)$ on the curve $Y=\mathrm{Proj}R$, then the Hilbert-Kunz multiplicity of $I$ is given by \begin{equation*} e_{HK}(I)=\frac{1}{2\deg Y}\left(\sum_{k=1}^tr_k\bar{\mu}_k^2-(\deg Y)^2\sum_{i=1}^nd_i^2\right). \end{equation*} \par In Section 3 we apply this method to deduce a similar result for the generalized Hilbert-Kunz function and to answer a question of Dao and Smirnov \cite[Example 6.2]{DS13}. \section{The generalized Hilbert-Kunz function in dimension $2$} \begin{Lemma}\label{lenghtquotientlemma2} Let $R$ be a two-dimensional normal $K$-domain of positive characteristic $p$ with homogeneous maximal ideal $\mathfrak{m}$. We denote by $U=\mathrm{Spec}R\setminus\{\mathfrak{m}\}$ the punctured spectrum. Let $M$ be a finitely generated graded $R$-module with a presentation \begin{equation}\label{presentationofM} 0\rightarrow I\rightarrow R^n\rightarrow M\rightarrow0. \end{equation} Let $J=I^{**}$ be the reflexive hull of $I$ (considered inside $R^n$) and let $\mathcal{L}$ be the coherent sheaf corresponding to $J$ on $U$, that is, $\mathcal{L}=\widetilde{J}|_{U}$. Then \begin{equation}\label{reductionequation} \begin{split} gHK(M,q)&=l_R\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)\\ &=l_R\left((F^{e*}J)^{**}/\mathrm{im}F^{e*}I\right), \end{split} \end{equation} where $q=p^e$ and $\mathrm{im}F^{e*}I$ denotes the image of the map $F^{e*}I\rightarrow F^{e*}R^n\cong R^n$. \end{Lemma} Before proving the lemma, we explain the right-hand side of the equality \eqref{reductionequation}. \par First of all, by virtue of \eqref{geometriccharacterization} we have $\Gamma(U,F^{e*}\mathcal{L})=(F^{e*}J)^{**}$, so the second equality is clear. Next, the inclusion $I\hookrightarrow R^n$ factors through the reflexive module $J$. Applying the Frobenius functor to these maps we get a commutative diagram \begin{equation}\label{frobeniuscommutative} \begin{tikzcd} &F^{e*}I \arrow{r}\arrow{d} &F^{e*}R^n\cong R^n \\ &F^{e*}J\arrow{ru}. \end{tikzcd} \end{equation} Since the functor $F^{e*}$ is not left exact in general, the maps in \eqref{frobeniuscommutative} need not be injective; for this reason we consider the image $\mathrm{im}F^{e*}I\subseteq{R}^n$. \par Since $R$ is normal, $U$ is regular and the absolute Frobenius morphism $F^{e}:U\rightarrow U$ is flat, so pulling back along it is exact on $U$. We may therefore pull back the inclusion $\mathcal{L}\hookrightarrow\mathcal{O}^n_U$ along $F^{e}$ and take sections on $U$, obtaining the inclusion \begin{equation*} \Gamma(U,F^{e*}\mathcal{L})\hookrightarrow \Gamma(U,F^{e*}\mathcal{O}^n_U)\cong \Gamma(U,\mathcal{O}^n_U)=R^n. \end{equation*} Therefore the quotient $\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I$ is a quotient of submodules of $R^n$.
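As a quick sanity check (our example, not part of the original statement), let $R=K[x,y]$, $\mathfrak{m}=(x,y)$ and $M=R/\mathfrak{m}$, so that $I=\mathfrak{m}$ and $n=1$ in \eqref{presentationofM}. Since $\mathfrak{m}$ has height $2$, its reflexive hull is $J=\mathfrak{m}^{**}=R$, hence $\mathcal{L}=\mathcal{O}_U$, $\Gamma(U,F^{e*}\mathcal{L})=R$ and $\mathrm{im}F^{e*}I=\mathfrak{m}^{[q]}$. The right-hand side of \eqref{reductionequation} becomes
\begin{equation*}
l_R\left(R/\mathfrak{m}^{[q]}\right)=q^2,
\end{equation*}
which is precisely $gHK(R/\mathfrak{m},q)=l_R(H^0_{\mathfrak{m}}(R/\mathfrak{m}^{[q]}))$, the classical Hilbert-Kunz function of $\mathfrak{m}$.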
\begin{proof} We apply the functor $F^{e*}$ to the short exact sequence \eqref{presentationofM} and get $F^{e*}I\rightarrow R^n\rightarrow F^{e*}M\rightarrow0$. Therefore we have \begin{equation}\label{frobeniusequality1} F^{e*}M=R^n/\mathrm{im}F^{e*}I. \end{equation} Then we consider the short exact sequence \begin{equation*} 0\rightarrow \Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\rightarrow R^n/\mathrm{im}F^{e*}I \rightarrow R^n/\Gamma(U,F^{e*}\mathcal{L})\rightarrow0. \end{equation*} Taking local cohomology yields \begin{equation*} 0\rightarrow H_{\mathfrak{m}}^0\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)\rightarrow H_{\mathfrak{m}}^0\left(R^n/\mathrm{im}F^{e*}I\right)\rightarrow H_{\mathfrak{m}}^0\left(R^n/\Gamma(U,F^{e*}\mathcal{L})\right). \end{equation*} The module $\Gamma(U,F^{e*}\mathcal{L})$ is reflexive by \eqref{geometriccharacterization}, so by Lemma \ref{reflexivecohomologyzero} the last module of the previous sequence is $0$. We thus obtain the isomorphism \begin{equation}\label{frobeniusequality2} \begin{split} H_{\mathfrak{m}}^0\left(R^n/\mathrm{im}F^{e*}I\right)&\cong H_{\mathfrak{m}}^0\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)\\ &= \Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I. \end{split} \end{equation} The last equality holds because the module $\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I$ has support in $\mathfrak{m}$, since the sheaves $\mathcal{L}$ and $\widetilde{I}$ coincide on $U$. The desired formula then follows from \eqref{frobeniusequality1} and \eqref{frobeniusequality2}. \end{proof} \begin{Theo}\label{TheoremgHKgradedcase} Let $R$ be a two-dimensional normal standard graded $K$-domain over an algebraically closed field $K$ of prime characteristic $p$ and let $M$ be a finitely generated graded $R$-module. Then the generalized Hilbert-Kunz function of $M$ has the form \begin{equation*} gHK(M,q)=e_{gHK}(M)q^2+\gamma(q), \end{equation*} where $e_{gHK}(M)$ is a rational number and $\gamma(q)$ is a bounded function. \par Moreover, if $K$ is the algebraic closure of a finite field, then $\gamma(q)$ is an eventually periodic function. More precisely, given a graded presentation of $M$ \begin{equation*} \bigoplus_{i=1}^nR(-d_i)\xrightarrow{\psi}\bigoplus_{j=1}^mR(-e_j)\rightarrow M\rightarrow0 \end{equation*} and the corresponding short exact sequence of locally free sheaves on the curve $Y=\mathrm{Proj}R$ \begin{equation}\label{syzygysequence2} 0\rightarrow\mathcal{S}:=\widetilde{\mathrm{ker}{\psi}}\rightarrow\mathcal{T}:=\bigoplus_{i=1}^n\mathcal{O}_Y(-d_i)\rightarrow\mathcal{Q}:=\widetilde{\mathrm{im}\psi}\rightarrow0, \end{equation} the generalized Hilbert-Kunz multiplicity of $M$ is \begin{equation*} e_{gHK}(M)=\frac{1}{2\deg Y}\left(\mu_{HK}(\mathcal{S})-(\deg Y)^2\sum_{i=1}^nd_i^2+\mu_{HK}(\mathcal{Q})\right). \end{equation*} \end{Theo} \begin{proof} Let $u_1,\dots,u_m$ be homogeneous generators of $M$, of degrees $e_1,\dots,e_m$ respectively, and let \begin{equation*} 0\rightarrow I\rightarrow\bigoplus_{j=1}^mR(-e_j)\xrightarrow{u_1,\dots,u_m} M\rightarrow0 \end{equation*} be the corresponding short exact sequence. Let $f_1,\dots, f_n$ be homogeneous generators of $I$, of degrees $d_1,\dots,d_n$ respectively, and let \begin{equation*} 0\rightarrow N\rightarrow\bigoplus_{i=1}^{n}R(-d_i)\xrightarrow{f_1,\dots,f_n}I\rightarrow0 \end{equation*} be the corresponding graded short exact sequence.
This last sequence induces the short exact sequence \eqref{syzygysequence2} on $Y$, and the short exact sequence \begin{equation}\label{syzygysequence3} 0\rightarrow \mathcal{E}:=\widetilde{N}|_{U}\rightarrow\mathcal{F}:=\bigoplus_{i=1}^n\mathcal{O}_U(-d_i)\rightarrow\widetilde{I}|_U\rightarrow0 \end{equation} on the punctured spectrum $U$. The modules $N$ and $I$ are submodules of finite free $R$-modules, so they are torsion-free. It follows that the corresponding sheaves $\mathcal{E}$ and $\widetilde{I}|_U$ on $U$ are locally free, since $U$ is regular. Moreover, if $J=I^{**}$ is the reflexive hull of $I$ and $\mathcal{L}$ the coherent sheaf corresponding to $J$ on $U$, we have $\mathcal{L}=\widetilde{I}|_U$ as sheaves on $U$. \par By Lemma \ref{lenghtquotientlemma2}, the generalized Hilbert-Kunz function of $M$ is given by \begin{equation*} gHK(M,q)=l_R\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)=\sum_{m\in\mathbb{Z}}l_R\left(\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)_m\right). \end{equation*} \par To compute the last sum, we consider the sequence \eqref{syzygysequence2}, pull it back along the $e$-th iterate of the absolute Frobenius morphism on $Y$, and tensor with $\mathcal{O}_Y(m)$ for an integer $m$. We obtain an exact sequence \begin{equation*} 0\rightarrow F^{e*}\mathcal{S}(m)\rightarrow F^{e*}\mathcal{T}(m)\rightarrow F^{e*}\mathcal{Q}(m)\rightarrow0. \end{equation*} Then we take global sections $\Gamma(Y,-)$ of the last sequence and get \begin{equation*} 0\rightarrow \Gamma(Y,F^{e*}\mathcal{S}(m))\rightarrow\Gamma(Y,F^{e*}\mathcal{T}(m))\xrightarrow{\varphi_m}\Gamma(Y,F^{e*}\mathcal{Q}(m))\rightarrow\dots \end{equation*} We are interested in the cokernel of the map $\varphi_m$; its image is clearly $(\mathrm{im}F^{e*}I)_m$. For the evaluation of the sheaf $F^{e*}\mathcal{Q}(m)$ on $Y$, using the sequences \eqref{syzygysequence2} and \eqref{syzygysequence3} we obtain \begin{equation*} \Gamma(Y,F^{e*}\mathcal{Q}(m))\cong\Gamma(U,F^{e*}\mathcal{L})_m. \end{equation*} So we get $\mathrm{Coker}(\varphi_m)=\Gamma(U,F^{e*}\mathcal{L})_m/(\mathrm{im}F^{e*}I)_m=\left(\Gamma(U,F^{e*}\mathcal{L})/\mathrm{im}F^{e*}I\right)_m$. It follows that \begin{equation*} gHK(M,q)=\sum_{m\in\mathbb{Z}}\dim_K\mathrm{Coker}(\varphi_m). \end{equation*} We compute the last sum with Theorem \ref{theoremalternatingsum} and obtain the desired formula for the generalized Hilbert-Kunz function. \par For the generalized Hilbert-Kunz multiplicity, it is enough to note that $\mu_{HK}(\mathcal{T})=(\deg Y)^2\sum_{i=1}^nd_i^2$ by Example \ref{slopesum}. \end{proof} \begin{Cor}\label{corollaryHKideal} Let $I$ be a non-zero ideal generated by homogeneous elements $f_1,\dots,f_n$ of degrees $d_1,\dots,d_n$ respectively, and let $d$ be the degree of the ideal sheaf associated to $I$ on $Y=\mathrm{Proj} R$. Then the generalized Hilbert-Kunz multiplicity of $R/I$ is given by \begin{equation*} e_{gHK}(R/I)=\frac{1}{2\deg Y}\left(\sum_{k=1}^tr_k\bar{\mu}_k^2-(\deg Y)^2\sum_{i=1}^nd_i^2+d^2\right), \end{equation*} where $r_k$, $\bar{\mu}_k$ and $t$ are the numerical invariants of the strong Harder-Narasimhan filtration of the syzygy bundle $\mathrm{Syz}(f_1,\dots,f_n)$.
\end{Cor} \begin{proof} In this case the presenting sequence of $R/I$ is just $0\rightarrow I\rightarrow R\rightarrow R/I\rightarrow 0$ and the sequence \eqref{syzygysequence2} is then \begin{equation*} 0\rightarrow\mathrm{Syz}(f_1,\dots,f_n)\rightarrow\bigoplus_{i=1}^n\mathcal{O}_Y(-d_i)\xrightarrow{f_1,\dots,f_n}\mathcal{Q}\rightarrow0. \end{equation*} So by Theorem \ref{TheoremgHKgradedcase}, the generalized Hilbert-Kunz multiplicity of $R/I$ is given by \begin{equation*} \frac{1}{2\deg Y}\left(\mu_{HK}(\mathrm{Syz}(f_1,\dots,f_n))-(\deg Y)^2\sum_{i=1}^nd_i^2+\mu_{HK}(\mathcal{Q})\right). \end{equation*} In this situation $\mathcal{Q}$ is a line bundle, so by Example \ref{slopelinebundle}, $\mu_{HK}(\mathcal{Q})=(\deg \mathcal{Q})^2=d^2$, and by definition $\mu_{HK}(\mathrm{Syz}(f_1,\dots,f_n))=\sum_{k=1}^tr_k\bar{\mu}_k^2$. \end{proof} \begin{Ex}\label{exampleprincipalideal} Let $h$ be a homogeneous element of degree $a>0$, and let $I=(h)$. Then the sequence \eqref{syzygysequence2} reduces to $0\rightarrow \mathcal{O}_Y(-a)\xrightarrow{\simeq}\mathcal{Q}\rightarrow0$, hence $\mathrm{Syz}(h)=0$. Since $I$ is principal, the ideal sheaf associated to $I$ is $\mathcal{O}_Y(-a)$, of degree $-a\cdot\deg Y$. The generalized Hilbert-Kunz multiplicity is then \begin{equation*} e_{gHK}(R/I)=\frac{1}{2\deg Y}\left(-(\deg Y)^2a^2+(\deg Y)^2a^2\right)=0, \end{equation*} in accordance with Proposition \ref{reflexiveproposition}. \end{Ex} \begin{Ex}\label{exampleideal2elements} Let $I$ be a prime ideal of height one generated by two homogeneous elements $f$ and $g$ of degrees $a$ and $b$ respectively. Then the syzygy sequence is \begin{equation*} 0\rightarrow \mathrm{Syz}(f,g)\rightarrow \mathcal{O}_Y(-a)\oplus\mathcal{O}_Y(-b)\rightarrow\mathcal{Q}\rightarrow0, \end{equation*} with $\mathcal{Q}$ the line bundle associated to the ideal $I$, say of degree $d$. From this sequence we see that the syzygy bundle has rank one and degree $\deg Y(-a-b)-d$. Therefore we have \begin{equation*} \begin{split} e_{gHK}(R/I)&=\frac{1}{2\deg Y}\left(\left(\deg Y(-a-b)-d\right)^2-(\deg Y)^2(a^2+b^2)+d^2\right)\\ &= \frac{1}{2\deg Y}\Big(2d^2+(\deg Y)^2(a^2+b^2)+2ab(\deg Y)^2+2d\deg Y(a+b)\\ &-(\deg Y)^2(a^2+b^2)\Big)\\ &=\frac{1}{\deg Y}\left(d^2+ab(\deg Y)^2+d(a+b)\deg Y\right)\\ &=\frac{d^2}{\deg Y}+ab\deg Y+d(a+b). \end{split} \end{equation*} \end{Ex} \begin{Ex}\label{exampleidealofapoint} Let $P$ be a point of the smooth projective curve $Y=\mathrm{Proj}R$, and let $I\subseteq R$ be the corresponding homogeneous prime ideal of height one. The ideal $I$ is minimally generated by two linear forms $f$ and $g$. In fact, $f$ and $g$ correspond to two hyperplanes in the projective space where $Y$ is embedded, which meet transversally at $P$. Then, using the notation of Example \ref{exampleideal2elements}, we have $a=b=1$ and $d=-1$, since the line bundle $\mathcal{Q}$ associated to the ideal $I$ is a subsheaf of $\mathcal{O}_Y$. So we obtain \begin{equation*} e_{gHK}(R/I)=\frac{1}{\deg Y}+\deg Y-2=\frac{(\deg Y-1)^2}{\deg Y}. \end{equation*} \end{Ex}
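The algebraic simplification in Example \ref{exampleideal2elements}, and its specialization in Example \ref{exampleidealofapoint}, can also be checked mechanically. The following is a minimal symbolic sketch (an illustration using Python's sympy, not part of the original argument); the symbol \texttt{D} stands for $\deg Y$.
\begin{verbatim}
import sympy as sp

D, a, b, d = sp.symbols('D a b d')  # D plays the role of deg Y
raw = ((D*(-a - b) - d)**2 - D**2*(a**2 + b**2) + d**2) / (2*D)
closed = d**2/D + a*b*D + d*(a + b)
assert sp.simplify(raw - closed) == 0  # the two-generator formula

# Point example: a = b = 1, d = -1 recovers (deg Y - 1)^2 / deg Y
print(sp.factor(raw.subs({a: 1, b: 1, d: -1})))  # -> (D - 1)**2/D
\end{verbatim}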
\section{The limit of generalized Hilbert-Kunz multiplicity} Let $R$ be a standard graded domain flat over $\mathbb{Z}$ such that almost all fiber rings $R_p=R\otimes_{\mathbb{Z}}\mathbb{Z}/p\mathbb{Z}$ are geometrically normal two-dimensional domains. We define $R_0:=R\otimes_{\mathbb{Z}}\mathbb{Q}$ and the corresponding projective curve $Y_0:=\mathrm{Proj}R_0$ over the generic point. We denote by $Y_p:=\mathrm{Proj}R_p$ the projective curve over the prime number $p$; this is a smooth projective curve for almost all primes. If $\mathcal{S}$ is a sheaf over the curve $Y:=\mathrm{Proj}R$, we will denote by $\mathcal{S}_p$ (respectively $\mathcal{S}_0$) the corresponding restriction to the curve $Y_p$ (resp. $Y_0$). \begin{Rem} In our setting the curves $Y_0$ and $Y_p$ are not defined over an algebraically closed field. However, we may consider the curves $\overline{Y}_0:=Y_0\times_{\mathbb{Q}}\overline{\mathbb{Q}}$ and $\overline{Y}_p:=Y_p\times_{\mathbb{Z}/p\mathbb{Z}}\overline{\mathbb{Z}/p\mathbb{Z}}$, which are smooth projective curves over the algebraic closures. We can consider the definitions of degree, slope, semistability, HN filtration and strong HN filtration for those curves and transfer them to the original curves $Y_0$ and $Y_p$. Therefore we will move to the algebraic closure and back whenever this is convenient. \end{Rem} \par Let $M$ be a graded $R$-module. For every prime $p$ we can consider the reduction to characteristic $p$, $M_p:=M\otimes_RR_p\cong M\otimes_{\mathbb{Z}}\mathbb{Z}/p\mathbb{Z}$, and compute the generalized Hilbert-Kunz multiplicity $e_{gHK}^{R_p}(M_p)$ of the $R_p$-module $M_p$. Since the projective curve $Y_p$ is smooth for almost all primes $p$, by Theorem \ref{TheoremgHKgradedcase} we know that $e_{gHK}^{R_p}(M_p)$ exists and that it is rational for these primes. We are interested in the behaviour of $e_{gHK}^{R_p}(M_p)$ as $p\rightarrow+\infty$. \par We introduce the following characteristic zero version of the Hilbert-Kunz slope. \begin{Def} Let $\mathcal{S}$ be a locally free sheaf over a projective curve over an algebraically closed field of characteristic zero and let $0=\mathcal{S}_0\subseteq\mathcal{S}_1\subseteq\dots\subseteq\mathcal{S}_t=\mathcal{S}$ be the Harder-Narasimhan filtration of $\mathcal{S}$. For every $k=1,\dots,t$ we set $\bar{\mu}_k=\bar{\mu}_k(\mathcal{S})=\mu(\mathcal{S}_{k}/\mathcal{S}_{k-1})$ and $r_k=\mathrm{rank}(\mathcal{S}_{k}/\mathcal{S}_{k-1})$. The \emph{Hilbert-Kunz slope} of $\mathcal{S}$ is the rational number \begin{equation*} \mu_{HK}(\mathcal{S})=\sum_{k=1}^tr_k\bar{\mu}_k^2. \end{equation*} \end{Def} The name Hilbert-Kunz slope is justified by the following result of Trivedi (cf. \cite[Lemma 1.14]{Tri07}). \begin{Lemma}[Trivedi]\label{trivedilemma} Let $h\in\mathbb{Z}_+$, let $Y$ be a smooth projective curve over $\mathrm{Spec}\mathbb{Z}_h$, and let $\mathcal{S}$ be a locally free sheaf over $Y$. We denote by $\mathcal{S}_0$ and $\mathcal{S}_p$ the restrictions of $\mathcal{S}$ to $Y_0$ and to $Y_p$, for $p\nmid h$. Then \begin{equation*} \lim_{\substack{p\rightarrow+\infty\\ p\nmid h}}\mu_{HK}(\mathcal{S}_p)=\mu_{HK}(\mathcal{S}_0). \end{equation*} \end{Lemma} \begin{Theo}\label{limitHKtheorem} Let $R$ be a standard graded domain flat over $\mathbb{Z}$ such that almost all fiber rings $R_p=R\otimes_{\mathbb{Z}}\mathbb{Z}/p\mathbb{Z}$ are geometrically normal two-dimensional domains. Let $M$ be a graded $R$-module with a graded presentation \begin{equation*} \bigoplus_{i=1}^nR(-d_i)\xrightarrow{\psi}\bigoplus_{j=1}^mR(-e_j)\rightarrow M\rightarrow0, \end{equation*} and corresponding short exact sequence of locally free sheaves $0\rightarrow\mathcal{S}_0\rightarrow\mathcal{T}_0\rightarrow\mathcal{Q}_0\rightarrow0$ on the generic fiber $Y_0=\mathrm{Proj}R_0$, with notation as above. Then the limit \begin{equation*} \lim_{p\rightarrow+\infty}e_{gHK}^{R_p}(M_p) \end{equation*} exists and is equal to the rational number \begin{equation*} \frac{1}{2\deg Y_0}\left(\mu_{HK}(\mathcal{S}_0)-\mu_{HK}(\mathcal{T}_0)+\mu_{HK}(\mathcal{Q}_0)\right).
\end{equation*} \end{Theo} \begin{proof} Let $u_1,\dots,u_m$ be homogeneous generators of $M$ as an $R$-module, and let $f_1,\dots,f_n$ be homogeneous generators of $I:=\mathrm{Syz}(u_1,\dots,u_m)$. We obtain two short exact sequences \begin{equation}\label{sequencesrelativesituation} \begin{split} &0\rightarrow I\rightarrow\bigoplus_{j=1}^mR(-e_j)\xrightarrow{u_1,\dots,u_m} M\rightarrow0, \ \text{ and }\\ &0\rightarrow N\rightarrow\bigoplus_{i=1}^{n}R(-d_i)\xrightarrow{f_1,\dots,f_n}I\rightarrow0. \end{split} \end{equation} \par Tensoring these sequences with the flat $\mathbb{Z}$-module $\mathbb{Q}$, we obtain exact sequences of $R_0$-modules. On the other hand, if we apply the functor $-\otimes_{\mathbb{Z}}\mathbb{Z}/p\mathbb{Z}$ to the sequences \eqref{sequencesrelativesituation}, exactness is preserved for all primes except a finite number of them. Let $h$ be the product of those primes, and consider the smooth projective curve $Y=\mathrm{Proj}R_h$ over $\mathrm{Spec}\mathbb{Z}_h$. \par Let $U=D(R_{h+})$ denote the relative punctured spectrum. The sheaf $\widetilde{I}|_{U}$ restricts to $U_0=U\cap\mathrm{Spec}R_0$ as a locally free sheaf. By possibly shrinking the set $D(h)$ we may assume that $\widetilde{I}|_{U}$ is locally free. By further shrinking we may assume that $\mathcal{E}:=\widetilde{N}|_{U}$ is also locally free. Then for almost all $p$, $I_p$ and $N_p$ are locally free on $U_p$. \par Let $\mathcal{S}$, $\mathcal{T}$, $\mathcal{Q}$ be the locally free sheaves on $Y$ corresponding to $\mathcal{E}$, $\bigoplus_{i=1}^nR(-d_i)$, $\widetilde{I}|_{U}$, which, by the second sequence of \eqref{sequencesrelativesituation}, form an exact sequence $0\rightarrow\mathcal{S}\rightarrow\mathcal{T}\rightarrow\mathcal{Q}\rightarrow0$ on $Y$. Its restrictions give the short exact sequences $0\rightarrow\mathcal{S}_0\rightarrow\mathcal{T}_0\rightarrow\mathcal{Q}_0\rightarrow0$ on the generic fiber $Y_0$, and $0\rightarrow\mathcal{S}_p\rightarrow\mathcal{T}_p\rightarrow\mathcal{Q}_p\rightarrow0$ on the fiber $Y_p$, for $p \nmid h$. \par Let $p$ be a prime number not dividing $h$; then we are in the situation of Theorem \ref{TheoremgHKgradedcase}, and we obtain \begin{equation*} e_{gHK}^{R_p}(M_p)=\frac{1}{2\deg Y_p}\left(\mu_{HK}(\mathcal{S}_p)-\mu_{HK}(\mathcal{T}_p)+\mu_{HK}(\mathcal{Q}_p)\right). \end{equation*} Taking the limit as $p\rightarrow+\infty$ and applying Lemma \ref{trivedilemma} concludes the proof. \end{proof} \section*{Acknowledgements} We would like to thank Mohsen Asgharzadeh for a careful reading of an earlier version of this article, and Hailong Dao for many helpful conversations. We thank the referee for showing us how to simplify the proofs of Lemma \ref{reflexivecohomologyzero} and Lemma \ref{HKclassgroup}. Moreover, we thank the DFG \emph{Graduiertenkolleg Kombinatorische Strukturen in der Geometrie} at Osnabr\"uck for support.
\section{Introduction} Galaxy mergers are central to the widely accepted hierarchical structure formation paradigm \citep{toomre72}. Because most massive galaxies harbor a massive black hole \citep[MBH;][]{Kormendy1995}, mergers should produce binary MBHs \citep{begelman80}. In rarer situations, triple MBHs may form when a subsequent merger with a third galaxy occurs before the first two BHs coalesce \citep{valtonen96}. Therefore, direct evidence for triple MBHs would offer a unique verification of the hierarchical merger paradigm. Furthermore, triple MBHs are of significant merit for both understanding key aspects of galaxy formation and probing fundamental physics. For example, triple MBHs are predicted to play a crucial role in regulating the formation of the stellar cores in massive elliptical galaxies \citep{DEGN}. Simulations suggest that triple MBHs scour out stellar cores with mass deficits of one to two times the total BH mass, and sizes that are larger than those formed around binary MBHs; this process may be responsible for the unusually large cores observed in some massive elliptical galaxies such as M87 \citep{hoffman07}. In addition, close triple MBHs offer a laboratory for studying the dynamics of three-body interactions in general relativity \citep{DEGN}. Theory suggests that hierarchical systems of close triple MBHs may exhibit phases of very high eccentricity in the inner binary. These systems are thought to produce intense bursts of gravitational waves, which could be detectable with ongoing and future low-frequency gravitational wave experiments \citep{amaro10}. The precursors of close triple MBHs, including those that are still separated by $\lesssim$ a few kpc, are of interest, because they may be used to inform the subsequent evolution at closer separations \citep{DEGN}. Despite the intense theoretical interest, and strong reasons to believe that they exist, triple MBHs that are in direct gravitational interaction have not been conclusively detected. This lack arises mostly because the typical separation of close triples ($\lesssim$ a few parsecs) is too small to resolve at cosmological distances. Kpc-scale triple MBHs are a possible precursor of close triples. They are not yet in direct gravitational interaction, but their separations are both large enough to resolve with current facilities and small enough to be dynamically interesting. Moreover, the typical orbital decay timescale of the kpc phase is long enough ($\gtrsim10^8$\,yr) that it may provide the best chance for direct detection. Kpc-scale triple MBHs can be identified as kpc-scale triple Active Galactic Nuclei (AGNs), when all three BHs are accreting -- a process expected in gas-rich mergers \citep{hernquist89}. There are two candidate physical triple quasars known, QQQ 1429$-$008 at $z=2.1$ \citep{djorgovski07}, and QQQ J1519+0627 at $z=1.51$ \citep{Farina2013}, but the projected separations between the quasars are much larger, i.e., 30--50 kpc and 200 kpc, respectively, and it is unclear whether the host galaxies are in a direct merging process, due to the lack of tidal features indicative of ongoing interactions. Until recently, only one possible candidate was serendipitously discovered in NGC 3341 \citep{Barth2008}; the disk galaxy contains three nuclei (with two offset nuclei at projected separations of 5.1 and 8.4 kpc from its primary nucleus), which are optically classified as a Seyfert, a LINER, and a LINER or LINER-H {\tiny II} composite, respectively.
There is a possible candidate reported at $z=1.35$ \citep{schawinski11} based on indirect evidence from rest-frame optical diagnostic line ratios, although the resolution of the HST grism spectrum is too low to cleanly separate the individual components, and it is more likely a clumpy star-forming galaxy at $z>1$ than a bona fide triple AGN. There was another candidate triple AGN system (J1502+1115) at $z=0.39$ discovered with the NSF's Karl G. Jansky Very Large Array (VLA) \citep{Deane2014}, although two nuclei were later found to be radio hot spots instead of a pair of MBHs \citep{Wrobel2014}. A systematic SDSS search found a candidate, SDSS J1027+1749 \citep{Liu2011}, but only one of the nuclei is optically classified as a Seyfert whereas the other two nuclei are a LINER and an AGN/H {\tiny II} composite. Another candidate through a systematic SDSS search is SDSS J1056+5516 \citep{2017ApJ...851L..15K} at $z$ = 0.256, but it is inconclusive since one of the nuclei is a star formation/LINER composite, which requires further X-ray or radio observations to confirm its nature. NGC 6240 has also been suggested to host three nuclei, of which two are active \citep{Kollatschny2020}. In this work, we present deep VLA multi-band radio imaging for SDSS J0849+1114 (at $z=0.078$), the best kpc-scale triple AGN candidate known to date \citep{Liu2019,Pfeifle2019a,Foord2021,Foord2021a}. The target was originally identified from a systematic survey of AGN pairs in the optical \citep{Liu2011a} combined with comprehensive follow-ups \citep{Liu2019}. It was also independently identified \citep{Pfeifle2019a} from a systematic search of IR-selected mergers \citep{Pfeifle2019} with the Wide-Field Infrared Survey Explorer all-sky survey \citep{Wright2010}. All three of its nuclei are optically classified as Type 2 Seyferts, where the nuclear emission is obscured in the optical, and their AGN nature was indirectly inferred based on diagnostic narrow emission-line ratios using the BPT diagram \citep{bpt,veilleux87}. The excitation mechanisms estimated from optical diagnostics were inconclusive, because the often-present star formation, dust/gas extinction, and/or shock heating may either dilute or mimic AGN excitation. Furthermore, there could be only one or two active MBHs that are ionizing all three galaxies, producing three AGN-like nuclei in the optical. The spatial distribution of ionization parameters estimated from optical emission-line measurements was unable to pin down the locations of the ionizing sources, due both to the proximity of the nuclei and in particular to systematic uncertainties in the electron density measurements \citep{Liu2010b}. While the pilot program successfully measured the radio flux densities for two of its three nuclei and put an upper limit on the third one \citep{Liu2019}, the shallow, single-band data were insufficient to unambiguously confirm the AGN nature in the radio in all but one nucleus. Radio detection may provide the most direct evidence for nuclear activity \citep{ho08}. Although the spatial resolutions and/or sensitivities of previously existing FIRST and VLASS images were insufficient, the VLA's sub-arcsecond spatial resolution in its most extended A configuration, together with its superb capability of spectral imaging in the radio, makes it the ideal match for our goal of detecting all three AGNs in the radio or imposing stringent upper limits in the case of non-detections.
Indeed, high-resolution VLA imaging has been instrumental in confirming many of the handful of currently known kpc-scale dual AGNs \citep{Wrobel2014a,Wrobel2014,Fu2015,Fu2015a}. To pin down the nature of the ionizing sources in all three nuclei, we have obtained deeper VLA multi-band radio imaging. Along with the existing complementary multi-wavelength observations from the {\it Hubble Space Telescope} (HST) and {\it Chandra X-ray Observatory}, the new radio observations provide the most sensitive test of the triple AGN hypothesis for SDSS J0849+1114. Combined with the supplementary data, the much deeper, multi-band VLA observations allow us to confirm the AGN nature with radio spectral measurements in two of the three Seyfert nuclei and set the most stringent upper limit on the radio flux density from the faintest nucleus. We use radio variability as a complementary method of AGN confirmation \citep{Nyland2020}, although no significant radio variability is found. Finally, we present a detailed investigation of the two radio jets associated with the two radio-detected nuclei and address their connection to merger-driven AGN activity \citep{Wrobel2014}. Throughout this paper, quoted errors are 1-$\sigma$ unless otherwise specified. We adopt a luminosity distance of 355.5 Mpc and an angular diameter distance of 306.0 Mpc for SDSS J0849+1114, assuming $\Omega_{\rm m}=0.286$, $\Omega_{\Lambda} = 0.714 $, and $\rm H_{0}= 69.6\ km \ s^{-1}\ Mpc^{-1}$. \section{New VLA Observations and Data Preparation} \label{sec:data} Observations of SDSS J0849+1114 were performed with the VLA in its B-configuration in April 2019 and A-configuration between August--October 2019 (Program ID: 19A-085; PI: X. Liu). The B-configuration observations were taken in the X band (central frequency 10.0 GHz) over four epochs for a total on-source integration of 3.38 hr, while the A-configuration observations were taken in the S, C, X, and Ku bands (central frequency 3.0, 6.0, 10.0 and 15.0 GHz) for a total on-source integration of 5.70, 0.87, 1.82 and 1.52 hr, respectively. Among these, the S and X bands were observed in two epochs, while the C and Ku bands were observed in only one epoch. A phase-referenced mode was adopted with J0842+1835 serving as a phase calibrator. J0319+4130 was used as a leakage calibrator and 3C286 was utilized as a flux density and bandpass calibrator. Table \ref{tab:obsinfor} summarizes the basic observational information. The raw data were calibrated with the Common Astronomy Software Applications package (CASA, version: 5.6.2) by running the VLA pipeline\footnote{https://casaguides.nrao.edu/index.php?title=VLA\_CASA\_Pipeline-CASA5.6.2} \citep{2007ASPC..376..127M} with additional manual flagging and phase-only self-calibration on the visibilities. We have examined the X-band images of the six individual epochs for potential flux density variability in either nucleus A or nucleus C (Table \ref{tab:X-band}). No significant flux density variation ($\rm \lesssim 10\%$) was found among either the A-configuration epochs or the B-configuration epochs; a $\sim$20\% difference was found between the mean peak flux density in the A- and B-configuration images, which can be understood as due to the extended nature of both nuclei (see Section~\ref{sec:result}). Therefore, the visibilities of a given band at the same configuration were concatenated using the task `CONCAT'. Finally, we adopted the `MTMFS' deconvolver in the task `TCLEAN' to obtain the cleaned images.
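For concreteness, the post-pipeline steps described above can be sketched as follows. This is a minimal illustration using CASA's modular (version 6) Python interface rather than the monolithic CASA 5.6.2 actually used; all measurement-set names and imaging parameters below are placeholders, not the values adopted for this program.
\begin{verbatim}
from casatasks import gaincal, applycal, concat, tclean

# Phase-only self-calibration on a single-epoch measurement set.
gaincal(vis='epoch1_Xband.ms', caltable='epoch1.pcal',
        calmode='p', solint='int')
applycal(vis='epoch1_Xband.ms', gaintable=['epoch1.pcal'])

# Concatenate the visibilities of a given band and configuration ...
concat(vis=['epoch1_Xband.ms', 'epoch2_Xband.ms'], concatvis='Xband_A.ms')

# ... and image with the multi-term multi-frequency-synthesis deconvolver.
tclean(vis='Xband_A.ms', imagename='Xband_A', specmode='mfs',
       deconvolver='mtmfs', nterms=2, imsize=2048, cell='0.04arcsec',
       weighting='briggs', robust=0.5, niter=10000)
\end{verbatim}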
The resultant A-configuration concatenated images have a synthesized beam of $\rm 0\farcs63 \times 0\farcs53$, $\rm 0\farcs33 \times 0\farcs27$, $\rm 0\farcs27 \times 0\farcs18$, and $0\farcs21 \times 0\farcs12$, and an RMS of 5, 5, 7, and 6 $\rm \mu Jy\ beam^{-1}$, at the S, C, X, and Ku bands, respectively. The B-configuration X-band concatenated image has a synthesized beam of $\rm 0\farcs66 \times 0\farcs56$ and an RMS of 5 $\rm \mu Jy\ beam^{-1}$. We supplement the VLA images with the {\it HST}/WFC3 U-band and Y-band images (Figure \ref{fig:radio}e,f), originally presented in \citet{Liu2019}. The {\it HST} images have been registered to the astrometry of SDSS and have an absolute uncertainty of 0\farcs15 \citep{Liu2019}. \begin{figure}[h] \centering \includegraphics[width=0.89\textwidth]{radio_2.eps} \caption{ VLA and {\it HST} images of the triple-AGN system SDSS J0849+1114. (a)--(d): radio images at a central frequency of 3.0, 6.0, 10.0 and 15.0 GHz. The radio synthesized beam is indicated in the lower-left corner in panels (a)--(d). The purple to yellow contours are at levels of $\rm (4, 8, 16, 32, 64, 128,256, 512) \times RMS$, which is 5, 5, 7 and 6 $\rm \mu Jy\ beam^{-1}$ at 3, 6, 10, and 15 GHz. In panel (b), $HST$ Y-band intensity contours (orange) are overlaid to indicate the optical positions of nuclei A, B, and C. Two radio features, the Arc and the Apex, are also marked. In panel (c), some weak features along southeast-northwest around nucleus A are artifacts, probably due to imperfect self-calibration, and do not significantly affect our analysis. (e): {\it HST}/WFC3 IR/F105W (Y-band) image. The intensity contours are at levels of $\rm (106.8, 427.3, 1709.2)\ \mu Jy\ arcsec^{-2}$. (f): {\it HST}/WFC3 UVIS/F336W (U-band) image. The intensity contours are at levels of $\rm (11.6, 46.4, 185.5)\ \mu Jy\ arcsec^{-2}$. \label{fig:radio}} \end{figure} \section{Results} \label{sec:result} The VLA images of a $\rm 10''\times 10''$ (corresponding to $\rm 14.8\ kpc \times 14.8\ kpc$) region around SDSS J0849+1114 at 3.0, 6.0, 10.0, and 15.0 GHz are shown in Figure \ref{fig:radio}, which provides new insights into this triple AGN candidate. Here the 10.0 GHz image includes only data taken in the A-configuration for an optimal angular resolution. \subsection{The Triple Nuclei} Both nucleus A and nucleus C are clearly detected in all four bands. \citet{Liu2019} reported 9.0 GHz (X-band) detections of nuclei A and C, and here we have new detections at 3.0, 6.0, and 15.0 GHz. Moreover, interesting structures are revealed in both nuclei. Figure \ref{fig:zoomin} provides a close-up view of nucleus A at 15.0 GHz and nucleus C at 10.0 GHz. In Figure \ref{fig:zoomin}a, nucleus A exhibits an elongated morphology, reminiscent of a radio jet pointing along southeast-northwest at a position angle of $\sim$307$\arcdeg$ (east of north) with an apparent extent of $\sim$0\farcs6 ($\sim$0.9 kpc). This elongated feature (larger than the corresponding synthesized beam size) is also evident in the 10.0 GHz image of nucleus A (Figure~\ref{fig:radio}c).
The intensity peak of nucleus A in the {\it HST} Y-band (orange contours; Figure \ref{fig:radio}e) and U-band images (cyan contours; Figure \ref{fig:radio}f) is consistent with the 15.0 GHz intensity peak at [RA, DEC]=$[08^{h}49^{m}5\fs526, +11\arcdeg14\arcmin47\farcs57]$ (marked as the white cross in Figure~\ref{fig:zoomin}a) to within the astrometric uncertainty of $\sim$0\farcs15 between the optical and radio images, suggesting that the latter may be the position of the jet base. If this were the case, a discrete knot seen at $\sim$0\farcs3 southeast of the jet base may be the trace of an otherwise undetected counter-jet, whose faintness is likely due to the relativistic Doppler dimming effect. Due to the extended nature of nucleus A, we report its integrated flux density at each frequency (Table \ref{tab:nuclei}), which is measured using the CASA task IMFIT with a two-dimensional Gaussian model. The resultant integrated flux density ($S_\nu$) is $\rm 17.75 \pm 0.56$, $\rm 10.28\pm 0.32$, $\rm 6.26\pm 0.21$ and $\rm 4.21\pm 0.17$ $\rm mJy$ at 3, 6, 10, and 15 GHz, respectively. Using orthogonal distance regression, we derive a spectral index of $\alpha = -0.90 \pm 0.04$ ($S_\nu \propto \nu^{\alpha}$) for nucleus A. Accounting for the coherence loss, the measurements of nucleus A from the VLA agree with the results from the European VLBI Network (EVN) at 1.7 GHz \citep{2019A&A...630L...5G}. Figure \ref{fig:zoomin}b shows that the 10 GHz emission from nucleus C is dominated by a compact core, which peaks at [RA, DEC]=$[08^{h}49^{m}05\fs436, +11\arcdeg14\arcmin50\farcs76]$ and is coincident with the southern intensity peak in the Y-band image (orange contours; Figure \ref{fig:radio}e), to within the astrometric uncertainty of 0\farcs15. On the other hand, the U-band intensity peak (also coincident with the northern peak in the Y-band) is significantly offset ($\sim0\farcs5$ to the northeast) from the radio peak. If the latter marked the true position of the putative MBH, the U-band peak may be understood as the site of circumnuclear star formation. Significant 10.0 GHz emission seen to the immediate northeast of the radio core may be associated with this star-forming activity. Strong extended emission is also present to the west of the radio core, exhibiting an edge-brightened morphology reminiscent of a radio bubble. This bubble-like feature may be connected to the radio core with weak emission. The integrated flux density of nucleus C is measured to be $\rm 1.62\pm 0.09$, $\rm 0.83\pm 0.07$, $\rm 0.45\pm 0.04$ and $\rm 0.32\pm 0.02$ $\rm mJy$ at 3.0, 6.0, 10.0, and 15.0 GHz, respectively. The best-fit spectral index is $-1.03 \pm 0.04$. Despite the unprecedented sensitivity of these images, nucleus B remains undetected in all four bands. Therefore, we place a 3-$\sigma$ upper limit on the peak flux density of nucleus B, which is 15, 15, 15, and 18 $\rm \mu Jy\ beam^{-1}$ at 3.0, 6.0, 10.0, and 15.0 GHz, respectively. Here the 10 GHz upper limit is from the B-configuration concatenated image, which has an RMS 1.4 times lower than that of the A-configuration image. These limits are much tighter than those reported by \citet{2019A&A...630L...5G} based on the EVN observations, which is $\rm < 450\ \mu Jy\ beam^{-1}$ at 1.7 GHz. Figure \ref{fig:sed} displays the observed radio spectral energy distributions of the triple nuclei (upper limits in the case of nucleus B), along with the best-fit power-law.
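To illustrate the power-law fits quoted above, the following minimal sketch reproduces the spectral index of nucleus A from the integrated flux densities in Table \ref{tab:nuclei}, via a straight-line orthogonal distance regression in log-log space (the exact fitting setup used for the paper may differ, e.g. in its treatment of frequency uncertainties).
\begin{verbatim}
import numpy as np
from scipy import odr

nu = np.array([3.0, 6.0, 10.0, 15.0])        # GHz
s = np.array([17.75, 10.28, 6.26, 4.21])     # mJy, nucleus A (Table 3)
s_err = np.array([0.56, 0.32, 0.21, 0.17])   # 1-sigma errors, mJy

# Fit log10(S) = alpha * log10(nu) + const; propagate flux errors
# into log space.
data = odr.RealData(np.log10(nu), np.log10(s), sy=s_err / (s * np.log(10)))
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
out = odr.ODR(data, model, beta0=[-1.0, 1.5]).run()
print("alpha = %.2f +/- %.2f" % (out.beta[0], out.sd_beta[0]))  # ~ -0.90
\end{verbatim}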
\begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{zoomin.eps} \caption{A zoom-in view of (a) nucleus A at 15.0 GHz and (b) nucleus C at 10.0 GHz. Each panel has a size of $2\arcsec \times 2\arcsec$. The orange and cyan intensity contours are of the {\it HST} Y-band and U-band, respectively (same as in Figure~\ref{fig:radio}). The radio intensity peak of the nucleus is marked by a white `+' sign. \label{fig:zoomin} } \end{figure} \subsection{Extended Features} The VLA images also reveal remarkable features on galactic scales. Most prominent is a linear feature through nucleus A in the 3.0, 6.0, and 10.0 GHz images (Figure~\ref{fig:radio}), which is reminiscent of a two-sided jet. To distinguish it from the jet-like feature within half an arcsec of nucleus A (Figure~\ref{fig:zoomin}a), we shall refer to this linear feature as the outer jet (and the former as the inner jet). Remarkably, the position angle of the outer jet is $\sim$327$\arcdeg$, i.e., deviating from that of the inner jet by $\sim$20$\arcdeg$. The outer jet is more extended on the southeast side, reaching an apparent extent of at least $\sim5\farcs5$ ($\sim$8.1 kpc) in the 3.0 GHz image, which is most sensitive to extended features. The S-band image shows that the far side is brightened and slightly bent eastward, perhaps due to deflection by some external pressure. On the northwestern side, the far end of the outer jet is defined by an arc-shaped feature located at $\sim$1\farcs8 from nucleus A, which is clearly seen in the 10.0 and 15.0 GHz images and is reminiscent of a jet-driven shock. That the `arc' and nucleus A are connected by weak emission is evident in the 3.0 and 6.0 GHz images. The markedly different radio morphology on the two sides of nucleus A might be due to an intrinsic difference in the galactic-scale environment or the relativistic Doppler dimming. We measure the integrated flux densities of the arc in individual bands adopting a two-dimensional Gaussian model, which are given in Table \ref{tab:nuclei} and plotted in Figure~\ref{fig:sed}. The best-fit spectral index of the arc is found to be $-0.99 \pm 0.05$, consistent with synchrotron radiation from shock-accelerated high-energy particles. The arc has no clear counterpart in the {\it HST} images. A visual examination of the {\it Chandra} image of \citet[][Fig.~6 therein]{Liu2019} (see also \citealp{Pfeifle2019a}) suggests that a few X-ray photons are detected at the position of the arc, but deeper {\it Chandra} exposures are necessary to confirm this marginal excess. We have also examined the optical spectra presented in \citet{Liu2019} for potential shock-induced spectral features at the position of the arc. No clear evidence of this kind is found. A two-sided linear feature through nucleus C is also evident in all four bands but most clearly seen in the 3.0 and 6.0 GHz images (Figure~\ref{fig:radio}). This linear feature has a length of $\sim$1.5\arcsec\ ($\sim$2.2 kpc) on both sides and a width of $\sim$0.6\arcsec\ ($\sim$0.9 kpc). On the western side, the linear feature ends at a prominent knot (hereafter called the `apex'), which may be the top of the bubble-like feature shown in Figure~\ref{fig:zoomin}b. Emission on the eastern side is substantially fainter than its western counterpart and shows no sign of a jet-inflated bubble. Again, this difference on the two sides of nucleus C might be due to an intrinsic difference in the environment. We find no significant optical or X-ray counterpart of this two-sided radio feature.
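All the angular-to-physical conversions quoted in this section follow from the adopted angular diameter distance of 306.0 Mpc (Section 1), for which $1''$ corresponds to $\sim$1.48 kpc. A short numerical check (illustrative only):
\begin{verbatim}
import numpy as np

D_A_kpc = 306.0e3                                    # angular diameter distance
kpc_per_arcsec = D_A_kpc * np.pi / (180.0 * 3600.0)  # ~1.48 kpc per arcsec
for theta in (0.6, 1.5, 1.8, 5.5):                   # extents quoted above, arcsec
    print("%.1f arcsec -> %.1f kpc" % (theta, theta * kpc_per_arcsec))
\end{verbatim}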
We measure the integrated flux densities of the apex in individual bands adopting a two-dimensional Gaussian model (Table \ref{tab:nuclei} and Figure \ref{fig:sed}). The best-fit spectral index is $-1.07 \pm 0.08$, again suggestive of a synchrotron origin. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{Index_1.eps} \caption{The radio spectral energy distributions of nucleus A (red open circle), nucleus C (blue open triangle), the Arc (orange filled circle) and the Apex (cyan filled triangle). The integrated flux densities of nuclei A and C at 9 GHz from \citet{Liu2019} are also plotted. 1-$\sigma$ errors are plotted, but are too small to be discerned for nucleus A and nucleus C. The dashed lines are the best-fit power laws, with the values of the spectral indices given in the inset. 3-$\sigma$ upper limits of nucleus B are also shown (black arrows). \label{fig:sed} } \end{figure} \section{Discussion and Conclusion} \subsection{Extended Jets from Nucleus A} The inner jet in nucleus A points along southeast-northwest at a position angle of $\sim 307\arcdeg$, extending to $\rm 0.6\arcsec$ ($\rm 0.9$ kpc). The inner counter-jet, which follows the same direction, can be resolved at 15.0 GHz. On scales beyond $\sim1\arcsec$ ($\rm 1.5$ kpc) from the peak of nucleus A, the outer jets on both sides deviate from the position angle of the inner jet by $\sim 20\arcdeg$. This indicates that the jet and the counter-jet have both turned by $\sim20\arcdeg$ along their paths. However, external pressure alone is unlikely to be the primary cause, since producing the same offset angle on both sides would be too much of a coincidence. We speculate that a more natural scenario is a spinning black hole powering episodic jets: an older jet is launched at one position angle, and by the time a younger jet emerges the jet axis has rotated, so that the younger jet no longer propagates along the position angle of the older one. While galaxy mergers are not a necessary condition for radio jet reorientation, it is plausible that the ongoing galaxy merger may have promoted more gas inflows that could affect both the accretion states of the active MBHs and the dynamical state of the circumnuclear medium. Indeed, theoretical studies demonstrate that the spin axis of a strongly accreting MBH can be significantly altered and ultimately aligned with the angular momentum of a gas inflow feeding the accretion disk over a timescale of $\lesssim$ Myr \citep{2018MNRAS.477.3807F, 2021MNRAS.500.3719C}. We estimate the minimum time gap between the old and young jets of nucleus A. Assuming an average jet speed of $0.1c$ ($c$ being the speed of light), the old jet takes $\sim \rm 1.5 \times 10^{5}$ yr to travel 4.7 kpc, while the young jet needs $\sim \rm 2.9 \times 10^{4}$ yr to cover the smaller distance of 0.9 kpc. Therefore, the minimum time gap is around $\rm 1.2 \times 10^{5}$ yr, compatible with the aforementioned timescale suggested by theoretical studies \citep{2018MNRAS.477.3807F, 2021MNRAS.500.3719C}. We note that a relatively strong magnetic field could explain the steep (negative) spectral index, since synchrotron cooling is rapid in a strong magnetic field. \subsection{Nature of the Western Lobe in Nucleus C} The 15 GHz (Ku-band) image clearly delineates the outline of the western lobe, which has a length of $\sim$1.5\arcsec\ ($\sim$2.2 kpc) and a width of $\sim$0.6\arcsec\ ($\sim$0.9 kpc). The Apex appears to be the hot spot of the western lobe.
Approximating the lobe as an ellipsoid with a circular cross-section and assuming that the line-of-sight is close to edge-on, the volume of the lobe is $\sim$ 0.8 $\rm kpc^{3}$. The integrated flux density in such a region is $\rm 0.81\pm 0.03$, $\rm 0.52\pm 0.04$ and $\rm 0.39\pm 0.02$ mJy at 6.0, 10.0 and 15.0 GHz, respectively, and at least 1.35 mJy at 3.0 GHz. The average radio spectral index of this region is $-0.78 \pm 0.02$. Adopting the classic energy equipartition assumption, the minimum energy density ($u_{\rm min}$; equation 25 in \citealp{2004IJMPD..13.1549G}) in the lobe is estimated to be $\sim \rm 2.3\times 10^{-9}\ erg\ cm^{-3}$, for an equipartition magnetic field strength of $B_{\rm eq}= (24\pi u_{\rm min}/7)^{\frac{1}{2}} \approx 160\ \mu$G. The total energy (i.e., the sum of magnetic and particle energies) of the lobe amounts to $\rm 5.0 \times 10^{55}$ erg, which is comparable to the work done during its expansion. Because the steep spectrum extends to 3.0 GHz, the synchrotron cooling timescale is at least $\rm 4\times 10^{5}$ yr based on the equipartition magnetic field strength. If the jet had a speed of 0.1c, it would need a travel time of $\rm 7\times 10^{4}$ yr. In fact, the lobe expansion could take at least a factor of 10 longer. Here, we adopt a minimum timescale of $\rm 7\times 10^{5}$ yr, and the jet power is then estimated to be $\rm \sim 4\times 10^{42}\ erg\ s^{-1}$. Such power is plausibly available from nucleus C, which is a Seyfert nucleus with an estimated black hole mass of $5 \times 10^{6}\rm~M_{\odot}$ \citep{Liu2019,2015ApJ...813...82R}. On the other hand, it is hard to identify any visible impact of the jet/lobe feedback on star formation over such a short timescale. The lobe associated with nucleus C is an interesting analog to the famous case of the Circinus galaxy. Exhibiting two extended edge-brightened radio lobes each with a size of $\sim 1.5$ kpc, the Circinus galaxy has an estimated jet power of $\rm \sim 10^{41}\ erg\ s^{-1}$ \citep{2012ApJ...758...95M}. This is about an order of magnitude lower than the value we find above for the lobe associated with nucleus C, as is the total energy involved, $\rm 2 \times 10^{55}\ erg$ \citep{2012ApJ...758...95M}. \subsection{Updated Radio Constraint on the Star Formation Rate in Nucleus B} The non-detection of nucleus B gives an upper limit of 15, 15, 15, and 18 $\rm \mu Jy\ beam^{-1}$ at 3, 6, 10, and 15 GHz, respectively. Assuming $\rm S_{\nu} \propto \nu^{-0.59}$, which is appropriate for star-forming galaxies \citep{2018A&A...611A..55K}, we extrapolate our 3 GHz measurement to 1.45 GHz, which results in $\rm 28\ \mu Jy\ beam^{-1}$. Using the $L_{\rm 1.4 GHz} - \rm SFR_{UV+TIR}$ correlation \citep{Davies2016,Davies2017}, the inferred $3\sigma$ upper limit on the star formation rate (SFR) is $\rm 0.4\ M_{\odot}\ yr^{-1}$. This is slightly more stringent than, but broadly consistent with, the $3\sigma$ SFR upper limit of $<0.8\rm~M_{\odot}\ yr^{-1}$ reported in \citet{Liu2019}. A flatter spectral index would lead to a lower SFR; thus the inferred limit is conservative. \citet{Liu2019} also found a dust-attenuation-corrected SFR of $\rm \sim 0.2 - 7\ M_{\odot}\ yr^{-1}$, derived from the {\it HST} U-band and continuum index. Based on near-infrared spectroscopy from the Large Binocular Telescope, \citet{Pfeifle2019a} obtained an SFR of $\rm \sim 0.48 \ M_{\odot} \ yr^{-1}$ for nucleus B. Both estimates are consistent with our radio limit.
In addition, based on the Kennicutt-Schmidt law \citep{2012ARA&A..50..531K}, our limit is also in agreement with the non-detection of molecular gas in nucleus B at arcsecond resolution, which corresponds to an upper limit of $\rm 79\ M_{\odot}\ pc^{-2}$ in molecular gas surface density (M. Hou et al. in preparation). Therefore, our VLA results reinforce the conclusion that star-forming activity alone is insufficient to account for the observed X-ray flux in nucleus B, and an additional heating source such as an AGN and/or shocks is required \citep{Liu2019,Pfeifle2019a}, although such additional power is not necessarily produced by the putative MBH in nucleus B.\\ In summary, we have presented new VLA observations of SDSS J0849+1114 at 3.0, 6.0, 10.0, and 15.0 GHz, which provide an unprecedented radio view of this triple AGN candidate. Two of the three nuclei, nuclei A and C, are detected for the first time at 3.0, 6.0, and 15.0 GHz. They both show a steep spectrum over 3--15 GHz, consistent with a synchrotron origin. The high-resolution images also reveal kpc-scale extended features related to both nuclei, which can be attributed to AGN-driven jets/outflows. Nucleus B remains undetected at all four frequencies, and further studies are warranted to unambiguously determine the nature of this nucleus. \acknowledgments S.P. and Z.L. acknowledge support by the National Key Research and Development Program of China (2017YFA0402703) and National Natural Science Foundation of China (grants 11873028, 11473010). X.L. acknowledges support from NSF grant AST-2108162. M.H. acknowledges support by the fellowship of China National Postdoctoral Program for Innovation Talents (grant BX2021016). We thank Dr. Binbin Zhang for his help with computing resources. This research was supported in part by the National Science Foundation under PHY-1748958. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program number GO-12363 (PI: X. Liu). \clearpage \begin{deluxetable*}{cccccccc} \tablecaption{Log of VLA Observations\label{tab:obsinfor}} \tablecolumns{8} \tablenum{1} \tablewidth{0pt} \tablehead{ \colhead{Band} & \colhead{Frequency} & \colhead{Bandwidth} & \colhead{Configuration} & \colhead{Time} &\colhead{Date} &\colhead{Synthesized Beam} &\colhead{RMS} \\ \colhead{} & \colhead{(GHz)} & \colhead{(GHz)} &\colhead{} &\colhead{(hr)} &\colhead{(2019)}& \colhead{$(\rm '',\ '',\ ^{o})$} & \colhead{($\rm \mu Jy~beam^{-1}$)} } \colnumbers \startdata S & 3.0 & 2.0 & A & 5.70 & Oct 4/5 & $\rm 0\farcs63 \times 0\farcs53, -22.4\arcdeg$ &5 \\ C & 6.0 & 4.0& A & 0.87 & Oct 7 & $\rm 0\farcs33 \times 0\farcs27, -33.6\arcdeg$ &5 \\ X & 10.0 & 4.0 & A & 1.82 & Aug 17/25 & $\rm 0\farcs27 \times 0\farcs18, -48.5\arcdeg$ & 7 \\ & & & B & 3.38 & Apr 16/25/26/26 & $0\farcs66\times 0\farcs56, -24.2\arcdeg$ & 5 \\ Ku & 15.0 & 6.0 & A & 1.52 & Oct 11 & $\rm 0\farcs21 \times 0\farcs12, -45.6\arcdeg$ & 6 \\ \enddata \tablecomments{(1) Observational band. (2)-(3) Central frequency and bandwidth. (4) VLA array configuration.
The longest and shortest baselines of the A-configuration are 36 and 0.68 kilometers, while those of the B-configuration are 11 and 0.21 kilometers. (5) The on-source integration time. (6) Date of observation, in the year 2019. (7) Synthesized beam, including the FWHM of the major and minor axes and the position angle. (8) The image RMS level.} \end{deluxetable*} \begin{deluxetable*}{cccc} \tablecaption{10.0 GHz (X-band) Peak Flux Density in Different Epochs \label{tab:X-band}} \tablecolumns{4} \tablenum{2} \tablewidth{5pt} \tablehead{ \colhead{Date} &\colhead{Configuration} & \colhead{Nucleus A} & \colhead{Nucleus C} \\ \colhead{(2019)} &\colhead{} & \colhead{($\rm mJy~beam^{-1}$)} & \colhead{($\rm mJy~beam^{-1}$)} } \colnumbers \startdata April 16 & B & $\rm 5.77 \pm 0.18$ & $\rm 0.53 \pm 0.03$ \\ April 25 & B & $\rm 5.32 \pm 0.16$ & $\rm 0.47 \pm 0.03$ \\ April 25 & B & $\rm 5.69 \pm 0.18$ & $\rm 0.52 \pm 0.02$ \\ April 26 & B & $\rm 6.03 \pm 0.19$ & $\rm 0.55 \pm 0.02$ \\ Aug 17 & A & $\rm 4.03 \pm 0.13$ & $\rm 0.44 \pm 0.03$ \\ Aug 18 & A & $\rm 4.00 \pm 0.13$ & $\rm 0.46 \pm 0.03$ \enddata \tablecomments{ (1) The date of the epoch, in 2019. (2) Array configuration. (3) Peak flux density of nucleus A. (4) Peak flux density of nucleus C. The errors are at 1-$\sigma$ level including a 3\% relative uncertainty in the flux calibration added in quadrature.} \end{deluxetable*} \begin{deluxetable*}{c|cc|cc|cc|cc} \tablecaption{Multi-band flux density \label{tab:nuclei}} \tablecolumns{9} \tablenum{3} \tablewidth{0pt} \tablehead{ \colhead{Name} & \multicolumn{2}{c}{3.0 GHz (S-band)} & \multicolumn{2}{c}{6.0 GHz (C-band)} & \multicolumn{2}{c}{10.0 GHz (X-band)} & \multicolumn{2}{c}{15.0 GHz (Ku-band)}\\ \colhead{} & \colhead{Peak} & \colhead{Integrated} &\colhead{Peak} & \colhead{Integrated} &\colhead{Peak} & \colhead{Integrated} &\colhead{Peak} & \colhead{Integrated} \\ \colhead{} &\colhead{($\rm mJy~beam^{-1}$)} &\colhead{($\rm mJy$)}& \colhead{($\rm mJy~beam^{-1}$)} &\colhead{($\rm mJy$)} & \colhead{($\rm mJy~beam^{-1}$)} &\colhead{($\rm mJy$)} & \colhead{($\rm mJy~beam^{-1}$)} &\colhead{($\rm mJy$)} } \colnumbers \startdata Nucleus A & $\rm 15.22 \pm 0.47$ & $\rm 17.75 \pm 0.56$ & $\rm 7.21 \pm 0.22$ & $\rm 10.28 \pm 0.32$ & $\rm 4.08 \pm 0.13$ & $\rm 6.26 \pm 0.21 $ & $\rm 2.33 \pm 0.08$ & $\rm 4.21 \pm 0.17$ \\ Nucleus B & $< $ 0.018 & - & $< $ 0.015 & - & $< $ 0.021 & - & $< $ 0.018 & -\\ Nucleus C & $\rm 0.97 \pm 0.04$ & $\rm 1.62 \pm 0.09 $ & $\rm 0.55 \pm 0.03$ & $\rm 0.83 \pm 0.07 $ & $\rm 0.45 \pm 0.02 $ & $\rm 0.45 \pm 0.04$ & $\rm 0.35 \pm 0.01$ & $\rm 0.32\pm 0.02$ \\ Arc& $\rm 0.52 \pm 0.04$ & $\rm 0.98 \pm 0.12$ & $\rm 0.18 \pm 0.02$ & $\rm 0.48 \pm 0.05 $ & $\rm 0.10 \pm 0.01 $ & $\rm 0.28 \pm 0.04$ & $\rm 0.055 \pm 0.008$ & $\rm 0.207 \pm 0.039$ \\ Apex & $\rm 0.47 \pm 0.02$ & $\rm 0.84 \pm 0.06$ & $\rm 0.15\pm 0.01$ & $\rm 0.34 \pm 0.04 $ & $\rm 0.08 \pm 0.01 $ & $\rm 0.22 \pm 0.03$ & $\rm 0.046 \pm 0.007$ & $\rm 0.153 \pm 0.031$ \\ \enddata \tablecomments{(1) Object name. (2), (4), (6), and (8) Peak flux density at the four central frequencies. The errors are at 1-$\sigma$ level including a 3\% relative uncertainty in the flux calibration added in quadrature. Upper limits for nucleus B are of 3-$\sigma$. (3), (5), (7), and (9) Integrated flux density at the four central frequencies.} \end{deluxetable*} \facilities{HST (WFC3), VLA.} \clearpage
\section{Introduction} The primary task of a communication network architect is to provision as well as utilize network resources efficiently to satisfy the demands imposed on it. The main algorithmic problem is that of allocating or scheduling resources among various entities or data units, e.g. packets, flows, that are contending to access them. In recent years, the question of designing a simple, myopic, distributed and high-performance (aka stable) scheduling algorithm has received considerable interest in the context of emerging communication network models. Two such models that we consider in this paper are that of a wireless network and a buffered circuit switched network. The wireless network consists of nodes capable of wireless transmission. Each node receives exogenous demand in the form of packets. These nodes communicate these packets through a shared wireless medium. Hence their simultaneous transmissions may contend with each other. The purpose of a scheduling algorithm is to resolve these contentions among transmitting nodes so as to utilize the wireless network bandwidth efficiently while keeping the queues at nodes finite. Naturally, the desired scheduling algorithm should be distributed, simple/low-complexity and myopic (i.e. utilize only the network state information like queue-sizes). The buffered circuit switched network can be utilized to model the dynamics of flows or calls in the optical core of a future Internet. Here, a link-capacitated network is given with a collection of end-to-end routes. At the ingress (i.e. input or entry point) of each route, calls arriving as per an exogenous process are buffered or queued. Each such call desires resources on each link of its route for a random duration. Due to link capacity constraints, calls of routes sharing links contend for resources. And, a scheduling algorithm is required to resolve this contention so as to utilize the network links efficiently while keeping buffers or queues at the ingress of routes finite. Again, the scheduling algorithm is desired to be distributed, simple and myopic. An important scheduling algorithm is the maximum weight policy that was proposed by Tassiulas and Ephremides \cite{TE92}. It was proposed in the context of a packet queueing network model with generic scheduling constraints. It is primarily applicable in a scenario where scheduling decisions are synchronized or made at every discrete time step. It suggests scheduling queues, subject to constraints, that have the maximum net weight at each time step, with the weight of a queue being its queue-size. They established the throughput optimality of this algorithm for this general class of networks. Further, this algorithm, as the description suggests, is myopic. Due to its general applicability and myopic nature, this algorithm and its variants have received a lot of attention in recent years, e.g. \cite{MAW,DB,stolyar,SW06,dailin,SW07}. The maximum weight algorithm provides a myopic and stable scheduling algorithm for the wireless network model. However, it requires solving a combinatorial optimization problem, the maximum weight independent set problem, to come up with a schedule every time. And the problem of finding a maximum weight independent set is known to be NP-hard as well as hard to approximate in general \cite{IS}. To address this concern, a long line of research has been conducted to devise implementable approximations of the maximum weight scheduling algorithm, e.g. \cite{islip,tassiulas98,G-P-S,DW04,MSZ06}.
A comprehensive survey of such maximum-weight-inspired and other algorithmic approaches that have been studied over more than four decades in the context of wireless networks can be found in \cite{RSS09,LSSW}. In the context of the buffered circuit switched network, calls have random service requirements, so scheduling decisions cannot be synchronized. Therefore, the maximum weight scheduling algorithm is not applicable. To the best of our knowledge, no myopic and stable algorithm is known for this network model. \subsection{Contributions} We propose a scheduling algorithm for both the wireless and the buffered circuit switched network models. The algorithm utilizes only local, queue-size information to make scheduling decisions. That is, the algorithm is myopic and distributed. It requires each queue (or node) in the network to perform only a few (literally, a constant number of) logical operations per scheduling decision. We establish that it is throughput optimal. That is, the network Markov process is positive Harris recurrent as long as the network is under-loaded (or not overloaded). Philosophically, our algorithm design is motivated by a certain product-form distribution that can be characterized as the stationary distribution of a simple and distributed Markovian dynamics over the space of schedules. For the wireless network, it corresponds to the known Glauber dynamics (cf. \cite{Glauber}) over the space of independent sets of the wireless network interference graph; for the buffered circuit switched network, it corresponds to the known stochastic loss network (cf. \cite{Kelly}). To establish the stability property of the algorithm, we exhibit an appropriate Lyapunov function. This, along with standard machinery based on identifying an appropriate `petite set', leads to the positive Harris recurrence property of the network Markov process. Technically, this is the most challenging part of our result. It requires proving an effective `time scale separation' between the network queueing dynamics and the scheduling dynamics induced by the algorithm. To make this possible, we use an appropriately slowly increasing function ($\log\log (\cdot + e)$) of queue-size as weight in the scheduling algorithm. Subsequently, the time scale separation follows by studying the mixing property of a specific time varying Markov chain over the space of schedules. We note that the use of a Lyapunov function for establishing stability is by now somewhat classical (for example, see \cite{TE92, stolyar, SW06}). Usually the difficulty lies in finding an appropriate candidate function and then establishing that it is indeed a ``Lyapunov'' function. \subsection{Organization} We start by describing the two network models, the wireless network and the buffered circuit switched network, in Section \ref{sec:model}. We formally introduce the problem of scheduling and the performance metric for scheduling algorithms. The maximum weight scheduling algorithm is described as well. Our randomized algorithm and its throughput optimality for both network models are presented in Section \ref{sec:main}. The paper beyond Section \ref{sec:main} is dedicated to establishing the throughput optimality. Necessary technical preliminaries are presented in Section \ref{sec:prelim}. Here we relate our algorithm for both models with appropriate reversible Markov chains on the space of schedules and state useful properties of these Markov chains.
We also describe known facts about positive Harris recurrence, as well as the known Lyapunov drift criteria used to establish it. Detailed proofs of our main results are presented in Section \ref{sec:mainproof}. \section{Setup}\label{sec:model} \subsection{Wireless Network} We consider a {\em single-hop} wireless network of $n$ queues. Queues receive work as per exogenous arrivals and work leaves the system upon receiving service. Specifically, let $Q_i(t) \in \mathbb{R}_+ = \{ x \in \mathbb{R}: x\geq 0\}$ denote the amount of work in the $i$th queue at time $t\in \mathbb{R}_+$ and $\mathbf{Q}(t)=[Q_i(t)]_{1{\le}i{\le}n}$; initially $t=0$ and $\mathbf{Q}(0) = \mathbf{0}$\footnote{Bold letters are reserved for vectors; $\mathbf{0}, \mathbf{1}$ represent vectors of all $0$s \& all $1$s respectively.}. Work arrives to each queue in the form of unit-sized packets as per a discrete-time process. Let $A_i(s,t)$ denote the amount of work arriving to queue $i$ in time interval $[s,t]$ for $0\leq s < t$. For simplicity, assume that for each $i$, $A_i(\cdot)$ is an independent Bernoulli process with parameter $\lambda_i$, where $A_i(\tau) \stackrel{\triangle}{=} A_i(0,\tau)$. That is, $A_i(\tau+1)-A_i(\tau) \in \{0,1\}$ and $\Pr(A_i(\tau+1)-A_i(\tau) = 1) = \lambda_i$ for all $i$ and $\tau\in \mathbb{Z}_+ = \{ k \in \mathbb{Z} : k \geq 0\}$. Denote the arrival rate vector as $\boldsymbol{\lambda} = [\lambda_i]_{1\le i\le n}$. We assume that arrivals happen at the end of a time slot. The work from queues is served at unit rate, but subject to {\em interference} constraints. Specifically, let $G = (V,E)$ denote the interference graph between the $n$ queues, represented by vertices $V = \{1,\dots, n\}$ and edges $E$: an $(i,j) \in E$ implies that queues $i$ and $j$ cannot transmit simultaneously since their transmissions {\em interfere} with each other. Formally, let $\sigma_i(t) \in \{0,1\}$ denote whether the queue $i$ is transmitting at time $t$, i.e. whether work in queue $i$ is being served at unit rate at time $t$, and let $\boldsymbol{\sigma}(t) = [\sigma_i(t)]$. Then, it must be that for $t \in \mathbb{R}_+$, $$\boldsymbol{\sigma}(t) \in \mc{I}(G) \stackrel{\Delta}{=} \{ \boldsymbol{\rho} = [\rho_i] \in \{0,1\}^n : \rho_i + \rho_j \le 1\text{ for all }(i,j) \in E \}.$$ The total amount of work served at queue $i$ in time interval $[s,t]$ is $$D_i(s,t) = \int_{s}^t \sigma_i(y) \bold{I}_{\{Q_i(y) > 0\}} dy,$$ where $\bold{I}_{\{x\}}$ denotes the indicator function. In summary, the above model induces the following queueing dynamics: for any $0 \leq s < t$ and $1\leq i\leq n$, $$ Q_i(t) = Q_i(s) - \int_{s}^t \sigma_i(y) \bold{I}_{\{Q_i(y) > 0\}} dy + A_i(s,t). $$ \subsection{Buffered Circuit Switched Network} We consider a buffered circuit switched network. Here the network is represented by a capacitated graph $G=(V,E)$ with $V$ being vertices, $E \subset V \times V$ being links (or edges) with each link $e \in E$ having a finite integral capacity $C_e \in \mathbb{N}$. This network is accessed by a fixed set of $n$ routes $R_1, \dots, R_n$; each route is a collection of interconnected links. At each route $R_i$, flows arrive as per an exogenous arrival process. For simplicity, we assume it to be an independent Poisson process of rate $\lambda_i$ and let $A_i(s,t)$ denote the total number of flow arrivals to route $R_i$ in time interval $[s,t]$. Upon arrival of a flow to route $R_i$, it joins the queue or buffer at the ingress of $R_i$.
Let $Q_i(t)$ denote the number of flows in this queue at time $t$; initially $t=0$ and $Q_i(0)=0$. Each flow arriving to $R_i$ comes with the service requirement of using unit capacity simultaneously on all the links of $R_i$ for a random duration, assumed to be independently distributed as per an Exponential distribution of unit mean. Now a flow in the queue of route $R_i$ can get simultaneous possession of links along route $R_i$ in the network at time $t$, if there is unit capacity available at {\em all} of these links. To this end, let $z_i(t)$ denote the number of flows that are {\em active} along route $R_i$, i.e. possess links along the route $R_i$. Then, by capacity constraints on the links of the network, it must be that $\boldsymbol{z}(t) = [z_i(t)]$ satisfies $$ \boldsymbol{z}(t)\in \X \stackrel{\Delta}{=}\{\boldsymbol{z}=[z_i]\in \mathbb{Z}_+^n : \sum_{i:e\in R_i} z_i \leq C_e, ~~\forall~ e\in E\}.$$ This represents the scheduling constraints of the circuit switched network model, similar to the interference constraints of the wireless network model. Finally, a flow active on route $R_i$ departs the network after the completion of its service requirement and frees unit capacity on the links of $R_i$. Let $D_i(s,t)$ denote the number of flows which are served (hence leave the system) in time interval $[s,t]$. \subsection{Scheduling Algorithm \& Performance Metric} In both models described above, scheduling is the key operational question. In the wireless network, queues need to decide which of them transmit, subject to interference constraints. In the circuit switched network, queues need to agree on which flows become active, subject to network capacity constraints. And, a {\em scheduling algorithm} is required to make these decisions every time. In the wireless network, the scheduling algorithm decides the schedule $\boldsymbol{\sigma}(t) \in \mc{I}(G)$ at each time $t$. We are interested in {\em distributed} scheduling algorithms, i.e. queue $i$ decides $\sigma_i(t)$ using its local information, such as its queue-size $Q_i(t)$. We assume that queues have instantaneous {\em carrier sensing} information, i.e. if a queue (or node) $j$ starts transmitting at time $t$, then all neighboring queues can {\em listen} to this transmission immediately. In the buffered circuit switched network, the scheduling algorithm decides active flows or schedules $\mb{z}(t)$ at time $t$. Again, our interest is in {\em distributed} scheduling algorithms, i.e. the queue at the ingress of route $R_i$ decides $z_i(t)$ using its local information. Each queue (or route) can obtain instantaneous information on whether all links along its route have unit capacity available or not. In summary, both models need scheduling algorithms to decide when each queue (or its ingress port) will request the network for availability of resources; upon a positive answer (or successful request) from the network, the queue acquires network resources for a certain amount of time. And these algorithms need to be based on local information. From the perspective of network performance, we would like the scheduling algorithm to be such that the queues in the network remain as small as possible for the largest possible range of arrival rate vectors. To formalize this notion of performance, we define the capacity regions for both of these models.
To formalize this notion of performance, we define the capacity regions of both models. Let $\boldsymbol{\Lambda}_{w}$ be the capacity region of the wireless network model, defined as \begin{eqnarray} \boldsymbol{\Lambda}_{w} & = & \Conv(\mc{I}(G))\nonumber\\ &=&\left\{ \boldsymbol{y} \in \mathbb{R}_+^n : \boldsymbol{y} \leq \sum_{\boldsymbol{\sigma} \in \mc{I}(G)} \alpha_{\boldsymbol{\sigma}} \boldsymbol{\sigma}, ~\mbox{with}~\alpha_{\boldsymbol{\sigma}} \geq 0,~\mbox{and}~\sum_{\boldsymbol{\sigma} \in \mc{I}(G)} \alpha_{\boldsymbol{\sigma}} \leq 1 \right\}. \label{eq:c1} \end{eqnarray} And let $\boldsymbol{\Lambda}_{cs}$ be the capacity region of the buffered circuit switched network, defined as \begin{eqnarray} \boldsymbol{\Lambda}_{cs} & = & \Conv(\X)\nonumber\\ &=&\left\{ \boldsymbol{y} \in \mathbb{R}_+^n : \boldsymbol{y} \leq \sum_{\boldsymbol{z} \in \X} \alpha_{\boldsymbol{z}} \boldsymbol{z}, ~\mbox{with}~\alpha_{\boldsymbol{z}} \geq 0,~\mbox{and}~\sum_{\boldsymbol{z} \in \X} \alpha_{\boldsymbol{z}} \leq 1 \right\}. \label{eq:c2} \end{eqnarray} Intuitively, these capacity regions come from the fact that any algorithm produces a `service rate' from $\mc{I}(G)$ (or $\X$) at each time, and hence the time average of the service rate induced by any algorithm must belong to the corresponding convex hull. Therefore, if an arrival rate vector $\boldsymbol{\lambda}$ can be `served well' by some algorithm, then it must belong to $\Conv(\mc{I}(G))$ (or $\Conv(\X)$). Motivated by this, we call an arrival rate vector $\boldsymbol{\lambda}$ admissible if $\boldsymbol{\lambda} \in \boldsymbol{\Lambda}$, and say that an arrival rate vector $\boldsymbol{\lambda}$ is strictly admissible if $\boldsymbol{\lambda} \in {\boldsymbol{\Lambda}}^o$, where $\boldsymbol{\Lambda}^o$ is the interior of $\boldsymbol{\Lambda}$, formally defined as $$ \boldsymbol{\Lambda}^o = \left\{ \boldsymbol{\lambda} \in \mathbb{R}^n_{+} : \boldsymbol{\lambda} < \boldsymbol{\lambda}^* \text{ componentwise, for some }\boldsymbol{\lambda}^* \in \boldsymbol{\Lambda} \right\}. $$ Equivalently, we may say that the network is \emph{under-loaded}. Now we are ready to define a performance metric for a scheduling algorithm. Specifically, we desire the scheduling algorithm to be throughput optimal as defined below. \begin{definition}[throughput optimal] {\em A scheduling algorithm is called {throughput optimal}, or {stable}, or providing {100\% throughput}, if for any $\boldsymbol{\lambda} \in \boldsymbol{\Lambda}^o$ the (appropriately defined) underlying network Markov process is \emph{positive (Harris) recurrent}}. \end{definition} \subsection{The {\textsf{MW}} Algorithm} Here we describe a popular algorithm, known as the maximum weight or, in short, \textsf{MW}~algorithm, which was proposed by Tassiulas and Ephremides \cite{TE92}. It is throughput optimal for a large class of network models. The algorithm readily applies to the wireless network model. However, neither it nor any of its variants applies to the buffered circuit switched network. Further, this algorithm requires solving a hard combinatorial problem in each time slot, e.g. the maximum weight independent set problem for the wireless network, which is NP-hard in general. Therefore, it is far from being practically useful. In a nutshell, the randomized algorithm proposed in this paper will overcome these drawbacks of the \textsf{MW}~algorithm while retaining the throughput optimality property.
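To make the notion of (strict) admissibility concrete, the following sketch tests membership $\boldsymbol{\lambda} \in \Conv(\mc{I}(G))$ for the toy $3$-queue instance above by solving a small feasibility linear program; the use of \texttt{scipy.optimize.linprog} is an assumption of this illustration only, not part of the model or the algorithms.
\begin{verbatim}
# Test lambda in Conv(I(G)) for the 3-node path graph via a feasibility LP.
from itertools import product
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2)]
schedules = [s for s in product([0, 1], repeat=3)
             if all(s[i] + s[j] <= 1 for (i, j) in edges)]
S = np.array(schedules, dtype=float).T       # columns are the schedules

def admissible(lam):
    # Feasibility: exists alpha >= 0 with sum(alpha) <= 1, S @ alpha >= lam.
    m = S.shape[1]
    A_ub = np.vstack([-S, np.ones((1, m))])  # -S a <= -lam and 1'a <= 1
    b_ub = np.concatenate([-np.asarray(lam, dtype=float), [1.0]])
    res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return res.status == 0                   # status 0: a feasible point found

print(admissible([0.4, 0.4, 0.4]))  # True: mix (1,0,1) and (0,1,0)
print(admissible([0.7, 0.7, 0.7]))  # False: outside the capacity region
\end{verbatim}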
For completeness, next we provide a brief description of the \textsf{MW}~algorithm. In the wireless network model, the \textsf{MW}~algorithm chooses a schedule $\boldsymbol{\sigma}(\tau) \in \mc{I}(G)$ every time step $\tau \in \mathbb{Z}_+$ as follows\footnote{Here and everywhere else, we use the notation $\mb{u} \cdot \mb{v} = \sum_{i=1}^d u_i v_i$ for any $d$-dimensional vectors $\mb{u}, \mb{v} \in \mathbb{R}^d$. That is, $\mathbf{Q}(\tau)\cdot \boldsymbol{\rho} =\sum_i Q_i(\tau)\cdot\rho_i$.}: $$ \boldsymbol{\sigma}(\tau) \in \arg\max_{\boldsymbol{\rho} \in \mc{I}(G)} \mathbf{Q}(\tau)\cdot \boldsymbol{\rho}. $$ In other words, the algorithm changes its decision once per unit time, utilizing the information $\mathbf{Q}(\tau)$. The maximum weight property allows one to establish positive recurrence by means of the Lyapunov drift criteria (see Lemma \ref{lem:two}) when the arrival rate vector is strictly admissible, i.e. $\boldsymbol{\lambda} \in \boldsymbol{\Lambda}^o_{w}$. However, as indicated above, picking such a schedule at every time step is computationally burdensome. A natural generalization, called the \textsf{MW}-$f$ algorithm, which uses weight $f(Q_i(\cdot))$ instead of $Q_i(\cdot)$ for an increasing non-negative function $f$, also leads to throughput optimality (cf. \cite{stolyar, SW06, SW07}). For the buffered circuit switched network model, the \textsf{MW}~algorithm is not applicable. To understand this, consider the following. The \textsf{MW}~algorithm would require the network to schedule active flows as $\boldsymbol{z}(\tau)\in\X$ where $$ \boldsymbol{z}(\tau) \in \arg\max_{\boldsymbol{z} \in \X} \mathbf{Q}(\tau)\cdot\boldsymbol{z}.$$ This would require the algorithm to possibly preempt some of the active flows before the completion of their service requirement, which is not allowed in this model. \section{Main Result: Simple \& Efficient Randomized Algorithms}\label{sec:main} As stated above, the \textsf{MW}~algorithm is not practical for the wireless network and is not applicable to the circuit switched network. However, it has the desirable throughput optimality property. As the main result of this paper, we provide a simple randomized algorithm that is applicable to both the wireless and the circuit switched network and is throughput optimal. The algorithm requires each node (or queue) to perform only a few logical operations at each time step, is distributed, and effectively `simulates' the \textsf{MW}-$f$ algorithm for an appropriate choice of $f$. In that sense, it is a simple, randomized, distributed implementation of the \textsf{MW}~algorithm. In what follows, we describe the algorithms for the wireless network and the buffered circuit switched network, respectively, and state their throughput optimality properties. While these algorithms seem different, philosophically they are very similar -- as witnessed by the commonality in their proofs. \subsection{Algorithm for Wireless Network}\label{ssec:algo1} Let $t \in \mathbb{R}_+$ denote the time index and $\boldsymbol{W}(t) = [W_i(t)] \in \mathbb{R}_+^n$ be the vector of weights at the $n$ queues. The weights $\boldsymbol{W}(t)$ will be a function of $\mathbf{Q}(t)$, to be determined later. In a nutshell, the algorithm described below will choose a schedule $\boldsymbol{\sigma}(t) \in \mc{I}(G)$ so that the weight, $\boldsymbol{W}(t)\cdot \boldsymbol{\sigma}(t)$, is as large as possible. The algorithm is randomized and asynchronous. Each node (or queue) has an independent Exponential clock of rate $1$ (i.e. a Poisson process of rate $1$). Let the $k$th tick of the clock of node $i$ happen at time $T_k^i$; $T^i_0 = 0$ for all $i$.
By definition, $T_{k+1}^i - T_k^i$, $k \geq 0$, are i.i.d. mean $1$ Exponential random variables. Each node changes its scheduling decision only at its clock ticks. That is, for node $i$, $\sigma_i(t)$ remains constant for $t \in (T^i_k, T^i_{k+1}]$. Clearly, with probability $1$ no two clock ticks across nodes happen at the same time. Initially, we assume that $\sigma_i(0) = 0$ for all $i$. At its $k$th clock tick, $t = T^i_k$, node $i$ {\em listens} to the medium and does the following: \begin{itemize} \item[$\circ$] If any neighbor of $i$ is transmitting, i.e. $\sigma_j(t) = 1$ for some $j\in \mathcal{N}(i)=\{j':(i,j')\in E\}$, then set $\sigma_i(t^+) = 0$. \item[] ~~ \item[$\circ$] Else, set $$ \sigma_i(t^+) = \begin{cases} 1 & \quad \text{with probability} \quad \frac{\exp(W_i(t))}{1+\exp(W_i(t))} \\ 0 & \quad\text{otherwise.} \end{cases} $$ \end{itemize} Here, we assume that if $\sigma_i(t) = 1$, then node $i$ will always transmit data irrespective of the value of $Q_i(t)$, so that the neighbors of node $i$ can infer $\sigma_i(t)$ by {\em listening} to the medium. \subsubsection{Throughput Optimality} The algorithm described above for the wireless network is throughput optimal for an appropriate choice of weight $\boldsymbol{W}(t)$. Define the weight $W_i(t)$ at node $i$ in the algorithm for the wireless network as \begin{equation} W_i(t) = \max\left\{f(Q_i(\lfloor t\rfloor)),\sqrt{f(Q_{\max}(\lfloor t\rfloor))}\right\}, \label{eq:weight1} \end{equation} where\footnote{Unless stated otherwise, here and everywhere else $\log(\cdot)$ is the natural logarithm, i.e. base $e$.} $f(x) = \log \log (x+e)$ and $Q_{\max}(\cdot) = \max_{i} Q_i(\cdot)$. The non-local information $Q_{\max}(\lfloor t\rfloor)$ can be replaced by an approximate estimate that can be computed through a very simple distributed algorithm. This does not alter the throughput optimality property of the algorithm. A discussion is provided in Section \ref{sec:discuss}. We state the following property of the algorithm. \begin{theorem}\label{thm:main1} Suppose the algorithm of Section \ref{ssec:algo1} uses the weight as per \eqref{eq:weight1}. Then, for any $\boldsymbol{\lambda} \in \boldsymbol{\Lambda}_w^o$, the network Markov process is positive Harris recurrent. \end{theorem} In this paper, Theorem \ref{thm:main1} (as well as Theorem \ref{thm:main2}) is established for the choice of $f(x) = \log\log (x + e)$. However, the proof technique of this paper extends naturally to any choice of $f : \mathbb{R}_+ \to \mathbb{R}_+$ that satisfies the following conditions: $f(0) = 0$, $f$ is monotonically strictly increasing, $\lim_{x\to\infty} f(x) = \infty$ and \[ \lim_{x\to\infty} \exp\Bigl(f(x)\Bigr)~f'\Bigl(f^{-1}(\delta f(x))\Bigr) = 0, \quad \text{for any} \quad \delta \in (0,1). \] Examples of such functions include: $f(x) = \varepsilon(x) \log (x+1)$, where $\varepsilon(0)=1$ and $\varepsilon(x)$ decreases monotonically to $0$ as $x \to \infty$; $f(x) = \sqrt{\log (x + 1)}$; $f(x) = \log \log \log (x + e^e)$, etc.
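As a quick sanity check of this condition for the choice $f(x) = \log\log(x+e)$, note that $f'(y) = 1/((y+e)\log(y+e))$ and $f^{-1}(w) = \exp(\exp(w)) - e$. The following minimal numerical sketch, purely illustrative, shows the quantity in the limit condition decaying toward $0$.
\begin{verbatim}
# Numerically probe  exp(f(x)) * f'(f^{-1}(delta * f(x))) -> 0
# for f(x) = log log (x + e), f'(y) = 1/((y+e) log(y+e)),
# and f^{-1}(w) = exp(exp(w)) - e.
import math

def f(x):      return math.log(math.log(x + math.e))
def fprime(y): return 1.0 / ((y + math.e) * math.log(y + math.e))
def finv(w):   return math.exp(math.exp(w)) - math.e

delta = 0.5
for x in [1e2, 1e4, 1e8, 1e16]:
    val = math.exp(f(x)) * fprime(finv(delta * f(x)))
    print(f"x = {x:.0e}   value = {val:.3e}")   # decays toward 0
\end{verbatim}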
\subsection{Algorithm for Buffered Circuit Switched Network}\label{ssec:algo2} In a buffered circuit switched network, the scheduling algorithm decides when each ingress node (or queue) should request the network for availability of resources (links) along its route; upon a positive response from the network, it acquires the resources. Our algorithm to make such a decision at each node is described as follows: \begin{itemize} \item[$\circ$] Each ingress node of a route, say $R_i$, generates requests as per a time-varying Poisson process whose rate at time $t$ is equal to $\exp(W_i(t))$. \item[$\circ$] If a request generated by the ingress node of route $R_i$ is accepted, a flow from the head of its queue leaves the queue and acquires the resources in the network. Else, it does nothing. \end{itemize} In the above, as in the algorithm for the wireless network, we assume that if the request of ingress node $i$ is accepted, a new flow will acquire resources in the network along its route. This is irrespective of whether the queue is empty or not -- if the queue is empty, a {\em dummy} flow is generated. This is merely for technical reasons. \subsubsection{Throughput Optimality} We describe a specific choice of weight $\boldsymbol{W}(t)$ for which the algorithm for the circuit switched network described above is throughput optimal. Specifically, for route $R_i$ its weight at time $t$ is defined as \begin{equation} W_i(t) = \max\left\{f(Q_i(\lfloor t\rfloor)),\sqrt{f({Q}_{\max}(\lfloor t\rfloor))} \right\}, \label{eq:weight2} \end{equation} where $f(x) = \log \log (x+e)$. The remark about distributed estimation of $Q_{\max}(\lfloor t\rfloor)$ after \eqref{eq:weight1} applies here as well. We state the following property of the algorithm. \begin{theorem}\label{thm:main2} Suppose the algorithm of Section \ref{ssec:algo2} uses the weight as per \eqref{eq:weight2}. Then, for any $\boldsymbol{\lambda} \in \boldsymbol{\Lambda}_{cs}^o$, the network Markov process is positive Harris recurrent. \end{theorem} \section{Technical Preliminaries}\label{sec:prelim} \subsection{Finite State Markov Chain}\label{ssec:fsmc} Consider a discrete-time, time-homogeneous Markov chain over a finite state space $\Omega$. Let its probability transition matrix be $P =[P_{ij}]\in \mathbb{R}_+^{|\Omega| \times |\Omega|}$. If $P$ is irreducible and aperiodic, then the Markov chain is known to have a unique stationary distribution $\pi = [\pi_i] \in \mathbb{R}_+^{|\Omega|}$ and it is ergodic, i.e. $$\lim_{\tau \to\infty} P^\tau_{ji} = \pi_i, \qquad \text{for any} \qquad i,j \in \Omega. $$ The adjoint of $P$, also known as the time-reversal of $P$ and denoted by $P^*$, is defined as follows: \begin{eqnarray} \pi_iP^*_{ij} & = & \pi_j P_{ji}, \qquad \text{for any}\qquad i, j \in \Omega.\label{eq:db} \end{eqnarray} By definition, $P^*$ has $\pi$ as its stationary distribution as well. If $P = P^*$ then $P$ is called \emph{reversible} or {\em time reversible}. Similar notions can be defined for a continuous time Markov process over $\Omega$. To this end, let $P(s,t) = [P_{ij}(s,t)] \in \mathbb{R}_+^{|\Omega| \times |\Omega|}$ denote its transition matrix over the time interval $[s,t]$. The Markov process is called time-homogeneous if $P(s,t)$ is stationary, i.e. $P(s,t)=P(0,t-s)$ for all $0\leq s < t$, and is called reversible if $P(s,t)$ is reversible for all $0\leq s < t$. Further, if $P(0,t)$ is irreducible and aperiodic for all $t>0$, then this time-homogeneous reversible Markov process has a unique stationary distribution $\pi$ and it is ergodic, i.e. $$\lim_{t \to\infty} P_{ji}(0,t) = \pi_i, \qquad \text{for any} \qquad i,j \in \Omega. $$
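To illustrate these definitions, the following minimal sketch computes the stationary distribution and the time-reversal $P^*$ of a small toy chain; the particular three-state chain is an arbitrary illustrative choice of ours.
\begin{verbatim}
# Stationary distribution pi and time-reversal P* of a toy 3-state chain.
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])   # irreducible, aperiodic birth-death chain

# pi solves pi P = pi: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Adjoint (time-reversal): pi_i P*_ij = pi_j P_ji.
P_star = (P.T * pi) / pi[:, None]

print(pi)                     # [0.25, 0.5, 0.25]
print(np.allclose(P, P_star)) # True: this chain is reversible, P = P*
\end{verbatim}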
\subsection{Mixing Time of Markov Chain} Given an ergodic finite state Markov chain, the distribution at time $\tau$ converges to the stationary distribution starting from any initial condition, as described above. We will need quantitative bounds on the time it takes for such a chain to come ``close'' to its stationary distribution. This time to reach stationarity is known as the {\em mixing time} of the Markov chain. Here we introduce the necessary preliminaries related to this notion; we refer the interested reader to the survey papers \cite{LW, MT06}. We start with the definition of distances between probability distributions. \begin{definition}[Distance of measures] Given two probability distributions $\nu$ and $\mu$ on a finite space $\Omega$, we define the following two distances. The total variation distance, denoted by $\norm{\nu - \mu}_{TV}$, is $$ \norm{\nu - \mu}_{TV} = \frac12\sum_{i\in\Omega}\abs{\nu(i)-\mu(i)}. $$ The \emph{$\chi^2$ distance}, denoted by $\norm{\frac{\nu}{\mu} - 1}_{2,{\mu}}$, is $$\norm{\frac{\nu}{\mu}-1}_{2,\mu}^2 = \norm{\nu - \mu}_{2,\frac{1}{\mu}}^2 = \sum_{i\in\Omega}{\mu(i)\p{\frac{\nu(i)}{\mu(i)}-1}^2}. $$ More generally, for any two vectors $\mb{u},\mb{v} \in \mathbb{R}_+^{|\Omega|}$, we define $$ \norm{\mb{v}}_{2,\mb{u}}^2 = \sum_{i \in \Omega} u_i v_i^2.$$ \end{definition} We make note of the following relation between the two distances defined above: using the Cauchy-Schwarz inequality, we have \begin{equation} \norm{\frac{\nu}{\mu}-1}_{2,\mu} \geq 2 \norm{\nu - \mu}_{TV}. \label{eq:chiTV} \end{equation} Next, we define a matrix norm that will be useful in determining the rate of convergence, or the mixing time, of a finite-state Markov chain. \begin{definition}[Matrix norm] Consider a $|\Omega| \times |\Omega|$ non-negative valued matrix $A \in \mathbb{R}_+^{|\Omega| \times |\Omega|}$ and a given vector $\mb{u} \in \mathbb{R}_+^{|\Omega|}$. Then, the matrix norm of $A$ with respect to $\mb{u}$ is defined as follows: $$\|A\|_\mb{u} = \sup_{\mb{v} : \mathbb{E}_{\mb{u}}[\mb{v}]=0 } {\frac{{\|A \mb{v}\|}_{2,\mb{u}}}{\|\mb{v}\|_{2,\mb{u}}}}, $$ where $\mathbb{E}_{\mb{u}}[\mb{v}] = \sum_{i} u_i v_i$. \end{definition} It can be easily checked that the above definition of the matrix norm satisfies the following properties. \begin{itemize} \item[{\bf P1}.] For matrices $A, B \in \mathbb{R}_+^{|\Omega| \times |\Omega|}$ and $\pi \in \mathbb{R}_+^{|\Omega|}$, $$ \|A + B \|_\pi \leq \|A \|_\pi + \|B \|_\pi. $$ \item[{\bf P2}.] For a matrix $A \in \mathbb{R}_+^{|\Omega| \times |\Omega|}$, $\pi \in \mathbb{R}_+^{|\Omega|}$ and $c \in \mathbb{R}$, $$ \|c A \|_\pi = |c| \|A \|_\pi.$$ \item[{\bf P3}.] Let $A$ and $B$ be transition matrices of reversible Markov chains, i.e.\ $A = A^*$ and $B = B^*$. Let both of them have $\pi$ as their unique stationary distribution. Then, $$ \| A B \|_\pi \leq \|A \|_\pi \|B \|_\pi.$$ \item[{\bf P4}.] Let $A$ be the transition matrix of a reversible Markov chain, i.e. $A = A^*$, with stationary distribution $\pi$. Then, $$\|A\|_\pi\leq \lambda_{\max},$$ where $\lambda_{\max}=\max\{|\lambda| : \lambda \neq 1, \ \mbox{$\lambda$ is an eigenvalue of $A$}\}$. \end{itemize} For a probability matrix $P$, we will mostly be interested in the matrix norm of $P$ with respect to its stationary distribution $\pi$, i.e.\ $\|P\|_\pi$. Therefore, in this paper, if we use a matrix norm for a probability matrix without mentioning the reference measure, then it is with respect to the stationary distribution. That is, in the above example $\|P\|$ will mean $\|P\|_\pi$.
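For a reversible chain, the bound in {\bf P4} is in fact attained, and $\|P\|_\pi$ can be evaluated numerically via the similarity transform $D_\pi^{1/2} P D_\pi^{-1/2}$, which is symmetric under reversibility. A minimal sketch, reusing the toy chain above (an illustration of ours, not part of the analysis):
\begin{verbatim}
# Evaluate ||P||_pi for a reversible chain via eigenvalues (property P4,
# which holds with equality in the reversible case).
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])    # reversible toy chain from above
pi = np.array([0.25, 0.50, 0.25])     # its stationary distribution

# D^{1/2} P D^{-1/2} is symmetric for reversible P, so use eigvalsh.
d = np.sqrt(pi)
S = (d[:, None] * P) / d[None, :]
eigs = np.sort(np.abs(np.linalg.eigvalsh(S)))[::-1]

lam_max = eigs[1]     # largest |eigenvalue| besides 1
print(lam_max)        # 0.5 = ||P||_pi; "mixing time" ~ 1/(1 - ||P||)
\end{verbatim}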
With these definitions, it follows that for any distribution $\mu$ on $\Omega$, \begin{equation}\label{eq:mixing} \norm{\frac{\mu P}{\pi}-1}_{2,\pi}\leq \|P^*\|\norm{\frac{\mu}{\pi}-1}_{2,\pi}, \end{equation} since $\mathbb{E}_{\pi}\left[\frac{\mu}{\pi}-1\right]=0$, where $\frac{\mu}{\pi} = [\mu(i)/\pi(i)]$. The Markov chain of our interest, the Glauber dynamics, is reversible, i.e.\ $P = P^*$. Therefore, for a reversible Markov chain starting with initial distribution $\mu(0)$, the distribution $\mu(\tau)$ at time $\tau$ is such that \begin{eqnarray} \norm{\frac{\mu(\tau)}{\pi}-1}_{2,\pi}\leq \|P\|^{\tau}\norm{\frac{\mu(0)}{\pi}-1}_{2,\pi}.\label{eq:mixing2} \end{eqnarray} Now, starting from any state $i$, i.e.\ the probability distribution with unit mass on state $i$, the initial distance $\norm{\frac{\mu(0)}{\pi}-1}_{2,\pi}$ is in the worst case bounded above by $\sqrt{1/\pi_{\min}}$, where $\pi_{\min} = \min_i \pi_i$. Therefore, for any $\delta > 0$ we have $\norm{\frac{\mu(\tau)}{\pi}-1}_{2,\pi} \leq \delta$ for any $\tau$ such that\footnote{Throughout this paper, we shall utilize the standard order-notations: for two functions $g, f : \mathbb{R}_+ \to \mathbb{R}_+$, $g(x) = \omega(f(x))$ means $\liminf_{x\to\infty} g(x)/f(x) = \infty$; $g(x) = \Omega(f(x))$ means $\liminf_{x\to\infty} g(x)/f(x) > 0$; $g(x) = \Theta(f(x))$ means $0 < \liminf_{x\to\infty} g(x)/f(x) \leq \limsup_{x\to\infty} g(x)/f(x) < \infty$; $g(x) = O(f(x))$ means $\limsup_{x\to\infty} g(x)/f(x) < \infty$; $g(x) = o(f(x))$ means $\limsup_{x\to\infty} g(x)/f(x) =0$.} $$ \tau \geq \frac{\log 1/\pi_{\min} + \log 1/\delta}{\log 1/\|P\|} ~=~\Theta\left(\frac{\log 1/\pi_{\min} + \log 1/\delta}{1-\|P\|}\right).$$ This suggests that the ``mixing time'', i.e.\ the time to reach (close to) the stationary distribution of the Markov chain, scales inversely with $1-\|P\|$. Therefore, we will define the ``mixing time'' of a Markov chain with transition matrix $P$ as $1/(1-\|P\|)$. \subsection{Glauber Dynamics \& Algorithm for Wireless Network}\label{ssec:glauber} We now describe the relation between the algorithm for the wireless network (cf. Section \ref{ssec:algo1}) and a specific irreducible, aperiodic, reversible Markov chain on the space $\mc{I}(G)$ of independent sets, or schedules, of the wireless network with graph $G=(V,E)$. This chain is known as the Glauber dynamics, which underlies the standard Metropolis-Hastings \cite{MRRTT53, H70} sampling mechanism, and is described next. \subsubsection{Glauber Dynamics \& Its Mixing Time} We shall start off with the definition of the Glauber dynamics, followed by a useful bound on its mixing time. \begin{definition}[Glauber dynamics] Consider a graph $G = (V,E)$ of $n = |V|$ nodes with node weights $\boldsymbol{W} = [W_i] \in \mathbb{R}_+^n$. The Glauber dynamics based on weight $\boldsymbol{W}$, denoted by $GD(\boldsymbol{W})$, is a Markov chain on the space of independent sets of $G$, $\mc{I}(G)$. The transitions of this Markov chain are described next. Suppose the Markov chain is currently in the state $\boldsymbol{\sigma} \in \mc{I}(G)$. Then, the next state, say $\boldsymbol{\sigma}'$, is decided as follows: pick a node $i \in V$ uniformly at random and \begin{enumerate} \item[$\circ$] set $\sigma_j' = \sigma_j$ for $j \neq i$, \item[$\circ$] if $\sigma_k = 0$ for all $k \in \mathcal{N}(i)$, then set $$ \sigma_i' = \begin{cases} 1 &\mbox{with probability } \frac{\exp(W_i)}{1 + \exp(W_i)}\\ 0 &\mbox{otherwise,} \end{cases}$$ \item[$\circ$] else set $\sigma_i' = 0$.
\end{enumerate} \end{definition} It can be verified that the Glauber dynamics $GD(\boldsymbol{W})$ is reversible with stationary distribution $\pi$ given by \begin{eqnarray} \pi_{\boldsymbol{\sigma}} & \propto & \exp(\boldsymbol{W}\cdot\boldsymbol{\sigma}), \qquad \text{for any} \quad \boldsymbol{\sigma} \in \mc{I}(G).\label{eq:glauber} \end{eqnarray} Now we describe a bound on the mixing time of the Glauber dynamics. \begin{lemma}\label{lem:glaumixing} Let $P$ be the transition matrix of the Glauber dynamics $GD(\boldsymbol{W})$ with $n$ nodes. Then, \begin{eqnarray} \|P\| &\leq& 1 - \frac{1}{n^2 2^{2n+3} \exp\left(2(n+1)W_{\max}\right)},\\ \left\|e^{n(P-I)}\right\| & \leq & 1 - \frac{1}{n 2^{2n+4} \exp(2(n+1)W_{\max})}. \end{eqnarray} \end{lemma} \begin{proof} By property {\bf P4} of the matrix norm and Cheeger's inequality \cite{C, DFK91, JS, DS, sinclair}, it is well known that $\|P\|\leq \lambda_{\max} \le 1 - \frac{\Phi^2}{2}$, where $\Phi$ is the conductance of $P$, defined as $$ \Phi ~=~ \min_{S \subset \mc{I}(G): \pi(S)\le\frac12} \frac{Q(S,S^c)}{\pi(S)\pi(S^c)},$$ where $S^c = \mc{I}(G) \backslash S$ and $Q(S,S^c) = \sum_{\boldsymbol{\sigma}\in S, \boldsymbol{\sigma}'\in S^c}{\pi(\boldsymbol{\sigma})P(\boldsymbol{\sigma},\boldsymbol{\sigma}')}.$ Now we have \begin{eqnarray*} \Phi &\ge& \min_{S\subset \mc{I}(G)}{Q(S,S^c)} \\ & \ge & \min_{P(\boldsymbol{\sigma},\boldsymbol{\sigma}')\ne0} \pi(\boldsymbol{\sigma})P(\boldsymbol{\sigma},\boldsymbol{\sigma}') \\ &\ge& \pi_{\min}\cdot\min_i \frac1n \frac1{1+\exp(W_i)} \\ & \geq & \frac1{2^n\exp(nW_{\max})}\cdot\frac1n\frac1{1+\exp(W_{\max})}\\ &\ge& \frac1{n2^{n+1}\exp\left((n+1)W_{\max}\right)}. \end{eqnarray*} Therefore, $$\norm{P} \le 1 - \frac{1}{n^2 2^{2n+3}\exp\left(2(n+1)W_{\max}\right)}.$$ Now consider $e^{n(P-I)}$. Using properties {\bf P1}, {\bf P2} and {\bf P3} of the matrix norm, we have: \begin{eqnarray} \left\|e^{n(P-I)}\right\| & = & \left\| e^{-n} \sum_{k=0}^\infty \frac{n^k P^k}{k!} \right\| \nonumber \\ & \leq & e^{-n} \sum_{k=0}^\infty \frac{n^k \left\|P\right\|^k}{k!} \nonumber \\ & = & e^{n(\|P\| - 1)} \nonumber \\ & \leq & 1 - \frac{n(1-\|P\|)}{2}.\label{eq:dx1} \end{eqnarray} In the last inequality, we have used the fact that $e^{-x} \leq 1 - x/2$ for all $x \in [0,1]$, together with the observation that $x = n(1-\|P\|) \in [0,1]$ by the bound on $\|P\|$ derived above. Hence, from the bound on $\|P\|$, we obtain \begin{eqnarray} \left\|e^{n(P-I)}\right\| & \leq & 1 - \frac{1}{ n 2^{2n+4} \exp(2(n+1)W_{\max})}. \end{eqnarray} This completes the proof of Lemma \ref{lem:glaumixing}. \end{proof} \subsubsection{Relation to Algorithm} Now we relate our algorithm for wireless network scheduling described in Section \ref{ssec:algo1} with an appropriate continuous time version of the Glauber dynamics with time-varying weights. Recall that $\mathbf{Q}(t)$ and $\boldsymbol{\sigma}(t)$ denote the queue-size vector and schedule at time $t$. The algorithm changes its scheduling decision, $\boldsymbol{\sigma}(t)$, when a node's exponential clock of rate $1$ ticks. Due to the memoryless property of the exponential distribution and the independence of the clocks of all nodes, this is equivalent to having a global exponential clock of rate $n$, where upon each clock tick one of the $n$ nodes is chosen uniformly at random. This node decides its transition as explained in Section \ref{ssec:algo1}. Thus, the effective dynamics of the algorithm upon a global clock tick is such that the schedule $\boldsymbol{\sigma}(t)$ evolves exactly as per the Glauber dynamics $GD(\boldsymbol{W}(t))$.
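To make this correspondence concrete, the following minimal sketch implements one transition of $GD(\boldsymbol{W})$ exactly as in the definition above; the adjacency-list representation and the toy instance are implementation assumptions of this illustration.
\begin{verbatim}
# One transition of the Glauber dynamics GD(W) on independent sets of G.
import math
import random

def glauber_step(sigma, W, neighbors):
    """sigma: current schedule (list of 0/1); W: node weights;
    neighbors: adjacency lists of the interference graph G."""
    i = random.randrange(len(sigma))   # pick a node uniformly at random
    sigma = list(sigma)                # copy; sigma_j unchanged for j != i
    if any(sigma[k] for k in neighbors[i]):
        sigma[i] = 0                   # some neighbor is active: back off
    else:
        p = math.exp(W[i]) / (1.0 + math.exp(W[i]))
        sigma[i] = 1 if random.random() < p else 0
    return sigma

# Example: 3-node path graph with uniform weights.
nbrs = [[1], [0, 2], [1]]
s = [0, 0, 0]
for _ in range(5):
    s = glauber_step(s, [1.0, 1.0, 1.0], nbrs)
print(s)   # remains an independent set of the path graph
\end{verbatim}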
Recall that $\boldsymbol{W}(t)$ is determined by $\mathbf{Q}(\lfloor t \rfloor)$. With abuse of notation, let the transition matrix of this Glauber dynamics be denoted by $GD(\boldsymbol{W}(t))$. Now consider any $\tau \in \mathbb{Z}_+$. Let $\mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)$ be the states at time $\tau$. Then, \begin{eqnarray*} \mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau+1)} \, \Big| \, \mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)\right] & = & \sum_{k=0}^\infty \boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)} \Pr(\zeta = k) GD(\boldsymbol{W}(\tau))^k, \end{eqnarray*} where we use the notation $\boldsymbol{\delta}_{\boldsymbol{\sigma}}$ for the distribution with singleton support $\{\boldsymbol{\sigma}\}$, and $\zeta$ is a Poisson random variable of mean $n$. In the above, the expectation is taken with respect to the distribution of $\boldsymbol{\sigma}(\tau+1)$ given $\mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)$. Therefore, it follows that \begin{eqnarray}\label{eq:fz1} \mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau+1)} \, \Big| \, \mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)\right] & = & \boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)} e^{n(GD(\boldsymbol{W}(\tau))-I)} \nonumber \\ & = & \boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)} P(\tau), \end{eqnarray} where $P(\tau) \stackrel{\triangle}{=} e^{n(GD(\boldsymbol{W}(\tau))-I)}$. In general, for any $\delta\in[0,1]$, \begin{eqnarray}\label{eq:fz1-1} \mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau+\delta)} \, \Big| \, \mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)\right] & = & \boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)} P^{\delta}(\tau), \end{eqnarray} where $P^{\delta}(\tau) \stackrel{\triangle}{=} e^{\delta n(GD(\boldsymbol{W}(\tau))-I)}$. \subsection{Loss Network \& Algorithm for Circuit Switched Network}\label{ssec:lossnet} For the buffered circuit switched network, the Markov chain of interest is related to the classical stochastic loss network model. This model has been popularly utilized to study the performance of various systems, including telephone networks, human resource allocation, etc. (cf. \cite{Kelly}). The stochastic loss network model is very similar to the model of the buffered circuit switched network, with the only difference being that it does not have any buffers at the ingress nodes. \subsubsection{Loss Network \& Its Mixing Time} A loss network is described by a network graph $G = (V,E)$ with capacitated links $[C_e]_{e\in E}$, $n$ routes $\{R_i:R_i\subset E, 1\leq i\leq n\}$, and no buffer or queue at the ingress of each route. For each route $R_i$, there is a dedicated exogenous, independent Poisson arrival process with rate $\phi_i$. Let $z_i(t)$ be the number of active flows on route $i$ at time $t$, with the notation $\boldsymbol{z}(t) = [z_i(t)]$. Clearly, $\boldsymbol{z}(t) \in \X$ due to the network capacity constraints. At time $t$, when a new exogenous flow arrives on route $R_i$, if it can be accepted by the network, i.e. $\boldsymbol{z}(t) + e_i \in \X$ (where $e_i$ denotes the $i$th unit vector), then it is accepted with $z_i(t) \to z_i(t)+1$. Or else, it is dropped (and hence lost forever). Each flow holds a unit amount of capacity on all links along its route for a time that is Exponentially distributed with mean $1$, independently of everything else. Upon the completion of its holding time, the flow departs and frees unit capacity on all links of its route.
Therefore, effectively this loss network model can be described as a finite state Markov process with state space $\X$. Given state $\boldsymbol{z} =[z_i] \in \X$, the possible transitions and corresponding rates are given as \begin{equation} z_i ~~\to~~ \begin{cases} z_i + 1,\quad\mbox{with rate}\quad \phi_i\quad\mbox{if}\quad \boldsymbol{z}+e_i \in \X, \\ z_i - 1,\quad\mbox{with rate}\quad z_i. \end{cases} \label{eq:0} \end{equation} It can be verified that this Markov process is irreducible, aperiodic, and time-reversible. Therefore, it is positive recurrent (due to the finite state space) and has a unique stationary distribution. Its stationary distribution $\pi$ is known (cf. \cite{Kelly}) to have the following product-form: for any $\boldsymbol{z} \in \X$, \begin{eqnarray} \pi_{\boldsymbol{z}} & \propto & \prod_{i=1}^n\frac{\phi_i^{z_i}}{z_i!}. \label{eq:lossnet} \end{eqnarray} We will be interested in the discrete-time (or {\em embedded}) version of this Markov process, which can be defined as follows. \begin{definition}[Loss Network] A loss network Markov chain with capacitated graph $G=(V,E)$, capacities $C_e, e\in E$, and $n$ routes $R_i, 1\leq i\leq n$, denoted by $\textbf{LN}(\boldsymbol{\phi})$, is a Markov chain on $\X$. The transition probabilities of this Markov chain are described next. Given a current state $\boldsymbol{z} \in \X$, the next state $\boldsymbol{z}^*\in \X$ is decided by first picking a route $R_i$ uniformly at random and performing the following: \begin{enumerate} \item[$\circ$] $z^*_j=z_j$ for $j\neq i$ and $z^*_i$ is decided by $$ z^*_i = \begin{cases} z_i+1 &\mbox{with probability } \frac{\phi_i}{\mc{R}}\cdot\mathbf{1}_{\{\boldsymbol{z}+e_i\in \X\}}\\ z_i-1 &\mbox{with probability } \frac{z_i}{\mc{R}}\\ z_i&\mbox{otherwise,} \end{cases}$$ \end{enumerate} where $ \mc{R} = \sum_i \phi_i + C_{\max}.$ \end{definition} $\textbf{LN}(\boldsymbol{\phi})$ has the same stationary distribution as in \eqref{eq:lossnet}, and it is also irreducible, aperiodic, and reversible. Next, we state a bound on the mixing time of the loss network Markov chain $\textbf{LN}(\boldsymbol{\phi})$. \begin{lemma}\label{lem:lossmixing} Let $P$ be the transition matrix of $\textbf{LN}(\boldsymbol{\phi})$ with $n$ routes. If $\boldsymbol{\phi}=\exp(\boldsymbol{W})$ with\footnote{We use the following notation: given a function $g: \mathbb{R} \to \mathbb{R}$ and a $d$-dimensional vector $\mb{u} \in \mathbb{R}^d$, let $g(\mb{u}) = [g(u_i)] \in \mathbb{R}^d$.} $W_i\geq 0$ for all $i$, then \begin{eqnarray} \norm{P} &\le& 1 - \frac{1}{8 n^4 C_{\max}^{2nC_{\max}+2n+2}~ \exp\left(2(nC_{\max}+1)W_{\max}\right)},\\ \norm{e^{n\mc{R} (P - I)}}&\le& 1 - \frac{1}{16 n^3 C_{\max}^{2nC_{\max}+2n+2}~ \exp\left(2(nC_{\max}+1)W_{\max}\right)}. \end{eqnarray} \end{lemma} \begin{proof} As in the proof of Lemma \ref{lem:glaumixing}, a simple lower bound for the conductance $\Phi$ of $P$ is given by \begin{eqnarray} \Phi &\geq & \pi_{\min} \cdot \min_{P_{\boldsymbol{z},\boldsymbol{z}'}\ne0} P_{\boldsymbol{z},\boldsymbol{z}'}.
\label{ed0} \end{eqnarray} To obtain a lower bound on $\pi_{\min}$, recall from \eqref{eq:lossnet} that $$\pi_{\boldsymbol{z}} ~=~\frac1Z \prod_{i=1}^n\frac{\phi_i^{z_i}}{z_i!},$$ where $ Z = \sum_{\boldsymbol{z} \in \mc{X}} \prod_{i=1}^n\frac{\phi_i^{z_i}}{z_i !}$, and consider the following: \begin{eqnarray*} Z ~ \leq ~ |\mc{X}| \phi_{\max}^{nC_{\max}} & \leq & C_{\max}^n \exp(nC_{\max}W_{\max}), \end{eqnarray*} and \begin{eqnarray*} \prod_{i=1}^n\frac{\phi_i^{z_i}}{z_i !} ~ \geq ~ \frac1{\left(C_{\max}!\right)^n} & \geq &\frac1{C_{\max}^{n C_{\max}}}. \end{eqnarray*} By combining the above inequalities, we obtain \begin{eqnarray} \pi_{\min}~\geq~ \frac1{C_{\max}^{n C_{\max}+n}\exp(nC_{\max}W_{\max})}.\label{ed1} \end{eqnarray} On the other hand, one can bound $\min_{P_{\boldsymbol{z},\boldsymbol{z}'}\ne0} P_{\boldsymbol{z},\boldsymbol{z}'}$ as follows: \begin{eqnarray} P_{\boldsymbol{z},\boldsymbol{z}'}~\geq~ \frac1n \cdot \frac1{\mc{R}}~\geq~\frac1n \cdot \frac1{n\phi_{\max}+C_{\max}} ~\geq~\frac1{2n^2C_{\max}\exp(W_{\max})},\label{ed2} \end{eqnarray} where we use the fact that $x + y \leq 2xy$ if $x, y \geq 1$. Now, by combining \eqref{ed1} and \eqref{ed2}, we have $$\Phi \geq \frac1{2n^2C_{\max}^{n C_{\max}+n+1}\exp\left((nC_{\max}+1)W_{\max}\right)}.$$ Therefore, using property {\bf P4} of the matrix norm and Cheeger's inequality, we obtain the desired conclusion as $$\norm{P} ~\le~ \lambda_{\max} ~\le~ 1-\frac{\Phi^2}{2} ~\le~ 1 - \frac1{8n^4C_{\max}^{2n C_{\max}+2n+2}\exp\left(2(nC_{\max}+1)W_{\max}\right)}. $$ Furthermore, using this bound and arguments similar to those in the proof of Lemma \ref{lem:glaumixing}, we have $$\norm{e^{n\mc{R} (P - I)}}~\le~ 1 - \frac{1}{16 n^3 C_{\max}^{2nC_{\max}+2n+2}~ \exp\left(2(nC_{\max}+1)W_{\max}\right)}.$$ \end{proof} \subsubsection{Relation to Algorithm} The scheduling algorithm for the buffered circuit switched network described in Section \ref{ssec:algo2} effectively simulates a stochastic loss network with time-varying arrival rates $\boldsymbol{\phi}(t)$, where $\phi_i(t) = \exp(W_i(t))$. That is, the relation of the algorithm in Section \ref{ssec:algo2} to the loss network is analogous to the relation of the algorithm in Section \ref{ssec:algo1} to the Glauber dynamics explained in the previous section. To this end, for a given $\tau \in \mathbb{Z}_+$, let $\mathbf{Q}(\tau)$ and $\boldsymbol{z}(\tau)$ be the queue-size vector and the vector of active flows at time $\tau$. With abuse of notation, let $LN(\exp(\boldsymbol{W}(\tau)))$ be the transition matrix of the corresponding loss network, with $\boldsymbol{W}(\tau)$ dependent on $\mathbf{Q}(\tau)$. Then, for any $\delta\in[0,1]$, \begin{equation}\label{eq:fz2} \mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{z}(\tau+\delta)} \, \Big| \, \mathbf{Q}(\tau), \boldsymbol{z}(\tau)\right] = \boldsymbol{\delta}_{\boldsymbol{z}(\tau)} e^{n\delta \mc{R}(\tau)(LN(\exp(\boldsymbol{W}(\tau)))-I)}, \end{equation} where $\mc{R}(\tau)=\sum_i \exp(W_i(\tau))+C_{\max}$. \subsubsection{Positive Harris Recurrence \& Its Implication}\label{sssec:harris} For completeness, we define the well known notion of positive Harris recurrence (e.g.\ see \cite{dai95,dainotes}). We also state a useful implication of it, to explain its desirability. In this paper, we will be concerned with a discrete-time, time-homogeneous Markov process or chain evolving over a complete, separable metric space ${\sf X}$. Let $\mc{B}_{\sf X}$ denote the Borel $\sigma$-algebra on ${\sf X}$.
We assume that the space ${\sf X}$ is endowed with a norm\footnote{One may assume it to be induced by the metric of ${\sf X}$, denoted by $d$. For example, for any $\mb{x} \in {\sf X}$, $|\mb{x}| = d(\mathbf{0},\mb{x})$ with respect to a fixed $\mathbf{0} \in {\sf X}$.}, denoted by $|\cdot|$. Let $X(\tau)$ denote the state of the Markov chain at time $\tau \in \mathbb{Z}_+$. Consider any $A \in \mc{B}_{\sf X}$. Define the stopping time $T_A = \inf\{\tau \geq 1 : X(\tau) \in A\}$. Then the set $A$ is called Harris recurrent if $$ {\Pr}_{\mb{x}}(T_A < \infty) = 1 \qquad \mbox{for any } \mb{x} \in {\sf X}, $$ where $\Pr_{\mb{x}}(\cdot) \equiv \Pr( \cdot | X(0) = \mb{x})$. A Markov chain is called Harris recurrent if there exists a $\sigma$-finite measure $\mu$ on $({\sf X}, \mc{B}_{\sf X})$ such that whenever $\mu(A) > 0$ for $A\in \mc{B}_{\sf X}$, $A$ is Harris recurrent. It is well known that if $X$ is Harris recurrent then an essentially unique invariant measure exists (e.g.\ see Getoor \cite{Getoor}). If the invariant measure is finite, then it may be normalized to obtain a unique invariant probability measure (or stationary probability distribution); in this case $X$ is called positive Harris recurrent. Now we describe a useful implication of positive Harris recurrence. Let $\pi$ be the unique invariant (or stationary) probability distribution of the positive Harris recurrent Markov chain $X$. Then the following ergodic property is satisfied: for any $\mb{x} \in {\sf X}$ and any non-negative measurable function $f: {\sf X} \to \mathbb{R}_+$, $$ \lim_{T\to\infty} \frac{1}{T} \sum_{\tau = 0}^{T-1} f(X(\tau)) = \mathbb{E}_\pi[f], ~~{\Pr}_{\mb{x}}\mbox{-almost surely}.$$ Here $\mathbb{E}_\pi[f] = \int f(\mb{z})\, \pi(d\mb{z})$. Note that $\mathbb{E}_\pi[f]$ may not be finite. \subsubsection{Criteria for Positive Harris Recurrence} Here we introduce a well known criterion for establishing positive Harris recurrence, based on the existence of a Lyapunov function and an appropriate petite set. We will need some definitions to begin with. Given a probability distribution (also called a sampling distribution) $a$ on $\mathbb{N}$, the $a$-sampled transition matrix of the Markov chain, denoted by $K_a$, is defined as $$ K_a(\mb{x}, B) = \sum_{\tau\geq 0} a(\tau)P^\tau(\mb{x}, B), ~~\mbox{for any}~~ \mb{x}\in {\sf X}, ~B \in \mc{B}_{\sf X}.$$ Now, we define the notion of a \emph{petite} set. A non-empty set $A \in \mc{B}_{\sf X}$ is called $\mu_a$-\emph{petite} if $\mu_a$ is a non-trivial measure on $({\sf X},\mc{B}_{\sf X})$ and $a$ is a probability distribution on $\mathbb{N}$ such that for any $\mb{x} \in A$, $$ K_a(\mb{x}, \cdot) \geq \mu_a(\cdot).$$ A set is called a \emph{petite} set if it is $\mu_a$-petite for some such non-trivial measure $\mu_a$. A known sufficient condition for establishing positive Harris recurrence of a Markov chain is to establish positive recurrence of a closed petite set, as stated in the following lemma. We refer the interested reader to the book by Meyn and Tweedie \cite{MT-book} or the recent survey by Foss and Konstantopoulos \cite{Foss-Fluid} for details. \begin{lemma}\label{lem:one} Let $B$ be a closed petite set. Suppose $B$ is Harris recurrent, i.e.\ $\Pr_\mb{x}(T_B < \infty) = 1$ for any $\mb{x} \in {\sf X}$. Further, let $$ \sup_{\mb{x} \in B} \mathbb{E}_\mb{x}\left[T_B\right] < \infty.$$ Then the Markov chain is positive Harris recurrent. Here $\mathbb{E}_{\mb{x}}$ is defined with respect to $\Pr_{\mb{x}}$.
\end{lemma} Lemma \ref{lem:one} suggests that to establish the positive Harris recurrence of the network Markov chain, it is sufficient to find a closed petite set that satisfies the conditions of Lemma \ref{lem:one}. To establish the positive recurrence of a closed petite set, we shall utilize the {\em drift criteria} based on an appropriate Lyapunov function, stated in the following lemma (cf. \cite[Theorem 1]{Foss-Fluid}). \begin{lemma}\label{lem:two} Let $L : {\sf X} \to \mathbb{R}_+$ be a function such that $L(\mb{x}) \to \infty$ as $|\mb{x}| \to \infty$. For any $\kappa > 0$, let $B_{\kappa} = \{ \mb{x} : L(\mb{x}) \leq \kappa\}$. And let there exist functions $h: {\sf X} \to \mathbb{R}$ and $g : {\sf X} \to \mathbb{Z}_+$ such that for any $\mb{x} \in {\sf X}$, $$\mathbb{E}\left[L(X(g(\mb{x}))) - L(X(0)) | X(0) = \mb{x} \right] \leq -h(\mb{x}),$$ and that satisfy the following conditions: \begin{itemize} \item[(a)] $\inf_{\mb{x} \in {\sf X}} h(\mb{x}) > -\infty$. \item[(b)] $\liminf_{L(\mb{x}) \to \infty} h(\mb{x}) > 0$. \item[(c)] $\sup_{L(\mb{x}) \leq \gamma} g(\mb{x}) < \infty$ for all $\gamma > 0$. \item[(d)] $\limsup_{L(\mb{x})\to\infty} g(\mb{x})/h(\mb{x}) < \infty$. \end{itemize} Then, there exists a constant $\kappa_0 > 0$ so that for all $\kappa > \kappa_0$, the following holds: \begin{eqnarray} \mathbb{E}_\mb{x}\left[ T_{B_\kappa} \right] & < & \infty, \qquad \mbox{for any $\mb{x} \in {\sf X}$,} \label{eq:d1a} \\ \sup_{\mb{x} \in B_\kappa} \mathbb{E}_\mb{x}\left[ T_{B_\kappa} \right] & < & \infty. \label{eq:d2a} \end{eqnarray} That is, $B_\kappa$ is positive recurrent. \end{lemma} \section{Proofs of Theorems \ref{thm:main1} \& \ref{thm:main2}}\label{sec:mainproof} This section provides the proofs of Theorems \ref{thm:main1} and \ref{thm:main2}. We start with the necessary formalism, followed by a summary of the entire proof. This summary utilizes a series of lemmas whose proofs follow thereafter. \subsection{Network Markov Process} We describe the discrete-time network Markov processes, under both algorithms, that we shall utilize throughout. Let $\tau\in \mathbb{Z}_+$ be the time index. Let $\mathbf{Q}(\tau) = [Q_i(\tau)]$ be the queue-size vector at time $\tau$ and let $\boldsymbol{x}(\tau)$ be the schedule at time $\tau$, with $\boldsymbol{x}(\tau)=\boldsymbol{\sigma}(\tau)\in\mc{I}(G)$ for the wireless network and $\boldsymbol{x}(\tau)=\boldsymbol{z}(\tau)\in\X$ for the circuit switched network. It can be checked that the tuple $X(\tau) = (\mathbf{Q}(\tau), \boldsymbol{x}(\tau))$ is the Markov state of the network for both setups. Here $X(\tau) \in {\sf X}$, where ${\sf X} = \mathbb{R}_+^n \times {\mc{I}(G)}$ or ${\sf X} = \mathbb{Z}_+^n \times {\X}$. Clearly, ${\sf X}$ is a Polish space endowed with the natural product topology. Let $\mc{B}_{{\sf X}}$ be the Borel $\sigma$-algebra of ${\sf X}$ with respect to this product topology. For any $\mb{x} = (\mathbf{Q}, \boldsymbol{x}) \in {\sf X}$, we define the norm of $\mb{x}$, denoted by $|\mb{x}|$, as $$ |\mb{x}| = |\mathbf{Q}| + |\boldsymbol{x}|, $$ where $|\mathbf{Q}|$ denotes the standard $\ell_1$ norm, while $|\boldsymbol{x}|$ is defined as its index in $\{0,\dots, {|\Omega|-1}\}$, assigned arbitrarily. Since $|\boldsymbol{x}|$ is always bounded, $|\mb{x}| \to \infty$ if and only if $|\mathbf{Q}| \to \infty$. Theorems \ref{thm:main1} and \ref{thm:main2} assert that the Markov process $X(\tau)$ is positive Harris recurrent.
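Before turning to the proof, a toy illustration of the drift criteria of Lemma \ref{lem:two}: for a single discrete-time queue with Bernoulli($\lambda$) arrivals, $\lambda < 1$, and unit service, the Lyapunov function $L(Q) = Q$ with $g \equiv 1$ has one-step drift at most $-(1-\lambda)$ whenever the queue is non-empty. The following sketch, a toy example of ours and not part of the proof, estimates this drift by simulation.
\begin{verbatim}
# Toy check of the drift condition of Lemma 2 for a single queue:
# L(Q) = Q, g(x) = 1, expected one-step drift <= -(1 - lambda) when Q > 0.
import random

lam = 0.7          # illustrative arrival rate, lam < 1

def step(q):
    q = max(q - 1, 0)                                   # unit service
    return q + (1 if random.random() < lam else 0)      # Bernoulli arrival

q0, trials = 10, 200_000
drift = sum(step(q0) - q0 for _ in range(trials)) / trials
print(drift)       # approximately lam - 1 = -0.3: strictly negative drift
\end{verbatim}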
\subsection{Proof Plan} To establish the positive Harris recurrence of $X(\tau)$, we will utilize the Lyapunov drift criteria to establish the positive recurrence property of an appropriate petite set (cf. Lemma \ref{lem:one}). To establish the existence of such a Lyapunov function, we shall study properties of our randomized scheduling algorithms. Specifically, we shall show that, in a nutshell, our scheduling algorithms are {\em simulating} the maximum weight scheduling algorithm with respect to an appropriate weight, a function of the queue-size. This will lead to the desired Lyapunov function and drift criteria. The detailed proof of positive Harris recurrence that follows this intuition is stated in four steps. We briefly give an overview of these four steps. To this end, recall that the randomized algorithms for the wireless and circuit switched networks are effectively asynchronous, continuous versions of the time-varying $GD(\boldsymbol{W}(t))$ and $\textbf{LN}(\exp(\boldsymbol{W}(t)))$, respectively. Let $\pi(t)$ be the stationary distribution of the Markov chain $GD(\boldsymbol{W}(t))$ or $\textbf{LN}(\exp(\boldsymbol{W}(t)))$, and let $\mu(t)$ be the distribution of the schedule, either $\boldsymbol{\sigma}(t)$ or $\boldsymbol{z}(t)$, under our algorithm at time $t$. In the first step, roughly speaking, we argue that the weight of a schedule sampled as per the stationary distribution $\pi(t)$ is close to the weight of the maximum weight schedule, for both networks (with an appropriately defined weight). In the second step, roughly speaking, we argue that the distribution $\mu(t)$ is indeed close enough to $\pi(t)$ for all time $t$. In the third step, using these two properties, we establish the Lyapunov drift criteria for an appropriately defined Lyapunov function (cf. Lemma \ref{lem:two}). In the fourth and final step, we show that this implies positive recurrence of an appropriate closed petite set. Therefore, due to Lemma \ref{lem:one}, this will imply the positive Harris recurrence property of the network Markov process. \subsection{Formal Proof} To this end, we are interested in establishing the Lyapunov drift criteria (cf. Lemma \ref{lem:two}). For this, consider the Markov process starting at time $0$ in state $X(0) = (\mathbf{Q}(0), \boldsymbol{x}(0))$ and, as per the hypotheses of both theorems, let $\boldsymbol{\lambda} \in (1-\varepsilon)\Conv(\Omega)$ for some $\varepsilon > 0$, where $\Omega = \mc{I}(G)$ (or $\X$). Given this, we will go through four steps to prove positive Harris recurrence. \vspace{.1in} \subsubsection{Step One} Let $\pi(0)$ be the stationary distribution of $GD(\boldsymbol{W}(0))$ or $LN(\exp(\boldsymbol{W}(0)))$. The following lemma states that the average weight of a schedule sampled as per $\pi(0)$ is essentially as good as that of the maximum weight schedule with respect to the weight $f(\mathbf{Q}(0))$. \begin{lemma}\label{LEM:GOODPI} Let $\boldsymbol{x}$ be distributed over $\Omega$ as per $\pi(0)$ given $\mathbf{Q}(0)$. Then, \begin{equation}\label{eq:lemgoodpi} \mathbb{E}_{\pi(0)}[f(\mathbf{Q}(0)) \cdot \boldsymbol{x} ] ~\geq~ \left(1- \frac{\varepsilon}4\right) \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q}(0)) \cdot\boldsymbol{y} \right) - O(1). \end{equation} \end{lemma} The proof of Lemma \ref{LEM:GOODPI} is based on the variational characterization of distributions in exponential form. Specifically, we state the following proposition, which is a direct adaptation of known results in the literature (cf. \cite{GBook}).
\begin{proposition}\label{prop:goodpi} Let $T: \Omega \to \mathbb{R}$ and let $\mc{M}(\Omega)$ be the space of all probability distributions on $\Omega$. Define $F : \mc{M}(\Omega) \to \mathbb{R}$ as $$F(\mu) = \mathbb{E}_{\mu}[T(\mb{x})] + H_{ER}(\mu),$$ where $H_{ER}(\mu)$ is the standard discrete entropy of $\mu$. Then, $F$ is uniquely maximized by the distribution $\nu$, where $$ \nu_\mb{x} = \frac{1}{Z} \exp\left(T(\mb{x})\right),~~\mbox{for any}~~\mb{x} \in \Omega,$$ where $Z$ is the normalization constant (or partition function). Further, with respect to $\nu$, we have $$ \mathbb{E}_{\nu}[T(\mb{x})] \geq \left[\max_{\mb{x} \in \Omega} T(\mb{x})\right] - \log |\Omega|. $$ \end{proposition} \begin{proof} Observe that the definition of the distribution $\nu$ implies that for any $\mb{x} \in \Omega$, $$T(\mb{x}) = \log Z + \log \nu_{\mb{x}}.$$ Using this, for any distribution $\mu$ on $\Omega$, we obtain \begin{equation*} \begin{split} F(\mu) &= \sum_{\mb{x}}\mu_{\mb{x}}T(\mb{x}) - \sum_{\mb{x}}\mu_{\mb{x}}\log \mu_{\mb{x}}\\ &= \sum_{\mb{x}}\mu_{\mb{x}}(\log Z + \log \nu_{\mb{x}}) - \sum_{\mb{x}}{\mu_{\mb{x}}\log\mu_{\mb{x}}}\\ &= \sum_{\mb{x}}{\mu_{\mb{x}}\log Z} + \sum_{\mb{x}}{\mu_{\mb{x}}\log{\frac{\nu_{\mb{x}}}{\mu_{\mb{x}}}}}\\ &= \log Z + \sum_{\mb{x}}{\mu_{\mb{x}}\log{\frac{\nu_{\mb{x}}}{\mu_{\mb{x}}}}}\\ &\le \log Z + \log\biggl(\sum_{\mb{x}}{\mu_{\mb{x}}\frac{\nu_{\mb{x}}}{\mu_{\mb{x}}}}\biggr) \\ &= \log Z, \end{split} \end{equation*} where the inequality follows from Jensen's inequality (concavity of $\log$), with equality if and only if $\mu=\nu$. To establish the other claim of the proposition, consider $\mb{x}^* \in \arg\max_{\mb{x}\in\Omega}{T(\mb{x})}$ and let $\mu$ be the Dirac distribution $\mu_{\mb{x}} = \ind{\mb{x}=\mb{x}^*}$. Then, for this distribution, $$ F(\mu) = T(\mb{x}^*).$$ But $F(\nu) \geq F(\mu)$. Also, the maximal entropy of any distribution on $\Omega$ is $\log |\Omega|$. Therefore, \begin{eqnarray} T(\mb{x}^*) & \leq & F(\nu) \nonumber \\ & = & \mathbb{E}_{\nu}[T(\mb{x})] + H_{ER}(\nu) \nonumber \\ & \leq & \mathbb{E}_{\nu}[T(\mb{x})] + \log |\Omega|. \label{eq:ed10} \end{eqnarray} Re-arrangement of the terms in \eqref{eq:ed10} implies the second claim of Proposition \ref{prop:goodpi}. This completes the proof of Proposition \ref{prop:goodpi}. \end{proof} \vspace{.1in} {\em Proof of Lemma \ref{LEM:GOODPI}.} The proof is based on known observations in the context of the classical loss networks literature (cf. \cite{Kelly}). In what follows, for simplicity we use $\pi = \pi(0)$ for a given $\mathbf{Q} = \mathbf{Q}(0)$. From \eqref{eq:glauber} and \eqref{eq:lossnet}, it follows that for both network models the stationary distribution $\pi$ has the following form: for any $\boldsymbol{x} \in \Omega$, $$ \pi_{\boldsymbol{x}} \propto \prod_{i} \frac{\exp\left(W_i x_i\right)}{x_i !} ~=~\exp\left(\sum_i \left(W_i x_i - \log (x_i!)\right)\right). $$ To apply Proposition \ref{prop:goodpi}, this suggests the choice of function $T: \Omega \to \mathbb{R}$ given by $$ T(\boldsymbol{x}) = \sum_i \left(W_i x_i - \log (x_i!)\right), ~~\mbox{for any}~~\boldsymbol{x} \in \Omega.$$ Observe that for any $\boldsymbol{x} \in \Omega$ and all $i$, $x_i$ takes one of finitely many values, in both the wireless and the circuit switched network. Therefore, it easily follows that $$0 ~\leq~ \sum_{i} \log (x_i!) ~\leq~ O(1), $$ where the constant may depend on $n$ and the problem parameters (e.g. $C_{\max}$ in the circuit switched network). Therefore, for any $\boldsymbol{x} \in \Omega$, \begin{eqnarray} T(\boldsymbol{x}) & \leq & \sum_i W_i x_i \nonumber \\ & \leq & T(\boldsymbol{x}) + O(1).
\label{eq:ed11} \end{eqnarray} Define $\hat{\boldsymbol{x}} = \arg\max_{\boldsymbol{x} \in \Omega} \sum_i W_i x_i$. From \eqref{eq:ed11} and Proposition \ref{prop:goodpi}, it follows that \begin{eqnarray} \mathbb{E}_{\pi}\left[\sum_i W_i x_i\right] & \geq & \mathbb{E}_{\pi}\left[T(\boldsymbol{x})\right] \nonumber \\ & \geq & \max_{\boldsymbol{x} \in \Omega} T(\boldsymbol{x})-\log|\Omega| \nonumber \\ & \geq & T(\hat{\boldsymbol{x}})-\log|\Omega| \nonumber \\ & = & \left(\sum_i W_i \hat{x}_i\right) - O(1) - \log|\Omega|\nonumber \\ & = & \left(\max_{\boldsymbol{x} \in \Omega} \boldsymbol{W} \cdot \boldsymbol{x} \right) - O(1). \label{eq:ed12} \end{eqnarray} From the definition of the weight in both algorithms (\eqref{eq:weight1} and \eqref{eq:weight2}), for a given $\mathbf{Q}$ the weight $\boldsymbol{W} = [W_i]$ is given by \begin{eqnarray*} W_i & = & \max\left\{f(Q_i),\sqrt{ f({Q}_{\max})} \right\}. \end{eqnarray*} Define $\eta \stackrel{\Delta}{=} \frac{\varepsilon}{4\max_{\boldsymbol{x}\in \Omega} \|\boldsymbol{x}\|_1}$. To establish Lemma \ref{LEM:GOODPI}, we will consider $Q_{\max}$ large enough that $$\eta f(Q_{\max}) ~\geq~ \sqrt{ f({Q}_{\max})}.$$ For smaller $Q_{\max}$ no argument is needed, as in that case \eqref{eq:lemgoodpi} is straightforward (due to the $O(1)$ term). Therefore, in the remainder we assume that $Q_{\max}$ is large enough. For this large enough $Q_{\max}$, it follows that for all $i$, \begin{eqnarray} 0 & \leq & W_i - f(Q_i) ~\leq~ \sqrt{ f({Q}_{\max})}~\leq~\eta f(Q_{\max}). \label{eq:ed13} \end{eqnarray} Using \eqref{eq:ed13}, for any $\boldsymbol{x} \in \Omega$, \begin{eqnarray} 0 & \leq & \boldsymbol{W} \cdot \boldsymbol{x} - f(\mathbf{Q}) \cdot \boldsymbol{x} ~=~ (\boldsymbol{W} - f(\mathbf{Q})) \cdot \boldsymbol{x} \nonumber \\ & \leq & \|\boldsymbol{x}\|_1 \|\boldsymbol{W} - f(\mathbf{Q})\|_\infty\nonumber \\ & \leq & \|\boldsymbol{x}\|_1 \times \eta f(Q_{\max}) \nonumber \\ & \stackrel{(a)}{\leq} & \frac{\varepsilon}4 f(Q_{\max}) \nonumber \\ & \stackrel{(b)}{\leq} & \frac{\varepsilon}4 \left(\max_{\boldsymbol{y} \in \Omega} f(\mathbf{Q}) \cdot \boldsymbol{y}\right),\label{eq:L10} \end{eqnarray} where (a) follows from our choice of $\eta=\frac{\varepsilon}{4\max_{\boldsymbol{x}\in \Omega} \|\boldsymbol{x}\|_1}$. For (b), we use the fact that the singleton set $\{i\}$, i.e. the independent set $\{i\}$ for the wireless network and a single active flow on route $i$ for the circuit switched network, is a valid schedule. And, for $i = \arg\max_{j} Q_j$, it has weight $f(Q_{\max})$. Therefore, the weight of the maximum weight schedule among all possible schedules in $\Omega$ is at least $f(Q_{\max})$. Finally, using \eqref{eq:ed12} and \eqref{eq:L10}, we obtain \begin{eqnarray*} \mathbb{E}_{\pi}\left[f(\mathbf{Q})\cdot \boldsymbol{x}\right] &\geq & \mathbb{E}_{\pi}\left[\boldsymbol{W}\cdot\boldsymbol{x}\right] - \frac{\varepsilon}4 \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q})\cdot \boldsymbol{y} \right) \\ &\geq & \left(\max_{\boldsymbol{y}\in \Omega} \boldsymbol{W}\cdot\boldsymbol{y} \right) - O(1) - \frac{\varepsilon}4 \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q})\cdot \boldsymbol{y} \right) \\ &\geq & \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q})\cdot \boldsymbol{y} \right) - O(1) - \frac{\varepsilon}4 \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q})\cdot \boldsymbol{y} \right) \\ & = & \left(1- \frac{\varepsilon}4\right) \left(\max_{\boldsymbol{y}\in \Omega} f(\mathbf{Q})\cdot \boldsymbol{y} \right) - O(1).
\end{eqnarray*} This completes the proof of Lemma \ref{LEM:GOODPI}. \vspace{.1in} \subsubsection{Step Two} Let $\mu(t)$ be the distribution of the schedule $\boldsymbol{x}(t)$ over $\Omega$ at time $t$, given the initial state $X(0) = (\mathbf{Q}(0), \boldsymbol{x}(0))$. We wish to show that, for any initial condition $\boldsymbol{x}(0) \in \Omega$ and for $t$ large enough (but not too large), $\mu(t)$ is close to $\pi(0)$, provided $Q_{\max}(0)$ is large enough. The formal statement is as follows. \begin{lemma}\label{lem:adiabetic1} For a large enough $Q_{\max}(0)$, \begin{eqnarray} \norm{{\mu}(t)-\pi(0)}_{TV} < \varepsilon/4,\label{eq:adiabetic1} \end{eqnarray} for $t\in I=[b_1(Q_{\max}(0)), b_2(Q_{\max}(0))]$, where $b_1,b_2$ are integer-valued functions on $\mathbb{R}_+$ such that $$b_1,b_2={\sf polylog}\left(Q_{\max}(0)\right)\qquad\mbox{and}\qquad b_2/b_1=\Theta\left(\log\left(Q_{\max}(0)\right)\right).$$ In the above, the constants may depend on $\varepsilon, C_{\max}$ and $n$. \end{lemma} The notation ${\sf polylog}(z)$ represents a positive real-valued function of $z$ that scales no faster than a finite degree polynomial of $\log z$. \vspace{.1in} \noindent{\em Proof of Lemma \ref{lem:adiabetic1}.} We shall prove this lemma for the wireless network. The proof for the buffered circuit switched network follows in an identical manner, and hence we skip it. Therefore, we shall assume $\Omega = \mc{I}(G)$ and $\boldsymbol{x}(t) = \boldsymbol{\sigma}(t)$. First, we establish the desired claim for integral times. The argument for non-integral times follows easily, as explained near the end of this proof. For $t = \tau \in \mathbb{Z}_+$, we have \begin{eqnarray*} \mu(\tau+1)&=&\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau+1)}\right]\\ &=&\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)}\cdot P(\tau)\right], \end{eqnarray*} where recall that $P(\tau) = e^{n(GD(\boldsymbol{W}(\tau))-I)}$, and the last equality follows from \eqref{eq:fz1}. Again recall that the expectation is with respect to the joint distribution of $\{\mathbf{Q}(\tau), \boldsymbol{\sigma}(\tau)\}$. Hence, it follows that \begin{eqnarray*} \mu(\tau+1)&=&\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)}\cdot P(\tau)\right]\\ &=&\mathbb{E}\left[\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)}\cdot P(\tau)\,\Big|\,\mathbf{Q}(\tau)\right]\right]\\ &\stackrel{(a)}{=}&\mathbb{E}\left[\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)}\,\Big|\,\mathbf{Q}(\tau)\right]\cdot P(\tau)\right]\\ &=&\mathbb{E}\left[\tilde{\mu}(\tau)\cdot P(\tau)\right], \end{eqnarray*} where \begin{eqnarray*} \tilde{\mu}(\tau)=\tilde{\mu}(\mathbf{Q}(\tau))\stackrel{\Delta}{=}\mathbb{E}\left[\boldsymbol{\delta}_{\boldsymbol{\sigma}(\tau)}\,\Big|\,\mathbf{Q}(\tau)\right]. \end{eqnarray*} In the above, the expectation is taken with respect to the conditional marginal distribution of $\boldsymbol{\sigma}(\tau)$ given $\mathbf{Q}(\tau)$; (a) follows since $P(\tau)$ is a function of $\mathbf{Q}(\tau)$. Next, we establish the relation between $\mu(\tau)$ and $\mu(\tau+1)$.
\begin{eqnarray*} \mu(\tau+1) &=&\mathbb{E}\left[\tilde{\mu}(\tau)\cdot P(\tau)\right]\\ &=&\mathbb{E}\left[\tilde{\mu}(\tau)\cdot P(0)\right] +\mathbb{E}\left[\tilde{\mu}(\tau)\cdot (P(\tau)-P(0))\right]\\ &=&\mathbb{E}\left[\tilde{\mu}(\tau)\right]\cdot P(0) +e(\tau)\\ &=&\mu(\tau)\cdot P(0) +e(\tau), \end{eqnarray*} where $e(\tau) \stackrel{\Delta}{=}\mathbb{E}\left[\tilde{\mu}(\tau)\cdot (P(\tau)-P(0))\right]$. Here the expectation is with respect to $\mathbf{Q}(\tau)$. Similarly, \begin{eqnarray} \mu(\tau+1) &=&\mu(\tau)\cdot P(0) +e(\tau)\nonumber\\ &=&\left(\mu(\tau-1)\cdot P(0) +e(\tau-1)\right)\cdot P(0) +e(\tau)\nonumber\\ &=&\mu(\tau-1)\cdot P(0)^2 +e(\tau-1)\cdot P(0) +e(\tau). \nonumber \end{eqnarray} Therefore, recursively we obtain \begin{eqnarray} \mu(\tau+1) &=&\mu(0)\cdot P(0)^{\tau+1} +\sum_{s=0}^{\tau} e(\tau-s)\cdot P(0)^s.\label{eq:relmu} \end{eqnarray} We will choose $b_1$ (which will depend on $Q_{\max}(0)$) such that for $\tau \geq b_1$, \begin{eqnarray} \left\|\mu(0)\cdot P(0)^{\tau}-\pi(0)\right\|_{TV}&\leq& \varepsilon/8.\label{eq:defc1} \end{eqnarray} That is, $b_1$ is the {\em mixing time} of $P(0)$. Using the inequalities \eqref{eq:mixing2} and \eqref{eq:chiTV} and Lemma \ref{lem:glaumixing}, it follows that $$ b_1 \equiv b_1(Q_{\max}(0)) ~=~ {\sf polylog}\left(Q_{\max}(0)\right).$$ In the above, the constants may depend on $n$ and $\varepsilon$. Therefore, from \eqref{eq:relmu} and \eqref{eq:defc1}, it suffices to show that \begin{eqnarray} \left\|\sum_{s=0}^{\tau-1} e(\tau-1-s)\cdot P(0)^s\right\|_1\leq \varepsilon/8,\label{eq:errsum} \end{eqnarray} for $\tau \in I=[b_1,b_2]$, with an appropriate choice of $b_2 = b_2(Q_{\max}(0))$. To this end, we choose $$ b_2 \equiv b_2(Q_{\max}(0)) ~=~ \lceil b_1 \log(Q_{\max}(0)) \rceil.$$ Thus, $b_2(Q_{\max}(0)) ~=~ {\sf polylog}\left(Q_{\max}(0)\right)$ as well. With this choice of $b_2$, we obtain the following bound on $e(\tau)$ to conclude \eqref{eq:errsum}: \begin{eqnarray} \|e(\tau)\|_1&=&\|\mathbb{E}\left[\tilde{\mu}(\tau)\cdot (P(\tau)-P(0))\right]\|_1\nonumber\\ &\leq&\mathbb{E}\left[\|\tilde{\mu}(\tau)\cdot (P(\tau)-P(0))\|_1\right]\nonumber\\ &\stackrel{(a)}{\leq}&O\left(\mathbb{E}\left[\|P(\tau)-P(0)\|_{\infty}\right]\right)\nonumber\\ &\stackrel{(b)}{=}&O\left(\mathbb{E}\left[\left\|GD(\boldsymbol{W}(\tau))-GD(\boldsymbol{W}(0))\right\|_{\infty}\right]\right)\nonumber\\ &\stackrel{(c)}{=}&O\left(\mathbb{E}\left[\max_i \left|\frac1{1+\exp(W_i(\tau))}-\frac1{1+\exp(W_i(0))}\right|\right]\right)\nonumber\\ &\stackrel{(d)}{=}&O\left(\mathbb{E}\left[\max_i\left|W_i(\tau)-W_i(0)\right|\right]\right)\nonumber\\ &\stackrel{(e)}{=}&O\left(\max_i\,\mathbb{E}\left[\left|W_i(\tau)-W_i(0)\right|\right]\right)\label{eq:boundet}. \end{eqnarray} In the above, (a) follows from the standard norm inequality and the fact that $\|\tilde{\mu}(\tau)\|_1 = 1$, (b) follows from Lemma \ref{lem:last} in the Appendix, (c) follows directly from the definition of the transition matrix $GD(\boldsymbol{W})$, (d) follows from the $1$-Lipschitz\footnote{A function $f:\mathbb{R}\rightarrow\mathbb{R}$ is $k$-Lipschitz if $|f(s)-f(t)|\leq k|s-t|$ for all $s,t\in\mathbb{R}$.} property of the function $1/(1+e^x)$, and (e) follows from the fact that the vector $\boldsymbol{W}(\tau)$ is $O(1)$ dimensional\footnote{We note here that the $O(\cdot)$ notation means the existence of constants that do not depend on scaling quantities such as the time $\tau$ and $\mathbf{Q}(0)$; however, they may depend on fixed system parameters such as the number of queues.
Next, we will show that for all $i$ and $\tau\leq b_2$, \begin{eqnarray} \mathbb{E}\left[\left|W_i(\tau)-W_i(0)\right|\right] &=&O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right), \label{eq:boundet2} \end{eqnarray} where the notation ${\sf superpolylog}(z)$ represents a positive real-valued function of $z$ that scales faster than any finite-degree polynomial of $\log z$. This is enough to conclude \eqref{eq:errsum} (and hence complete the proof of Lemma \ref{lem:adiabetic1}) since \begin{eqnarray*} \left\|\sum_{s=0}^{\tau-1}e(\tau-1-s)\cdot P(0)^s\right\|_1&\leq& \sum_{s=0}^{\tau-1}\left\|e(\tau-1-s)\cdot P(0)^s\right\|_1\\ &=&\sum_{s=0}^{\tau-1}O\left(\left\|e(\tau-1-s)\right\|_1\right)\\ &\stackrel{(a)}{=}&O\left(\frac{\tau}{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right)\\ &\stackrel{(b)}{\leq}&\frac{\varepsilon}4, \end{eqnarray*} where we use \eqref{eq:boundet} and \eqref{eq:boundet2} to obtain (a), and (b) holds for large enough $Q_{\max}(0)$ and $\tau\leq b_2={\sf polylog}\left(Q_{\max}(0)\right)$. \vspace{.1in} Now to complete the proof, we only need to establish \eqref{eq:boundet2}. This is the step that utilizes the `slowly varying' property of the function $f(x) = \log\log (x+e)$. First, we provide an intuitive sketch of the argument; the somewhat involved details will follow. To explain the intuition behind \eqref{eq:boundet2}, let us consider a simpler situation where $i$ is such that $Q_i(0) = Q_{\max}(0)$ and $f(Q_i(\tau)) > \sqrt{f(Q_{\max}(\tau))}$ for a given $\tau \in [0,b_2]$; that is, $W_i(\tau) = f(Q_i(\tau))$. Now, consider the following sequence of inequalities: \begin{eqnarray} |W_i(\tau) - W_i(0)| & = & |f(Q_i(\tau)) - f(Q_i(0))| \nonumber \\ & \stackrel{(a)}{\leq} & f'(\zeta) |Q_i(\tau) - Q_i(0)|, \qquad \text{for some $\zeta$ between $Q_i(\tau)$ and $Q_i(0)$} \nonumber \\ & \stackrel{(b)}{\leq} & f'(\min\{Q_i(\tau), Q_i(0)\}) O(\tau) \nonumber \\ & \stackrel{(c)}{\leq} & f'(Q_i(0)-O(\tau)) O(\tau) \nonumber \\ & \stackrel{(d)}{=} & O\left(\frac{\tau}{Q_i(0)}\right). \label{eq:fz3} \end{eqnarray} In the above, (a) follows from the mean value theorem; (b) follows from the monotonicity of $f'$ and the Lipschitz property of $Q_i(\cdot)$ (as a function of $\tau$) -- which holds deterministically for the wireless network and probabilistically for the circuit switched network; (c) uses the same Lipschitz property; and (d) uses the facts that $\tau \leq b_2$, $b_2 = {\sf polylog}(Q_{\max}(0))$ and $Q_{\max}(0) = Q_i(0)$. Therefore, effectively the bound of \eqref{eq:fz3} is $O(1/{\sf superpolylog}(Q_{\max}(0)))$. The above explains the gist of the argument that is to follow; to make it precise, we will need to provide considerably more detail. Toward this, we consider the following two cases: (i) $f(Q_i(0))\geq \sqrt{f(Q_{\max}(0))}$, and (ii) $f(Q_i(0))< \sqrt{f(Q_{\max}(0))}$. In what follows, we provide detailed arguments for case (i); the arguments for case (ii) are similar in spirit and will be provided later in the proof.
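Before giving the details, a two-line numerical illustration of this `slowly varying' bound may help. The sketch below (in which the cubic choice ${\sf polylog}=\log^3$ is arbitrary, purely for illustration) evaluates the right-hand side $f'(Q-\tau)\,\tau$ of \eqref{eq:fz3} for $f(x)=\log\log(x+e)$:
\begin{verbatim}
import numpy as np

f = lambda x: np.log(np.log(x + np.e))
fp = lambda x: 1.0 / ((x + np.e) * np.log(x + np.e))  # f'(x)

for Q in [1e6, 1e12, 1e24]:
    tau = np.log(Q) ** 3  # an arbitrary polylog choice for tau <= b_2
    print(f"Q={Q:.0e}: tau={tau:.0f}, f'(Q-tau)*tau={fp(Q - tau) * tau:.2e}")
\end{verbatim}
The printed bounds decay faster than any fixed power of $1/\log Q$, matching the ${\sf superpolylog}$ terminology.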
\vspace{.1in} \noindent{\em Case (i):} Consider an $i$ such that $f(Q_i(0))\geq \sqrt{f(Q_{\max}(0))}$. Then, \begin{eqnarray} & & \mathbb{E}\left[\left|W_i(\tau)-W_i(0)\right|\right]\nonumber \\ & & ~ = \mathbb{E}\left[\left|W_i(\tau)-f(Q_i(0))\right|\right]\nonumber\\ & & ~ =\mathbb{E}\left[\left|f(Q_i(\tau))-f(Q_i(0))\right|\cdot \bold{I}_{\left\{f(Q_i(\tau))\geq \sqrt{f(Q_{\max}(\tau))}\right\}}\right]\nonumber\\ && \qquad + ~\mathbb{E}\left[\left|\sqrt{f(Q_{\max}(\tau))}-f(Q_i(0))\right| \cdot \bold{I}_{\left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))}\right\}}\right],\label{eq:js1} \end{eqnarray} where each equality follows from \eqref{eq:weight1}. The first term in \eqref{eq:js1} can be bounded as follows: \begin{eqnarray} &&\mathbb{E}\left[\left|f(Q_i(\tau))-f(Q_i(0))\right|\cdot \bold{I}_{\{f(Q_i(\tau))\geq \sqrt{f(Q_{\max}(\tau))}\}}\right]\nonumber\\ & & \leq\mathbb{E}\left[\left|f(Q_i(\tau))-f(Q_i(0))\right|\right]\nonumber\\ & & \stackrel{(o)}{\leq} \mathbb{E}\left[f^{\prime}\left(\min\{Q_i(\tau),Q_i(0)\}\right)|Q_i(\tau)-Q_i(0)|\right]\nonumber\\ & & \stackrel{(a)}{\leq} \sqrt{\mathbb{E}\left[f^{\prime}\left(\min\{Q_i(\tau),Q_i(0)\}\right)^2\right]}\cdot \sqrt{\mathbb{E}\left[(Q_i(\tau)-Q_i(0))^2\right]}\nonumber\\ & & \stackrel{(b)}{\leq} \sqrt{f^{\prime}\left(\frac{Q_i(0)}2\right)^2+\Theta\left(\frac{\tau}{Q_i(0)}\right)} \cdot O(\tau)\nonumber\\ & & \stackrel{(c)}{\leq} \sqrt{f^{\prime}\left(\frac12f^{-1}\left(\sqrt{f(Q_{\max}(0))}\right)\right)^2 +\Theta\left(\frac{\tau}{f^{-1}\left(\sqrt{f(Q_{\max}(0))}\right)}\right)} \cdot O(\tau)\nonumber\\ & & \stackrel{(d)}{=} O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right).\label{eq:js2} \end{eqnarray} In the above, (o) follows from the concavity of $f$ (so that $f'$ is nonincreasing) together with the mean value theorem. For (a), we use the standard Cauchy-Schwarz inequality $\mathbb{E}[XY]\leq \sqrt{\mathbb{E}[X^2]}\sqrt{\mathbb{E}[Y^2]}$. For (b), note that given $Q_i(0)$, $\mathbb{E}\left[(Q_i(0) - Q_i(\tau))^2\right] = O(\tau^2)$ for both network models -- for the wireless network this is deterministically true due to the Lipschitz property of $\mathbf{Q}(\cdot)$, while for the circuit switched network it is due to the fact that the arrival as well as the (overall) departure processes are bounded-rate Poisson processes. Given this, using Markov's inequality it follows that $$\Pr\left(\min\{Q_i(\tau), Q_i(0)\} \leq \frac{Q_i(0)}2\right) ~=~ O\left(\frac{\tau}{Q_i(0)}\right).$$ Finally, using the fact that $\sup_{y \in \mathbb{R}_+} f'(y) = O(1)$, we obtain (b). Now (c) follows from the condition on $Q_i(0)$ that $f(Q_i(0))\geq \sqrt{f(Q_{\max}(0))}$. And (d) is implied by $\tau\leq b_2={\sf polylog}(Q_{\max}(0))$ and $f(x)=\log\log(x+e)$.
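For completeness, the Markov-inequality step behind (b) can be spelled out, using the moment bound $\mathbb{E}[(Q_i(\tau)-Q_i(0))^2]=O(\tau^2)$ noted above:
\begin{eqnarray*}
\Pr\left(\min\{Q_i(\tau), Q_i(0)\} \leq \frac{Q_i(0)}2\right)
&\leq& \Pr\left(|Q_i(\tau)-Q_i(0)| \geq \frac{Q_i(0)}2\right)\\
&\leq& \frac{\mathbb{E}\left[|Q_i(\tau)-Q_i(0)|\right]}{Q_i(0)/2}
~\leq~ \frac{\sqrt{\mathbb{E}\left[(Q_i(\tau)-Q_i(0))^2\right]}}{Q_i(0)/2}
~=~ O\left(\frac{\tau}{Q_i(0)}\right).
\end{eqnarray*}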
Next, we bound the second term in \eqref{eq:js1}. We will use the notation $$A(\tau) = \left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))} \,\&\, \sqrt{f(Q_{\max}(\tau))}\geq f(Q_i(0))\right\}, $$ $$ B(\tau) = \left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))} \,\&\, \sqrt{f(Q_{\max}(\tau))} < f(Q_i(0))\right\}.$$ Then, \begin{eqnarray} && \mathbb{E}\left[\left|\sqrt{f(Q_{\max}(\tau))}-f(Q_i(0))\right| \cdot \bold{I}_{\left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))}\right\}}\right]\nonumber\\ & & \quad =\mathbb{E}\left[\left(\sqrt{f(Q_{\max}(\tau))}-f(Q_i(0))\right) \cdot \bold{I}_{A(\tau)}\right] \nonumber \\ & & \quad\qquad + \quad \mathbb{E}\left[\left(f(Q_i(0))-\sqrt{f(Q_{\max}(\tau))}\right) \cdot \bold{I}_{B(\tau)}\right] \nonumber \\ & & \quad \stackrel{(a)}{\leq} \mathbb{E}\left[\left(\sqrt{f(Q_{\max}(\tau))}-\sqrt{f(Q_{\max}(0))}\right) \cdot \bold{I}_{A(\tau)}\right] \nonumber\\ & & \quad\qquad + \quad \mathbb{E}\left[\left(f(Q_i(0))-f(Q_i(\tau))\right) \cdot \bold{I}_{B(\tau)}\right]\nonumber\\ & & \quad \stackrel{(b)}{\leq} \mathbb{E}\left[|f(Q_{\max}(\tau))-f(Q_{\max}(0))|\right] +\mathbb{E}\left[|f(Q_i(0))-f(Q_i(\tau))|\right]\nonumber\\ & & \quad = O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right). \label{eq:js3} \end{eqnarray} In the above, (a) follows because we are considering case (i), in which $f(Q_i(0)) \geq \sqrt{f(Q_{\max}(0))}$, together with the definition of the event $B(\tau)$; (b) follows from the $1$-Lipschitz property of the $\sqrt{\cdot}$ function and appropriate removal of the indicator random variables. For the final conclusion, we observe that the arguments used to establish \eqref{eq:js2} imply the $O(1/{\sf superpolylog}(Q_{\max}(0)))$ bound on both terms in a very similar manner: for the term corresponding to $|f(Q_{\max}(\tau))-f(Q_{\max}(0))|$, one has to adapt the arguments of \eqref{eq:js2} by essentially replacing the index $i$ by $\max$. This concludes the proof of \eqref{eq:boundet2} for case (i), i.e., $f(Q_i(0)) \geq \sqrt{f(Q_{\max}(0))}$. \vspace{.1in} \noindent{\em Case (ii):} Now consider $i$ such that $f(Q_i(0)) < \sqrt{f(Q_{\max}(0))}$. Then, \begin{eqnarray} & & \mathbb{E}\left[\left|W_i(\tau)-W_i(0)\right|\right] ~ = \mathbb{E}\left[\left|W_i(\tau)-\sqrt{f(Q_{\max}(0))}\right|\right]\nonumber\\ & & ~ =\mathbb{E}\left[\left|f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right|\cdot \bold{I}_{\left\{f(Q_i(\tau))\geq \sqrt{f(Q_{\max}(\tau))}\right\}}\right]\nonumber\\ && \qquad + ~\mathbb{E}\left[\left|\sqrt{f(Q_{\max}(\tau))}-\sqrt{f(Q_{\max}(0))}\right| \cdot \bold{I}_{\left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))}\right\}}\right].\label{eq:js1a} \end{eqnarray} First observe that, by the $1$-Lipschitz property of the $\sqrt{\cdot}$ function, the second term can be bounded (similarly to \eqref{eq:js3}) as \begin{eqnarray} & & \mathbb{E}\left[\left|\sqrt{f(Q_{\max}(\tau))}-\sqrt{f(Q_{\max}(0))}\right| \cdot \bold{I}_{\left\{f(Q_i(\tau))< \sqrt{f(Q_{\max}(\tau))}\right\}}\right] \nonumber \\ & & \quad \leq \mathbb{E}\left[\left|{f(Q_{\max}(\tau))}-{f(Q_{\max}(0))}\right|\right] \nonumber \\ & & \quad = O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right). \end{eqnarray} Therefore, we are left with bounding the first term of \eqref{eq:js1a}. We will follow a similar line of argument to that used for \eqref{eq:js3}.
Define $$A'(\tau) = \left\{f(Q_i(\tau)) \geq \sqrt{f(Q_{\max}(\tau))} \,\&\, \sqrt{f(Q_{\max}(0))}\geq f(Q_i(\tau))\right\}, $$ $$ B'(\tau) = \left\{f(Q_i(\tau)) \geq \sqrt{f(Q_{\max}(\tau))} \,\&\, \sqrt{f(Q_{\max}(0))} < f(Q_i(\tau))\right\}.$$ Then, \begin{eqnarray} && \mathbb{E}\left[\left|f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right| \cdot \bold{I}_{\left\{f(Q_i(\tau))\geq \sqrt{f(Q_{\max}(\tau))}\right\}}\right]\nonumber\\ & & \quad =\mathbb{E}\left[\left(\sqrt{f(Q_{\max}(0))}-f(Q_i(\tau))\right) \cdot \bold{I}_{A'(\tau)}\right] \nonumber \\ & & \quad\qquad + \quad \mathbb{E}\left[\left(f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right) \cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad \stackrel{(a)}{\leq} \mathbb{E}\left[\left(\sqrt{f(Q_{\max}(0))}-\sqrt{f(Q_{\max}(\tau))}\right) \cdot \bold{I}_{A'(\tau)}\right] \nonumber\\ & & \quad\qquad + \quad \mathbb{E}\left[\left(f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right) \cdot \bold{I}_{B'(\tau)}\right]\nonumber\\ & & \quad \stackrel{(b)}{\leq} O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right) \nonumber \\ & & \qquad \qquad + \quad \mathbb{E}\left[\left(f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right) \cdot \bold{I}_{B'(\tau)}\right]. \nonumber \\ \label{eq:js3a} \end{eqnarray} In the above, (a) follows because on the events considered we have $f(Q_i(\tau)) \geq \sqrt{f(Q_{\max}(\tau))}$, together with the definition of the event $B'(\tau)$; (b) follows from the $1$-Lipschitz property of the $\sqrt{\cdot}$ function and appropriate removal of the indicator random variables, as follows: \begin{eqnarray} & & \mathbb{E}\left[\left(\sqrt{f(Q_{\max}(0))}-\sqrt{f(Q_{\max}(\tau))}\right)\cdot \bold{I}_{A'(\tau)}\right] \nonumber \\ & & \qquad \leq \mathbb{E}\left[|f(Q_{\max}(\tau))-f(Q_{\max}(0))|\right] \nonumber \\ & & \qquad = O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right). \end{eqnarray} Finally, to complete the proof of case (ii) using \eqref{eq:js1a}, we wish to establish \begin{eqnarray} \mathbb{E}\left[\left(f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right) \cdot \bold{I}_{B'(\tau)}\right] & = & O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right).\nonumber \\ \label{eq:js3b} \end{eqnarray} Now let $x \in \mathbb{R}_+$ be such that $f(x) = \sqrt{f(Q_{\max}(0))}$.
Then, \begin{eqnarray} & & \mathbb{E}\left[\left(f(Q_i(\tau))-\sqrt{f(Q_{\max}(0))}\right)\cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad = \mathbb{E}\left[\left(f(Q_i(\tau))- f(x)\right)\cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad \stackrel{(a)}{\leq} \mathbb{E}\left[f'(x) (Q_i(\tau) - x) \cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad = f'(x)~ \mathbb{E}\left[ (Q_i(\tau) - x) \cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad \stackrel{(b)}{\leq} f'(x)~ \mathbb{E}\left[ (Q_i(\tau) - Q_i(0)) \cdot \bold{I}_{B'(\tau)}\right] \nonumber \\ & & \quad \leq f'(x)~ \mathbb{E}\left[ |Q_i(\tau) - Q_i(0)|\right] \nonumber \\ & & \quad \stackrel{(c)}{=} f'(x)~ O\left({\tau}\right) \nonumber \\ & & \quad \stackrel{(d)}{=} O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right).\label{eq:js3c} \end{eqnarray} In the above, (a) follows from the concavity of $f$; (b) from $Q_i(0) \leq x$ and $Q_i(\tau) \geq x$, implied by case (ii) and $B'(\tau)$ respectively; (c) follows from the argument used earlier that, for any $i$, $\mathbb{E}[(Q_i(\tau)-Q_i(0))^2] = O(\tau^2)$; and (d) follows from $\tau \leq b_2 = {\sf polylog}\left(Q_{\max}(0)\right)$ and $$f'(x) = O\left(\frac1{{\sf superpolylog}\left(Q_{\max}(0)\right)}\right).$$ This completes the proof of \eqref{eq:boundet2} for both cases, and hence the proof of Lemma \ref{lem:adiabetic1} for integral time steps. A final remark on the validity of this result for non-integral times is in order. Consider $t \in I$ with $t \notin \mathbb{Z}_+$. Let $\tau = \lfloor t \rfloor$ and $t = \tau + \delta$ for $\delta \in (0,1)$. Then it follows that (using the formal definition of $P^\delta$ as in \eqref{eq:fz1-1}) \begin{eqnarray} \mu(t)~=~ \mu(\tau+\delta) & = & \mu(\tau) P^{\delta}(0) + \mathbb{E}\left[\tilde{\mu}(\tau) (P^\delta(\tau)-P^\delta(0))\right] \nonumber \\ & = & \mu(0)P(0)^\tau P^\delta(0) + e(\tau+\delta). \label{eq:fz5} \end{eqnarray} Now it can be checked that $P^\delta(0)$ is a probability matrix and has $\pi(0)$ as its stationary distribution for any $\delta > 0$; and we have argued that for $\tau$ large enough, $\mu(0)P(0)^\tau$ is close to $\pi(0)$. Therefore, $\mu(0)P(0)^\tau P^\delta(0)$ is equally close to $\pi(0)$. For $e(\tau+\delta)$, it can easily be argued that the bound obtained in \eqref{eq:boundet} for $e(\tau+1)$ dominates the bound for $e(\tau+\delta)$. Therefore, the statement of the Lemma holds for any non-integral $t$ as well. This completes the proof of Lemma \ref{lem:adiabetic1}. \vspace{.1in} \subsubsection{Step Three: Wireless Network}\label{sssec:step3wireless} In this section, we prove Lemma \ref{lem:two} for the wireless network model. For the Markov process $X(t) = (\mathbf{Q}(t), \boldsymbol{\sigma}(t))$, we consider the Lyapunov function $$L(X(t)) = \sum_{i} F(Q_i(t)), $$ where $F(x) = \int_{0}^x f(y)~ dy$ and, recall, $f(x) = \log \log (x+e)$. For this Lyapunov function, it suffices to find appropriate functions $h$ and $g$ as per Lemma \ref{lem:two} for a large enough $Q_{\max}(0)$. Therefore, we assume that $Q_{\max}(0)$ is large enough so that it satisfies the conditions of Lemma \ref{lem:adiabetic1}.
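As an aside, the benchmark quantity $\max_{\boldsymbol{\rho}\in\mc{I}(G)} f(\mathbf{Q})\cdot\boldsymbol{\rho}$, which appears repeatedly in this step, can be computed by brute force on toy instances. The following minimal sketch (with an arbitrary interference graph and queue values) is included only to make the quantity concrete:
\begin{verbatim}
import itertools
import numpy as np

def max_weight(Q, edges):
    """Brute-force max_{rho in I(G)} f(Q).rho over all independent sets."""
    n = len(Q)
    w = np.log(np.log(np.asarray(Q, dtype=float) + np.e))  # f(Q_i)
    best_val, best_rho = 0.0, (0,) * n
    for rho in itertools.product([0, 1], repeat=n):
        if any(rho[i] and rho[j] for (i, j) in edges):
            continue  # rho is not an independent set
        val = float(w @ np.array(rho))
        if val > best_val:
            best_val, best_rho = val, rho
    return best_val, best_rho

# Example: a 4-cycle interference graph; nodes 1 and 3 have long queues.
print(max_weight([10, 1000, 10, 1000], [(0, 1), (1, 2), (2, 3), (3, 0)]))
\end{verbatim}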
To this end, from Lemma \ref{lem:adiabetic1}, we have that for $t \in I$, \begin{eqnarray*} \abs{\mathbb{E}_{\pi(0)}[f(\mathbf{Q}(0))\cdot\boldsymbol{\sigma}] - \mathbb{E}_{{\mu}(t)}[f(\mathbf{Q}(0))\cdot\boldsymbol{\sigma}]} &\leq& \frac{\varepsilon}4 \left(\max_{\boldsymbol{\rho}\in \mc{I}(G)}{f(\mathbf{Q}(0))\cdot\boldsymbol{\rho}}\right). \end{eqnarray*} Thus from Lemma \ref{LEM:GOODPI}, it follows that \begin{equation} \mathbb{E}_{{\mu}(t)}[f(\mathbf{Q}(0))\cdot\boldsymbol{\sigma}] ~\ge~ \left(1-\frac{\varepsilon}2\right)\left(\max_{\boldsymbol{\rho} \in \mc{I}(G)} f(\mathbf{Q}(0))\cdot\boldsymbol{\rho} \right) - O(1).\label{eq:L47} \end{equation} Now we can bound the difference between $L(X(\tau+1))$ and $L(X(\tau))$ as follows: \begin{eqnarray} & & L(X(\tau+1)) - L(X(\tau)) = (F(\mathbf{Q}(\tau+1)) - F(\mathbf{Q}(\tau))) \cdot \mathbf{1} \nonumber\\ & & \quad \leq~ f(\mathbf{Q}(\tau+1)) \cdot (\mathbf{Q}(\tau+1)-\mathbf{Q}(\tau)) \nonumber\\ & & \quad {\leq}~ f(\mathbf{Q}(\tau)) \cdot (\mathbf{Q}(\tau+1)-\mathbf{Q}(\tau))+n,\nonumber \end{eqnarray} where the first inequality follows from the convexity of $F$ and the last inequality follows from the fact that $f(Q)$ is $1$-Lipschitz. Therefore, \begin{eqnarray} & & L(X(\tau+1)) - L(X(\tau)) = (F(\mathbf{Q}(\tau+1)) - F(\mathbf{Q}(\tau))) \cdot \mathbf{1} \nonumber\\ & & \quad \leq f(\mathbf{Q}(\tau)) \cdot \left(A(\tau, \tau+1) - \int^{\tau+1}_\tau \boldsymbol{\sigma}(y)\mathbf{1}_{\{Q_i(y) > 0\}}\, dy\right)+n \nonumber\\ & & \quad \stackrel{(a)}{\leq} f(\mathbf{Q}(\tau)) \cdot A(\tau, \tau+1)-\int^{\tau+1}_\tau f(\mathbf{Q}(y)) \cdot \boldsymbol{\sigma}(y) \mathbf{1}_{\{Q_i(y) > 0\}} \,dy+2n\nonumber\\ & & \quad = f(\mathbf{Q}(\tau)) \cdot A(\tau, \tau+1)-\int^{\tau+1}_\tau f(\mathbf{Q}(y)) \cdot \boldsymbol{\sigma}(y) \,dy+2n,\label{eq:L7} \end{eqnarray} where again (a) follows from the fact that $f(Q)$ is $1$-Lipschitz. Given initial state $X(0) = \mb{x}$, taking the expectation of \eqref{eq:L7} for $\tau, \tau +1 \in I$, \begin{eqnarray*} & & \mathbb{E}_\mb{x}[L(X(\tau+1)) - L(X(\tau))] \nonumber \\ & & \qquad \leq \mathbb{E}_\mb{x}[f(\mathbf{Q}(\tau)) \cdot A(\tau, \tau+1)] -\int^{\tau+1}_\tau \mathbb{E}_\mb{x}[f(\mathbf{Q}(y)) \cdot \boldsymbol{\sigma}(y)] \,dy ~+~ 2n \\ & & \qquad = \mathbb{E}_\mb{x}[f(\mathbf{Q}(\tau)) \cdot \boldsymbol{\lambda}] -\int^{\tau+1}_\tau \mathbb{E}_{\mb{x}}[f(\mathbf{Q}(y)) \cdot \boldsymbol{\sigma}(y)] \,dy ~+ ~2n, \end{eqnarray*} where the last equality follows from the independence between $\mathbf{Q}(\tau)$ and $A(\tau,\tau+1)$ (recall the Bernoulli arrival process). Therefore, \begin{eqnarray*} && \mathbb{E}_\mb{x}[L(X(\tau+1))- L(X(\tau))] \nonumber \\ & \leq& \mathbb{E}_\mb{x}[f(\mathbf{Q}(\tau)) \cdot \boldsymbol{\lambda}] -\int^{\tau+1}_\tau \mathbb{E}_{\mb{x}}[f(\mathbf{Q}(0)) \cdot \boldsymbol{\sigma}(y)] \,dy \\ & &\qquad\qquad -\int^{\tau+1}_\tau \mathbb{E}_{\mb{x}}[\left(f(\mathbf{Q}(y))-f(\mathbf{Q}(0))\right) \cdot \boldsymbol{\sigma}(y)] \,dy+2n\\ &\stackrel{(a)}{\leq}& f(\mathbf{Q}(0)+\tau\cdot \mathbf{1}) \cdot \boldsymbol{\lambda} -\int^{\tau+1}_\tau \mathbb{E}_{\mb{x}}[f(\mathbf{Q}(0)) \cdot \boldsymbol{\sigma}(y)] \,dy\\ & &\qquad-\int^{\tau+1}_\tau \left(f(\mathbf{Q}(0)-y\cdot \mathbf{1})-f(\mathbf{Q}(0))\right) \cdot \mathbf{1} \,dy+ 2n \\ &\stackrel{(b)}{\leq}& f(\mathbf{Q}(0)) \cdot \boldsymbol{\lambda} +f(\tau \cdot \mathbf{1}) \cdot \boldsymbol{\lambda} -\left(1-\frac{\varepsilon}2\right)\left(\max_{\boldsymbol{\rho}\in\mc{I}(G)}f(\mathbf{Q}(0)) \cdot \boldsymbol{\rho}\right) \\ && \qquad+\int^{\tau+1}_\tau f(y\cdot \mathbf{1}) \cdot \mathbf{1} \,dy+O(1)\\ &{\leq}& f(\mathbf{Q}(0)) \cdot \boldsymbol{\lambda} -\left(1-\frac{\varepsilon}2\right)\left(\max_{\boldsymbol{\rho}\in\mc{I}(G)}f(\mathbf{Q}(0)) \cdot \boldsymbol{\rho}\right)+ 2n f(\tau+1)+O(1).
\end{eqnarray*} In the above, (a) uses the Lipschitz property of $\mathbf{Q}(\cdot)$ (as a function of $\tau$); (b) follows from \eqref{eq:L47} and the inequality that, for $f(x) = \log\log (x+e)$, $f(x)+f(y)+ \log 2 \geq f(x+y)$ for all $x,y\in \mathbb{R}_+$. The $O(1)$ term is a constant, dependent on $n$, that captures the constant from \eqref{eq:L47}. \noindent Now since $\boldsymbol{\lambda} \in (1-\varepsilon)\Conv(\mc{I}(G))$, we obtain \begin{eqnarray*} & & \mathbb{E}_\mb{x}[L(X(\tau+1))- L(X(\tau))] \\ & & \qquad \leq -\frac{\varepsilon}2\left(\max_{\boldsymbol{\rho}\in\mc{I}(G)}f(\mathbf{Q}(0)) \cdot \boldsymbol{\rho}\right)+2n\,f(\tau+1)+O(1)\\ & & \qquad \leq -\frac{\varepsilon}2f(Q_{\max}(0))+2n\,f(\tau+1)+O(1). \end{eqnarray*} Therefore, summing over $\tau$ from $b_1 = b_1(Q_{\max}(0))$ to $b_2= b_2(Q_{\max}(0))$, we have \begin{eqnarray} & & \mathbb{E}_\mb{x}\left[L(X(b_2)) - L(X(b_1))\right ] \nonumber \\ & & \qquad \le - \frac{\varepsilon}{2}(b_2-b_1)f(Q_{\max}(0))+2n\sum_{\tau=b_1}^{b_2-1}f(\tau+1) +O(b_2-b_1)\nonumber\\ & & \qquad \le - \frac{\varepsilon}{2}(b_2-b_1)f(Q_{\max}(0))+2n(b_2-b_1)f(b_2) +O(b_2-b_1). \end{eqnarray} Thus, we obtain \begin{eqnarray} && \mathbb{E}_\mb{x}\left[L(X(b_2)) - L(X(0))\right ]\nonumber\\ &=& \mathbb{E}_\mb{x}\left[L(X(b_1)) - L(X(0))\right ]+ \mathbb{E}_\mb{x}\left[L(X(b_2)) - L(X(b_1))\right ]\nonumber\\ &\stackrel{(a)}{\le}& \mathbb{E}_\mb{x}\left[f(\mathbf{Q}(b_1))\cdot (\mathbf{Q}(b_1)-\mathbf{Q}(0))\right] -\frac{\varepsilon}{2}(b_2-b_1)f(Q_{\max}(0)) \nonumber\\ & & \qquad \qquad +2n\sum_{\tau=b_1}^{b_2-1}f(\tau+1) +O(b_2-b_1)\nonumber\\ &\stackrel{(b)}{\le}& nb_1\,f(Q_{\max}(0)+b_1) - \frac{\varepsilon}{2}(b_2-b_1)f(Q_{\max}(0))\nonumber\\ & & \qquad \qquad +2n(b_2-b_1)f(b_2) +O(b_2-b_1),\label{eq:corenegative} \end{eqnarray} where (a) follows from the convexity of $L$ and (b) is due to the $1$-Lipschitz property of $\mathbf{Q}$. Now if we choose $g(\mb{x}) = b_2$ and $$h(\mb{x})=-n b_1\,f(Q_{\max}(0)+b_1)+\frac{\varepsilon}{2}(b_2-b_1)f(Q_{\max}(0))-2n(b_2-b_1)f(b_2) -O(b_2-b_1),$$ the desired inequality follows: \begin{eqnarray*} \mathbb{E}_{\mb{x}}\left[L(X(g(\mb{x}))) - L(X(0))\right] & \leq & - h(\mb{x}). \end{eqnarray*} The desired conditions of Lemma \ref{lem:two} can be checked as follows. First observe that, with respect to $Q_{\max}(0)$, the function $h$ scales as $b_2(Q_{\max}(0))\, f (Q_{\max}(0))$, due to $b_2/b_1=\Theta\left(\log Q_{\max}(0)\right)$ as per Lemma \ref{lem:adiabetic1}. Further, $h$ is lower bounded and its value goes to $\infty$ as $Q_{\max}(0)$ goes to $\infty$. Therefore, $h/g$ scales as $f (Q_{\max}(0))$. These properties imply the verification conditions of Lemma \ref{lem:two}. \vspace{.1in} \subsubsection{Step Three: Buffered Circuit Switched Network}\label{sssec:step3switch} In this section, we prove Lemma \ref{lem:two} for the buffered circuit switched network model. As for the wireless network, we are interested in a large enough $Q_{\max}(0)$ satisfying the conditions of Lemma \ref{lem:adiabetic1}. Given the state $X(t) = (\mathbf{Q}(t), \boldsymbol{z}(t))$ of the Markov process, we shall consider the following Lyapunov function: $$ L(X(t)) = \sum_i F(R_i(t)).$$ Here $\mathbf{R}(t) = [R_i(t)]$ with $R_i(t) = Q_i(t)+ z_i(t)$ and, as before, $F(x) = \int_{0}^x f(y) ~dy$.
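As a small aside, $F$ has no simple closed form, but the Lyapunov function is straightforward to evaluate numerically. The following minimal sketch (with arbitrary queue and active-service values, purely for illustration) makes the definition concrete:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f = lambda y: np.log(np.log(y + np.e))

def F(x):
    """F(x) = int_0^x f(y) dy, evaluated numerically."""
    return quad(f, 0.0, x)[0]

def lyapunov(Q, z):
    """L(X) = sum_i F(R_i) with R_i = Q_i + z_i."""
    R = np.asarray(Q, dtype=float) + np.asarray(z, dtype=float)
    return sum(F(r) for r in R)

print(lyapunov([5.0, 100.0, 0.0], [1.0, 2.0, 0.0]))
\end{verbatim}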
Now we proceed towards finding appropriate functions $h$ and $g$ as desired in Lemma \ref{lem:two}. For any $\tau \in \mathbb{Z}_+$, \begin{eqnarray} & & L(X(\tau+1)) - L(X(\tau)) \nonumber \\ & & \quad = \left(F(\mathbf{R}(\tau+1)) - F(\mathbf{R}(\tau))\right) \cdot \mathbf{1} \nonumber\\ & & \quad \leq f(\mathbf{R}(\tau+1)) \cdot (\mathbf{R}(\tau+1)-\mathbf{R}(\tau)) \nonumber\\ & & \quad= f(\mathbf{R}(\tau)+A(\tau, \tau+1) - D(\tau, \tau+1)) \cdot \left(A(\tau, \tau+1) - D(\tau, \tau+1)\right) \nonumber\\ & & \quad \leq f(\mathbf{R}(\tau)) \cdot \left(A(\tau, \tau+1) - D(\tau, \tau+1)\right)+\|A(\tau, \tau+1) - D(\tau, \tau+1)\|_2^2, \nonumber \end{eqnarray} where the first inequality uses the convexity of $F$ and the last uses the $1$-Lipschitz property of $f$. Given initial state $X(0) = \mb{x}$, taking expectations for $\tau, \tau+1 \in I$, we have \begin{eqnarray}\label{eq:ad3} && \mathbb{E}_{\mb{x}}[L(X(\tau+1)) - L(X(\tau))]\nonumber\\ & & \quad \leq \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau)) \cdot A(\tau, \tau+1)\right]- \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau)) \cdot D(\tau, \tau+1)\right] \nonumber \\ & & \qquad \qquad +\mathbb{E}_{\mb{x}}\left[\|A(\tau, \tau+1)-D(\tau, \tau+1)\|_2^2\right] \nonumber\\ & & \quad {=} \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau))\cdot\boldsymbol{\lambda}\right]- \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau)) \cdot D(\tau, \tau+1)\right]+O(1). \end{eqnarray} The last equality follows from the fact that the arrival process is Poisson with rate vector $\boldsymbol{\lambda}$ and $\mathbf{R}(\tau)$ is independent of $A(\tau, \tau+1)$. In addition, the overall departure process for any $i$, $D_i(\cdot)$, is governed by a Poisson process of rate at most $C_{\max}$; therefore, the second moment of the difference of the arrival and departure processes in unit time is $O(1)$. Now, \begin{eqnarray}\label{eq:ad3-1} \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau))\cdot\boldsymbol{\lambda}\right] &=& f(\mathbf{R}(0))\cdot\boldsymbol{\lambda} +\mathbb{E}_{\mb{x}}\left[(f(\mathbf{R}(\tau))-f(\mathbf{R}(0)))\cdot\boldsymbol{\lambda}\right]. \end{eqnarray} And, \begin{eqnarray}\label{eq:ad3-2} & & \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(\tau)) \cdot D(\tau, \tau+1)\right] \nonumber \\ & & \quad = \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(0)) \cdot D(\tau, \tau+1)\right] + \mathbb{E}_{\mb{x}}\left[(f(\mathbf{R}(\tau))-f(\mathbf{R}(0))) \cdot D(\tau,\tau+1) \right]. \end{eqnarray} The first term on the right hand side of \eqref{eq:ad3-1} can be bounded as \begin{eqnarray} f(\mathbf{R}(0))\cdot\boldsymbol{\lambda} &\leq&(1-\varepsilon)\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right)\nonumber\\ &\leq&-\frac{3\varepsilon}4\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right) +\mathbb{E}_{\pi(0)}\left[f(\mathbf{R}(0)) \cdot \boldsymbol{z}\right]+O(1),\label{eq:ad4} \end{eqnarray} where the first inequality is due to $\boldsymbol{\lambda}\in(1-\varepsilon)\Conv(\X)$ and the second inequality follows from Lemma \ref{LEM:GOODPI} together with the fact that $|f(R_i(\tau))-f(Q_i(\tau))|<f(C_{\max})=O(1)$ for all $i$. On the other hand, the first term on the right hand side of \eqref{eq:ad3-2} can be bounded from below as \begin{eqnarray}\label{eq:ad5} \mathbb{E}_{\mb{x}}\left[f(\mathbf{R}(0)) \cdot D(\tau, \tau+1)\right] &=&f(\mathbf{R}(0)) \cdot\mathbb{E}_{\mb{x}}\left[D(\tau, \tau+1)\right]\nonumber\\ &\geq& f(\mathbf{R}(0)) \cdot\int^{\tau+1}_{\tau}\mathbb{E}_{\mb{x}}\left[\boldsymbol{z}(s)\right]~ds\nonumber\\ &=&\int^{\tau+1}_{\tau}\mathbb{E}_{\mu(s)}\left[f(\mathbf{R}(0)) \cdot \boldsymbol{z}\right]~ds.
\end{eqnarray} In the above, we have used the fact that $D_i(\cdot)$ is a Poisson process with rate given by $z_i(\cdot)$. Further, the second terms on the right hand sides of \eqref{eq:ad3-1} and \eqref{eq:ad3-2} can be bounded as follows: \begin{eqnarray}\label{eq:ad5-1} \mathbb{E}_{\mb{x}}\left[\|f(\mathbf{R}(\tau))-f(\mathbf{R}(0))\|_1 \right] &\leq&\mathbb{E}_{\mb{x}}\left[f\left(|\mathbf{R}(\tau)-\mathbf{R}(0)|\right)\cdot\mathbf{1} \right] + O(1) \nonumber\\ &\leq&f\left(\mathbb{E}_{\mb{x}}\left[|\mathbf{R}(\tau)-\mathbf{R}(0)|\right]\right)\cdot\mathbf{1} +O(1)\nonumber\\ &\leq& n f(C_{\max}\tau) + O(1)\nonumber \\ & = & O(f(\tau)). \end{eqnarray} The first inequality follows from $f(x+y) \leq f(x) + f(y) + 2$ for any $x, y \in \mathbb{R}_+$; this is because $\log (x+y+e) \leq \log (x+e) + \log (y+e)$ for any $x, y \in \mathbb{R}_+$, $\log (a+b) \leq 2 + \log a + \log b$ for any $a, b \geq 1$, and $f(x) = \log \log (x+e)$. The second inequality follows by applying Jensen's inequality to the concave function $f$. Combining \eqref{eq:ad3}-\eqref{eq:ad5-1}, we obtain \begin{eqnarray*} &&\mathbb{E}_{\mb{x}}[L(X(\tau+1)) - L(X(\tau))]\\ &\leq& -\frac{3\varepsilon}4\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right) +\mathbb{E}_{\pi(0)}\left[f(\mathbf{R}(0)) \cdot \boldsymbol{z}\right] \\ & & \qquad -\int^{\tau+1}_{\tau}\mathbb{E}_{\mu(s)}\left[f(\mathbf{R}(0)) \cdot \boldsymbol{z}\right]~ds+O(f(\tau))\\ &\leq&-\frac{3\varepsilon}4\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right) \nonumber \\ & & \qquad + ~ \int^{\tau+1}_{\tau}\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right) \|\mu(s)-\pi(0)\|_{TV}~ds+O(f(\tau))\\ &\stackrel{(a)}{\leq}&-\frac{\varepsilon}2\left(\max_{\boldsymbol{y}\in\X}f(\mathbf{R}(0))\cdot\boldsymbol{y}\right)+ O(f(\tau))\\ &\leq&-\frac{\varepsilon}2f(Q_{\max}(0))+ O(f(\tau)), \end{eqnarray*} where (a) follows from Lemma \ref{lem:adiabetic1}. Summing this for $\tau\in I=[b_1,b_2-1]$, \begin{equation}\label{eq:ad8} \mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))] ~\leq~-\frac{\varepsilon}{2}f(Q_{\max}(0))(b_2-b_1)+O((b_2-b_1)f(b_2)). \end{equation} Therefore, we have \begin{eqnarray*} &&\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(0))]\\ &=&\mathbb{E}_{\mb{x}}[L(X(b_1)) - L(X(0))]+\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))]\\ &\stackrel{(a)}{\leq}&\mathbb{E}_{\mb{x}}[f(\mathbf{R}(b_1)) \cdot (\mathbf{R}(b_1)-\mathbf{R}(0))]+\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))]\\ &=&\sum_i\mathbb{E}_{\mb{x}}[f(R_i(b_1)) \cdot (R_i(b_1)-R_i(0))]+\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))]\\ &\stackrel{(b)}{\leq}&\sum_i\sqrt{\mathbb{E}_{\mb{x}}[f(R_i(b_1))^2]} \sqrt{\mathbb{E}_{\mb{x}}[(R_i(b_1)-R_i(0))^2]}+\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))]\\ &\stackrel{(c)}{\leq}&\sum_i\sqrt{f(\mathbb{E}_{\mb{x}}[R_i(b_1)])^2 + O(1)} \cdot O(b_1)+\mathbb{E}_{\mb{x}}[L(X(b_2)) - L(X(b_1))]\\ &\stackrel{(d)}{=}&n\,f(Q_{\max}(0)+O(b_1))\cdot O(b_1)-\frac{\varepsilon}{2}f(Q_{\max}(0))(b_2-b_1) \\ & & \qquad + ~O((b_2-b_1)f(b_2))\\ &\stackrel{\triangle}{=}&-h(\mb{x}). \end{eqnarray*} Here (a) follows from the convexity of $L$; (b) from the Cauchy-Schwarz inequality; (c) is due to the bounded second moment $\mathbb{E}_{\mb{x}}[(R_i(b_1)-R_i(0))^2]=O(b_1^2)$, as argued earlier in the proof, together with the observation that there exists a concave function $\tilde{g}$ such that $f^2 = \tilde{g} + O(1)$ over $\mathbb{R}_+$, so that Jensen's inequality can be applied; and (d) follows from \eqref{eq:ad8}. Finally, choose $g(\mb{x}) = b_2$. With these choices of $h$ and $g$, the desired conditions of Lemma \ref{lem:two} can be checked as follows.
First observe that, with respect to $Q_{\max}(0)$, the function $h$ scales as $b_2(Q_{\max}(0))\, f (Q_{\max}(0))$, due to $b_2/b_1=\Theta\left(\log Q_{\max}(0)\right)$ as per Lemma \ref{lem:adiabetic1}. Further, $h$ is lower bounded and its value goes to $\infty$ as $Q_{\max}(0)$ goes to $\infty$. Therefore, $h/g$ scales as $f (Q_{\max}(0))$. These properties imply the verification conditions of Lemma \ref{lem:two}. \subsubsection{Step Four} To complete the proof of the positive Harris recurrence of both algorithms, it only remains to show that for $\kappa>0$, the set $B_\kappa = \{ \mb{x} \in {\sf X} : L(\mb{x}) \leq \kappa \}$ is a closed petite set; the other conditions of Lemma \ref{lem:one} follow from Lemma \ref{lem:two}, and Step Three exhibited the choice of the Lyapunov function $L$ and the desired `drift' functions $h, g$. To this end, first note that $B_{\kappa}$ is closed by definition. To establish that it is a petite set, we need to find a non-trivial measure $\mu$ on $({\sf X}, \mc{B}_{\sf X})$ and a sampling distribution $a$ on $\mathbb{Z}_+$ so that for any $\mb{x} \in B_\kappa$, $$ K_a(\mb{x}, \cdot) \geq \mu(\cdot).$$ To construct such a measure $\mu$, we shall use the following lemma. \begin{lemma}\label{lem:reachzero} Let the network Markov chain $X(\cdot)$ start with the state $\mb{x} \in B_\kappa$ at time $0$, i.e., $X(0)=\mb{x}$. Then there exist $T_\kappa \geq 1$ and $\gamma_\kappa > 0$ such that $$ \sum_{\tau=1}^{T_\kappa} {\Pr}_{\mb{x}}(X(\tau) = \mathbf{0}) \geq \gamma_\kappa, ~~\forall \mb{x} \in B_\kappa.$$ Here $\mathbf{0} = (\mathbf{0}, \mathbf{0}) \in {\sf X}$ denotes the state in which all components of $\mathbf{Q}$ are $0$ and the schedule is the empty independent set. \end{lemma} \begin{proof} We establish this for the wireless network; the proof for the circuit switched network is identical, and we skip the details. Consider any $\mb{x} \in B_\kappa$. Then by definition $L(\mb{x}) \leq \kappa$ for the given $\kappa > 0$, and hence, by the definition of $L(\cdot)$, it can easily be checked that each queue is bounded above by a finite constant $q_\kappa$ that depends only on $\kappa$. Consider some large enough (soon to be determined) $T_\kappa$. By the properties of the Bernoulli (or, for the circuit switched network, Poisson) arrival process, there is a positive probability $\theta^0_\kappa > 0$ of no arrivals happening to the system during a time interval of length $T_\kappa$. Assuming that no arrival happens, we will show that within a large enough time $t^1_\kappa$, with probability $\theta^1_\kappa > 0$, each queue receives at least $q_\kappa$ amount of service; and after that, within additional time $t^2$, with positive probability $\theta^2 > 0$, the empty-set schedule is reached. This will imply that, defining $T_\kappa \stackrel{\triangle}{=} t^1_\kappa + t^2$, the state $\mathbf{0} \in {\sf X}$ is reached with probability at least $$\gamma_\kappa \stackrel{\triangle}{=} \theta^0_\kappa \theta^1_\kappa \theta^2 > 0,$$ which immediately implies the desired result of Lemma \ref{lem:reachzero}. To this end, we need to establish the existence of $t^1_\kappa, \theta^1_\kappa$ and $t^2, \theta^2$ with the properties stated above. First, the existence of $t^1_\kappa, \theta_\kappa^1$: note that the Markov chain corresponding to the scheduling algorithm has time-varying transition probabilities and is irreducible over the space of all independent sets, $\mc{I}(G)$.
If there are no new arrivals and the initial $\mb{x} \in B_\kappa$, then clearly the queue-sizes are uniformly bounded by $q_\kappa$. Therefore, the transition probabilities of all feasible transitions of this time-varying Markov chain are uniformly lower bounded by a strictly positive constant (dependent on $\kappa$ and $n$). It can easily be checked that the transition-probability-induced graph on $\mc{I}(G)$ has diameter at most $2n$, and that the Markov chain makes transitions according to an exponential clock of overall rate $n$. Therefore, it follows that starting from any initial scheduling configuration, there exists a finite time $\hat{t}_\kappa$ such that a schedule is reached in which any given queue $i$ is scheduled for at least a unit amount of time, with probability at least $\hat{\theta}_\kappa > 0$. Here both $\hat{t}_\kappa$ and $\hat{\theta}_\kappa$ depend on $n$ and $\kappa$. Therefore, it follows that in time $t^1_\kappa \stackrel{\triangle}{=} q_\kappa n \hat{t}_\kappa$ all queues become empty with probability at least $\theta^1_\kappa \stackrel{\triangle}{=} \left(\hat{\theta}_\kappa\right)^{n q_\kappa}$. Next, to establish the existence of $t^2, \theta^2$ as desired, observe that once the system reaches empty queues, in the absence of new arrivals the empty schedule $\mathbf{0}$ is reached after some finite time $t^2$ with probability $\theta^2 > 0$, by similar properties of the Markov chain on $\mc{I}(G)$ when all queues are $0$. Here $t^2$ and $\theta^2$ depend on $n$ only. This completes the proof of Lemma \ref{lem:reachzero}. \end{proof} In what follows, Lemma \ref{lem:reachzero} is used to complete the proof that $B_\kappa$ is a closed petite set. To this end, consider Geometric($1/2$) as the sampling distribution $a$, i.e., $$ a(\ell) = 2^{-\ell}, ~~\ell \geq 1.$$ Let $\boldsymbol{\delta}_\mathbf{0}$ be the Dirac distribution on the element $\mathbf{0} \in {\sf X}$. Then, define $\mu$ as $$ \mu = 2^{-T_\kappa} \gamma_\kappa \boldsymbol{\delta}_\mathbf{0}, ~~\mbox{that is},~~\mu(\cdot) = 2^{-T_\kappa} \gamma_\kappa \boldsymbol{\delta}_\mathbf{0}(\cdot).$$ Clearly, $\mu$ is a non-trivial measure on $({\sf X}, \mc{B}_{\sf X})$. With these definitions of $a$ and $\mu$, Lemma \ref{lem:reachzero} immediately implies that for any $\mb{x} \in B_\kappa$, $$ K_a(\mb{x}, \cdot) \geq \mu(\cdot).$$ This establishes that the set $B_\kappa$ is a closed petite set. \section{Discussion}\label{sec:discuss} This paper introduced a new randomized scheduling algorithm for two constrained queueing network models: the wireless network and the buffered circuit switched network. The algorithm is simple, distributed, myopic and throughput optimal. The throughput optimality of the algorithm rests on two pillars: (1) the relation of the algorithm dynamics to Markovian dynamics over the space of schedules with a certain product-form stationary distribution, and (2) the choice of the slowly increasing weight function $\log\log(\cdot+e)$, which allows for an effective time scale separation between the algorithm dynamics and the queueing dynamics. We chose the wireless network and buffered circuit switched network models to explain the effectiveness of our algorithm because (a) they are of growing interest \cite{mesh,optical} and (b) they represent two different, general classes of network models: the synchronized packet network model and the asynchronous flow network model.
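Before turning to implementation issues, we sketch the weight computation itself in code. The exact definition is \eqref{eq:weight1} in the paper; the form below, $W_i = \max\{f(Q_i), \sqrt{f(Q_{\max})}\}$, is our reading of it from the two cases analysed in the proof of Lemma \ref{lem:adiabetic1}, and should be treated as an assumption of this sketch rather than a verbatim restatement:
\begin{verbatim}
import numpy as np

def f(x):
    return np.log(np.log(x + np.e))

def weights(Q):
    """W_i = max{ f(Q_i), sqrt(f(Q_max)) } -- assumed form of eq. (weight1)."""
    Q = np.asarray(Q, dtype=float)
    return np.maximum(f(Q), np.sqrt(f(Q.max())))

print(weights([0.0, 10.0, 1.0e6]))  # small queues inherit the global floor
\end{verbatim}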
Now we turn to discuss the distributed implementation of our algorithm. As described in Section \ref{ssec:algo1}, given the weight information at each wireless node (or at the ingress of each route), the algorithm is completely distributed. The weight, as defined in \eqref{eq:weight1} (or \eqref{eq:weight2}), depends on the local queue-size as well as on $Q_{\max}$. As such, $Q_{\max}$ is global information. To keep the exposition simpler, we have used the precise $Q_{\max}$ information to establish the throughput property. However, as remarked earlier in Section \ref{ssec:algo1} (soon after \eqref{eq:weight1}), $Q_{\max}$ can be replaced by an appropriate distributed estimate without altering the throughput optimality property. Such a distributed estimate can be obtained through an extremely simple Markovian-like algorithm that requires each node to broadcast exactly one number per unit time; a detailed description of such an algorithm can be found in Section 3.3 of \cite{RSS09}. On the other hand, consider the algorithm that does not use the $Q_{\max}$ information at all; that is, instead of \eqref{eq:weight1} or \eqref{eq:weight2}, let the weight be $$ W_i(t) = f(Q_i(\lfloor t\rfloor)).$$ We conjecture that this algorithm is also throughput optimal. \section*{Acknowledgements} We would like to acknowledge the support of the NSF projects CNS 0546590 and TF 0728554, and the DARPA ITMANET project. \bibliographystyle{plain}
\section{Introduction} Current technology (robotic and otherwise) falls well short of a human's ability to perceive the world using vision. A nearly limitless range of applications would be facilitated by successful embodied object recognition (i.e., the ability of a mobile platform to perform human-like visual scene and object understanding). We believe that with several key advances in the ability of a computer system to interpret visual imagery, namely robust object recognition for a large number of object classes and more capable scene understanding, future robot systems will substantially enhance the lives of their users. A robot introduced into a home environment will quickly be able to respond to commands such as ``Robot, fetch my shoes!'', assistive mobility devices will be able to determine whether a dangerous object is in the user's path, and navigation systems will aid travelers by identifying accidents and construction delays. In order to accelerate the progress of state-of-the-art research, many fields in science and engineering have employed standardized benchmarks or datasets to evaluate similar techniques and provide a means for their comparison. However, these measures can be detrimental when they do not reflect the reality or complexity of the problem in question. If the benchmark represents a severe simplification of reality, its use for the evaluation of techniques may lead to overconfidence in a system's accuracy and robustness. In addition, benchmarks may discourage research directions that are not aligned with success on them. Research competitions, while potentially possessing these same limitations, are more desirable than standard benchmarks in several respects. First, like standard benchmarks, they provide a context in which participants can evaluate their techniques under uniform conditions and make meaningful comparisons. Second, their periodic nature allows the competition to evolve with the state-of-the-art, and discourages techniques tailored to the specifics of a particular benchmark. Finally, they provide an exciting venue that brings together a community for collaboration and synthesis. However, this assumes that those engaged in state-of-the-art research participate actively in such competitions; otherwise, the events become merely a venue for displaying known techniques. Although there have been competitions focused upon a variety of robotic tasks, these have tended to minimize the contribution of vision. Conversely, in vision, particularly object recognition, the active acquisition of images for analysis is generally of secondary concern. Here, benchmark datasets and competitions are neither a representative sample of the real world, nor a sample of how a robot would see the world. By separating robotics and vision, the majority of cutting-edge object recognition research has focused only upon appearance-based approaches, ignoring scene cues that may prove beneficial for both accuracy and efficiency. We believe that in order to push state-of-the-art methods towards the challenging goals outlined earlier in this paper, a competition must bring these communities together by evaluating embodied object recognition systems in realistic environments, thus reducing over-simplifications and erroneous research directions. A recent competition featuring embodied object recognition is the Semantic Robot Vision Challenge (SRVC).
The overall task in this contest is similar to a photo scavenger hunt in an unknown indoor environment, with information on the objects typically acquired from the Internet. This setting brings together numerous sub-fields of AI, including vision, robotics, and natural language processing, along with Internet search technologies. Although this competition does involve embodied vision and can help stimulate robotics and vision research, it has yet to gain wide recognition in the research community or to significantly advance the state-of-the-art. Drawing upon our experience as a competitor in the SRVC for the past two years, we have identified issues in both robotic competitions and embodied recognition. We provide an outlook for the future of the SRVC that will allow it to increase its impact on the community. Our contribution in this respect is two-fold. Firstly, we discuss the value of the existing SRVC competition to research in embodied vision, and how it has pushed our own research in new directions. Secondly, we review possible modifications to improve the competition in terms of the research directions it encourages and the number of participants it attracts. \section{Robotics and Computer Vision Competitions} Competitions in robotics and computer vision that display state-of-the-art techniques are a relatively recent phenomenon. This is partially due to the fact that, historically, the state-of-the-art in either domain was not mature enough to handle compelling tasks. However, beginning with RoboCup, competitions have become a somewhat regular feature at both academic conferences and independent venues. It is worth taking a moment to consider some of the more successful competitions and the features that have made them relevant and viable. The premier example of a successful competition is RoboCup \cite{Kitano97}, pioneered by Alan Mackworth \cite{Mack93}, where robots compete against each other in a soccer-like setting. With an over-arching goal of having robots compete against humans by the mid-21st century, RoboCup has proved to be a valuable educational tool and testbed for many ideas in AI. One of the key features in its early success was that it offered a variety of leagues for participation. Robot and simulation leagues were offered, providing a venue for state-of-the-art research in robot control as well as techniques in planning and multi-agent systems. As a result, the competition has attracted a large number of participants, raising the profile of the attendant research and providing a valuable research experience. More recently, RoboCup@Home is a new RoboCup league which aims to develop service and assistive robots for real-world personal domestic applications. The intent of the league is to promote the development of robotic technologies that can assist humans in everyday life. The competition proposes a number of benchmark tasks in a home environment, where success is determined by the number of tasks which the entrant's robot completes. One of the benchmarks is a task to find a specified object in the environment. Although this contest does contain some aspects of embodied vision, it does not offer a sufficiently challenging task to attract vision researchers to a competition that is not held in conjunction with AI or vision conferences. It does, however, offer opportunities for teams to attempt a wide variety of tasks, each requiring expertise in different areas of research.
For example, while one task might require speech synthesis and aesthetic presentation, another might evaluate teams on safe navigation, tracking and human recognition. This setup gives teams the flexibility to attempt the specific tasks in which they have research expertise and to opt out of others. Another wildly successful competition in AI was the DARPA Grand Challenge \cite{gc2006}, offering one million dollars to the first team which could autonomously complete a 240 kilometer on- and off-road course. For the first Grand Challenge in 2004, the best competitor traveled just 11 kilometers before flipping over and catching on fire. In the following year, there were five vehicles which successfully completed the entire course. In 2007, just three and a half years after the first competition, six teams finished the Urban Challenge, which mixed robotic and non-robotic vehicles together in an urban setting and enforced California traffic laws. The Grand Challenge is the perfect illustration of a competition which pushed the state-of-the-art, particularly in systems engineering. Prior to the competition, it was widely believed that current technology was simply not up to this difficult task. This success was due in part to the fact that, from the start, it was well funded, attracted top-notch research institutions, received wide media attention, and provided a compelling task. Another successful competition has arisen in the computer vision community: the Pascal Visual Object Classes Challenge (VOC) (\url{http://pascallin.ecs.soton.ac.uk/challenges/VOC/}). This is an EU-funded competition which began in 2005 with the goals of providing a yearly competition for object class recognition and localization, and a set of standards and tools for evaluating algorithm performance. In contrast to standard benchmark datasets, entrants are evaluated on a novel dataset every year, which prevents algorithms from being tailored specifically to a single dataset. The key features of this competition that led to its success were that it involved high-profile researchers at the organizing level, was held in conjunction with major conferences, and was relatively inexpensive for participants. \section{SRVC and Our Experience} Although the previously mentioned competitions have been successful, they do not address many of the issues of embodied vision. The SRVC is an ideal competition to push the state-of-the-art in this field. This competition was held for the first time at the Association for the Advancement of Artificial Intelligence (AAAI) conference in 2007 in Vancouver, and again in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008 in Alaska. The competition is a visual search task in an unknown environment. The entrants are given a list of objects to find in the environment, with the environment containing only a subset of those listed objects along with additional distractor objects. Using this list, the robots autonomously acquire data about these objects from the Internet in a fixed amount of time. Once data collection and learning are complete, the robot searches the unknown environment with the task of finding the objects using the data acquired from the Internet. At the end of the exploration phase, the robot returns an image for each object type containing a single bounding box around the target. The scoring is based on the bounding box accuracy.
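The precise scoring formula has varied between contest years; as a purely illustrative sketch, a bounding-box overlap criterion of the intersection-over-union style common in detection benchmarks can be computed as follows (the $0.5$ threshold is an assumption for illustration, not the official SRVC rule):
\begin{verbatim}
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# A returned bounding box might count as correct if it overlaps the
# ground truth sufficiently, e.g. IoU >= 0.5:
print(iou((0, 0, 100, 100), (10, 10, 110, 110)) >= 0.5)  # True (~0.68)
\end{verbatim}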
In addition, there is a software league, where the entrants are not responsible for acquiring images of the environment. Instead, they are given a set of images taken of the environment that include both the objects and other scene elements. Our team entered two versions of our Curious George robot in the 2007 and 2008 SRVCs. We gained a wealth of experience during our system development process and actual contest participation, which will be described in the following sections. \subsection{Internet Data Collection and Filtering} In the SRVC, all of the training data is acquired from the Internet at contest time with no human intervention. For visual appearance models, this generally means collecting a dataset of images via an Internet image search engine like Google. Given the varied nature of Internet image search results, a system was needed to filter the output before training could be performed. We implemented two phases of training data filtering. The first phase removed all cartoons, illustrations, technical schematics, and other non-photographic images using a quality score developed in \cite{quality2006}. The second phase prioritized groups of images that displayed a high degree of similarity, since we determined empirically that these images were more likely to contain the target object. It is interesting to note that approaches for image filtering and ranking such as \cite{fergus04} are generally evaluated on a dataset drawn from a distribution similar to the training data. This is not the case for the SRVC scenario, since data collected by a robot is not likely to be from canonical viewpoints in uncluttered backgrounds -- two properties that are common in Internet images. As a result, we did not pre-filter results based on generic object categorization techniques. In addition to acquiring training images, other relevant data may be found on the Internet, such as contextual clues from LabelMe \cite{RussellIJCV2008} and size priors from the WalMart catalog. These all represent important sources of information for recognition purposes, but this potential has been left unaddressed in the literature. Although we were not able to integrate this information into our system in time for the SRVC competitions, they have encouraged us to pursue this direction for collecting additional training data aside from the traditional image datasets. \subsection{Robotics} The nature of a robotics contest demands the construction of a physical system with numerous abilities, ranging from basic navigation, to the construction of a distributed computation system, to performance on the task itself (object recognition in our case). While each of these individual tasks is easily achieved, their integration within a physical system presents a high level of complexity for system designers. For example, from a research perspective, robot navigation and mapping is largely a solved problem in indoor environments. However, it is a significant practical challenge to prepare a robot to navigate a previously unseen contest environment where it is required to visit potentially unsafe locations that provide good views of objects. Similarly, distributing a computational process across several networked processors is not a significant challenge in many situations, but when this system must be mounted on a mobile robot, and is thus subject to constraints on weight, power and size, many difficulties present themselves.
For the 2008 SRVC, we developed an active exploration and real-time vision system in order to announce objects that were discovered in real-time during the run. Our robot's architecture involved a low-power, on-board PC mounted inside our Pioneer AT3 robot for low-level control, and four networked laptop systems responsible for: i) real-time processing of visual imagery, visual attention, robotic planning, gaze planning, and overall control; ii) specific object recognition; and iii--iv) generic category recognition. The distributed nature of this architecture required captured imagery to be transferred between computers via a network connection, along with the associated software components for sending and receiving. Our visual attention system processed imagery obtained from a stereo camera system in real-time in order to determine the locations of interesting objects and structures in the environment. This represented a significant new functionality when compared with our 2007 contest entry, which required the robot to ``stop and shoot'' before performing visual attention. The real-time functionality prevented our robot from ``driving blindly'' and allowed it to continuously monitor the peripheral view until a sufficiently interesting location was seen, at which point foveal images could be collected. This behaviour allowed the robot to cover the environment rapidly while ignoring uninteresting regions, and thus to capture images of a large number of candidate objects. It is unlikely that any of our team members would have developed such a behaviour had it not been for the SRVC, since stationary visual attention is equally easy to demonstrate in an academic publication. However, now that such a system has been developed, our team has the ability to evaluate the behaviour of interactive, real-time visual attention on a mobile platform, and this continues to be an interesting research direction for our group. \subsection{Vision} Images collected by a robot during the embodied object recognition scenario often capture objects from a non-standard viewpoint, scale, or orientation. In other cases, the images do not contain an object at all. In fact, during our SRVC experience, we found that images collected by the robot rarely contained any target object. As a result, we designed our classification system to have a low false positive rate. We employed a two-stage object detection approach. The first stage used a specific object recognition system based on matching SIFT features and geometric consistency, which generally produced few false positives but provided low recall for generic object classes. The second stage employed a generic object classifier based on the spatial pyramid match kernel \cite{Lazebnik} to produce detections for those objects that were not captured by the previous approach. We designed a peripheral-foveal vision system that attempts to improve the quality of robot-collected imagery by locating interesting regions of the environment and imaging these regions in high resolution. This design choice was inspired by the human visual system, which makes extensive use of peripheral-foveal vision. Our peripheral camera was a Point Grey Research Bumblebee stereo camera with a relatively wide field of view. Spectral saliency \cite{hou_saliency_2007} was fused with stereo depth information to locate regions of interest in peripheral images.
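For readers unfamiliar with the technique, a minimal single-channel sketch of spectral residual saliency in the spirit of \cite{hou_saliency_2007} is given below; the filter sizes and final Gaussian smoothing are typical choices rather than the exact parameters used on Curious George, and the actual system additionally fused the map with stereo depth:
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    """Saliency map of a 2-D greyscale image: suppress the smooth part
    of the log-amplitude spectrum, then reconstruct with the original
    phase (Hou and Zhang, 2007)."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)  # smooth the final map

img = np.random.rand(120, 160)  # stand-in for a peripheral camera frame
sal = spectral_residual_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # most salient pixel
\end{verbatim}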
The foveal camera was a Canon G7 point-and-shoot camera. We employed the G7's high zoom, combined with a pan-tilt unit, to obtain tightly cropped, high resolution images of interesting objects identified in the peripheral view. We found that the image quality obtained by our foveal system significantly improved object recognition performance. This is likely due to the fact that Internet images are also often captured by high-quality digital cameras. \subsection{Benefits to Our Research} UBC's participation in the SRVC has led us to develop Curious George, a powerful evaluation platform that enabled further development of embodied recognition algorithms. In terms of quantifiable research output, the platform developed directly for the SRVC contest has led to a number of publications \cite{meger07,MegerRAS2008}, and several higher-level algorithms have since been designed which leverage the platform \cite{forssen08,Viswanathan09}. Our resulting research directions can be summarized into three categories: the effect of viewpoint in object recognition, the use of existing online databases for semantic training information, and the use of additional cues available to an embodied platform during scene understanding. Our study of viewpoint in object recognition has examined the implications of having only a single canonical viewpoint in the training image dataset (as is often the case with Internet images). We evaluated several recognition methods (namely feature matching with and without a geometric constraint) in terms of their ability to recognize objects from a range of viewpoints, and reported a range of success for this task. We showed that annotated datasets such as the LabelMe \cite{RussellIJCV2008} database can provide semantic information for tasks other than simply object recognition. Object-place relations (e.g., fridges are likely to be found in the kitchen) were learned from LabelMe, and this spatial-semantic model was used to perform place labeling in simulated environments. We also described the use of this model to inform object search \cite{Viswanathan09}. Our future plans are to combine this technology with the object recognition demonstrated in the SRVC to construct a successful integrated scene understanding system. Finally, we have employed structure from stereo to register object locations and construct a 3D object map, and demonstrated how this object map allows a robot to collect multiple viewpoints of target objects to improve classification accuracy. One of our team members is currently employing the raw structure information to utilize scale priors for object recognition. Overall, the SRVC has stimulated a wide variety of excellent research in our group by forcing us to examine object recognition in a realistic setting. \section{Improving Research Outcomes} Research competitions should advance the state-of-the-art by providing additional training data and context information. In addition, realistic environments would allow the use of more advanced learning methods. This section provides potential modifications to the SRVC contest that we believe will encourage these directions. \subsection{Training} Embodied object recognition systems require a source of training data from which to learn the appearance and properties of target objects. In the past, for the SRVC, this data has been obtained entirely using the Internet at the time of competition. However, the vast majority of images from the Internet are from a single canonical viewpoint, which implies that the resulting classifier will only be successful on that viewpoint.
Given the paucity of the data, this setting does not encourage 3D recognition, which may be required for successful embodied recognition. One modification would be to allow competitors to know a superset of the classes beforehand, enabling the use of manually labeled training data. This is not an unrealistic scenario, since most robots will likely be deployed in known environments where the set of objects can be carefully catalogued. It also still presents a significant challenge, as demonstrated by the VOC competition, where recognition performance remains poor. Alternatively, the types of environments (e.g., office, kitchen, bedroom) could be provided. For example, knowing that the scene was a kitchen would allow researchers to construct priors on appearance, 3D shape, and scale for all objects that are likely to occur in a kitchen. In either case, Internet data acquisition would still be allowed at competition time to augment data provided by system designers. This would allow for research into the interplay between scene information, like surface orientations and real-world scale, and appearance, similar to works such as \cite{hoeim06,gould08}. \subsection{Environment and Context} The SRVC contest environment has, so far, required the robot to navigate in an area that is quite small and to locate objects that were placed on tables covered with white table cloths. This scenario presents a much simpler segmentation problem when compared with a realistic home environment, and does not allow for evaluation of system performance over long distances and operating durations. Thus, while object recognition methods that rely on good segmentation results might succeed in the contest, they are likely to fail in more realistic environments. This outcome is misaligned with the objective of pushing research in the direction of improving real-world performance, and we believe that future SRVC contests should include increasingly realistic environment designs. To a na\"{\i}ve audience, embedding the competition in a realistic environment might seem to increase difficulty; however, it can actually lead to better performance if the competing systems leverage the additional contextual information. Respecting the co-occurrence and co-location relationships of natural environments when placing objects would allow competitors to exploit these relationships for object recognition. It would also help eliminate false matches by recognizing that an object does not belong in a particular location. In addition, it would be interesting to partition the environment into places that appear in real environments (e.g., kitchen, bedroom, etc.) and to place query objects in the locations where they are normally found. This would allow competitors to exploit object-place relations to identify potential object locations, thus facilitating efficient coverage of the environment. There are obviously logistical problems in having a multi-room environment and allowing for an audience. However, using dividers, it is possible to create room-like subdivisions without the need for entirely separate rooms. This would create an environment similar to many ``open concept'' homes. It is also possible to create recognizable, logical locations in a single room by separating these locations with empty space; however, this should be specified to competitors.
\section{Improving Participation} The purpose of a competition in research is to provide both an opportunity to exchange ideas and a venue to evaluate and encourage state-of-the-art research. A particular challenge in an embodied recognition competition is to encourage participation of \emph{both} robotics and vision researchers. In this section, we discuss practical suggestions to increase researcher participation. \subsection{Changes in the Setting and Rules} Object class recognition research currently draws on a variety of cues, such as colour, contours, and texture. In addition, there is an active research community \cite{vogel07} that seeks to utilize scene context for recognition. We propose varying the difficulty and scoring of the competition in a way that rewards the successes of specific methods on certain object types that might be challenging to recognize using simple object recognition techniques. Another interesting modification might be to provide different levels of information before and during the competition. For example, the object type ``bottle'' could be provided beforehand, and the robot might be required to recognize a specific object (e.g., Coke bottle, milk bottle) during the competition. To make the problem more challenging, the contest could allot points for identifying unknown objects (i.e., those that do not appear on the list). In addition, including relative location information for some objects (e.g., the book is beside the TV) can provide context information useful for recognition of objects that are particularly challenging given the state-of-the-art. Additionally, the contest could allow two teams to compete simultaneously. The team that finds the objects in the environment faster would receive a higher score. Another interesting variation that could push forward the robotics aspect of the contest would be to allow multiple robots per team to explore the environment. The robots could cooperatively capture images from different viewpoints and share the information to recognize the objects more precisely. However, this change is most likely infeasible to implement in the near future due to the complexity and cost of the robots currently used in the SRVC. \subsection{Software League} \label{SoftwareLeague} Although a competition which requires the integration of various research areas is desirable, such a competition discourages participation from smaller research groups that may not have the expertise to implement every aspect required for success. The software league is an example of separating the recognition task from the robotics challenges of active vision and navigation; however, some modifications are needed in order to improve participation in this league. The first thing to note is that the impact of this competition depends on the statistical significance of its results. In the object recognition community, techniques are evaluated on a large number of images, thus ensuring that improvements over previous techniques are statistically significant. This is the case even in a competition environment like VOC. Results from the SRVC competition, however, carry little statistical significance due to the small sample size (e.g., one mug in the environment). One possibility to address this lies in the software league. Here, image data can be acquired from real environments instead of the contest setting. This provides an opportunity to include much more realistic context.
Images of the same target objects distributed in a natural environment, such as a kitchen or office, can be taken ahead of time. In order to incorporate the ``embodied vision'' aspect of the contest, additional information such as a map of the environment, the location and orientation of the camera for each image, and stereo image pairs can be provided with little extra effort. This would make the software league a more interesting research problem and help distinguish it from other object recognition competitions, thus attracting more participants. In addition, removing the limitations of data collection by a robot also allows for the creation of data sets composed of a larger number of objects and environments, thus increasing the statistical significance of the results. \subsection{Robot League} As already mentioned, it has been a significant challenge for teams in previous years of the SRVC contest to achieve reliable navigation within the contest environment. Since the primary research problems posed by the SRVC are not intended to focus on low-level robot navigation, it may be useful to consider relieving teams of the navigation burden in the future. First, the contest organizers could provide entrants with a standardized robot platform that has basic navigation abilities. In this case, teams would only be responsible for higher level task planning and processing of the visual imagery obtained by the robot. While this solution solves many of the problems posed by navigation, it also unfortunately introduces several complications. Primarily, each team depends on a slightly different set of sensing modalities. During the 2007 and 2008 SRVC contests, we have seen: monocular video cameras, monocular still cameras, laser rangefinders, sonar range sensors, binocular stereo cameras, and multi-camera stereo systems. Any standardized test platform would be required to provide teams with some subset of these sensors, and this set would ideally be large enough not to discourage any team from competing. Another significant challenge is giving each team the ability to practice and develop on the standard platform. Either numerous platforms would need to be distributed, or teams would require periodic access to a single platform. Both of these options entail significant cost that would need to be minimized. This could likely be accomplished by employing a standard robot architecture such as ROS (\url{http://pr.willowgarage.com/wiki/ROS}) that would allow much of the development to occur in simulation and with surrogate robots for hardware testing. A simpler method for reducing navigation challenges is to provide teams with a more detailed specification of the contest environment geometry. For example, knowing the exact size and shape of furniture allows for proper mounting of sensors and tuning of sensor models in mapping algorithms. In this case, it might be possible for each team to still employ its own robot. \subsection{Facilitating Code Re-use} Re-usable code is an important output of a successful competition, as it is in any collaborative effort. Since the goal of competitions is to move the state-of-the-art in a desired direction, successive solutions to the competition's problem benefit from having previous work available as a starting point. Re-usable code also lowers the barrier to entry for teams new to the competition, enhancing the accessibility of the competition, and in turn its visibility.
Although code \emph{sharing} is encouraged/required by the SRVC, subsequent re-use of this code appears to be non-existent. This is a result of the differences in platforms and approaches used by each of the participants. For example, our robot base, sensor package, peripheral-foveal vision system, and multi-processor distributed recognition system constituted a very specific point in the solution space. This entire setup would likely have to be replicated in order to re-use our code. However, there are elements of a code base that would be generally usable (training set construction and feature extraction, for example). One possible solution would be to require the use of an open source robotics package with a distributed architecture such as ROS, which allows different components to be easily chained together. In such a system, the different components are unaware of each other, so they can be mixed-and-matched at will. Standardization on a single robotics platform would be an optimal solution if the funding were available. Without the logistical problems inherent in hardware, the software league offers much greater potential for code sharing. One possibility to help encourage this would be to design the software challenges to be explicitly modular in nature. Instead of a single software league challenge, the task could be broken into steps such as data download and filtering, classification, and localization. \subsection{Funding and Visibility} A significant constraint for both organization and participation is funding. Organizing the contest requires a large amount of exhibition space in which to set up the environment. Obstacles and target objects for the environment must be purchased. Support must be offered to teams to subsidize the cost of shipping their robots to the competition in order to encourage participation. In addition, travel costs for the robot teams can be high, since a large number of team members may be needed to run and maintain the robotic hardware. Clearly, secure funding through public and private sponsorship will attract participation. Aside from attracting more participation from research groups, improving the visibility of the contest can attract sponsorship. One simple change to the 2008 competition that was surprisingly compelling was the addition of bonus points for teams that provided a real-time status display showing matches as they were made. This made the contest more interesting for the audience to watch by providing a sense of what the robots were doing even when they were not moving. The crowd was audibly excited when a new match was displayed, identifying with the robots and responding to the irregular reinforcement aspect of the display. As an element of visibility and outreach for a robotic competition, this is a very powerful lesson. In addition, this also pushed research towards techniques that provide real-time recognition. In the future, explicitly encouraging competitors to provide real-time displays of what the robot has found or is trying to do will draw even more attention to the contest. \section{Conclusions} Properly designed contests significantly promote the development of the state-of-the-art. They can comprise realistic and complex settings not seen in standard benchmark datasets, providing both a strong test for current solutions and rich context that can be leveraged to advance research. The Semantic Robot Vision Challenge represents one such competition, providing a venue for embodied recognition.
It has provided a valuable impetus to our own research, yielding insights and elucidating new directions that merit further study. However, to have significant impact in the future, the contest needs modifications such as those outlined above. \bibliographystyle{named}
\section{Introduction} Experimental physicists often need to recognize objects, count them, follow them and characterise them. \citet{perrin} had to count colloids by hand to establish the sedimentation-diffusion equilibrium. Nowadays computer vision algorithms are used routinely in the lab to track hundreds of thousands of objects as diverse as stars in a galaxy~\cite{Bertin1996}, tracers in a microfluidic device~\cite{Wereley2010}, pattern formation in polymer systems~\cite{tanaka1986application,tanaka1989digital}, dust in a plasma, bacteria~\cite{Zhang2010,Gibiansky2010} or viruses in a living cell~\cite{Brandenburg2007}. In all these cases, tracking is possible if the particles are either point-like and far apart, or several pixels wide and almost monodisperse in size (at least in the smallest dimension for anisotropic objects~\cite{Zhang2010,Gibiansky2010}). To our knowledge, algorithms that allow tracking of polydisperse particles in crowded environments have not reached the soft matter community. The overall particle size distribution in colloidal suspensions and emulsions influences crystallisation~\cite{Pusey1987,Henderson1996,Fasolo2003,Schope2007,pusey2009hard}, glass forming ability~\cite{Pusey1987,Henderson1996,Senkov2001,Schope2007,pusey2009hard}, sedimentation~\cite{Binks1998,Leocmach2010} and emulsion stability~\cite{Biben1993,Binks1998} among other physical phenomena. It can be characterized by various methods that rely on measurements done in well-controlled environments~\cite{Lange1995,Provder1997,Finder2004}. However, the local size distribution is not accessible experimentally \emph{in situ} and has thus not been studied so far. Particle-level microscopy experiments usually access the coordinates of the particles via the algorithm proposed by \citet{Crocker1996}. The original noisy image is blurred by convolution with a Gaussian kernel of width $\sigma$ to yield a soft peak per particle. Local intensity maxima within this blurred image give the coordinates of the particles with pixel resolution. Sub-pixel resolution ($0.1\sim0.3$~pixels error) can be achieved by taking the centre of mass of a neighbourhood around each local maximum. The extension of this algorithm to localize particles in three-dimensional (3D) confocal microscopy images has been done in two ways: either tracking particles in each confocal plane and reconstructing the results (2D-flavour)~\citep{vanblaaderen1995rss, Lu2007}, or full image analysis on three dimensional pictures (3D-flavour)~\citep{dinsmore2001tdc}. The choice of the width $\sigma$ of the blurring kernel is critical: if it is too small, then the intensity profile is flat near the centre of a particle, leading to multiple and ill-localized maxima per particle; if it is too large, then the peaks of nearby particles overlap, leading to shifts in the detected positions~\citep{Baumgartl2005,Jenkins2008}, or even fusion of the particles (only one particle detected instead of two). If the colloids are fairly monodisperse one can argue (at least in the 3D-flavour) that there exists a range of possible widths where the choice of $\sigma$ has almost no effect on the number of particles detected. Choosing $\sigma$ within this range gives confidence in the localisation results. \begin{figure*} \centering \includegraphics{fig_localise.pdf} \caption{Visualisation of the results of various tracking methods for the same portion of image. (a) Multiscale 3D tracking. (b) Reconstruction from multiscale 2D tracking.
(c-h) Crocker and Grier method in 3D with blurring radius increasing from \unit{2}{px} to \unit{4.5}{px} by steps of \unit{0.5}{px}. The circles on each picture are the result of 2D multiscale tracking of each XY slice of the 3D pictures. Spheres are displayed with radii determined by the tracking methods in (a) and (b), and equal to the blurring radius for (c)-(h).} \label{fig:localise} \end{figure*} However, we found that no such ``good blur width'' exists in a sample of moderate ($6-7\%$) polydispersity (see Fig.~\ref{fig:localise}c-h). The detection of smaller particles with small blurring is incompatible with the detection of the larger particles, and conversely. This unacceptable failure of the \citet{Crocker1996} algorithm, as well as the need for the particles' radii, triggered our design of a novel localisation algorithm robust to any finite polydispersity, which is unavoidable and sometimes desired in real experiments. Recently \citet{Kurita2011,Kurita2011b} have designed a sizing method using particle coordinates from confocal experiments. However, their method does not work at the image-processing level and relies on coordinates extracted via the \citet{Crocker1996} algorithm. If these coordinates are wrong or if some particles are missed (as shown in Fig. \ref{fig:localise}), the output of their method cannot be exact. The key notion to detect objects of unknown and possibly diverse sizes in an image is the \emph{scale space}~\cite{Lindeberg1993}. A popular implementation for isotropic objects (or ``blobs'') is the Scale Invariant Feature Transform (\textsc{sift}) of \citet{Lowe2004}. It is often used to match different images of complex objects consisting of many rigidly linked blobs (\emph{e.g.}, to create a large scale image from overlapping pictures)~\citep{Lowe2004, Urschler2006, Cheung2009}. To our knowledge, this method has never been used for the quantitative localisation and sizing of independent single-blob objects like spherical colloids, droplets in an emulsion, or crystal nuclei. Here we apply this new particle tracking method to study how the sizes of particles affect local structural ordering in a supercooled colloidal suspension and the process of heterogeneous nucleation from a substrate. We reveal non-trivial local couplings between such orderings and the spatial distribution of particle sizes, which may provide crucial information for our understanding of how polydispersity influences liquid dynamics and ordering phenomena. The organization of our paper is as follows. In Section~\ref{sec:method}, we will describe our localisation method and its results on synthetic, increasingly realistic data. In Section~\ref{sec:confocal} of the paper, we will focus on the specific case of 3D confocal data. In Section~\ref{sec:yon6} we will apply our method to a crystallizing glass of polydisperse hard spheres observed by confocal microscopy and discuss the influence of size distribution on the ordering process. We conclude in Section \ref{sec:conclusion}. \section{Localisation and sizing method} \label{sec:method} In this section, we will start by recalling the principle of \textsc{sift}; then we will explain our method in the ideal case of an isolated binary ball, adding successively finite dilution and differences in brightness between the particles.
\subsection{Scale invariant feature transform} \label{sec:blur} The \textsc{sift} consists of convolving the original image $I$ by Gaussian kernels $g_{\sigma_s}$ of logarithmically increasing widths $\sigma_s$ to obtain a series of blurred images $G_s$ \begin{align} \forall s>0,\quad G_s = I \star g_{\sigma_s}, \label{eq:gaussian_blur}\\ \intertext{where $\star$ is the convolution operator and} \forall s>0,\quad \sigma_{s} = 2^{s/n} \sigma_0 , \label{eq:sigma_s} \end{align} with $n$ a fixed integer. Following Ref. \cite{Lowe2004} we use $\sigma_0=1.6$ and $n=3$. Bright objects in the original image appear as bright blobs in the blurred images, and the blobs fuse together as the kernel width increases (see Fig.~\ref{fig:localise}c-h). This can be seen as a series of low-pass filters in the frequency domain. If we take the difference between consecutive blurred images we obtain a comb of band-pass filtered versions of $I$: \begin{equation} \forall s>1,\quad DoG_s = G_s - G_{s-1} \label{eq:DoG_s} \end{equation} The difference of Gaussians ($DoG$) response function defined in this way depends on the position in space $\vec{r}$ and on the scale $s$. Bright objects in the original image are detected as local minima in $DoG$. With this procedure any feature with a radius as small as \unit{2}{px} and as large as its distance to the edges of the image can be detected. Furthermore, the intensity of the response is optimal at a $\sigma$ that can be related to the size of the object (see below). Thus a local minimum in both space and scale in the response function $DoG$ indicates both the localisation and the \emph{size} of an object, without any assumption on the target size. \begin{figure} \centering \includegraphics{fig_applications.pdf} \caption{Application of \textsc{sift} in 2D. (a) Fluorescent droplets in a microfluidic device. The wall of the device is visible in the top-left corner. (b) Nucleation under phase contrast microscope. The blue circles are the result of our algorithm. For any tracking algorithm some detection failures near the edge of the image are unavoidable, and indeed visible here since we show the whole pictures used for tracking.} \label{fig:applications} \end{figure} Because of the inherent polydispersity of many soft matter systems, the possible applications of \textsc{sift}-based localisation are countless. For example, droplets of an emulsion can be followed through a microfluidic device. Fig.~\ref{fig:applications}a shows the result of our version of the \textsc{sift} on a very polydisperse and dense emulsion observed by wide-field fluorescence microscopy~\cite{Montagne2011}. The sizes extracted are obviously correct except when the system cannot be considered as 2D. Another possible application is nucleation rate measurement. When a phase transition proceeds via nucleation growth, the sizes of the nuclei are very diverse, making their automatic counting difficult by other methods. As shown in Fig.~\ref{fig:applications}b the \textsc{sift} allows one to count and measure the size of nuclei during a liquid-liquid transition in a water-glycerol mixture~\cite{Murata2012}. However, to the extent of our knowledge, the \textsc{sift} has not been used in a physics context to obtain reliable measurements of particle numbers and sizes from well calibrated images. We will address this reliability in the following. We will mainly deal with the three-dimensional extension of the \textsc{sift}.
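The core of the construction is compact. The following is a minimal Python/SciPy sketch of Eqs.~(\ref{eq:gaussian_blur})--(\ref{eq:DoG_s}) together with the detection of minima in space and scale; it omits the octave subsampling of the full \textsc{sift}, and all function and variable names are illustrative only, not those of our actual implementations.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def dog_pyramid(img, sigma0=1.6, n=3, n_scales=8):
    """Blurred images G_s for sigma_s = 2**(s/n)*sigma0, and the
    consecutive differences DoG_s = G_s - G_{s-1} (works in 2D or 3D)."""
    img = img.astype(float)
    sigmas = [2.0 ** (s / n) * sigma0 for s in range(n_scales)]
    G = np.stack([gaussian_filter(img, s) for s in sigmas])
    return sigmas, G[1:] - G[:-1]

def detect_blobs(dog):
    """Grid coordinates (scale, y, x) of negative local minima of the
    DoG response in both space and scale."""
    is_min = (dog == minimum_filter(dog, size=3)) & (dog < 0)
    is_min[0] = is_min[-1] = False  # a scale neighbour is needed on each side
    return np.argwhere(is_min)
\end{verbatim}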
As in the case of the \citet{Crocker1996} algorithm, the \textsc{sift} can be extended to three dimensions (\emph{e.g.}, confocal microscopy images) in two ways: either by extracting 2D blobs independently on each slice and then reconstructing 3D objects (Fig.~\ref{fig:localise}b) or by working directly in three dimensions~\cite{Urschler2006, Cheung2009} (Fig.~\ref{fig:localise}a). We found that the former method is prone to errors, missing about a tenth of the particles in our best implementation (Fig.~\ref{fig:localise}b). The 3D results presented in this paper are obtained solely by the latter method. We stress that a volumetric implementation of \textsc{sift} implies a large amount of data (typically more than \unit{1}{\giga\byte} for a $(\unit{256}{px})^3$ picture) and thus requires careful memory management. Our best implementation (C++) on a 4-core i7 computer takes less than \unit{10}{\second} to extract the positions and scales of $\sim 10^4$ particles in such a volumetric picture. A much slower implementation (Python+Scipy, single core) takes \unit{1}{\second} to deal with $(\unit{1600}{px})^2$ 2D images like Fig.~\ref{fig:applications}b. We would expect real-time processing for 2D images with GPU-enabled implementations. \subsection{Sub-pixel and sub-scale resolution} Here we assume perfect, noiseless, distortion-free images. The objects to localise are thus (pixellised) balls of uniform intensity. To mimic the low resolution of experimental images in our test images (see Fig.~\ref{fig:perfect}), we draw uniformly white balls on a 4 to 16 times larger image and then reduce the resolution accordingly using area resampling. \begin{figure} \centering \includegraphics{fig_perfect.pdf} \caption{Results from perfect images. (a-b) Sizing of an isolated sphere. Left of the vertical line our algorithm uses a doubled image. (c) Localisation error and (d) sizing error as a function of the distance between two particles. Oscillations are due to off-lattice centre positions. (e) Size distribution extracted from a digitized configuration of 4000 monodisperse hard spheres at $0.50$ volume fraction (the vertical line indicates the input radius). The tail to the right is due to particles on the edges of the image, which have fewer neighbours and thus are more `dilute'. (d) and (e) also show the effect of finite dilution correction up to convergence.} \label{fig:perfect} \end{figure} The $DoG$ constructed above is defined on a $d+1$ dimensional grid, with $d$ the spatial dimensionality. One can interpolate it as a continuous function of the position $\vec{r}$ and the scale $s$ and thus localise the minima with a precision below the grid size, which corresponds to sub-pixel resolution on the position and sub-scale sizing~\citep{Lowe2004}. We found that second-order estimates of the spatial derivatives and first-order estimates of the scale derivatives gave the best precision. The object-by-object optimal scale determination allows us to perform the spatial sub-pixel resolution step for each object on an image that is blurred just enough to have neither a flat intensity profile nor an influence of the overlap with a nearby object's blob (an effect that plagues the \citet{Crocker1996} algorithm~\cite{Baumgartl2005,Jenkins2008}). We found that if for a given object the $DoG$ is minimum at $\sigma_s$, the best image to use is $G_{s-1}$.
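In practice the refinement amounts to a quadratic interpolation of the $DoG$ around each grid minimum: finite differences give the gradient and Hessian on the $(d+1)$-dimensional grid, and the sub-grid offset solves a small linear system. Continuing the sketch above, a minimal version reads as follows (illustrative names; no convergence safeguards, and the minimum is assumed to lie away from the grid boundary):

\begin{verbatim}
def refine(dog, idx):
    """One quadratic-interpolation step around the grid minimum at idx.
    Returns the sub-grid offset along each axis (scale axis first)."""
    idx = tuple(idx)
    grad = np.empty(dog.ndim)
    hess = np.empty((dog.ndim, dog.ndim))
    for a in range(dog.ndim):
        up = list(idx); up[a] += 1
        lo = list(idx); lo[a] -= 1
        grad[a] = 0.5 * (dog[tuple(up)] - dog[tuple(lo)])
        hess[a, a] = dog[tuple(up)] - 2 * dog[idx] + dog[tuple(lo)]
        for b in range(a):
            pp = list(idx); pp[a] += 1; pp[b] += 1
            pm = list(idx); pm[a] += 1; pm[b] -= 1
            mp = list(idx); mp[a] -= 1; mp[b] += 1
            mm = list(idx); mm[a] -= 1; mm[b] -= 1
            hess[a, b] = hess[b, a] = 0.25 * (
                dog[tuple(pp)] - dog[tuple(pm)]
                - dog[tuple(mp)] + dog[tuple(mm)])
    return -np.linalg.solve(hess, grad)
\end{verbatim}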
This leads to a spatial resolution below $0.1$~pixels in the worst case (when particles are at hard-core contact), which is the same as the average precision claimed by \citet{Crocker1996}. Moreover, when particles' surfaces are further than $1$~pixel apart, the error on the positions is less than $0.02$~pixels ($0.3\%$ of the diameter) (see Fig.~\ref{fig:perfect}c). \subsection{Sizing at infinite dilution} \label{sec:dilute} The analytical response of a binary ball to a Gaussian blur, at the centre of the ball, is a function $G(x)$ of the dimensionless ratio $x=R/(\sqrt{2}\sigma)$ (see Appendix for an exact expression). The $DoG$ response function is the difference between two such functions. However, the choice of the widths of the two functions is not arbitrary: we take the difference between two consecutive blurred images, the image blurred by $\sigma_{s+1}$ and the image blurred by $\sigma_s$. Therefore, in each of our discrete $DoG$ images the value at the centre of the particle can be expressed as \begin{align} DoG(R,\sigma_s, \alpha) = DoG(x_s, \alpha) &= G(x_s/\alpha) - G(x_s),\\ \intertext{with $\alpha=2^{1/n}$. Given sub-scale refinement, this can be written as a continuous function of $\sigma$:} DoG(R,\sigma, \alpha) = DoG(x, \alpha) &= G(x/\alpha) - G(x). \end{align} Here it is clear that minimizing $DoG(x, \alpha)$ with respect to $x$ yields a value $x^*$ that depends only on $\alpha$. Exact calculation yields (see Appendix): \begin{equation} x^* = R/(\sqrt{2}\sigma^*) = \sqrt{\frac{d\ln \alpha}{1-\alpha^{-2}}}, \label{eq:scale_dil} \end{equation} where $d$ is the spatial dimensionality. In practice one obtains $\sigma^*$, the value of $\sigma$ that minimises $DoG(R,\sigma, \alpha)$, by a polynomial fit of the discrete data $\sigma_j$. Eq.~(\ref{eq:scale_dil}) allows one to translate the $n$-dependent $\sigma^*$ to the parameter-free real radius of the particle, $R$. The error on $R$ does depend on the number of subdivisions. We found that with $n=3$ the radius of an isolated pixelated ball can indeed be measured within $0.3\%$ relative error with this method (see Fig.~\ref{fig:perfect}b). The scale $s=0.5$ corresponds (via Eq.~(\ref{eq:sigma_s}) and Eq.~(\ref{eq:scale_dil})) to the smallest detectable radius $R_{min}\approx \unit{3.5}{px}$ ($\sigma_0=1.6$, $n=3$). In order to detect small objects, \citet{Lowe2004} recommends doubling the size of the input image using linear interpolation prior to building the first level of the pyramid. This method fares relatively well on noiseless images (see Fig.~\ref{fig:perfect}b) despite larger errors, but we found that it has to be used with care on confocal microscopy images due to noise and deconvolution artefacts. In addition, doubling an image size implies an 8-fold increase in memory consumption (reaching \unit{60}{\giga\byte} in the case of a $(\unit{512}{px})^3$ original image). Our implementation allows this on $\unit{64}{\bit}$ computers by relying on memory mapped files; nevertheless it is mostly impractical. All following test results are obtained without relying on doubled images. \subsection{Edge and overlap removal} The difference-of-Gaussians function has a strong response not only at the centre of bright objects but also along their edges. To eliminate these spurious detections, \citet{Lowe2004} suggests constructing the local Hessian matrix around each minimum of the $DoG$ and then comparing its eigenvalues to identify elongated objects.
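In two dimensions this criterion reduces to a test on the ratio of the principal curvatures, which can be sketched as follows (dog here is the $DoG$ layer at the detected scale; the threshold ratio r is a free parameter, set to $10$ in Ref.~\cite{Lowe2004}):

\begin{verbatim}
def is_elongated(dog, idx, r=10.0):
    """Edge test: reject a minimum whose principal curvatures have a
    ratio larger than r, using that tr^2/det grows with that ratio."""
    i, j = idx
    dxx = dog[i, j + 1] - 2 * dog[i, j] + dog[i, j - 1]
    dyy = dog[i + 1, j] - 2 * dog[i, j] + dog[i - 1, j]
    dxy = 0.25 * (dog[i + 1, j + 1] - dog[i + 1, j - 1]
                  - dog[i - 1, j + 1] + dog[i - 1, j - 1])
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    return det <= 0 or tr * tr * r > (r + 1.0) ** 2 * det
\end{verbatim}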
We found that this was often not enough, especially in crowded environments where an isolated void induces local minima of the $DoG$ response with rather isotropic signatures on the edges of nearby particles. Another case not covered by the Hessian technique is the physically hierarchical structure of many soft matter systems, \emph{e.g.}, particles forming clusters. In this situation our algorithm detects a blob for each particle and a much larger blob for the cluster. In both cases the $DoG$ response of the spurious feature is weaker (less negative) than that of the valid particles covering the same portion of space. Our method to remove the spurious features is as follows: we look for pairs of particles closer to each other than the larger of their radii (assuming infinite dilution). This means that the centre of one of the particles is situated inside the other. In the physical systems we study, this cannot be correct due to excluded volume effects, thus we remove the feature with the weaker $DoG$ response. We name this method `half overlap removal'. One may be tempted to implement a full overlap removal (no pair of particles closer than the sum of their radii) to eliminate even more spurious detections. We found that the gain was extremely limited ($0.01\%$ of the detected features suppressed) and that imprecision in positioning or in sizing caused valid particles to be discarded. We note that the orientation of non-spherical particles or the deformation of soft objects could in principle be monitored using the local Hessian matrix. \subsection{Finite dilution} The Gaussian (and $DoG$) response of a particle decays rapidly away from its surface (see Eq.~(\ref{eq:split_G})). We found that even at contact the position shift induced by a nearby particle was less than a tenth of a pixel (see Fig.~\ref{fig:perfect}c). However, when two particles are closer than a few times the blurring width, their influence on each other cannot be neglected: the minimum of the $DoG$ is effectively shifted toward smaller scales, leading to smaller radii if one uses Eq.~(\ref{eq:scale_dil}) out of the dilute context (see Fig.~\ref{fig:perfect}d). The responses of $N$ particles of radii $\lbrace R_i\rbrace$ can be superimposed at any scale $\sigma$ and at any point of space; in particular, we define $DoG_i$ as the response at the centre of particle $i$: \begin{equation} DoG_i(\lbrace A_j\rbrace, \lbrace r_{ij}\rbrace, \lbrace R_j\rbrace, \sigma) = \sum_j A_j DoG(r=r_{ij}, R=R_j, \sigma), \label{eq:superpos} \end{equation} where the function $DoG(r,R,\sigma)$ is the response of a particle of radius $R$ at distance $r$ from its centre; $r_{ij}$ is the distance between particles $i$ and $j$, and $\lbrace A_i\rbrace$ are the respective brightnesses of the particles. The \textsc{sift} algorithm yields $\lbrace \sigma_i^*\rbrace$ such that for all $i$, $DoG_i$ is minimum with respect to $\sigma$ at $\sigma_i^*$; thus, differentiating Eq.~(\ref{eq:superpos}) with respect to $\sigma$: \begin{equation} \forall i,\quad \frac{\partial DoG_i}{\partial\sigma}(\lbrace A_j\rbrace, \lbrace r_{ij}\rbrace, \lbrace R_j\rbrace, \sigma_i^*) = 0.
\label{eq:DoG_min} \end{equation} The system defined by Eq.~(\ref{eq:DoG_min}) is non-linear with respect to $\lbrace R_j\rbrace$ but can be solved iteratively by Newton's method: \begin{equation} \left[ \frac{\partial^2 DoG_i}{\partial R_j\partial\sigma}\right] \times \left( \lbrace R_j\rbrace^{(k+1)} - \lbrace R_j\rbrace^{(k)} \right) = -\frac{\partial DoG_i}{\partial\sigma}, \label{eq:Newton} \end{equation} where the matrix and the right hand side are computed given $(\lbrace A_j\rbrace, \lbrace r_{ij}\rbrace, \sigma_i^*)$ and iteratively $\lbrace R_j\rbrace^{(k)}$, with the upper parenthesised index indicating the iteration rank. The results of Eq.~(\ref{eq:scale_dil}) are good starting values for the radii. Using Eq.~(\ref{eq:superpos}), the elements of the matrix simplify to \begin{equation} \frac{\partial^2 DoG_i}{\partial R_j\partial\sigma} = A_j \frac{\partial^2 DoG}{\partial R\partial\sigma}(r=r_{ij}, R=R_j^{(k)}, \sigma=\sigma_i^*). \end{equation} In principle, Eq.~(\ref{eq:Newton}) is an $N\times N$ system of equations. However, the $DoG$ function and its derivatives decay rapidly, thus the matrices are actually very sparse (about as many non-zero coefficients as particles in the first coordination shell), dramatically alleviating the computational burden when using sparse system solvers. Fig.~\ref{fig:perfect}d shows the result of such a correction for two identical particles, where Eq.~(\ref{eq:Newton}) converges in a single iteration. In a many-body case (see Fig.~\ref{fig:perfect}e) convergence is reached in two iterations; however, the extremely low error of the dilute case is not totally recovered ($\approx 3\%$ relative error rather than $0.3\%$). \subsection{Brightnesses} To solve Eq.~(\ref{eq:Newton}), one needs knowledge of the brightnesses $\lbrace A_i\rbrace$. In a first approximation, they can be assumed equal to a constant, which allows one to simplify them out. This is often a sensible approximation. Nevertheless, the particles in an experimental image are not uniformly bright due to synthesis imperfections (quantity of dye fixed by each particle) and photobleaching. If one does not take into account the relative brightness of the particles, less bright particles will appear smaller. A better approximation is to measure during the \textsc{sift} process the value of the $DoG$ response at the position and scale of each particle, \emph{i.e.}, $DoG_i(\lbrace A_j\rbrace, \lbrace r_{ij}\rbrace, \lbrace R_j\rbrace, \sigma_i^*)$. Given the (iterative) values of the $\lbrace R_j\rbrace^{(k)}$, one can solve Eq.~(\ref{eq:superpos}) to get an iterative value of $\lbrace A_i\rbrace^{(k)}$. With respect to the brightnesses, Eq.~(\ref{eq:superpos}) is a linear system of $N$ equations with $N$ unknowns, thus directly solvable. It is also as sparse as Eq.~(\ref{eq:Newton}).
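A single iteration of Eq.~(\ref{eq:Newton}) can be sketched with SciPy's sparse solvers. Here dDoG_dsigma and d2DoG_dRdsigma stand for the analytical derivatives of the single-particle response given in the Appendix, and pairs lists the $(i, j, r_{ij})$ of interacting particles, including $i=j$ with $r_{ii}=0$; all names are illustrative.

\begin{verbatim}
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def newton_step(R, A, sigma_star, pairs,
                dDoG_dsigma, d2DoG_dRdsigma):
    """One iteration of the sparse Newton scheme: returns updated radii."""
    N = len(R)
    rows, cols, vals = [], [], []
    rhs = np.zeros(N)
    for i, j, r_ij in pairs:
        rows.append(i); cols.append(j)
        vals.append(A[j] * d2DoG_dRdsigma(r_ij, R[j], sigma_star[i]))
        rhs[i] -= A[j] * dDoG_dsigma(r_ij, R[j], sigma_star[i])
    J = coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
    return R + spsolve(J, rhs)
\end{verbatim}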
To sum up, the coefficients $\lbrace A_i\rbrace$ can be computed along with the radii in a joint iterative process: \begin{algorithmic} \State $\lbrace R_i\rbrace^{(0)} \xleftarrow{\text{Eq.~(\ref{eq:scale_dil})}} \lbrace \sigma_i^*\rbrace$ \Repeat \State $\lbrace A_i\rbrace^{(k+1)} \xleftarrow{\text{Eq.~(\ref{eq:superpos})}} \lbrace DoG_i \rbrace, \lbrace R_i\rbrace^{(k)}, \lbrace \sigma_i^*\rbrace, \lbrace r_{ij}\rbrace$ \State $\lbrace R_i\rbrace^{(k+1)} \xleftarrow{\text{Eq.~(\ref{eq:Newton})}} \lbrace R_i\rbrace^{(k)}, \lbrace A_i\rbrace^{(k+1)}, \lbrace \sigma_i^*\rbrace, \lbrace r_{ij}\rbrace$ \Until{convergence} \end{algorithmic} In our tests, we found that, both with and without the brightness determination, this algorithm converges quickly, in one or two iterations. \section{Application to 3D confocal microscopy images} \label{sec:confocal} \subsection{Effect of a point spread function} \begin{figure*} \centering \includegraphics{fig_deconv.pdf} \caption{Deconvolution. Detail of the same $YZ$ slice of (a) the original confocal image, (b) the previous image blurred by $\sigma_0=1.6$, (c) the previous image deconvolved by the measured kernel. Circles indicate the tracked particle positions and sizes when using either (b) or (c) as the first Gaussian layer $G_0$. All three centres are in the slice $\pm \unit{0.5}{px}$. (d) Radial distribution function of almost monodisperse sticky spheres localised (blue) without and (red dash) with deconvolution.} \label{fig:deconv} \end{figure*} Real images suffer from optical limitations, \emph{e.g.}, the point-spread function (\textsc{psf}) of the microscope. In particular, spherical particles observed by confocal microscopy appear elongated along $Z$. The influence of such anisotropic distortion on the Gaussian and $DoG$ responses is not trivial and most of the equations given in the Appendix do not have analytical equivalents. In particular, the minimum of the $DoG$ response is found at larger values of $R/\sigma$, thus the naive use of the methods detailed in the previous section leads to overestimated particle sizes. In addition, the overlap of neighbouring particle images is no longer negligible when the particles are aligned along $Z$. This leads to large imprecision in particle positions, especially in the case of an anisotropic environment (\emph{e.g.}, an isolated pair of particles, an interface, a colloidal gel). In Fig.~\ref{fig:deconv}d the radial distribution function of almost monodisperse colloids with short range attraction displays a spurious shoulder before its first peak, indicating that particle centres are often found closer than the sum of their hard-core radii, as illustrated in Fig.~\ref{fig:deconv}b. We found that these issues can be better addressed by pre-processing the images (as detailed below) rather than post-processing the \textsc{sift} output \emph{via} analytical methods. \subsection{Deconvolution} The image $y$ acquired by a microscope can be expressed as \begin{equation} y = x \star h + \epsilon, \label{eq:psf} \end{equation} where $x$ is the perfect image, $h$ is the \textsc{psf} of the microscope and $\epsilon$ is the noise, independent of both $x$ and $h$. The process of estimating $x$ from $y$ and some theoretical or measured expression of $h$ is called deconvolution. Deconvolution in the presence of noise is a difficult problem~\cite{Riad1986}. Fortunately, here we do not need to reconstruct the original image, but only our first Gaussian-blurred version of it, \emph{i.e.}, $G_0 = x \star g_{\sigma_0}$ starting from $y_0 = y \star g_{\sigma_0}$.
Indeed, after a reasonable amount of blur in three dimensions, the noise can be neglected and we thus obtain: \begin{equation} y_0 \approx G_0 \star h, \end{equation} or in Fourier space \begin{equation} \mathcal{F}[y_0] = \mathcal{F}[G_0] \times \mathcal{F}[h]. \label{eq:Fourier_conv} \end{equation} Once $\mathcal{F}[h]$ is known, the deconvolution reduces to a simple division in Fourier space. Let us measure $\mathcal{F}[h]$ in an isotropic system, where we can write \begin{eqnarray} \left\langle \left|\mathcal{F}_X[x]\right|^2 \right\rangle &= \left\langle \left|\mathcal{F}_Z[x]\right|^2 \right\rangle. \end{eqnarray} Using Eq.~(\ref{eq:psf}) we obtain \begin{eqnarray} \frac{\left\langle\left|\mathcal{F}_X[y]\right|^2 \right\rangle}{\left|\mathcal{F}_X[h]\right|^2} = \frac{\left\langle \left|\mathcal{F}_Z[y]\right|^2\right\rangle}{\left|\mathcal{F}_Z[h]\right|^2}. \end{eqnarray} Here $\mathcal{F}_X$ indicates the Fourier transform along axis $X$. In point scanning confocal imaging, the \textsc{psf} has negligible lobes along $X$ and $Y$ ($X$ only for line scanning), thus we have \begin{eqnarray} \left|\mathcal{F}_Z[h]\right|^2 = \frac{\left\langle \left|\mathcal{F}_Z[y]\right|^2\right\rangle}{\left\langle\left|\mathcal{F}_X[y]\right|^2 \right\rangle}. \end{eqnarray} For example, a Hermitian kernel (real-valued spectrum) is \begin{eqnarray} \mathcal{F}_Z[h] = \sqrt{\frac{\left\langle \left|\mathcal{F}_Z[y]\right|^2\right\rangle}{\left\langle\left|\mathcal{F}_X[y]\right|^2 \right\rangle}}. \end{eqnarray} Fig.~\ref{fig:deconv}b-c shows the particles localized from the original image and from the deconvolved image. Deconvolution mends both the size overestimation and the imprecision in the $z$ coordinate. This also translates to the radial distribution function (Fig.~\ref{fig:deconv}d), where the spurious shoulder before the first peak disappears. \subsection{From optical to hard-core radius} The method exposed above relies on the uniformity of the brightness within each particle to extract optical radii that are a good approximation to the physical (hard-core) radii. However, a colloidal particle often shows a smooth intensity profile under the microscope, less bright at the edge than at the centre of the particle. This can be due to inhomogeneous fixation of the dye during the colloid synthesis, but the bottom line is the in-plane \textsc{psf}, which is not corrected for by our deconvolution method above. Under such conditions, the particles are detected as smaller than their physical sizes. The real-to-optical size ratio has to be set using our knowledge of the sample. For example one may use the position of the first peak of the $g(r)$ to measure the average hard-core diameter. In general the real-to-optical size ratio may depend on the size of the particles and may evolve in time (\emph{e.g.}, because of photobleaching). We present below a detailed analysis of these issues for a case system. We also note that darker edges lead to a smaller influence of a particle on the $DoG$ response of its neighbours. A smaller optical radius makes the particles optically further from each other. Accordingly, we found that finite dilution corrections were less important in real images than in our synthetic test images. For the same reason, the scaling factor must be measured and applied after --- \emph{not} before --- finite dilution corrections.
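In practice, the kernel measurement and the division of Eq.~(\ref{eq:Fourier_conv}) amount to a few lines of NumPy. The sketch below assumes a cubic stack y0 already blurred by $g_{\sigma_0}$, with $Z$ as axis 0 and $X$ as axis 2, and uses the Hermitian-kernel convention above; no regularisation is included, so very small kernel values may need clipping on noisy data.

\begin{verbatim}
import numpy as np

def deconvolve_z(y0):
    """Divide the spectrum of y0 along Z by the kernel measured from
    the anisotropy of the image itself."""
    power_z = (np.abs(np.fft.rfft(y0, axis=0)) ** 2).mean(axis=(1, 2))
    power_x = (np.abs(np.fft.rfft(y0, axis=2)) ** 2).mean(axis=(0, 1))
    kernel = np.sqrt(power_z / power_x)   # |F_Z[h]|, real by assumption
    spectrum = np.fft.rfft(y0, axis=0) / kernel[:, None, None]
    return np.fft.irfft(spectrum, n=y0.shape[0], axis=0)
\end{verbatim}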
\section{Revealing polydispersity effects on structural orderings in a glassy colloidal hard sphere liquid} \label{sec:yon6} Here we apply our tracking method to access couplings between the spatial particle size distribution and local structural ordering in a supercooled colloidal liquid with size polydispersity. \subsection{Experimental} We used \textsc{pmma} (poly(methyl methacrylate)) colloids sterically stabilized with methacryloxypropyl terminated \textsc{pdms} (poly(dimethyl siloxane)) and fluorescently labelled with rhodamine isothiocyanate chemically bonded to the \textsc{pmma}. The colloids were suspended in a solvent mixture of cis-decalin and cyclohexyl-bromide for both optical index and density matching. To screen any (weak) electrostatic interactions, we dissolved tetrabutylammonium bromide salt, to a concentration of \unit{300}{\nano\mole\per\liter}~\citep{royall2005}. The estimated Debye screening length is \unit{13}{\nano\metre}, well below the length scale of the colloids, which can thus be considered as hard spheres. The \unit{8}{bit} graylevel data was collected on a Leica SP5 confocal microscope, using \unit{532}{\nano\meter} laser excitation and a voxel size of $(\unit{283}{\nano\metre})^2\times\unit{293}{\nano\metre}$. To realize a more precise density matching, the temperature was controlled on both the stage and the objective lens. This setup allows us to alter the buoyancy simply by a temperature change, and thus to set the effective gravity upward, downward or almost to zero. After careful shear melting, the sample was filled into a $\unit{100}{\micro\metre}\times\unit{1}{\milli\metre}$ capillary (Vitrocom) and set on the microscope stage. We then spent a few days to find the temperature corresponding to exact density matching (within \unit{0.1}{K}). This waiting time was enough for crystallites to form on both top and bottom walls. Then we heated up our sample by a few degrees relative to the density-matching temperature in order to make the colloids heavier than the solvent. At the top of the sample (close to the objective lens and thus allowing much clearer imaging) the volume fraction drops and all crystallites melt. Finally, we set the thermostat back to the density-matching temperature, allowing the top of the sample to slowly return to its supercooled state. We could then observe the heterogeneous nucleation from the beginning. We tracked the sample from top to bottom and are thus sure that the crystallites actually form at the wall and do not come from the rest of the sample. Namely, crystallisation in this sample is caused only by heterogeneous nucleation on the substrates, presumably due to the wall-induced enhancement of crystal-like bond orientational ordering \cite{watanabe2011}. \subsection{Global size distribution} We localise the particles using our method with a preblur of only $\sigma_0=1.0$ (see Section \ref{sec:blur}) to be able to detect the smaller particles ($R_{min}\approx \unit{2}{px}$) without expensive oversampling. We checked that choosing a higher $\sigma_0$ truncates the size distribution but has no other effect on its shape. After removing half-overlapping features, we applied a single iteration of finite dilution corrections for both intensities and radii to obtain the optical radius of each particle. We then estimate the hard-core diameter as a function of the optical radius $R$ by locating the first peak of the partial radial distribution function $g_R(r)$ of the particles having $R_i$ close to $R$.
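Such a partial-$g_R(r)$ peak estimate is straightforward to sketch (illustrative only: distances are histogrammed without edge corrections, and the global maximum of the shell-normalised histogram is taken as the first peak):

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def first_peak(centers, radii, R, dR=0.1, rmax=20.0, nbins=200):
    """First-peak position of the partial rdf g_R(r), restricted to
    particles with |R_i - R| < dR."""
    tree = cKDTree(centers)
    sel = np.flatnonzero(np.abs(radii - R) < dR)
    dists = []
    for i in sel:
        for j in tree.query_ball_point(centers[i], rmax):
            if j != i:
                dists.append(np.linalg.norm(centers[i] - centers[j]))
    h, edges = np.histogram(dists, bins=nbins, range=(0, rmax))
    h = h / edges[1:] ** 2        # approximate shell-volume normalisation
    return edges[np.argmax(h)]
\end{verbatim}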
As shown in Fig.~\ref{fig:sizing}a, the real-to-optical size ratio is around $1.5$ and is rather constant with respect to $R$, thus a single overall scaling factor is enough. It can be determined by a single partial $g_R(r)$ with $R$ near the peak of the size distribution, or by locating the peak of $g(\hat{r})$, with $\hat{r}_{ij} = r_{ij}/(R_i+R_j)$. We found that the real-to-optical size ratio was increasing with time due to photobleaching (see Fig.~\ref{fig:sizing}b). We fitted this increase by a linear relation and applied the resulting time-dependent ratio to obtain the real size of each particle. The consistency of our method can be checked by constructing the radial distribution function $g(r)$. In monodisperse hard spheres, the $g(r)$ should have a sharp first peak at $r=2R$ corresponding to hard-core contact. Polydispersity implies hard-core contacts at various $r$ and thus broadens the peak. One can recover a sharp peak by constructing $g(\hat{r})$ (this time $\hat{r}$ is scaled by the real radii, not the optical ones). In Fig.~\ref{fig:sizing}d we successfully used the sizes measured by our method to rectify the first peak. Fig.~\ref{fig:sizing}c shows the size distribution of only $140$ dry colloids measured on high-resolution scanning electron microscopy (\textsc{sem}) images. We also show the size distribution obtained by our \emph{in situ} measurements ($\sim 1.7\times 10^6$ instantaneous sizings). The main peak of the latter compares well with the former once a solvent swelling of $25\%$ in radius is taken into account. However, the small sample size of the \textsc{sem} measurements completely misses the tail toward small sizes, which features two low peaks that probably correspond to secondary and ternary nucleation during the synthesis of the colloids~\cite{bosma2002,Poon2012}. From our data we compute an overall volume fraction of $0.60$, almost constant during the experiment. The polydispersity estimated from \textsc{sem} data is $6.2\%$. This is coherent with a fit of the main peak of our data ($6.9\%$); however, the polydispersity of the whole distribution including the tail toward small sizes is $14.8\%$. We will see below how such a complex size distribution affects the physical behaviour of the system. \begin{figure}[h] \centering \includegraphics{fig_sizing.pdf} \caption{\emph{In situ} sizing of colloids in a glass. (a) Position of the first peak of the partial $g_R(r)$ as a function of the optical radius $R$. The solid line corresponds to a real-to-optical size ratio of $1.5$. (b) Time dependence of the real-to-optical size ratio. The solid line is a linear fit. (c) Size distribution estimated by our algorithm (dashed line). Comparison with the estimation from \textsc{sem} of only $140$ dry particles (steps) is possible once $25\%$ of swelling is taken into account (full line). (d) First peak of the radial distribution function with (full line) and without (dashed) the individual size data.} \label{fig:sizing} \end{figure} \subsection{Polydispersity and structural heterogeneity in a supercooled colloidal liquid} Hard sphere supercooled liquids and glasses contain both medium-range crystal-like bond orientational ordering (\textsc{mrco}) and local icosahedral ordering~\cite{Leocmach2012}. The latter acts as a source of frustration against the expansion of the former and thus against crystallisation. Both kinds of structures can be detected using the bond orientational order (\textsc{boo}) introduced by \citet{steinhardt1983boo} (see Fig.~\ref{fig:mrco_ico_small}).
As we have described elsewhere~\cite{Leocmach2012}, crystal-like bond ordering is well described by the scalar bond order parameter $Q_6$, and icosahedral bond ordering by $w_6$. \textsc{fcc} or \textsc{hcp} crystals under thermal vibrations typically have $Q_6>0.4$~\cite{Lechner2008}. We distinguish \textsc{mrco} from liquid structures by a threshold at $Q_6=0.25$. The perfect 13-particle icosahedron corresponds to the minimum of $w_6$, which is negative. We consider that a neighbourhood is icosahedral when $w_6<-0.023$. \begin{figure}[h] \centering \includegraphics{fig_mrco_ico_small.pdf} \caption{Structure visualisation at $t=0$ (left) and $t=\unit{40}{\hour}$ (right). Small particles ($R<\unit{1.5}{\micro\metre}$) are shown in green, crystal-like ordered particles ($Q_6>0.25$) in red and icosahedral particles and their neighbours in purple. Other particles are not shown.} \label{fig:mrco_ico_small} \end{figure} In Fig.~\ref{fig:size_struc}, we compare the overall size distribution to the size distribution within each remarkable structure. The roles of the particle at the centre of an icosahedron and of the particles surrounding it are asymmetric. The size distribution of the particles at the surface of icosahedra is almost identical to the overall size distribution; however, the size distribution of the particles at the centre of icosahedra features a second peak at radii about $80\%$ of the main peak. A centre-to-surface size ratio near $0.8$ thus seems to stabilize icosahedral order. This is consistent with a recent simulation study by Shimono and Onodera \cite{shimono2012icosahedral}. \begin{figure} \centering \includegraphics{fig_size_struc.pdf} \caption{Size distribution within local structures in a supercooled hard sphere colloidal liquid. (a) With increasing local crystalline bond orientational order: all particles, \textsc{mrco} particles ($0.4>Q_6>0.25$) and crystalline particles ($Q_6>0.4$). Crystalline ordered regions are less polydisperse. (b) Particles at the centre or on the surface of icosahedra ($w_6<-0.023$). A small central particle with large surface particles tends to stabilize icosahedra.} \label{fig:size_struc} \end{figure} We note that the tail of small particles characteristic of our overall size distribution is less pronounced in \textsc{mrco} and almost disappears in well-formed crystals (Fig.~\ref{fig:size_struc}a). The width of the main peak of the distribution does not change significantly upon crystal-like ordering. This means that a small amount of polydispersity is acceptable inside \textsc{mrco}, consistently with the results for the (defective) crystalline lattice by \citet{Fasolo2003}. However, \textsc{mrco} cannot appear where markedly smaller particles are present. We also checked the difference between \textsc{fcc} ($w_4<0$) and \textsc{hcp} ($w_4>0$) to find that they both have exactly the same size distribution (not shown). This suggests that the size distribution controls the formation of \textsc{mrco} but has no influence on its symmetry (and thus, probably, on the crystal polymorph). We stress that this coupling of \textsc{mrco} and icosahedral ordering with the spatial distribution of particle sizes does not imply fractionation, but rather suggests that a rather homogeneous size distribution tends to favour or stabilize \textsc{mrco}, and that a slightly smaller particle size stabilizes the formation of icosahedral order around it. We will see below that fractionation happens later in the crystallisation process.
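For reference, the per-particle $Q_6$ above derives from the standard Steinhardt construction; a minimal sketch is given below (the coarse-grained variants of Ref.~\cite{Lechner2008} additionally average $q_{lm}$ over the neighbourhood, and the third-order invariant $w_6$ requires Wigner 3-$j$ coefficients; both are omitted here):

\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def steinhardt_Q(center, neighbours, l=6):
    """Q_l of one particle from the bond vectors to its neighbours."""
    bonds = (neighbours - center).astype(float)
    bonds /= np.linalg.norm(bonds, axis=1)[:, None]
    theta = np.arccos(np.clip(bonds[:, 2], -1, 1))   # polar angle
    phi = np.arctan2(bonds[:, 1], bonds[:, 0])       # azimuthal angle
    # scipy convention: sph_harm(m, l, azimuthal, polar)
    q_lm = np.array([sph_harm(m, l, phi, theta).mean()
                     for m in range(-l, l + 1)])
    return np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(q_lm) ** 2))
\end{verbatim}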
\subsection{Polydispersity and heterogeneous crystallisation in a supercooled colloidal liquid} As explained above, we triggered heterogeneous crystal nucleation on the top wall of our container. At each time step we can construct a $z$-dependent density profile for any species of interest. In Fig.~\ref{fig:profiles} we show, at two time steps, the density profiles of large particles ($R>\unit{1.5}{\micro\metre}$), small particles ($R<\unit{1.5}{\micro\metre}$) and particles with medium to high crystalline order ($Q_6>0.25$). Icosahedral particles (not shown) are an order of magnitude less numerous than small particles (see Fig.~\ref{fig:mrco_ico_small}) and are basically irrelevant here. Near the walls, the density profiles show oscillations characteristic of layering (see, \emph{e.g.}, Refs. \cite{watanabe2011,kob3}), whose wavelength corresponds to the peak of the size distribution (the `majority species'). We note that the small particles also show layering at the same wavelength, although with a smaller amplitude. \begin{figure} \centering \includegraphics{fig_profiles.pdf} \caption{Instantaneous density profiles at $t=0$ (left) and $t=\unit{40}{\hour}$ (right). Top: large particles ($R>\unit{1.5}{\micro\metre}$) without smoothing (black line) and with a Gaussian smoothing of width $2R_\text{peak}$ (gray area). Bottom: small particles ($R<\unit{1.5}{\micro\metre}$) with the same color code. For all figures, the red dashed curve is the smoothed density of ordered particles ($Q_6>0.25$) irrespective of their size.} \label{fig:profiles} \end{figure} \begin{figure} \centering \includegraphics{fig_small_expelled.pdf} \caption{Heterogeneous nucleation. Successive computer reconstructions of the first three layers from the wall, every \unit{12}{\hour}, seen from the fluid side. Particle sizes are as determined by our method. Small ($R<\unit{1.5}{\micro\metre}$) particles are shown in green. Other particles are coloured according to their crystal-like order. Note how the small particles are expelled from the forming crystallites to concentrate in the grain boundaries.} \label{fig:small_expelled} \end{figure} Once the short-wavelength oscillations are removed via a Gaussian smoothing of width $2R_\text{peak}$, one can see that the density of large particles is constant from top to bottom of the sample (the slight dip on the edges is due to a small misalignment between our $z$ axis and the normal of the wall). This confirms the accuracy of our density matching but also stresses that crystallisation takes place without an increase in density. The smoothed density profile of small particles starts almost flat. However, the small particles are expelled from the forming crystal, and as the crystalline front propagates it pushes in front of itself a growing ridge of small particles, very apparent after $\unit{40}{\hour}$ in both Fig.~\ref{fig:profiles} and Fig.~\ref{fig:mrco_ico_small}. We confirmed this view by looking at the behaviour of small particles and crystalline order in a constant plane (Fig.~\ref{fig:small_expelled} and Supplementary Movie 1). Before crystallisation, small particles are scattered randomly and \textsc{mrco} develops where few very small particles are present. Then, crystallisation proceeds from the \textsc{mrco}, pushing away the small particles, which accumulate in the grain boundaries.
As a consequence of this expulsion, heterogeneous crystallisation does not proceed via a spatially homogeneous layer-by-layer growth, as observed at low polydispersity~\cite{Sandomirski2011}, but by many nucleation-growth events reflecting the initial fluctuations of the \textsc{mrco}. This dynamics is reminiscent of the homogeneous nucleation case~\cite{Kawasaki2010c,Russo2012,Russo2012b}, but enhanced by the wetting of the wall by crystalline bond orientational order \cite{watanabe2011}. To sum up, the small particles affect the growth of the crystal by coupling ordering to diffusion. Without the small particles, moving a grain boundary implies moving particles only locally, and thus propagation is easy. With very small particles present, moving a grain boundary implies transporting these particles by the same amount via diffusion. In the same way, the crystallisation front has to push the small particles in front of it, effectively increasing the polydispersity in the melt and thus slowing its propagation. Indeed, weeks after preparation our samples were still only partially crystallised and highly polycrystalline. We note that this coupling is due to the conserved nature of the species `small particles'. Non-conserved species defined by local structural ordering, such as icosahedra, cannot produce such a coupling.
\section{Conclusion} \label{sec:conclusion} We have presented a new method to extract the coordinates of particles of very diverse sizes even when they are very close to each other (at contact). This opens new experimental possibilities in soft matter systems where the particles are polydisperse (\emph{e.g.}, colloids, emulsions, and granular particles) or even changing size (\emph{e.g.}, phase separation dynamics). Our method also reliably extracts the size of each individual particle, which allows an \emph{in situ} analysis of the consequences of naturally occurring size distributions (beyond monodisperse, bidisperse or Gaussian distributions). We showed that in the case of colloidal hard spheres, a size distribution with a main peak and a tail toward small sizes induces different behaviours than what would be expected from a Gaussian size distribution. Our method allowed us to measure this distribution properly and to look into the interactions between size distribution, local ordering and heterogeneous crystal nucleation in a glass. We found that the presence of a long tail toward small sizes in the size distribution leads to highly polycrystalline materials that may be of interest for engineering purposes. We hope that this novel particle tracking method with a capability of particle-size determination will be applied to a wide range of fields in soft matter physics.
\section*{Acknowledgements} We are grateful to Koshi Hasatani, Anthony Genot and Yannick Rondelez for providing the image of Fig.~\ref{fig:applications}a; to Ken-ichiro Murata for providing the image of Fig.~\ref{fig:applications}b; to John Russo for providing us with simulation data of dense hard spheres, which are used in Fig.~\ref{fig:perfect}e; to Hideyo Tsurusawa for the synthesis of the colloid and the confocal data of Fig.~\ref{fig:deconv}; and to Rebecca Rice and C. Paddy Royall for \textsc{sem} measurements of the size distribution of dried particles. This work was partially supported by a grant-in-aid from the Ministry of Education, Culture, Sports, Science and Technology, Japan and also by Aihara Project, the FIRST program from JSPS, initiated by CSTP.
\section{Introduction} \label{sec:intro} Let $\Gamma$ be a countable group and let $S$ be a finite symmetric subset of $\Gamma$. The \textbf{co-spectral radius} of a subgroup $H\subset \Gamma$ (with respect to $S$) is defined as the norm of the operator $M\colon L^2(\Gamma/H)\to L^2(\Gamma/H)$: \[ M\phi(\gamma H)=\frac{1}{|S|}\sum_{s\in S}\phi(s\gamma H),\quad \rho(\Gamma/H):=\|M\|.\] Subgroups with co-spectral radius $1$ for every choice of $S$ are called {\bf co-amenable}. If the group $\Gamma$ is finitely generated, one only needs to verify that the co-spectral radius is $1$ for some generating set $S$. In this paper we investigate the behavior of the co-spectral radius under intersections. For general subgroups $H_1,H_2\subset \Gamma$ there is not much that can be said about $\rho(\Gamma/ (H_1\cap H_2))$ other than the trivial inequality \[\rho(\Gamma/ (H_1\cap H_2)) \leq \min\{\rho(\Gamma/H_1),\rho(\Gamma/H_2)\}.\] The problem of finding lower bounds on the co-spectral radius of an intersection is even more dire, as there are examples of non-amenable $\Gamma$ with two co-amenable subgroups $H_1,H_2$ with trivial intersection (see Example \ref{ex:Wreath}). However, when considering all conjugates simultaneously, we have the following elementary lower bound on the co-spectral radius of an intersection. Here and for the remainder of the paper, we write $H^g:=g^{-1}Hg$. \begin{thm}\label{thm:DetInt} Let $\Gamma$ be a finitely generated group and let $S$ be a finite symmetric generating set. Let $H_1,H_2$ be subgroups of $\Gamma$ and assume that $H_1$ is co-amenable. Then \[\sup_{g\in \Gamma} \rho(\Gamma/ (H_1\cap H_2^g))= \rho(\Gamma/H_2).\] \end{thm} The supremum over all conjugates in the statement of the theorem is in fact necessary, as shown by Example \ref{ex:Wreath}. However, in the presence of invariance, this can be improved upon: e.g., if $H_2=N$ is normal, Theorem \ref{thm:DetInt} immediately implies that $\rho(\Gamma/ (H_1\cap N))= \rho(\Gamma/N)$. Our aim is to generalize this to invariant random subgroups. An \textbf{invariant random subgroup} of $\Gamma$ (see \cite{AGV14}) is a random subgroup of $\Gamma$ whose distribution is invariant under conjugation. Invariant random subgroups simultaneously generalize the notions of finite index subgroups and normal subgroups. They have proven to be very useful tools in measured group theory (see for example \cite{7SamN,Bowen2014,BLT}). Many results on invariant random subgroups are obtained as generalizations of statements previously known for normal subgroups. We follow this tradition and show that one can remove the supremum in Theorem \ref{thm:DetInt} when $H_2$ is an IRS. The co-spectral radius of a subgroup $H$ is invariant under conjugation of $H$ by elements of $\Gamma$, so the co-spectral radius of an ergodic IRS $H$ is constant a.s. and therefore $\rho(\Gamma/H)$ is well-defined. We say an IRS is co-amenable if it is co-amenable a.s. Our main result is: \begin{thm} Let $\Gamma$ be a countable group with a finite symmetric subset $S$. Let $H_1\subset \Gamma$ be a deterministic co-amenable subgroup and let $H_2$ be an ergodic invariant random subgroup of $\Gamma$. Then \[ \rho(\Gamma/ (H_1\cap H_2))=\rho(\Gamma/ H_2)\] almost surely. \label{thm:main-spectral} \end{thm} This result was inspired by a question of Alex Furman, asking whether the intersection of two co-amenable IRSes remains co-amenable. A positive answer follows from Theorem \ref{thm:main-spectral} applied in the case when both $H_1,H_2$ are co-amenable IRSes.
\begin{cor} Let $\Gamma$ be a countable group and let $H_1,H_2$ be independent co-amenable invariant random subgroups. Then the intersection $H_1\cap H_2$ is co-amenable. \label{cor:intersectIRS} \end{cor} \begin{rmk} The independence assumption in the above corollary is necessary (see Example \ref{ex:independence}).\end{rmk} Combined with Cohen-Grigorchuk's co-growth formula \cite{Cohen,Grig} and Gekhtman-Levit's lower bound on the critical exponent of an IRS of a free group \cite[Thm 1.1]{GL19}, Theorem \ref{thm:main-spectral} yields the following corollary on the critical exponents of subgroups of the free group. \begin{cor} Let $F_d$ be a free group on $d\geq 2$ generators and let $S$ be the standard symmetric generating set. Write $\delta(H)$ for the critical exponent of a subgroup $H$. Suppose $H_1$ is a co-amenable subgroup of $F_d$ and $H_2$ is an ergodic IRS of $F_d$. Then \[ \delta(H_1\cap H_2)=\delta(H_2),\] almost surely. \end{cor} \subsection{Outline of the proof} \label{sec:outline} We outline the proof of Theorem \ref{thm:main-spectral}. For the sake of simplicity we restrict to the case when both $H_1,H_2$ are co-amenable. We realize $H_1$ and $H_2$ as stabilizers of $\Gamma$-actions on suitable spaces $X_1$ and $X_2$. Since $H_1$ is a (deterministic) subgroup, we can take $X_1=\Gamma / H_1$. On the other hand, $H_2$ is an ergodic IRS, so $X_2$ is a probability space with an ergodic measure-preserving action of $\Gamma$. The intersection $H_1\cap H_2$ is then the stabilizer of a point in $X_1\times X_2$ that is deterministic in the first variable and random in the second. Using an analogue of the Rokhlin lemma, we can find a positive measure subset $E$ of $X_2$ that locally approximates the coset space $H_2\backslash\Gamma$. The product of $E$ with $X_1$ will locally approximate the coset space of $(H_1\cap H_2)\backslash \Gamma$. The co-amenability of $H_2$ means that the set $E$ contains a subset $P$ that is nearly $\Gamma$-invariant. The product of such a set with a F{\o}lner set $F$ in $X_1$ should be a nearly $\Gamma$-invariant set in $X_1\times E$, and hence witnesses the amenability of the coset graph $(H_1\cap H_2)\backslash \Gamma$. The latter implies that $H_1\cap H_2$ is co-amenable. The actual proof is more complicated: if the product system is not ergodic, one has to show that the product set is nearly invariant after restriction to each ergodic component, and not just on average. Otherwise we can only deduce the bound from Theorem \ref{thm:DetInt}, involving a supremum over all conjugates. Obtaining control on each ergodic component is a key part of the proof, where we actually use the invariance of $H_2$. To prove Theorem \ref{thm:main-spectral} for the co-spectral radius, one should replace the F{\o}lner set in $X_2$ with a function $f_2$ that (nearly) witnesses the fact that $\rho(\Gamma/H_2)=\lambda_2$, and adapt the remainder of the proof accordingly. \subsection*{Outline of the paper} Section \ref{sec:recap} contains background material. In Section \ref{sec:deterministic} we prove the deterministic bound on the spectral radius given by Theorem \ref{thm:DetInt}. Next, in Section \ref{sec:whf}, we rephrase the co-spectral radius of the (discrete) orbits in terms of the \emph{embedded spectral radius} of a (continuous) measure space. The proof of Theorem \ref{thm:main-spectral} is then completed in the final section. \subsection*{Acknowledgments} The authors thank Gabor Pete and Tianyi Zheng for showing us a related problem for intersections of percolations on $\ensuremath{\mathbb{Z}}$. We thank Alex Furman for suggesting the problem as well as for helpful discussions.
MF thanks Miklos Abert for useful discussions. We thank the University of Illinois at Chicago for providing support for a visit by MF. MF was partly supported by ERC Consolidator Grant 648017. WvL is supported by NSF DMS-1855371. \section{Background} \label{sec:recap} \subsection{Co-amenability} \label{sec:coamenable} Let $\Gamma$ be a finitely generated group and let $S$ be a finite symmetric set of generators. A subgroup $H$ of $\Gamma$ is called \textbf{co-amenable} if the Schreier graph $\Sch(H\backslash \Gamma,S)$ is amenable, i.e.\ if for any $\varepsilon>0$ there exists a finite set $F\subset H\backslash \Gamma$ such that $|F\Delta FS|\leq \varepsilon |F|$. Such sets will be called $\varepsilon$-F{\o}lner sets. Alternatively, a subgroup $H$ is co-amenable if and only if the representation $\ell^2(\Gamma/H)$ has almost invariant vectors, or equivalently if $\|M\|_{L^2(H\backslash \Gamma)}=1.$ \subsection{Invariant random subgroups} \label{sec:IRS} Let $\Sub_\Gamma$ be the space of subgroups of $\Gamma$, equipped with the topology induced from $\{0,1\}^\Gamma$. An \textbf{invariant random subgroup} is a probability measure $\mu\in \mathcal{P}(\Sub_\Gamma)$ which is invariant under conjugation by $\Gamma$. An IRS is called {\bf co-amenable} if $$\mu(\{H\in\Sub_\Gamma| H \textrm{ is co-amenable}\})=1.$$ Similarly, we say that an IRS $H$ has {\bf co-spectral radius at least} $\lambda$ if $\rho(H\backslash\Gamma)\geq \lambda$ almost surely. For any action $\Gamma\curvearrowright X$ and $x\in X$ write $\Gamma_x$ for the stabilizer of $x$. Every IRS can be realized as the stabilizer of a random point in a probability measure preserving system: \begin{thm}[Abert-Glasner-Virag \cite{AGV14}] For every IRS $\mu$, there exists a standard Borel probability space $(X,\nu)$ and a Borel p.m.p. $\Gamma$-action on $(X,\nu)$ such that $\mu=\int_{X}\delta_{\Gamma_x} \, d\nu(x).$ \end{thm} \subsection{Ergodic decomposition of infinite measures}\label{sec:decomp} The material in this subsection is well-known to experts but difficult to locate in the literature. Our goal is to construct an ergodic decomposition for measure-preserving actions of countable groups on spaces with an infinite measure. We deduce this from the corresponding result for nonsingular actions on probability spaces: \begin{thm}[{Greschonig-Schmidt \cite[Thm 1]{ergodic-decomp}}] Let $\Gamma$ be a countable group and let $\Gamma\curvearrowright (X,\Sigma_X, \nu)$ be a nonsingular Borel action on a standard Borel probability space. Then there exist a standard Borel probability space $(Z,\Sigma_Z, \tau)$ and a family of quasi-invariant, ergodic, pairwise mutually singular probability measures $\{\nu_z\}_{z\in Z}$ with the same Radon-Nikodym cocycle as $\nu$, and such that for every $B\in\Sigma_X$, we have \begin{equation} \nu(B)=\int_Z \nu_z(B) d\tau(z). \label{eq:disintegrate} \end{equation} \label{thm:greschonig-schmidt} \end{thm} As an application we have: \begin{cor} Let $\Gamma$ be a countable group and let $\Gamma\curvearrowright (X_1,\Sigma_{X_1}, \nu_1)$ and $\Gamma\curvearrowright (X_2,\Sigma_{X_2}, \nu_2)$ be measure-preserving Borel actions on standard Borel spaces. Suppose that $(X_1,\nu_1)$ is ergodic and that $\nu_2(X_2)=1.$ Then there exists a standard Borel probability space $(Z,\Sigma_Z,\tau)$ and a family of $\Gamma$-invariant, ergodic, pairwise mutually singular measures $\{\nu_z\}_{z\in Z}$ on $X_1\times X_2$ such that for every $B\in\Sigma_{X_1\times X_2}$, we have \begin{equation} \nu(B)=\int_Z \nu_z(B)d\tau(z).
\label{eq:disintegrate-us} \end{equation} Moreover, for every measurable set $F\subset X_1$ and $z\in Z$ we have $$\nu_z(F\times X_2)=\nu_1(F).$$ \label{cor:erg-decomp} \end{cor} \begin{proof} Fix a countably-valued Borel function $w\colon X_1\to\mathbb R_{>0}$, such that $\int_{X_1} w \, d\nu_1=1$. Write $w=c_i$ on the set $A_i$, where $\{A_i\}$ is a Borel partition of $X_1$. Then $w(x_1)\,d\nu_1(x_1)\,d\nu_2(x_2)$ is a $\Gamma$-quasi-invariant probability measure on $X_1\times X_2$ with Radon-Nikodym cocycle $dw(x_1,x_2,\gamma)=\frac{w(\gamma x_1)}{w(x_1)}$. Let $(Z,\Sigma_Z,\tau)$, $z\mapsto (w\nu)_z$, be its ergodic decomposition as provided by Theorem \ref{thm:greschonig-schmidt}. Now pass back from $w(x_1)d\nu_1(x_1)d\nu_2(x_2)$ to $\nu_1\times\nu_2$ by setting $$d\nu_z(x_1,x_2):=w(x_1)^{-1}d(w\nu)_z.$$ Since $\{(w\nu)_z\}_z$ are ergodic and pairwise mutually singular, the same is true for $\{\nu_z\}_z$. Since the $(w\nu)_z$ have Radon-Nikodym cocycle $dw$, the measures $\nu_z$ are $\Gamma$-invariant. It is easy to verify that Equation \eqref{eq:disintegrate} implies the corresponding Equation \eqref{eq:disintegrate-us}. Finally, to satisfy the last identity, choose a positive measure subset $F\subset X_1$ and renormalize $\nu_z$ and $\tau$ as follows: $$\nu_z\mapsto \frac{\nu_1(F)}{\nu_z(F\times X_2)}\nu_z,\quad d\tau(z)\mapsto \frac{\nu_z(F\times X_2)}{\nu_1(F)}d\tau(z).$$ By ergodicity of $\nu_1$, this normalization does not depend on the choice of $F$. \end{proof} \subsection{Ergodic theory of equivalence relations} \label{sec:relations} Let $(X,\nu)$ be a probability measure space and let $\varphi_i:U_i\to X$, $i\in I$, be a family of non-singular measurable maps defined on subsets $U_i$ of $X$. The triple $(X,\nu,(\varphi_i)_{i\in I})$ is called a \textbf{graphing}. We assume that $(\varphi_i)_{i\in I}$ is \textbf{symmetric}, i.e. for each $i\in I$ the map $\varphi_i^{-1}\colon \varphi_i(U_i)\to U_i$ is also in the set $(\varphi_i)_{i\in I}$. A graphing is {\bf finite} if the index set $I$ is finite. \begin{rmk} In our applications of this theory, $X$ will be a finite measure subset (not necessarily invariant) of a measure-preserving action of $\Gamma$, equipped with the graphing corresponding to a finite symmetric generating set $S$ of $\Gamma$, and $\nu$ will be the restricted measure or the restriction of an ergodic component. \end{rmk} Let $\mathcal R$ be the orbit equivalence relation generated by the maps $(\varphi_i)_{i\in I}$. A measured graphing yields a random graph in the following way: For every $x\in X$, let $\mathcal G_x$ be the graph with vertex set given by the equivalence class $[x]_{\mathcal R}$ and place an edge between $y,z\in [x]_{\mathcal R}$ whenever $z=\varphi_i(y)$ for some $i\in I$ (multiple edges are allowed). The graphs $\mathcal G_x$ have degrees bounded by $|I|$ and are undirected since $(\varphi_i)_{i\in I}$ is symmetric. If we choose a $\nu$-random point $x$, the resulting graph $\mathcal G_x$ is a random rooted graph. The properties of $\mathcal G_x$ will depend on the graphing. For example, if the graphing consists of measure preserving maps then the resulting random graph is unimodular (see \cite{AL}). Suppose from now on that the graphing is measure-preserving.
Then the \textbf{mass transport principle} \cite{AL} asserts that for any measurable function $K:\mathcal R\to \ensuremath{\mathbb{R}}$, we have \begin{equation}\label{eq:MTPsub} \int_X\left( \sum_{x'\in [x]_{\mathcal R}}K(x,x')\right) \, d\nu(x)=\int_X\left( \sum_{x\in [x']_{\mathcal R}}K(x,x')\right) \, d\nu(x'). \end{equation} \section{Co-spectral radius for deterministic intersections}\label{sec:deterministic} In this section, we prove Theorem \ref{thm:DetInt}, which gives the elementary deterministic lower bound on the supremum of co-spectral radii over all conjugates. Then we show by example that consideration of all conjugates is necessary. This example will also show that the independence assumption in Corollary \ref{cor:intersectIRS} is necessary for the intersection of a pair of co-amenable IRSes to be co-amenable. \begin{proof}[Proof of Theorem \ref{thm:DetInt}] As in the introduction, we let $M:=\frac{1}{|S|}\sum_{s\in S}s\in \mathbb C[\Gamma]$. We have the following identity between the unitary representations of $\Gamma$: $$L^2(H_1\backslash\Gamma)\otimes L^2(H_2\backslash\Gamma)\simeq \bigoplus_{g\in H_1\backslash \Gamma/H_2} L^2((H_1\cap H_2^g)\backslash \Gamma).$$ Write $\pi_1,\pi_2$ for the unitary representations corresponding to $L^2(H_1\backslash\Gamma)$ and $L^2(H_2\backslash\Gamma)$. The above identity implies that \[ \sup_{g\in H_1\backslash \Gamma/H_2} \rho((H_1\cap H_2^g)\backslash \Gamma)=\|(\pi_1\otimes \pi_2)(M)\|.\] To prove the theorem, it is enough to verify that $$\|(\pi_1\otimes \pi_2)(M)\|\geq \|\pi_2(M)\|=\rho(H_2\backslash\Gamma).$$ Let $\varepsilon>0$. Choose unit vectors $u_1\in L^2(H_1\backslash\Gamma)$ and $u_2\in L^2(H_2\backslash\Gamma)$ such that $\langle \pi_1(s)u_1, u_1\rangle \geq 1-\varepsilon$ for all $s\in S$ and $\langle \pi_2(M)u_2,u_2\rangle\geq \|\pi_2(M)\|-\varepsilon$. Then \begin{align*}\langle (\pi_1\otimes \pi_2)(M) u_1\otimes u_2,u_1\otimes u_2\rangle =&\frac{1}{|S|}\sum_{s\in S} \langle \pi_1(s)u_1,u_1\rangle \langle \pi_2(s)u_2,u_2\rangle\\ \geq& \frac{1}{|S|}\sum_{s\in S} (1-\varepsilon)\langle \pi_2(s)u_2,u_2\rangle\\ \geq & (1-\varepsilon)(\|\pi_2(M)\|-\varepsilon).\end{align*} Letting $\varepsilon\to 0$ we conclude that $\|(\pi_1\otimes \pi_2)(M)\|\geq \|\pi_2(M)\|.$ \end{proof} The supremum in the inequality is indeed necessary. Below we construct an example of a non-amenable finitely generated group $\Gamma$ with two co-amenable subgroups $H_1,H_2$ such that the intersection $H_1\cap H_2$ is trivial. In particular $$\rho(\Gamma)=\rho((H_1\cap H_2)\backslash \Gamma)<\rho(H_2\backslash\Gamma)=1.$$ \begin{ex}\label{ex:Wreath} Let $\Gamma := F_2^{\oplus \ensuremath{\mathbb{Z}}} \rtimes \ensuremath{\mathbb{Z}}$, where $F_2$ stands for the free group on two generators. The group is obviously non-amenable. Let $a,b$ be the standard generators of $F_2$ and let $s$ be the generator of the copy of $\ensuremath{\mathbb{Z}}$ in $\Gamma$. The triple $\{s,a,b\}$ generates $\Gamma$. Put $S:=\{s,a,b,s^{-1},a^{-1},b^{-1}\}.$ For any subset $E\subset \ensuremath{\mathbb{Z}}$ let $H_E:=F_2^{\oplus E}\subset \Gamma$. Now let $A,B$ be disjoint subsets of $\ensuremath{\mathbb{Z}}$ containing arbitrarily long segments. Since $A\cap B=\varnothing$, the intersection $H_A\cap H_B=1$ is not co-amenable.
On the other hand, we claim that for any subset $C$ containing arbitrarily long segments, $H_C$ is co-amenable, so that in particular $H_A$ and $H_B$ are co-amenable: Indeed, suppose $C\subseteq\ensuremath{\mathbb{Z}}$ contains arbitrarily long segments. Then for any $g\in \Gamma$, the Schreier graphs for $H_C$ and $H_C^g$ are isomorphic, so $\rho(H_C\backslash\Gamma)=\rho(H_C^g\backslash\Gamma)$. For every $n\in\mathbb N$ \[\rho(H_C\backslash \Gamma)=\rho(H_C^{s^n}\backslash \Gamma)=\rho(H_{C-n}\backslash\Gamma).\] Let $(n_k)_{k\in\mathbb N}$ be a sequence such that $\{-k,-k+1,\ldots,k-1,k\}\subset C-n_k$. Then $H_{C-n_k}$ converges to $H_{\mathbb Z}$ in ${\rm Sub}(\Gamma)$ as $k\to\infty$. Since the spectral radius is lower semi-continuous on the space of subgroups, we get \[\rho(H_C\backslash \Gamma)=\liminf_{k\to\infty}\rho(H_{C-n_k}\backslash \Gamma)\geq \rho(H_{\mathbb Z}\backslash \Gamma)=\rho(\mathbb Z)=1.\] \end{ex} \begin{ex} The above example also shows that for the intersection of two co-amenable IRSes to be co-amenable (Corollary \ref{cor:intersectIRS}), the independence assumption is necessary. Indeed, let $\Gamma$ be as in the previous example and let $A$ be an invariant percolation on $\ensuremath{\mathbb{Z}}$ such that both $A$ and its complement contain arbitrarily long segments (e.g. Bernoulli percolation). Then $H_A$ and $H_{A^c}$ are co-amenable but their intersection is trivial. \label{ex:independence}\end{ex} \section{Embedded spectral radius} \label{sec:whf} Let us introduce some terminology. Let $(X,\nu)$ be a measure preserving $\Gamma$-action, and write $\ensuremath{\mathcal{R}}$ for the corresponding orbit equivalence relation. We shall assume that $\nu$ is $\sigma$-finite but not necessarily finite. We recall that for every $x\in X$, $\mathcal G_x$ is the labeled graph with vertex set $[x]_\ensuremath{\mathcal{R}}$ and edges $(y,s y)$ for $y\in \Gamma x$ and $s\in S$, each labeled by $s$. \begin{dfn}\label{def:FinComp} A set $P\subset X$ is called a \textbf{finite connected component} if for almost all $x\in P$ the connected component of $x$ in the graph $P\cap \mathcal{G}_x$ is finite. In other words, the graphing restricted to $P$ generates a finite equivalence relation. \end{dfn} For any subset $P\subset X$ write $\partial P:=SP\setminus P $ for the (outer) boundary and ${\rm int}(P):=P\setminus \partial(X\setminus P)$ for the interior of $P$. \begin{dfn}\label{def:WeakSpectralRad} Let $(X,\nu)$ be a measure-preserving $\Gamma$-action. We say that $(X,\nu)$ has \textbf{embedded spectral radius} $\lambda$ if for every finite connected component $P\subset X$ of finite measure and every $f\in L^2(X,\nu)$ supported on the interior ${\rm int}(P)$, we have $$\langle (I-M)f, f\rangle \geq (1-\lambda)\|f\|^2,$$ and $\lambda$ is minimal with this property. \end{dfn} \begin{rmk} Using the monotone convergence theorem, we may assume that $f$ in the above definition is bounded. Further, taking the absolute value of $f$ leaves the right-hand side unchanged, and decreases the left-hand side. Therefore it suffices to consider nonnegative functions $f\geq 0$. \label{rmk:nonnegative} \end{rmk} Our goal in this section is to prove that the embedded spectral radius of a measure preserving system $\Gamma\curvearrowright (X,\nu)$ is detected by the co-spectral radius along orbits: \begin{prop}\label{prop:WHandCoAm} Let $(X,\nu)$ be a $\sigma$-finite measure-preserving $\Gamma$-system.
Then the stabilizer of almost every point has co-spectral radius at least $\lambda$ if and only if almost every ergodic component of $\nu$ has embedded spectral radius at least $\lambda$. \end{prop} \begin{rmk} This result can be used to give examples whose embedded spectral radius is strictly less than the spectral radius of $M$ on $L^2_0(X,\nu)$. This happens for example when $\Gamma$ is a non-abelian free group and $X=X_1\times X_2$ is a product of an essentially free action $X_1$ with an action $X_2$ that has no spectral gap. In this case, the graphs $\mathcal G_x$ are just copies of the Cayley graph of $\Gamma$, so their spectral radius is bounded away from $1$. On the other hand, the spectral radius of $M$ on $L^2_0(X,\nu)$ is $1$, because it contains $L^2_0(X_2,\nu_2)$. \end{rmk} \begin{proof} By passing to ergodic components we can assume without loss of generality that $(X,\nu)$ is ergodic. If $(X,\nu)$ is periodic then there is nothing to prove, so henceforth we will assume that $(X,\nu)$ is an aperiodic measure preserving ergodic system. First let us prove that if the co-spectral radius of the stabilizer $\Gamma_x$ is at least $\lambda$, then $X$ has embedded spectral radius at least $\lambda$. Let $\varepsilon>0$ be arbitrary. Then $\nu$-almost every orbit $\mathcal{G}_x$ supports a function $f_x:\mathcal{G}_x\to \ensuremath{\mathbb{R}}$ such that \begin{equation} \langle (I-M)f_x,f_x\rangle \leq (1-\lambda+\varepsilon)\|f_x\|^2. \label{eq:rhogeqlambda} \end{equation} Since $\mathcal{G}_x$ is countable, using the Monotone Convergence Theorem, we can assume $f_x$ is supported on a finite ball. Let $R_x>0$ be minimal such that the interior of the ball $B_{\mathcal G_x}(x,R_x)$ of radius $R_x$ around $x$ supports a function $\psi_x$ satisfying \eqref{eq:rhogeqlambda}. The map $x\mapsto R_x$ is measurable, so we can choose $R_0>0$ such that $$\nu\, (\{x\in X \mid R_x\leq R_0\}) >0$$ and put $X_1:=\{x\in X \mid R_x\leq R_0\}.$ Since there are only finitely many rooted graphs of radius $R_0$ labeled by $S$, there exists a positive measure set $X_2\subset X_1$ such that for all $x\in X_2$, the rooted graphs $(B_{\mathcal G_x}(x,R_0),x)$ are all isomorphic to some $(\mathcal G,o)$ as rooted $S$-labeled graphs. By restricting to a smaller subset, we can assume that $\nu(X_2)$ is finite. Fix $\psi:\mathcal{G}\to\ensuremath{\mathbb{R}}$ satisfying $$\langle (I-M)\psi, \psi\rangle \leq (1-\lambda+\varepsilon)\|\psi\|^2,$$ and for $x\in X_2$, let $B_x\subset X$ be the image of $\mathcal{G}$ via the unique labeled isomorphism $(\mathcal G,o)\simeq (B_{\mathcal{G}_x}(x,R_0),x)$. Let $S'$ be the set of all products of at most $R_0$ elements of $S$. At this point we need to use a Rokhlin-type lemma, which will be stated and proved below (see Lemma \ref{lem:TurboRokhlin}). Upon applying this to the graphing $(S'X_2,\nu, S')$ we find a partition $S'X_2=B\sqcup \bigsqcup_{j=1}^N A_j$ with $\nu(B)<\nu(X_2)/2$, such that $A_j\cap sA_j=\{x\in A_j\mid sx=x\}$ for every $s\in S'$. This translates to the condition that $B_{x}$ and $B_{x'}$ are disjoint for every distinct pair of points $x,x' \in A_j$. Since the sets $A_j$ cover a subset of $X_2$ of measure at least $\nu(X_2)/2$, there exists $j$ such that $X_3:=X_2\cap A_j$ has positive measure. The set $P:=\bigcup_{x\in X_3} B_x$ is then a disjoint union of its finite connected components $B_x$, so it is a finite connected component in the sense of Definition \ref{def:FinComp}.
The function $\psi:\mathcal{G}\to\ensuremath{\mathbb{R}}$ naturally induces a function $f:P\to\ensuremath{\mathbb{R}}$ defined by $f|_{B_x} := \psi$ for all $x\in X_3$, where we have identified $B_x$ with $\mathcal{G}$ using the isomorphism of rooted $S$-labeled graphs $(B_x, x)\simeq (\mathcal{G},o)$. Then we easily verify $\langle (I-M)f, f\rangle \leq (1-\lambda+\varepsilon)\|f\|^2$, namely $$\|f\|^2 = \int_{X_3} \|\psi\|^2 d\nu = \nu(X_3) \|\psi\|^2,$$ and similarly $$ \langle Mf, f\rangle = \int_{X_3} \langle M\psi,\psi\rangle \, d\nu = \nu(X_3) \langle M\psi , \psi\rangle.$$ Taking $\varepsilon\to 0$, we see that $X$ has embedded spectral radius at least $\lambda$. We prove the other direction. The proof will use the mass transport principle for unimodular random graphs. In our case the unimodular random graph is given by $(P_x^o,x)$, where $x\in P$ is $\nu|_P$-random and $P$ is the finite connected component defined below. We argue by contradiction, so assume that $(X,\nu)$ has embedded spectral radius at least $\lambda$ but at the same time the stabilizers have co-spectral radius $\rho<\lambda$ with positive probability. By ergodicity, there exists an $h>0$ such that $\rho(\mathcal{G}_x)\leq\lambda-h$ a.s. Since $X$ has embedded spectral radius at least $\lambda$, there exists $f\in L^2(X,\nu)$ nonzero and supported on the interior of a finite connected component $P\subseteq X$ with $\nu(P)<\infty$ such that \begin{equation}\label{eq:WeakHypCnd} \langle (I-M)f,f\rangle \leq \left(1-\lambda+\frac{h}{2}\right)\|f\|^2.\end{equation} As in Section \ref{sec:relations}, let $\mathcal{R}_P$ be the equivalence relation generated by the graphing on $P$. We write $P_x^o:=[x]_{\mathcal{R}_P}$ for the connected component of $x\in P$. Define $K:\mathcal{R}_P\to\ensuremath{\mathbb{R}}$ by \begin{equation} K(x,y):=\frac{f(x)^2}{\|f\|_{P_x^o}^2} (2|S|)^{-1} \sum_{s\in S} (f(ys)-f(y))^2. \end{equation} By the mass transport principle, we have \begin{equation} \int_P \sum_{x\in P_y^o} K(x,y) \, d\nu(y) = \int_P \sum_{y\in P_x^o} K(x,y) \, d\nu(x). \label{eq:mtp} \end{equation} We start by computing the integrand on the right-hand side. Rewriting $$(f(ys)-f(y))^2=f(ys)(f(ys)-f(y))+f(y)(f(y)-f(ys)),$$ we find (using that $S$ is symmetric) for every $x\in X$ that \begin{align*} \sum_{y\in P_x^o} K(x,y) &= \frac{f(x)^2}{\|f\|_{P_x^o}^2} |S|^{-1} \sum_{y\in P_x^o} \sum_{s\in S} f(y)\left(f(y)-f(ys)\right) \\ &= \frac{f(x)^2}{\|f\|_{P_x^o}^2} \left\langle (I-M)f, f\right\rangle_{P_x^o}. \end{align*} Using $\rho(\mathcal{G}_x)\leq\lambda-h$ and $f\geq 0$, we can estimate \begin{equation} \sum_{y\in P_x^o} K(x,y)\geq (1-\lambda+h) f(x)^2. \label{eq:sumy-comp} \end{equation} Therefore we have the following estimate for the right-hand side in the Mass Transport Equation \eqref{eq:mtp}: \begin{equation} \int_P \left(\sum_{y\in P_x^o} K(x,y)\right) \, d\nu(x)\geq (1-\lambda+h) \|f\|^2. \label{eq:MTPsumy} \end{equation} Next, we compute the integrand on the left-hand side of the Mass Transport Equation \eqref{eq:mtp}, namely for $y\in X$, we have \begin{align*} \sum_{x\in P_y^o} K(x,y) &= \sum_{x\in P_y^o} \frac{f(x)^2}{\|f\|^2_{P_y^o}} (2|S|)^{-1} \sum_{s\in S} (f(ys)-f(y))^2 \\ &= f(y)(I-M)(f)(y), \end{align*} where we used that $S$ is symmetric and the action is measure-preserving. Hence, integrating the above equation over $y$ and using the mass transport principle to estimate this by the right-hand side of Equation \eqref{eq:MTPsumy}, we find $$\langle (I-M)f,f\rangle \geq (1-\lambda+h)\|f\|^2.$$ This contradicts the choice of $f$ in Equation \eqref{eq:WeakHypCnd}.
\end{proof} We end this section with the following technical Rokhlin-type lemma that was used in the above proof: \begin{lem}\label{lem:TurboRokhlin} Let $(X,\nu,(\varphi_i)_{i\in I})$ be a finite, measure-preserving, symmetric graphing on a finite measure space. Then, for every $\delta>0$ there exists a measurable partition $X=B\sqcup \bigsqcup_{j=0}^N A_j$, such that $\nu(B)\leq \delta$ and, for every $i\in I$ and every $j$, $$A_j \cap \varphi_i (A_j\cap U_i)= \{x\in A_j\cap U_i \mid \varphi_i(x)=x\}.$$ \end{lem} \begin{proof} We start by proving the lemma for a single measure preserving invertible map $\varphi\colon U\to \varphi(U).$ Since we do not assume that $\varphi$ is defined on all of $X$, we need to treat separately the subset of elements where $\varphi$ can be applied only finitely many times. For any $n\in \mathbb N$ define $$E_n:=\{x\in X\mid \varphi^{n-1}(x)\in U \text{ but } \varphi^n(x)\not\in U\}.$$ Put $A_0=\bigcup_{n=0}^\infty E_{2n}$ and $A_1=\bigcup_{n=0}^\infty E_{2n+1}.$ We obviously have $A_0\cap A_1=\emptyset$, $\varphi^{\pm 1}(A_0)\subset A_1$ and $\varphi^{\pm 1}(A_1)\subset A_0$. This reduces the problem to the subset $Y:=X\setminus \bigcup_{n=0}^\infty E_n$. By definition, $\varphi(Y)=Y$. We further decompose $Y$ into the periodic and aperiodic parts $Y^{\rm p}, Y^{\rm ap}$. The periodic part can be partitioned into a fixed point set $A_2=\{x\in Y^{\rm p}\mid \varphi(x)=x\}$, finitely many sets $A_3,\ldots, A_M$ permuted by $\varphi$, and a remainder $B_1$ of measure $\nu(B_1)<\delta/2$ coming from large odd periods. By the usual Rokhlin lemma, the aperiodic part $Y^{\rm ap}$ can be decomposed as $Y^{\rm ap}=B_2\sqcup A_{M+1}\sqcup\ldots \sqcup A_N$ where $\nu(B_2)<\delta/2$, $\varphi(A_{M+k})=A_{M+k+1}$ for all $M+k<N$ and $\varphi(A_N)\subset B_2.$ Put $B=B_1\cup B_2$. This ends the construction for a single map. Suppose now that the graphing consists of $d$ maps $\varphi_1,\ldots,\varphi_d$ and their inverses. For each $i=1,\ldots,d$ there exists a partition $X=B^i\sqcup \bigsqcup_{j=1}^{N_i}A_j^i$ such that $\nu(B^i)<\frac{\delta}{d}$ and $\varphi_i (A_j^i\cap U_i)\cap A_j^i= \{x\in (A_j^i\cap U_i) \mid \varphi_i(x)=x\}.$ Let $B:=\bigcup_{i=1}^d B^i$ and define the partition $\{A_j\}$ as the product partition $\bigwedge_{i=1}^d \{A_j^i\}.$ This partition satisfies all the desired conditions. \end{proof} \section{Proof of the Main Theorem} \subsection{Preliminary reductions and general strategy} Let $\Gamma$ be a countable group with a co-amenable subgroup $H_1$ and an IRS $H_2$ with co-spectral radius $\lambda_2$. We need to show that $H_1\cap H_2$ has co-spectral radius at least $\lambda_2$ as well. First of all, without loss of generality we can assume $H_2$ is ergodic. The group $H_1$ is realized as the stabilizer of a point in $X_1:=H_1\backslash \Gamma$ and $H_2$ is realized as the stabilizer of a random point in a p.m.p. action of $\Gamma$ on $(X_2,\nu_2)$. We use Proposition \ref{prop:WHandCoAm} to find a finite connected component $P_2$ of $X_2$ and a function $f_2$ on $P_2$ that witnesses the spectral radius $\lambda_2$. Next, using a large F{\o}lner set in $X_1$, we produce a new finite connected component in the product system $X_1\times X_2$ and a new function which certifies that the co-spectral radius of stabilizers in $X_1\times X_2$ is arbitrarily close to $\lambda_2$ on almost every ergodic component of the product measure.
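As a purely numerical aside (our illustration, not part of the argument), the quantity witnessed by such functions can be approximated on explicit examples: restricting $M$ to functions supported on a finite ball of the graph and taking the top eigenvalue of the restriction gives, by the variational characterization of $\|M\|$, a lower bound for the co-spectral radius that converges to it as the ball grows. The Python sketch below does this for the Cayley graph of the free group $F_2$ with the standard generators, where the limiting value is $\sqrt{3}/2\approx 0.866$.
\begin{verbatim}
import numpy as np

GENS = ["a", "A", "b", "B"]          # A = a^{-1}, B = b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def ball(radius):
    """All reduced words of length <= radius in F_2."""
    words, frontier = {""}, {""}
    for _ in range(radius):
        frontier = {w + g for w in frontier for g in GENS
                    if not (w and w[-1] == INV[g])}
        words |= frontier
    return sorted(words)

def restricted_top_eigenvalue(radius):
    verts = ball(radius)
    idx = {w: i for i, w in enumerate(verts)}
    M = np.zeros((len(verts), len(verts)))
    for w, i in idx.items():
        for g in GENS:
            # right multiplication by g, with free reduction
            u = w[:-1] if (w and w[-1] == INV[g]) else w + g
            if u in idx:             # drop edges leaving the ball
                M[i, idx[u]] += 1.0 / len(GENS)
    return float(np.linalg.eigvalsh(M).max())

print(restricted_top_eigenvalue(6))  # increases to sqrt(3)/2 ~ 0.866
\end{verbatim}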
\subsection{Reformulation of the problem in measure theoretic terms.}\label{sec:MSetup} Write $(X_1,\nu_1)$ for the set $H_1\backslash \Gamma$ endowed with the counting measure. It is an infinite ergodic measure-preserving action of $\Gamma$. Let $\Gamma\curvearrowright (X_2,\nu_2)$ be a p.m.p. Borel action on a standard Borel probability space such that $H_2=\Gamma_x$ for $\nu_2$-random $x$. We will consider the action of $\Gamma$ on the product system $(X_1\times X_2, \nu_1\times \nu_2).$ To shorten notation we write $\nu=\nu_1\times \nu_2$. The intersection $H_1\cap H_2$ is precisely the stabilizer of a random point $x\in \{[H_1]\}\times X_2$. Note that for such $x$, we have $\rho(\Gamma_x\backslash\Gamma)\leq \lambda_2:=\rho(H_2\backslash\Gamma)$ almost surely. Set $$C_0:=\{x\in X_1\times X_2 \mid \rho(\Gamma_x \backslash \Gamma)= \lambda_2\}.$$ Since conjugate subgroups have the same co-spectral radius, the set $C_0$ is invariant under the action of $\Gamma$. Let $(X_1\times X_2, \nu)\to (Z,\tau)$ be the ergodic decomposition given by Corollary \ref{cor:erg-decomp}, and set \[Z_0:=\{z\in Z\mid \nu_z(C_0)>0\}.\] By ergodicity and invariance, the set $C_0$ has full $\nu_z$-measure for every $z\in Z_0$. Theorem \ref{thm:main-spectral} is equivalent to the identity $C_0=X_1\times X_2$ modulo a null set, so it will follow once we show that $\tau(Z_0)=1$. By Proposition \ref{prop:WHandCoAm}, $z\in Z_0$ if and only if the following condition holds: for every $\eta>0$ there exists a function $h$ supported on the interior of a finite connected component of $(X_1\times X_2,\nu_z, S)$ (according to Definition \ref{def:FinComp}), such that \begin{equation}\label{eq:Condition} \langle (I-M)h, h\rangle_{\nu_z}\leq (1-\lambda_2+\eta)\|h\|_{\nu_z}^2. \end{equation} We will refer to nonnegative, nonzero functions supported on interiors of finite connected components of $(X_1\times X_2,\nu, S)$ as \textbf{test functions}. It is easy to check that a test function for $\nu$ is also a test function for almost all ergodic components $\nu_z$. It will be convenient to name the set of ergodic components $z$ for which there exists a test function satisfying \eqref{eq:Condition} for a given $\eta$. Let $$Z_\eta:=\{z\in Z\mid \text{ there exists a test function } h \text{ such that } \langle (I-M)h, h\rangle_{\nu_z}\leq (1-\lambda_2+\eta)\|h\|_{\nu_z}^2\}.$$ Obviously we have $Z_0=\bigcap_{\eta>0} Z_\eta$ and $Z_{\eta}\subset Z_{\eta'}$ for $\eta'>\eta$. To prove the theorem it will be enough to show that $\tau(Z_{\eta})\to 1$ as $\eta\to 0$. \subsection{Construction of test functions.} \begin{lem}\label{lem:TestF} Let $\delta>0$. There exists a test function $f$ and a set $Z'\subset Z$ such that \begin{enumerate} \item $\|f\|_{\nu_z}^2\geq (1-\delta)\|f\|_{\nu}^2$, for every $z\in Z'$. \item $\tau(Z')\geq 1-\delta.$ \item $\langle (I-M)f,f\rangle_{\nu} \leq (1-\lambda_2+\delta)\|f\|_{\nu}^2.$ \end{enumerate} \end{lem} \begin{proof} Let $\varepsilon_2>0$. Since $\Gamma\curvearrowright (X_2,\nu_2)$ has embedded spectral radius $\lambda_2$, there is a finite connected component $P_2\subset X_2$ of finite measure and a nonzero $f_2\in L^2(X_2,\nu_2)$ as in Definition \ref{def:WeakSpectralRad}, i.e. $f_2$ is supported on the interior ${\rm int}(P_2)$ and \begin{equation}\label{eq:f2}\langle (I-M)f_2,f_2\rangle\leq (1-\lambda_2+\varepsilon_2)\|f_2\|^2.\end{equation} By Remark \ref{rmk:nonnegative}, we may assume $f_2\geq 0$.
We will show that for a good enough F{\o}lner set $F\subseteq X_1$ and small enough $\varepsilon_2$, the function $$f:=\mathds{1}_{F} \times f_2$$ satisfies the conditions of the lemma. While (3) is relatively straightforward, conditions (1) and (2) require some work and strongly use the fact that $X_2$ is a p.m.p. action. Consider the following probability measures on $\Gamma$: $$\mu:=\frac{1}{|S|}\sum_{s\in S}\delta_s\textrm{ and } \mu^m:= \frac{1}{m}\sum_{i=0}^{m-1}\mu^{\ast i}\textrm{ for } m\in\ensuremath{\mathbb{N}}.$$ By Kakutani's ergodic theorem \cite{Kakutani1951}, there exists $m_0\geq 1$ such that \begin{equation}\label{eq:Kakutani} \left|\int_{\Gamma}f_2^2(x\gamma^{-1})d\mu^{m_0}(\gamma)-\int_{X_2} f_2^2 d\nu_2\right|\leq \varepsilon_2\|f_2\|^2 \end{equation} for all $x\in X_2'$, where $\nu_2(X_2')\geq 1-\varepsilon_2$. Fix $0<\varepsilon_1\ll \varepsilon_2$ very small. The precise choice only depends on $\varepsilon_2$ and will be specified at the end of the proof. Let $F\subset X_1$ be an $\varepsilon_1$-F{\o}lner set and write $Y=(F\cup \partial F)\times X_2$. Write $F'$ for the set of points of $F$ which are at distance at least $m_0$ from the boundary $\partial F$ and set $Y':=F'\times X_2'$, where $X_2'$ is as in (\ref{eq:Kakutani}). We claim that for $\varepsilon_1$ small enough, we will have $(\nu_1\times \nu_2)(Y')\geq |F|(1-2\varepsilon_2)$. Indeed, using that $F$ is $\varepsilon_1$-F{\o}lner, we have $$|F'|\geq |F\cup\partial F|-|\partial F|\sum_{i=1}^{m_0-1}|S|^{i}\geq |F\cup \partial F|(1-|S|^{m_0}\varepsilon_1).$$ Clearly for sufficiently small $\varepsilon_1$, we have \begin{equation}\label{eq:YprimLarge} \nu(Y')=|F'|\nu_2(X'_2)\geq (1-\varepsilon_1|S|^{m_0})(1-\varepsilon_2)|F|\geq (1-2\varepsilon_2)|F|. \end{equation} Write $P:=(F\cup\partial F)\times P_2\subset Y$. By construction, the support of $f$ is contained in $P\subset Y$. Note that since $P_2$ is a finite connected component of $X_2$ and $F\cup\partial F$ is finite, the set $P$ will be a finite connected component of $(X_1\times X_2,\nu,S)$ in the sense of Definition \ref{def:FinComp}. Let $\nu=\int_Z\nu_zd\tau(z)$ be the ergodic decomposition of $\nu$ as in Section \ref{sec:decomp}. The set $P$ is also a finite connected component of $(X_1\times X_2,\nu_z,S)$ for almost every $z\in Z$. For every $z\in Z$, the measure $\nu_z$ is invariant under the action of $\Gamma$, so that \begin{align*} \int f^2(x_1,x_2)d\nu_z(x_1,x_2)&= \int_\Gamma \int f^2(x_1\gamma^{-1},x_2\gamma^{-1}) d\nu_z(x_1,x_2) d\mu^{m_0}(\gamma)\\ &\geq \int_{Y'}\int_\Gamma f^2(x_1\gamma^{-1},x_2\gamma^{-1}) d\mu^{m_0}(\gamma)d\nu_z(x_1,x_2). \end{align*} Since $Y'=F'\times X_2'$ and $F'\gamma^{-1}\subset F$ for any $\gamma\in \supp \mu^{m_0}$, we can use the identity $f=\mathds 1_F \times f_2$ to rewrite the last integral as \[ \int_{F'\times X_2'} \left(\int_\Gamma f_2^2(x_2\gamma^{-1})d\mu^{m_0}(\gamma)\right)d\nu_z(x_1,x_2).\] We use (\ref{eq:Kakutani}) to estimate the innermost integral and obtain a lower bound on $\|f\|_{\nu_z}^2$: \begin{align*} \int_P f^2 d\nu_z\geq & (1-\varepsilon_2)\nu_z(F'\times X_2')\int_{X_2} f_2^2 d\nu_2 = (1-\varepsilon_2) \nu_z(Y')\|f_2\|_{\nu_2}^2. \end{align*} By \eqref{eq:YprimLarge}, we have $\nu(Y')\geq (1-2\varepsilon_2)|F|$, so we can apply Markov's inequality to get a set $Z'\subset Z$ with $\tau(Z')\geq 1-\sqrt{2\varepsilon_2}$ such that $\nu_z(Y')\geq (1-\sqrt{2\varepsilon_2})|F|$ for $z\in Z'$.
Finally we get that for $z\in Z'$: \begin{equation}\label{eq:LBonP} \|f\|_{\nu_z}^2\geq (1-\varepsilon_2)(1-\sqrt{2\varepsilon_2})|F|\|f_2\|_{\nu_2}^2=(1-\varepsilon_2)(1-\sqrt{2\varepsilon_2})\|f\|_{\nu}^2. \end{equation} This establishes Properties (1) and (2) of the lemma. It remains to address (3). Using $f_2\geq 0$, we estimate $\langle f,Mf\rangle_\nu$ in terms of $\langle f_2, M f_2\rangle_{\nu_2}$ as follows: \begin{align*} \langle f,Mf\rangle_{\nu} &= |S|^{-1} \int_{X_1} \int_{X_2} \ensuremath{\mathds{1}}_F(x_1) f_2(x_2) \sum_{s\in S} \ensuremath{\mathds{1}}_F(x_1 s) f_2(x_2 s) d\nu_2(x_2) d\nu_1(x_1) \\ &\geq |S|^{-1} \int_{\text{int}(F)} \int_{X_2} f_2(x_2) \sum_{s\in S} f_2(x_2 s) d\nu_2(x_2) d\nu_1(x_1) \\ &= |\text{int}(F)| \langle f_2, M f_2\rangle_{\nu_2}. \end{align*} Hence \begin{align*} \langle (I-M)f,f\rangle_{\nu} &\leq |F|\langle f_2,f_2\rangle_{\nu_2} -|\text{int}(F)|\langle f_2, Mf_2\rangle_{\nu_2} \\ &=|F|\langle f_2-M f_2,f_2\rangle_{\nu_2} +|F|\left(1-\frac{|\text{int}(F)|}{|F|}\right)\langle Mf_2, f_2\rangle_{\nu_2}. \end{align*} Since $F$ is $\varepsilon_1$-F{\o}lner, we have $|\text{int}(F)|/|F|\geq 1-|S|\varepsilon_1$, so finally we obtain \begin{align*} \langle (I-M)f,f\rangle_{\nu}\leq& |F|\left(\langle f_2-Mf_2,f_2\rangle_{\nu_2}+|S|\varepsilon_1 \langle Mf_2, f_2\rangle_{\nu_2}\right)\\ \leq&|F|\left(\langle f_2-Mf_2,f_2\rangle_{\nu_2}+|S|\varepsilon_1\|f_2\|_{\nu_2}^2\right).\end{align*} By the defining property of $f_2$ (see Equation \eqref{eq:f2}), we then find $$\langle (I-M)f,f\rangle_{\nu}\leq |F|(1-\lambda_2+\varepsilon_2+|S|\varepsilon_1)\|f_2\|_{\nu_2}^2=(1-\lambda_2+\varepsilon_2+|S|\varepsilon_1)\|f\|_{\nu}^2.$$ To finish the proof, choose $\varepsilon_2>0$ such that $(1-\varepsilon_2)(1-\sqrt{2\varepsilon_2})\geq (1-\delta)$ and then choose $\varepsilon_1>0$ such that $\varepsilon_2+|S|\varepsilon_1\leq\delta$ and such that \eqref{eq:YprimLarge} holds. \end{proof} \subsection{End of the proof.} Let $f, \delta, Z'$ be as in Lemma \ref{lem:TestF}. Using Property (3) of the lemma, we have \begin{align*} \int_{Z'}\langle (I-M)f,f\rangle_{\nu_z}d\tau(z)\leq& \int_{Z}\langle (I-M)f,f\rangle_{\nu_z}d\tau(z)\\ =&\langle (I-M)f,f\rangle_\nu \\ \leq& (1-\lambda_2+\delta)\|f\|_\nu^2. \end{align*} By Property (1), we can then estimate \[ \int_{Z'}\frac{\langle (I-M)f,f\rangle_{\nu_z}}{\|f\|_{\nu_z}^2}d\tau(z)\leq\frac{1-\lambda_2+\delta}{1-\delta}.\] On the other hand, Proposition \ref{prop:WHandCoAm} and the fact that co-spectral radii of stabilizers are all at most $\lambda_2$ yield the inequality $$\frac{\langle (I-M)f,f\rangle_{\nu_z}}{\|f\|_{\nu_z}^2}\geq 1-\lambda_2 \text{ for almost all }z\in Z.$$ Therefore, by Markov's inequality there is a positive $\eta=O(\sqrt\delta)$ and a subset $Z''\subset Z$ such that $\tau(Z'')\geq 1-\eta$ and $$ \langle (I-M)f,f\rangle_{\nu_z}\leq (1-\lambda_2+\eta){\|f\|_{\nu_z}^2} \text{ for } z\in Z''.$$ The constant $\eta$ could be made explicit in terms of $\delta$ and $\lambda_2$, but we will only need that $\eta\to 0$ as $\delta\to 0$. We have $Z''\subset Z_\eta$, so it follows that $\tau(Z_\eta)\to 1$ as $\eta\to 0$. This ends the proof per the discussion at the end of Section \ref{sec:MSetup}. \qed
\section{Introduction} \label{sec:intro} Establishing the microscopic nature of Dark Matter (DM) is one of the central open questions in cosmology and particle physics. In the context of cold nonbaryonic DM, the prevailing paradigm is based on weakly interacting massive particles (WIMPs), and extensive theoretical and experimental resources have been devoted towards identifying viable candidates and developing methods to detect them. One of the most studied WIMP scenarios arises in the Minimal Supersymmetric Standard Model (MSSM), where an assumed $R$-parity ensures that the lightest superpartner (LSP) is a stable neutralino $\chi$ composed of bino, wino, and Higgsino eigenstates. The interactions between DM and the SM particles are mainly mediated by squarks and Higgses in the case of bino-like DM. However, it is also possible to study DM interactions with the SM particles in a model-independent way by using an effective field theory approach in which the particles mediating the interactions are assumed to be heavy and are integrated out. A main strength of this approach is to provide model-independent relations among distinct null DM searches~\cite{Goodman:2010yf}. As different search strategies probe different energy scales, this separation of scales can have striking consequences when a connection between direct detection experiments and LHC searches is made. \section{Effective Field Theory} For operators contributing directly to spin-independent scattering, direct detection gives in general much better constraints than LHC searches. As was shown in Refs.~\cite{Frandsen:2012db,Crivellin:2014qxa,Crivellin:2014gpa,D'Eramo:2014aba}, there are operators which do not contribute to spin-independent scattering at tree level but enter at the one-loop level. As in this case direct detection is loop suppressed, LHC searches can give competitive and complementary constraints. At dim-6 the operator $O^{VA}_{qq}=\bar\chi {\gamma^\mu } \chi \; \bar q \, {\gamma_\mu\gamma^5 q}$ mixes into $O_{HH D}^S = \bar \chi \, {\Gamma^\mu } \chi \, [{H^\dag }\overleftrightarrow{D}^\mu H]$ ($H$ being the SM Higgs doublet and $\overleftrightarrow{D}$ the covariant derivative), which then generates threshold corrections to $\bar\chi {\gamma^\mu } \chi \; \bar q \, {\gamma_\mu}q$, entering spin-independent direct detection~\cite{Crivellin:2014qxa}. The resulting bounds are shown in the left plot of Fig.~\ref{plot:EFT}, showing that even though the contribution is loop suppressed, direct detection gives stronger bounds unless DM is very light. At dim-7 a similar effect occurs for the operator $O_W = \bar \chi \chi \, W_{\mu \nu } W^{\mu \nu}$ involving electroweak field strength tensors. Again, this operator enters direct detection only via mixing and threshold corrections~\cite{Crivellin:2014gpa}. The resulting bounds are shown in the right plot of Fig.~\ref{plot:EFT}. In this case the collider bounds are in general stronger~\cite{Crivellin:2015wva}, unless dark matter is quite heavy. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{CVA} ~~~ \includegraphics[width=0.45\textwidth]{plotCW} \caption{Left: Allowed regions from LHC searches (yellow) and SI WIMP--nucleon scattering from LUX (green). Projected allowed regions for SCDMS (red) and XENON1T (blue) are also shown, as well as the curve giving the correct thermal relic density (black). Here we set $C^{VA}_{qq} = 1$ while all other Wilson coefficients are assumed to be zero.
Right: Restrictions in the $m_\chi$--$\hspace{0.25mm} C_W (\Lambda)$ plane, assuming DM to be Majorana and setting $\Lambda = 300 \, {\rm GeV}$. The green curves illustrate the best limits from missing ${E}_T$ searches at the LHC, while the black dotted lines correspond to the observed value $\Omega_\chi h^2 = 0.11$ of the relic density. The colored dashed curves mark the bounds from existing and future direct detection experiments. The currently allowed parameter regions are indicated by yellow shading. The contour lines denote the fraction of the observed relic density obtained from the operator under consideration.}\label{plot:EFT} \end{figure} \section{MSSM} Following Ref.~\cite{Crivellin:2015bva}, we use naturalness as a guiding principle in order to study neutralino dark matter scattering in the MSSM (see also Ref.~\cite{Barducci:2015ffa} for a recent analysis). In the left plot of Fig.~\ref{fig:simple} we show four simplified spectra which are increasingly natural ($A$ to $D$). Interestingly, in all scenarios blind spots \cite{Hisano:2012wm,Cheung:2012qy,Huang:2014xua,Anandakrishnan:2014fia} with vanishing scattering cross section can occur. In the proximity of these blind spots isospin violation is enhanced, making a precise determination of the scalar couplings to nucleons crucial~\cite{Crivellin:2013ipa}.\footnote{The same scalar couplings to the nucleon are also important for $\mu\to e$ conversion in nuclei~\cite{Crivellin:2014cta}.} In the case in which DM interactions are transmitted by the SM Higgs only, a blind spot occurs at $M_1 + \mu s_{2\beta} = 0$, as shown in the right plot of Fig.~\ref{fig:simple}. If we consider in addition the heavy CP-even Higgs $H^0$ (whose mass is nearly degenerate with that of the CP-odd Higgs $A^0$), the situation is more interesting, as we not only have additional contributions to DM scattering but also get effects in $b\to s\gamma$ \cite{Misiak:2015xwa} and obtain bounds from LHC searches for $A^0\to\tau^+\tau^-$ \cite{CMS:2013hja}, whose interplay is shown in the left plot in Fig.~\ref{plot:MSSM}. The occurrence of a blind spot where the $h^0$ and the $H^0$ contributions cancel is possible. Interestingly, future LHC searches for $A^0\to\tau^+\tau^-$ will be able to cover this region in parameter space, which cannot be tested with direct detection. \begin{figure}[t] \centering\includegraphics[scale=0.55]{natural_spectrum}~~ \centering\includegraphics[scale=0.55]{m1muplot_XE_tb10} \caption{Left: Spectra of the simplified models for SI $\chi$--nucleus scattering considered in this work. For each model, the SM-like Higgs is denoted by $h$, while all other states are assumed to lie below 1 TeV, including Higgsinos (not shown). From left-to-right, the spectra become increasingly more natural as one includes the additional CP-even Higgs $H$ and third-generation squarks $\tilde{t}_1,\tilde{t}_2,\tilde{b}_L$. Right: Current and projected limits on SI $\chi$--xenon scattering due to $h$ exchange with $\tan\beta=10$. The pink band shows the existing constraints from LUX, while projected limits from XENON1T and LZ are given by the blue and orange regions respectively. The blind spot where the SI cross section vanishes is denoted by the red line and lies within the irreducible neutrino background ($\nu_\mathrm{BG}$) shown in gray.
The triangular, hatched region corresponds to the case where the LSP is Higgsino-like.} \label{fig:simple} \end{figure} The situation when, in addition, third-generation squarks are included as dynamical degrees of freedom (scenario D; the presence of a left-handed stop requires a left-handed sbottom as well, due to $SU(2)_L$ gauge invariance) is shown in the right plot in Fig.~\ref{plot:MSSM}. Here the complementarity of LHC searches for stops and sbottoms with DM direct detection is illustrated, as well as the effect in $B_s\to\mu^+\mu^-$, which we calculated with SUSY\_FLAVOR~\cite{Crivellin:2012jv}. Again, part of the region in the proximity of the blind spot which cannot be covered by direct detection is already ruled out by LHC searches, whose sensitivity to high masses will significantly increase at the $14\,$TeV run. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{mAtbplot_CMS_Xe} ~~~ \includegraphics[width=0.48\textwidth]{M1mt1plotXE_mA750_tb25} \caption{Left: Current and projected limits on SI $\chi$--xenon scattering due to $h,H$ exchange with different benchmark values for $M_1$ and $\mu$. The cross-hatched region in dark blue corresponds to CMS limits on $H,A\to \tau^+\tau^-$. The region to the left of the dark-red dashed line at $m_A \cong m_{H^+} \simeq 480$ GeV is excluded by $b\to s\gamma$. \newline Right: Current and projected limits in the $(m_{\tilde{t}_1},M_1)$ plane from $h,H$ and $\tilde{t}_{1,2},\tilde{b}_L$ exchange in $\chi$--xenon scattering. In the figures, the value of $m_A$ is increased for fixed $\tan\beta$. }\label{plot:MSSM} \end{figure} \section{Conclusions} In these proceedings we reviewed the interplay between DM direct detection, flavor, and LHC searches by highlighting two prime examples: First, we considered the EFT approach. Here LHC searches give complementary constraints on operators which enter spin-independent scattering only at the loop level. Second, we considered the MSSM, where LHC searches for stops, sbottoms, and heavy Higgses place constraints on the parameter space which are complementary to flavor observables and direct detection. We identify regions in parameter space with blind spots which cannot be covered by direct detection, but can be covered by LHC searches. \section*{Acknowledgments} We thank the organizers for the invitation to \emph{Moriond Gravitation} 2015 and for the opportunity to present these results. A.C. is supported by a Marie Curie Intra-European Fellowship of the European Community's 7th Framework Programme under contract number (PIEF-GA-2012-326948).
\title{Aggregating Incomplete and Noisy Rankings}
\author{\large{Dimitris Fotakis}\thanks{National Technical University of Athens, mail: \color{magenta} fotakis@cs.ntua.gr \color{black}} \And \large{Alkis Kalavasis}\thanks{National Technical University of Athens, mail: \color{magenta} kalavasisalkis@mail.ntua.gr \color{black}} \And \large{Konstantinos Stavropoulos}\thanks{National Technical University of Athens, mail: \color{magenta} kons.stavropoulos@gmail.com \color{black}}}
\begin{document} \footnotetext{Accepted at the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021.} \maketitle
\begin{abstract} We consider the problem of learning the true ordering of a set of alternatives from largely incomplete and noisy rankings. We introduce a natural generalization of both the classical Mallows model of ranking distributions and the extensively studied model of noisy pairwise comparisons. Our \emph{selective Mallows model} outputs a noisy ranking on any given subset of alternatives, based on an underlying Mallows distribution. Assuming a sequence of subsets where each pair of alternatives appears frequently enough, we obtain strong asymptotically tight upper and lower bounds on the sample complexity of learning the underlying complete ranking and the (identities and the) ranking of the top-$k$ alternatives from selective Mallows rankings. Moreover, building on the work of (Braverman and Mossel, 2009), we show how to efficiently compute the maximum likelihood complete ranking from selective Mallows rankings. \end{abstract}
\section{Introduction} \label{s:intro} Aggregating a collection of (possibly noisy and incomplete) ranked preferences into a complete ranking over a set of alternatives is a fundamental and extensively studied problem with numerous applications. Ranking aggregation has received considerable research attention in several fields, for decades and from virtually all possible aspects. Most relevant, Statistics investigates the properties of \emph{ranking distributions}, which provide principled ways to generate noisy rankings from structural information about the alternatives' relative order. Best known among them are the distance-based model of \citet{mallows1957non} and the parametric models of \citet{thurstone1927law, smith1950discussion, bradley1952rank, plackett1975analysis} and \citet{luce2012individual}. Moreover, Machine Learning and Statistical Learning Theory aim to develop (statistically and computationally) efficient ways of retrieving the true ordering of the alternatives from noisy (and possibly incomplete) rankings (see e.g., \citep{xia2019learning} and the references therein).
Virtually all previous work in the latter research direction assumes that the input is a collection of either complete rankings (chosen adversarially, e.g., \cite{ailon2008aggregating,MS07}, or drawn from an unknown ranking distribution, e.g., \citep{caragiannis2013noisy,busa2019optimal}), or outcomes of noisy pairwise comparisons (see e.g., \citep{feige1994computing,mao2018minimax}). Due to a significant volume of relatively recent research, the computational and statistical complexity of determining the best ranking based on either complete rankings or pairwise comparisons is well understood. However, in most modern applications of ranking aggregation, the input consists of incomplete rankings of more than two alternatives. E.g., think of e-commerce or media streaming services, with a huge collection of alternatives, which generate personalized recommendations based on rankings aggregated from user ratings (see also \cite{hajek2014minimax}). Most users are able to rank (by rating or reviewing) several alternatives, definitely much more than two, but it is not even a remote possibility that a user is familiar with the entire inventory (see also \citep{MCE16} for applications of incomplete rankings to ranked preference aggregation, and \citep{YildizDEKOCCI20} on why incomplete rankings are preferable in practice). Motivated by the virtual impossibility of having access to complete rankings in modern applications, we introduce the \emph{selective Mallows model}, generalizing both the classical Mallows model of ranking distributions and the extensively studied model of noisy pairwise comparisons. Under the selective Mallows model, we investigate the statistical complexity of learning the central ranking and the (identities and the) ranking of the top-$k$ alternatives, and the computational complexity of maximum likelihood estimation. \subsection{The Selective Mallows Model} \label{s:model} The \emph{Mallows model} \citep{mallows1957non} is a fundamental and extensively studied family of ranking distributions over the symmetric group $\mathfrak{S}_n$. A \emph{Mallows distribution} $\M_{\central,\b}$ on a set of $n$ alternatives is parameterized by the \emph{central ranking} $\pi_{0} \in \mathfrak{S}_n$ and the \emph{spread parameter} $\beta > 0$. The probability of observing a ranking $\pi \in \mathfrak{S}_n$ is proportional to $\exp(-\beta d(\pi_{0}, \pi))$, where $d$ is a notion of ranking distance of $\pi_0$ to $\pi$. In this work, we consider the number of discordant pairs, a.k.a. the Kendall tau distance, defined as $d_{KT}(\pi_0, \pi) = \sum_{i < j} \mathbbm{1}\left\{(\pi_0(i) - \pi_0(j))(\pi(i) - \pi(j)) < 0 \right\}$. The problem of aggregating a collection $\pi_1, \ldots, \pi_r \in \mathfrak{S}_n$ of complete rankings asks for the \emph{median} ranking $\pi^\star = \arg\min_{\sigma \in \mathfrak{S}_n} \sum_{j=1}^r d_{KT}(\sigma, \pi_j)$. Computing the median is equivalent to a weighted version of Feedback Arc Set in tournaments, which is NP-hard \citep{ailon2008aggregating} and admits a polynomial-time approximation scheme \citep{MS07}. If the rankings are independent samples from a Mallows distribution, the median coincides with the \emph{maximum likelihood ranking} and can be computed efficiently with high probability \citep{braverman2009sorting}. The \emph{selective Mallows model} provides a principled way of generating noisy rankings over any given subset of alternatives, based on an underlying Mallows distribution $\M_{\central,\b}$.
Given a central ranking $\pi_0\in\mathfrak{S}_n$, a spread parameter $\beta>0$ and a selection sequence $\mathcal{S}=(\Set_1,\dots,\Set_r)$, where each $ S_\ell \subseteq [n]$, the \textit{selective Mallows distribution} $\Mal^{\Scal}$ assigns a probability of \[ \textnormal{Pr}[(\pi_1,\dots,\pi_r)|\pi_0,\beta,\mathcal{S}] = \prod_{\ell\in [r]}\frac{1}{Z(S_\ell,\beta)}e^{-\beta d_{KT}(\pi_0,\pi_\ell)}\,, \] to each incomplete ranking profile $(\pi_1,\dots,\pi_r)$. In the probability above, each $\pi_\ell$ is a permutation of $S_\ell$, $d_{KT}(\pi_0,\pi_\ell)$ is the number of pairs in $S_\ell$ that are ranked in opposite order by $\pi_0$ and $\pi_\ell$ (which naturally generalizes the Kendall tau distance to incomplete rankings), and the normalization constants $Z(S_\ell, \beta)$ correspond to a Mallows distribution on alternatives $S_\ell$ and depend only on $|S_\ell|$ and $\beta$. The samples $\pi_1, \ldots, \pi_r$ are independent, conditioned on the selection sequence $\mathcal{S}$. We refer to $\Pi = (\pi_1,\dots,\pi_r)$ as a sample profile of length $r$. In a selective Mallows sample $\pi_\ell$, the probability that two alternatives in $S_\ell$ are not ranked as in $\pi_0$ depends on their distance in the restriction of $\pi_0$ to $S_\ell$ (instead of their original distance in $\pi_0$). E.g., if $\pi_0$ is the identity permutation and $S_\ell = \{1, n\}$, the probability that $\pi_\ell = (1, n)$ (resp. $\pi'_\ell = (n, 1)$) is the $\ell$-th sample in $\Pi$ is $1/(1+e^{-\beta})$ (resp. $e^{-\beta}/(1+e^{-\beta})$). Hence, the selective Mallows model generalizes both the standard Mallows model (if each $S_\ell = [n]$) and the model of noisy pairwise comparisons (if $\mathcal{S}$ consists entirely of sets $S_\ell$ with $|S_\ell|=2$). Moreover, using sets $S_\ell$ of different cardinality, one can smoothly interpolate between complete rankings and pairwise comparisons. The amount of information provided by a selective Mallows model $\Mal^{\Scal}$ about $\pi_0$ is quantified by how frequently different pairs of alternatives compete against each other in $\Pi$. We say that a selective Mallows model $\Mal^{\Scal}$ is \emph{$p$-frequent}, for some $p \in (0, 1]$, if every pair of alternatives appears in at least a $p$ fraction of the sets in $\mathcal{S}$ (we assume that each pair appears together at least once in $\mathcal{S}$). E.g., for $p = 1$, we recover the standard Mallows model, while $p \approx 2/n^2$ corresponds to pairwise comparisons. The definition of the ($p$-frequent) selective Mallows model can be naturally generalized to unbounded selection sequences $\mathcal{S}$, which however is beyond the scope of this work. In this work, we investigate the statistical complexity of retrieving either the central ranking $\pi_0$ or its top-$k$ ranking from $p$-frequent selective Mallows samples, and the computational complexity of finding a maximum likelihood ranking from a fixed number $r$ of $p$-frequent samples. In \emph{learning from incomplete rankings}, for any given $p, \beta, \varepsilon > 0$, we aim to upper and lower bound the least number of samples $r^\star(p, \beta, \varepsilon)$ (resp. $r^\star_k(p, \beta, \varepsilon)$) from a selective Mallows distribution $\Mal^{\Scal}$ required to learn $\pi_0$ (resp. the top-$k$ ranking of $\pi_0$) with probability at least $1-\varepsilon$, where $\mathcal{S}$ is any $p$-frequent selection sequence.
In \emph{maximum likelihood estimation}, for any given $p, \beta, \varepsilon > 0$ and a sample profile $\Pi$ of length $r$ from a $p$-frequent selective Mallows distribution $\Mal^{\Scal}$, we aim to efficiently compute either a ranking that is at least as likely as $\pi_0$, or even a maximum likelihood ranking $\pi^\star$. The interesting regime for maximum likelihood estimation is when $r$ is significantly smaller than $r^\star(p, \beta, \varepsilon)$. We shall note here that the $p$-frequent condition can be replaced by a milder one, where each selection set is drawn independently from a given distribution over the subsets of $[n]$, such that the probability that any specific pair of alternatives appears in a sampled set is at least $p$. Although we focus (for simplicity) on the deterministic $p$-frequent assumption, we expect similar results to hold for the randomized case. For a detailed discussion of the randomized $p$-frequent assumption, we refer the reader to Appendix~\ref{appendix:rand}. \subsection{Contribution} \label{s:contrib} On the conceptual side, we introduce the selective Mallows model, which allows for a smooth interpolation between learning from noisy complete rankings and sorting from noisy pairwise comparisons. On the technical side, we practically settle the statistical complexity of learning the central ranking and the top-$k$ ranking of a $p$-frequent selective Mallows model. Moreover, we show how to efficiently compute a maximum likelihood ranking from $r$ selective samples. We believe that a significant advantage of our work lies in the simplicity and the uniformity of our approach. Specifically, all our upper bounds are based on the so-called \emph{positional estimator} (Algorithm~\ref{algo:positional}), which ranks an alternative $i$ before any other alternative $j$ ranked after $i$ in the majority of the samples. The positional estimator belongs to the class of pairwise majority consistent rules \citep{caragiannis2013noisy}. Generalizing the (result and the) approach of \citet[Theorem~3.6]{caragiannis2013noisy}, we show (Theorem~\ref{thm:upper_central}) that $pr = O(\frac{\log(n/\varepsilon)}{(1-e^{-\beta})^2})$ noisy comparisons per pair suffice to recover the central ranking of a $p$-frequent selective Mallows model with probability at least $1-\varepsilon$. Namely, observing a logarithmic number of noisy comparisons per pair of alternatives suffices for determining their true order. Theorem~\ref{thm:upper_central} generalizes (and essentially matches) the best known bounds on the number of (passively chosen\footnote{If the algorithm can actively select which pairs of alternatives to compare, $O(n\log n)$ noisy comparisons suffice for sorting, e.g., \cite{braverman2008noisy,feige1994computing}.}) comparisons required for sorting \citep{mao2018minimax}. Interestingly, we show that the above upper bound is practically tight. Specifically, Theorem~\ref{thm:lower_central} shows that for any $p \in (0, 1/2]$, unless $rp = \Omega(\log(n/\varepsilon)/\beta)$ noisy comparisons per pair of alternatives are observed, any estimator of the central ranking from $p$-frequent selective samples fails to recover $\pi_0$ with probability larger than $\varepsilon$. Hence, observing incomplete rankings with (possibly much) more than two alternatives may help in terms of the number of samples, but it does not improve the number of noisy comparisons per pair required to recover the true ordering of the alternatives.
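To make the model concrete, the following Python sketch (the function names are ours and purely illustrative, not from any library) draws a profile from a selective Mallows distribution: it restricts $\pi_0$ to each selection set and samples a Mallows ranking of the reduced central ranking, using the standard repeated insertion method as an exact sampler.

\begin{verbatim}
import numpy as np

def sample_mallows(central, beta, rng):
    # Exact Mallows sampling via repeated insertion: placing the i-th
    # item of the central order at position j creates (i - j) discordant
    # pairs, so position j receives weight exp(-beta * (i - j)).
    ranking = []
    for i, item in enumerate(central):
        weights = np.exp(-beta * (i - np.arange(i + 1)))
        j = rng.choice(i + 1, p=weights / weights.sum())
        ranking.insert(j, item)
    return ranking

def sample_selective_profile(pi0, beta, selection_sets, seed=0):
    # One selective Mallows profile: restrict pi0 to each selection
    # set and sample a Mallows ranking of the restriction.
    rng = np.random.default_rng(seed)
    profile = []
    for S in selection_sets:
        reduced_central = [a for a in pi0 if a in S]
        profile.append(sample_mallows(reduced_central, beta, rng))
    return profile
\end{verbatim}

Sampling each restriction independently matches the product form of the probability assigned by $\Mal^{\Scal}$ above.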
In Section~\ref{s:approx}, we prove Theorem~\ref{thm:approx}, which shows that the positional estimator smoothly (and uniformly wrt different alternatives) converges to the central ranking $\pi_0$ of a $p$-frequent selective Mallows model $\Mal^{\Scal}$, as the number of samples $r$ (and the number of noisy comparisons $pr$ per pair) increases. Specifically, Theorem~\ref{thm:approx} shows that the positional estimator shares a remarkable property with the \emph{average position estimator} \citep[Lemma~18]{braverman2008noisy}: as $r$ increases, the position of any alternative $i$ in the estimated ranking converges fast to $\pi_0(i)$, with high probability. Since we cannot use the average position estimator, due to our incomplete rankings, where the positions of the alternatives are necessarily relative, we need to extend \citep[Lemma~18]{braverman2008noisy} to the positional estimator. Combining Theorem~\ref{thm:upper_central} and Theorem~\ref{thm:approx}, we show (Section~\ref{sec:topk}) that for any $k = \omega(1/(p\beta))$, we can recover the identities and the true ranking of the top-$k$ alternatives in $\pi_0$, with probability at least $1-\varepsilon$, given $r = O(\frac{\log(k/\varepsilon)}{p(1-e^{-\beta})^2}+\frac{\log(n/\varepsilon)}{p^2\beta k})$ $p$-frequent selective samples. The second term accounts for learning the identities of the top $O(k)$ elements in $\pi_0$ (Theorem~\ref{thm:approx}), while the first term accounts for learning the true ranking of these $O(k)$ elements (Theorem~\ref{thm:upper_central}). For sufficiently large $k$, the first term becomes dominant. Applying the approach of Theorem~\ref{thm:lower_central}, we show that such a sample complexity is practically best possible. Moreover, building on the approach of \citet[Lemma~18]{braverman2008noisy} and exploiting Theorem~\ref{thm:approx}, we show how to compute a maximum likelihood ranking (resp. a ranking that is at least as likely as $\pi_0$), given $r$ samples of a $p$-frequent selective Mallows distribution $\Mal^{\Scal}$, in time roughly $n^{O(1/(r\beta p^4))}$ (resp. $n^{O(1/(r\beta p^2))}$), with high probability (see also Theorem~\ref{thm:mle_alg} for the exact running time). The interesting regime for maximum likelihood estimation is when $r$ is much smaller than the sample complexity of learning $\pi_0$ in Theorem~\ref{thm:upper_central}. Our result compares favorably against the results of \citet{braverman2008noisy} if $pr$ is small. E.g., consider the extreme case where $pr = 1$ (i.e., each pair is compared at least once in $\Pi$). Then, for small values of $\beta$, the running time of \citep[Theorem~8]{braverman2008noisy} becomes $n^{O(1/\beta^4)}$, while the running time of maximum likelihood estimation from $p$-frequent selective samples becomes $n^{O(1/(p^3 \beta))}$. Thus, large incomplete rankings mitigate the difficulty of maximum likelihood estimation (compared against noisy pairwise comparisons), if $rp = \Theta(1)$, $\beta$ is small, and $1/\beta$ is much smaller than $1/p$. In the following, we provide the intuition and proof sketches for our main results. For the full proofs of all our technical claims, we refer the reader to Appendix~\ref{appendix:start}. \noindent{\bf Notation.} We conclude this section with some additional notation required in the technical part of the paper. For any ranking $\pi$ of some $S\subseteq[n]$ and any pair of alternatives $i, j \in S$, we let $i \succ_{\pi} j$ denote that $i$ precedes $j$ in $\pi$, i.e., that $\pi(i) < \pi(j)$.
When we use the term \emph{reduced central ranking} (according to a sample), we refer to the restriction of the central ranking to the selection set of that sample, i.e., the permutation of the elements of the selection set in their central-ranking order. For any object $B$, we use the notation $B=B[\Pi]$ to denote that $B$ depends on a sample profile $\Pi$. Moreover, for simplicity and brevity, we use the asymptotic notation $O_\beta$ (or $\Omega_\beta$) to hide polynomial terms in $1/\beta$. \subsection{Related Work} \label{s:related} There has been a huge volume of research work on statistical models over rankings (see e.g., \cite{fligner1993probability, marden1996analyzing, xia2019learning} and the references therein). The \citet{mallows1957non} model plays a central role in the aforementioned literature. A significant part of this work concerns either extensions and generalizations of the Mallows model (see e.g., \citep{fligner1986distance,murphy2003mixtures,lebanon2003conditional}, and also \citep{lebanon2008non,lu2014effective,busa2014preference} more closely related to partial rankings) or statistically and computationally efficient methods for recovering the parameters of Mallows distributions (see e.g., \citep{adkins1998non,caragiannis2013noisy,liu2018efficiently,busa2019optimal}). From a conceptual viewpoint, the work of \citet{hajek2014minimax} is closest to ours. They introduce a model of selective incomplete Thurstone and Plackett-Luce rankings, where the selection sequence consists of sets of $k$ alternatives selected uniformly at random. They provide upper and lower bounds on how fast optimizing the log-likelihood function from incomplete rankings (which in their case is concave in the parameters of the model and can be optimized via e.g., gradient descent) converges to the model's true parameters. In our case, however, computing a maximum likelihood ranking is a discrete optimization problem over permutations, which is not convex and cannot be tackled with convex optimization methods. From a technical viewpoint, our work builds on previous work by \citet{braverman2008noisy}, \citet{caragiannis2013noisy} and \citet{busa2019optimal}. For almost three decades, there has been significant interest in ranked preference aggregation and sorting from noisy pairwise comparisons. One branch of this research direction assumes that the algorithm actively selects which pair of alternatives to compare in each step and aims to minimize the number of comparisons required for sorting (see e.g., \citep{feige1994computing,braverman2008noisy}, or \citep{ailon2012active} for sorting with few errors, or \citep{braverman2016parallel} for parallel algorithms). A second branch, closest to our work, studies how many passively (see e.g., \citep{mao2018breaking,mao2018minimax}) or randomly (see e.g., \citep{wauthier2013efficient}) selected noisy comparisons are required for ranked preference aggregation and sorting. A more general problem concerns the design of efficient approximation algorithms (based on either sorting algorithms or common voting rules) for aggregating certain types of incomplete rankings, such as top-$k$ rankings, into a complete ranking (see e.g., \cite{Ailon10,MM20}). Moreover, there has been recent work on assigning ranking scores to the alternatives based on the results of noisy pairwise comparisons, by likelihood maximization through either gradient descent or majorize-maximization (MM) methods (see e.g., \citet{VojnovicYZ20}).
Such works on learning from pairwise comparisons are also closely related to our work from a graph-theoretic viewpoint, since they naturally correspond to weighted graph topologies, whose properties (e.g., Fiedler eigenvalue of the comparison matrix~\citep{hajek2014minimax, shah2016estimation, khetan2016data, vojnovic2016parameter, negahban2017rank, VojnovicYZ20} or degree sequence~\citep{pananjady2020worst}) characterize the sample complexity and convergence rate of various learning approaches. The comparison graph of a $p$-frequent selective Mallows model is the complete graph. Another related line of research (which goes back at least to \citet{ConitzerS05}) investigates how well popular voting rules (e.g., Borda count, Kemeny ranking, approval voting) behave as maximum likelihood estimators for either the complete central ranking of the alternatives, or the identities of the top-$k$ alternatives, or the top alternative (a.k.a. the winner). In this line of work, the input may consist of complete or incomplete noisy rankings \citep{xia2011determining,ProcacciaRS12}, the results of noisy pairwise comparisons \citep{shah2017simple}, or noisy $k$-approval votes \citep{caragiannis2017learning}. \section{Retrieving the Central Ranking} \label{s:central} In this section, we settle the sample complexity of learning the central ranking $\pi_0$ under the selective Mallows model. We show that a practically optimal strategy is to neglect the concentration of alternatives' positions around their initial positions in $\pi_0$ and act as if the samples are a set of pairwise comparisons with a common error probability that depends only on $\beta$. \noindent{\bf Positional Estimator.} Given a sample profile $\Pi=(\pi_1,\dots,\pi_r)$ corresponding to a selection sequence $\mathcal{S}=(\Set_1,\dots,\Set_r)$, we denote with $\hat{\pi}=\hat{\pi}[\Pi]$ the permutation of $[n]$ output by Algorithm \ref{algo:positional}. \begin{algorithm} \caption{Positional Estimator of profile $\Pi$} \label{algo:positional} \begin{algorithmic}[1] \Procedure{PosEst}{$\Pi$} \State $\hat{\pi} \gets \textbf{0}_n$ \For{$i \in [n]$} \For{$j \in [n]\setminus\{i\}$} \State $\Pi_{i,j} \gets \{\pi \in \Pi : i,j \in \pi \}$ \State $n_{j \succ i} \gets |\{ \pi \in \Pi_{i,j} : j \succ_{\pi} i\}|$ \If{$n_{j \succ i} \geq |\Pi_{i,j}|/2$} \State $\hat{\pi}(i) \gets \hat{\pi}(i) + 1$ \EndIf \EndFor \EndFor \State Break ties in $\hat{\pi}$ uniformly at random\\ \Return $\hat{\pi}$ \EndProcedure \end{algorithmic} \end{algorithm} Algorithm \ref{algo:positional} first calculates the pairwise majority position of each alternative, by comparing each $i \in [n]$ with any other $j \in [n]$ in the joint subspace of the sample profile where $i$ and $j$ appear together. Intuitively, $\hat{\pi}(i)$ equals $|\{j:j\text{ ranked before } i\text{ most times}\}|$. The algorithm, after breaking ties uniformly at random, outputs $\hat{\pi}$. We call $\hat{\pi}$ the \emph{positional estimator} (of the sample profile $\Pi$), which belongs to the class of pairwise majority consistent rules, introduced by \citet{caragiannis2013noisy}. \noindent{\bf Sample Complexity.} Motivated by the algorithm of \citet{caragiannis2013noisy} for retrieving the central ranking from complete rankings, we utilize the {\textsc{PosEst}} (Algorithm \ref{algo:positional}) and establish an upper bound on the sample complexity of learning the central ranking in the selective Mallows case.
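Before stating the bound, we give, for concreteness, a direct Python transcription of Algorithm~\ref{algo:positional} (a sketch; the function name is ours, not from any library), which runs in time $O(n^2 r)$ on a profile of $r$ rankings:

\begin{verbatim}
import random

def pos_est(profile):
    # Positional estimator: `profile` is a list of rankings, each given
    # as a list of alternatives (a permutation of its selection set).
    positions = [{a: t for t, a in enumerate(pi)} for pi in profile]
    alternatives = sorted({a for pos in positions for a in pos})
    score = {}
    for i in alternatives:
        s = 0
        for j in alternatives:
            if j == i:
                continue
            joint = [pos for pos in positions if i in pos and j in pos]
            n_j_before_i = sum(1 for pos in joint if pos[j] < pos[i])
            # Weak majority, as in Algorithm 1; the model guarantees
            # that every pair appears together at least once.
            if 2 * n_j_before_i >= len(joint):
                s += 1
        score[i] = s
    # score[i] is the majority position; break ties uniformly at random.
    return sorted(alternatives, key=lambda a: (score[a], random.random()))
\end{verbatim}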
\begin{theorem}\label{thm:upper_central} For any $\epsilon\in(0,1)$, $\pi_0\in\mathfrak{S}_n$, $\beta>0$ and $p\in(0,1]$, given a sample profile from $\Mal^{\Scal}$, where $\mathcal{S}$ is a $p$-frequent selection sequence of length $O(\frac{1}{p(1-e^{-\beta})^2}\log(n/\epsilon))$, Algorithm~\ref{algo:positional} retrieves $\pi_0$ with probability at least $1-\epsilon$. \end{theorem} \begin{sproof} If we have enough samples so that every pair of alternatives is ranked correctly in the majority of its comparisons, with probability at least $1-\epsilon/n^2$, then, by union bound, all pairs are ranked correctly in the majority of their comparisons with probability at least $1-\epsilon$, which, in turn, would imply the theorem. If the number of samples is $r$, then each pair is compared at least $pr$ times in the sample. The Hoeffding bound implies that the probability that a pair is swapped in the majority of its appearances decays exponentially with $pr$. \end{sproof} For the complete proof, we refer the reader to Appendix~\ref{s:pf1}. In fact, the positional estimator is an optimal strategy with respect to the sample complexity of retrieving the central ranking. This stems from the fact that if the total number of comparisons of some pair in the sample is small, then there exists a family of possible central rankings whose relative orders cannot be reliably distinguished, due to lack of information. We continue with an essentially matching lower bound: \begin{theorem}\label{thm:lower_central} For any $p\in(0,1]$, $\epsilon\in(0,1/2]$, $\beta>0$ and $r=o(\frac{1}{\beta p}\log(n/\epsilon))$, there exists a $p$-frequent selection sequence $\mathcal{S}$ with $|\mathcal{S}|=r$, such that for any central ranking estimator, there exists a $\pi_0\in\mathfrak{S}_n$, such that the estimator, given a sample profile from $\Mal^{\Scal}$, fails to retrieve $\pi_0$ with probability more than $\epsilon$. \end{theorem} \begin{sproof} Let $\mathcal{S}$ contain $p|\mathcal{S}|$ sets with all alternatives and $(1-p)|\mathcal{S}|$ sets of size at most $n\sqrt{p/(1-p)}$. For any $i,j\in[n]$, let $\num[i,j](\mathcal{S})$ be the number of sets of $\mathcal{S}$ containing both $i$ and $j$, that is, the number of appearances of the pair $(i,j)$. Clearly, $\mathcal{S}$ is $p$-frequent and: \begin{equation}\label{eq:bounded_appearances} \sum_{i<j}\num[i,j](\mathcal{S})\le pn^2|\mathcal{S}|. \end{equation} Assume that $|\mathcal{S}|<\frac{1}{8p\beta}\log(\frac{n(1-\epsilon)}{4\epsilon})$. We will show that there exists a set of $n/2$ disjoint pairs of alternatives, many of which are observed only a few times in the samples. Assume that $n$ is even. We design a family $\{P_t\}_{t\in[n/2]}$ of perfect matchings on the set of alternatives $[n]$. Specifically, we consider $n/2$ sets of $n/2$ disjoint pairs $P_1 = \{(1,2),(3,4),\dots,(n-1,n)\}$, $P_2 = \{(1,4),(3,6),\dots,(n-1,2)\}$ and, in general, $P_t = \{(1,(2t)\mathop{\mathrm{mod}} n),\dots, (n-1,(2t+n-2)\mathop{\mathrm{mod}} n)\}$ for $t \in [n/2].$ Observe that no pair of alternatives appears in more than one perfect matching of the above family. Therefore: \begin{equation}\label{eq:bounded_appearances_matchings} \sum_{t\in[n/2]}\sum_{(i,j)\in P_t}\num[i,j](\mathcal{S})\le\sum_{i<j}\num[i,j](\mathcal{S})\,.
\end{equation} Combining \eqref{eq:bounded_appearances}, the bound for $|\mathcal{S}|$ and \eqref{eq:bounded_appearances_matchings}, we get that: \begin{equation*}\label{eq:bounded_appearances_matching} \exists t\in[n/2]: \sum_{(i,j)\in P_t}\num[i,j](\mathcal{S}) < \frac{n}{4\beta}\log\left(\frac{n(1-\epsilon)}{4\epsilon}\right)\,. \end{equation*} Hence, since $|P_t|=n/2$, there exist at least $n/4$ pairs $(i,j)\in P_t$ with $\num[i,j](\mathcal{S}) < \frac{1}{\beta}\log\left(\frac{n(1-\epsilon)}{4\epsilon}\right)$. We conclude the proof with an information-theoretic argument based on the observation that if the pairs of $P_t$, $n/4$ of which are observed few times, are adjacent in the central ranking, then the probability of swap is maximized for each pair. Moreover, the knowledge of the relative order of the elements in some pairs in the matching does not provide any information about the relative order of the elements in any of the remaining pairs. Intuitively, since each of these $n/4$ pairs is observed only a few times, no central ranking estimator can be confident enough about the relative order of the elements in all these pairs. \end{sproof} The complete proof can be found in Appendix~\ref{s:pf2}. \section{Approximating the Central Ranking with Few Samples} \label{s:approx} We show next that the positional estimator smoothly approximates the position of each alternative in the central ranking, within an additive term that diminishes as the number of samples $r$ increases. The average position estimator approximates the positions of the alternatives under the Mallows model, as shown by \citet{braverman2009sorting}. However, the average position is not meaningful under the selective Mallows model, because the lengths of the selective rankings may vary. Also, although under the Mallows model, the probability of displacement of an alternative decays exponentially in the length of the displacement, under the selective Mallows model, distant elements might be easily swapped in a sample containing only a small number of the alternatives that are ranked between them in the central ranking. For example, denoting with $\text{id}$ the identity permutation ($1\succ 2\succ \dots \succ n$), if $m\in[n]$, $m \gg 1$, $\pi_1\sim\mathcal{M}_{\text{id},\beta}^{\{[n]\}}$ and $\pi_2\sim\mathcal{M}_{\text{id},\beta}^{\{\{1,m\}\}}$, then: \[ \textnormal{Pr}[m\succ_{\pi_1} 1] \le 2e^{-\beta m/2}, \] since in order for $1$ and $m$ to be swapped, either of them has to be displaced by at least $m/2$ places, and as shown by \citet{bhatnagar2015lengths}, under the Mallows model, the probability of displacement of an alternative by $t$ places is bounded by $2e^{-\beta t}$. However: \[ \textnormal{Pr}[m\succ_{\pi_2} 1] \ge e^{-\beta}/(1+e^{-\beta}), \] using the bound for the swap probability provided by \citet{chierichetti2014reconstructing}. Since $m \gg 1$: $\textnormal{Pr}[m\succ_{\pi_1} 1] \ll \textnormal{Pr}[m\succ_{\pi_2} 1]$. Nevertheless, even though some selection sets may weaken the concentration property of the positions of the alternatives, we show that the positional estimator works under the selective Mallows model. This happens due to the requirement that each pair of alternatives should appear frequently: most pairs of alternatives that are distant in $\pi_0$ remain distant in the majority of the incomplete rankings obtained by restricting $\pi_0$ to the selection sets.
This is summarized by the following: \begin{theorem}\label{thm:approx} Let $\Pi\sim\Mal^{\Scal}$, where $\pi_0\in\mathfrak{S}_n$, $\beta>0$, $|\mathcal{S}|=r$ and $\mathcal{S}$ is $p$-frequent for some $p\in(0,1]$, and let $\epsilon\in(0,1)$. Then, for the positional estimator $\hat{\pi}=\hat{\pi}[\Pi]$, there exists some $N=O(\frac{\beta^2+1}{\beta^3 p^2r}\log(n/\epsilon))$ such that: \[ \textnormal{Pr}[\exists i\in[n]:|\hat{\pi}(i)-\pi_0(i)|>N]\le \epsilon\,. \] \end{theorem} \begin{sproof} We show that with high probability, for any alternative $i\in[n]$, only $N = O(\frac{1}{p}(\frac{1}{\beta}+\frac{1}{\beta pr}\log\frac{n}{\epsilon}))$ other alternatives $j$ are ranked incorrectly relative to $i$ in the majority of the samples of $\Pi$ where both $i$ and $j$ appear. Therefore, by the definition of the {\textsc{PosEst}}, even after tie breaking, each alternative is ranked by the output ranking within an $O(N)$ margin from its original position in $\pi_0$. If two alternatives $i,j$ are ranked $L$ positions away by the reduced central ranking corresponding to a sample, then the probability that they appear swapped in the sample is at most $2e^{-\beta L/2}$. However, even if $i,j$ are distant in $\pi_0$, they might be ranked close by a reduced central ranking. For any $i\in[n]$, we define a neighborhood $\mathcal{N}_i(L,\lambda)$ containing the other alternatives $j$ which appear less than $L$ positions away from $i$ in the corresponding reduced central rankings of at least $r/\lambda$ samples. Intuitively, those alternatives $j$ outside $\mathcal{N}_i(L,\lambda)$ are far from $i$ in the corresponding reduced central ranking of many samples. Hence, in these samples where $i,j$ are initially far (according to $L$), the probability of observing them swapped is small enough so that, with high probability, the number of samples where they are ranked correctly is dominant among all the appearances of the pair, since we have additionally forced the number of samples where $i,j$ are initially close (in which swaps are easy) to be small (according to $\lambda$). Additionally, we bound the size of the neighborhood by $|\mathcal{N}_i(L,\lambda)|\le 2L\lambda$, because in each sample there are at most $2L$ candidate neighbors of $i$ (according to $L$), while each element of $\mathcal{N}_i(L,\lambda)$ must be such a candidate in at least $r/\lambda$ of the $r$ samples. We conclude the proof by setting $L=O(\frac{1}{\beta}+\frac{1}{\beta pr}\log\frac{n}{\epsilon})$ and $\lambda = O(\frac{1}{p})$. Intuitively, $\lambda$ is chosen so that the number of samples where swaps are difficult is comparable to $pr$ (the minimum number of samples where each pair appears). The margin of error is $N=O(L\lambda)$. \end{sproof} For the details, we refer the reader to Appendix~\ref{s:pf3}. We continue with a remark on the sample complexity. \begin{remark}\label{rem:approx} In Theorem \ref{thm:approx}, the margin of the approximation accuracy $N$ can be refined as follows: \[ N = \begin{cases} O(\frac{1}{\beta p^2r}\log(n/\epsilon)),\text{ when }r=O(\frac{1}{p}\log(n/\epsilon))\,. \\ O(\frac{1}{\beta p}),\text{ when }r=\omega(\frac{1}{ p}\log(n/\epsilon))\,. \\ 0, \text{ when }r=\omega(\frac{1}{\beta^2 p}\log(n/\epsilon))\,. \end{cases} \] \end{remark} According to Remark \ref{rem:approx}, the error margin of approximation for the {\textsc{PosEst}} provably diminishes inversely proportionally to $\beta p^2r$, when $r$ is sufficiently small, and eventually becomes zero when $r$ exceeds the sample complexity of Theorem~\ref{thm:upper_central}.
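As a quick empirical sanity check of this behavior, the following hypothetical snippet reuses the \texttt{sample\_selective\_profile} and \texttt{pos\_est} sketches from above; it uses a randomized variant of the $p$-frequent assumption in which every alternative survives in each set with probability $\sqrt{p}$, so that every pair survives with probability $p$.

\begin{verbatim}
import numpy as np

def max_displacement(pi0, pi_hat):
    # max_i |pi_hat(i) - pi0(i)|; assumes every alternative appears in
    # at least one selection set, so that pi_hat ranks all of them.
    pos0 = {a: t for t, a in enumerate(pi0)}
    return max(abs(t - pos0[a]) for t, a in enumerate(pi_hat))

def average_margin(n=20, beta=0.3, p=0.5, r=50, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    pi0 = list(range(n))
    margins = []
    for _ in range(trials):
        sets = [{a for a in pi0 if rng.random() < np.sqrt(p)}
                for _ in range(r)]
        profile = sample_selective_profile(
            pi0, beta, sets, seed=int(rng.integers(1 << 30)))
        margins.append(max_displacement(pi0, pos_est(profile)))
    return float(np.mean(margins))
\end{verbatim}

Increasing $r$ should drive the returned margin toward zero, in line with Remark~\ref{rem:approx}.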
\section{Applications of Approximation} \label{s:applications} Assume we are given a sample from $\Mal^{\Scal}$, where $\mathcal{S}$ is $p$-frequent for some $p\in(0,1]$. Unless $|\mathcal{S}|$ is sufficiently large, we cannot find the central ranking with high probability. However, due to Theorem~\ref{thm:approx}, we know that the positional estimator approximates the true positions of the alternatives within a small margin. This implies two possibilities which will be analyzed shortly: First, in Section~\ref{sec:mle}, we present an algorithm that finds a maximum likelihood estimation of the central ranking with high probability. The algorithm is quite efficient, assuming that the frequency $p$ and the spread parameter $\beta$ are not too small. In Section~\ref{sec:topk}, we show how to retrieve the top-$k$ ranking (assuming we have enough samples to sort $O(k)$ elements), when $k$ is sufficiently large. \subsection{Maximum Likelihood Estimation of the Central Ranking} \label{sec:mle} We work in the regime where $r$ is (typically much) smaller than the sample complexity of Theorem~\ref{thm:upper_central}. We start with some necessary notation. For any $\mathcal{A}\subseteq \mathfrak{S}_n$, let $\mle_\mathcal{A}=\mle_\mathcal{A}[\Pi]$ be a \textit{maximal likelihood} estimation of $\pi_0$ among the elements of $\mathcal{A}$, that is: \begin{equation}\label{eqn:mleA} \mle_\mathcal{A}\in\arg\max_{\pi\in\mathcal{A}}\textnormal{Pr}[\Pi|\pi,\beta,\mathcal{S}]\,. \end{equation} If $\mathcal{A}=\mathfrak{S}_n$, then $\mle_\mathcal{A}=\sample^*$ is a \textit{maximum likelihood} estimation of $\pi_0$, while if $\pi_0\in\mathcal{A}$, then $\mle_\mathcal{A}=\sample^\circ$ is a \textit{likelier than nature} estimation of $\pi_0$ \citep{rubinstein2017sorting}. The following lemma states that computing the maximum likelihood ranking (\textsc{MLR}) is equivalent to maximizing the total number of pairwise agreements (\textsc{MPA}) between the selected ranking and the samples. \begin{lemma} \label{lem:mle_form} Let $\mathcal{A}$ be a subset of $\mathfrak{S}_n$ and assume that we draw a sample profile $\Pi \sim \Mal^{\Scal}.$ Consider the following two problems: \begin{equation} \label{eq:mlr} \textsc{(MLR)} \hspace{2mm} \arg\max_{\pi\in\mathcal{A}}\textnormal{Pr}[\Pi|\pi,\beta,\mathcal{S}]\ \ \ \ \ \mbox{and} \end{equation} \begin{equation} \label{eq:mpa} \textsc{(MPA)} \hspace{2mm} \arg\max_{\pi\in\mathcal{A}}\sum_{i\succ_\pi j}|\{\pi'\in\Pi: i\succ_{\pi'}j\}|\,. \end{equation} Then, the solutions of \textsc{(MLR)} and \textsc{(MPA)} coincide. \end{lemma} \begin{proof} If $\Pi=(\pi_1,\dots,\pi_r) \sim \Mal^{\Scal}$, then, starting from \eqref{eq:mlr} and using the fact that the normalization constants $Z(S_\ell,\beta)$ do not depend on the candidate ranking $\pi$, we get that: \[ \arg\max_{\pi\in\mathcal{A}}\textnormal{Pr}[\Pi|\pi,\beta,\mathcal{S}] = \arg\min_{\pi\in\mathcal{A}}\sum_{\ell\in[r]}d_{KT}(\pi,\pi_\ell)\,. \] That is, maximizing the likelihood function is equivalent to minimizing the total number of pairwise disagreements. Equivalently, we have to maximize the total number of pairwise agreements, which yields \eqref{eq:mpa}. Note that the samples in $\Pi$ are incomplete and therefore each pair of alternatives is compared only in some of the samples. \end{proof}
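As a brute-force illustration of this equivalence (feasible only for very small $n$; the function names are ours and purely illustrative), one can maximize the pairwise-agreement objective \eqref{eq:mpa} directly:

\begin{verbatim}
from itertools import combinations, permutations

def pairwise_agreements(profile):
    # f[(i, j)] = number of samples in which i precedes j.
    f = {}
    for pi in profile:
        pos = {a: t for t, a in enumerate(pi)}
        for i, j in combinations(pos, 2):
            a, b = (i, j) if pos[i] < pos[j] else (j, i)
            f[(a, b)] = f.get((a, b), 0) + 1
    return f

def brute_force_mlr(alternatives, profile):
    # By the lemma above (MLR = MPA), maximizing the total number of
    # pairwise agreements also maximizes the likelihood of the profile.
    f = pairwise_agreements(profile)
    def score(pi):
        return sum(f.get((pi[s], pi[t]), 0)
                   for s in range(len(pi)) for t in range(s + 1, len(pi)))
    return max(permutations(alternatives), key=score)
\end{verbatim}

The dynamic programming approach discussed next replaces this exhaustive search. Let us consider a subset $\mathcal{A}$ of $\mathfrak{S}_n$.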
According to Lemma \ref{lem:mle_form}, there exists a function $f=f[\Pi]: [n]\times [n]\to \mathbb{N}$ such that solving (\textsc{MLR}) is equivalent to maximizing a score function $s:\mathcal{A} \to \mathbb{N}$ of the form: \begin{equation}\label{eq:score} s(\pi) = \sum_{i\succ_\pi j}f(i,j)\,. \end{equation} Then, as shown by \citet{braverman2009sorting}, there exists a dynamic programming algorithm which, given an initial approximation of the maximizer of $s$, computes a ranking that maximizes $s$ in time linear in $n$, but exponential in the error of the initial approximation. More specifically, \citet{braverman2009sorting} show that: \begin{lemma}[\citet{braverman2009sorting}] \label{lem:maxscore} Consider a function $f:[n]\times [n] \rightarrow \mathbb{N}$. Suppose that there exists an optimal ordering $\pi\in\mathfrak{S}_n$ that maximizes the score \eqref{eq:score} such that $|\pi(i)-i|\le R, \forall i\in[n]$. Then, there exists an algorithm which computes $\pi$ in time $O(n\cdot R^2\cdot 2^{6R})$. \end{lemma} Recall that the positional estimator finds such an approximation $\hat{\pi}$ of the central ranking. Also, a careful examination of the proof of Lemma \ref{lem:maxscore} shows that given any initial permutation $\sigma \in \mathfrak{S}_n$, the dynamic programming algorithm finds, in time $O(n\cdot R^2\cdot 2^{6R})$, a maximizer (of the score function $s$) in $\mathcal{A}\subseteq \mathfrak{S}_n$, where $\mathcal{A}$ contains all the permutations that are $R$-pointwise close\footnote{We say that $\pi,\sigma \in \mathfrak{S}_n$ are $R$-pointwise close if $|\pi(i) - \sigma(i)| \leq R$ for all $i \in [n]$.} to the initial permutation $\sigma$. Therefore, we immediately get an algorithm that computes a likelier than nature estimation $\sample^\circ$ by finding $\mle_\mathcal{A}$, for an $\mathcal{A}$ such that $\hat{\pi},\pi_0\in\mathcal{A}$. Furthermore, if $\sample^*$ is an approximation of $\pi_0$, then, since $\hat{\pi}$ also approximates $\pi_0$, $\hat{\pi}$ is an approximation of $\sample^*$. Hence, we get an algorithm for computing a maximum likelihood estimation $\sample^*$. It turns out that $\sample^*$ approximates $\pi_0$, but with a larger margin of error. Thus, we obtain the following: \begin{theorem}\label{thm:mle_alg} Let $\Pi$ be a sample profile from $\Mal^{\Scal}$, where $\mathcal{S}$ is a $p$-frequent selection sequence, $p\in(0,1]$, $|\mathcal{S}|=r$, $\pi_0\in\mathfrak{S}_n$, $\beta>0$ and let $\alpha>0$. Then: \begin{enumerate} \item There exists an algorithm that finds a likelier than nature estimation of $\pi_0$, given input $\Pi$, with probability at least $1-n^{-\alpha}$, in time: \[ T = O(n^2+n^{1+O(\frac{2+\alpha}{r\beta p^2})}2^{O(\frac{1}{p\beta})}\log^2n)\,. \] \item There exists an algorithm that finds a maximum likelihood estimation of $\pi_0$, given input $\Pi$, with probability at least $1-n^{-\alpha}$, in time: \[ T = O(n^2+n^{1+O(\frac{2+\alpha}{r\beta p^4})}2^{O(\frac{1}{p^3\beta})}\log^2n)\,. \] \end{enumerate} \end{theorem} To summarize the algorithm of Theorem~\ref{thm:mle_alg}, we note that it consists of two main parts. First, using the fact that our samples are drawn from a selective Mallows distribution, in which the positions of the alternatives exhibit some notion of locality, we approximate the central ranking within some error margin for the positions of alternatives.
Second, beginning from the approximation we obtained at the previous step, we explore (using dynamic programming instead of exhaustive search, see Lemma \ref{lem:maxscore}) a subset of $\mathfrak{S}_n$ which is reasonably small and provably contains, with high probability, either $\pi_0$ (for finding a likelier than nature ranking) or $\sample^*$ (for finding a maximum likelihood one). The proof of Theorem \ref{thm:mle_alg} can be found in Appendix~\ref{s:pf4}. \subsection{Retrieving the Top-$k$ Ranking} \label{sec:topk} In this section, we deal with the problem of retrieving the top-$k$ alternatives and their ranking in $\pi_0$. In this problem, we are given a sample profile from a selective Mallows model and a positive integer $k$. We aim to compute the (identities and the) order of the top-$k$ sequence in the central ranking $\pi_0$. Based on the properties of the positional estimator, it suffices to show that (after sufficiently many selective samples) any alternative $i$ of the top-$k$ sequence of $\pi_0$ is ranked correctly with respect to any other alternative $j$ in the majority of samples where both $i$ and $j$ appear. Then, every other alternative will be ranked after the top-$k$ by the {\textsc{PosEst}}. The claim above follows from the approximation property of the {\textsc{PosEst}}. Theorem~\ref{thm:approx} ensures that any alternative $i$ of the top-$k$ sequence is ranked in the correct majority order against most other alternatives (those that are far from $i$ in most reduced central rankings). So, for each such $i$, we can focus on only $O(k)$ pairs, which could appear swapped with noticeable probability. To formalize the intuition above, we can restrict our attention to the regime where the number of alternatives $n$ is sufficiently large and $k = \omega(1 / (p \beta))$. By Remark~\ref{rem:approx}, this ensures that the accuracy of the approximation of {\textsc{PosEst}} diminishes inversely proportionally to $\beta p^2r$, since we only aim to ensure that the accuracy is $O(k)$. Specifically, Theorem~\ref{thm:topk-up} provides an upper bound on the sample complexity in that regime and Corollary~\ref{cor:topk-low} gives a general lower bound for the case where $k = O(n)$. \begin{theorem} \label{thm:topk-up} Let $k = \omega( 1/(p \beta))$ be a positive integer. For any $\epsilon\in(0,1)$ and $p\in(0,1]$, there exists an algorithm which, given a sample profile from $\Mal^{\Scal}$, where $\mathcal{S}$ is a $p$-frequent selection sequence with: \begin{equation*} |\mathcal{S}| = O\!\left(\frac{\log(k/\epsilon)}{p(1-e^{-\beta})^2}+\frac{\log(n/\epsilon)}{p^2\beta k}\right)\,, \end{equation*} retrieves the top-$k$ ranking of the alternatives of $\pi_0$, with probability at least $1-\epsilon$. \end{theorem} \begin{sproof} Let $\Pi\sim\Mal^{\Scal}$ be our sample profile. We will make use of the {\textsc{PosEst}} $\hat{\pi} = \hat{\pi}[\Pi]$ and, without loss of generality, assume that $\pi_0$ is the identity permutation. We will bound the probability that there exists some $i\in[k]$ such that $\hat{\pi}(i)\neq i$. For any $i\in[k]$, we can partition the remaining alternatives into $A_1(i) = \mathcal{N}_i(L,\lambda)$ and $A_2(i)=[n]\setminus (A_1(i)\cup\{i\})$. From the proof sketch of Theorem~\ref{thm:approx}, we recall that $\mathcal{N}_i(L,\lambda)$ contains the alternatives that are ranked no more than $L$ places away from $i$ in the reduced central rankings corresponding to at least $r/\lambda$ samples.
By an intermediate result in the proof of Theorem \ref{thm:approx}, there exist $L$, $\lambda$ with $|A_1(i)|=O(\frac{1}{p\beta}+\frac{1}{p^2\beta |\mathcal{S}|}\log(n/\epsilon))$ such that, with probability at least $1-\epsilon/2$, for every $i\in[n]$ and every alternative $j\in A_2(i)$ (distant from $i$ in most samples), $j$ is ranked in the correct order relative to $i$ in the majority of the samples where they both appear. Picking $L$, $\lambda$ so that the above result holds, there exists some $r_1=O(\frac{1}{p^2\beta k}\log(n/\epsilon))$ such that, if $|\mathcal{S}| \geq r_1$, then $|A_1(i)| = O(k)$. Furthermore, following the same technique used to prove Theorem \ref{thm:upper_central}, we get that for some $r_2 = O(\frac{1}{p(1-e^{-\beta})^2}\log(k/\epsilon))$, if $|\mathcal{S}|\ge r_2$, then, with probability at least $1-\epsilon/2$, every pair of alternatives $(i,j)$ such that $i\in[k]$ and $j\in A_1(i)$ is ranked correctly by the majority of samples where both $i$ and $j$ appear, since the total number of such pairs is $O(k^2)$. Therefore, with probability at least $1-\epsilon$, both events hold and for any fixed $i\in[k]$, $\hat{\pi}(i)=i$, because $i$ is ranked correctly relative to every other alternative in the majority of their pairwise appearances, and also because for every other alternative $j>k$: $\hat{\pi}(j)>k$, since each of the alternatives in $[k]$ appears before it in the majority of samples where they both appear. \end{sproof} For the complete proof, we refer to Appendix~\ref{s:pf5}. From a macroscopic and simplistic perspective, the sample complexity of learning the top-$k$ ranking can be interpreted as follows. The first term, i.e., $O_\beta(\frac{1}{p}\log(k/\epsilon))$, accounts for learning the ranking of the top-$k$ sequence (as well as some $O(k)$ other alternatives), since each of them is close to the others in the central ranking (and in each reduced ranking where they appear). Hence, it is probable that their pairs appear swapped. The second term, i.e., $O_\beta(\frac{1}{p^2 k}\log(n/\epsilon))$, accounts for identifying the top-$k$ sequence, by approximating their positions. Intuitively, the required accuracy of the approximation relaxes as $k$ grows, since we aim to allow $O(k)$ probable swaps for each of the alternatives of the top-$k$ sequence. Combining the two parts, we conclude that given enough samples, {\textsc{PosEst}} outputs a ranking where the top-$k$ ranking coincides with the top-$k$ ranking of $\pi_0$. We conclude with the lower bound, followed by a discussion about the tightness of our results. \begin{cor} \label{cor:topk-low} For any $k\le n$, $p\in(0,1]$, $\epsilon\in(0,1/2]$, $\beta>0$ and $r=o(\frac{1}{\beta p}\log(k/\epsilon))$, there exists a $p$-frequent selection sequence $\mathcal{S}$ with $|\mathcal{S}|=r$, such that for any central ranking estimator, there exists a $\pi_0\in\mathfrak{S}_n$ such that the estimator, given a sample profile from $\Mal^{\Scal}$, fails to retrieve the top-$k$ ranking of $\pi_0$ with probability at least $\epsilon$. \end{cor} Corollary~\ref{cor:topk-low} is an immediate consequence of Theorem~\ref{thm:lower_central}. The bounds we provided in Theorem~\ref{thm:topk-up} and Corollary~\ref{cor:topk-low} become essentially tight if $k=\Omega(\frac{1}{p}\log(n/\epsilon))$, since the term $O_\beta(\frac{1}{p}\log(k/\epsilon))$ becomes dominant in the upper bound, which then essentially coincides with the lower bound.
In the intuitive interpretation we provided for the two terms of the sample complexity in Theorem~\ref{thm:topk-up}, this observation suggests that when $k$ is sufficiently large, the sample complexity of identifying the top-$k$ ranking under the selective Mallows model is dominated by the sample complexity of sorting the top-$k$ alternatives. We conclude with an informative example, where we compare the sample complexity of retrieving the complete central ranking and the top-$k$ ranking in an interesting special case. Let us assume that $p=1/\log\log n$ and that $k=\Theta_\beta(\log(n/\epsilon))$. Then we only need $O_{\beta}(\log\log n \cdot \log\frac{\log n}{\epsilon})$ samples to retrieve the top-$k$ ranking, while learning the complete central ranking requires $\Omega_{\beta}(\log (n/\epsilon))$ samples. Namely, we have an almost exponential improvement in the sample complexity, for values of $k$ that suffice for most practical applications. \section{Experiments} \label{s:experiments} In this section, we present some experimental evaluation of our main results, using synthetic data. First, we empirically verify that the sample complexity of learning the central ranking from $p$-frequent selective Mallows samples using $\textsc{PosEst}$ is $\Theta(1/p)$, keeping every other parameter fixed. Furthermore, we illustrate empirically that $\textsc{PosEst}$ is a smooth estimator of the central ranking, in the sense that $\textsc{PosEst}$ outputs rankings that are, on average, closer in Kendall Tau distance to the central ranking as the size of the sample profile grows. \subsection{Empirical sample complexity} We estimate the sample complexity of retrieving the central ranking from selective Mallows samples where $n=20$ and $\beta=2$, with probability at least $0.95$, using $\textsc{PosEst}$, by performing binary search over the size of the sample profile. During a binary search, for every value $r$ of the sample profile size that we examine, we estimate the probability that $\textsc{PosEst}$ outputs the central ranking by drawing $100$ independent $p$-frequent selective Mallows profiles of size $r$, computing $\textsc{PosEst}$ for each one of them and counting successes. We then compare the empirical success rate to $0.95$ and proceed with our binary search accordingly. For a specific value of $p$, we estimate the corresponding sample complexity by performing $100$ independent binary searches and computing the average value. The results, which are shown in Figure \ref{fig:SCadv}, indicate that the dependence of sample complexity on the frequency parameter $p$ is indeed $\Theta(1/p)$. \begin{figure}[ht] \centering \includegraphics[width=.85\linewidth]{figs/SCadv} \caption{Estimated sample complexity of retrieving, with probability at least $0.95$ and using $\textsc{PosEst}$, the central ranking from selective Mallows samples, with $n=20$, $\beta=2$, over the frequency parameter's inverse.}\label{fig:SCadv} \end{figure} \subsection{Smoothness of $\textsc{PosEst}$} We plot, for different values of the frequency parameter $p$, the average Kendall Tau distance between the central ranking and the output of $\textsc{PosEst}$ with respect to the size of the sample profile. For each value $r$ of the sample profile size, considering $\beta=0.3$ and $n=20$, we draw $100$ independent selective Mallows sample profiles, each of size $r$, compute the distance between the output of $\textsc{PosEst}$ for each sample profile and the central ranking, and take the average of these distances.
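A sketch of this evaluation loop (reusing the hypothetical \texttt{sample\_selective\_profile} and \texttt{pos\_est} helpers from the earlier sketches) could look as follows:

\begin{verbatim}
from itertools import combinations

def kendall_tau(pi_a, pi_b):
    # Number of discordant pairs between two rankings of the same set.
    pos_a = {x: t for t, x in enumerate(pi_a)}
    pos_b = {x: t for t, x in enumerate(pi_b)}
    return sum(1 for x, y in combinations(pos_a, 2)
               if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)

def average_distance(pi0, beta, selection_sets, trials=100):
    # Average d_KT between the central ranking and PosEst's output.
    total = 0
    for t in range(trials):
        profile = sample_selective_profile(pi0, beta, selection_sets,
                                           seed=t)
        total += kendall_tau(pi0, pos_est(profile))
    return total / trials
\end{verbatim}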
The results are presented in Figure \ref{fig:PDadv}. \begin{figure}[ht] \centering \includegraphics[width=.95\linewidth]{figs/PDadv} \caption{Average Kendall Tau distance between the output of $\textsc{PosEst}$ and the central ranking with respect to the size of the sample profile, for different values of the frequency parameter $p$, when $n=20$, $\beta = 0.3$.}\label{fig:PDadv} \end{figure} \subsubsection*{Acknowledgements} The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. Dimitris Fotakis and Alkis Kalavasis are partially supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the ``First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant'', project BALSAM, HFRI-FM17-1424. \newpage
\section{Introduction} \label{section:1} Theories of distributive justice hold that justice can be measured (in part) in terms of the fair distribution of benefits and burdens across people in society. These theories help us (i.e., community members, researchers, and policymakers) to know both what justice is and how to tell whether one society or system is more or less just than another. Aiming to mitigate ``algorithmic unfairness'' or bias, the field known as \ac{fair ML} has recently sought to operationalize theories of distributive justice in ML systems~\citep{binns_fairness_2018, dwork_fairness_2011}. Researchers have quantitatively formalized theories of equal opportunity~\citep{hardt_equality_2016, heidari_fairness_2018, joseph_fairness_2016}, as well as portions of John Rawls's influential theory of distributive justice~\citep{hashimoto_fairness_2018, jabbari_fairness_2017, joseph_rawlsian_2016}. While the quantitative formalization of these theories has been ongoing over the past 30-plus years, in theory we don't need to assume a quantitative measure at the outset~\citep{roemer_theories_1998}. Historically, formalization has been accompanied by robust philosophical debate over the question ``\emph{what} is the right measure of justice?'' That is, when evaluating whether a system is fair or just, what type of benefit or burden should we look at, as a measure? And how should we approach measuring that benefit, quantitatively or otherwise? These questions are important because different benefits affect the well-being of different people in different ways, and different theories offer different approaches to measurement. Selecting the wrong measure, or operationalizing the wrong theory, may prematurely foreclose consideration of alternatives, disregard the well-being of certain people, or reproduce and entrench---rather than remediate---existing social injustices. Case in point, \ac{fair ML} researchers have often conceptualized algorithmic unfairness as a problem of ``resource allocation'' wherein they aim to mitigate unfairness by algorithmically reallocating resources across people~\citep{donahue_fairness_2020,hashimoto_fairness_2018}. For inspiration and operationalization, some well-intentioned researchers have turned to the theories of distributive justice that similarly aim to remediate social injustices by quantifying, measuring, and reallocating resources across people in society~\citep{hoffmann_where_2019}. Notably, the theories of John~\cite{rawls_theory_1971}---recently called ``artificial intelligence's favorite philosopher'' due to his popularity in \ac{fair ML}~\citep{procaccia_ai_2019}---have received significant attention. However, when unfairness or injustice is conceptualized as a problem of resource allocation, the answer to the question ``\emph{what} is the right measure?'' is already embedded in the statement of the problem. That is, ``resource allocation'' problems entail a particular solution: to mitigate unfairness or remediate social injustice, it is \emph{resources} that are quantified, measured, and reallocated across people. But are resources the right type of benefit to look at, as a measure of justice? Rawls answers yes, with important qualifications. Other philosophers known as capability theorists answer no. They argue that resource measures encode biases against disabled and other socially marginalized people.
Rawls mounts a defense of resources, but capability theorists don't only argue against resources; they also offer an alternative measure: \emph{capabilities}. Which measure of justice is right? And has \ac{fair ML} been using the wrong one? \subsection{Overview of Contents} In this paper, I extend---from philosophy to machine learning---a longstanding debate between defenders of two prominent approaches to measuring justice: the \textbf{resourcist approach} (defended by philosophers like John Rawls, Ronald Dworkin, and Thomas Pogge) and the \textbf{capability approach} (defended by philosophers like Amartya Sen, Martha Nussbaum, and Elizabeth Anderson). I spell out the differences between these competing approaches, and how they have been (or could be) operationalized in the design and development of ML systems. First, I inquire into what makes the resourcist approach amenable to operationalization in these systems. Then, I inquire into whether the capability approach could be operationalized with more success. The inquiry proceeds by way of the capability theorists' critiques of the resourcist approach. At a high level, the inquiry proceeds as follows. In the resources-versus-capabilities debate, the two approaches are distinguished based on certain criteria for a measure of justice. I begin by introducing these criteria, as well as the philosophical tools and terminology of distributive justice~\sectionlabel{section:2}. Based on these criteria, and their underlying characteristics, I establish a theoretical resemblance between resource measures and the measures often adopted in \ac{fair ML}, where algorithmic unfairness has often been conceptualized as a problem of resource allocation. This resemblance suggests how critiques made by the capability theorists might carry over to the ML systems that operationalize Rawls's theory~\sectionlabel{section:3}. To show how these critiques do carry over, I discuss two real-world case studies from ML. In these cases, I argue that the measures selected for mitigating algorithmic unfairness end up entrenching and reproducing social injustices---just as the capability theorists argue resource measures do~\sectionlabel{section:4}. As an alternative to operationalizing the resourcist approach in ML systems, I introduce the capability approach~\sectionlabel{section:5}. I outline what operationalizing this approach could look like in practice---through a participatory and democratic process---and discuss how doing so could better remediate the social injustices from the case studies, and beyond. I also discuss some of the most noteworthy advantages and limitations of the capability approach~\sectionlabel{section:6}. Finally, I conclude by replying to some potential objections~\sectionlabel{section:7}. Extending the longstanding resources-versus-capabilities debate from philosophy to ML reveals some broadly applicable characteristics of (and criteria for) a measure of justice in machine learning. \begin{itemize} \setlength\itemsep{0.0em} \item Whether a measure is single or multi-valued; quantitative and/or qualitative. \item Whether what is being measured is a ``means to an end'' or an ``end in itself.'' \item Whether or not a measure is sensitive to people's heterogeneities and societal contexts. \item Whether or not a measure is publicly legible and verifiable.
\end{itemize} \noindent These characteristics and criteria have broad implications for the practicability of quantitative and/or qualitative evaluations of justice and fairness in machine learning, for the participatory design of these systems, and for the democratic oversight over the institutions that build and deploy them. \section{Theories of Distributive Justice} \label{section:2} There are many theories of justice---distributive, legal, restorative, retributive, among others---all uniquely important and often complementary. Here, I focus on theories of distributive justice. While some of these theories (namely, Rawls's resourcist approach) have recently been operationalized in ML systems, prominent alternatives (such as the capability approach) seem to have been sidelined. To understand the differences between these two approaches, and how they have been (or could be) operationalized in \ac{fair ML}, we first need to know more about how theories of distributive justice work. Theories of distributive justice propose \emph{principles} for fairly distributing benefits (e.g., material resources, services, and opportunities) and burdens (e.g., barriers, taxes, and punishments) across people in society. These principles provide normative guidance for a society's systems and institutions to follow, to detect and remediate historical and on-going social injustices. Distributive principles are specified by two features: their measure and their rule. The \textbf{measure} specifies \emph{what} type of benefit or burden is to be distributed. The \textbf{rule} specifies \emph{how} that benefit or burden is to be distributed~\citep{brighouse_justifying_2010}. For example, as I noted in the Introduction, to remediate social injustices, some philosophers propose reallocating resources across people in society. Similarly, to mitigate algorithmic unfairness, some \ac{fair ML} researchers propose the same. But the principle of ``reallocate resources'' is specified both too quickly and not yet fully. It is too quick because we don't need to select resources as the type of benefit to be reallocated (the measure). It is not yet fully-specified because it also matters how resources are to be reallocated (the rule). We can specify both the measure and the rule in many ways. And each feature combination will have different normative implications, lead to different distributions, and affect the well-being of different people differently.\footnote{ Unlike distributive principles from philosophy, the field of \ac{fair ML} has not made a consistent conceptual and terminological distinction between the measure (\emph{what} is being distributed) and the rule (\emph{how} it is being distributed). Instead, these two features have often been collapsed into a single conception, referred to as the ``fairness metric''. For example, ``equal predictive accuracy'' is one such fairness metric, and there are countless others on offer~\citep{narayanan_21_2018,deborah_hellman_measuring_2019}. To be consistent with the philosophical terminology, I will refer to equal predictive accuracy (and the like), not as a fairness metric, but as a distributive principle that specifies predictive accuracy as its measure and equality as its rule. Making this conceptual and terminological distinction is important because it helps us to be clearer about which feature (the measure or the rule) contributes to the unfairness or unjustness of a distribution. Failure to make this distinction can lead us to take either feature for granted. 
} Theories of distributive justice primarily differ on which measure-rule combination they defend as the most successful remediator of social injustice. Some philosophers argue that resources like monetary income and wealth (the measure) should be distributed more-or-less equally (the rule) across people in society. Others defend different measures or rules, arguing that the principle of ``equal income and wealth'' is not an adequate remediator of social injustice. They argue this, not necessarily because equalizing income and wealth wouldn't be helpful at all, but because successful remediation could require that we look at resources other than money, or it could require that we look at benefits other than resources, as a measure of justice.\footnote{ Apart from looking at the distribution of resources like income and wealth, successful remediation of social injustices might require that people in society have a high quality of life overall, that they are paid what they deserve for their hard work, that they are not disadvantaged for being born into historically marginalized communities, that they are not subject to coercion or fraud, or that they are capable of functioning as equal citizens in a democratic society. Equalizing monetary resources like income and wealth may not remediate injustices arising from these other non-monetary, less-readily-quantifiable types of benefit and burden, so philosophers of distributive justice have defended many non-monetary, non-resource measures (such as utility, desert, luck, entitlements, capabilities, among others) on the grounds that they will do a better job. } Theories of distributive justice also differ on whether they consider the \emph{practicability} of a distributive principle in society. That is, remediating social injustice via redistribution doesn't only depend on specifying the right measure and rule. It also depends on whether a distributive principle can be put into practice, and under which social conditions. For some philosophers, a principle only needs to work ``in theory,'' under abstract and ideal social conditions. For others, a principle only needs to work ``in practice,'' under existing social conditions, here and now~\citep{schmidtz_nonideal_2011}. Whether a principle works in theory, in practice, or in both, is not always clear. But, we (i.e., community members, researchers, and policymakers) should at least be clear about the conditions under which a principle is \emph{intended} to work. Putting into practice a principle that is intended for other social conditions could have unintended consequences.\footnote{ When theorizing about justice, it is important to distinguish between ideal and non-ideal modes of theorizing. \emph{Ideal theories} (such as Rawls's) propose distributive principles for arranging societies and systems under certain ideal social conditions~\citep{wenar_john_2017}. For example, Rawls's theory assumes that the social conditions of society are cooperative, and that its members are ``normal and fully cooperating members of society over a complete life,'' among other assumptions~\citep{rawls_theory_1971}. The purpose of making these idealizing assumptions is not to ignore the difficult realities that actual people face in present-day society. Rather, the purpose is to first formulate principles workable under ideal conditions, so that we are then better equipped to formulate principles workable under non-ideal conditions, through reference to the ideal~\citep{wenar_john_2017}. 
At their best, ideal theories provide guidance and facilitate progress in theorizing about justice. By contrast, \emph{non-ideal theories} (such as the capability approach) propose actual steps that actual people can take---here and now---to address the actual problems they face, and to make present-day society a better place~\citep{schmidtz_nonideal_2011}. For these philosophers, a principle which cannot be put into practice (due to institutional, informational, or technical constraints) is not yet a serious candidate for the urgent remediation of existing social injustice~\citep{lamont_distributive_2017}. What distinguishes ideal from non-ideal theorizing is not the use of idealization \emph{per se}: non-ideal theorizing can and will idealize in various ways. Rather, what distinguishes the two is the priority given to the abstract over the actual. With non-ideal theorizing, we don't need to first formulate principles that are workable under ideal social conditions, since these principles may not be helpful or applicable under non-ideal social conditions. At its worst, ideal theorizing can distract us from, marginalize, or exclude the difficult realities that actual people face in present-day society~\citep{mills_ideal_2005}. Recent work in \ac{fair ML} has suggested moving away from ideal theorizing, toward non-ideal theorizing in the design and development of ML systems~\citep{fazelpour_algorithmic_2020}. } While the measure and the rule of a distributive principle are equally important, in this paper I focus on the measure, largely setting aside discussion of the rule. For a fuller discussion of both, I refer interested readers to surveys by Richard~\cite{arneson_egalitarianism_2013} and by Julian Lamont and Christi Favor~\citep{lamont_distributive_2017}. Additionally, I recommend the anthology \emph{Measuring Justice: Primary Goods and Capabilities} edited by Harry Brighouse and Ingrid Robeyns, from which I draw inspiration for this paper's title as well as much of its philosophical underpinning~\citep{brighouse_measuring_2010}. \subsection{What Is the Right Measure of Justice?} \label{section:2.1} To know what measure of justice is right, we first need to know what the measure is intended to do. Essentially, the measure should encompass (as much as possible) what it is that contributes to people's well-being within their society. That is, the measure should consist of some type of benefit that people would like to have (or some burden that people would like to avoid) such that having more (or less) contributes to their well-being.\footnote{ For readers familiar with the philosophical terminology, I clarify here that I do not use the term ``well-being'' to refer specifically to the \emph{welfarist approach} to measuring justice, which is exemplified by the theories of utilitarianism. For unfamiliar readers, I clarify that in theories of distributive justice, ``well-being'' can have various conceptions that are adopted to, as T.M.~\cite{scanlon_what_1998} puts it, ``make comparisons of how well-off people would be under various conditions, as measured by these conceptions.'' For example, the welfarist approach conceives of well-being in terms of \emph{welfare}, the resourcist approach in terms of \emph{resources}, and the capability approach in terms of \emph{capabilities}. In this paper, I largely set aside discussion of the welfarist approach, for reasons that will become clearer later on. I thank Gabriel Karger for suggesting this reference.
} Consider again the simple monetary measure of income and wealth, a resource measure. People with more income and wealth are usually better off than people with less. Acquiring more money usually helps people to do more of the things they enjoy, to achieve greater social mobility, and to generally improve their quality of life and future prospects. As a measure of well-being, income and wealth are ubiquitous and easily understood, and they frequently appear in evaluations of individual, institutional, and national well-being. So, we might intuitively think that these monetary measures get something right about measuring justice. Indeed, philosophers often refer to the measure as the ``currency'' of a distributive principle~\citep{cohen_currency_2011}. Less intuitively, however, a measure of justice doesn't need to be monetary, or even numerically quantifiable. This is because, when we look more closely at a person's societal context, we find that being monetarily wealthy is not the same as being well. A person's well-being does not depend \emph{only} on their income and wealth, but also on the physiological, psychological, social, political, and economic conditions under which they live. These heterogeneous conditions improve and diminish well-being, so a measure of justice should encompass them. But, some of these heterogeneities won't be readily expressed in monetary, quantifiable terms, so a measure should include \emph{more than} income and wealth alone. If it does not, then the measure may inadequately detect and remediate social injustices arising from heterogeneous social and political conditions. In his Tanner Lecture, Amartya~\cite{rawls_equality_1979} argued that Rawls's measure of justice is an inadequate remediator of social injustice because it is \emph{insensitive} to people's heterogeneities and societal context. This lecture inaugurated a decades-long, still on-going debate between defenders of the resourcist and capability approaches to measuring justice. Rawls defends what is called the \emph{primary goods} measure, a measure of resources.\footnote{ I thank Sally Haslanger for encouraging me to introduce Rawls's primary goods measure in greater detail. } However, unlike the simple measure of income and wealth, Rawls's conception of resources is broad and inclusive. The primary goods consist of both \emph{material} resources (such as income and wealth) and also \emph{immaterial} resources (such as opportunities, rights, and liberties, among others)~\citep{rawls_theory_1971}. Rawls conceives the primary goods measure in this way (in part) to achieve greater sensitivity to the heterogeneous social and political conditions that affect people's well-being in society. This is because (as above) these heterogeneities may not be readily expressed in the quantifiable terms of income and wealth alone. While Rawls's primary goods measure includes much more than income and wealth, the measure may nevertheless be insensitive in other ways. It may not, for example, adequately attend to disparities in well-being that are not constituted by a lack of resources (either material or immaterial) but rather by social and political relations between people, as we will see later on. Although it is conceived broadly and inclusively, the primary goods measure is still ultimately ``resourcist'' in its conception (i.e., constituted by resources), and so the capability theorists argue that it is ultimately inadequate for the purpose of remediating social injustice.
Capability theorists object not so much to which (or how many) resources are included in the primary goods measure as to resources \emph{themselves}, as a type of benefit to be looked at as a measure of justice. On their view, we can (and should) measure another type of benefit: one that achieves greater sensitivity to people's heterogeneous social and political conditions, thereby better encompassing what contributes to people's well-being in society. Capability theorists like Sen defend a measure of people's \emph{capabilities}. To foreshadow, I will argue that measures from ML \emph{resemble} resource measures such as Rawls's, and that this resemblance makes the resourcist approach amenable to operationalization in ML systems. At the same time, I will argue that this resemblance opens measures from ML to similar critiques from the capability perspective. Specifically, it opens these measures to the critique that they are \emph{insensitive} to the heterogeneous social and political conditions that affect people's well-being in society. Note that the relation between resource measures and those from ML is one of resemblance, and not of equivalence. This is because Rawls's primary goods measure is much broader and more inclusive than the measures appearing in resource allocation problems from \ac{fair ML}. An equivalence relation would be more appropriately drawn between measures from ML and the simple resource measure of income and wealth. That is, measures from ML tend to consist of (at most) a very small subset of Rawls's primary goods, which are conceived broadly and inclusively. So, if the capability theorists' critiques of the primary goods measure land with any force, we might expect similar critiques to land much more forcefully, when leveled against measures much more limited than Rawls's. \subsection{Two Criteria for a Measure of Justice} \label{section:2.2} I will establish a theoretical resemblance between measures from ML and resource measures from theories of distributive justice. To do this, I first introduce two important criteria for a measure of justice, which are at the center of the resources-versus-capabilities debate: the measure's sensitivity to personal heterogeneities and societal context (which I refer to as the \textbf{sensitivity criterion}) and the measure's public legibility and verifiability (or the \textbf{publicity criterion}). Introducing these criteria early on will help us to outline the contours of the debate, and to later draw the resemblance between resource measures and their operationalized analogues in ML systems. As I introduce these criteria, the reader may find it helpful to keep in mind the overarching argumentative point: that measures from ML \emph{resemble} resource measures, and are therefore open to similar critiques from the capability perspective. \subsubsection{The Sensitivity Criterion} \label{section:2.2.1} Should a measure of justice be sensitive to people's heterogeneities and societal context? That is, should a measure take into consideration differences between people, in terms of their physiological and psychological functioning as well as their social and political conditions? Sen and other capability theorists argue that because a person's well-being depends not only on the resources they have in their possession, but also on their heterogeneous conditions, a measure of justice must be sensitive to these conditions.
To appreciate what it means for a measure to be sensitive, consider one way that resource measures are argued to be \emph{insensitive}. According to capability theorists, resource measures are not adequately sensitive to what contributes to the well-being of disabled people~\citep{brighouse_justifying_2010,rawls_equality_1979,brighouse_what_2010}. This is because these measures do not attend to the complex interplay between bodily functioning and societal context, from which disability arises.\footnote{ In disability scholarship, the \emph{social model} of disability makes a conceptual and terminological distinction between ``impairment'' in bodily functioning and ``disability''. Impairment is characterized by some limitation of physiological or psychological functioning, whereas disability is characterized by social or political exclusion perpetrated on the basis of that impairment. Impairment alone is not disabling. Rather, disability arises when a society---built to accommodate ``normal'' people who lack ``abnormal'' impairments---ends up excluding (intentionally or not) people with impairments from full social and political participation~\citep{wasserman_disability:_2016, shakespeare_social_2013}. While the social model has been enormously influential, the disability-impairment distinction has been critiqued in the Disability Rights and Disability Pride movements, as well as in more recent scholarship. According to these critiques, the distinction between ``disability'' and ``impairment'' is untenable, and possibly circular. If disability is understood as societal exclusion perpetrated on the basis of ``abnormal'' impairment, then there must be some characteristics that distinguish people who are ``impaired'' (i.e., limited in their bodily functioning) from people who are not. But, it's unclear what those characteristics are, exactly, since impairment may itself be socially determined or specific to a person's historical and cultural context~\citep{tremain_government_2001}. An alternative view, known as the \emph{minority model} of disability, proposes that what it means for a person to be ``disabled'' is to be viewed as such by their society, and to be socially subordinated as a minority within that society. That is, to be disabled is to be disadvantaged (along some dimension) in comparison to people who are non-disabled, without positing impairment in bodily functioning as the basis for that disadvantage~\citep{barnes_minority_2016}. I thank Sally Haslanger for suggesting these references. } For example, two people may be equal in their allotment of resources, and thus apparently equal in their well-being, according to the simple measure of income and wealth. But, if one person has a disability that is monetarily expensive (due to structural barriers, a lack of accommodation, or outright social exclusion), then the two may in fact be unequal in their well-being~\citep{rawls_equality_1979}. This critique is easy enough to grasp when considering the simple measure of income and wealth, but the capability theorists also level it against Rawls's primary goods measure, broadly and inclusively conceived. Equalizing resources of any sort, they argue, will inadequately remediate social injustices that arise from people's heterogeneous social and political conditions. Equality will be achieved in name only. But here it may be asked: why not adjust the rule instead of the measure?
That is, couldn't an \emph{unequal} reallocation of resources---whereby disabled people receive \emph{more} resources---adequately remediate these social injustices?\footnote{ I thank Harini Suresh for encouraging me to clarify this point. } This is a plausible idea. But, it only moves the problem out of view temporarily, and the capability theorists ultimately reject it.\footnote{ As with equalizing income, unequal redistribution of resources toward disabled people---although well-intentioned---may yet be an inadequate remediator of social injustice, in two ways. First, it may not allow people with disabilities to bypass long-standing structural barriers when doing so requires accommodations that exceed their allotted resource bonus, or when doing so is blocked by social prejudice (e.g., as when workplace modifications are deemed ``unreasonable'' by an unaccommodating employer). Second, reallocation of resources toward disabled people may disrespect their personhood, or affront their dignity, by suggesting that they are somehow pitiful, or that their disabilities are quantifiable losses rectifiable by resource compensation, thereby entrenching a subordinate social status relative to their non-disabled peers~\citep{anderson_what_1999}. } As above, reallocating resources can fail to adequately encompass well-being---especially for people with disabilities, but also for \emph{anyone} who experiences discrimination, structural barriers, psychosocial harms, and other forms of non-monetary, less-readily-quantifiable disadvantage. Structural and psychosocial injustices---including \emph{de facto} segregation, discourse inequality, stigma, and shunning---are pervasive, but they are not likely remediable by reallocations of resources alone~\citep{brighouse_justifying_2010}. This is because these injustices are not constituted by inequitable distributions of resources \emph{across} people, but rather by social and political relations \emph{between} people. Here, I note (briefly) that \ac{fair ML} often posits a distinction similar to the one just described. That is, a distinction is posited between algorithmic unfairness which \emph{can be} mitigated via resource reallocation versus unfairness which \emph{cannot} because it arises from social and political relations between people, and not from an inequitable distribution of resources alone. In the \ac{fair ML} literature, this is known as the distinction between \emph{allocative harms} and \emph{representational harms}~\citep{crawford_trouble_2017, barocas_fairness_2019}. I will postpone a fuller introduction to this distinction until later on, when I discuss how \ac{fair ML} researchers have sometimes conflated allocation and representation when operationalizing theories of distributive justice~\sectionlabel{section:4.2}. It suffices to note here that measures which are sensitive to heterogeneous political and social conditions (namely, capability measures) are conceived to better address harms of the second sort (namely, harms of representation, not of allocation). But, if sensitive measures do a better job of addressing these harms, it may be wondered: why not select a sensitive measure? Because the sensitivity criterion is in tension with another: publicity. \subsubsection{The Publicity Criterion} \label{section:2.2.2} At the level of a society's systems and institutions, the information required to make a claim of injustice must be publicly legible and verifiable~\citep{brighouse_measuring_2010}.
That is, the measure must be transparent (not murky) and accessible to all. This is because, if this information were not publicly legible and verifiable, the legitimacy and stability of the systems and institutions regulated by such a measure might be suspect or indeterminate~\citep{brighouse_justifying_2010}. Inconsistent and/or inscrutable measures may themselves become sources of social injustice when they stymie public oversight and accountability~\citep{brighouse_two_2010}. Here, resourcists and capability theorists agree: the right measure of justice must be publicly legible and verifiable. They disagree, however, on whether both the publicity and sensitivity criteria can be satisfied at the same time. Resourcists argue that, for a measure to satisfy the publicity criterion, it cannot also be sensitive to people's heterogeneities and societal context. Only if a measure is insensitive will it provide a \emph{public standard} of measurement: one that is consistently applicable and legible across diverse groups of people and heterogeneous social and political conditions~\citep{brighouse_equal_2010}.\footnote{ For example, in the physical sciences, the International System of Units, or metric system, is one such public standard of measurement. It allows scientists to understand, share, and compare their findings in a consistent fashion across different contexts, helping to ensure that those findings are publicly legible and, when necessary, verifiable. } Sensitive measures will fail to provide a public standard, resourcists argue, for at least two reasons. First, sensitive measures likely require large quantities of information that may be difficult or undesirable to acquire in practice (perhaps due to parsimony or concerns about individuals' privacy). This makes them difficult to verify in a public fashion~\citep{robeyns_capability_2016}. Second, even if these information demands could be met, sensitive measures (by definition) vary according to people's heterogeneities and societal context. This potentially renders comparisons between different people in different contexts indeterminate (or murky)~\citep{brighouse_critique_2010}. For comparison, consider again the simple resource measure of income and wealth. Although this monetary measure may be insensitive to people's heterogeneities and societal contexts, it is at least \emph{consistently} measurable across people and contexts. That is, while income and wealth may not encompass the \emph{entirety} of a person's well-being, they can at least provide a consistently measurable indication of well-being across heterogeneous social and political conditions. Although capability theorists and resourcists agree that a measure must be publicly legible and verifiable, they disagree on whether a measure can satisfy both criteria at the same time. Capability theorists argue that a measure can satisfy both, as we will see later on, whereas resourcists argue otherwise. \subsection{Sensitivity vs. Publicity} \label{section:2.3} In the resources-versus-capabilities debate, there exists a tension between a measure's sensitivity and its publicity. On the one hand, measures that are insensitive to people's heterogeneous conditions may overlook, encode biases against, or perpetuate social injustice against people whose well-being matters in evaluations of justice and fairness.
On the other hand, measures that are sensitive to people's heterogeneous conditions may obscure injustices at the level of systems and institutions, if those measures are publicly illegible, inconsistently applicable, or require large quantities of information that are prohibitively costly or undesirable to acquire (perhaps due to overriding concerns about individuals' privacy). So far, I have said a lot about the philosophical criteria for a measure of justice, and only a little about ML. However, I claim that the tension between publicity and sensitivity---between resources and capabilities---from the prior discussion carries over to the design and development of ML systems. The prior discussion begins to suggest how it is that resource measures and measures from ML are alike: they are both publicly legible and verifiable, but may be insensitive to people's heterogeneous social and political conditions. In the next section, I draw this resemblance much more explicitly. While the resemblance may seem readily apparent, simply from observing which distributive theories \ac{fair ML} has picked out for operationalization, I will argue that it goes deeper than theory selection, or a superficial similarity in ``resource'' terminology. At bottom, resource measures and measures from ML share similar \emph{characteristics} that contribute to their satisfaction (or not) of the publicity and sensitivity criteria. \section{The Resourcist Approach to Measuring Justice} \label{section:3} In this section, I first define what resources are, as they are understood in both the distributive justice and \ac{fair ML} literatures, and provide some examples of each. Then, to establish the resemblance between resource measures and measures from ML, I present two key characteristics shared by the measures appearing in both literatures. Based on these shared characteristics, I argue that measures from ML satisfy the publicity criterion, but fail to satisfy the sensitivity criterion. In upcoming sections, I show how this failure can occur in practice, by discussing two real-world case studies in which the measures selected for mitigating algorithmic unfairness end up entrenching and reproducing social injustice---just as the capability theorists argue resource measures do. \subsection{What Are Resources?} Recall that the right measure of justice is intended to encompass what it is that contributes to people's well-being within their society. The resourcist approach aims to do this by looking primarily at the resources people have in their possession. A person's \textbf{resources} are defined as the things that they can possess or use, in the broadest sense~\citep{long_egalitarianism_2019}. In theories of distributive justice, resources can be \emph{material} (such as bundles of various goods, income, and wealth) as well as \emph{immaterial} (such as services, opportunities, liberties, and basic political rights)~\citep{cohen_currency_2011}. As I previously noted, Rawls's primary goods measure conceives of resources in this broad and inclusive way, to better encompass what it is that contributes to people's well-being. Acquiring more resources usually improves people's well-being. For example, people can usually use their income to buy a house to attain shelter, food to attain sustenance, health insurance to attain care, a train ticket to attain mobility, a computer to attain internet access, and so on---all of which tend to yield improvements to a person's quality of life.
Because people can usually use resources in this way, resourcists argue that well-being is more-or-less encompassed by the resources people have in their possession. Whoever has more resources at their disposal is generally better-off, disregarding other personal heterogeneities. In \ac{fair ML}, the resources appearing in ``resource allocation'' problems have lacked a consistent and agreed-upon definition. In many cases from the literature, the resources under consideration are material: loans (to be fairly allocated across loan applicants)~\citep{jabbari_fairness_2017, hu_welfare_2018}, power generators (to be fairly allocated across homes)~\citep{donahue_fairness_2020}, and so on. In other cases, the resources are immaterial: university admissions (to be fairly allocated across students)~\citep{dwork_fairness_2011}, salary opportunities (to be equalized across employees)~\citep{heidari_fairness_2018}, and so on. In the vast majority of cases, however, what is looked at as a measure of fairness is some computational artifact of the ML model itself: the model's predictive accuracy, or false positive and negative rates, for different groups of people~\citep{narayanan_21_2018}. These cases are peculiar. What is the ontological status of these computational artifacts? In one sense, they are simply numerical quantities: abstract representations of statistical concepts like accuracy, error, utility, likelihood, and risk. In another sense, the numerical values obtained by these artifacts are constrained by algorithmic optimization procedures, which are constrained both mathematically and by real-world (i.e., finite) hardware resources, such as those of a computer's processing units. Does this suggest that these computational artifacts are \emph{themselves} ``resourcist'' in their conception? That is, are they things that people can possess or use, in the broadest sense? This is not entirely clear. But, what is clear is that, although they are ``merely'' numerical quantities, these computational artifacts are often referred to in allocative terms. That is, artifacts like predictive accuracy are referred to as being ``allocated'' across people qua the group of which they are a member.\footnote{ I thank Reuben Binns for suggesting this language. } For example, researchers describe ``fair allocation of predictions'' across groups of people~\citep{zliobaite_survey_2015} and refer to ``allocational harms'' that are understood as disparities in a model's accuracy for different groups of people~\citep{de-arteaga_bias_2019, davidson_racial_2019}. In another example, a model's predictive accuracy is \emph{explicitly} characterized as a ``resource'' to be fairly allocated via a distributive rule~\citep{hashimoto_fairness_2018}. These examples suggest that the ``resources'' often appearing in \ac{fair ML} are conceptualized---not only as material or immaterial---but also as uniquely \emph{computational} in some sense. Here, I do not attempt to give a complete ontology of these ``computational resources''. It is enough to note two things. First, computational artifacts like predictive accuracy share important characteristics with resource measures from distributive justice, as I will discuss in the next section. Second, when these artifacts are ``allocated'' across different groups of people, they have the power to improve or diminish people's well-being within their society~\citep{angwin_machine_2016, buolamwini_gender_2018, dastin_amazon_2018}.
It is in this latter sense that computational artifacts may be ``possessed'' or ``used'' in different ways, by different people, depending on their heterogeneous social and political conditions. In all of the above cases, both from distributive justice and from \ac{fair ML}, the resources allocated are generally thought to improve the well-being of people who receive or possess them (be it through direct acquisition in the real world, or through interaction with an ML system). That is, whoever has more resources (be they material, immaterial, or computational) is generally assumed to be better-off. \subsection{Two Characteristics of Resource Measures} Previously, I introduced two criteria for a measure of justice (sensitivity and publicity) and discussed at a high level how it is that resource measures do or do not satisfy those criteria. However, I did not discuss at a low level why that is exactly. That is, I left out the specific \emph{characteristics} of resource measures that contribute to their satisfaction of the publicity criterion, but not the sensitivity criterion. In this section, I introduce two key characteristics of a measure of justice, and argue that resource measures and measures from ML align on both.\footnote{ I do not claim that these characteristics exhaust all possible similarities and differences between the measures under consideration. Indeed, there is a third key characteristic: whether the measure is \emph{subjective} or \emph{objective}. That is, whether what is being measured exists ``in the world,'' outside of people's subjective mental states, such as their mental states of happiness or preference satisfaction~\citep{brighouse_justifying_2010}. Note that ``objective'' here refers to existing ``mind-independently'' and not to scientific objectivity, or observing impartially and without bias. However, because the objective-subjective distinction is introduced to distinguish objective measures (such as resources and capabilities) from subjective measures (such as welfare and utility), I do not discuss it at length, since all of the measures presently under consideration (resources, capabilities, and measures from ML) are ``objective'' in this sense. } This alignment forms the basis of my resemblance claim between the two measures. The two key characteristics are: (1) the measure can be expressed as a single-valued quantity, rather than multi-valued quantities or qualities, and (2) what is being measured (namely, a resource) is a ``means to an end,'' rather than an ``end in itself.'' To better understand these characteristics, I discuss them as they apply to both resource measures and measures from ML. For easy reference, \autoref{tab:measuretable} shows how these measures align on the two key characteristics, and suggests how it is that these characteristics do or do not contribute to satisfaction of the sensitivity and publicity criteria. Additionally, this table foreshadows how capability measures differ in their characteristics and satisfaction of the criteria. \subsubsection{Resource Measures Can Be Single-Valued} Resources are things that people can possess or use, in the broadest sense. People can possess or use many things---income and wealth, material goods, opportunities, rights and liberties---any one or all of which may be looked at as a measure of justice. But, not all resources are the same.
Some have characteristics that make them more or less amenable to certain types of measurement (quantitative or otherwise) and to satisfying the publicity and sensitivity criteria for a measure of justice. As I previously noted, Rawls specifies his primary goods measure broadly and inclusively (in part) because different resources can be called upon to play different roles when detecting and remediating social injustice. Of particular importance, some resource measures can be---but do not \emph{need} to be---expressed as single-valued quantities. This, it turns out, is a very useful characteristic. For example, monetary resources like income and wealth are particularly useful because they can be expressed in singular, quantitative terms (i.e., as the numerical quantity of income or wealth a person has in their possession). When included as part of a measure of justice, single-valued quantities are useful because they permit unambiguous comparisons, orderings, and interval rankings of the people whose well-being they aim to encompass. To know, for example, who is better-off when using a monetary measure like income, we simply need to look up and compare the numerical quantities of income that people have in their possession. Thus, according to such a single-valued quantitative measure, it is always clear to us where people stand in relation to one another, even across heterogeneous social and political conditions. This is a win for the measure's publicity, but simultaneously, it is a loss for the measure's sensitivity, for reasons that we have already seen. By contrast, measures that are multi-valued (i.e., incorporating more than a single variable) or heterogeneous (i.e., mixing qualitative and quantitative data) are much less likely to permit unambiguous comparisons, orderings, and interval rankings of the sort achievable via single-valued quantities. This is because the information admissible to these measures includes \emph{more than} a single-valued quantity. It may, for example, include qualitative textual data, or it may include multiple quantities such that no single quantity can be called upon to establish a strict interval ranking. These sorts of measures are more common to mixed-methods research, ethnography, narrative inquiry, grounded theory, or other social science methodologies. Here, it is important to state explicitly: resource measures \emph{can be} single-valued, but they are not \emph{necessarily} single-valued.\footnote{ I thank Gabriel Karger for encouraging me to clarify this point. } We could, for example, formulate a quantitative multi-valued resource measure, so that it does not strongly prioritize any constituent single-valued quantity, such as income and wealth. Indeed, we will see later on how the capability approach (in some of its variants) proposes to do exactly that. But, while we can formulate such a multi-valued resource measure, we will at the same time be leaving behind one of the primary reasons resources are useful in evaluations of justice and fairness: when expressed as single-valued quantities, resource measures permit unambiguous comparisons and rankings. Being able to make these comparisons and rankings is crucial to the success of Rawls's distributive principles.\footnote{ It is worth clarifying in detail how, exactly, capability theorists argue that Rawls is committed to single-valued quantitative measures. 
Rawls's measure of primary goods is conceptualized broadly to include both material resources (income and wealth) and immaterial resources (opportunities, rights, liberties, and others). This suggests that the primary goods measure is \emph{not} single-valued, but rather multi-valued. Certainly, the primary goods include resources other than income and wealth alone. And yet, the capability theorists still critique primary goods for providing a singular quantitative standard. For example, Martha~\cite{nussbaum_capabilities_2009} says that Rawls is committed to ``measuring relative social positions in a single and linear way, with reference to income and wealth alone.'' Are capability theorists like Nussbaum flatly misunderstanding Rawls's view, interpreting it uncharitably, or is there something more going on here? The last question (and some might argue the middle one too) can be answered affirmatively. The reason capability theorists critique the primary goods measure as ``single-valued'' has to do with the distributive rule that Rawls pairs it with, what is known as the Difference Principle. As it turns out, \ac{fair ML} researchers have picked out Rawls's Difference Principle for operationalization in ML systems, so I will postpone a full introduction until later on~\sectionlabel{section:4.2}. It is enough to note here that income and wealth (as constituents of the primary goods measure) are called upon to play a central role in the successful operation of Rawls's Difference Principle. According to~\cite{nussbaum_capabilities_2009}, Rawls is committed to measuring relative social positions ``with reference to income and wealth alone'' because he ``attaches considerable importance to the ability to rank in a definite and unilinear way who is well-off and not well-off... if the measures were [multi-valued] and heterogeneous, then it would be unclear who is least well-off, and the whole argument for the Difference Principle would be thrown into jeopardy.'' Some defenders of Rawls's view, such as Thomas~\cite{brighouse_critique_2010}, agree with Nussbaum on this point: the Difference Principle requires ``not merely a partial ordinal ranking, but a complete interval ranking.'' And such a ranking would not be possible if the measure were multi-valued and heterogeneous. } Likewise, it is just as crucial (if not more so) to the successful operation of algorithms appearing in \ac{fair ML} and machine learning systems. If the distributive and allocational rules that regulate these systems were instead combined with multi-valued and/or heterogeneous measures---resulting in potentially ambiguous or murky comparisons and rankings---then their entire operation may be thrown into jeopardy (see footnote 13). \bgroup \def\arraystretch{1.5} \begin{table}[] \centering \begin{tabular}{c|p{28mm}|p{28mm}?p{28mm}|p{28mm}|} \hhline{~----} & \multicolumn{2}{c?}{\textbf{Characteristics}} & \multicolumn{2}{c|}{\textbf{Criteria}} \\ \hhline{~----} & can be a single-valued quantity? & measure a ``means to an end''? & are sensitive to heterogeneities? & are publicly legible \& verifiable?
\\ \hhline{-----} \multicolumn{1}{|c|}{Resource measures} & {\cellcolor{ForestGreen!15}}Yes & {\cellcolor{ForestGreen!15}}Yes & {\cellcolor{Mahogany!15}}No & {\cellcolor{ForestGreen!15}}Yes \\ \hhline{-----} \multicolumn{1}{|c|}{Measures from ML} & {\cellcolor{ForestGreen!15}}Yes & {\cellcolor{ForestGreen!15}}Yes & {\cellcolor{Mahogany!15}}No & {\cellcolor{ForestGreen!15}}Yes \\ \hhline{-----} \multicolumn{1}{|c|}{Capability measures} & {\cellcolor{Mahogany!15}}No & {\cellcolor{Mahogany!15}}No & {\cellcolor{ForestGreen!15}}Yes & {\cellcolor{gray!15}}Yes? \\ \hhline{-----} \end{tabular} \caption{\label{tab:measuretable}The measures, their characteristics, and whether or not they satisfy the publicity and sensitivity criteria. Resource measures and measures from ML are aligned on their characteristics and their satisfaction of the criteria. (Although not discussed at length in this paper, responses to these questions regarding the welfare measures would read left-to-right as Yes, No, Yes, No.) } \end{table} \egroup \subsubsection{Resources Are Means, Not Ends} In the resources-versus-capabilities debate, an important distinction is posited between things that are a ``means to an end'' and things that are ``ends in themselves.'' A means to an end is something that people can use to achieve some other thing: another means or, eventually, something that is an end in itself. In short, ends are things that are \emph{ultimately} valuable to people, and means are merely a way of getting to them. \noindent The means-end distinction is posited to argue that resources are a means to well-being, which is an end in itself~\citep{long_egalitarianism_2019}. Recall that two people may possess equal resources (such as income and wealth, or primary goods) and yet still be unequal in their well-being, if one lives under heterogeneous social and political conditions that diminish their well-being. Thus, it is argued, what contributes to people's well-being must consist of something \emph{other than} (or in addition to) resources alone~\citep{rawls_equality_1979}. Resources may be a means to achieving well-being, but resources are not ultimately valuable to people, as ends in themselves. What people \emph{ultimately} value is their well-being, not the resources they use to achieve it. Similarly, the things that are ultimately valuable to people with respect to ML systems may not be what \ac{fair ML} researchers have so far attempted to measure~\citep{thomas_problem_2019}. For example, \ac{fair ML} researchers have so far been concerned with the accuracy of an ML model, or similar computational artifacts. A model that is less accurate for some people, and more accurate for others, may seem flatly unfair to the former. Accordingly, researchers may adjust their model's optimization procedure so that it achieves accuracies more evenly balanced across these different groups of people. When accuracy is roughly equalized across people, \ac{fair ML} researchers might think that their job is done; fairness accomplished. But, as with the resources described above, accuracy might not be what people \emph{ultimately} value when interacting with ML systems. This is not to suggest that people don't value accuracy \emph{at all}, but that what people ultimately value includes \emph{more than} accuracy alone. Accuracy may be a means or a ``proxy'' for what people value with respect to ML systems, but it may not be an end in itself.
Thus, an inordinate focus on accuracy as the sole measure of the fairness or justness of these systems may elide other important ways in which people's well-being is improved or diminished by them, as I will discuss in the upcoming case studies. \subsection{Resource Measures: Publicity, Not Sensitivity} Measures commonly used in \ac{fair ML} resemble resource measures from distributive justice to the extent that they align on at least two key characteristics: (1) these measures can be expressed as single-valued quantities that permit unambiguous comparisons and rankings, and (2) what is being measured (namely, a resource of some sort) is a means to an end, rather than an end in itself. Taken together, these two characteristics contribute to measures that satisfy the publicity criterion, but fail to satisfy the sensitivity criterion. Consider publicity. Because resource measures and measures from ML can be expressed as single-valued quantities, they provide a \emph{public standard} of measurement that is consistent and practicable across heterogeneous social and political conditions. This is because it is easy to measure people's relative social positions (by simply looking up their income, or some other single-valued quantity). Single-valued quantities also permit unambiguous comparisons and interval rankings (by simply comparing these singular values). In this way, these measures are publicly legible and verifiable. In ML, for example, the accuracy of different models for different groups of people can be compared consistently on benchmark datasets, allowing researchers to share their findings in a publicly legible (and if necessary, verifiable) fashion. Consider sensitivity. Because resource measures and those from ML focus on means (i.e., material, immaterial, or computational resources) rather than ends in themselves (i.e., things that people ultimately value), without considering how the two can come apart, these measures can be insensitive to heterogeneous social and political conditions that are relevant to evaluations of algorithmic fairness or distributive justice. This is because not all algorithmic unfairness (or social injustice) is readily expressed in terms of quantifiable resource deficits: some unfairness or injustice is structural and psychosocial, arising from social and political relations between people. Thus, the capability theorists argue that resource measures achieve their publicity at the expense of sensitivity. According to Elizabeth~\cite{brighouse_justifying_2010}, resource measures pass over ``structural and psychosocial injustices that interfere with individuals' functioning as equals, although [these injustices] are neither constituted nor remediated by distributions of resources.'' In the next section, I discuss two real-world cases wherein structural and psychosocial injustices go undetected and unremediated by measures from ML that exhibit these characteristics. \section{Case Studies from Machine Learning} \label{section:4} To further substantiate the resemblance between resource measures and measures from ML, and to illustrate the differences between the resourcist and capability approaches when operationalized in ML systems, I discuss two case studies. Each case concerns a distributive principle that specifies a seemingly fair distributive rule, but---by also specifying a resource measure---the principle fails to adequately remediate structural and psychosocial injustice.
In the first case, I discuss how computational resource measures used in algorithmic hiring can encode and reproduce historic and ongoing employment discrimination, specifically against disabled people. In the second case, I discuss how the measures used in operationalized analogues of Rawls's theory can subject racial minority groups to stereotype threat, discourse inequality, or other psychosocial harms. In both cases, I argue that the specified measures exhibit the two key characteristics from the previous section, such that they satisfy the publicity criterion but fail to satisfy the sensitivity criterion. I contextualize and motivate each case with an introductory discussion of its political and historical significance. \subsection{Algorithmic Antidiscrimination} \label{section:4.1} Discrimination has long pervaded employment. From the Civil Rights Act of 1964, to the Americans with Disabilities Act of 1990 (ADA), the long history of antidiscrimination law makes apparent that employment opportunities have rarely, if ever, been fairly distributed in the United States~\citep{united_states_americans_1990}. Even long after the passage of these laws---which grant employment protections to disability, gender, and racial minority groups, among others---employers have continued to discriminate, often motivated by cognitive bias, if not explicit prejudice~\citep{ameri_disability_2015, bertrand_are_2003, cole_recruiters_2007}. Against this backdrop of historic and ongoing discrimination, proponents of algorithmic decision-making argue that introducing ML into hiring decisions could reduce bias against these legally protected groups by enforcing fairness constraints on the pool of job candidates under consideration.\footnote{ Companies like LinkedIn, Indeed, and Pymetrics use hiring algorithms to place and optimize job advertisements appealing to target demographics, notify jobseekers about positions of potential interest, entice out-of-work candidates to rejoin the job market, administer pre-employment application tests, and recommend promising candidates to recruiters~\citep{bogen_help_2018,carey_how_2016,silverman_algorithm_2015}. } Algorithmic decision-making, these proponents argue, affords an opportunity to remediate employment discrimination by removing human prejudice and cognitive bias from the decision-making process. At the same time, critics argue that introducing algorithms into hiring decisions could silently encode, entrench, and reproduce human biases against legally protected groups, by recommending only those candidates whose characteristics match high-performing employees belonging to the already dominant groups (typically comprised of young, white, non-disabled, cis-gender men).\footnote{ For example, in 2018, Amazon abandoned its algorithmic review and recommendation of job applicants when its ML model (trained on 10 years of resumes submitted to the company) was found to be unfairly biased against women~\citep{dastin_amazon_2018}. As early as 2016, the Equal Employment Opportunity Commission (the United States agency responsible for enforcing employment antidiscrimination laws) raised concerns about the potential for hiring algorithms to reproduce biases against women and disabled people, if they happened to have irregular work habits or a pattern of absences, due to child-care obligations or healthcare needs~\citep{elejalde-ruiz_alexia_end_2018, equal_employment_opportunity_commission_use_2016}.
} While all legally protected groups have faced longstanding and on-going discrimination in the workplace, studies show that disabled people have been especially disadvantaged.\footnote{ In 2018, nearly 30 years after the passage of the ADA in the United States, the employment-population ratio (i.e., the proportion of the population that is employed) was 19\% among adults with a disability. By contrast, the employment-population ratio for adults without a disability was 66\%~\citep{labor_statistics_persons_2018}. The disparity between the disabled and non-disabled populations was among the largest across demographic groups, indicating substantial barriers to employment for disabled people. For comparison, in 2018, the employment-population ratio was 56\% among adult women versus 69\% among adult men, 63\% among black men versus 67\% among white men, and 58\% among black women versus 56\% among white women. } In addition to the barrier of identity-based prejudice, people with disabilities face a unique barrier to employment: the barrier of a work environment not built from the outset to accommodate the many ways of moving and working in society. Today, after decades of disability rights activism and advocacy for equal access to work environments, disabled people face another, analogous barrier: an information technology environment not built from the outset to accommodate the many ways of being and working online. As hiring decisions become increasingly mediated via online ML systems, the long history of disability employment discrimination casts doubt on claims that this new information technology environment will remediate---rather than entrench and reproduce---employment discrimination against people with disabilities. \subsubsection{The Case of Disability Hiring} \label{section:4.1.1} Consider jobseekers who browse for employment opportunities on online job platforms, such as LinkedIn, Indeed, or Pymetrics. Hiring algorithms deployed on these job platforms search through resumes for promising candidates (i.e., those fitting qualifications provided by company recruiters) and send them \emph{pre-employment application tests} (i.e., timed tasks intended to test a candidate's relevant skills), administered through the job platform's website~\citep{bogen_help_2018, carey_how_2016}. Suppose, charitably, that these algorithms are designed to take into consideration any available demographic information (such as whether a candidate belongs to a dominant or legally protected group) and follow a rule for allocating application tests based on a bias-mitigating conception of algorithmic fairness. In other words, suppose that the hiring algorithm is optimized to distribute a ``fair'' number of application tests (employment opportunities) across demographic groups, which it does by allocating fewer tests to dominant group candidates and more to legally protected group candidates, subject to the demands of fairness. Because the distributive rule accounts for disparities between dominant and legally protected candidate groups (by targeting the latter with more opportunities to complete application tests), such a reallocation of pre-employment application tests may seem sufficient for achieving algorithmic fairness, in this context. But what about the fairness of the measure itself: the application tests? Not implausibly, these tests could encode biases against legally protected groups of people.
For example, blind jobseekers who use screen readers may be relatively disadvantaged if the tests themselves are not built for screen reader compatibility, as they typically are not.\footnote{ Screen readers are a type of assistive technology that convey visual information through non-visual means (such as text-to-speech or braille) in order to make internet browsing accessible to blind people. In 2019, Domino's Pizza, Inc. brought a case to the United States Supreme Court, arguing that its company website should not be required to comply with Title III of the ADA (i.e., arguing that it does not need to be made accessible to blind customers who use screen readers)~\citep{barnes_dominos_2019}. Title III of the ADA requires that ``[n]o individual shall be discriminated against on the basis of disability in the full and equal enjoyment of the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation''~\citep{united_states_americans_1990}. The Supreme Court declined to hear the case. } That is, two equally qualified candidates may receive the same application test (and thus have an apparently equal opportunity for employment), but one may be unfairly disadvantaged, if the test is in some way inaccessible to jobseekers who use screen readers.\footnote{ For example, if the application test is not designed in accordance with the World Wide Web Consortium's Web Content Accessibility Guidelines (WCAG), people who use screen readers may not be able to access certain content. If the application test uses a candidate's time-to-completion in assessments of their performance, screen reader users may be at a competitive disadvantage, since website navigation may take longer with the use of a screen reader than without~\citep{trewin_ai_2018}. However, when reading text, screen reader users can be significantly faster than people who read visually. } Allocating more pre-employment application tests toward screen reader users will thus inadequately remediate employment discrimination, if the design of those tests is based on a biased conception of a job candidate's functioning. Merely measuring the quantity of application tests a candidate receives---without providing opportunities for recourse via a human hiring agent, for requesting accessibility accommodations, or for contesting automated decisions---will therefore be insensitive to people's heterogeneous ways of being and working online. Furthermore, seeking recourse, requesting accommodation, or initiating contestation within an algorithmic hiring system is likely to be difficult when that system's behavior is largely opaque, both to jobseekers (who may not notice when and how they are unfairly disadvantaged), and to employers (who may have little incentive to scrutinize the system's recommendations). While human hiring agents are also fallible, their behavior is at least scrutable, and their decisions can therefore be contested and influenced through dialogue and interaction. Observe here that measuring the quantity of pre-employment application tests a candidate receives exhibits two key characteristics of a resource measure: (1) it is a single-valued quantity that is easy to measure, compare, and rank (by simply looking up the number of tests allocated to jobseekers belonging to different demographic groups), and (2) it is a means or ``proxy'' for what actually contributes to a jobseeker's well-being (namely, real opportunities to achieve employment, free from structural or technological barriers).
Thus, such a measure satisfies the publicity criterion, but fails to satisfy the sensitivity criterion. Although consistently measurable across different jobseekers, merely receiving the opportunity to complete a pre-employment application test is \emph{insensitive} to whether or not a candidate is capable of completing it, given their particular way of being and working online.\footnote{ Furthermore, quantitatively measuring the allocation of any single computational resource or technological artifact (such as the quantity of tests allocated or otherwise) can in principle encode biases in the same way. Because these artifacts are designed with a particular conception of human functioning in mind, they may not be sensitive to the context-specific ways in which people are capable or incapable of using them. Thus, receiving more (or fewer) artifacts does not necessarily improve (or diminish) a person's well-being. Adherents to the resourcist approach might counter-argue that computational resources or technological artifacts of this sort are not the right resources to look at, as a measure of fairness or justice for ML systems. That is, there could be other resources, say, those existing \emph{outside} of ML systems, which are more appropriate to evaluations of justice or fairness, in this context. This may be correct. However, if correct, the standard capability critique of non-computational resources would then apply. } \subsection{Operationalizing Rawls's Difference Principle} \label{section:4.2} Throughout, I have noted that \ac{fair ML} researchers have sought to operationalize theories of distributive justice in ML systems, in particular those belonging to John Rawls's resourcist approach. Here, I discuss one such case in detail. This case is significant to the inquiry for three reasons: (1) it exhibits a straightforward attempt by \ac{fair ML} researchers to operationalize one of Rawls's distributive principles, namely the Difference Principle, (2) in the operationalization, the ML model's predictive accuracy is selected as the measure and is \emph{explicitly} characterized as the ``resource'' to be allocated across people via the rule, and (3) the operationalization is carried out to address the problem of \emph{representation disparity} in ML models. The reader can likely grasp why the first two reasons are significant, but perhaps not the third, which requires further explanation. The problem of representation disparity occurs when an ML model achieves high overall accuracy for a population, but low accuracy for a subgroup within that population---often a subgroup of racial, gender, cultural, or linguistic minority status, or intersections of these identities. Because ML models are often optimized to achieve high overall accuracy at the population level, the optimization procedure trains a model that further-marginalizes an already-marginalized group, by down-weighting the relative importance of what little data that group does contribute to the training objective. Thus, the model's \emph{outputs} for the subgroup are bad (i.e., achieve low accuracy, or are of low quality) \emph{because} there is a disparity in the model's \emph{representation} of the subgroup relative to the overall population.
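To make this mechanism concrete, it can be stated in generic notation (a minimal sketch; the symbols are illustrative and not drawn from any particular paper). Suppose the population is partitioned into groups $g$, where group $g$ contributes a proportion $p_g$ of the training data and incurs expected loss $R_g(\theta)$ under model parameters $\theta$. Standard training minimizes the population-level risk
\[
R(\theta) \;=\; \sum_{g} p_g \, R_g(\theta),
\]
so a group with small $p_g$ contributes little to the objective: the optimizer can drive $R(\theta)$ down while leaving $R_g(\theta)$ large for that group. Representation disparity is, in this sense, a property of the training objective itself.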
It is significant that Rawls's Difference Principle is operationalized to address this problem of representation disparity because doing so exemplifies a distinction between two types of harms, often posited in \ac{fair ML} literature.\footnote{ I thank Harini Suresh for encouraging me to discuss this distinction. } Recall that I previously noted (briefly) the distinction between \emph{allocative harms} and \emph{representational harms}~\sectionlabel{section:2.2.1}. I now give a fuller introduction to the distinction between allocation and representation. In \ac{fair ML} literature, allocative harms refer to algorithmic unfairness which \emph{can be} mitigated via resource reallocation, whereas representational harms refer to unfairness which \emph{cannot be} mitigated via resource reallocation~\citep{barocas_fairness_2019}. Allocative harms are remediated by redistributing resources \emph{across} people, whereas representational harms arise from social and political relations \emph{between} people. Representational harms include stereotyping, misrecognition (e.g., disrespect, indignity, invalidating personhood), denigration (e.g., the use of epithets or other insulting language), and under-representation (e.g., datasets wherein a dominant group is over-represented, typically white men)~\citep{crawford_trouble_2017}. These harms arise from people's heterogeneous social and political conditions, and so they are difficult to formalize quantitatively within ML systems.\footnote{ A commonly given example of representational harm is that of a search-engine which outputs biased results for particular input queries. For example, when searching for ``CEO'' the engine may output only images of white men~\citep{crawford_trouble_2017}, or when searching for ``black girls'' the engine may output content that is demeaning and denigrating to this group~\citep{noble_missed_2012}. } These harms are also (in essence) the same as those I described when introducing the sensitivity criterion. In other words, a measure of justice that satisfies the sensitivity criterion would (in theory) also be sensitive to representational harms, which are not remediable by reallocating resources alone. As I have described it above, the problem of representation disparity seems as though it fits neatly within representational harms (specifically, a harm of under-representation), rather than within allocative harms. But, as we will see in the following case study, the operationalization of Rawls's Difference Principle takes an allocative approach (i.e., a resourcist approach) to addressing this representational harm, and in doing so it fails to remediate psychosocial injustices that arise from people's heterogeneous social and political conditions. \subsubsection{The Case of Predicting Sociolects} \label{section:4.2.1} Consider ``text auto-complete'' models that predictively suggest the next most-likely word to a user as they type, in real time. These models are commonplace on most smartphone text messaging apps, such as Apple's iMessage. These models are also known to exhibit linguistic representation disparities for minority sociolects (i.e., socially-restricted dialects).
That is, due to disparities in sociolect representation in linguistic training data, text auto-complete models ``allocate'' relatively less predictive text accuracy to minority sociolects, such as so-called African-American English (AAE), while at the same time allocating relatively more predictive text accuracy to majority languages, such as so-called Standard-American English (SAE), thereby reproducing existing discourse inequalities within the ML system~\citep{mayfield_equity_2019}. In a recent \ac{fair ML} paper presented at the 35th International Conference on Machine Learning (ICML), researchers operationalize one of Rawls's distributive principles to mitigate language representation disparities in a text auto-complete model~\citep{hashimoto_fairness_2018}. In their operationalization, the researchers specify ``predictive text accuracy'' as the ``resource'' for allocation via one of Rawls's distributive rules, what is known as the Difference Principle. The Difference Principle requires that any inequalities in a distribution of resources be to the greatest advantage of (i.e., maximally beneficial to) the people who are least well-off, measured in terms of their resources~\citep{rawls_theory_1971,wenar_john_2017}. To achieve this, the rule maximizes, over all possible distributions of resources, the minimum allocation that any person receives (stated formally in the sketch following this paragraph). Because it ``maximizes the minimum'' in this way, the Difference Principle is also often referred to as the \emph{maximin rule}. Accordingly, when operationalized in a text auto-complete model, the model is optimized to allocate a ``fairer'' quantity of predictive text accuracy across languages and sociolects, which is achieved by allocating less accuracy to the majority language (SAE) and more accuracy to the minority sociolect (AAE), subject to the constraints of the Difference Principle. Because the distributive rule accounts for the accuracy disparity between the language (SAE) and the sociolect (AAE), by requiring that any inequalities in the distribution of predictive accuracy be maximally beneficial to the group that is least well-off, such a reallocation of ``resources'' may seem sufficient for achieving algorithmic fairness. But what about the fairness of the measure itself: predictive text accuracy? Not implausibly, predictive text accuracy could encode biases against particular linguistic groups, depending on the societal contexts in which the text auto-complete model is deployed. For example, it could be that AAE-speakers do indeed prefer increased predictive text accuracy, since text messaging apps often flag AAE words as ``misspelled,'' thereby further marginalizing the AAE sociolect in the ``linguistic marketplace'' where dominant languages like SAE enjoy greater linguistic capital and symbolic power~\citep{bourdieu_language_1991}. Alternatively, it could be that AAE-speakers \emph{do not} prefer increased predictive text accuracy for AAE words, since these sociolects are often considered to be ``resistance vernaculars'' that intentionally subvert dominant linguistic norms as a means of celebrating cultural difference and fostering in-group solidarity~\citep{park_appropriating_2008}.
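The maximin rule can be stated formally (a minimal sketch in generic notation, not the exact objective of the cited paper): writing $\mathcal{D}$ for the set of feasible distributions and $r_i(d)$ for person $i$'s allocation of the resource under distribution $d$, the rule selects
\[
d^{*} \;\in\; \arg\max_{d \in \mathcal{D}} \; \min_{i} \; r_i(d).
\]
In the text auto-complete setting, $d$ ranges over trained models and $r_i(d)$ is (roughly) the predictive accuracy allocated to linguistic group $i$, so the optimization raises the accuracy of the worst-off group.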
In the latter case (that of resistance vernaculars), highly accurate text auto-complete predictions for AAE words (appearing on, say, a user's iMessage app) may not improve an AAE-speaker's user experience, if those predictions are perceived as culturally appropriating, or if they reproduce psychosocial injustices such as stereotype threat~\citep{steele_stereotype_1995}.\footnote{ Unfortunately, what AAE-speakers prefer in their text messaging apps remains an open question. Although the researchers took a step toward answering this question by conducting a ``real-world, human evaluation'' of the text auto-complete models, participants in the evaluation consisted entirely of anonymous workers from Amazon's Mechanical Turk, an online labor platform. Demographic information was not collected from these participants, but they were nevertheless tasked with completing the evaluation \emph{as if} they were actual speakers of either AAE or SAE~\citep{hashimoto_fairness_2018}. Because this evaluation did not engage with self-identified AAE-speakers, no conclusions can be drawn about whether they prefer more or less predictive text accuracy in their text messaging apps. } Observe here that predictive text accuracy exhibits the two key characteristics of a resource measure: (1) it is a single-valued quantity that is easy to measure, compare, and rank (by simply looking up the accuracy allocated to each linguistic group), and (2) it is merely a means to the end goal of providing an enjoyable text-messaging user experience. Thus, this ``resource'' measure satisfies the publicity criterion, but fails to satisfy the sensitivity criterion. Although consistently measurable across languages and sociolects, merely reallocating more predictive text accuracy is insensitive to whether AAE-speakers experience that reallocation as helpful and convenient, or stereotyping and insulting. Absent direct engagement with the people whose well-being will be affected by use of this system (in this case AAE-speakers, see footnote 22), it would be a mistake to assume that reallocating more predictive accuracy will necessarily improve, rather than diminish, well-being.\footnote{ Similar arguments have been advanced against increasing the accuracy of computer vision models used in facial-recognition systems. As with representation disparities in linguistic data, computer vision models may be highly inaccurate for minority populations, due to disparities between minority and majority representation in image training data~\citep{buolamwini_gender_2018, bennett_what_2019}. However, the pernicious applications of these models (namely, the increased surveillance and policing of historically and presently oppressed populations) should cast serious doubt on the claim that increasing model accuracy will improve well-being. Whether increasing predictive accuracy is beneficial or detrimental to a person's well-being depends on who that person is and in what context they interact with, or are affected by, the model. Thus, an evaluation of the unfairness or injustice of a particular model will require sensitivity to people's personal heterogeneities and societal context. } \subsection{``Doing Justice'' to Rawls's Theory} \label{section:4.3} The prior cases serve two purposes. First, they substantiate the claim that measures from ML align with resource measures on the two key characteristics (\autoref{tab:measuretable}).
Second, they illustrate how measures exhibiting these characteristics can satisfy the publicity criterion, but fail to satisfy the sensitivity criterion, thereby reproducing and entrenching structural and psychosocial injustice. This brings to a close the first part of this paper: the inquiry into what makes the resourcist approach amenable to operationalization in ML systems. And it opens the second part: inquiring into whether the capability approach could be operationalized with more success, and if so, how? At this juncture, we should pause to ask: what does it mean for a theory to be operationalized successfully? The answer to this question will depend both on the theory selected for operationalization and on how the operationalization is carried out in practice. Was Rawls's theory ``done justice'' in its operationalization, so to speak? That is, was it operationalized in a way that is true to the original statement of the theory? Following the capability theorists, I have argued that the operationalization of Rawls's theory in the prior cases was unsuccessful. So, I should answer whether that is (A) due to flaws in Rawls's theory, and/or (B) due to how the operationalization was carried out in the particular case.\footnote{ I thank Sally Haslanger for encouraging me to address these questions. } Answering (B), there are at least two ways that the operationalization deviated from Rawls's original theory. First, the \emph{lexical priority} of Rawls's principles requires that the principles of Equal Basic Liberty and Fair Equality of Opportunity are satisfied before moving on to the Difference Principle~\citep{rawls_theory_1971}. (I refer interested readers to a survey by Leif~\cite{wenar_john_2017} for an overview of Rawls's lexically prior principles.) So, if we are true to the original statement of the theory, we will not skip over these principles to pick out the Difference Principle for operationalization by itself. Second, I have previously noted that measures from ML resemble the simple measure of income and wealth, more so than Rawls's primary goods measure, which is broad and inclusive. So again, if we are true to the original statement of the theory, we will also aim to operationalize the primary goods measure and not only the Difference Principle. Answering (A), given that the operationalization deviated from Rawls's theory in at least two ways, we can ask whether an operationalization truer to the original statement of the theory could address the prior cases more successfully. I think a truer operationalization could do better. But, we must also consider the appropriateness of operationalizing Rawls's entire theory of justice in the design of a consumer text messaging app. Would this be ``doing justice'' to Rawls's theory? We can deviate from a theory's statement by failing to operationalize it in its entirety (as above). But, we can also deviate by operationalizing a theory in contexts to which it was not intended to apply (see footnote 3). That is, successful operationalization depends not only on staying true to the theory's statement, but also on applying it where intended, under its intended social conditions. So I won't conclude here that Rawls's theory itself is flawed, only that it was misapplied. These observations may nevertheless count in favor of the capability approach, as we will see later on.
\begin{center} \huge\textreferencemark \end{center} \noindent Before turning to the capability approach, it is worth addressing another discrepancy between \ac{fair ML} and distributive justice. While discussing the operationalization of Rawls's Difference Principle, I introduced the distinction between harms of \emph{allocation} and harms of \emph{representation}, as it is often posited in \ac{fair ML} literature. However, I didn't note that philosophers of distributive justice have long-posited an analogous distinction, between injustices of \emph{redistribution} and injustices of \emph{recognition}, where the terms here are used (in essence) in the same sense as allocation and representation, respectively. In contrast to \ac{fair ML}, the analogous distinction in distributive justice is posited less sharply, or at least with more nuance. This is because, for some claims of social injustice (in particular, those relating to disability) it may be difficult to sharply distinguish whether the claim is one of redistribution, or one of recognition.\footnote{ The sharp distinction between redistribution and recognition becomes uncertain when considering ``reasonable accommodation'' requirements for disabled people under the Americans with Disabilities Act~\citep{united_states_americans_1990}, the European Union Framework Directive~\citep{whittle_framework_2002}, and/or the United Nations Convention on the Rights of Persons with Disabilities~\citep{united_nations_convention_2008}. Reasonable accommodation requires reconstruction of the public built environment (e.g., offices, restaurants, and parks) to better accommodate disabled people by including ramps, elevators, text in multiple formats, sign-language interpreters, flexible work schedules, and so on~\citep{putnam_disability_2019}. In one sense, these accommodations may be characterized as \emph{distributive} in that they require reallocating resources of the aforementioned sort. In another sense, these accommodations partially rectify longstanding \emph{misrecognition} of disabled people as members of society worthy of full participation in public life, with equal access to public spaces. In this latter sense, appealing to an abstract principle of fair or just redistribution may not be required at all. Rather, we may only need to ask what is required for a society to fully recognize disabled people, and reply that it requires (in part) equal access to the public built environment~\citep{putnam_disability_2019}. Thus, recognition requires redistribution, and redistribution aims to secure recognition. } Recognition may require redistribution, and redistribution should aim to secure recognition~\citep{putnam_disability_2019}. The two may be mutually reinforcing, and some philosophers argue that the antithesis between them is a false one. According to Nancy~\cite{fraser_redistribution_2003}, ``one should roundly reject the construction of redistribution and recognition as mutually exclusive alternatives. The goal should be, rather, to develop an integrated approach that can encompass, and harmonize, both dimensions of social justice.'' Could the same be said of the allocation-representation distinction posited in \ac{fair ML}? In the examples that we have seen, a model's \emph{representation} of a group can produce bad outputs or \emph{allocations} for that group.
Here, the distinction between representation and allocation seems to be very narrow, especially when what is being ``allocated'' is something like the model's predictive accuracy. But, representational harms include more than under-representation in the training data that leads to representation disparity within an ML model. They also include harms that arise from people's \emph{interaction with} that model, after it has been designed and deployed in society, as in the Disability Hiring and Predicting Sociolects case studies. These harms arise from heterogeneous social and political conditions that are not straightforwardly formalizable within the ML model itself, but are nevertheless reproduced and entrenched through the model's misallocation of a particular resource. Here, allocative harms (i.e., inequities in the distribution of resources) reproduce and entrench representational harms (i.e., those arising from social and political relations between people), and this may occur \emph{even when} resources are distributed equally, as we have now seen many times. While it is easy to posit allocative and representational harms as two distinct types, it is more difficult to specify the relationship between the two. At the very least, it is clear that they are not mutually exclusive, but mutually reinforcing. Following Fraser's prescription, we might then ask: how can \ac{fair ML} researchers develop an integrated approach that can ``encompass and harmonize'' both allocation and representation? Doing so may not be straightforward, or easily operationalized, but we might again look to extant theories of distributive justice for suggestions. As we have seen, resource measures can be insensitive to heterogeneous social and political conditions from which representational harms arise. Rawls's primary goods measure attempts to address this insensitivity by conceiving of resources broadly and inclusively, but capability theorists argue that Rawls's conception still fails to detect and remediate social injustice. They argue that reallocating resources \emph{alone} is inadequate for addressing structural and psychosocial injustices of recognition (i.e., harms of representation). Doing so requires something \emph{more than} resources. An approach that integrates allocation with representation may still rely (in part) upon redistributing resources, but it may do so with an aim toward securing broader recognition of people within their society, measured in terms of something \emph{other than} the resources they have in their possession. Capability theorists argue that recognition is better secured by measuring well-being in terms of what people are \emph{capable} of achieving or not, as members of their society, sensitive to their heterogeneous social and political conditions~\citep{anderson_what_1999, brighouse_justifying_2010}. \section{The Capability Approach to Measuring Justice} \label{section:5} In his Tanner Lecture, Amartya~\cite{rawls_equality_1979} argued that resource measures are ``concerned with good things rather than with what these good things do to human beings.'' Likewise, the same may be said of the measures discussed in the prior cases. Pre-employment application tests, or ML models with high predictive accuracy, are often assumed to be ``good things'' because, for most people, they are.
That is, for most people, it's plausible to assume that these things contribute to people's well-being in the given context (be it searching for job opportunities online or engaging in a text message conversation). But, theories of distributive justice aren't only concerned with the well-being of \emph{most} people. They're concerned with the well-being of \emph{all} people.\footnote{ Historically, many widely regarded theories of distributive justice overlooked, intentionally delayed, or disregarded consideration of disability. Up to and including Rawls, no major theory of justice in the western philosophical tradition considered disabled people of central importance~\citep{becker_reciprocity_2005}. After Rawls, nearly all do, in part due to disability-informed critiques of his influential theory. Disability has long been characterized as a ``hard case'' for distributive justice, one that~\cite{rawls_kantian_1975} thought should be addressed eventually, but not centrally, since doing so ``prematurely introduc[es] difficult questions that may take us beyond the theory of justice,'' and ``consideration of these hard cases can distract our moral perception by leading us to think of people distant from us whose fate arouses pity and anxiety.'' Capability theorists disagree. On their view, disability is not ``distant from us'' but in fact widespread~\citep{nussbaum_capabilities_2009}. According to~\cite{rawls_equality_1979}, resource measures such as Rawls's are ``not merely ignoring a few hard cases, but overlooking very widespread and real differences,'' differences which affect the well-being of disabled people first and foremost, but also affect the well-being of all people, since all people ``have very different needs varying with health, longevity, climatic conditions, location, work condition and even body size.'' } Resource measures aim to encompass what contributes to people's well-being, but, capability theorists argue, they miss their mark. Because resource measures are insensitive to personal heterogeneities and societal context, they fail to attend to the different ways some people are (or are not) able to convert resources into well-being. Capability theorists argue that what actually contributes to well-being does not lie in the resources themselves, but in what people are \emph{capable} of achieving through their use. The right measure therefore does not lie in the ``good things'' or resources, but in what these ``good things do to human beings''~\citep{rawls_equality_1979}. That is, it lies in people's \emph{capabilities}. In this section, I first define what capabilities are as they are understood in distributive justice and provide some examples. Then, to distinguish capability measures from resource measures and those from ML, I discuss how they differ on the two key characteristics. Based on these differences, I argue that capability measures satisfy the sensitivity criterion, and they also (in theory) satisfy the publicity criterion. Whether capability measures achieve publicity in practice is somewhat less clear (\autoref{tab:measuretable}). In upcoming sections, I outline how the capability approach could be operationalized---through a participatory and democratic process---in the design and development of ML systems, to better remediate the social injustices discussed in the previous case studies, and beyond.
While I present the core differences between capabilities and resources, I necessarily omit some subtleties of the capability approach, which has many variants. For a more comprehensive presentation, I refer interested readers to \emph{Wellbeing, Freedom and Social Justice: The Capability Approach Re-Examined} by Ingrid~\cite{robeyns_wellbeing_2017}. \subsection{What Are Capabilities?} The capability approach attempts to locate a person's well-being in their capabilities. A person's \textbf{capabilities} are defined as their real opportunities to achieve valuable states of ``being and doing''~\citep{daniels_equality_1990}. States of being might include being sheltered, being nourished, or being cared for. States of doing might include commuting to work, browsing the internet, or communicating with a friend via text message. What does it mean to locate a person's well-being in their capabilities, or their ``real opportunities to achieve valuable states of being and doing''~\citep{robeyns_capability_2016}? Unfamiliar readers will likely find this definition nebulous at first, but capabilities are most readily understood when contrasted with resources. Recall that because people can use resources to do or to be things that contribute to their well-being, the resourcist approach aims to locate a person's well-being primarily in their resources (i.e., things they can possess or use). People can usually use their income and wealth to buy a house for shelter, food for nourishment, health insurance for care, a train ticket for their work commute, a computer for internet access, and so on. While capability theorists agree that people can usually use resources to achieve these valuable states of being and doing (being sheltered, being nourished, being cared for, commuting to work, browsing the internet, communicating with friends), they disagree with the implication that resources are therefore the right benefit to look at when measuring what contributes to people's well-being within their society. Why not? Well, according to capability theorists, resources are (as I described earlier) a \emph{means} of achieving those valuable states, but resources are not an end in themselves. That is, although people can usually use, consume, or cash-in their resources to achieve valuable states of being and doing, what actually contributes to well-being are those valuable states themselves, not the means (i.e., resources) used to achieve them~\citep{robeyns_wellbeing_2017}.\footnote{ In the capability literature, these states of being and doing are called a person's \emph{functionings}. Functionings are the states of being and doing that a person experiences (such as moving freely, being literate, being healthy, being well-nourished, being free from persecution, and so on). Functionings focus ``on the state of the person, distinguishing it from both the commodities [resources] that help generate that state, and from the utilities [welfare] generated by that state''~\citep{sen_capability_1993}. For this reason functionings are sometimes called ``midfare'' because they exist mid-way between resources and welfare~\citep{cohen_equality_1990}. } Resources may be a ``proxy'' for a person's well-being, but resources are not equivalent to well-being. As before, being monetarily wealthy is not the same as being well.
We have already seen that, in many cases, the connection between the means (i.e., resources) and the valuable states of being and doing comes apart, depending on people's heterogeneities and societal contexts. For example, for a person who uses a wheelchair, purchasing a train ticket is not a means to the benefits of public transportation, if the train station lacks wheelchair-accessible ramps. For a blind person, purchasing a computer is not a means to the benefits of internet access, if the relevant technologies are not compatible with screen readers. Similarly, in the prior case studies from ML, a pre-employment application test is not a means to employment if that test is not accessible, and increasing predictive text accuracy is not a means to a better user experience if the predictively-suggested language is stereotyping or disrespectful. In each of these cases, the resource (train ticket, computer, application test, predictive accuracy) presents a \emph{false} opportunity for achieving the corresponding valuable state of being or doing (commuting by public transit, browsing the internet, being employed, carrying on an enjoyable text message conversation). That is, what is usually (for most people) an opportunity to convert some resource into a valuable state of being or doing is foreclosed for others, due to their personal heterogeneities and societal context. Because different people are able to convert their resources into well-being in different ways, and with different degrees of reliability, capability theorists argue that resources alone cannot be what contributes to people's well-being---that resources are not \emph{ultimately} valuable to people. Ultimately, they argue, well-being amounts to real opportunities to achieve valuable states of being and doing---and \emph{not} the not-always-reliable resources through which those valuable states may or may not be achieved. The right thing to look at when measuring and remediating social injustice, therefore, is not the quantities of resources people have in their possession, but the states of being and doing that people are capable of achieving, sensitive to their heterogeneous social and political conditions. \subsection{Two Characteristics of Capability Measures} Recall the two key characteristics on which resource measures and measures from ML are aligned: (1) these measures can be expressed as single-valued quantities, and (2) what they measure (namely, resources of some sort) is a means to an end, rather than an end in itself. In this section, I discuss how capability measures differ from these measures on both characteristics, and how they subsequently satisfy the sensitivity criterion and may also (in theory) satisfy the publicity criterion (\autoref{tab:measuretable}). \subsubsection{Capability Measures Must Be Multi-Valued} Whereas resource measures \emph{can be} single-valued quantities, capability measures \emph{must be} multi-valued and can be heterogeneous (i.e., they can combine qualitative and quantitative data). As Martha~\cite{nussbaum_capabilities_2009} puts it, ``it is of the essence of the focus on capabilities to insist that the goods [benefits] to be distributed... are plural and not single, and that they are not commensurable in terms of any single quantitative standard.'' Notice what the essence does \emph{not} insist. It does not insist that the measure is \emph{not} purely-quantitative (i.e., that it is either qualitative or mixed).
In other words, a multi-valued quantitative measure is consistent with the capability approach, but so is a purely-qualitative measure or a measure that mixes qualitative and quantitative data. Capability measures can be formulated in many ways (subject to certain process constraints, as we will see later on), so long as they are not single-valued quantities.\footnote{ However, a formulation that comes close to a single-valued quantity might also contravene the essence of the approach. For example, a double-valued quantitative measure, while technically permissible because it is not commensurable in terms of any single quantity, may not capture the fullness and potential of the approach. This is because each value in the measure is an \emph{indicator} for a capability that people have in a given context. There will be few values in the capability measure only when there are few capabilities relevant to that context. In most contexts of practical concern, however, there will be many relevant capabilities. For example, consider the context of remediating social injustice relating to ambulatory disability. A single-valued resource measure might aim to measure whether people with ambulatory disabilities have in their possession a particular resource (or bundle of resources), including income and wealth, wheelchairs or prostheses, and so on. By contrast, a multi-valued capability measure might aim to measure whether people achieve the capability of ``freely moving without barriers'' within their society, regardless of disability. Achievement of this broader end might be measured through multiple indicator values, such as whether public spaces are wheelchair accessible, whether public transportation is widely available, and whether people can acquire the relevant resources, including income and wealth, wheelchairs or prostheses, and so on. } Here, it is important to state explicitly: the capability approach does not exclude resources from playing a constituent role in the formulation of a capability measure. Exactly the opposite: the approach (in part) \emph{relies on} reallocating resources so that people achieve greater capabilities. Why? Capabilities are based on people's states of being and doing. Because these states of being and doing are coincident with people, they cannot be allocated in any direct fashion, only measured~\citep{brighouse_justifying_2010}. Thus, the difference between the resourcist and capability approach is (in part) one of emphasis. The former emphasizes the resources people have in their possession, and usually measures them through single-valued quantities. The latter emphasizes what it is that people are capable of doing with those resources given their heterogeneities and societal context, and measures this through multiple values. In both cases, resources are reallocated. This begins to suggest how capability measures might be employed to develop an integrated approach that can ``encompass and harmonize'' allocation and representation (i.e., redistribution and recognition), as prescribed earlier by Nancy~\cite{fraser_redistribution_2003}. \subsubsection{Capabilities Are Ends, Not Means} Recall that in the resources-versus-capabilities debate, an important distinction is posited between things that are a ``means to an end'' and things that are an ``end in themselves.'' In this way, it is argued that resources are a means of achieving well-being, but resources are not \emph{ultimately} valuable to people, as an end in themselves.
We have now seen a number of examples in which resources fail to confer their intended benefit, due to context-sensitive variations in how reliably people convert resources into well-being. Because resources are a \emph{means} to achieving well-being, acquiring more of them may not lead to corresponding improvements to people's quality of life and future prospects. In these cases, there is a ``slippage'' between the resources a person has in their possession and how those resources affect their well-being. Resources may be an easily-measurable ``proxy'' for well-being, but they may be unreliably beneficial. Thus, resource measures may fail to encompass well-being because they do not seek to measure well-being \emph{itself} (i.e., what is ultimately valuable to people)~\citep{robeyns_capability_2016}. By contrast, the capability approach offers an alternative framework for conceptualizing well-being itself, one that understands well-being in terms of a person's valuable states of being and doing~\citep{robeyns_capability_2003}. These states of being and doing are ultimately valuable to people. That is, there is no way in which a person's capabilities are ``converted'' into anything of \emph{greater} value. Their value is final. People's capabilities are thus an end in themselves, not a means. Because of this, there is no ``slippage'' between means and ends in the capability framework of well-being. To measure well-being, capability theorists aim to measure people's capabilities, sensitive to their heterogeneities and societal context. \subsection{Capability Measures: Sensitivity, And Publicity?} Capability measures differ from resource measures and measures from ML on their two key characteristics: (1) they must be multi-valued, and (2) what they measure (namely, capabilities) is an end in itself, not a means. Taken together, these characteristic differences contribute to a measure that satisfies the sensitivity criterion, and may also (in theory) satisfy the publicity criterion. Consider sensitivity. The means-end distinction relates to sensitivity as follows. Because capabilities are ends in themselves and measured with respect to people's heterogeneities and societal context, they are sensitive to the heterogeneous social and political conditions under which people live. Additionally, the multi-valuedness of a capability measure contributes to its sensitivity as follows. Because capability measures are necessarily multi-valued, they achieve greater sensitivity to heterogeneous social and political conditions via multiple indicators. By contrast, resource measures are often expressed as single-valued quantities and therefore offer a simple---perhaps overly-simplifying---measure of well-being. However, including multiple indicator values is not enough to provide a fuller measure of well-being. It also matters \emph{which} values are selected for inclusion in the capability measure. Here, the capability approach prescribes selecting values that are indicators of people's capability achievement within their society, rather than adopting a general-purpose resource standard. Consider publicity. Because capability measures must be multi-valued (and are possibly heterogeneous), they will not permit strictly unambiguous comparisons and rankings (i.e., a complete interval ranking, as opposed to merely a partial ordinal ranking).
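The contrast in ranking behavior can be made concrete (a minimal sketch with hypothetical numbers). A single-valued quantity inherits the total order on the real numbers, so any two people's allocations are comparable: $\$40{,}000 < \$55{,}000$, unambiguously. A multi-valued measure is naturally compared component-wise, and under component-wise (Pareto) dominance two capability profiles such as $(0.9,\, 0.2)$ and $(0.5,\, 0.6)$ are incomparable, since neither dominates the other in every component. A multi-valued measure therefore induces only a partial order, unless its components are collapsed back into a single value, which the capability approach resists.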
While it is clear that a capability measure will not provide a public standard of measurement that is \emph{equally} as strict as that provided by a singular quantitative resource measure, it does not necessarily follow that a capability measure \emph{cannot} be publicly legible and verifiable. This will depend on the requirements set for public legibility and verifiability. A capability measure \emph{can} provide a standard of measurement of some sort. The question is whether that standard will be strict enough for the purposes of remediating social injustices. Resourcists argue that the capability standard of measurement is not strict enough; that justice demands interpersonal and intercontextual comparisons be clear, and not murky~\citep{brighouse_equal_2010,brighouse_critique_2010}. Capability theorists counter-argue in two ways. First, capability theorists observe that there may be a trade-off between improving human capabilities and achieving a publicly legible and verifiable standard of measurement. Achieving full publicity might require omitting important contextual information, and doing so could allow social injustices to slip by undetected and unremediated, as we have seen. Conversely, admitting too much contextual information into the measure might strain public legibility, and doing so could also perpetuate social injustices by stymieing oversight and accountability of the systems and institutions regulated by that measure. When the trade-off is between improving human capabilities and achieving a fully legible and verifiable measure, capability theorists argue that we should opt for improving human capabilities, by relaxing the strictness of the publicity criterion~\citep{brighouse_two_2010}. In other words, so long as capabilities are more-or-less equalized across people in society, a capability measure does not need to achieve publicity fully, but only partially. Second, capability theorists appeal to the way in which the capability measure is specified: through political consensus, reached via a participatory and democratic process. When a measure of justice is specified through such a process, it achieves public legibility and verifiability of a different sort. In the next section, I will elaborate further on what is meant by a process that is participatory and democratic. \section{Operationalizing The Capability Approach} \label{section:6} I now address the question of whether the capability approach can be operationalized in the design and development of ML systems, and if so, how? The answer to this question will depend (in part) on how strictly the requirement for public legibility and verifiability is drawn. Capability measures can (in theory) satisfy both sensitivity and publicity criteria. Sensitivity is achieved through a multi-valued (possibly heterogeneous) measure of people's capabilities. Publicity is achieved by relaxing the strictness of the requirement for public legibility and verifiability (in favor of improving human capabilities), and/or by specifying the capability measure through a participatory and democratic process. Before elaborating on what this means, I first note that such a process is not impracticable. For decades, the capability approach has been operationalized as an alternative to standard single-valued measures of well-being in welfare economics (e.g., measures of Gross Domestic Product and Gross National Product).
It has been operationalized in the United Nations Human Development Index, to measure international capabilities in health, education, and income~\citep{fukuda-parr_human_2003, fukuda-parr_handbook_2009}. And, of particular relevance to this inquiry, the approach has been operationalized in the design of many information and communications technologies~\citep{oosterlaken_evaluating_2012}. For an in-depth survey of technology-specific applications, I refer interested readers to the anthology \emph{The Capability Approach, Technology, and Design} by Ilse Oosterlaken and Jeroen van den Hoven~\citep{oosterlaken_capability_2012}. \subsection{How to Do It: A Five-Step Process} \label{section:6.1} What does it mean to specify a capability measure through a participatory and democratic process? And how would the resulting measure more successfully remediate the social injustices from the prior case studies, and beyond? I emphasize that there are no \emph{general} answers to these questions. Different capability theorists propose different methods of operationalizing the approach~\citep{robeyns_sens_2003}. As just one example of how such operationalization might proceed in a participatory and democratic fashion, I outline a five-step process, extrapolated from Murphy and Gardoni~\citep{oosterlaken_design_2012}. \begin{quote} \textbf{Step 1.} In collaboration with the affected parties (i.e., people whose well-being may be affected by the system under consideration), select the set of relevant capabilities (i.e., what encompasses people's well-being with respect to that system). \end{quote} \begin{quote} \textbf{Step 2.} For each capability in the set, select indicators for that capability. Indicators might include single-valued quantities, open-ended written testimony, and/or Likert scale ratings, indicating the extent to which its corresponding capability is achieved. \end{quote} \begin{quote} \textbf{Step 3.} Convert each indicator into an index that ranges from 0 (minimum achievement) to 1 (maximum achievement). Note: this step is optional, if it is preferred that the data remain qualitative or mixed. \end{quote} \begin{quote} \textbf{Step 4.} Combine all indices to create an aggregate measure (e.g., by averaging, if the measure is quantitative). \end{quote} \begin{quote} \textbf{Step 5.} Iteratively design and evaluate the system to bring all affected parties to a \emph{baseline threshold} level of capability achievement. If and when all parties reach the baseline threshold, increase the threshold. \end{quote} \noindent In this way, the capability approach specifies the distributive principle of ``equal capabilities'' across affected parties, with respect to the particular system and context under consideration. Note that this five-step participatory process could result in an aggregate, purely-quantitative measure, but it need not. For example, converting open-ended written testimony into a single-valued quantity at Step 3 is not required, as it may be difficult to make this conversion in any meaningful way, or it may be more valuable and illuminating to keep qualitative data in its qualitative form.\footnote{ Importantly, even if this process did result in an aggregated quantity, this multi-valued quantitative measure would not be reducible to, or give priority to, any constituent single-valued quantity.
This is because, in Step 2, if a single-valued quantity were selected as an indicator of some relevant capability, this indicator would be only one of many equally important indicators, each measuring a corresponding relevant capability, of which there are necessarily more than one. Thus, the single-valued quantity would not be the primary measure of whether the system is fair or just, or of whether it improves or diminishes the well-being of those affected by it. Additionally, in Step 4, the aggregation of multiple single-valued quantities should be done in a way that does not strongly prioritize any particular quantity. The most basic approach to aggregation is un-weighted averaging. } The process only stipulates that the capability measure is multi-valued and not singular. Consider again the prior cases from ML. Operationalizing the capability approach in the case of Disability Hiring would first require specifying the set of relevant capabilities in collaboration with the users of online hiring platforms, requiring representation from the people who are likely to use or interact with those systems. This would include people who navigate the hiring platforms using screen reader technology. Here, it is important to note that participatory democracy does not imply majoritarian rule, or even rule by plurality. That is, the set of relevant capabilities should be determined through a participatory and democratic process, sensitive to both majority and minority interests. (No doubt, such a process introduces a number of difficulties and limitations, the discussion of which I will postpone until the next section.) Hypothetically, the relevant capabilities selected through such a process might include: successfully completing an employment application test (as indicated by whether or not a user submits the test without reporting difficulties), having the option to communicate directly with a human hiring agent (as indicated by the presence or absence of customer support on the hiring platform), and so on. Similarly, operationalizing the capability approach in the case of Predicting Sociolects would first require specifying the set of capabilities relevant to text auto-complete models, in collaboration with the users of those models---particularly, self-identified speakers of AAE---prior to optimizing the model to achieve a particular accuracy benchmark. Hypothetically, the process-determined set of relevant capabilities might include: having an enjoyable user experience (as indicated by user testimony or Likert scale ratings), contesting or customizing the predictions of the text auto-complete model (as indicated by whether these controls are exposed in the user interface), and so on. The aforementioned capabilities and indicators primarily exist at the interface between people and the ML system, and hypothetically result in a measure that is multi-valued and heterogeneous (i.e., combining quantitative and qualitative data). In this sense, the capability approach is analogous to research methodologies often employed in the field of human-computer interaction (including ethnography, participatory action research, and other mixed methods), whereas the resourcist approach is analogous to methodologies often employed in ML.
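To illustrate how Steps 3 through 5 might look in practice, here is a minimal sketch in Python. Every name in it (the parties, the indicators, the scale bounds, and the baseline threshold) is hypothetical, loosely mirroring the Disability Hiring example; an actual measure would be specified with the affected parties through the participatory process, and might well remain qualitative or mixed rather than quantitative.
\begin{verbatim}
from statistics import mean

# Step 2 (assumed already done): quantitative indicators per affected
# party, recorded on known scales (here, Likert ratings from 1 to 5).
indicators = {
    "party_A": {"test_completed_without_difficulty": 4,
                "human_agent_available": 5},
    "party_B": {"test_completed_without_difficulty": 2,
                "human_agent_available": 3},
}

# Known (min, max) bounds for each indicator's scale.
scales = {
    "test_completed_without_difficulty": (1, 5),
    "human_agent_available": (1, 5),
}

def to_index(value, lo, hi):
    """Step 3: rescale an indicator onto [0, 1]."""
    return (value - lo) / (hi - lo)

def aggregate(party_indicators):
    """Step 4: un-weighted average of all indices, so that no single
    indicator is given priority over the others."""
    return mean(to_index(v, *scales[k]) for k, v in party_indicators.items())

def all_reach_baseline(parties, threshold):
    """Step 5: check whether every affected party reaches the baseline
    threshold of capability achievement."""
    return all(aggregate(ind) >= threshold for ind in parties.values())

baseline = 0.5
scores = {p: round(aggregate(ind), 2) for p, ind in indicators.items()}
print(scores)                                    # {'party_A': 0.88, 'party_B': 0.38}
print(all_reach_baseline(indicators, baseline))  # False
\end{verbatim}
On this sketch, the second party falls below the baseline, so Step 5 directs designers to iterate on the system (here, presumably, the accessibility of the application test) rather than simply to report a population-level average.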
But here it may be wondered: when operationalized in the design and development of ML systems, would the capability approach apply at ($I$) the level of algorithm design and model optimization (as in much of current \ac{fair ML} research), at ($II$) the level of human interaction with the ML system (as in the examples above), or at ($III$) the level of societal well-being (as in traditional welfare economics)? The answer is, potentially, any or all of the above, depending on the scale and scope of the ML system under consideration: the relevant capabilities and the evaluation of justice or fairness should be scoped to address that particular system. That is, when operationalized with respect to a particular ML system, the approach should correct for capability deficits resulting from (or relating to) that system, and not necessarily beyond.\footnote{ I thank Reuben Binns for encouraging me to clarify this point. } When designing algorithms at level $I$, it may be unrealistic or impracticable for ML developers to correct for capability deficits at level $III$, where the set of relevant capabilities may be different because these deficits do not arise from ML systems. However, considerations of societal well-being may still factor into participants' selection of relevant capabilities for a particular ML system. For example, level $III$ considerations might lead participants to conclude that a particular ML system (scoped to level $I$ and/or $II$) should not be built or deployed at all, if that system seems likely to exacerbate capability deficits at the level of societal well-being. Thus, while the approach technically permits a capability measure that is purely-quantitative---applied at level $I$ alone---such a measure would leave behind part of what makes capability measures useful to begin with: they can (in theory) bridge these different levels of evaluation, from ($I$) algorithm design, to ($II$) human-system interaction, to ($III$) societal well-being. And they can do so in a principled fashion, under a unified conceptual framework of well-being. Here again, this suggests how capability measures can be employed to develop an integrated approach that can ``encompass and harmonize'' allocation and representation (i.e., redistribution and recognition), as prescribed earlier by Nancy~\cite{fraser_redistribution_2003}. In the next section, I discuss some of the most noteworthy advantages and limitations of the capability approach. \subsection{Advantages and Limitations} \label{section:6.2} The five-step participatory process outlined in the previous section raises a number of questions. Who are the affected parties? How do these people select the relevant capabilities, the indicators for those capabilities, and the baseline threshold of capability achievement? How is data collected and aggregated? One short, simplifying answer is that each of these decisions is to be made through a participatory and democratic process~\citep{sen_quality_1993}. The longer, more complicated answer is that such processes are hardly straightforward, and each decision point poses further challenges to adopting the capability approach in practice. Here, I will not attempt to address all of these questions; the capability literature is vast and there are many answers to these questions on offer~\citep{robeyns_wellbeing_2017}. Instead, I discuss some of the advantages and limitations of the approach as they relate specifically to the design and development of ML systems.
\subsubsection{Sensitivity Across the Data Pipeline} \label{section:6.2.1} There are at least two advantages to operationalizing the capability approach in ML systems. First, the approach is underspecified by design. At minimum, it stipulates no more than a \emph{conceptual framework} for developing capability measures of well-being in a context-sensitive fashion~\citep{robeyns_wellbeing_2017}. Contrast this with Rawls's theory of justice, which is elaborate, highly-specified, and intended to apply under certain ideal social conditions, rather than to be directly operationalized in practice (see footnote 3). The capability approach is \emph{intended} to be operationalized in a variety of real-world contexts, and is underspecified in order to be flexibly applied. As we will see, underspecificity has its limitations, but for now let's consider its advantages. Because it is underspecified, the capability approach has been flexibly operationalized for a variety of projects, contexts, and heterogeneous data types, both qualitative and quantitative~\citep{alkire_valuing_2005}. Applications of the approach have ranged from a quantitative evaluation of gender differences in India~\citep{sen_commodities_1985}, to a qualitative ``asset mapping'' survey seeking to understand and improve the quality of life for people in low-income neighborhoods in California~\citep{jasek-rysdahl_applying_2001}, to the aforementioned technology-specific applications. This breadth of applications suggests that the approach may be suited to answering recent calls for incorporating qualitative and heterogeneous data into the evaluation of ML systems~\citep{malik_hierarchy_2020}. So far, ML systems have been largely evaluated using standard quantitative measures on benchmark datasets. And although these evaluations are often critiqued as insensitive to societal context~\citep{selbst_fairness_2019}, it has remained largely unclear \emph{how} to achieve greater sensitivity to people's heterogeneous social and political conditions, in a principled fashion. The capability approach provides a conceptual framework that can (in theory) bridge several levels of evaluation: the algorithmic, the human-system interface, and the societal. A second advantage of the capability approach is that it is process-oriented. That is, it does not at the outset aim to achieve any predetermined outcome, such as optimizing a particular measure, maximizing the accuracy of an ML model, and so on. Instead, the approach remains open to any and all outcomes, sensitive to the interests, needs, personal heterogeneities, and societal contexts of participants who engage in the process of developing the capability measure. In this way, when operationalized in the design of information and communications technologies, the capability approach can (in theory) achieve sensitivity from the beginning, to the end, of the data pipeline. That is, the approach aims not only at sensitivity to people's heterogeneities with respect to particular extant datasets~\citep{gebru_datasheets_2020}, but at sensitivity across the entire data pipeline: \emph{who} collects the data, \emph{what} data is collected, \emph{how} data is being collected, and \emph{which} calculations and evaluations are made using the data.
In this sense the capability approach is reminiscent of value-sensitive, participatory, or inclusive design practices~\citep{frediani_processes_2012, oosterlaken_human_2015}, as well as more recent calls for justice-based and feminist design practices in data science~\citep{costanza-chock_design_2018,costanza-chock_design_2020,dignazio_data_2020}. On process-oriented variants of the approach, the foregoing questions---who, what, how, and which---are to be answered through a participatory and democratic process, such as the one outlined above. \subsubsection{Selecting the Relevant Capabilities} \label{section:6.2.2} While the underspecified and process-oriented design of the capability approach has its advantages, it also brings with it many limitations. In particular, the question of how to select the relevant capabilities is a prominent point of tension among capability theorists, and it opens the approach to criticism from resourcists concerning the public legibility and verifiability of a capability measure. There are many answers to this question on offer, the most prominent of which I discuss in this section. Sen has repeatedly declined to endorse a specific set of relevant capabilities, preferring instead that the approach remain an open-ended conceptual framework, and not a well-defined theory. According to Sen, the relevant capabilities should be decided via democratic participation and political consensus, sensitive to the heterogeneous social and political conditions of the people who will be affected by the resulting measure~\citep{sen_capability_1993}. In practice, however, achieving political consensus may be slow, difficult, or expensive. Questions of scope and scale are likely to arise. Participants may disagree on which capabilities are relevant, and their valuations of particular capabilities may be biased by their own \emph{adaptive preferences}, whereby they unconsciously downgrade the importance of capabilities previously inaccessible to them~\citep{alkire_concepts_2008, khader_adaptive_2011}.\footnote{ I thank Sally Haslanger for suggesting these references. } To address some of the practical difficulties with participatory and democratic processes, Robeyns has proposed pragmatic criteria for guiding the selection of relevant capabilities in policy and empirical research, modulated by the scope and scale of the system under consideration~\citep{robeyns_capability_2003}. In contrast with Sen's view, Nussbaum and Anderson have both proposed specific and highly-abstract capability sets relevant to their particular areas of concern. Anderson proposes that the relevant capabilities are whichever allow people to function as equal citizens in a democratic society~\citep{brighouse_justifying_2010}.
Nussbaum proposes a general set of \emph{ten core capabilities}, which can be translated into more detailed and specific capability sets depending on local context~\citep{nussbaum_women_2000}.\footnote{ Nussbaum's set of core capabilities includes \emph{bodily health} (i.e., nourishment and shelter), \emph{bodily integrity} (i.e., freedom of movement, freedom from violence), \emph{emotions} (i.e., being able to have attachments to things and people), \emph{practical reason} (i.e., being able to engage in critical reflection about the planning of one's life), \emph{affiliation} (i.e., being able to live with others), \emph{play} (i.e., being able to laugh and pursue recreational activities), and \emph{control} over one's environment (i.e., political choice and participation)~\citep{nussbaum_frontiers_2009}, among others. Only some of these capabilities may be relevant to the design and development of ML systems. } While these capability sets are proposed for their particular application areas, and so are not immediately applicable to the design and development of ML systems, they may nevertheless guide the identification of capabilities relevant to these systems. For example, automating important decisions (such as those regarding employment, loans, or healthcare) in ways that are opaque may significantly inhibit a person's capability for \emph{practical reason} (one of Nussbaum's core capabilities). That is, doing so may inhibit a person's ability to critically reflect and plan their life~\citep{coeckelbergh_health_2010}. If a person does not know for what reason they were denied employment or a loan, it will likely be difficult for them to self-reflect and better plan for future job or loan applications. Similarly, automating these decisions in ways that do not provide opportunities to contest their outcomes may inhibit a person's capability for \emph{control} over their environment (another one of Nussbaum's core capabilities). The underspecified design of the capability approach leaves it open to criticisms of impracticability and public illegibility when the participatory process fails to result in an agreed-upon capability set. This may be so, but public deliberation and democratic processes are never guaranteed at the outset to arrive at definitive answers. If answers could be arrived at, then we (i.e., participants in the process of specifying the capability measure) could have some confidence that the resulting measure would be sensitive to our heterogeneous social and political conditions. Alternatively, if no such agreed-upon answers are found, engaging in a participatory and democratic process may---at the very least---provide some valuable insight into the question of which capabilities are relevant to the design and development of ML systems. \subsection{Capability Measures in Machine Learning} \label{section:6.3} Operationalizing the capability approach in the design and development of ML systems can (in theory) satisfy the sensitivity and publicity criteria for a measure of justice in machine learning. Sensitivity is achieved through a multi-valued (possibly heterogeneous) measure of human capability with respect to these systems. Publicity is achieved by relaxing the strictness of the requirement for public legibility and verifiability (in favor of improving human capabilities) and/or by specifying the measure through a participatory and democratic process.
Thus, whether or not the capability approach can be successfully operationalized in the design and development of ML systems will depend (in part) on how strictly the requirement for public legibility and verifiability is drawn, and on the particular details of the process. If the field of \ac{fair ML} draws the requirement for public legibility and verifiability strictly (i.e., by requiring a rank or interval ordering of well-being, rather than a partial ordering), then a capability measure may not be adequate for remediating injustice in machine learning. However, I have noted that the capability approach has been successfully operationalized in a variety of fields (some of which are no less quantitatively strict than ML), and the approach has at least been given passing consideration by some \ac{fair ML} researchers~\citep{gajane_formalizing_2017, jurgens_just_2019}. Additionally, recent, complementary work has proposed how to better contextualize ML models and datasets with respect to their societal context~\citep{mitchell_model_2019, gebru_datasheets_2020}, how to engage in participatory approaches when designing and developing ML systems~\citep{kulynych_participatory_2020}, and how a different but related conception of ``capabilities'' can be operationalized in the evaluation of ML systems~\citep{ribeiro_beyond_2020}. These examples suggest that, if the publicity requirement is relaxed within reason, a capability measure may better encompass what it is that contributes to people's well-being, as it relates to these systems, and better detect and remediate the social injustices that arise from their deployment. Toward that end, I have outlined one way the capability approach could be operationalized in the design and development of ML systems, through a five-step participatory and democratic process. I have also discussed the most noteworthy advantages and limitations of such a process. While its success cannot be guaranteed at the outset, I have suggested that, even if it fails to satisfy strict criteria for public legibility and verifiability, engaging in a participatory and democratic process may still yield information relevant and valuable to the urgent remediation of social injustices arising from the deployment of ML systems. It may, therefore, still be worth the effort. \section{Conclusion} \label{section:7} In this paper, I have extended---from philosophy to machine learning---a longstanding debate between defenders of the resourcist and capability approaches to measuring justice. By establishing a theoretical \emph{resemblance} between resource measures and measures from \ac{fair ML}, where algorithmic unfairness has often been conceptualized as a problem of resource allocation, I have argued that the capability theorists' critiques of the resourcist approach carry over to ML systems. In two real-world case studies, I have shown how the measures selected for mitigating algorithmic unfairness can end up entrenching and reproducing social injustice---just as capability theorists argue resource measures do. I have aimed to provide a constructive critique of (and partial corrective to) the significant attention \ac{fair ML} researchers have already paid to the resourcist approach (Rawls's theory, in particular). This critique is constructive because I do not merely criticize the operationalization of Rawls's theory in ML systems. I also introduce an alternative approach, perhaps better suited to such operationalization.
I have qualified this critique by noting where and how operationalized analogues of Rawls's theory have diverged from its original statement and intended application. I have outlined what operationalizing the capability approach in ML systems could look like in practice---through a participatory and democratic process---and discussed how doing so could better remediate the social injustices from the case studies, and beyond. Finally, I have discussed some of the most noteworthy advantages and limitations of the capability approach. Extending the longstanding resources-versus-capabilities debate from philosophy to ML has revealed some broadly applicable characteristics of (and criteria for) a measure of justice in machine learning. \begin{itemize} \setlength\itemsep{0.0em} \item Whether a measure is single or multi-valued; quantitative and/or qualitative. \item Whether what is being measured is a ``means to an end'' or an ``end in itself.'' \item Whether or not a measure is sensitive to people's heterogeneities and societal contexts. \item Whether or not a measure is publicly legible and verifiable. \end{itemize} \noindent These characteristics and criteria have broad implications for the practicability of quantitative and/or qualitative evaluations of justice and fairness in machine learning, for the participatory design of these systems, and for the democratic oversight of the institutions that build and deploy them. \subsection{Replying to Objections} The foregoing inquiry remains open to some objections, to which I reply in this final section. Specifically, the reader may (1) object to the established resemblance between resource measures and those from ML, (2) object to the capability theorists' critique of resource measures, (3) object to the capability approach from more conservative perspectives, and/or (4) object to the approach from more critical perspectives. I will reply to these objections in reverse order. Regarding (4), the capability approach has been criticized from more critical perspectives as inadequately attentive to how social and political institutions produce and reproduce power~\citep{robeyns_capability_2003}. The concern here is that, although the approach engages a participatory and democratic process, it does not necessarily attend to the ways that participation in that process may be coerced by institutional power, or biased by participants' adaptive preferences~\citep{alkire_concepts_2008, khader_adaptive_2011}. Without attention to how institutions exert power over participants, participatory and democratic processes may reduce to perfunctory representation, ``tokenism'', or ``participatory theater'' that does not substantively address and improve participants' well-being. Robeyns has responded to this concern by appealing to the underspecified design of the approach~\citep{robeyns_capability_2003}. Because it stipulates at minimum no more than a conceptual framework in which capabilities are taken as the right measure of justice, a variety of more critical and progressive theories may be incorporated into this framework. Specifically, in this case, theories that address power differentials in democratic deliberation may be incorporated, as needed. Regarding (3), the capability approach has been criticized from more conservative perspectives as being impracticable on grounds of parsimony.
Operationalizing the approach is likely to be expensive, perhaps so much so that profit-motivated, fiscally conservative institutions and corporations will decline to adopt it in practice. Critiques from parsimony may have some merit for small, funding-scarce projects. But, I argue (briefly), they do not apply to ML, for two reasons. First, ML is a highly-funded, highly-profitable area of industry and academic research. The expense can be spared. Second, the societal benefits of building and deploying ML systems are not only unproven, but highly dubious. Their detriments are more readily apparent. Showing that deploying these systems improves well-being and remediates---rather than reproduces---social injustices will require careful, comprehensive evaluation. Arguably, any adequate evaluation (capability-based or otherwise) will be time-consuming and monetarily expensive. Regarding (2), in the resources-versus-capabilities debate, some philosophers argue that the two measures are ultimately not all that different~\citep{brighouse_equal_2010}. Or, more commonly, they argue that the resourcist approach can address the capability theorists' critiques in various ways. While resource measures are not sensitive in the same way as capability measures, resourcists have argued that this insensitivity can be addressed through other means, for example through social welfare programs that provide compensatory reallocations of resources on the basis of disability, unemployment, age, and so on~\citep{brighouse_critique_2010}. Additionally, when considering the totality of Rawls's theory, some argue that a measure of justice does not need to be sensitive to heterogeneous social and political conditions because Rawls's lexically prior principles (i.e., equal basic rights and liberties, fair equality of opportunity) and his measure of primary goods will address the cases of concern to the capability theorists~\citep{rawls_political_1993}. Related to this last point, it remains an open possibility that, if operationalized in its totality---rather than piecemeal---in ML systems, the resourcist approach could better address the prior cases from ML, although it still remains unclear whether such an operationalization would be appropriate to those particular contexts~\sectionlabel{section:4.3}. Finally, regarding (1), the objection that I have not sufficiently established the resemblance between resource measures and measures from ML, I do think that a fuller characterization of the latter is possible. Of the prominent approaches from theories of distributive justice (resourcist, capability, welfarist), I maintain that the measures from ML most closely resemble those belonging to the resourcist approach, to the extent that they align on the characteristics and criteria previously discussed. However, this resemblance does not preclude the possibility of a different type of measure or approach, distinct from these but yet to be specified. Inquiring into this possibility is not the primary aim of this paper, and is left to future work. \section{Acknowledgements} For supporting, supervising, and evaluating this paper, I am especially grateful to Arvind Satyanarayan, Sally Haslanger, and Reuben Binns. For providing detailed feedback and edits, I am very grateful to Marion Boulicault. For providing helpful comments and suggestions, I thank Harini Suresh, Milo Phillips-Brown, Gabriel Karger, and Jason White. For reading early drafts, I thank Aspen Hopkins, Crystal Lee, and Jonathan Zong.
I thank the reviewers and participants from the 2020 ACM Conference on Fairness, Accountability, and Transparency, and from the 2019 ACM SIGACCESS Conference on Computers and Accessibility: Workshop on AI Fairness for People with Disabilities. I thank Momin M. Malik, whose template I borrowed for typesetting this document. This paper is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
\section{Introduction} \label{Intro} A classifier usually makes predictions for a test sample by selecting the class label with the highest predicted probability. Such a strategy is insufficient to meet the growing interest in characterizing prediction reliability in real applications such as medical diagnosis \cite{esteva2017dermatologist, kompa2021second} and autonomous vehicles \cite{kalra2016driving, qayyum2020securing}. To address this need, we can minimize a mixed cost on misclassification and rejection to allow refraining from making any prediction on a test sample with high uncertainty. For instance, for a binary response we may refrain from predicting an observation $x$ with small $\max_{k\in \{0,1\}} \hat p_{k}(x)$, where $\hat p_{k}(x)$ is the probability of being class $k$ at observation $x$, estimated using the training data \cite{chow1970optimum, herbei2006classification, bartlett2008classification}. This idea has been applied to different learning algorithms and extended to the multi-class classification problem \cite{cortes2016boosting,ni2019calibration,charoenphakdee2021classification}. Alternatively, another line of work adopts the set-valued prediction framework where the classifier outputs all plausible labels at $x$ using the estimated probabilities $\hat{p}_{k}(x)$. For example, one may construct $\hat C(x) =\{k: \hat p_k(x)\geq \tau_k\}$ with $\tau_k$ selected to guarantee a desired coverage of the true label \cite{vovk2005algorithmic}. Despite such progress, the work above assumes that the training and test samples are i.i.d.\ generated from the same distribution, with the rejection rules capturing uncertain areas in the training cohort. However, in many safety-critical systems it is crucial to characterize uncertainty under distributional changes and to flag test samples on which we should not trust the model trained with the training set. For example, in medical applications, the test cohort may contain samples that represent novel pathology and bear low similarity to the labeled training set \citep{lin2005approximations}. Also, network attackers may generate novel intrusions to circumvent current detection systems \citep{marchette2001computer}. In this paper, we propose CSForest (\underline{C}onformalized \underline{S}emi-supervised random \underline{Forest}) for calibrated set-valued prediction under the non-i.i.d.\ setting where the training and test cohorts may differ. The proposed method constructs a semi-supervised random forest ensemble with both labeled training data and unlabeled test data, and adapts the idea of Jackknife+aB (jackknife+-after-bootstrap) to accompany this semi-supervised ensemble with well-calibrated prediction sets \citep{kim2020predictive}. In our numerical experiments, CSForest achieves significantly better classification accuracy than existing classification methods while providing a provable worst-case coverage guarantee on the true labels. We give a glimpse of its strength over alternative methods in section \ref{sec:preview}. \subsection{Preview of CSForest} \label{sec:preview} \begin{figure*} \centering \includegraphics[height = .33\textwidth, width=1\textwidth]{illustration_exampleI} \caption{FIG\ref{fig:illustration}A shows the first two dimensions of samples generated from the three classes: green\slash blue\slash red points represent samples from class 1\slash 2\slash R. FIG\ref{fig:illustration}B shows the coverage rate, defined as the proportion of samples whose true labels are included in their prediction sets.
The horizontal dashed line marks the coverage level of 95\%. FIG\ref{fig:illustration}B is grouped by the actual labels in the testing data and colored based on whether a prediction set contains only the correct label (blue) or more than the correct label (gray). } \label{fig:illustration} \end{figure*} Before we proceed to the details of CSForest, we first compare it with BCOPS (\underline{B}alanced\&\underline{C}onformalized \underline{O}ptimal \underline{P}rediction \underline{S}ets), CRF (\underline{C}onformalized \underline{R}andom \underline{F}orest), and DC (\underline{D}ensity-set \underline{C}lassifier) on a simulated classification task described in Example \ref{example:intro}. BCOPS, CRF and DC represent three different types of calibrated set-valued prediction. \begin{itemize} \item BCOPS combines a semi-supervised classifier separating different training classes and the unlabeled test cohort and constructs a calibrated set-valued prediction using sample-splitting conformal prediction \citep{guan2019prediction}. Compared to CSForest, BCOPS makes poor use of both the training samples and the test samples. We will provide more details in section \ref{sec:related} and section \ref{sec:methods}. In this paper, we used random forest \citep{breiman2001random} as the classifier in BCOPS. \item CRF constructs the set-valued prediction $\{k: \hat p_k(x)\geq \tau_k\}$ by including training labels $k$ achieving high estimated probability from the random forest classifier, with the cut-offs $\tau_k$ chosen based on sample-splitting conformal prediction \cite{vovk2005algorithmic}. \item DC constructs the set-valued prediction similarly to CRF, except for replacing the estimated probability $\hat p_k(x)$ by an estimate of the density function for class $k$ using the training data \cite{hechtlinger2018cautious}. \end{itemize} \begin{example} \label{example:intro} Let $X\in \mathbbm{R}^{10}$ be a ten-dimensional feature. In the training data, we observe two classes $Y\in \{1,2\}$, but the test data contains a novel component labeled with $Y=R$. For each component, $X_j\sim N(0,1)$ $(j=3,\ldots, 10)$ are noise, with different components separated by the first two dimensions: {\small $$\left\{ \begin{aligned} & X_1 \sim N(0,1), \;X_2\sim N(0,1), \quad Y = 1,\\ &X_1\sim N(3,0.5), \;X_2\sim N(0,1), \quad Y = 2,\\ &X_1\sim N(0,1), \;X_2\sim N(3,1), \quad Y = R. \end{aligned} \right. $$ } FIG\ref{fig:illustration}A shows the first two dimensions of samples generated from the three classes $Y\in \{1,2,R\}$. We generate 200 samples from class 1 and class 2 to form the training set and 200 samples from each of the three classes to form the test set. \end{example} We evaluate the performance by type I error and type II error. Type I error is defined as the percentage of samples from observed classes whose true label is excluded from the associated set-valued prediction $\hat C(x)$. Type II error is defined as the percentage of samples whose $\hat C(x)$ contains labels other than the true label. In FIG\ref{fig:illustration}B, we evaluated the quality of the set-valued prediction $\hat{C}(x)$ using DC, CRF, BCOPS and CSForest across 20 independent runs with a targeted miscoverage rate of $\alpha =0.05$. All four methods have been calibrated using conformal prediction and achieve the desired coverage on true labels. However, CSForest and BCOPS are both adaptive to the test cohort and improve over CRF and DC for outlier detection by a large margin.
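For concreteness, the short sketch below generates the data of Example \ref{example:intro} and computes the two error rates for an arbitrary set-valued predictor. This is our own illustrative code, not part of the original experiments; in particular, we take the second parameter of each normal distribution to be a standard deviation, and all function names are ours.
\begin{verbatim}
import numpy as np

def sample_class(label, n, rng):
    # One component of Example 1: X_3,...,X_10 ~ N(0,1) noise.
    X = rng.normal(size=(n, 10))
    if label == 2:
        X[:, 0] = rng.normal(3.0, 0.5, n)   # class 2: X_1 ~ N(3, 0.5)
    elif label == "R":
        X[:, 1] = rng.normal(3.0, 1.0, n)   # outlier: X_2 ~ N(3, 1)
    return X

def error_rates(pred_sets, y_true, inliers=(1, 2)):
    # Type I: true label of an observed class excluded from C(x).
    inlier_idx = [i for i, y in enumerate(y_true) if y in inliers]
    type1 = np.mean([y_true[i] not in pred_sets[i] for i in inlier_idx])
    # Type II: C(x) contains any label other than the true one.
    type2 = np.mean([len(set(C) - {y}) > 0
                     for C, y in zip(pred_sets, y_true)])
    return type1, type2

rng = np.random.default_rng(0)
X_test = np.vstack([sample_class(k, 200, rng) for k in (1, 2, "R")])
y_test = [1] * 200 + [2] * 200 + ["R"] * 200
\end{verbatim}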
In addition, compared to BCOPS, CSForest has fewer samples with multiple labels from classes $1$ and $2$ (higher specificity\slash lower type II error), and it rejects a higher percentage of outliers, which is equivalent to a smaller type II error in the outlier class. \section{Related work} \label{sec:related} \noindent\textbf{Conformal prediction with different splitting schemes.} Split conformal prediction offers an easy-to-implement but reliable recipe for capturing prediction uncertainty \citep{vovk2005algorithmic}. It first splits the training sample pairs $(X_i, Y_i)$ into two sets $\mathcal{I}_{tr}^1$ and $\mathcal{I}_{tr}^2$. Then, it uses $\mathcal{I}_{tr}^1$ to learn a score function $\hat s(x, y):\mathbbm{R}^{p+1}\mapsto \mathbbm{R}$ and $\mathcal{I}_{tr}^2$ to make a calibrated decision about whether to include a particular choice of $y$ in the prediction set: \begin{align*} \hat C(x) = \left\{y: \frac{1+ \sum_{(x_i, y_i)\in \mathcal{I}_{tr}^2}\mathbbm{1}_{\hat s(x,y)\geq \hat s(x_i,y_i)}}{|\mathcal{I}_{tr}^2|+1}\geq \alpha\right\}. \end{align*} The constructed $\hat C(x)$ covers the unobserved $Y_{n+1}$ of the test sample $X_{n+1}$ with guaranteed probability $(1-\alpha)$, given that the training samples and the test sample $(X_{n+1}, Y_{n+1})$ are exchangeable with each other. In \cite{vovk2018cross}, the authors proposed the cross-conformal prediction method to improve data utilization efficiency, which calculates scores for each fold of data using score functions learned from the remaining folds. \cite{barber2021predictive} further proposed Jackknife+ for regression problems, which combines the jackknife with conformal prediction and constructs the prediction interval as \[ \hat C(x) = \left\{y:\frac{1}{n+1}\left(1+\sum_{i=1}^n\mathbbm{1}_{ \hat s^{i}(x,y)\geq \hat s^{i}(x_i,y_i)}\right)\geq \alpha\right\}, \] where $\hat s^i(x, y)=|\hat \mu^{i}(x) - y|$ is the regression score using the prediction function $\hat \mu^i(x)$ learned from the training samples excluding $(x_i, y_i)$. Although Jackknife+ can only provide a worst-case coverage guarantee at level $(1-2\alpha)$, the achieved empirical coverage is often well-calibrated. Jackknife+ tends to offer better constructions than split conformal prediction, with the downside being its computational cost. \cite{kim2020predictive} described Jackknife+aB to mitigate the computational burden, which constructs an ensemble regression score $\hat s^i(x, y) = |y - \psi(\{\hat \mu_b(x): i\notin \mathcal{I}^b\})|$ where $\hat \mu_b(x)$ is the estimated prediction function of $y$ using the bootstrapped training set $\mathcal{I}^b$ for $b=1,\ldots, B$, and $\psi(\{\hat \mu_b(x): i\notin \mathcal{I}^b\})$ is an ensemble of all prediction functions $\hat \mu_b(x)$ with $(x_i, y_i)$ excluded from $\mathcal{I}^b$. \noindent\textbf{Generalized label shift model and test-data-adaptive classification.} Regarding distributional changes, both the covariate shift model and the label shift model are commonly studied \citep{scholkopf2012causal}. The former assumes $p(y|x)$ to be fixed with $p(x)$ potentially changing \citep{shimodaira2000improving,bickel2009discriminative, gretton2009covariate, csurka2017domain}; the latter treats $p(x|y)$ as fixed, but the prevalence of different labels can vary \citep{storkey2009training, lipton2018detecting}. In both frameworks, the problem of dealing with outliers or novel components is non-trivial.
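To fix ideas, the following minimal sketch implements the split conformal set displayed above for classification (our own illustrative code; \texttt{score} stands for any conformity score fitted on $\mathcal{I}_{tr}^1$, with larger values indicating better conformity):
\begin{verbatim}
import numpy as np

def split_conformal_set(score, cal_pairs, x, labels, alpha=0.05):
    # cal_pairs: the held-out calibration pairs (x_i, y_i) in I_tr^2.
    cal_scores = np.array([score(xi, yi) for xi, yi in cal_pairs])
    n2 = len(cal_scores)
    C = []
    for y in labels:
        # conformal p-value of the candidate pair (x, y)
        p = (1 + np.sum(score(x, y) >= cal_scores)) / (n2 + 1)
        if p >= alpha:
            C.append(y)
    return C
\end{verbatim}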
The generalized label shift model in (\ref{eq:glabelshift}) extends the label shift model to include unseen classes, enabling us to deal with outliers conveniently. Suppose that the training data is a mixture of $K$ different classes. For class $k$, its mixture proportion is $\pi_k$ and its feature density is $f_k(x)$, with the $\pi_k$ satisfying $\sum_{k=1}^K \pi_k = 1$. The generalized label shift model assumes a target distribution allowing both label shift among the training classes and the appearance of outlier component(s), and requires only that $f_k(x)$ remain the same for each observed class: \begin{equation} \label{eq:glabelshift} \mu(x) = \sum_{k=1}^K \tilde\pi_k f_k(x)+\varepsilon \cdot e(x), \end{equation} where $\varepsilon+\sum_{k=1}^K \tilde \pi_k = 1$. Here $\tilde\pi_k$ represents the proportion of samples from class $k$ in the target distribution, $\varepsilon$ represents the proportion of outlier samples not from the observed classes, and $e(x)$ represents the density of the outlier component. Under the generalized label shift model, our goal is to construct a set-valued prediction $C(x)$ that includes the true label of the observed classes with high probability while avoiding unnecessary false labels in $C(x)$. In other words, given a user-specified true-label inclusion probability $(1-\alpha)$ for each observed class, we aim to minimize the average set length over the target distribution $\mu(x)$. The BCOPS method aims to optimize the model performance on the test set, setting $\mu(x) = f_{te}(x)$, the marginal density of the test data, and constructs a calibrated set-valued prediction combining the empirically estimated score functions $v_k(x)$ and sample-splitting conformal prediction \citep{vovk2005algorithmic}. Although BCOPS outperformed non-test-cohort-adaptive approaches for abnormality detection, it relies on the availability of a large set of test data, and the sample-splitting scheme results in low data utilization efficiency \cite{guan2019prediction}. \begin{figure*} \centering \includegraphics[width=1\textwidth]{StructureCSForest} \caption{Overview of CSForest Structure.} \label{fig:StructureCSForest} \end{figure*} The main contribution of CSForest is to adapt the idea of Jackknife+aB to the generalized label shift model, where we construct a novel calibrated semi-supervised random forest ensemble for joint multi-class classification and abnormality detection. \noindent\textbf{Other related alternative strategies:} Existing popular approaches for building the prediction set $\hat C(x)$ in classification problems are often based on $\hat p_k(x)\coloneqq \hat p(k|x)$, the estimated probability of being class $k$ at feature value $x$, using only the training samples. For instance, we could take the point prediction $y = \arg\max_{k}\hat p_k(x)$. To account for the prediction uncertainty, \cite{vovk2005algorithmic} suggested the set-valued prediction $\hat C(x) =\{k: \hat p_k(x)\geq \tau_k\}$ with $\tau_k$ selected in the combined prediction framework to guarantee a desired coverage of the true label. Later, \cite{romano2020classification} considered $\hat C(x) =\{k_{(l')}: l'\leq \min\{l: \sum_{l'\leq l}\hat p_{k_{(l')}}(x)\geq \tau\}\}$, which includes the class labels with the largest probabilities such that the total probability of $y$ falling into this set is sufficiently large. The previous two constructions do not work well under the generalized label shift model, and for abnormality detection in particular.
Although we can combine them with the proposal from \citep{tibshirani2019conformal}, which extends conformal prediction to correctly calibrate the prediction uncertainty of the test samples under the covariate shift model, the modified procedures are sub-optimal for classification under the generalized label shift model, as we will demonstrate in our numerical experiments. In \citep{lei2014classification,hechtlinger2018cautious}, the authors suggested the use of the density set $\hat C(x) = \{k: \hat p(x|k)\geq \tau\}$, which outputs an empty set if $\hat p(x|k)$ is small for all classes $k$. However, $\hat p(x|k)$ is often a poor discriminator among the observed classes and insensitive to outliers in high dimensions; consequently, the resulting $\hat C(x)$ tends to have a high type II error. \section{Conformalized semi-supervised random forest} \label{sec:methods} We are interested in making set-valued predictions that include true labels frequently but avoid false labels as much as possible for the target distribution $\mu(x)$. More specifically, we consider the following constrained optimization problem: \begin{align} &\min \int_{x} |C(x)|\mu(x) d x, \label{eq:GLS}\\ &s.t.\; \mathbb{P}[k\in C(X)|Y=k] \geq 1-\alpha,\notag\\ &\mbox{for all }k=1,\ldots, K.\notag \end{align} CSForest optimizes the ``average'' training and test performance, taking $\mu(x) = f_{te}(x)+ wf_{tr}(x)$ for a weight $w \geq 0$. If $w = 0$, then $\mu(x) = f_{te}(x)$ and the objective of CSForest coincides with the objective of BCOPS, which optimizes the test cohort classification accuracy. On the other hand, when $w$ is large, it has an objective similar to that of the CRF model and optimizes more the classification performance on the training set. Proposition \ref{prop:oracle} describes the oracle construction given $\mu(x)$. \begin{proposition}[Modified from Proposition 1 in \cite{guan2019prediction}] \label{prop:oracle} Set $s_k(x;\mu) = f_k(x)\slash\mu(x)$. Under the generalized label shift model, the solution to (\ref{eq:GLS}) is $C(x) =\{k: \mathbb{E}_X[\mathbbm{1}\{s_k(x; \mu)\geq s_k(X;\mu)\}|Y=k]\geq \alpha,k=1,\ldots, K\}$. \end{proposition} CSForest constructs an empirical version of $C(x)$ via a semi-supervised random forest using the labeled training samples and the unlabeled test cohort, coupled with the Jackknife+aB strategy for correct calibration so that the coverage guarantee holds \citep{kim2020predictive}. It consists of two major components: (1) growing a semi-supervised random forest tree separating different training classes and the test samples, and (2) comparing how likely a test sample is to belong to class $k$ with a sample from training class $k$, using the ensemble prediction on trees excluding these two samples. FIG\ref{fig:StructureCSForest} shows the model structure of CSForest. The second component adapts the Jackknife+aB strategy: we need to exclude the paired training-test observations when performing the comparison, rather than excluding a single training sample as in Jackknife+aB. Let $\mathcal{I}_{tr}$ and $\mathcal{I}_{te}$ denote the training and the test sets. Let $\mathcal{I}_{tr,k}$ denote the samples from training class $k$, with size $n_k$.
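In population terms, the oracle set of Proposition \ref{prop:oracle} admits a direct empirical transcription, which CSForest then robustifies via the ensemble and calibration steps described next. The sketch below is our own illustration, with hypothetical density-function handles \texttt{f[k]} and \texttt{mu} standing in for $f_k$ and $\mu$:
\begin{verbatim}
import numpy as np

def oracle_set(x, f, mu, class_samples, alpha=0.05):
    # Include class k iff the fraction of class-k samples X with
    # s_k(x) >= s_k(X) is at least alpha, where s_k = f_k / mu.
    C = []
    for k, Xk in class_samples.items():
        s_x = f[k](x) / mu(x)
        s_ref = np.array([f[k](xi) / mu(xi) for xi in Xk])
        if np.mean(s_x >= s_ref) >= alpha:
            C.append(k)
    return C
\end{verbatim}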
Algorithm \ref{alg:CSForestI} gives the details of CSForest: (1) lines 2-5 construct $B$ semi-supervised random forest tree classifiers; (2) line 6 constructs the ensemble classifier $\hat f^{ii'}(x)$, excluding the paired test and training samples $(i,i')$, which measures how likely $x_i$ and $x_{i'}$ are to come from class $k$, with $k$ being the class label of the training sample $x_{i'}$. Finally, (3) lines 8-13 calibrate the prediction using the ensemble predictions $\hat f^{ii'}(x)$ for test sample $x_i$ and all training samples $i'$, and output the calibrated CSForest score $\hat s_{ik}$ for each class $k$. Steps (2) and (3) adapt the Jackknife+aB strategy to the semi-supervised random forest classifier, and the additional sampling step at line 3 is needed for the same technical reason as in Jackknife+aB. Our default choice for $w$ is $w = 1$, which is used in all numerical experiments in the main paper, including Example \ref{example:intro}, where the inliers inform little about the outliers, and vice versa. In Appendix \ref{app:example_intro}, we have experimented with smaller $w\in [0,1)$ and varying sample sizes to examine whether less emphasis on training samples from other classes improves the ability for outlier detection. CSForest is consistently better than BCOPS for $w\in[0,1]$, with $w = 1$ a top performer overall across various settings and only slightly worse than $w = 0$ for outlier detection when the percentage of outliers in the pooled training and test cohort is small. \begin{algorithm}[H] \caption{Conformalized semi-supervised random forest} \label{alg:CSForestI} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \setcounter{AlgoLine}{0} \Input{Training Data $\{z_i\coloneqq(x_i, y_i),i\in \mathcal{I}_{tr}\}$, Test Data $\{(x_i),i\in \mathcal{I}_{te}\}$, $\gamma$.} \Output{Prediction sets $\hat C_i(x_i)$ for $i\in \mathcal{I}_{te}$.} \For{$k=1,\ldots,K$}{ Sample $B$ from ${\rm{Binomial}}(\tilde B, (1-\frac{1}{n_k+1})^{n_k})$,\\ \For{$b=1,\ldots, B$}{ Let $\mathcal{I}_{tr,k}^{b}$ and $\mathcal{I}_{te}^{b}$ be Bootstrap samples of the original data $\mathcal{I}_{tr,k}$ and $\mathcal{I}_{te}$, respectively. Let $\tilde{\mathcal{I}}_{other}$ be a Bootstrap sample of size $\min(\lceil m\gamma \rceil,n-n_k)$ from the training samples other than class $k$.\\ Grow a single-tree random forest classifier $\hat f^b(x)$ separating the different labeled classes and the test samples using $\mathcal{I}_{tr,k}^{b}\cup\mathcal{I}_{te}^{b}\cup \tilde{\mathcal{I}}_{other}$.} For each sample pair $i\in \mathcal{I}_{te}, i'\in \mathcal{I}_{tr,k}$, set $\mathcal{B}_{ii'}=\{b: i\notin \mathcal{I}_{te}^{b}, i'\notin \mathcal{I}_{tr}^{b,k}\}$ and $\hat f^{ii'}(x)=\left(\sum_{b\in \mathcal{B}_{ii'}}\hat f^b_k(x)\right)\slash |\mathcal{B}_{ii'}|$.} \For{$i\in \mathcal{I}_{te}$}{ \For{$k=1,\ldots, K$}{ Construct the calibrated score for sample $i$: $\hat s_{ik} = \frac{\left(1+\sum_{i'\in\mathcal{I}_{tr,k}}\mathbbm{1}\{\hat f^{ii'}(x_i)\geq \hat f^{ii'}(x_{i'})\} \right)}{\left(n_k+1\right)}$ } Construct the prediction set for sample $i$: $\hat C_i(x) = \left\{k: \hat s_{ik}\geq \alpha\right\}$ } \end{algorithm} In the regression problem where the prediction model is trained on the training cohort, Jackknife+aB provides a worst-case coverage guarantee on the true response at level $(1-2\alpha)$. Here, we show that CSForest achieves the same level of worst-case coverage guarantee, with achieved empirical coverage close to $(1-\alpha)$ in our experiments.
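Before stating the formal guarantee, the calibration step (lines 8-13 of Algorithm \ref{alg:CSForestI}) can be sketched in a few lines for a single class $k$, assuming the leave-pair-out ensemble predictions $\hat f^{ii'}$ have already been collected into arrays. The array conventions below are our own illustrative assumptions:
\begin{verbatim}
import numpy as np

def csforest_calibrate(f_test, f_train, alpha=0.05):
    # f_test[i, j]:  ensemble score f^{ii'}(x_i) of test sample i;
    # f_train[i, j]: ensemble score f^{ii'}(x_{i'}) of the j-th
    #                training sample of class k. Both use only trees
    #                whose bootstrap draws exclude the pair (i, i').
    n_te, n_k = f_test.shape
    s = (1 + np.sum(f_test >= f_train, axis=1)) / (n_k + 1)
    include = s >= alpha   # class k enters C(x_i) iff s_ik >= alpha
    return s, include
\end{verbatim}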
\begin{theorem} \label{thm:coverage} Suppose that the generalized label shift model holds, where the features from class $k$ are i.i.d.\ generated from $\mathcal{P}_k$. For any fixed integer $\tilde B \geq 1$, the constructed $\hat C_i(x)$ from CSForest satisfies: \begin{align} \label{eq:coverage} &\mathbb{P}\left[k\in \hat C_i(X)|Y=k\right]\geq 1-2\alpha, \\ &\mbox{for all } i\in \mathcal{I}_{te}\; \mbox{ and }\; k = 1,\ldots, K. \nonumber \end{align} \end{theorem} The proof of Theorem \ref{thm:coverage} follows from that of Theorem 1 in \cite{kim2020predictive}, with additional arguments needed for including test samples in training (see Appendix \ref{Proof of Theorem 1}). \section{Numerical Experiments} \label{sec:experiments} \begin{figure*} \centering \includegraphics[height = .45\textwidth, width =.8\textwidth]{CoverageRateMNIST} \caption{Per-class quality evaluation of all methods with outlier components but no additional label shift among inlier digits. The horizontal dashed line marks the coverage level of 95\%, and the outliers are $R = \{6,7,8,9\}$. FIG\ref{fig:CoverageRateBarMNISTI} is grouped by the actual labels in the testing data and colored based on whether a prediction set contains only the correct label (blue) or more than the correct label (gray).} \label{fig:CoverageRateBarMNISTI} \end{figure*} In this section, we perform numerical experiments on the MNIST handwritten digit data set, which contains digit labels $Y\in \{0,\ldots, 9\}$ with feature dimension 784. We compare CSForest to state-of-the-art alternatives, including BCOPS, CRF, DC, as well as two other approaches achieving adaptive coverage for different classes over the feature space: \begin{itemize} \item ACRF: We refer to the adaptive-coverage classification approach using random forest proposed in \cite{romano2020classification} as ACRFrandom (Adaptive-coverage CRF with randomization), where the randomization is introduced via an additional uniform random variable $U$ for tie-breaking. Here, we consider a non-randomized version of it, referred to as ACRF, where we do not use $U$. More specifically, ACRF constructs the prediction set $\hat C(x)$ by including labels with large estimated probabilities such that the total probability is greater than $\hat \tau_{\alpha}$. Here, $\hat\tau_{\alpha}$ is the upper-level quantile of the empirical distribution of $\{E_i\}_{i\in \mathcal{I}_{cal}}\cup \{\infty\}$, where $\mathcal{I}_{cal}$ is the calibration set in sample-splitting conformal prediction and $E_i$ is the sum of the estimated probabilities for all labels preceding that of the true label. In this paper, we considered ACRF as one of the baselines instead of ACRFrandom because ACRFrandom can sometimes produce unnecessarily wide prediction sets, due to trying to achieve conditional coverage, as pointed out in the original paper \cite{romano2020classification}, and lead to high type II errors when our goal is only marginal coverage. We observed this phenomenon in our numerical experiments, and it is also confirmed by \cite{romano2020classification} for MNIST data using a random forest classifier. More details about ACRF and its comparisons to ACRFrandom are in Appendix \ref{app:ACRF}. \item ACRFshift: ACRF suffers from both label shifts of inlier classes and the emergence of outliers. We can combine it with the covariate shift conformal prediction \citep{tibshirani2019conformal} to make it more robust, referred to as ACRFshift.
Let $r(x) =\frac{ \mathbb{P}\{{\rm{test}}|x\} }{\mathbb{P}\{{\rm{train}}|x\}}$ be the odds function for being from the test cohort, and let $\gamma_{x_0}(x) = \frac{r(x)}{r(x_0)+\sum_{z_i\in \mathcal{I}_{cal}} r(x_i)}$ be the weight function, where $x_0 \in \mathcal{I}_{te}$ and $x \in \mathcal{I}_{cal}$. ACRFshift constructs the prediction set $\hat C(x)$ by including labels with large estimated probabilities such that the total probability is greater than $\hat \tau_{\alpha}$, where $\hat\tau_{\alpha}$ is the upper-level quantile of the weighted empirical distribution on $\{E_i\}_{i\in \mathcal{I}_{cal}}\cup \{\infty\}$ with weight $\gamma_{x_0}(x_i)$ for $E_i$ and weight $\gamma_{x_0}(x_0)$ for $\infty$. Details of ACRFshift are given in Appendix \ref{app:ACRF}. \end{itemize} \subsection{Comparisons with no additional label shift} In this experiment, we randomly select 500 samples for each of digits 0-5 as the training data and 500 samples for each of digits 0-9 as the test data. We then apply BCOPS, DC, CRF, ACRF, ACRFshift, and CSForest. The set-valued prediction $\hat{C}(x)$ of each method is constructed with $\alpha=0.05$ and $B=3000$ over 10 repetitions. Table \ref{Append:ErrTable} shows the average type I and type II errors of the different methods on the test data at $\alpha=0.05$. When averaging over digits 0-5, all methods achieved the expected coverage, and CSForest has the lowest overall type II error. \begin{table} \caption{Achieved Type I and Type II errors at $\alpha = 0.05$ with outlier components and no additional label shift among inlier digits. All methods achieved the targeted coverage for inlier digits in this experiment, with CSForest having the lowest type II error because of its high specificity across different classes, as shown in Figure \ref{fig:CoverageRateBarMNISTI}.} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{ccc} \toprule \textbf{Method}&Type I Error&Type II Error \\ \midrule CSForest& 0.051$\pm$0.005 & 0.088$\pm$0.008 \\ BCOPS&0.043$\pm$0.004 & 0.238$\pm$0.019 \\ DC & 0.047$\pm$0.008 & 0.900$\pm$0.021 \\ CRF & 0.057$\pm$0.007 & 0.308$\pm$0.035 \\ ACRF& 0.047$\pm$0.006 & 0.431$\pm$0.003 \\ ACRFshift & 0.036$\pm$0.009 & 0.439$\pm$0.009 \\ \bottomrule \end{tabular} \label{Append:ErrTable} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} FIG\ref{fig:CoverageRateBarMNISTI} shows the classification results of the different methods via barplots. ACRF achieves the highest accuracy for inlier digits, followed by CRF and CSForest. However, only CSForest and BCOPS have a good ability to detect the outlier digits 6-9, with CSForest achieving a power of more than 89\%, followed by BCOPS with a power of slightly less than 75\%. Overall, CSForest provides set-valued predictions with quality comparable to CRF for inlier digits but significantly better than all alternatives for abnormality detection among outlier digits. The improved inlier performance of ACRF, compared to CRF or CSForest, is due to pursuing a marginal coverage guarantee as opposed to a per-class coverage guarantee, rendering it susceptible to the traditional label shift setting, as we shall see in section \ref{sec:labelshift}. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{MNISTVarySize.png} \caption{FIG\ref{fig:MNISTvaringsize} panels A1-C1 show the achieved type II error for inliers (digits 0-5) at $\alpha = 0.05$ as we vary both the per-class training and test sample sizes from 50 to 200, or vary only the per-class training sample size, or vary only the per-class test sample size.
Similarly, panels A2-C2 show the achieved type II errors for outliers (digits 6-9) as we vary the sample sizes. It is worth noting that the error bars representing the standard deviation in FIG\ref{fig:MNISTvaringsize} are smaller than the true standard deviation since the samples are not completely independent.} \label{fig:MNISTvaringsize} \end{figure*} \subsection{Comparisons with varying sample sizes} In this section, we compare the different methods under various sample size settings. We vary both the per-class training and test sample sizes from 50 to 200 and compare the type I and type II errors averaged over 10 iterations at $\alpha = 0.05$. FIG\ref{fig:MNISTvaringsize} reports the achieved type II error for inliers (digits 0-5) and outliers (digits 6-9) for all models. From FIG\ref{fig:MNISTvaringsize}, while the inlier type II error of all methods except DC decreases as the sample size increases, CSForest and BCOPS are much better than the others at detecting the unobserved outlier digits (digits 6-9). Furthermore, CSForest achieves the best power for outlier detection while obtaining close to the lowest type II error for inliers as we vary the sample sizes. It is worth noting that the type II error of ACRFshift on outliers increases as the sample size increases. This is caused by the weights $\gamma_{x_0}(x) = \frac{r(x)}{r(x_0)+\sum_{z_i\in \mathcal{I}_{cal}} r(x_i)}$, which are sensitive to the sample size of the calibration set $\mathcal{I}_{cal}$; this size increases with the training sample size in the sample-splitting scheme. \subsection{Comparisons with additional label shift} \label{sec:labelshift} In the previous simulations, although we had outliers, the class ratios among inlier classes were balanced and remained the same for the training and test cohorts. One nice property of approaches that aim for per-class coverage, over approaches that aim for marginal coverage, is the robustness of their coverage guarantee against label shift among inlier classes. To see this, we now sample training and test data under the traditional label shift model with no outlier digits. We randomly select 500 samples for digits 0-2 and 100 samples for digits 3-5 as the training data. For the testing data, we randomly select 100 samples for digits 0-2 and 500 samples for digits 3-5. We repeat each method 10 times with $\alpha=0.05$ and $B=3000$ and calculate the averages. \begin{table} \caption{Achieved Type I and Type II errors at $\alpha = 0.05$ with label shift among inlier digits but no outlier digits. ACRF fails to meet the desired coverage in this setting.} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{ccc} \toprule \textbf{Method}&Type I Error&Type II Error \\ \midrule CSForest& 0.042$\pm$0.008 & 0.295$\pm$0.036 \\ BCOPS& 0.041$\pm$0.010 & 0.536$\pm$0.056 \\ DC & 0.040$\pm$0.016 & 0.962$\pm$0.022 \\ CRF & 0.036$\pm$0.018 & 0.523$\pm$0.082 \\ ACRF& 0.171$\pm$0.024 & 0.313$\pm$0.067 \\ ACRFshift& 0.080$\pm$0.026 & 0.630$\pm$0.127 \\ \bottomrule \end{tabular} \label{tab:ErrTableII} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Table \ref{tab:ErrTableII} shows the achieved type I and type II errors of the different methods on this standard label shift simulation. We observed that while CRF, BCOPS and CSForest all achieved the desired coverage, ACRF failed due to such distributional changes between the training and test cohorts.
Although ACRFshift alleviates the under-coverage of ACRF, it leads to a much higher type II error compared to CRF and CSForest, highlighting the importance of adopting the right distributional change mechanism in practice. In Appendix \ref{app:ACRF}4, we show the coverage barplot for the different inlier classes, confirming that the under-coverage of ACRF is due to the over-representation of the minority training digits 3-5 in the test cohort, which are severely under-covered using ACRF. \section{Discussion} This paper proposes CSForest as a powerful ensemble classifier to perform robust classification and outlier detection under distributional changes. It aims to construct a set-valued prediction at each test sample that covers the true label frequently while avoiding false labels to its best effort. Compared to its predecessor BCOPS, CSForest outputs prediction sets of higher quality and is more stable. Although it comes with a worst-case coverage guarantee at level $(1-2\alpha)$, the achieved empirical coverage is often close to $(1-\alpha)$ in practice. How much guidance do we need from the test samples to help us with high-quality outlier detection? While this is data-dependent, when the hidden signal is relatively strong, as in the MNIST example, a little guidance makes a big difference. We further consider the setting where we have 200 samples from each class in the training set but only 5 samples from each class in the test set, and examine whether the test cohort can guide us toward better outlier detection in this extreme case. FIG\ref{fig:MNISTerror} shows the achieved type II errors at $\alpha = 0.05$ using the different methods, separately for the inlier and outlier classes. CSForest achieved a type II error of around 42\% on average and 60\% for outliers, with merely five unlabeled samples from each outlier digit class in the test cohort. At the same time, DC has an average type II error of around 95\% for this moderately high dimensional example using hundreds of training samples. This suggests that information from the outlier component can be crucial for guiding our decision in detecting outliers in high dimensions. We can be much more effective in discriminating outliers from normal classes if we allow an adaptive decision rule to look for outlier-related directions actively. Encouraged by the performance of CSForest on a test set with a small sample size, we believe it would be worth exploring the extension of this framework to the case of a single test sample, and we defer this to future research. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{MNISTerror.png} \caption{Achieved type II errors for inliers (digits 0-5) and outliers (digits 6-9) across 100 repetitions at $\alpha = 0.05$ with merely 5 samples per class in the test cohort.} \label{fig:MNISTerror} \vskip -.3in \end{figure} \newpage \bibliographystyle{icml2023}
\section{Introduction} Polarization of an elastic or an electromagnetic wave refers to the vector-valued amplitude in the high-frequency regime. Extending methods of geometrical optics, microlocal analysis applies to prove well-posedness of propagation of polarizations along rays broken by reflection and transmission (refraction) at boundaries and at interior interfaces. Such results are fundamental to geometric methods for solving the inverse problem of recovering interfaces and elasticities. Only recently, in \cite{StefUhlVasy21transm}, has microlocal well-posedness of the transmission problem of isotropic elastodynamics been proven. The purpose of this paper is to prove, also for anisotropic elastic media, propagation of polarizations along broken bicharacteristics. Making assumptions about sources, we exclude glancing rays from consideration. The elasticity operator $L$ only depends on the metric tensor and the stiffness $4$-tensor of the elastic material. Away from sources, elastic waves satisfy the homogeneous wave equation $Lu=\rho D_t^2 u$, where $D_t=i^{-1}\partial/\partial t$, $i$ the imaginary unit, and $\rho>0$ is the material density. Our basic assumption is that the operator of elastodynamics, $P=L-\rho D_t^2$, is of real principal type. This covers isotropic and generic elastic media. For systems of real principal type, $Pu=0$, the propagation of polarization wavefront sets in the interior was studied in \cite{Dencker82polarrpt}, and at boundaries in \cite{Gerard85polar}. Suppose $u$ is a Lagrangian distribution section in a bundle $E$, $\Lambda$ the associated Lagrangian manifold. Then one expects finer results. In particular, the polarization, that is the principal symbol $a$ of $u$, should satisfy a transport equation. In the scalar case, \cite{DuisHorm72FIOtwo} derived the transport equation $i^{-1}\Lie_H a + p_s a = 0$, where $\Lie_H$ is the Lie derivative in direction of the Hamilton field $H$ and $p_s$ the subprincipal symbol of $P$. For general systems with non-scalar principal symbol, no definition of subprincipal symbol and no transport equation seems to be known. Suppose the manifold and the bundle $E$ are endowed with connections $\nabla$ and $\nabla^E$. In Theorem~\ref{theorem-amp-transport-eqn} we show that the polarization $a$ satisfies a transport equation \begin{equation} \label{eq-transport-polariz} i^{-1}D_H a + p_s a = 0 \quad\text{on $\Lambda$.} \end{equation} The first order differential operator $D_H$ is assembled via Leibniz' formula from $\nabla^E$ pulled back to $\Lambda$ and the Lie derivative $\Lie_H$ on half-densities. The geometric pseudo-differential calculus of \cite{Sharaf05geosymb} is basic to the definition of the subprincipal symbol $p_s$ and to the proof of \eqref{eq-transport-polariz}. For outgoing solutions of $Pu=f$ with Lagrangian sources $f$, we apply the Lagrangian intersection calculus of \cite{MelroseUhlmann79intersection} to derive initial conditions for the polarization. An elastic body is modelled over a three-dimensional Riemannian manifold $M$ with boundary $\partial M$. The free surface problem asks for elastic waves $u$ having zero boundary traction, $Tu=0$ at $\partial M\times \mathbb{R}_t$. Layers having different elastic properties are separated by hypersurfaces $N$ in the interior of $M$. In classical formulation, the transmission problem requires displacements $u$ and tractions to be continuous across $N\times \mathbb{R}_t$. 
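In symbols, writing $u^\pm$ for the one-sided limits of the displacement and $T^\pm$ for the traction operators computed from the elasticities on the two sides of $N$ (notation we introduce here for convenience; compare the superscripts $\pm$ used below), the transmission conditions read
\[
u^+ = u^-, \qquad T^+u^+ = T^-u^- \quad\text{on } N\times\mathbb{R}_t.
\]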
In this paper we study the transmission problem of elastodynamics, looking for outgoing waves generated by Lagrangian sources located in the interior, on the boundary, or on the interfaces. It is standard to reduce the analysis of reflection of wavefront sets at a boundary to the construction of parametrices for Dirichlet data. Finding such parametrices is possible after a suitable spectral decomposition on the principal symbol level, \cite{Taylor75reflection}. We establish, microlocally near a non-glancing boundary region, factorizations \[ L -\rho D_t^2 = (D_r -Q^\sharp)A_0 (D_r - Q). \] Here $r\geq 0$ is the distance to the boundary, and the first order pseudodifferential operators $Q$ and $Q^\sharp$ are tangential, that is, they commute with multiplication by $r$. The spectrum of the principal symbol $q$ of $Q$ is contained in the closure of the complex upper half-plane. Following \cite{Taylor75reflection}, the equation $(D_r-Q)u=0$ is solved for arbitrary initial data at $r=0$. On the principal symbol level, the factorization of the elastodynamics operator corresponds to, and the operator factorization is derived from, spectral factorizations of self-adjoint quadratic polynomials in $s\in\mathbb{C}$: \begin{equation} \label{eq-specfact-of-eladyn-symbol} \ell(\eta+s\nu)-\rho\tau^2\operatorname{Id} =(s-q^\sharp(\eta,\tau))a_0(s-q(\eta,\tau)), \end{equation} $\eta,\nu=\mathop{\operatorname{d}} r(y)\in T_y^*\partial M$. The polynomial takes its values in $\operatorname{End}(E)$. Here the Hilbert space $E$ is the complexification of the tangent space of $M$ at $y\in\partial M$. The spectra of $q$ and $q^\sharp$ are disjoint. There holds $E=E_c\oplus E_r$, where $E_c$ and $E_r$ are the sums of generalized eigenspaces of $q$ associated to eigenvalues $s$ satisfying $\operatorname{Im} s>0$ and $s\in\mathbb{R}$, respectively. In case $E_r=0$, the right root $q$ is unique. On the other hand, if $E_r\neq 0$, then, because of a sign characteristic of real eigenvalues, there are two distinct right roots $q$, $q_{\mathrm{out}}$ and $q_{\mathrm{in}}$. It is a consequence of the sign characteristic combined with the real principal type property that, along the bicharacteristics of $D_r-Q_{\mathrm{out}}$ (resp.\ of $D_r-Q_{\mathrm{in}}$) which issue from the boundary into the interior, time $t$ increases (resp.\ decreases). Using Dirichlet parametrices, \[ (D_r- Q_{{\mathrm{out}}/{\mathrm{in}}}) U_{{\mathrm{out}}/{\mathrm{in}}}\equiv 0, \quad U_{{\mathrm{out}}/{\mathrm{in}}}|_{r=0}\equiv\operatorname{Id}, \] we introduce the DN (Dirichlet to Neumann) operators $Z_{{\mathrm{out}}/{\mathrm{in}}}= TU_{{\mathrm{out}}/{\mathrm{in}}}$. If $Z_{\mathrm{out}}$ is elliptic, reflection is given by $Z_{{\mathrm{out}}}^{-1}Z_{\mathrm{in}}$ in the sense that this operator maps, microlocally, incoming to outgoing Dirichlet data. Similarly, reflection and transmission at interior interfaces is readily resolved provided $Z^+_{\mathrm{out}}+Z^-_{\mathrm{out}}$ is elliptic; the superscripts refer to sides of the interface. The principal symbol $z=z_{{\mathrm{out}}}$ of $Z_{\mathrm{out}}$ satisfies the useful inequality \begin{equation} \label{eq-energy-flow-ineq} -\tau \operatorname{Im}(zv|v) >0 \quad\text{if $v\not\in E_c$, and $\geq 0$ for every $v\in E$.} \end{equation} The inequality reflects, on the principal symbol level, outgoing energy flow. The proof is based on the spectral factorization \eqref{eq-specfact-of-eladyn-symbol}. 
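As a simple illustration of \eqref{eq-specfact-of-eladyn-symbol}, included for orientation only and not needed in the sequel, consider the scalar acoustic analogue $\ell(\xi)=c^2|\xi|^2$ with $\rho=1$, $|\nu|=1$ and $\eta\perp\nu$. Then
\[
\ell(\eta+s\nu)-\tau^2 = c^2(s^2+|\eta|^2)-\tau^2 = (s-q^\sharp)\,c^2\,(s-q), \qquad q^\sharp=-q.
\]
In the hyperbolic region $\tau^2>c^2|\eta|^2$ the roots $\pm\sqrt{\tau^2/c^2-|\eta|^2}$ are real, so $E_r=E$ and the choice of sign distinguishes the outgoing from the incoming right root; in the elliptic region $\tau^2<c^2|\eta|^2$ the unique right root $q=i\sqrt{|\eta|^2-\tau^2/c^2}$ has its spectrum in the open upper half-plane, so $E_c=E$.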
The inequality \eqref{eq-energy-flow-ineq} implies $\operatorname{ker} z\subset E_c$ and, at the interface, $\operatorname{ker}(z^++z^-)\subset E_c^+\cap E_c^-$. It follows that $Z_{\mathrm{out}}$ and $Z^+_{\mathrm{out}}+Z^-_{\mathrm{out}}$ are elliptic operators in the hyperbolic region, see Proposition~\ref{prop-z-ell-at-Hyp}. We emphasize that, at the interface, the hyperbolic region $\ensuremath{\mathcal H}$ defined in this paper contains any region which is hyperbolic with respect to at least one side of the interface. For isotropic media, we show by explicit symbol computations that the operators are elliptic also in mixed regions, Proposition~\ref{prop-z-ell-at-Mix}. We include an analysis of the propagation of polarizations of free (Rayleigh) surface waves which may occur over the elliptic boundary region. Rayleigh wave propagation was recognized as a propagation of singularities phenomenon by \cite{Taylor79rayleighwaves}. The theory of free surface waves in anisotropic elastic media, \cite{LotheBarnett85surfwaveimped}, implies that restrictions to the boundary of Rayleigh waves satisfy a real principal type system, \cite{Nakamura91rayleighpulses}, thereby making these waves accessible to refined microlocal analysis. In the final Section~\ref{section-propag-polar} we state and prove results on propagation of polarization originating from Lagrangian sources. We refrain from stating the results in terms of FIOs. Pseudo-differential calculus with differential geometric structure is important in the present work because it offers an invariant leading symbol which includes the subprincipal level; in fact, it even has an invariant full symbol. Formulas for the leading symbols of compositions and adjoints contain vertical and horizontal derivatives of principal symbols. See Section~\ref{section-horizontal-deriv} for details on these derivatives. Section~\ref{section-geom-symbol-calc} is a self-contained presentation of the geometric pseudo-differential calculus of \cite{Sharaf05geosymb} to the extent needed in our paper. There are two appendices. Appendix~\ref{sect-elastic-symmetries} recalls elastic symmetries. Furthermore, it is used to show that, generically, the elastodynamic operator of transversely isotropic perturbations of isotropic media is of real principal type. In Appendix~\ref{sect-spectral-factorization} we redevelop, simplifying standard proofs, the theory of spectral factorization of self-adjoint quadratic matrix polynomials, and we deduce some corollaries. \section{Linear elasticity} \label{sect-linear-elast} We present the equations of linear elasticity in differential geometric language, and we apply the variational solution method and regularity theory to the transmission problem of elastodynamics. Let $M$ be a smooth ($\ensuremath{C^{\infty}}$) manifold endowed with a smooth Riemannian metric tensor $g$ and the associated Levi-Civita connection $\nabla$. Denote by $T^{(p,q)}M$ the bundle of $p$-contravariant and $q$-covariant tensors. So, $TM=T^{(1,0)}M$ is the tangent and $T^*M=T^{(0,1)}M$ the cotangent bundle. Furthermore, $\operatorname{End}(TM)=T^{(1,1)}M$. The Lie derivative of $g$ along a given vector field $u\in\ensuremath{C^{\infty}}(M;TM)$ is a symmetric tensor field $\Lie_u g\in \ensuremath{C^{\infty}}(M;T^{(0,2)}M)$. Identifying the tensor bundles $T^{(0,2)}M$ and $T^{(1,1)}M$ using $g$, $\Lie_u g$ agrees with the symmetrization of $\nabla u\in\ensuremath{C^{\infty}}(M;\operatorname{End}(TM))$. 
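For example, in local coordinates \[ (\Lie_u g)_{km}=u_{k;m}+u_{m;k}, \] so the vector fields $u$ with $\Lie_u g=0$ are exactly the Killing fields of $(M,g)$; infinitesimal isometries will therefore produce no strain in the sense defined below.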
For an elastic material modeled on $(M,g)$, infinitesimal displacements $u$ are vector fields. The tensor field $\varepsilon(u) =\tfrac12\Lie_u g$ is the strain induced by the displacement $u$. Elastic materials are distinguished by their stiffness (or elasticity) tensors: $4$-tensor fields $C$ that map strains to stresses $\sigma=C\varepsilon$. In coordinate notation $\sigma^{ij}=C^{ijkm}\varepsilon_{km}$. (Here we use the summation convention.) We regard the stiffness tensor as an endomorphism of $T^{(1,1)}M$ as well as a homomorphism from $T^{(0,2)}M$ into its dual $T^{(2,0)}M$. A stiffness tensor field $C$ is assumed to satisfy the following properties: $C$ is a symmetric homomorphism which annihilates antisymmetric tensors, maps into symmetric tensors, and is positive definite when restricted to the subspace of symmetric tensors. The last condition is known in elasticity as the strong convexity condition. When $\dim M=3$, stiffness tensors are classified by their $SO(3)$ symmetry subgroups. Isotropy and transverse isotropy are two of eight distinct symmetry classes; see Appendix~\ref{sect-elastic-symmetries}. We say that $(M,g,C)$ is an elastic body if $M$ is an oriented, compact, connected manifold with smooth boundary, $\partial M\neq\emptyset$. Let $(M,g,C)$ be an elastic body. Suppose $C\in\ensuremath{C^{\infty}}$. Denote by $x\, \bar y$ the inner product induced by $g$ on tangent and tensor spaces and their complexifications; the bar marks the conjugate linear slot. Let $\mathop{\operatorname{d}} V_M$ and $\mathop{\operatorname{d}} V_{\partial M}$ denote the Riemannian volume elements on $M$ and on its boundary. The elasticity operator is the second order differential operator $L$ given by the following identity of Green's type: \begin{equation} \label{eq-Green-id-elasticity} \int_M C\varepsilon(u)\,\overline{\varepsilon(v)}\;\mathop{\operatorname{d}} V_M =\int_M L u \,\bar{v}\;\mathop{\operatorname{d}} V_M + \int_{\partial M} T u \,\bar{v}\;\mathop{\operatorname{d}} V_{\partial M}. \end{equation} The traction operator $T$ is a first order differential operator. Using local coordinates, we denote by ${}_{;j}$ covariant derivatives with respect to the $j$-th coordinate, and we lower indices to turn $u$ into a covector. Then $\varepsilon_{km}=(u_{k;m}+u_{m;k})/2$. By \eqref{eq-Green-id-elasticity} and the symmetries of $C$, and using the divergence theorem, we have \begin{equation*} (Lu)^i = -(C^{ijkm} u_{k;m})_{;j}, \quad (Tu)^i = C^{ijkm}u_{k;m} \nu_j. \end{equation*} Here $\nu$ is the exterior unit conormal at $\partial M$; $\nu=-\mathop{\operatorname{d}} r|_{\partial M}$ with $0\leq r$ the distance to $\partial M$. The principal symbols of $L$ and $T$ at a covector $\xi\in T^*M$ are given by the matrices $\ell(\xi)=(C^{ijkm}\xi_j\xi_m)$ and $(\sqrt{-1}C^{ijkm}\nu_j\xi_m)$, respectively. Observe that $L$ is uniformly elliptic. By Korn's inequality and the positive definiteness of $C$ the sesquilinear form \[ E(u,v)= \int_M C\varepsilon(u)\,\overline{\varepsilon(v)}\;\mathop{\operatorname{d}} V_M =\int_M C\nabla u\,\overline{\nabla v}\;\mathop{\operatorname{d}} V_M \] is coercive on the Sobolev space $H^1=H^1(M;\mathbb{C}\otimes TM)$. Following the variational method, the weak definition of the elasticity operator $L:H^1\to{H^1}^*$ is given by \begin{equation} \label{eq-weak-elastic-traction-problem} E(u,v)= (Lu|v), \quad u,v\in H^1. \end{equation} The duality bracket is induced via the $L^2$ inner product: $(w|v)=\int_M w\,\bar v\;\mathop{\operatorname{d}} V_M$. 
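As a consistency check of the coordinate formulas -- a classical special case, stated here for the Euclidean metric so that covariant derivatives commute -- take the isotropic stiffness tensor with Lam\'e parameters $\lambda$, $\mu$, \[ C^{ijkm}=\lambda g^{ij}g^{km}+\mu\big(g^{ik}g^{jm}+g^{im}g^{jk}\big). \] Then $C^{ijkm}u_{k;m}=\lambda g^{ij}u^k_{\phantom{k};k}+\mu(u^{i;j}+u^{j;i})$, and \[ (Lu)^i=-(C^{ijkm}u_{k;m})_{;j}=-\mu\Delta u^i-(\lambda+\mu)\nabla^i\operatorname{div}u, \] which is the Navier operator of classical linear elasticity.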
For $\lambda > 0$ sufficiently large, $L+\lambda$ is an isomorphism from $H^1$ onto its conjugate dual ${H^1}^*$. Since $C$ is real symmetric, $L$ restricts to a self-adjoint operator on $L^2$ with domain \[ D(L)=\{u\mathrel{;} Lu\in L^2\} = \{u\in H^2\mathrel{;} Tu|_{\partial M} =0\}.\] The last equality follows from $H^2$-regularity combined with Green's identity~\eqref{eq-Green-id-elasticity} for $u\in H^2$ and $v\in H^1$. The self-adjoint operator $L$ is the elasticity operator of the free surface problem. Variational regularity theory applies to give higher order Sobolev regularity: If $u\in D(L)$ and $Lu\in H^k$, then $u\in H^{k+2}$. More generally, we consider composite elastic bodies with components separated by smooth hypersurfaces. To be precise, suppose that $C$ is smooth except for possible jump discontinuities across a smooth orientable hypersurface $N$ contained in the interior $M^\circ$ of $M$. The hypersurface $N$ is not connected in general. The components of $M':=M^\circ\setminus N$ represent the interiors of subbodies of $M$, layers for example. The boundaries of subbodies are submanifolds of $\partial M\cup N$ equipped with their orientations as boundaries. In the following, Sobolev spaces are of integer order and consist of sections of the complexified tangent bundle; we omit the bundles from the notation. The definition \eqref{eq-weak-elastic-traction-problem} of the elasticity operator $L$ is still applicable. There holds $H^2$-regularity: If $Lu\in L^2(M)$, then $u\in H^1(M)\cap H^2(M')$. For $u\in H^2(M')$, define the traction jump at $N$ as $[Tu]=(Tu)|_{N_+}+(Tu)|_{N_-}$; the exterior unit normals of the sides $N_\pm$ of $N$ satisfy $\nu_+=-\nu_-$. Applying \eqref{eq-Green-id-elasticity} to each component of $M'$ and summing over the components, we get \[ \int_M C\varepsilon(u)\,\overline{\varepsilon(v)}\;\mathop{\operatorname{d}} V_M =\int_M L u \,\bar{v}\;\mathop{\operatorname{d}} V_M + \int_{\partial M} T u \,\bar{v}\;\mathop{\operatorname{d}} V_{\partial M} + \int_{N} [Tu] \,\bar{v}\;\mathop{\operatorname{d}} V_{N}. \] The restriction of $L$ to the domain \[ D(L_T) = \{u\mathrel{;} Lu\in L^2(M)\} = \{u\in H^1(M)\cap H^2(M')\mathrel{;} Tu|_{\partial M}=0,\; [Tu]=0\} \] is a non-negative self-adjoint operator $L_T$, the elasticity operator corresponding to homogeneous traction and transmission conditions. For $u\in H^1(M')$ the jump $[u]=u|_{N_+}-u|_{N_-}$ of $u$ across $N$ vanishes iff $u\in H^1(M)$; to fix the sign of $[u]$, fix an orientation of $N$. So, $D(L_T)$ consists of all $u\in H^2(M')$ which satisfy the zero traction boundary condition, $Tu|_{\partial M}=0$, and the homogeneous transmission conditions at the interior interfaces, $[u]=[Tu]=0$ at $N$. Higher regularity holds: If $u\in D(L_T)$ and $Lu\in H^k(M')$, then $u\in H^{k+2}(M')$. Moreover, the inverse $(L_T+1)^{-1}$ exists and maps $H^k(M')$ onto $H^{k+2}(M')\cap D(L_T)$. Now we consider elastic waves $u=u(x,t)$ over $M\times \mathbb{R}_t$. Put $D_t=-\sqrt{-1}\,\partial/\partial t$. Given sources $f$, $h$, and $h_j$ over $M'\times\mathbb{R}$, $\partial M\times\mathbb{R}$, and $N\times\mathbb{R}$, respectively, we look for solutions of the following problem of elastodynamics: \begin{equation} \label{eq-eladyn-source-problem} Lu-\rho D_t^2 u=f, \quad Tu=h, \quad [u]=h_0,\quad [Tu]=h_1. \end{equation} The material density $\rho>0$ is a smooth function on $M$ except for a jump discontinuity at $N$. 
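A one-dimensional caricature, recorded only to make the transmission conditions concrete and not used later, is the following: two homogeneous half-lines joined at $x=0$, with densities $\rho_\pm$, speeds $c_\pm$, and impedances $Z_\pm=\rho_\pm c_\pm$. A time-harmonic incident wave $e^{i\omega(x/c_--t)}$ from the left produces a reflected wave $R\,e^{-i\omega(x/c_-+t)}$ and a transmitted wave $\mathcal{T}e^{i\omega(x/c_+-t)}$. The homogeneous transmission conditions $[u]=[Tu]=0$ at $x=0$ read $1+R=\mathcal{T}$ and $Z_-(1-R)=Z_+\mathcal{T}$, whence \[ R=\frac{Z_--Z_+}{Z_-+Z_+}, \qquad \mathcal{T}=\frac{2Z_-}{Z_-+Z_+}. \]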
We are interested in outgoing solutions of \eqref{eq-eladyn-source-problem}, that is, $\operatorname{supp}(u)\subset M\times[t_0,\infty[$ should hold for some time $t_0$. In the main part of the paper we construct, with the help of microlocal parametrices, and under assumptions on the wavefront sets of the sources, approximate solutions which satisfy \eqref{eq-eladyn-source-problem} up to $\ensuremath{C^{\infty}}$ errors. To correct for these errors we need to solve \eqref{eq-eladyn-source-problem} with $\ensuremath{C^{\infty}}$ data $f,h,h_j$. If $r$ is a defining function of (part of) $\partial M\cup N$, then the commutator $[T,r]$ is non-singular. Therefore, for $h$ and $h_j$ smooth, we can find $w=w(x,t)$ which is smooth in $M\times \mathbb{R}$ except for a jump discontinuity at $N\times \mathbb{R}$ such that $Tw=h$ holds at $\partial M$, and $[w]=h_0$ and $[Tw]=h_1$ hold at $N\times \mathbb{R}$. Hence we may assume $h=0=h_j$ and $f$ smooth, except for a jump at $N$. We use the variational form of the evolution equation \eqref{eq-eladyn-source-problem}, and we apply standard theory, \cite{Evans98pde,Wloka87pde}, mutatis mutandis. Over a time interval $t_0<t<t_1$ and given data $f\in L^2(t_0,t_1;L^2(M))$, we consider \begin{equation} \label{eq-eladyn-weak-form} (u''(t)|v) +E(u(t),v)=(f(t)|v) \quad\text{for each $v\in H^1(M)$.} \end{equation} A prime denotes a time derivative. In \eqref{eq-eladyn-weak-form} we absorbed $\rho$ into the $L^2$ inner product of the Gelfand triplet $H^1(M)\hookrightarrow L^2(M)\hookrightarrow {H^1}^*(M)$ by redefining $(w|v)=\int_M w\,\bar{v}\,\rho\mathop{\operatorname{d}} V_M$, that is, we replaced the volume density by a mass density. Furthermore, we replace $L$, $L_T$, and $f$ by the quotients $\rho^{-1}L$, $\rho^{-1}L_T$, and $f/\rho$. There exists a unique solution $u(t)$ of \eqref{eq-eladyn-weak-form}, \[ u\in L^2(t_0,t_1;H^1(M)), \quad u'\in L^2(t_0,t_1;L^2(M)), \quad u''\in L^2(t_0,t_1;{H^1}^*(M)), \] such that $u(t_0)=0$ and $u'(t_0)=0$ hold. Initial values are defined because $u\in C([t_0,t_1];L^2(M))$ and $u'\in C([t_0,t_1];{H^1}^*(M))$. Suppose $f\in H^k(t_0,t_1;L^2(M))$ satisfies the compatibility conditions between initial and boundary values; for example, $f^{(j)}(t_0)=0$ holds for $0\leq j<k$. Then we have additional regularity with respect to $t$: \[ u\in H^k(t_0,t_1;H^1(M)), \quad u^{(k+1)}\in L^2(t_0,t_1;L^2(M)), \quad u^{(k+2)}\in L^2(t_0,t_1;{H^1}^*(M)). \] Regularity with respect to the spatial variables follows using the differential equation \[ u(t) = (L_T+1)^{-1} \big(f(t)- u''(t) + u(t)\big). \] We can now combine the regularity results to conclude that, given a positive integer $m$, there exists $k\geq m$ such that the following holds: If $f\in H^k(t_0,t_1;H^k(M'))$ satisfies $f^{(j)}(t_0)=0$ for $j<k$, then the unique solution $u$ of \eqref{eq-eladyn-weak-form} with $u(t_0)=u'(t_0)=0$ satisfies \[ u\in H^m(t_0,t_1;H^m(M')\cap D(L_T)). \] Thus, if $f$ is smooth in $]-\infty,t_1]$ and supported in $[t_0,t_1]$, then so is $u$. \section{Covariant and horizontal derivatives} \label{section-horizontal-deriv} We need some facts, mostly standard \cite{KobaNomizu63I,Tu17diffgeom}, from the differential geometry of vector bundles. Let $X$ be a $\ensuremath{C^{\infty}}$ manifold endowed with a symmetric linear connection $\nabla$. Assume $X$ Hausdorff, second countable, and without boundary. The exponential map $\exp$ maps an open neighborhood of the zero-section of the tangent bundle $TX\to X$ into $X$. 
We write $\exp_x v=\exp(v)$ for $v\in T_xX$. A normal neighborhood centered at $x\in X$ is the diffeomorphic image $\exp_x(V)$ of a star-shaped zero-neighborhood $V\subset T_xX$. An open set which is a normal neighborhood centered at each of its points is called normal convex. Any two points in a normal convex set $O$ are the endpoints of a unique, up to parametrization, geodesic in $O$. If $O$ is a normal convex open set, then $(x,v)\mapsto (x,y)$, $y=\exp_x v$, maps an open subset of $TX$ diffeomorphically onto $O\times O$. The topology of $X$ has a basis consisting of normal convex sets. See \cite[Ch.~\RN{1} \S 6]{Helgason01diffgeom}. A local coordinate system $(x^j)$ on $X$ defines local frame fields $(\partial_j)$ of $TX$ and $(\mathop{\operatorname{d}} x^j)$ of the cotangent bundle $\pi:T^*X\to X$. Here we introduced $\partial_j$ as an abbreviation of $\partial/\partial x^j$. The covariant derivative is given by $\nabla\partial_j =\Gamma_{kj}^i\partial_i\mathop{\operatorname{d}} x^k$ where $\Gamma_{kj}^i$ are the Christoffel symbols. (We use the summation convention of summing over equal indices in opposite position.) In the cotangent bundle, canonical coordinates $(x^j,\xi_j)$ are defined by $\xi=\xi_j\mathop{\operatorname{d}} x^j\in T_x^*X$. We abbreviate $\partial/\partial\xi_j$ as $\partial^j$. The Hessian of a function $\varphi\in\ensuremath{C^{\infty}}(X)$ is given by \[ \nabla^2\varphi=\nabla\mathop{\operatorname{d}}\varphi = (\partial_k\partial_m \varphi-\Gamma_{km}^j \partial_j\varphi)\mathop{\operatorname{d}} x^k \mathop{\operatorname{d}} x^m. \] The symmetry of $\nabla$ implies that the Hessian is a symmetric $2$-tensor field. Let $a\in\ensuremath{C^{\infty}}(T^* X)$. Using canonical coordinates, the vertical derivative $\vnabla a$ and the horizontal derivative $\hnabla a$ are defined as follows: $\vnabla a(x,\xi)= \partial^j a(x,\xi)\partial_j$, and \begin{equation} \label{eq-horiz-a-scalar} \hnabla a(x,\xi)= \big(\partial_k a(x,\xi) + \Gamma^j_{km}(x)\xi_j \partial^m a(x,\xi)\big)\mathop{\operatorname{d}} x^k. \end{equation} Invariantly, $\vnabla a$ and $\hnabla a$ are sections of the complexifications of pullback bundles $\pi^* TX$ and $\pi^* T^*X$, respectively. The vertical derivative $\vnabla a(x,\xi)\in\operatorname{Hom}(T_x^*X,\mathbb{C})=T_xX\otimes\mathbb{C}$ is the ordinary derivative in fiber direction. The horizontal derivative satisfies, and is uniquely determined by, the identity \[ \mathop{\operatorname{d}}(a\circ\mathop{\operatorname{d}}\varphi)=(\hnabla a)\circ\mathop{\operatorname{d}}\varphi + \nabla^2\varphi \cdot (\vnabla a\circ\mathop{\operatorname{d}}\varphi), \] which holds for $\varphi\in\ensuremath{C^{\infty}}(X;\mathbb{R})$. A centered dot denotes contraction of $T^*X\otimes TX$ factors to scalars by taking the trace. Let $(E,\nabla^E)$ be a real or complex vector bundle $p^E:E\to X$ with a linear connection. If $(e_j)$ is a local frame field of $E$, then the covariant derivative operator $\nabla^E$ is given by a matrix $(\omega^k_j)$ of connection one-forms: \begin{equation} \label{eq-E-frame-connection} \nabla^E s = \mathop{\operatorname{d}} s^j\otimes e_j + s^k\omega^{j}_k \otimes e_j \quad\text{if $s(x)=s^j(x) e_j(x)$,} \end{equation} or $\nabla^{E}_V s = (V s^j) e_j + s^k \omega^j_k(V) e_j$ for every vector field $V$. If $f:Y\to X$ is smooth, then the pullback bundle $f^*E\to Y$ together with a pullback connection $\nabla^{f^*E}$ are defined. 
It is convenient to regard sections of $f^*E$ as sections along $f$, that is as smooth maps $s:Y\to E$ which satisfy $p^E\circ s=f$. Using the notation of \eqref{eq-E-frame-connection}, $(e_j\circ f)$ is a local frame field of $f^*E$, and $f^* \omega^j_k$ are the connection one-forms of $\nabla^{f^*E}$: \[ \nabla^{f^*E}_V s = (V s^j) e_j\circ f + s^k \omega^j_k(f_* V) e_j\circ f \quad\text{if $s(y)=s^j(y) e_j(f(y))$.} \] For example, suppose $f:\mathbb{R}\times X\to X$ is the projection along $\mathbb{R}$, $f(t,x)=x$, and $W=a\partial_t+V$, $V$ tangent to $X$. Then $\nabla^{f^*E}_W =a\partial_t + \nabla^E_V$. The dual bundle $E^*$ and the density bundles $|E|^\alpha$ are equipped with connections naturally induced from $\nabla^E$. Suppose $(F,\nabla^F)$ is another vector bundle with a linear connection. Then $E\otimes F$ and $\operatorname{Hom}(E,F)=E^*\otimes F$ are equipped with the linear connections induced by $\nabla^E$ and $\nabla^F$. A connection on $\operatorname{Hom}(E,F)\to X$ is determined by the Leibniz rule: \[ \nabla_V^F Au= (\nabla_V^{\operatorname{Hom}(E,F)}A)u+A\nabla_V^E u \] holds if $u$ is a section of $E$, $A$ a section of $\operatorname{Hom}(E,F)$, and $V$ a vector field. Suppose $E$ is equipped with a Hermitian metric $(\cdot|\cdot)_E$ which is compatible with $\nabla^E$, i.e., $V (u|v)_E=(\nabla^E_V u|v)_E+(u|\nabla^E_V v)_E$ holds for real vector fields $V$ and sections $u$ and $v$ of $E$. The connection $\nabla^E$ is then said to be metric. Hermitian metrics are linear in the first slot and antilinear in the second. Suppose $X$ oriented and (pseudo-)Riemannian with metric tensor $g$, $\nabla$ the Levi-Civita connection. Then there is a canonical volume form $0<\mu_g\in\ensuremath{C^{\infty}}(X;\DB{X})$ which is parallel, that is, $\nabla\mu_g=0$ holds. The divergence $\operatorname{div} V$ of a vector field $V$ is defined by $\Lie_V\mu_g=(\operatorname{div} V)\mu_g$, where $\Lie_V$ denotes the Lie derivative defined by $V$. There holds \begin{equation} \label{eq-int-LieV} \int_X (Vc+c\operatorname{div} V)\mu_g=\int_X \Lie_V (c\mu_g)=0 \quad\text{if $c\in\Ccinfty(X)$.} \end{equation} Furthermore, $\operatorname{div} V=\operatorname{tr}\nabla V$. Consider the scalar product $\int_X (u|v)_E\mu_g$ of $u,v\in\Ccinfty(X;E)$. It follows from \eqref{eq-int-LieV} that $-i\nabla^E_V -i\operatorname{div} V$ is the formal adjoint of $-i\nabla^E_V$. Let $c:I\subset\mathbb{R}\to X$ be a smooth curve. By definition, a section $s$ along $c$ of $E$ is parallel iff, on the interval $I$, $\nabla^{c^*E}s=0$ holds. These equations form a homogeneous linear system of ordinary differential equations for $s$, i.e., \begin{equation} \label{eq-ode-par-transp} \dot s^j + \omega^j_k(\dot c(t))\, s^k =0 \quad\text{where $s(t)=s^j(t)e_j(c(t))$.} \end{equation} Dots denote derivatives with respect to $t$. The fundamental matrices $\PTE_{c,t_0,t}:E_{c(t_0)}\to E_{c(t)}$ assign $s(t)$ to $s(t_0)$ when $\nabla^{c^*E}s=0$. The linear map $\PTE_{c,t_0,t}$ is called the parallel transport in $E$ along $c$ from $c(t_0)$ to $c(t)$. We abbreviate $\PTE_{c,t_0,t}$ as $\PTE_{c}$ if the endpoint parameters $t_0$ and $t$ are clear from the context. If $y=\exp_x v$ belongs to a normal neighborhood centered at $x$, then we let $\PTE_{x\from y}:E_y\to E_x$ denote the parallel transport along a geodesic $c$ from $y$ to $x$, $c(t)=\exp_x(v-tv)$ for $0\leq t\leq 1$. A reparametrization of a curve from $y$ to $x$ does not affect the parallel transport map $E_y\to E_x$. 
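To make \eqref{eq-ode-par-transp} concrete, the following small numerical sketch -- our illustration only; neither the round $2$-sphere nor the code is used elsewhere in the paper -- integrates the parallel transport system along a latitude circle and checks the classical holonomy angle $2\pi\cos\theta_0$ of that loop. \begin{verbatim}
# Parallel transport ODE (eq-ode-par-transp) on the round 2-sphere,
# coordinates (theta, phi), along the latitude circle c(t) = (theta0, t).
# Nonzero Christoffel symbols: Gamma^theta_{phi phi} = -sin(theta)cos(theta),
# Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta).
import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.pi / 3

def rhs(t, s):
    # ds^j/dt = -omega^j_k(dc/dt) s^k  with  dc/dt = d/dphi
    st, sp = s
    return [np.sin(theta0) * np.cos(theta0) * sp, -st / np.tan(theta0)]

sol = solve_ivp(rhs, [0.0, 2.0 * np.pi], [1.0, 0.0], rtol=1e-10, atol=1e-12)
# Express the result in the orthonormal frame (d_theta, sin(theta0) d_phi):
u, v = sol.y[0, -1], np.sin(theta0) * sol.y[1, -1]
alpha = 2.0 * np.pi * np.cos(theta0)   # predicted holonomy angle of the loop
print(np.allclose([u, v], [np.cos(alpha), -np.sin(alpha)]))  # True
\end{verbatim} The transported vector returns rotated by $-2\pi\cos\theta_0$, in accordance with the Gauss--Bonnet relation between holonomy and the curvature enclosed by the loop.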
Parallel transport along piecewise smooth curves is defined in an obvious way. Covariant derivatives are recovered from the associated parallel transport: \[ \nabla^{c^*E}_{\mathop{\operatorname{d}}/\mathop{\operatorname{d}} t}(s\circ c)\big|_{t=0} = \frac{\mathop{\operatorname{d}}}{\mathop{\operatorname{d}} t}\PTE_{c,t,0}s(c(t))\big|_{t=0}. \] A loop based at $x$ is a curve with initial and end point equal to $x$. For infinitesimal loops based at $x$, holonomy theory relates parallel transport to curvature. We need a special case where the curvature does not contribute. \begin{lemma} \label{lemma-trivial-holonomy} Let $c:[0,\delta[\to X$ be a smooth curve. Set $x=c(0)$. For $0<\varepsilon<\delta$, denote by $\lambda_\varepsilon$ the loop at $x$ which consists of $c|_{[0,\varepsilon]}$ followed by the geodesic $x\from c(\varepsilon)$. Then $\PTE_{\lambda_\varepsilon}=\operatorname{Id}_{E_x}+\mathcal{O}(\varepsilon^2)$ as $\varepsilon\to 0$. \end{lemma} \begin{proof} We may assume that $c$ stays in a normal neighborhood of $x$. Using normal coordinates and a local frame field of $E$, introduce norms on $T_xX$ and uniformly on the fibers of $E$. Write $c(t)=\exp_x w(t)$. Define $\gamma_\varepsilon(t)=\exp_x((t/\varepsilon) w(\varepsilon))$. Observe that \[ \sup\nolimits _{0\leq t\leq\varepsilon} |w(t)-(t/\varepsilon)w(\varepsilon)|=\mathcal{O}(\varepsilon^2). \] Standard Lipschitz stability estimates for systems of ordinary differential equations, applied to \eqref{eq-ode-par-transp}, imply the estimate $\|\PTE_{c|_{[0,\varepsilon]}}-\PTE_{\gamma_\varepsilon}\|=\mathcal{O}(\varepsilon^2)$ with respect to norms of $\operatorname{Hom}(E_x,E_{c(\varepsilon)})$. To complete the proof, observe that $\PTE_{x\from c(\varepsilon)}= \big(\PTE_{\gamma_\varepsilon}\big)^{-1}$ stays uniformly bounded. \end{proof} Let $a\in\ensuremath{C^{\infty}}(T^*X;\pi^* E)$. The covariant derivative $\nabla^{\pi^* E}a$ is a section of the vector bundle $T^*(T^*X)\otimes \pi^*E$. More useful are the vertical derivative $\vnabla a$ and the horizontal derivative $\hnabla a$ which are sections of the bundles $\pi^* (TX\otimes E)$ and $\pi^* (T^*X\otimes E)$, respectively. As in the scalar case, the vertical derivative is the derivative in fiber direction: $\vnabla a(x,\xi)\in T_xX\otimes E_x$ is the derivative of the map $T_x^*X\to E_x$, $\xi\mapsto a(x,\xi)$, that is $\vnabla a(x,\xi)=\partial_j\otimes \partial^j a(x,\xi)$ holds in canonical coordinates. Viewing $\eta\in T_x^*X$ as a tangent vector to the fiber, $\eta\in T_{(x,\xi)}(T^*X)$, we have $\vnabla a(x,\xi)\cdot\eta=\nabla^{\pi^*E}_\eta a(x,\xi)$. We define the horizontal derivative with respect to a local frame $(e_j)$ with connection matrix $(\omega_k^j)$ as \begin{equation} \label{eq-horiz-a-vector} \hnabla a= (\hnabla a^j + a^k \omega_k^j)e_j, \quad a(x,\xi)=a^j(x,\xi) e_j(x). \end{equation} For real-valued $\varphi\in\ensuremath{C^{\infty}}(X)$ we have \begin{equation} \label{eq-covder-horiz-der} \nabla^E(a\circ\mathop{\operatorname{d}}\varphi)=(\hnabla a)\circ\mathop{\operatorname{d}}\varphi + \nabla^2\varphi \cdot (\vnabla a\circ\mathop{\operatorname{d}}\varphi). \end{equation} In particular, $\hnabla a(x,\xi)=\nabla^E (a\circ\mathop{\operatorname{d}}\varphi)(x)$ if $\xi=\mathop{\operatorname{d}}\varphi(x)$ and $\nabla^2 \varphi(x)=0$. This shows that the horizontal derivative is well-defined. Contractions of $T^*X\otimes TX$ are denoted by a centered dot, e.g., \[ \vnabla \cdot \hnabla a, \nabla^2\varphi\cdot\vnabla^2 a\in \ensuremath{C^{\infty}}(T^*X;\pi^* E) 
\] An example in canonical coordinates is \[ \hnabla\cdot\vnabla a=\vnabla \cdot \hnabla a = \big(\partial^k\partial_k a^j + \Gamma_{km}^n (\delta^k_n \partial^m a^j +\xi_n \partial^k\partial^m a^j) + \partial^m a^k \omega^j_k(\partial_m)\big) e_j \] where $a=a^j e_j$; recall \eqref{eq-horiz-a-scalar}. In the flat case, this simplifies to $\vnabla\cdot\hnabla a= \partial^k\partial_k a$, which is a term familiar from the formula for the subprincipal symbol. When dealing with products in the geometric symbol calculus we shall encounter the following situation: $E$, $F$ and $G$ are vector bundles over $X$, $E$ and $F$ with connections. The bundle $\operatorname{Hom}(E,F)$ is equipped with the induced connection. Let $a$ and $b$ be sections of the bundles $\pi^* \operatorname{Hom}(E,F)$ and $\pi^* \operatorname{Hom}(F,G)$, respectively. Besides the product $ba$, the contraction \( \vnabla b \cdot \hnabla a \) is also a well-defined element of $\ensuremath{C^{\infty}}(T^*X;\pi^* \operatorname{Hom}(E,G))$. The Poisson bracket $\{b,a\}=H_b a$ and the Hamilton field $H_b$ are defined for functions $a,b\in\ensuremath{C^{\infty}}(T^*X)$. We have the following generalization of the Poisson bracket when $a$ is not necessarily scalar. \begin{lemma} Let $a\in\ensuremath{C^{\infty}}(T^*X;\pi^*\operatorname{End}(E))$ and $b\in\ensuremath{C^{\infty}}(T^*X)$ real-valued. Denote by $H_b$ the Hamilton vector field of $b$. Then \begin{equation} \label{eq-Poisson-vert-horiz} \nabla^{\pi^*\operatorname{End}(E)}_{H_b} a = \vnabla b\operatorname{Id}\cdot\hnabla a - \hnabla b\operatorname{Id}\cdot\vnabla a. \end{equation} \end{lemma} \begin{proof} Write $a(x,\xi)=a^j(x,\xi) e_j(x)$. Both sides of \eqref{eq-Poisson-vert-horiz} are equal to \[ \big(\partial^k b (\partial_k a^j +\omega^{j}_i(\partial_k) a^i) -\partial_k b \partial^k a^j\big) e_j. \] To see that this is true for the right-hand side, use \eqref{eq-horiz-a-vector}, \eqref{eq-horiz-a-scalar}, the symmetry of the Christoffel symbols, and $\nabla^{\pi^*\operatorname{End}(E)} \operatorname{Id}=0$. On the left-hand side insert $H_b=(\partial^j b)\partial_j-(\partial_j b)\partial^j$. \end{proof} \section{Geometric symbol calculus} \label{section-geom-symbol-calc} Assuming additional geometric structure, geometric pseudo-differential calculi invariantly define full symbols of operators; see \cite{Bokobza69pdovardiff,Widom80completesymb} and the more recent work of \cite{Sharaf04geosymb,Sharaf05geosymb}. In this section we present, following Sharafutdinov's approach and including complete proofs, a geometric symbol calculus down to the subprincipal (leading) symbol level. Suppose $(X,\nabla)$ is a manifold with a symmetric connection, and $E$ a complex vector bundle over $X$ endowed with a connection $\nabla^E$. Let $u\in\ensuremath{C^{\infty}}(X;E)$. Using parallel transport, the covariant derivative is expressed as an ordinary derivative: \[ \nabla^E u(x) = U'(0)\in\operatorname{Hom}(T_xX,E_x), \quad U(v) = \PTE_{x\from \exp_x v} u(\exp_x v). \] Suppose $\operatorname{supp} u$ is a compact subset of a normal neighborhood of $x$. The Fourier inversion formula applied to the derivative $U'$ gives \begin{equation} \label{eq-nablaE-via-Fourier} -i\nabla^E u(x)= (2\pi)^{-\dim X}\int_{T^*_x X}\int_{T_x X} e^{-i\xi v} \xi\otimes \PTE_{x\from y} u(y)\mathop{\operatorname{d}} v \mathop{\operatorname{d}} \xi, \quad y =\exp_x v. 
\end{equation} The Lebesgue measures $\mathop{\operatorname{d}} v$ and $\mathop{\operatorname{d}}\xi$ are normalized as follows: $\mathop{\operatorname{d}} v\mathop{\operatorname{d}} \xi=|\sigma^n|/n!$ where $\sigma$ denotes the canonical symplectic form of $T^*_x X\times T_x X$. Let $F$ be another complex vector bundle over $X$. Let $p\in \ensuremath{C^{\infty}}(T^*X;\operatorname{Hom}(\pi^*E,\pi^*F))$ belong to a standard symbol class $S^m$. As in \cite[Lemma~8.1]{Sharaf05geosymb}, we define a geometric pseudo-differential operator $P$ by \begin{equation} \label{eq-def-geom-PsDO-pointwise} Pu(x)= (2\pi)^{-\dim X}\int_{T^*_x X}\int_{T_x X} e^{-i\xi v} p(x,\xi) \PTE_{x\from y} u(y)\chi(x,y)\mathop{\operatorname{d}} v \mathop{\operatorname{d}} \xi \in F_x, \quad y=\exp_x v, \end{equation} $u\in\ensuremath{C^{\infty}}(X;E)$ and $x\in X$. We require the cutoff function $\chi\in\ensuremath{C^{\infty}}(X\times X)$ to satisfy the following: \begin{compactenum}[(i)] \item $\chi=1$ in a neighborhood of the diagonal. \item\label{item-chi-proper} The relation $\operatorname{supp}\chi$ is proper, i.e., for $K\subset X$ compact, the intersection of $\operatorname{supp}\chi$ with $K\times X$ and with $X\times K$ is compact. \item\label{item-chi-admissable} For every point in $X$ there exist open neighborhoods $U_1$ and $U_2$, $U_2$ a normal neighborhood of $U_1$, that is of every point of $U_1$, such that $(U_1\times X)\cap\operatorname{supp}\chi\subset U_1\times U_2$. \end{compactenum} There exists an open neighborhood $U\subset X\times X$ of the diagonal such that $\operatorname{supp}\chi\subset U$ implies condition \eqref{item-chi-admissable}. For example, if $(O_j)$ is a locally finite covering of $X$ by normal convex open sets $O_j$, then $U=\cup_j O_j\times O_j$ has this property. Indeed, given $z\in X$, set $U_1=\cap_{j\in J}O_j$ and $U_2=\cup_{j\in J}O_j$, where $J$ is the finite set of indices $j$ with $z\in O_j$. The domain of integration in \eqref{eq-def-geom-PsDO-pointwise} depends on $x$. It is not immediately clear that \eqref{eq-def-geom-PsDO-pointwise} defines a continuous section $Pu$, let alone a pseudo-differential operator $P$. \begin{lemma} \label{lemma-geom-PsDO-local} Let $U_1\subset X$ be open, $U_2$ a normal neighborhood of $U_1$, such that $(U_1\times X)\cap\operatorname{supp}\chi\subset U_1\times U_2$. Let $x,z\in U_1$, $x=\exp_z s$. Define $w\mapsto v$ by $\exp_x v=\exp_z w\in U_2$. For $\zeta\in T_z^*X$, set $\xi =\mathbin{^t(\partial v/\partial w)^{-1}}\zeta\in T_x^*X$ and $\varphi =-\xi v$. Then \eqref{eq-def-geom-PsDO-pointwise} becomes \begin{equation} \label{eq-def-geom-PsDO-local} Pu(x)= (2\pi)^{-\dim X}\int_{T^*_z X}\int_{T_z X} e^{i\varphi} p(x,\xi) \PTE_{x\from y} u(y)\chi(x,y)\mathop{\operatorname{d}} w \mathop{\operatorname{d}} \zeta, \quad y=\exp_z w. \end{equation} The phase function satisfies $\varphi(s,w,\zeta)=\zeta\psi(s,w)(s-w)$ with $\psi$ a $\ensuremath{C^{\infty}}$ map into $\operatorname{End}(T_zX)$, and $\psi(s,s)=\operatorname{Id}$. Moreover, $\varphi'_{\zeta}=0$ iff $w=s$. \end{lemma} \begin{proof} The map $(w,\zeta)\mapsto (v,\xi)$ is symplectic, hence volume preserving. Changing variables from $(v,\xi)$ to $(w,\zeta)$ shows that the integral \eqref{eq-def-geom-PsDO-pointwise} equals the integral \eqref{eq-def-geom-PsDO-local}. In case the double integral is not absolutely convergent, we employ the standard procedure of partial integration in regions where the phase function is non-stationary. 
Note that $\varphi=-\zeta(\partial v/\partial w)^{-1} v$, and $\varphi'_{\zeta}\neq 0$ iff $v\neq 0$. The map $(s,w)\mapsto (\partial v/\partial w)^{-1}v$ is a smooth map into $T_zX$ which is zero when $w=s$. Thus there exists a $\ensuremath{C^{\infty}}$ map $\psi$ into $\operatorname{End}(T_zX)$ such that $(\partial v/\partial w)^{-1} v= \psi(s,w)(w-s)$ holds. Taking the derivative with respect to $w$ and evaluating at $w=s$, we see that $\psi(s,s)$ is the identity. \end{proof} In $U_1\times U_1$, the phase function $\varphi$ parametrizes the conormal bundle of the diagonal, which is given by $s=w$. By Lemma~\ref{lemma-geom-PsDO-local} and standard theory, we know that \eqref{eq-def-geom-PsDO-pointwise} defines a pseudo-differential operator $P$. Furthermore, it follows from \eqref{item-chi-proper} that $P$ is properly supported. Replacing $\chi$ by another cutoff function which also satisfies the assumptions modifies $P$ only by a smoothing operator. Conversely, if $P$ is a pseudo-differential operator, then any given point $z\in X$ has an open neighborhood $U_1$, where $P$ can be represented, modulo a smoothing operator, as \eqref{eq-def-geom-PsDO-local}. Here, when starting from a matrix representing $P$ with respect to some local frame of $E$, a parallel transport map is absorbed into the symbol $p$. Reversing the transformation from \eqref{eq-def-geom-PsDO-pointwise} to \eqref{eq-def-geom-PsDO-local}, we see that every pseudo-differential operator is, modulo a smoothing operator, geometric. Denote by $S^m(T^*X;\operatorname{Hom}(E,F))\subset\ensuremath{C^{\infty}}(T^*X;\operatorname{Hom}(\pi^*E,\pi^*F))$ the standard space of symbols of order $\leq m$. To ease writing, we omit $\pi^*$ from symbol space notation, and often we abbreviate the symbol space as $S^m$. Formula \eqref{eq-def-geom-PsDO-pointwise} defines the geometric quantization $p\mapsto P=\op(p)$. The sum of $\op(S^m(T^*X;\operatorname{Hom}(E,F)))$ and the space $\Psi^{-\infty}(X;E,F)$ of properly supported smoothing operators is the space $\Psi^m(X;E,F)$ of pseudo-differential operators of order $\leq m$. A symbol $p\in S^m$ is polyhomogeneous of degree $m$ iff there exists an asymptotic symbol expansion $p\sim\sum_{j\geq 0} p_j$ where the $p_j$'s are $\ensuremath{C^{\infty}}$ sections over $T^*X\setminus 0$, and $p_j(x,\xi)$ is homogeneous of degree $m-j$ in the fiber variable $\xi$. Thus, using semiclassical notation with a positive asymptotic parameter $\hslash$, \begin{equation} \label{eq-polyhom-expansion} \hslash^m p(x,\xi/\hslash)\sim \sum\nolimits_{j\geq 0} \hslash^j p_j(x,\xi) \quad\text{as $\hslash\to 0+$.} \end{equation} $\Sphg^m\subset S^m$ and $\Psiphg^m\subset\Psi^m$ denote the subclasses of polyhomogeneous symbols and operators. The left-hand side in \eqref{eq-polyhom-expansion} is the symbol of the semiclassical operator $\hslash^mP$ associated with \eqref{eq-def-geom-PsDO-pointwise}: \begin{equation} \label{eq-P-semiclass} \hslash^mP u(x) = (2\pi \hslash)^{-\dim X} \int_{T^*_{x} X} \int_{T_{x} X} e^{-i\xi v/\hslash} \hslash^mp(x,\xi/\hslash) \PTE_{x\from y}u(y)\chi(x,y) \mathop{\operatorname{d}} v \mathop{\operatorname{d}} \xi, \end{equation} $y=\exp_x v$. A linear operator $P$ is a (classical) pseudo-differential operator of order $\leq m$ iff a complete asymptotic expansion $\hslash^m e^{-i\varphi/\hslash} P e^{i\varphi/\hslash}a(x) \sim \sum\nolimits_{j\geq 0} \hslash^j b_j(x)$ exists whenever the phase $\varphi$ is real-valued and $\mathop{\operatorname{d}}\varphi\neq 0$ holds on $\operatorname{supp} a$. 
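For orientation, note what the quantization \eqref{eq-def-geom-PsDO-pointwise} reduces to in the flat case; this is a sanity check, not a statement used later. If $X=\mathbb{R}^n$ carries the flat connection and $E$, $F$ are trivial bundles with trivial connections, then $\exp_x v=x+v$ and all parallel transport maps are the identity, so the substitution $y=x+v$ gives \[ Pu(x)=(2\pi)^{-n}\iint e^{i\langle x-y,\xi\rangle}\,p(x,\xi)\,u(y)\,\chi(x,y)\mathop{\operatorname{d}} y\mathop{\operatorname{d}}\xi, \] the standard Kohn--Nirenberg quantization up to the smoothing cutoff $\chi$. In this case the geometric full symbol coincides with the usual one.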
We give a formula for the two top order terms. \begin{proposition} \label{prop-fund-asymp-expansion} Let $P=\op(p)\in\Psiphg^m(X;E,F)$, $p\sim\sum_{j\geq 0} p_j$. Let $a\in\ensuremath{C^{\infty}}(X;E)$, and let $\varphi\in\ensuremath{C^{\infty}}(X)$ be real-valued, $\mathop{\operatorname{d}} \varphi\neq 0$ on $\operatorname{supp} a$. As $\hslash\to 0+$, \begin{equation} \label{eq-fund-asymp-expansion} \hslash^m e^{-i\varphi/\hslash} P e^{i\varphi/\hslash}a = ((p_0+\hslash p_1)\circ\mathop{\operatorname{d}}\varphi) a -i\hslash D a + \mathcal{O}(\hslash^2) \end{equation} holds, where \[ D a= (\vnabla p_0\circ\mathop{\operatorname{d}}\varphi) \cdot\nabla^E a + \nabla^2\varphi\cdot(\vnabla^2 p_0\circ\mathop{\operatorname{d}}\varphi) a/2. \] If $E=F$ and the principal symbol is scalar, $p_0=qI$, then $V=\vnabla q\circ\mathop{\operatorname{d}}\varphi$ is a vector field, and \[ D a= \nabla^E_V a + \nabla^2\varphi\cdot(\vnabla^2 p_0\circ\mathop{\operatorname{d}}\varphi) a/2. \] Furthermore, if $P\in\Psi^{-\infty}$, then $p_j=0$ for all $j$. \end{proposition} By \cite[Theorems~7.7.5-6]{Hormander90anaOne}, the stationary phase formula \begin{equation} \label{eq-stationary-phase} \int e^{i \varphi/\hslash} a\mathop{\operatorname{d}} x = {\det (H/2\pi i\hslash)}^{-1/2} \sum\nolimits _{j<3N} (i\hslash/2)^j \langle H^{-1} \partial,\partial\rangle^j \big(e^{i\rho/\hslash}a\big)(0)/j! +\mathcal{O}(\hslash^{N}) \end{equation} holds as $\hslash\to 0+$, uniformly in parameters. Here $\varphi$ is $\ensuremath{C^{\infty}}$ and real-valued, $\varphi(0)=0$, the origin $x=0$ is the only critical point of $\varphi$ in the support of $a\in\Ccinfty(\mathbb{R}^n)$, and $H=\varphi''(0)$ is non-singular. Furthermore, $\rho(x)=\varphi(x)-\langle Hx,x\rangle/2=\mathcal{O}(|x|^3)$. The following corollary of \eqref{eq-stationary-phase} will be useful. Assume that the integration variable splits as $x=(y,\eta)\in\mathbb{R}^k\times\mathbb{R}^k$. If $\varphi''_{\eta\eta}(0,0)=0$ and if $\partial_y^\alpha\partial_\eta^\beta\rho(0,0)=0$ when $|\alpha|\leq 2$, then \begin{equation} \label{eq-stationary-phase-mod-hsquared} \int e^{i \varphi(x)/\hslash} a(x)\mathop{\operatorname{d}} x = |\det (\varphi''_{y\eta}/2\pi \hslash)|^{-1} \big(a(0) + i\hslash \langle H^{-1} \partial,\partial\rangle a(0)/2 +\mathcal{O}(\hslash^{2})\big). \end{equation} This follows by direct computation from \eqref{eq-stationary-phase} using $N = 2+k$. \begin{proof}[Proof of Proposition~\ref{prop-fund-asymp-expansion}] A uniform asymptotic expansion of $e^{-i\varphi/\hslash} \hslash^m P e^{i\varphi/\hslash}a$ is known to exist. So it suffices to evaluate the expansion at a given point $x$ which we hold fixed. By \eqref{eq-P-semiclass}, \begin{equation} \label{eq-A-with-osc-test} (e^{-i\varphi/\hslash} \hslash^m P e^{i\varphi/\hslash}a)(x) = (2\pi \hslash)^{-\dim X} \int_{T^*_{x} X} \int_{T_{x} X} e^{i\Phi/\hslash} \hslash^m p(x,\xi/\hslash) \PTE_{x\from y}a(y)\chi(x,y) \mathop{\operatorname{d}} v \mathop{\operatorname{d}} \xi. \end{equation} Here $\Phi=\Phi(v,\xi)=-\xi v+\tilde\varphi(v)-\varphi(x)$ with $\tilde\varphi(v)=\varphi(y)$ and $y=\exp_x v$. The stationary point $(v,\xi)$ of $\Phi$ satisfies $v=0$ and $\xi=\tilde\varphi_v'(0)=\mathop{\operatorname{d}}\varphi(x)$. The Hessian is \[ H=\begin{bmatrix} \tilde\varphi_{vv}''(0) & -I\\ -I& 0 \end{bmatrix}. 
\] Apply \eqref{eq-stationary-phase-mod-hsquared} to \eqref{eq-A-with-osc-test}, and get $e^{-i\varphi/\hslash} \hslash^mP e^{i\varphi/\hslash}a = b_0+\hslash b_1+\mathcal{O}(\hslash^2)$, $b_0(x)= p_0(x,\xi)a(x)$, and \[ b_1(x)= p_1(x,\xi)a(x)-i\vnabla p_0(x,\xi)\cdot \nabla^E a(x) -(i/2) \tilde\varphi_{vv}''(0)\cdot \vnabla^2 p_0(x,\xi)a(x). \] By the symmetry of $\nabla$, $\tilde\varphi_{vv}''(0) =\nabla^2\varphi(x)$. Thus \eqref{eq-fund-asymp-expansion} holds. In the scalar case, $p_0=q\operatorname{Id}$, the simpler formula for $Da$ is easily verified. Fix $w\in E_x$ and $0\neq \eta\in T_x^*X$. If $\varphi$ and $a$ satisfy $\varphi(\exp_x v)=\eta v$ and $\PTE_{x\from y}a(y)=w$ for $y$ near $x$, the stationary phase formula \eqref{eq-stationary-phase} applied to \eqref{eq-A-with-osc-test} gives \[ \hslash^m (e^{-i\varphi/\hslash} P e^{i\varphi/\hslash}a)(x) \sim \sum\nolimits _{j\geq 0} (i\hslash)^j \langle\partial_\xi,\partial_v\rangle^j \hslash^m p(x,\xi/\hslash) w|_{(v,\xi)=(0,\eta)} = \hslash^mp(x,\eta/\hslash)w. \] Suppose $P\in\Psi^{-\infty}$. It follows that $p(x,\eta/\hslash)w =\mathcal{O}(\hslash^{\infty})$ holds, thus $p_j=0$ for all $j$. \end{proof} It follows from \eqref{eq-fund-asymp-expansion} that, modulo $\mathcal{O}(\hslash^2)$, \[ (p_0+\hslash p_1)(x,\xi) a(x) \equiv (e^{-i\varphi/\hslash} \hslash^m P e^{i\varphi/\hslash}a)(x) \quad\text{if $\xi=\varphi'(x)$, $\nabla^2\varphi(x)=0=\nabla^E a(x)$.} \] We call $p_0+\hslash p_1$ the \emph{leading symbol} of the operator $P$. Even when $P$ is not semi-classical we stick to semi-classical notation for the leading symbol in order to visibly distinguish $p_0$ from $p_1$. The principal symbol $p_0$ does not depend on the connections $\nabla$ and $\nabla^E$, whereas $p_1$ does. It follows from \eqref{eq-nablaE-via-Fourier} that the leading symbol of $-i\nabla^E$ is equal to its principal symbol $d$, where $d(\xi)\in\operatorname{Hom}(E_x,T_x^*X\otimes E_x)$ is given by $d(\xi):e\mapsto \xi\otimes e$ for $\xi\in T_x^*X$, $e\in E_x$. Leading symbols refine the symbol calculus of pseudo-differential operators. \begin{proposition} \label{prop-geosymb-composition} The product $QP$ of pseudo-differential operators $Q$ and $P$ with leading symbols $q+\hslash q_1$ and $p+\hslash p_1$ has leading symbol $qp+\hslash (q_1p+qp_1 -i \vnabla q\cdot \hnabla p)$. \end{proposition} \begin{proof} Denote the orders of $P$ and $Q$ by $m_P$ and $m_Q$. Put $m=m_P+m_Q$. Let $\varphi$, $a$ and $Da$ be as in Proposition~\ref{prop-fund-asymp-expansion}. It follows from \eqref{eq-fund-asymp-expansion} that the following holds modulo $\mathcal{O}(\hslash^2)$: \begin{align*} \hslash^m e^{-i\varphi/\hslash} QP e^{i\varphi/\hslash}a &\equiv \hslash^{m_Q} e^{-i\varphi/\hslash} Q e^{i\varphi/\hslash} \big(((p+\hslash p_1)\circ\mathop{\operatorname{d}}\varphi) a -i\hslash D a\big) \\ &\equiv ((q+\hslash q_1)\circ\mathop{\operatorname{d}}\varphi) \big(((p+\hslash p_1)\circ\mathop{\operatorname{d}}\varphi) a -i\hslash D a\big) \\ &\phantom{=} -i\hslash (\vnabla q\circ\mathop{\operatorname{d}}\varphi) \cdot\nabla^F ((p\circ\mathop{\operatorname{d}}\varphi) a) -i\hslash \nabla^2\varphi\cdot(\vnabla^2 q\circ\mathop{\operatorname{d}}\varphi) (p\circ\mathop{\operatorname{d}}\varphi)a/2. \end{align*} Fix $(x,\xi)\in T^*X\setminus 0$. Choose $\varphi$ and $a$ such that $\xi=\mathop{\operatorname{d}}\varphi(x)$, $\nabla^2\varphi(x)=0$, and $\nabla^E a(x)=0$ hold. 
Then $Da(x)=0$, and, by Leibniz' rule and \eqref{eq-covder-horiz-der}, \[ \nabla^F ((p\circ\mathop{\operatorname{d}}\varphi) a)(x) = \nabla^{\operatorname{Hom}(E,F)} (p\circ\mathop{\operatorname{d}}\varphi)(x) a(x) +p(x,\xi) \nabla^E a(x) = \hnabla p(x,\xi)a(x). \] Summarizing, we have shown that \[ e^{-i\varphi/\hslash} \hslash^m QP e^{i\varphi/\hslash}a(x) \equiv (qp+\hslash q_1p+\hslash qp_1)(x,\xi)a(x) -i\hslash (\vnabla q\cdot\hnabla p)(x,\xi)a(x) \] holds modulo $\mathcal{O}(\hslash^2)$, which completes the proof. \end{proof} \begin{remark} \label{remark-mult-op} Suppose $Q$ is a differential operator of order zero, that is, the multiplication operator given by a bundle homomorphism $q$. Then the leading symbol of $QP$ is $qp+\hslash qp_1$. \end{remark} In the remainder of this section, we assume that $X$ is an oriented (pseudo-)Riemannian manifold with metric $g$, $\nabla$ the Levi-Civita covariant derivative, and $\mu_g\in\ensuremath{C^{\infty}}(X;\DB{X})$ the positive volume form. Every $P\in\Psi^m(X;E,F)$ can be viewed as an operator acting on half-densities: \[ \mu_g^{1/2}P\mu_g^{-1/2}\in\Psi^m(X;E\otimes\hDB{X},F\otimes\hDB{X}). \] We identify symbol spaces via the identification \[ \operatorname{Hom}(E,F)\equiv \operatorname{Hom}(E\otimes\hDB{X},F\otimes\hDB{X}) \] which is defined by tensoring sections with $\mu_g^{\pm 1/2}$. \begin{corollary} \label{cor-leadsymb-inv-conjug} The leading symbols of $P$ and $\mu_g^{1/2}P\mu_g^{-1/2}$ agree. \end{corollary} \begin{proof} The vertical derivatives of the multiplication operators $\mu_g^{\pm 1/2}$ vanish. So do the horizontal derivatives as $\mu_g$ is parallel, $\nabla\mu_g=0$. Now apply Proposition~\ref{prop-geosymb-composition}. \end{proof} Next we consider formal adjoints of $P$ and of $Q=\mu_g^{1/2}P\mu_g^{-1/2}$. Assume that $E$ and $F$ are Hermitian bundles with metric connections. Denote the bundle metrics $(\cdot|\cdot)_E$ and $(\cdot|\cdot)_F$, and the metric connections $\nabla^E$ and $\nabla^F$. The adjoint of $p\in\operatorname{Hom}(E,F)$ is written $p^*\in\operatorname{Hom}(F,E)$. For sections $u,v$ of $E$, resp.\ $E\otimes\hDB{X}$, we have scalar products $\int_X(u|v)_E\mu_g$, resp.\ $\int_X(u|v)_E$. Associated to the scalar products are the formal adjoints $P^*\in\Psi^m(X;F,E)$ and $Q^\dagger= \mu_g^{1/2}P^*\mu_g^{-1/2}$. \begin{proposition} \label{prop-geosymb-adjoint} The leading symbol of the adjoint $P^*$ of a pseudo-differential operator $P$, polyhomogeneous with leading symbol $p+\hslash p_1$, equals $p^*+\hslash p_1^* -i\hslash \vnabla \cdot \hnabla p^*$. \end{proposition} By Corollary~\ref{cor-leadsymb-inv-conjug}, the proposition holds also for $P^\dagger$. \begin{proof} Suppose $P\in\Psiphg^m(X;E,F)$. Let $a\in\Ccinfty(X;E)$, $b\in\ensuremath{C^{\infty}}(X;F)$, and $\varphi\in\ensuremath{C^{\infty}}(X)$ real-valued, $\mathop{\operatorname{d}}\varphi\neq 0$. 
Apply \eqref{eq-fund-asymp-expansion} to the right-hand side of \begin{equation*} \int_X \big(a\big|e^{-i\varphi/\hslash}\hslash^m P^* e^{i\varphi/\hslash} b\big)_E\mu_g = \int_X \big(e^{-i\varphi/\hslash}\hslash^m P e^{i\varphi/\hslash} a\big|b\big)_F\mu_g, \end{equation*} and get \begin{align*} \int_X \big(a\big|e^{-i\varphi/\hslash}\hslash^m P^* e^{i\varphi/\hslash} b\big)_E\mu_g &= \int_X \big(a\big|((p^*+\hslash p_1^*)\circ\mathop{\operatorname{d}}\varphi+(i\hslash/2)\nabla^2\varphi\cdot\vnabla^2 p^*\circ\mathop{\operatorname{d}}\varphi)b\big)_E\mu_g \\ &\phantom{==} -i\hslash \int_X\big((\vnabla p\circ\mathop{\operatorname{d}}\varphi)\cdot \nabla^E a\big| b\big)_F\mu_g +\mathcal{O}(\hslash^2) \end{align*} as $\hslash\to 0+$. Suppose $(V_j)$ is a frame of $TX$ over an open subset $O\subset X$, and $\operatorname{supp} b\subset O$. Write $\vnabla p^*\circ\mathop{\operatorname{d}}\varphi=\sum_j S^j\otimes V_j$ with $S^j\in\ensuremath{C^{\infty}}(O;\operatorname{Hom}(F,E))$. Then \[ \big((\vnabla p\circ\mathop{\operatorname{d}}\varphi)\cdot\nabla^E a\big|b\big)_F = \sum\nolimits _j \big(\nabla^E_{V_j}a\big| S^j b\big)_E = -\sum\nolimits _j \big(a\big|\nabla^E_{V_j} S^j b\big)_E +\sum\nolimits _j V_j \big(a\big| S^j b\big)_E. \] By Leibniz' rule, \begin{align*} \nabla^E_{V_j} S^jb &= (\nabla^{\operatorname{Hom}(F,E)}_{V_j} S^j) b + S^j \nabla^F_{V_j} b, \\ \nabla^{\operatorname{Hom}(F,E)\otimes TX}(S^j\otimes V_j) &= \nabla^{\operatorname{Hom}(F,E)} S^j \otimes V_j + S^j\otimes \nabla V_j. \end{align*} Using \eqref{eq-covder-horiz-der} and contraction, we derive \[ \hnabla\cdot\vnabla p^* \circ \mathop{\operatorname{d}}\varphi + \nabla^2\varphi\cdot \vnabla^2 p^*\circ \mathop{\operatorname{d}}\varphi = \sum\nolimits_j\nabla^{\operatorname{Hom}(F,E)}_{V_j} S^j +\sum\nolimits_j (\operatorname{div} V_j)S^j. \] Combining formulas, we obtain \begin{align*} \big((\vnabla p\circ\mathop{\operatorname{d}}\varphi)\cdot \nabla^E a\big| b\big)_F &= -\big(a\big|(\hnabla\cdot\vnabla p^* \circ \mathop{\operatorname{d}}\varphi + \nabla^2\varphi\cdot \vnabla^2 p^*\circ \mathop{\operatorname{d}}\varphi) b\big) \\ &\phantom{==} + \sum\nolimits_j (V_j c^j +(\operatorname{div} V_j)c^j) - \sum\nolimits_j (a|S^j \nabla^F_{V_j} b)_E, \end{align*} $c^j=(a|S^j b)_E$. Integrate and use \eqref{eq-int-LieV}. Since $a$ is arbitrary, we deduce the asymptotics \begin{align*} e^{-i\varphi/\hslash}\hslash^m P^* e^{i\varphi/\hslash} b &= ((p^*+\hslash p_1^*)\circ\mathop{\operatorname{d}}\varphi ) b -i\hslash (\hnabla\cdot\vnabla p^* \circ \mathop{\operatorname{d}}\varphi) b \\ &\phantom{==} -(i\hslash/2)(\nabla^2\varphi\cdot\vnabla^2 p^*\circ\mathop{\operatorname{d}}\varphi)b -i\hslash \sum\nolimits_j S^j \nabla^F_{V_j} b +\mathcal{O}(\hslash^2). \end{align*} Fix $x\in X$ and $\xi=\mathop{\operatorname{d}}\varphi(x)$, and suppose that $\nabla^2\varphi(x)=0$ and $\nabla^F b(x)=0$. Then \[ (e^{-i\varphi/\hslash}\hslash^m P^* e^{i\varphi/\hslash} b)(x) = (p^*+\hslash p_1^*)(x,\xi)b(x) -i\hslash (\hnabla\cdot\vnabla p^*)(x,\xi)b(x) + \mathcal{O}(\hslash^2), \] which implies the proposition. \end{proof} We apply the geometric calculus to the elasticity operator $L=\nabla^*C\nabla=(-i\nabla)^*C(-i\nabla)$ of an elastic body $(M,g,C)$. The leading symbol $d(\xi)\in \operatorname{Hom}(T_xM,T_x^{(1,1)}M)$ of $-i\nabla$ at $\xi\in T_x^*M$ is given by tensoring with $\xi$. The horizontal derivatives of $d$ and of its adjoint symbol $d^*$ vanish. 
The adjoint $(-i\nabla)^*$ is defined by taking the $L^2$ inner product given by $\mathop{\operatorname{d}} V_M=\mu_g$. Proposition~\ref{prop-geosymb-adjoint} implies that $d^*(\xi)$, which is contraction with $\xi$, is the leading symbol of $(-i\nabla)^*$. Apply Remark~\ref{remark-mult-op} to multiplication by the stiffness tensor $C$. It follows from Proposition~\ref{prop-geosymb-composition} that the leading symbol of $L$ equals \[ \ell+\hslash\ell_1, \quad \ell(\xi) = d(\xi)^* C(x) d(\xi), \quad \ell_1(\xi) = -i \big((\vnabla d(\xi)^*)\cdot \hnabla C\big) d(\xi), \] where $\xi\in T_x^*M$, and $d(\xi):w\mapsto w\otimes\xi$, $w\in T_xM$. Given local coordinates $x^j$, this reads \[ \ell(\xi)=\big(C^{ijkm}\xi_j\xi_m\big), \quad \sqrt{-1}\ell_1(\xi)=\big(C^{ijkm}_{\phantom{ijkm};m}\xi_j\big). \] The $\xi_j$'s are dual coordinates, and the matrix representations correspond to the frame $\partial_j$. Thus the lower order symbol $\sqrt{-1}\ell_1$ is a contraction of the tensor $\nabla C$. Using the Leibniz rule and the symmetries of $C$, we find the formula \begin{equation} \label{eq-ell-leadsymb-simplified} 2i\ell_1 =\vnabla\cdot\hnabla \ell, \end{equation} $i=\sqrt{-1}$. This means that the leading symbol of the elasticity operator is determined from its principal symbol. \section{Subprincipal symbol} In this section we define the subprincipal symbol of systems of real principal type. Examples from elastodynamics are considered. Let $P\in\Psiphg^m(X;E)$ with principal symbol $p$ and characteristic $\operatorname{Char} P=\{\det p=0\}\setminus 0$. Following \cite{Dencker82polarrpt}, $P$ is of real principal type iff the following hold: \begin{compactenum}[(i)] \item \label{rpt-Hamilton} $\operatorname{Char} P$ is a smooth hypersurface with a non-radial Hamilton field $H$ which is homogeneous of degree zero. \item Locally on $\operatorname{Char} P$, the dimension of $\operatorname{ker} p$, the null-space of $p$, is constant. \item \label{rpt-tilde-p} If $q$ is a defining function for $\operatorname{Char} P$, then $\tilde p=qp^{-1}$ extends smoothly across $\operatorname{Char} P$. \end{compactenum} Fixing $H$ as in \eqref{rpt-Hamilton}, we say that $P$ is endowed with the Hamilton field $H$. So the orientation of the bicharacteristic strips, that is, the integral curves of $H$ in $\operatorname{Char} P$, is determined. Furthermore, up to a factor which equals $1$ on $\operatorname{Char} P$, Hamilton functions $q$ satisfying $H=H_q$ on $\operatorname{Char} P$ are uniquely determined. So is $\tilde p|_{\operatorname{Char} P}$. The range of $\tilde p$ equals $\operatorname{ker} p$, and the range of $p$ equals $\operatorname{ker} \tilde p$. The definition of real principal type systems microlocalizes to open conic subsets of $T^*X\setminus 0$. \begin{proposition} Suppose $P$ is of real principal type and endowed with the Hamilton field $H=H_q$. Denote by $p+\hslash p_1$ the leading symbol of $P$. Set \begin{equation} \label{eq-subprinc-symb} p_s =\tilde p p_1 -i \vnabla\tilde p\cdot \hnabla p +(i/2) (\vnabla\cdot\hnabla q)\operatorname{Id}, \end{equation} where $q\operatorname{Id}=\tilde p p$. The restriction $p_s|_{\operatorname{ker} p}$ does not depend on the choice of $q$. \end{proposition} We call $p_s$, and its restriction to the kernel of $p$, the \emph{subprincipal symbol} of $P$ with respect to the Hamilton field $H$. Note that $p_s$ is homogeneous of degree zero because we assume that $q$ is homogeneous of degree $1$. 
For scalar operators of order $1$, our subprincipal symbol agrees with the usual one. \begin{proof} Suppose $H=H_q=H_{q'}$. Thus $q'=cq$ with $c=1$ at $q=0$. Moreover, $q=\tilde p p$ and $q'=\tilde p' p$, $\tilde p'=c\tilde p$. We identify scalar functions $f$ with the corresponding sections $f\operatorname{Id}$. Since $\operatorname{Id}=\operatorname{Id}_E$ is parallel, \eqref{eq-Poisson-vert-horiz} implies $\vnabla q\cdot\hnabla c-\vnabla c\cdot\hnabla q=(Hc)\operatorname{Id}=0$. Therefore, at $q=0$, \[ \vnabla\cdot\hnabla q' = \vnabla\cdot\hnabla q+\vnabla c\cdot\hnabla q +\vnabla q\cdot\hnabla c = \vnabla\cdot\hnabla q+2\vnabla c\cdot\hnabla q. \] Using $\hnabla q=(\hnabla\tilde p)p+\tilde p(\hnabla p)$, we have \( \vnabla\tilde p'\cdot \hnabla p = c\vnabla\tilde p\cdot \hnabla p + \vnabla c\cdot\big(\hnabla q -(\hnabla\tilde p) p\big). \) It follows that \[ \vnabla\tilde p'\cdot \hnabla p -\vnabla\cdot\hnabla q'/2 = \vnabla\tilde p\cdot \hnabla p -\vnabla\cdot\hnabla q/2 -(\vnabla c\cdot \hnabla\tilde p) p \] holds at $q=0$. Therefore, $p_s|_{\operatorname{ker} p}$ does not depend on $c$. \end{proof} Suppose $\Lambda$ is a Lagrangian submanifold of $T^*X$ which is contained in $\operatorname{Char} P$. Suppose $E$ is equipped with a connection $\nabla^E$. Denote by $(\hat E,\nabla^{\hat E})$ the pullback of $(E,\nabla^E)$ by the canonical projection $\Lambda\to X$, and by $\hDB{\Lambda}$ the bundle of half-densities over $\Lambda$. Setting \[ D_H (e\otimes\nu) = (\nabla^{\hat E}_H e)\otimes\nu + e\otimes(\Lie_H \nu), \] the Hamilton field $H$ determines a first order differential operator $D_H$ which operates on sections of the bundle $\hat E\otimes\hDB{\Lambda}\to \Lambda$. Here $e$ is a section of $\hat E$, and $\Lie_H \nu$ denotes the Lie derivative of a nowhere vanishing half-density $\nu$. Note that $D_H$ is well-defined. \begin{lemma} \label{lemma-homog-transp-eq} Suppose $a\in\ensuremath{C^{\infty}}(\Lambda;\hat E\otimes\hDB{\Lambda})$ satisfies $i^{-1}D_H a +p_sa\in\operatorname{ker} p$. Then $pa$ vanishes along a bicharacteristic strip of $H$ if $pa$ vanishes at some point on it. \end{lemma} \begin{proof} $\vnabla q=(\vnabla p)\tilde p +p\vnabla\tilde p$ and $\hnabla q=(\hnabla\tilde p)p+\tilde p\hnabla p$ imply the first equality of \[ p\vnabla\tilde p\cdot \hnabla p -\vnabla p\cdot(\hnabla \tilde p)p = \vnabla q\cdot \hnabla p- \vnabla p\cdot \hnabla q = \nabla_H^{\pi^*\operatorname{End}(E)}p. \] The second follows from \eqref{eq-Poisson-vert-horiz}. Observe that $D_H pa= (\nabla_H^{\operatorname{End}(\hat E)} p)a+ pD_Ha$, and that $\nabla_H^{\operatorname{End}(\hat E)} p$ equals the pullback of $\nabla_H^{\pi^*\operatorname{End}(E)} p$ to $\Lambda$. Using the assumption and \eqref{eq-subprinc-symb}, we get \begin{align*} D_H pa &=\big(p\vnabla\tilde p\cdot \hnabla p -\vnabla p\cdot(\hnabla \tilde p)p-ipp_s\big)a \\ &= -\vnabla p\cdot(\hnabla \tilde p)pa +(\vnabla\cdot\hnabla q)pa/2. \end{align*} This means that the restriction of $pa$ to a bicharacteristic satisfies a homogeneous system of ordinary differential equations. The assertion follows from uniqueness of solutions to initial value problems. \end{proof} Consider the operator $P=L-\rho D_t^2$ of elastodynamics for a three-dimensional elastic body. The principal symbol is $p(\xi,\tau)=\ell(\xi)-\rho(x)\tau^2$ at $(\xi,\tau)\in T_{(x,t)}^*(M\times \mathbb{R})$. Suppose the stiffness tensor is isotropic with Lam\'e parameters satisfying $\lambda+\mu>0$ and $\mu\geq 0$. 
The principal symbol $\ell(\xi)$ of the isotropic elasticity operator $L$ is stated in formula \eqref{eladyn-iso-acoustic-tensor} of Appendix~\ref{sect-elastic-symmetries}. Therefore, the principal symbol $p$ of the operator of elastodynamics for isotropic media is given by \begin{equation} \label{eq-p-iso-eladyn} p(\xi,\tau)/\rho = (c_p^2\xi^2-\tau^2)\pi_p(\xi) + (c_s^2\xi^2-\tau^2)\pi_s(\xi). \end{equation} Here $c_s=\sqrt{\mu/\rho}<\sqrt{(\lambda+2\mu)/\rho}=c_p$ are the speeds of shear and pressure waves. If $(\xi,\tau)\in\operatorname{Char} P$, then $\tau=\pm c_p|\xi|$ or $\tau=\pm c_s|\xi|$ holds. Off $\operatorname{Char} P$ there holds \begin{equation*} \rho p(\xi,\tau)^{-1} = (c_p^2\xi^2-\tau^2)^{-1}\pi_p(\xi) + (c_s^2\xi^2-\tau^2)^{-1}\pi_s(\xi). \end{equation*} So we see that $P$ is of real principal type. For a fluid, $\mu=0$, $P$ is of real principal type where $\tau\neq 0$, despite $c_s$ being zero. Now suppose the elastic medium is non-isotropic and that the eigenvalues of the acoustic tensor $\ell(\xi)$, $\xi\neq 0$, are pairwise distinct. Then $P$ is of real principal type. In fact, setting $\tilde p=(\partial_\tau q) (\partial_\tau p)^{-1}=-(\partial_\tau q/2\rho\tau)\operatorname{Id}$ on $\operatorname{ker} p$, the extension required in \eqref{rpt-tilde-p} is seen to exist, \cite[Proposition 3.2]{Dencker82polarrpt}. By Proposition~\ref{prop-transv-iso-perturb}, this observation applies to transversely isotropic media which are generic small perturbations from isotropy. The axis of rotational symmetry, $J(x)$, should not be parallel to the propagation direction $\xi$. Suppose the elastodynamics operator $P=L-\rho D_t^2$ is of real principal type. A natural endowment of $P$ with a Hamilton field is given by the condition $Ht=1$, so that bicharacteristic strips are parametrized by time. The leading symbol $p+\hslash p_1$ is given by $p=\ell-\rho \tau^2$ and $p_1=\ell_1$. Here we equipped $X=M\times\mathbb{R}$ with the connection pulled back from $M$ under the projection along the time axis. The projection is also used to pull back the (complexified) tangent bundle of $M$ to get the bundle $E\to X$. The formula \eqref{eq-subprinc-symb} for the subprincipal symbol can be simplified to a formal Poisson bracket: \begin{equation} \label{eq-subprinc-eladyn} 2ip_s = \vnabla\tilde p\cdot \hnabla p - \hnabla\tilde p \cdot\vnabla p \quad\text{on $\operatorname{ker} p$.} \end{equation} In fact, using \eqref{eq-ell-leadsymb-simplified}, \eqref{eq-subprinc-symb}, and $\partial_t p=0$, we compute \begin{align*} 2ip_s &= \tilde p \vnabla\cdot\hnabla p + 2\vnabla\tilde p\cdot \hnabla p -\vnabla\cdot\hnabla(\tilde p\, p) \\ &= \vnabla\tilde p\cdot \hnabla p -\hnabla \tilde p\cdot\vnabla p - (\vnabla\cdot\hnabla \tilde p)p, \end{align*} and this implies \eqref{eq-subprinc-eladyn}. \section{Lagrangian solutions} \label{section-Lagr-solns} With a real-valued non-degenerate phase function $\varphi$ and a symbol $a$ there is associated the oscillatory integral $\int e^{i\varphi(x,\theta)}a(x,\theta)\mathop{\operatorname{d}}\theta$. We assume that $a$ is a section of the bundle $E$ pulled back by the map $(x,\theta)\mapsto x$, i.e., $a(x,\theta)\in E_x$ holds. The duality pairing with test functions in $\Ccinfty(X;E^*\otimes\DB{X})$, $E^*$ the dual bundle of $E$, is defined. The oscillatory integral is a distribution section of $E$. The wavefront set is contained in the Lagrangian manifold parametrized by $\varphi$. Lagrangian distributions are microlocally sums of oscillatory integrals. 
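A basic example, recalled for orientation: on $X=\mathbb{R}^n$ with trivial bundle $E$, the phase $\varphi(x,\theta)=x_1\theta$, $\theta\in\mathbb{R}\setminus 0$, is non-degenerate and parametrizes the conormal bundle $\Lambda=N^*\{x_1=0\}$. With the constant amplitude $a(x,\theta)=w/2\pi$, $w\in E$ fixed, the oscillatory integral equals $\delta(x_1)w$, and its wavefront set is contained in $\Lambda$.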
Let $\Lambda\subset T^*X\setminus 0$ be a closed conic Lagrangian submanifold. Denote by $I^\mu(X,\Lambda;E)$ the space of Lagrangian distribution sections of $E$ of order $\leq \mu$ which are associated with $\Lambda$. Denote the Maslov bundle by $M_\Lambda$ and the pullback of $E$ under the projection $\Lambda\to X$ by $\hat E$. There is a principal symbol isomorphism \[ I^\mu(X,\Lambda;\hDB{X}\otimes E)/I^{\mu-1} \equiv S^{\mu+n/4}(\Lambda;M_\Lambda\otimes \hDB{\Lambda}\otimes\hat E)/S^{\mu-1+n/4}, \] \cite[Theorem 25.1.9]{Hormander85anaFour}. We assume that principal symbols are homogeneous. The principal symbol $a$ of $A\in I^\mu(X,\Lambda;\hDB{X}\otimes E)$ at $(y,\eta)\in\Lambda$ is given by \begin{equation} \label{eq-symbol-Lagrdistr} \langle\chi(y), a(y,\eta;\lambda)\rangle \omega^{-1/2} =\lim_{\hslash\to 0+} (2\pi)^{-n/4} \hslash^{\mu-n/4} \int_X e^{-i\psi/\hslash} \langle\chi, A\rangle, \end{equation} $n=\dim X$; \cite[\S 4.1]{Duistermaat73fio}. Here the real-valued function $\psi\in\ensuremath{C^{\infty}}(X)$ satisfies $\psi(y)=0$ and $\mathop{\operatorname{d}} \psi(y)=\eta$. The tangent plane at $(y,\eta)$ of the graph of $\mathop{\operatorname{d}}\psi$ is denoted $\lambda$, and it is assumed that $\lambda$ is transversal to $T_{(y,\eta)}\Lambda$. The section $\chi\in\Ccinfty(X;E^*\otimes\hDB{X})$ vanishes outside a small neighborhood of $y$. Finally, $\omega$ is the canonical volume form of $T^*X$, and angular brackets are duality brackets. A convenient representation of the inverse of the principal symbol map is found microlocally using canonical coordinates $x,\xi$ such that the planes $\xi=$constant are transversal to $\Lambda$. Then $\Lambda$ is parametrized by a phase function $\langle\xi,x\rangle-\Theta(\xi)$, $\Theta\in\ensuremath{C^{\infty}}$ real-valued and homogeneous of degree one. Thus $\Lambda$ is given by $x=\Theta'(\xi)$, \cite[Theorem~21.2.16]{Hormander85anaThree}. If $a(\xi)|\mathop{\operatorname{d}}\xi|^{1/2}$ is the principal symbol of $A\in I^\mu$, then \begin{equation*} A(x) \equiv (2\pi)^{-3n/4} \int_{\mathbb{R}^n}e^{i(\langle\xi,x\rangle-\Theta(\xi))} \PTE_{x\from \Theta'(\xi)}a(\xi)|\mathop{\operatorname{d}} x|^{1/2}\mathop{\operatorname{d}}\xi \quad \mod I^{\mu-1}. \end{equation*} To see this, substitute $\hslash\xi$ for $\xi$, choose $\psi(x)=\langle\eta,x\rangle$, and evaluate \eqref{eq-symbol-Lagrdistr} by stationary phase. We generalize the Duistermaat--Hörmander formula \cite[Theorem~25.2.4]{Hormander85anaFour} about products with vanishing principal symbol to non-scalar real principal type systems. \begin{theorem} \label{theorem-amp-transport-eqn} Suppose $P\in\Psiphg^m(X;E)$ is of real principal type with leading symbol $p+\hslash p_1$ and endowed with the Hamilton field $H$. Let $\Lambda$ be a closed conic Lagrangian submanifold of $\operatorname{Char} P$. Let $A\in I^\mu(X,\Lambda;\hDB{X}\otimes E)$ with the principal symbol $a$. Suppose $PA\in I^{\mu+m-1}(X,\Lambda;\hDB{X}\otimes E)$. Then $p a=0$, and the principal symbol $b$ of $PA$ satisfies \begin{equation} \label{eq-amp-transport} i^{-1} D_H a + p_s a = \tilde p b. \end{equation} \end{theorem} It follows from \eqref{eq-amp-transport} and $\operatorname{ker}\tilde p=\operatorname{im} p$ that $b$ is uniquely determined modulo the range of $p$. Maslov factors do not appear in \eqref{eq-amp-transport} because they are locally constant. A non-geometric coordinate version of \eqref{eq-amp-transport} is given in \cite[Theorem 3.1]{SHRoehrig04lagrsol}. 
\begin{proof}
We work microlocally near a given point in $\Lambda$. Choose $\tilde P\in\Psiphg^{-m+1}(X;E)$ with principal symbol $\tilde p$. The product $Q=\tilde P P$ is of real principal type. By Proposition \ref{prop-geosymb-composition}, the leading symbol $q_0+\hslash q_1$ satisfies $q_0=q\operatorname{Id}$ and, on $\operatorname{ker} p$, $q_1=\tilde p p_1 -i \vnabla \tilde p\cdot \hnabla p$. Therefore, equipping $Q$ with the Hamilton field $H$, the subprincipal symbol of $Q$ is equal to $p_s$. It suffices to prove the theorem with $P$ replaced by $Q$. So we assume $m=1$, $p=q\operatorname{Id}$, $\tilde p=\operatorname{Id}$, and $p_s=p_1+(i/2)(\vnabla\cdot\hnabla q)\operatorname{Id}$. Fixing canonical coordinates as above, $\Lambda=\{x=\Theta'(\xi)\}$ holds. Recall the abbreviations $\partial_j=\partial/\partial x^j$ and $\partial^j=\partial/\partial \xi_j$. Using Taylor's formula, we write $q(x,\xi)=q_j(x,\xi)(x^j-\partial^j\Theta(\xi))$. (Summation over indices in opposite position is implied.) On $\Lambda$, the Hamilton field reads $H= - q_j\partial^j$. Put $\tilde a(x,\xi)=\PTE_{x\from \Theta'(\xi)}a(\xi)\in E_x$. Suppose
\begin{equation} \label{eq-A-Lagr-in-adapted-coords} A(x) = (2\pi)^{-3n/4} \hslash^{-\mu-3n/4} \int_{\mathbb{R}^n}e^{i(\langle\xi,x\rangle-\Theta(\xi))/\hslash} \tilde a(x,\xi)|\mathop{\operatorname{d}} x|^{1/2}\mathop{\operatorname{d}}\xi. \end{equation}
The principal symbol of $A$ is $a(\xi)|\mathop{\operatorname{d}}\xi|^{1/2}$. Apply Proposition~\ref{prop-fund-asymp-expansion} to the oscillatory integral \eqref{eq-A-Lagr-in-adapted-coords}, and get
\begin{align*} PA(x) &= (2\pi)^{-3n/4}\hslash^{-\mu-m-3n/4} \int e^{i(\langle\xi,x\rangle-\Theta(\xi))/\hslash} \tilde b(x,\xi,\hslash)|\mathop{\operatorname{d}} x|^{1/2}\mathop{\operatorname{d}}\xi, \\ \tilde b(x,\xi,\hslash) &= q(x,\xi)\tilde a(x,\xi)+\hslash p_1(x,\xi)\tilde a(x,\xi) -i\hslash\nabla_V^E \tilde a(x,\xi) \\ &\phantom{==} -i\hslash \big((\nabla_V|\mathop{\operatorname{d}} x|^{1/2})|\mathop{\operatorname{d}} x|^{-1/2} +\nabla^2\langle\xi,x\rangle\cdot \vnabla^2 q(x,\xi)/2\big)\tilde a(x,\xi) +\mathcal{O}(\hslash^2) \end{align*}
with $V= (\partial^j q) \partial_j$. A partial integration proves
\[ \int e^{i(\langle\xi,x\rangle-\Theta(\xi))/\hslash} q(x,\xi)\tilde a(x,\xi)\mathop{\operatorname{d}} \xi = i\hslash\int e^{i(\langle\xi,x\rangle-\Theta(\xi))/\hslash} \partial^j \big(q_j(x,\xi) \tilde a(x,\xi)\big)\mathop{\operatorname{d}} \xi. \]
Hence $PA(x) \equiv (2\pi)^{-3n/4} \int e^{i(\langle\xi,x\rangle-\Theta(\xi))} b(x,\xi)|\mathop{\operatorname{d}} x|^{1/2}\mathop{\operatorname{d}}\xi$ modulo $I^{\mu+m-2}$, where
\[ b = i\partial^j (q_j\tilde a)+p_1\tilde a -i\nabla_V^E \tilde a +i(\Gamma^k_{jk}\partial^j q + \Gamma^j_{k\ell}\xi_j \partial^k\partial^\ell q)\tilde a/2. \]
Here the $\Gamma$'s are the Christoffel symbols of the symmetric connection $\nabla$, and we used the formulas $\nabla|\mathop{\operatorname{d}} x|^{1/2}=-\Gamma^k_{jk}\mathop{\operatorname{d}} x^j|\mathop{\operatorname{d}} x|^{1/2}/2$ and $\nabla^2\langle\xi,x\rangle= \nabla \xi_j\mathop{\operatorname{d}} x^j= - \Gamma^j_{k\ell}\xi_j \mathop{\operatorname{d}} x^k\mathop{\operatorname{d}} x^\ell$. The principal symbol of $PA$ is $b(\Theta'(\xi),\xi)|\mathop{\operatorname{d}}\xi|^{1/2}$. Observe that $\nabla^E \tilde a(x,\xi)=0$ holds if $x=\Theta'(\xi)$.
We claim that, at $\Lambda$, the following holds: \begin{equation} \label{eq-claim-DH-a-mu} D_H(a\nu) = -\partial^j (q_j\tilde a)\nu +(\partial^j\partial_j q) a\nu/2, \quad \nu=|\mathop{\operatorname{d}}\xi|^{1/2}. \end{equation} Assuming \eqref{eq-claim-DH-a-mu} we obtain \[ b\nu = i^{-1} D_H (a\nu) + p_1 a\nu +i\big(\partial^j\partial_jq+\Gamma^k_{jk}\partial^j q + \Gamma^j_{k\ell}\xi_j \partial^k\partial^\ell q\big)a\nu/2 \quad\text{at $\Lambda$.} \] Recall \eqref{eq-horiz-a-scalar}. The sum in brackets expresses $\vnabla\cdot\hnabla q$ in coordinates, implying \eqref{eq-amp-transport}. It remains to prove \eqref{eq-claim-DH-a-mu}. Let $(y(t),\eta(t))$ denote the bicharacteristic curve in $\Lambda$ which passes at $t=0$ through $(x,\xi)$, $x=\Theta'(\xi)$. This means that \[ \frac{\mathop{\operatorname{d}}}{\mathop{\operatorname{d}} t} \eta_j(t) =-q_j(y(t),\eta(t)),\quad \eta(0)=\xi,\quad y(t)=\Theta'(\eta(t)). \] Set $\beta_t=y|_{[0,t]}$. Denote by $\lambda_t$ the loop which consists of $\beta_t$ followed by the geodesic from $y(t)$ to $y(0)=x$. There holds \[ -q_j\partial^j \tilde a|_{x=\Theta'(\xi)} = \frac{\mathop{\operatorname{d}}}{\mathop{\operatorname{d}} t} \tilde a(x,\eta(t))\big|_{t=0} = \frac{\mathop{\operatorname{d}}}{\mathop{\operatorname{d}} t} \PTE_{\lambda_t} \PTE_{\beta_t^{-1}} a(\eta(t))\big|_{t=0} = \nabla_H^{\pi^* E} a(\xi). \] The last equality follows from Lemma~\ref{lemma-trivial-holonomy} and the Leibniz rule. Moreover, since $H$ is tangent to $\Lambda$, we may replace $\pi^* E$ by $\hat E$. Note that $\partial^j\partial_j q = \partial^j q_j - (\partial_jq_k)\partial^j\partial^k\Theta$ holds on $\Lambda$. The Lie derivative of the half-density $\nu$ is obtained as follows: \[ (\Lie_H\nu)/\nu = -\partial^j q_j(\Theta'(\xi),\xi) /2 = -\big((\partial_k q_j)\partial^k\partial^j \Theta +(\partial^j q_j)\big) /2 = \partial^j\partial_j q/2 -\partial^j q_j. \] Compare \cite[(25.2.11)]{Hormander85anaFour}. Combining the results proves \eqref{eq-claim-DH-a-mu}. \end{proof} \begin{remark} Given $B\in I^{\mu+m-1}=I^{\mu+m-1}(X,\Lambda;\hDB{X}\otimes E)$, we solve $PA\equiv B$ modulo $I^{\mu+m-2}$ as follows. First we solve transport equations \eqref{eq-amp-transport} to find $A\in I^\mu$ such that $pa=0$, and such that the principal symbol of $B-PA\in I^{\mu+m-1}$ lies in $\operatorname{ker} \tilde p$. Using $\operatorname{ker} \tilde p=\operatorname{im} p$, we then replace $A$ by $A+A'$, where $A'\in I^{\mu-1}$ solves $PA'\equiv B-PA$ modulo $I^{\mu+m-2}$. \end{remark} We use the Lagrangian intersection calculus of \cite{MelroseUhlmann79intersection} to solve real principal type equations with Lagrangian sources. Suppose $(\Lambda_0,\Lambda)$ is a pair of conic Lagrangian submanifolds of $T^*X\setminus 0$ intersecting cleanly in the boundary of $\Lambda$, $\partial\Lambda=\Lambda\cap\Lambda_0$. Let $A$ be an element of \[ I^{k}(\Lambda_0\cup\Lambda) = I^{k}(X,\Lambda_0\cup\Lambda;\hDB{X}\otimes E), \] the space of Lagrangian distributions of order $\leq k$ associated with the intersecting pair. To simplify formulas, we denote intersecting pairs by their union, and we often drop $X$ and the bundle $\hDB{X}\otimes E$ from the notation. There holds $\operatorname{WF} A\subset\Lambda_0\cup\Lambda$. In $\Lambda\setminus\Lambda_0$, resp.\ in $\Lambda_0\setminus\Lambda$, $A$ belongs to $I^k(\Lambda)$ with principal symbol $a$, resp.\ to $I^{k-1/2}(\Lambda_0)$ with principal symbol $a_0$. 
The principal symbol $(a_0,a)$ of $A$ satisfies a compatibility condition at $\partial\Lambda$ which we recall. Suppose $\gamma\in\partial\Lambda$. Choose smooth functions $s_1,\ldots,s_{n-1}$ and $q,t$ such that the following hold. On $\partial\Lambda$, $\sigma=\mathop{\operatorname{d}} s_1\wedge \ldots\wedge \mathop{\operatorname{d}} s_{n-1}\neq 0$ at $\gamma$. The function $q$ vanishes on $\Lambda$ and its differential on $\Lambda_0$ at $\gamma$ is nonzero. The function $t$ vanishes on $\Lambda_0$, is positive on $\Lambda\setminus\Lambda_0$, and $H_qt>0$ at $\gamma$. The Hamilton fields of $q$ and $t$ are $H_q$ and $H_t$. Choose a Lagrangian plane $\mu\subset T_\gamma(T^*X)$ which is transversal to the fiber and to the connecting path between $T_\gamma \Lambda_0$ and $T_\gamma \Lambda$ generated from $T_\gamma \partial\Lambda$ and the straight line between $H_t$ and $H_q$. The symbols $a$ and $qa_0$ are smooth at $\partial\Lambda$, and \begin{equation} \label{eq-iLagr-compat} a|\sigma\wedge \mathop{\operatorname{d}} t|^{-1/2} = (2\pi)^{1/4}e^{\pi i/4} q a_0|\sigma\wedge \mathop{\operatorname{d}} q|^{-1/2} (H_qt)^{1/2} \end{equation} holds at $\gamma$ and $\mu$. Conversely, if $(a_0,a)\in S^{k+(n-2)/4}(\Lambda_0)\times S^{k+n/4}(\Lambda)$ satisfy the compatibility condition \eqref{eq-iLagr-compat}, then there exists $A\in I^k(\Lambda_0\cup\Lambda)$, unique modulo $I^{k-1}(\Lambda_0\cup\Lambda)$, which has $(a_0,a)$ as its principal symbol. On the symbol level, the action of pseudo-differential operators on $\cup_k I^k(\Lambda_0\cup\Lambda)$ is by multiplication of symbols, consistent with the standard calculus. See Theorem 4.13 and formula (5.2) of \cite{MelroseUhlmann79intersection} for exact sequences which encode the symbol calculus. \begin{proposition} \label{prop-MU-Lagr-intsect-calc} Suppose $P\in\Psiphg^m(X;E)$ is of real principal type with principal symbol $p$, endowed with a Hamilton field $H=H_q$, and $\tilde p p=q\operatorname{Id}$. Suppose $\Lambda_0$ is a conic Lagrangian manifold such that $H$ is transversal to $\Lambda_0$ at $\Lambda_0\cap\operatorname{Char} P$. Assume that the $H$-flowout of $\Lambda_0\cap\operatorname{Char} P$ defines a conic Lagrangian manifold $\Lambda$ which intersects $\Lambda_0$ in the boundary of $\Lambda$, $\partial\Lambda=\Lambda_0\cap\operatorname{Char} P$. Let \[ B\in I^{m+k-1/2}(X,\Lambda_0;\hDB{X}\otimes E) \] with principal symbol $b$. There exists $A\in I^k(X,\Lambda_0\cup\Lambda;\hDB{X}\otimes E)$ such that $PA-B\in\ensuremath{C^{\infty}}$. The principal symbol $a$ of $A$ on $\Lambda\setminus\Lambda_0$ satisfies $pa=0$, and it is the solution of the transport equation $i^{-1} D_H a + p_s a = 0$ with initial condition \begin{equation} \label{eq-transport-initval} a|\sigma\wedge \mathop{\operatorname{d}} t|^{-1/2} = (2\pi)^{1/4}e^{\pi i/4} \tilde p b|\sigma\wedge \mathop{\operatorname{d}} q|^{-1/2} (Ht)^{1/2} \end{equation} at $\partial\Lambda$, the notation being as in \eqref{eq-iLagr-compat}. \end{proposition} \begin{proof} To construct $A$, we proceed symbolically. Given, in addition, $C\in I^{m+k-1}(\Lambda_0\cup\Lambda)$, we first solve \begin{equation} \label{eq-PA-BC} PA \equiv B+C\mod I^{m+k-3/2}(\Lambda_0)+I^{m+k-2}(\Lambda_0\cup\Lambda). \end{equation} Denote by $c$ the principal symbol of $C$ on $\Lambda\setminus\Lambda_0$. Let $a$ be the solution of the transport equation $i^{-1} D_H a + p_s a = \tilde p c$ on $\Lambda$ which satisfies the initial condition \eqref{eq-transport-initval} at $\partial\Lambda$. 
Since $p\tilde p=0$ on $\operatorname{Char} P$, $pa=0$ holds at $\partial \Lambda$, hence on all of $\Lambda$ by Lemma~\ref{lemma-homog-transp-eq}. Put $a_0=p^{-1}b$ on $\Lambda_0\setminus\operatorname{Char} P$. The pair $(a_0,a)$ satisfies the compatibility condition \eqref{eq-iLagr-compat}. Choose $A\in I^k(\Lambda_0\cup\Lambda)$ with principal symbol $(a_0,a)$. By the symbol calculus, $PA-B\in I^{m+k-1}(\Lambda_0\cup\Lambda)$. By Theorem~\ref{theorem-amp-transport-eqn} and the properties of $a$, the $\Lambda$-component of the principal symbol of $PA-B-C$ ranges in $\operatorname{ker} \tilde p=\operatorname{im} p$. Therefore, we can find $A'\in I^{k-1}(\Lambda_0\cup\Lambda)$ such that the $\Lambda$-component of the principal symbol of $P(A+A')-B-C\in I^{m+k-1}(\Lambda_0\cup\Lambda)$ vanishes. It follows from the symbol calculus that \eqref{eq-PA-BC} holds with $A$ replaced by $A+A'$. We use \eqref{eq-PA-BC} to recursively construct $A_{j'}\in I^{k-j'}(\Lambda_0\cup\Lambda)$ such that
\[ P\sum\nolimits _{0\leq j'<j} A_{j'} \equiv B \mod I^{m+k-j-1/2}(\Lambda_0)+I^{m+k-j-1}(\Lambda_0\cup\Lambda)\subset I^{m+k-j}(\Lambda_0\cup\Lambda). \]
Asymptotic summation completes the proof.
\end{proof}
Replacing $X$ by $X\times X$, $\Lambda_0$ by the twisted conormal bundle of the diagonal in $X\times X$, and $B$ by the identity operator $\operatorname{Id}$, the foregoing arguments lead to a forward parametrix $E_+$. So, $PE_+\equiv \operatorname{Id}$ holds modulo a smoothing operator, and $E_+$ is an FIO associated with an intersecting pair of canonical relations.
\section{Boundary parametrices of elastodynamics}
Let $(M,g,C)$ be an elastic body. Assuming that the operator of elastodynamics, $P=L-\rho D_t^2$, is of real principal type, we shall construct, microlocally at non-glancing regions, outgoing and incoming Dirichlet parametrices at the boundary of space-time $X=M\times\mathbb{R}$. Furthermore, we define the corresponding microlocal DN maps which map displacements to tractions. The boundary $\partial M$ is endowed with its natural metric tensor and connection. Let $n$ be the interior unit normal field at $\partial M$. Discarding a closed subset of the interior of $M$ removes the interface $N$ from consideration, and $\kappa:(r,y)\mapsto x=\exp_y(r\,n(y))$ maps $[0,R[\times\partial M$ diffeomorphically onto $M$. We assume that $r$ is the distance from $x$ to $\partial M$, and that $y=b(x)$ is the unique point in $\partial M$ nearest to $x$. Parallel transport along boundary orthogonal geodesics, $r\mapsto \kappa(r,y)$, gives an orthogonal bundle isomorphism between $TM$ and the pullback by $b$ of $T_{\partial M}M$. Using the isomorphism, we identify sections of the tangent bundle $TM$ with smooth maps from the interval $[0,R[$ into the space of sections of $T_{\partial M}M$. Analogously, tensor fields over $M$ are identified with $r$-dependent sections of the restrictions to $\partial M$ of corresponding tensor bundles. We equip the bundles with the pullback connections under the inclusion map $\partial M\hookrightarrow M$. Using the identifications, the elasticity operator $L$ and other differential operators are regarded as elements of the algebra generated by $D_r=i^{-1}\partial_r$ and the tangential differential operators. An operator is said to be tangential if it commutes with multiplication by $r$. We use the complexifications of bundles without distinguishing in notation from the real case.
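To fix ideas, here is a small numerical sketch (ours) of the collar coordinates for the unit disk in the flat plane, where boundary orthogonal geodesics are radial lines; it also anticipates the volume factor $J$, $\partial_r\log J=\operatorname{tr} S$, used below.
\begin{verbatim}
# Collar map kappa(r, y) = exp_y(r n(y)) for the flat unit disk:
#   kappa(r, theta) = (1 - r)(cos theta, sin theta).
# Checks: r is the distance to the boundary, b(x) is the nearest boundary
# point, and J = 1 - r satisfies d/dr log J = tr S = -1/(1 - r), J(0) = 1,
# where S is the shape operator of the level circles {r = const}.
import numpy as np

kappa = lambda r, th: (1.0 - r) * np.array([np.cos(th), np.sin(th)])

rng = np.random.default_rng(1)
for _ in range(5):
    r, th = rng.uniform(0.0, 0.9), rng.uniform(0.0, 2.0 * np.pi)
    x = kappa(r, th)
    assert abs((1.0 - np.linalg.norm(x)) - r) < 1e-12          # distance
    assert np.allclose(x / np.linalg.norm(x), kappa(0.0, th))  # b(x)

r, h = 0.3, 1e-6
dlogJ = (np.log(1.0 - (r + h)) - np.log(1.0 - r)) / h  # d/dr log J at r
assert abs(dlogJ - (-1.0 / (1.0 - r))) < 1e-4          # equals tr S
\end{verbatim}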
Covariant derivation in directions tangent to boundary orthogonal geodesics is given by \[ \nabla_{V} u(x)= \partial_r u(r,y), \quad x=\kappa(r,y), \quad V= \PT_{x\from y}n(y). \] Put $\nu={\mathop{\operatorname{d}} r}|_{\partial M}$. Define $B$ by \begin{equation} \label{eq-covar-der-collar} -i\nabla u = D_r u \otimes \nu + Bu. \end{equation} The operator $B$ is tangential. More precisely, $B$ consists of a family, smoothly parametrized by $r$, of differential operators which map sections of $T_{\partial M}M$ to sections of $\operatorname{Hom}(T\partial M,T_{\partial M}M)$. Here $T_{\partial M}M=\mathbb{R} n\oplus T\partial M$ and the corresponding orthogonal decomposition of $TM$ is used. Notice that $B|_{r=0}=-i\nabla^{T_{\partial M}M}$. As in \cite[Proposition~11]{SH11rqm}, the elasticity and the traction operator are \begin{equation} \label{eq-L-and-T-in-collar} L = (D_r -i\operatorname{tr} S)(A_0D_r + A_1) + A_1^\sharp D_r + A_2, \quad iT = A_0 D_r +A_1. \end{equation} The Hessian $S=\nabla^2 r$ is the shape operator associated with the level surfaces of $r$. The proof of \eqref{eq-L-and-T-in-collar} uses \eqref{eq-Green-id-elasticity} with strain tensors replaced by covariant derivatives: \begin{equation} \label{eq-Green-id-elast-covar} \int_{r\geq 0}\int_{\partial M} C\nabla u \overline{\nabla v}\, J\mathop{\operatorname{d}} V_{\partial M}\mathop{\operatorname{d}} r = \int_{r\geq 0}\int_{\partial M} Lu\,\bar v \, J\mathop{\operatorname{d}} V_{\partial M}\mathop{\operatorname{d}} r + \int_{\partial M} Tu\,\bar v \, \mathop{\operatorname{d}} V_{\partial M}. \end{equation} Notice that $\kappa^*\mathop{\operatorname{d}} V_M = J\mathop{\operatorname{d}} V_{\partial M}\mathop{\operatorname{d}} r$ holds with $J$ given by $\partial_r \log J=\operatorname{tr} S$, $J|_{r=0}=1$. Recall that the bundle identification by parallel transport preserves inner products. Insert \eqref{eq-covar-der-collar} into \eqref{eq-Green-id-elast-covar} and perform partial integrations to get \eqref{eq-L-and-T-in-collar}. Put $B^\sharp=J^{-1}B^* J$ with $B^*$ the adjoint of $B$. The coefficients in \eqref{eq-L-and-T-in-collar} are \begin{equation} \label{eq-Aj-op} A_0 = \nu\cdot C \cdot\nu, \quad A_1 = \nu\cdot C B, \quad A_1^\sharp = B^\sharp C\cdot\nu, \quad A_2 = B^\sharp C B. \end{equation} The dots to the left and right of the stiffness tensor $C$ denote contractions with one of the two left and one of the two right slots of $C$, respectively. The operators $A_j$ and $A_j^\sharp$ are of order $\leq j$ with real principal symbols. The principal symbols $a_0(r,y)$ and $a_2(r,y,\eta)$, $0\neq\eta\in T_y^*\partial M$, of $A_0$ and $A_2$ are symmetric positive definite. The principal symbol of $A_1^\sharp$ is the transpose $a_1(r,y,\eta)^t$ of the principal symbol of $A_1$. Coordinatewise, at $r=0$, the principal symbols are given by \begin{equation} \label{eq-aj-coord-rep} a_0^{ik} = \nu_j C^{ijkm} \nu_m, \quad a_1^{ik} = \nu_j C^{ijkm} \eta_m, \quad a_2^{ik} = \eta_j C^{ijkm} \eta_m. \end{equation} Suppose $P= L -\rho D_t^2$ is of real principal type. Endow $P$ with the Hamilton field $H$ which satisfies $Ht=1$ on $\operatorname{Char} P$. Fix \[ \gamma=(\eta,\tau)\in T_{(y,t)}^*(\partial M\times\mathbb{R}). \] In general, there are several bicharacteristics which hit the space-time boundary over $\gamma$. Each corresponds to a unique point in $\operatorname{Char} P\cap (\gamma+\mathbb{R}\nu)$, $\nu=\mathop{\operatorname{d}} r(y)$. Suppose $\gamma$ non-glancing, that is, $Hr$ never vanishes at the intersection. 
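Before turning to the fiberwise analysis of $p(\gamma+s\nu)$ below, the coordinate formulas \eqref{eq-aj-coord-rep} are easy to check numerically. The following sketch (ours) assembles $a_0$, $a_1$, $a_2$ from an isotropic stiffness tensor and verifies that $a_0$ and $a_2$ are symmetric positive definite.
\begin{verbatim}
# a_j at r = 0: a0^{ik} = nu_j C^{ijkm} nu_m, a1^{ik} = nu_j C^{ijkm} eta_m,
# a2^{ik} = eta_j C^{ijkm} eta_m, for the isotropic stiffness tensor
# C^{ijkm} = lam d^{ij} d^{km} + mu (d^{ik} d^{jm} + d^{im} d^{jk}).
import numpy as np

lam, mu = 2.0, 1.0
d = np.eye(3)
C = (lam * np.einsum('ij,km->ijkm', d, d)
     + mu * (np.einsum('ik,jm->ijkm', d, d) + np.einsum('im,jk->ijkm', d, d)))

nu  = np.array([0.0, 0.0, 1.0])   # unit conormal nu = dr
eta = np.array([1.0, 0.0, 0.0])   # tangential covector, orthogonal to nu

a0 = np.einsum('j,ijkm,m->ik', nu, C, nu)
a1 = np.einsum('j,ijkm,m->ik', nu, C, eta)   # first order coefficient
a2 = np.einsum('j,ijkm,m->ik', eta, C, eta)

assert np.allclose(a0, (lam + mu) * np.outer(nu, nu) + mu * d)
for a in (a0, a2):   # symmetric positive definite
    assert np.allclose(a, a.T) and np.linalg.eigvalsh(a).min() > 0.0
\end{verbatim}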
Set $E=T_y M$, and
\[ a(s)= p(\gamma+s\nu) = a_0 s^2 +\big(a_1(\eta)+ a_1(\eta)^t\big)s+a_2(\eta)-\rho\tau^2 \in\operatorname{End}(E), \]
where $p$ denotes the principal symbol of $P$. We are going to apply results of Appendix~\ref{sect-spectral-factorization} to the quadratic self-adjoint polynomial function $a(s)$. The real spectrum of $a(s)$ equals $\operatorname{Char} P\cap (\gamma+\mathbb{R}\nu)$. Let $a'(s)$ denote the derivative of $a(s)$. Define the \emph{outgoing spectrum} (resp.\ \emph{incoming spectrum}) at $\gamma$ as the set $\sigma_{\mathrm{out}}$ (resp.\ $\sigma_{\mathrm{in}}$) of $s\in\mathbb{C}$, $\operatorname{Im} s\geq 0$, such that $\operatorname{ker} a(s)\neq 0$, and $-\tau a'(s)$ is positive (resp.\ negative) definite on $\operatorname{ker} a(s)$ when $s$ is real. Proposition~\ref{prop-sqp-factorization} applies to give spectral factorizations at $\gamma$:
\begin{equation} \label{eq-out-in-spec-factorization} a(s)= (s-q_{{\mathrm{out}}/{\mathrm{in}}}^\sharp) a_0 (s-q_{{\mathrm{out}}/{\mathrm{in}}}), \end{equation}
$\operatorname{spec}(q_{{\mathrm{out}}/{\mathrm{in}}})= \sigma_{{\mathrm{out}}/{\mathrm{in}}}$, and $\operatorname{spec}(q_{{\mathrm{out}}/{\mathrm{in}}}^\sharp)\cap \operatorname{spec}(q_{{\mathrm{out}}/{\mathrm{in}}})= \emptyset$. Bicharacteristic strips are outgoing from (resp.\ incoming to) the boundary iff the distance $r$ increases (resp.\ decreases) as time increases. The outgoing and incoming spectral factorizations are distinct only if the real spectrum of $a(s)$ is non-empty. The terms outgoing and incoming spectral factorization are justified by the following lemma.
\begin{lemma} \label{lemma-p-on-affineline-signs} Suppose $\gamma+s\nu\in\operatorname{Char} P$. The restriction of $-\tau a'(s)$ to the null space of $a(s)$ is positive (resp.\ negative) definite iff $Hr(\gamma+s\nu)$ is positive (resp.\ negative). \end{lemma}
\begin{proof}
Suppose $a(s)w=0$ with $w\neq 0$. Using the real principal type assumption, we extend $w$ to a neighborhood of $\gamma+s\nu$ as a smooth section which satisfies $pw=0$ on $\operatorname{Char} P$. The real-valued function $q=(w|pw)$ vanishes on $\operatorname{Char} P$ and is, in a neighborhood of $\gamma+s\nu$, a defining function of $\operatorname{Char} P$. Indeed, $\partial/\partial\tau$ is transversal to $\operatorname{Char} P$, and the Hamilton field $H_q$ of $q$ satisfies $H_q t=\partial q/\partial\tau =-2\rho\tau (w|w)\neq 0$ on $\operatorname{Char} P$. Thus $H=cH_q$ holds with $c\tau<0$. The assertion follows from $Hr=cH_q r=c(w|a'(s) w)$.
\end{proof}
Microlocally at non-glancing points we decompose $P$ into a product of first order operators. The factorization will be in the algebra generated by $D_r$ and the algebra of tangential pseudo-differential operators, $\Psitang^\infty=\cup_m \Psitang^m$. If $y$ denotes local coordinates of $\partial M$, then tangential pseudo-differential operators are of the form $A(r,y,t,D_y,D_t)$. We ignore the issue that these operators may not be pseudo-differential near boundary conormals because we apply the operators only to distributions which are $\ensuremath{C^{\infty}}$-maps from $r\geq 0$ into distribution sections over the boundary. Equip the space-time boundary $\partial M\times \mathbb{R}$ with the pullback connection obtained from the Levi-Civita connection of the boundary $\partial M$ by projecting off the time axis. We use the geometric symbol calculus for pseudo-differential operators on $\partial M\times \mathbb{R}$.
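The outgoing spectral factorization \eqref{eq-out-in-spec-factorization} can be illustrated numerically. The sketch below (ours; an illustration, not the construction used in the proofs) computes the right root $q_{\mathrm{out}}$ for isotropic media at a mixed-region covector by selecting eigenpairs of a companion linearization according to the sign rule of Lemma~\ref{lemma-p-on-affineline-signs}, and checks the solvent equation and the disjointness of the spectra.
\begin{verbatim}
# a(s) = a0 s^2 + b1 s + b0 with b1 = a1 + a1^t, b0 = a2 - rho tau^2.
# Outgoing selection (tau < 0): Im s > 0, or s real with (v | a'(s) v) > 0.
import numpy as np

lam, mu, rho = 2.0, 1.0, 1.0          # c_s^2 = 1, c_p^2 = 4
nu, eta = np.array([0., 0., 1.]), np.array([1., 0., 0.])
tau = -1.5                            # mixed region: mu < rho tau^2 < lam + 2 mu

a0 = (lam + mu) * np.outer(nu, nu) + mu * np.eye(3)
b1 = (lam + mu) * (np.outer(nu, eta) + np.outer(eta, nu))
b0 = (lam + mu) * np.outer(eta, eta) + (mu - rho * tau**2) * np.eye(3)

a0inv = np.linalg.inv(a0)
comp = np.block([[np.zeros((3, 3)), np.eye(3)],
                 [-a0inv @ b0, -a0inv @ b1]])        # acts on (v, s v)
svals, W = np.linalg.eig(comp)

def outgoing(s, v):
    if abs(s.imag) > 1e-8:
        return s.imag > 0.0
    da = 2.0 * s.real * a0 + b1                      # a'(s) for real s
    return -tau * np.real(np.vdot(v, da @ v)) > 0.0  # Hr > 0 by the lemma

sel = [j for j in range(6) if outgoing(svals[j], W[:3, j])]
assert len(sel) == 3
V = W[:3, sel]
q = V @ np.diag(svals[sel]) @ np.linalg.inv(V)       # right root q_out
q_sharp = -(b1 + a0 @ q) @ a0inv                     # left factor

assert np.linalg.norm(a0 @ q @ q + b1 @ q + b0) < 1e-9   # solvent equation
sq, ss = np.linalg.eigvals(q), np.linalg.eigvals(q_sharp)
assert min(abs(u - v) for u in sq for v in ss) > 1e-3    # disjoint spectra
print(np.round(sq, 4))  # s_s > 0 real (double) and s_p positive imaginary
\end{verbatim}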
\begin{proposition}
In a microlocal neighborhood of a given non-glancing boundary point, there exist $Q, Q^\sharp\in\Psitang^1$ such that
\begin{equation} \label{eq-eladyn-op-factorized} L -\rho D_t^2 \equiv (D_r -Q^\sharp)A_0 (D_r - Q) \end{equation}
holds modulo $\Psitang^{-\infty}+ \Psitang^{-\infty}D_r$. The principal symbol $q$ of $Q$ and the principal symbol $q^\sharp$ of $Q^\sharp$ have disjoint spectra. The bicharacteristic strips of $D_r-Q$ are the outgoing bicharacteristics of $L-\rho D_t^2$.
\end{proposition}
The factor $D_r-Q$ is of real principal type because $L-\rho D_t^2$ is. We call $Q_{{\mathrm{out}}}=Q$ the outgoing right root of $L-\rho D_t^2$. Similarly, there exists an incoming right root $Q_{{\mathrm{in}}}$.
\begin{proof}
On the principal symbol level, \eqref{eq-eladyn-op-factorized} necessitates
\begin{equation} \label{eq-princsymb-eladyn-op-factorized} a_0 s^2 +\big(a_1(\eta)+ a_1(\eta)^t\big)s+a_2(\eta)-\rho\tau^2 = \big(s -q^\sharp(\eta,\tau)\big)a_0 \big(s-q(\eta,\tau)\big). \end{equation}
To find $q$, we use the outgoing factorization \eqref{eq-out-in-spec-factorization}, that is, we choose $q=q_{{\mathrm{out}}}$. The real principal type property and the non-glancing assumption ensure that real eigenvalues $s$ of $a(s)=p(\gamma+s\nu)$ are smooth functions of $\gamma=(y,t,\eta,\tau)$ and of $r\geq 0$, provided $r$ is small. Furthermore, the sign type of $s$ remains locally constant; see Lemma~\ref{sqp-residue-semidef} for the positive or negative type of a real eigenvalue. The integral formula \eqref{sqp-Qr-intformula} implies that the right root $q=q(r,y,\eta,\tau)$ is smooth and is homogeneous of degree $1$ in $(\eta,\tau)$. The same is true of the left root $q^\sharp$. Passing from symbols to operators, \eqref{eq-L-and-T-in-collar} and \eqref{eq-princsymb-eladyn-op-factorized} give
\[ L -\rho D_t^2 = (D_r -Q^\sharp)A_0 (D_r - Q) + R_0D_r+R_1 \]
with $R_j\in\Psitang^j$. The spectra of $q$ and $q^\sharp$ are disjoint. Hence the equations $s_jq-q^\sharp s_j=r_j$ have unique solutions $s_j$. The principal symbol $r_j$ of $R_j$ is homogeneous of degree $j$. So, $s_j$ is a homogeneous symbol of degree $j-1$. Choose $S_j\in\Psitang^{j-1}$ with principal symbols $s_j$. Replacing $Q$ by $Q-A_0^{-1}(S_0Q+S_1)$ and $Q^\sharp$ by $Q^\sharp+(Q^\sharp S_0+S_1)A_0^{-1}$, the orders of the error terms are decreased by one step, i.e.\ $R_j\in\Psitang^{j-1}$ holds. Continuing this procedure we find, for every $k\in\mathbb{N}$, $Q$ and $Q^\sharp$ such that $R_j\in\Psitang^{j-k}$ holds. The proof is completed using asymptotic summation.
\end{proof}
As in \eqref{eq-eladyn-op-factorized}, let $Q=Q_{{\mathrm{out}}}$ be the outgoing right root near a conic non-glancing set $\Gamma\subset T^*(\partial M\times \mathbb{R})$ with compact base. The principal symbol $q=q(r,y,t,\eta,\tau)$ has a spectral decomposition $q=\sum_s s\psi_s + \psi$. Here $\psi_s$ projects onto the eigenspace of the outgoing real eigenvalue $s$, and the spectrum of $\psi$ is contained in the open upper half-plane. Following \cite{Taylor75reflection}, we find that $(D_r-Q)W\equiv W(D_r-Q')$ holds with $W\in\Psitang^0$ elliptic and $Q'\in\Psitang^1$ which has a block diagonal principal symbol with blocks corresponding to the spectral decomposition of $q$. Hence, microlocally near the given non-glancing point, we have a parametrix $U$ of the Dirichlet problem
\[ (D_r-Q)U\equiv 0, \quad U|_{r=0}\equiv \operatorname{Id}.
\] Furthermore, $U=\sum_s U_s+V$ where $U_s$ is a (tangential) Fourier integral operator associated with the canonical relation given by the bicharacteristic strips outgoing from $(0,y,t,s,\eta,\tau)$. The principal symbol of $U_s$ at $r=0$ equals $\psi_s$. The Poisson type operator $V$ maps boundary data to sections which are $\ensuremath{C^{\infty}}$ in $r>0$. Microlocally we construct outgoing and incoming boundary parametrices, $U_{\mathrm{out}}$ and $U_{\mathrm{in}}$, such that $u=U_{{\mathrm{out}}/{\mathrm{in}}}f$ solves $Lu-\rho D_t^2u\equiv 0$ and $u|_{r=0}\equiv f$ microlocally near $\Gamma$ if $\operatorname{WF}(f)\subset\Gamma$. The wavefront set of $u=U_{{\mathrm{out}}}f$ (resp.\ $u=U_{{\mathrm{in}}}f$) is disjoint from incoming (resp.\ outgoing) bicharacteristics which pass over $\Gamma$, justifying the terminology. In view of \eqref{eq-L-and-T-in-collar}, in the non-glancing region, the boundary traction is given as follows: \[ Tu|_{r=0} = -i(A_0 D_r u +A_1 u)|_{r=0} \equiv Z_{{\mathrm{out}}/{\mathrm{in}}} f, \quad u=U_{{\mathrm{out}}/{\mathrm{in}}}f. \] Here $Z_{{\mathrm{out}}/{\mathrm{in}}} = -i(A_0 Q_{{\mathrm{out}}/{\mathrm{in}}}+A_1)|_{r=0}$ is the outgoing/incoming DN-map which maps boundary traces of displacements (Dirichlet data) to boundary traces of tractions (Neumann data). The principal symbol $q= q_{{\mathrm{out}}/{\mathrm{in}}}$ of $Q_{{\mathrm{out}}/{\mathrm{in}}}$ is the unique right root of the spectral factorization \eqref{eq-princsymb-eladyn-op-factorized}. The principal symbol of $Z_{{\mathrm{out}}/{\mathrm{in}}}$ is $z_{{\mathrm{out}}/{\mathrm{in}}}=-i(a_0 q_{{\mathrm{out}}/{\mathrm{in}}}+a_1)$. We call $z_{{\mathrm{out}}/{\mathrm{in}}}$ boundary impedance because it corresponds to the surface impedance tensor in the physics literature, \cite{LotheBarnett85surfwaveimped}. The components of $M^\circ\setminus N$ extend as manifolds with smooth boundaries. So the results of the present section not only give Dirichlet parametrices and impedance operators for the boundary $\partial M$ but also for both sides $N^\pm$ of the interface $N$. Thus we have, microlocally at non-glancing regions in $T^*(N^\pm\times\mathbb{R})$, Dirichlet parametrices $U^{\pm}_{{\mathrm{out}}/{\mathrm{in}}}$ and impedances $z^{\pm}_{{\mathrm{out}}/{\mathrm{in}}}$. \section{Reflection and transmission} \label{sect-refl-transm} We turn to the microlocal study of problem \eqref{eq-eladyn-source-problem} at the boundary and at the interface. Given sources $h$ and $h_j$ on the boundary $\partial M\times\mathbb{R}$ and on the interior interface $N\times\mathbb{R}$, respectively, we look for solutions of $Pu=0$ which satisfy, microlocally near given non-glancing points, \[ Tu|_{\partial M\times\mathbb{R}} = h,\quad u|_{N^+\times\mathbb{R}}-u|_{N^-\times\mathbb{R}}= h_0,\quad Tu|_{N^+\times\mathbb{R}} + Tu|_{N^-\times\mathbb{R}}= h_1. \] Recall that the jump of tractions across an interface is the sum of tractions of both sides because our definition of traction involves the oriented normal. We assume $u$ to be microlocally outgoing; incoming solutions are handled analogously. Sources are either given as data or arise as traces of incoming waves. Using the outgoing Dirichlet parametrices, we make the ansatz $u=U_{{\mathrm{out}}}f$ and $u=U_{{\mathrm{out}}}^\pm f^\pm$ at the boundary and at the interface, respectively. 
This leads, microlocally in non-glancing regions, to the equations $Z_{{\mathrm{out}}}f=h$ on $\partial M\times\mathbb{R}$ and
\[ f^+-f^- = h_0, \quad Z^+_{{\mathrm{out}}} f^+ + Z^-_{{\mathrm{out}}} f^- = h_1 \quad\text{on $N\times\mathbb{R}$.} \]
The equation on $N\times\mathbb{R}$ reduces to $(Z^+_{{\mathrm{out}}} + Z^-_{{\mathrm{out}}}) f^+ = h_1 + Z^-_{{\mathrm{out}}} h_0$. Therefore, we ask for properties of $Z_{{\mathrm{out}}}$ and of $Z^+_{\mathrm{out}} +Z^-_{\mathrm{out}}$, in particular, whether these are elliptic operators. The projection $T^*(\partial M\times\mathbb{R})\to \partial M$ off fiber and time axis induces, by pullback of the complexification of the bundle $T_{\partial M}M\to\partial M$, a rank $3$ bundle $E$. Similarly, we have rank $3$ bundles $E^\pm\to T^*(N\times\mathbb{R})$ with respect to the sides $N^\pm$ of the interface $N$. Points in $T^*(N\times\mathbb{R})\setminus 0$ are said to be non-glancing if they are non-glancing for both sides of $N$. Non-glancing sets are open. Over the non-glancing sets, $E$ and $E^\pm$ split into subbundles of locally constant rank: $E = E_c\oplus E_r$ and $E^\pm = E_c^\pm\oplus E_r^\pm$. The subbundles with subscripts ${}_c$ and ${}_r$ correspond respectively to the non-real and the real spectrum of the outgoing spectral factorization; see \eqref{eq-H-spec-decomp} for the fiberwise decomposition where $Q=q_{{\mathrm{out}}}$. The non-glancing set decomposes into three open subsets, the hyperbolic, mixed, and elliptic regions. The hyperbolic region $\ensuremath{\mathcal H}$ is defined by $E_c=0$ and $E_c^+\cap E_c^- =0$, and the elliptic region $\ensuremath{\mathcal E}$ by $E_c=E$ and $E_c^+=E_c^-=E$. The complement of $\ensuremath{\mathcal H}\cup\ensuremath{\mathcal E}$ in the non-glancing set constitutes the mixed region $\ensuremath{\mathcal M}$. Notice that, at $N$, $\ensuremath{\mathcal H}$ includes any region which is hyperbolic with respect to at least one side of $N$, e.g., hyperbolic--mixed or hyperbolic--elliptic regions in the sense of \cite{StefUhlVasy21transm} are contained in $\ensuremath{\mathcal H}$. In this section we study outgoing impedances $z$ and $z^++z^-$ only at $\ensuremath{\mathcal H}\cup\ensuremath{\mathcal M}$. The elliptic region $\ensuremath{\mathcal E}$ is analyzed in Section~\ref{sect-surf-waves}.
\begin{proposition} \label{prop-z-ell-at-Hyp} $Z_{\mathrm{out}}$ and $Z_{{\mathrm{out}}}^+ + Z_{{\mathrm{out}}}^-$ are elliptic in $\ensuremath{\mathcal H}$. \end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop-Z-nonsing-if-sigma-real} and Lemma~\ref{lemma-p-on-affineline-signs} that $-\tau\operatorname{Im} (u|z_{{\mathrm{out}}} u) \geq 0$ holds for $u\in E$ with strict inequality if the $E_r$-component of $u$ is non-zero. This implies
\[ -\tau\operatorname{Im} \big(u|(z_{{\mathrm{out}}}^++z_{{\mathrm{out}}}^-) u\big) > 0 \quad\text{if $u\not\in E_c^+ \cap E_c^-$.} \]
Hence $\operatorname{ker} z_{{\mathrm{out}}}\subset E_c$ and $\operatorname{ker} (z_{{\mathrm{out}}}^+ + z_{{\mathrm{out}}}^-)\subset E_c^+\cap E_c^-$.
\end{proof}
No assumption about the symmetry group of the elastic medium is made in Proposition~\ref{prop-z-ell-at-Hyp}. For an analogous result in $\ensuremath{\mathcal M}$, we shall assume the medium is isotropic and use a matrix representation of the impedance.
Matrix representations of the impedance $z=z_{\mathrm{out}}(\eta,\tau)$ are obtained from eigenvectors of the polynomial $a(s)=p(\gamma+s\nu)$:
\begin{equation} \label{eq-z-on-eigenvec} izv=(s a_0+a_1)v \quad\text{whenever $a(s)v=0$ and $s\in\sigma_{\mathrm{out}}$ hold.} \end{equation}
Here $q=q_{\mathrm{out}}(\eta,\tau)$ is the outgoing right root introduced in \eqref{eq-out-in-spec-factorization}. Equivalently, the six-dimensional vector $[v,izv]^t$ is an eigenvector of the Stroh matrix \eqref{factor-Stroh-matrix}. In case of non-semisimple eigenvalues, principal vectors must be used in the obvious way. Suppose the elastic material is isotropic with Lam\'e parameters $\lambda, \mu$ satisfying $\mu>0$ and $\lambda+\mu>0$. In view of \eqref{eq-p-iso-eladyn}, we consider the polynomial
\[ a(s)= p(\xi,\tau) = (\lambda+\mu)\xi\otimes\xi +\mu\xi^2\operatorname{Id} -\rho\tau^2\operatorname{Id}, \quad \xi=\eta+s\nu. \]
Recall that $\eta,\nu\in T_y^* M$ are orthogonal, and $|\nu|=1$. Observe that
\[ a_0= (\lambda+\mu)\nu\otimes\nu + \mu\operatorname{Id}, \quad a_1(\eta)= (\lambda+\mu)\nu\otimes\eta. \]
The inequality $\rho\tau^2>(\lambda+2\mu)\eta^2$ defines the hyperbolic region, and $(\lambda+2\mu)\eta^2>\rho\tau^2>\mu\eta^2$ the mixed region. Suppose $\tau<0$. The outgoing eigenvalues $s=s_{p/s}$, $\operatorname{ker} a(s)\neq 0$, are the roots of $s^2=\tau^2/c_{p/s}^2-\eta^2\neq 0$ which are positive real or positive imaginary. Choose $\zeta\neq 0$ orthogonal to $\nu$ and $\eta$. Fix $\tilde s$ so that $\eta+\tilde s\nu$ and $\eta+s_s\nu$ are orthogonal, i.e., $\eta^2+\tilde s s_s=0$. A straightforward computation based on \eqref{eq-z-on-eigenvec} gives
\begin{align*} iz \zeta &= s_s\mu\zeta, \\ iz (\eta+\tilde s\nu) &= -\mu\eta^2 \nu +s_s\mu\eta, \\ iz (\eta+s_p\nu) &= (\rho\tau^2-\mu\eta^2)\nu +s_p\mu\eta. \end{align*}
The formulas represent $z=z_{\mathrm{out}}$ in a basis of SH-SV-P polarizations.
\begin{proposition} \label{prop-z-ell-at-Mix} Suppose the elastic body is isotropic. Then $Z_{\mathrm{out}}$ and $Z_{{\mathrm{out}}}^+ + Z_{{\mathrm{out}}}^-$ are elliptic in $\ensuremath{\mathcal M}$. \end{proposition}
\begin{proof}
Suppose $(\eta,\tau)\in\ensuremath{\mathcal M}$. By the proof of Proposition~\ref{prop-z-ell-at-Hyp} it suffices to show that $\operatorname{ker} z \cap E_c =0$ and $\operatorname{ker}(z^+ + z^-)\cap E_c^+\cap E_c^-=0$. (We dropped the subscript ${}_{\mathrm{out}}$.) Since $s_s$ is real, $E_r$ contains the two-dimensional space orthogonal to $\xi=\eta+s_s\nu$. Therefore, $\dim E_c\leq 1$ and $\min(\dim E_c^+,\dim E_c^-)\leq 1$. Suppose $\dim E_c^+=1$. Thus $s_p^+\in i\mathbb{R}_+$, $E_c^+=\mathbb{C}(\eta+s_p^+\nu)$, and
\[ iz^+ (\eta+s_p^+\nu)= (\rho^+\tau^2-\mu^+\eta^2)\nu +s_p^+\mu^+\eta \neq 0. \]
We claim that the inner product of $(iz^++iz^-)(\eta+s_p^+\nu)$ and $\eta$ is not zero. If $\dim E_c^-=1$, then either $E_c^+\cap E_c^-=0$ or $s_p^+=s_p^-$, implying the claim in this case. Suppose $E_c^-=E$, i.e., $s_s^-$ and $s_p^-$ are positive imaginary. Define $\hat s$ and $t$ by $\eta^2+\hat s s_s^-=0$ and $t s_p^-+(1-t)\hat s=s_p^+$. Observe that $\hat s\in i\mathbb{R}_+$ and $|s_p^\pm|<|\eta|<|\hat s|$, hence $t>0$. The inner product of
\[ (iz^++iz^-)(\eta+s_p^+\nu) = iz^+(\eta+s_p^+\nu)+tiz^-(\eta+s_p^-\nu) + (1-t)iz^-(\eta+\hat s\nu) \]
and $\eta$ equals $\eta^2$ times the sum $s_p^+\mu^++ts_p^-\mu^-+(1-t)s_s^- \mu^-$. The sum is positive imaginary because $0<s_s^-<s_p^-$ in $i\mathbb{R}$, and $t>0$. This proves the claim and the proposition.
\end{proof}
\section{Surface waves} \label{sect-surf-waves}
The elliptic region is the open subset $\ensuremath{\mathcal E}\subset T^*(\partial M\times\mathbb{R})\cup T^*(N\times\mathbb{R})$ over which no incoming or outgoing bicharacteristics pass. However, elastic waves satisfying zero traction boundary conditions may have singularities in $\ensuremath{\mathcal E}$ because $\operatorname{Char} Z\cap\ensuremath{\mathcal E}$ may be non-empty. This happens for isotropic media and is the reason for the existence of Rayleigh surface waves. The free surface wave theory of \cite{LotheBarnett85surfwaveimped} implies, when translated into the language of microlocal analysis as done below, that $Z$ is, for any elastic symmetry, of real principal type in $\ensuremath{\mathcal E}$. The elliptic region $\ensuremath{\mathcal E}$ is called the subsonic region in Barnett--Lothe theory, and the part of the glancing region which bounds $\ensuremath{\mathcal E}$ is called transonic. Suppose $0\neq\eta\in T_y^*(\partial M)$. Set $\nu=\mathop{\operatorname{d}} r(y)$ as before. For real $\tau$, the condition $\gamma=(\eta,\tau)\in \ensuremath{\mathcal E}$ is equivalent to the positive definiteness of
\[ p(\gamma+s\nu) = a_0 s^2 +\big(a_1(\eta)+ a_1(\eta)^t\big)s+a_2(\eta)-\rho\tau^2, \quad s\in\mathbb{R}. \]
Decreasing $\tau^2$ does not lead out of $\ensuremath{\mathcal E}$. Put $\tau_\eta=\sup_{(\eta,\tau)\in\ensuremath{\mathcal E}}\tau$. Then $|\tau|=\tau_\eta>0$ describes the part of the glancing set which bounds $\ensuremath{\mathcal E}$. Denote by $q=q(\eta,\tau)$ the unique spectral right root of the polynomial $p(\gamma+s\nu)$ with spectrum contained in the upper complex half-plane. The principal symbol of $Z$ is the impedance $z$ given by $iz(\eta,\tau)=a_0q(\eta,\tau)+a_1(\eta)$. The following fact is well-known.
\begin{lemma} \label{lemma-z-posdef} At $\tau=0$ the impedance $z$ is positive definite. \end{lemma}
\begin{proof}
Suppose $\tau=0$. Let $u_0\in E$. Put $u(r)=e^{irq}u_0$ and $D=-i\partial_r$. As $r\to\infty$, $u(r)$ decreases to zero exponentially. Using \eqref{eq-Z-int-over-r} and \eqref{eq-aj-coord-rep}, we get
\begin{align*} (zu_0|u_0) &= \int_0^\infty (a_0Du|Du)+(a_1(\eta)u|Du)+(Du|a_1(\eta)u)+(a_2(\eta)u|u)\mathop{\operatorname{d}} r \\ &= \int_0^\infty (Cw(r)| w(r)) \mathop{\operatorname{d}} r. \end{align*}
Here $w(r)=Du(r) \nu+u(r) \eta$ is the symmetrization of the tensor $Du(r)\otimes \nu+u(r)\otimes\eta$. Since the stiffness tensor $C$ is positive definite, $(zu_0|u_0)\geq 0$ holds. Suppose $(zu_0|u_0)\leq 0$. This implies $w=0$. Since $\nu$ and $\eta$ are orthogonal, $0=(w(r)|\nu\nu)=(Du(r)|\nu)|\nu|^2$ holds. We infer that $(u(r)|\nu)$ is constant, hence zero. Suppose $\zeta$ is orthogonal to $\nu$. It follows that $(Du(r)|\zeta)|\nu|^2= (w(r)|\nu \zeta)=0$. Thus $Du=0$, and $u_0=0$.
\end{proof}
The boundary value $v=u|_{\partial M\times\mathbb{R}}$ of an elastic wave $u$ which satisfies the free surface boundary condition, $Tu|_{\partial M\times\mathbb{R}}=0$, is a solution of the equation $Zv=0$. Next, we show that the results of Section~\ref{section-Lagr-solns} apply to $Zv=0$ in $\ensuremath{\mathcal E}$.
\begin{proposition} \label{prop-Z-realprinctype} $Z$ is of real principal type in $\ensuremath{\mathcal E}$.
The leading symbol of $Z$ is $z+\hslash z_1$, where $z=-i(a_0q+a_1)$ is the impedance, and $z_1$ is the unique solution of
\begin{equation} \label{eq-z-lead-symb} q^*z_1 -z_1q + (\vnabla\cdot\hnabla a_1^t)q - \vnabla q^*a_0\cdot\hnabla q +ia_{2,1}+ i\operatorname{tr}(S)z+i\partial_r z =0. \end{equation}
Here $S=\nabla^2 r$, $\partial_r z=(\partial z/\partial r)|_{r=0}$, and $a_2+\hslash a_{2,1}$ is the leading symbol of $A_2$.
\end{proposition}
We used $z=-i(a_0q+a_1)$ to define the derivative $\partial_r z$.
\begin{proof}
In $\ensuremath{\mathcal E}$, $z$ is self-adjoint. By Proposition~\ref{prop-Re-Z-posdef}, $\operatorname{Re} z$ is positive definite; consequently, at least two eigenvalues of $z$ are positive. Proposition~\ref{prop-Zdot-posdef} implies that $\partial z/\partial \tau^2$ is negative definite. Using perturbation theory of eigenvalues, e.g.\ \cite[Theorem \RN 2-5.4]{Kato76perturb}, we find that $\partial \det z/\partial\tau\neq 0$ holds at $\operatorname{Char} Z$. In view of Lemma~\ref{lemma-z-posdef}, either $z(\eta,\tau)$ is positive definite for $|\tau|<\tau_\eta$, or there exists a positive smooth function $\hat\tau$ on an open neighborhood $W\subset T^*(\partial M)$ of $\eta$, such that, for $\zeta\in W$, $(\zeta,\tau)\in\operatorname{Char} Z$ holds iff $|\tau|=\hat\tau(\zeta)$. Clearly, $\dim\operatorname{ker} z(\eta,\tau)= 1$ holds for $(\eta,\tau)\in\operatorname{Char} Z$. The real principal type property of $Z$ now follows. For the leading symbol, we proceed as in \cite[Lemma 18]{SH11rqm}. Recall \eqref{eq-L-and-T-in-collar}. Comparing coefficients of powers of $D_r$, we see that \eqref{eq-eladyn-op-factorized} is equivalent to the following two equations between tangential operators:
\begin{align*} A_0Q +A_1 + Q^\sharp A_0 + A_1^\sharp -i\operatorname{tr}(S)A_0 &= 0, \\ [D_r,A_0Q+A_1] -Q^\sharp A_0 Q +A_2-\rho D_t^2 -i\operatorname{tr}(S) A_1 &=0. \end{align*}
Multiply the first equation from the right by $Q$, and add the equations:
\begin{equation} \label{eq-to-determine-leadsymb-of-Z} A_0Q^2+A_1Q+ A_1^\sharp Q +A_2-\rho D_t^2 = -\operatorname{tr}(S) Z -[D_r,A_0Q+A_1]. \end{equation}
The principal symbol of the first order operator on the right-hand side is $-\operatorname{tr}(S)z-\partial_r z$. Using Propositions~\ref{prop-geosymb-composition} and~\ref{prop-geosymb-adjoint}, we compute the leading symbol of the second order operator on the left. First, we determine the symbols of the operators $A_1$ and $A_1^\sharp$ defined in \eqref{eq-Aj-op}. The horizontal derivative of the principal symbol of $B|_{r=0}$ vanishes, because this operator is a covariant derivative. It follows that the leading symbol of $A_1$ equals its principal symbol $a_1$. At $r=0$, $A_1^\sharp=A_1^*$ holds. The leading symbol of $A_1^*$ is $a_1^t -i\hslash \vnabla \cdot \hnabla a_1^t$, and those of $iZ+A_1^*$ and $Q=A_0^{-1}(iZ-A_1)$ are
\[ -q^*a_0+i\hslash (z_1-\vnabla\cdot\hnabla a_1^t) \quad\text{and}\quad q+i\hslash a_0^{-1}z_1, \]
respectively. Therefore, the leading symbol of $A_0Q^2+(A_1+ A_1^*)Q = (iZ+A_1^*)Q$ is
\[ (a_0q^2+a_1q+a_1^tq)+i\hslash\big((z_1-\vnabla\cdot\hnabla a_1^t)q-q^* z_1 +\vnabla q^* a_0\cdot\hnabla q \big).\]
The principal symbol of the second order operator on the left-hand side of \eqref{eq-to-determine-leadsymb-of-Z} must be zero. This is confirmed by the solvency equation \eqref{sqp-solvency}.
Equating the lower order term of the leading symbol on the left with the principal symbol on the right of \eqref{eq-to-determine-leadsymb-of-Z} proves \eqref{eq-z-lead-symb}.
\end{proof}
It is clear from the proof that, for $(\eta,\tau)\in\ensuremath{\mathcal E}\setminus\operatorname{Char}(Z)$, either $z(\eta,\tau)$ is positive definite or $z(\eta,\tau)$ has exactly one negative eigenvalue. In the latter case, there exists a unique $0<\tau_R<|\tau|$ satisfying $(\eta,\tau_R)\in\operatorname{Char}(Z)$. This is the basic criterion of Barnett--Lothe theory for the existence of Rayleigh-type surface waves. Other useful existence criteria are stated in terms of the limit of $z(\eta,\tau)$ as $\tau\to \tau_\eta$; see \cite{LotheBarnett85surfwaveimped}. Surface waves $u$ are, microlocally near $\ensuremath{\mathcal E}\cap\operatorname{Char}(Z)$, given by $u=Uv$, where $U$ is the Dirichlet parametrix, and $v$ solves the real principal type system $Zv\equiv 0$ on $\ensuremath{\mathcal E}$.
\begin{remark}
At the interior interface $N\times\mathbb{R}$, surface waves of Stoneley-type may occur. These do arise when $\operatorname{Char}(Z^++Z^-)\cap\ensuremath{\mathcal E}\cap T^*(N\times\mathbb{R})\neq\emptyset$. In fact, the proof of Proposition~\ref{prop-Z-realprinctype} applies also to $Z^++Z^-$ instead of $Z$. To see this, notice the positive definiteness of $z^++z^-$ at $\tau=0$, of $\operatorname{Re}(z^++z^-)$ and of $-\partial(z^++z^-)/\partial \tau^2$ in $\ensuremath{\mathcal E}\cap T^*(N\times\mathbb{R})$.
\end{remark}
\section{Propagation of polarization} \label{section-propag-polar}
Suppose $P=L-\rho D_t^2$ is of real principal type. Endow $\operatorname{Char} P$ with the Hamilton field $H$ determined by $Ht=1$. We say that a Lipschitz continuous map $\gamma:I\to\bTstar(M\times\mathbb{R})$ from an interval $I\subset\mathbb{R}$ into the $b$-cotangent bundle is an outgoing broken bicharacteristic of \eqref{eq-eladyn-source-problem} iff $\gamma(I)$ does not intersect the glancing set and, at each time $t\in I$, one of the following holds:
\begin{compactenum}[(i)]
\item $\gamma'(t)=H(\gamma(t))$, or
\item\label{item-bichar-refl-transm} $\gamma(s) \in T^*(M'\times\mathbb{R})\cap\operatorname{Char} P$ for $s-t>0$ small.
\end{compactenum}
Case \eqref{item-bichar-refl-transm} arises at points of reflection or transmission. We call the sequence of components of $\operatorname{Char} P$ containing bicharacteristic segments the signature of a broken bicharacteristic $\gamma$. We look for outgoing solutions $u$ of \eqref{eq-eladyn-source-problem} which are generated by Lagrangian sources. By outgoing we mean that time $t$ is bounded from below on $\operatorname{supp}(u)$. Since the boundary and the interface are non-characteristic,
\[ \mathbin{^b \operatorname{WF}}(u)\subset T^*(M'\times\mathbb{R})\cup T^*(\partial M\times\mathbb{R})\cup T^*(N\times\mathbb{R})\subset \bTstar(M\times\mathbb{R}) \]
has to hold. Here we use the abbreviation $M'=M\setminus (\partial M\cup N)$ from Section~\ref{sect-linear-elast}. By Proposition~\ref{prop-z-ell-at-Mix}, the first assumption of the following proposition is true in the case of isotropic elastic materials.
\begin{proposition}
Suppose $Z_{\mathrm{out}}$ and $Z^+_{\mathrm{out}} + Z^-_{\mathrm{out}}$ are elliptic in $\ensuremath{\mathcal M}$. Suppose $f\in I^{k+3/2}(\Lambda_f)$ is a compactly supported Lagrangian distribution in $M'\times\mathbb{R}$, and $H$ is transversal to the Lagrangian manifold $\Lambda_f$ at $\Lambda_f\cap\operatorname{Char} P$.
Suppose every broken bicharacteristic issuing from $\Lambda_f\cap\operatorname{Char} P$ extends until a time $t_1$. There is a unique outgoing solution $u(x,t)$, $t<t_1$, of the transmission problem \eqref{eq-eladyn-source-problem} which satisfies homogeneous boundary and transmission conditions, $h=h_j=0$. Moreover, $u$ is a Lagrangian distribution with associated Lagrangian manifolds labelled by signatures of the broken bicharacteristics issuing from $\Lambda_f$. Along any broken bicharacteristic the polarization is completely determined by transport equations \eqref{eq-transport-polariz} along segments, and by initial values at $\Lambda_f\cap\operatorname{Char} P$ and at points on the boundary or the interface in terms of reflection and transmission laws; see \eqref{eq-reflection-law} below. \end{proposition} \begin{proof} Applying Proposition~\ref{prop-MU-Lagr-intsect-calc}, we obtain $w\in I^{k}(\Lambda_f\cup\Lambda)$ such that $Pw\equiv f$ holds modulo a function which is $\ensuremath{C^{\infty}}$ except for jump discontinuities across the interface $N\times\mathbb{R}$. Restriction to a hypersurface is an FIO of order $1/4$. The Cauchy data of $w$, that is the restrictions of $w$ and the traction $Tw$ to the boundary and the interface, are Lagrangian distributions of orders $k+1/4$ and $k+5/4$. The Cauchy data of $w$ are related by the incoming DN operators $Z_{\mathrm{in}}$. Note that, because $\Lambda$ does not intersect the glancing set, the Lagrangian manifold associated with the Cauchy data is the restriction of $\Lambda$ to $\ensuremath{\mathcal H}\cup\ensuremath{\mathcal M}$. Replace $u$ by $u+w$. Now we have a transmission problem \eqref{eq-eladyn-source-problem} with $f\equiv 0$, and $h,h_j$ are, in $\ensuremath{\mathcal H}\cup\ensuremath{\mathcal M}$, given by the Cauchy data of $w$. Moreover, $h$ and $h_1$ are Lagrangian distributions of order $k+5/4$, and $h_0$ of order $k+1/4$. Following the outline in Section~\ref{sect-refl-transm}, we set \begin{equation} \label{eq-u-of-sources-h-hj} u_1 = UZ^{-1}h + U^+(Z^++Z^-)^{-1}(h_1+Z^- h_0) + U^-(Z^++Z^-)^{-1}(h_1-Z^+ h_0). \end{equation} Here, inverses are parametrices of the elliptic operators $Z$ and $Z^++Z^-$, and the operators $U, U^\pm$ are outgoing Dirichlet parametrices. The canonical relations of the FIO parts of $U$ and $U^\pm$ are extended by Hamilton flowout until the boundary or the interface are hit again or the final time $t_1$ is reached. The Lagrangian manifold generated by the flowouts is $\tilde\Lambda$. Now $u=u_1\in I^{k}(\tilde \Lambda)$ solves \eqref{eq-eladyn-source-problem} for $f\equiv 0$ and modified boundary sources $h,h_j$. The infimum of $t$ over \( (\operatorname{WF} h\cup \operatorname{WF} h_0\cup\operatorname{WF} h_1)\cap(\ensuremath{\mathcal H}\cup\ensuremath{\mathcal M}) \) has strictly increased after modification. The number of reflections and transmissions along outgoing broken bicharacteristics issuing from the wavefront sets of the sources is bounded. Continuing as above, we reach, after finitely many steps, time $t=t_1$. Thus we have shown that, under our assumptions, \eqref{eq-eladyn-source-problem} can be solved modulo $\ensuremath{C^{\infty}}$ errors for Lagrangian sources. By the results of Section~\ref{sect-linear-elast} we obtain exact solutions. 
\end{proof}
Let $q_{{\mathrm{out}}}^\pm=\psi^\pm+\sum\nolimits _s s\psi_s^\pm$ be the spectral decompositions of outgoing right roots, where $\psi_s^\pm$ is the projector onto the eigenspace with real eigenvalue $s$ and $\psi^\pm$ maps into $E_c^\pm$. Apply the symbol calculus to the construction \eqref{eq-u-of-sources-h-hj}. For incoming polarization over $N^+$, the initial values of reflected and transmitted polarizations at the interface can be concisely written as
\begin{equation} \label{eq-reflection-law} a^{\pm}_{{\mathrm{out}},s} = -\psi_{s}^{\pm} (z^+_{\mathrm{out}}+z^-_{\mathrm{out}})^{-1} (z^+_{\mathrm{in}} \pm z^{\mp}_{\mathrm{out}})a^+_{\mathrm{in}}. \end{equation}
An analogous but simpler law holds at the boundary.
\begin{remark}
Our assumption that $Z$ be elliptic in the mixed region means that we do not cover the phenomenon of supersonic surface waves in special transversely isotropic media, \cite{GundWangLothe91secluded}.
\end{remark}
Finally we turn to surface waves. In the elliptic region, $Z$ and $Z^++ Z^-$ are of real principal type, possibly even elliptic. Endow these operators with a Hamilton field, also denoted $H$, which satisfies $Ht=1$.
\begin{proposition} \label{prop-surface-wave-propagation}
Let $h\in I^{k+1/2}(\Lambda_h)$ be a compactly supported Lagrangian distribution on $\partial M\times \mathbb{R}$. Suppose that $\Lambda_h\subset\ensuremath{\mathcal E}\cap T^*(\partial M\times\mathbb{R})$ and $H$ is transversal to $\Lambda_h$ at $\Lambda_h\cap\operatorname{Char} Z$. Suppose that no (Rayleigh) bicharacteristic issuing from $\operatorname{WF}(h)$ intersects the glancing set before time $t_1$. The transmission problem \eqref{eq-eladyn-source-problem} with $f=0=h_j$ has a unique outgoing solution $u(x,t)$, $t<t_1$. Furthermore, $u\equiv Uw$ modulo $\ensuremath{C^{\infty}}$, where $U$ is the microlocal Dirichlet parametrix at $\ensuremath{\mathcal E}\cap T^*(\partial M\times\mathbb{R})$, and $w\in\Dprime(\partial M\times\mathbb{R})$ is the microlocally outgoing solution of $Zw\equiv h$. The polarization of $w$ satisfies a transport equation \eqref{eq-transport-polariz} with initial values determined from the principal symbol of $h$ at $\Lambda_h\cap\operatorname{Char} Z$.
\end{proposition}
\begin{proof}
We apply Proposition~\ref{prop-MU-Lagr-intsect-calc}. We obtain $w\in I^{k}(\Lambda_h\cup\Lambda)$, $\Lambda\subset\operatorname{Char} Z$ the Hamilton flowout of $\Lambda_h\cap\operatorname{Char} Z$, such that $Zw\equiv h$ holds for $t<t_1$. Observe that $Uw$ satisfies the transmission problem modulo a smooth error. Using Section~\ref{sect-linear-elast}, we obtain an exact solution $u\equiv Uw$.
\end{proof}
Of course, actual surface waves can only arise when $\Lambda_h\cap\operatorname{Char} Z$ is non-empty. To set up the transport equation for the polarization of $u|_{\partial M\times\mathbb{R}}\equiv w$ it is necessary to evaluate the subprincipal symbol of $Z$ using the formulas \eqref{eq-subprinc-symb} and \eqref{eq-z-lead-symb} with $p=z$ and $p_1=z_1$. This is cumbersome but algorithmically straightforward. Observe that the subprincipal symbol of $Z$, hence also the polarization of $w$, depends not only on the elasticities on $\partial M$ but also on their $r$-derivatives in the direction transversal to the boundary. Furthermore, the polarization depends on the curvature of the boundary. In elasticity, dispersion of Rayleigh wave velocity has been observed, i.e., velocity depends on frequency and $r$; compare \cite{MaNaTaWa15dispersion}.
Dispersion refers to $u$ and not to its boundary trace $w$, because the latter is Lagrangian with frequency-independent wave speed. If $\operatorname{Char}(Z^++Z^-)\cap\ensuremath{\mathcal E}\neq\emptyset$, then the existence of Stoneley-type waves at the interior interface is proved in the same way as in Proposition~\ref{prop-surface-wave-propagation}. See the recent analysis \cite{Zhang2020rayleigh} of Rayleigh and Stoneley waves in isotropic media.
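As a closing numerical illustration (ours): for isotropic media, the equation $\det z(\eta,\tau)=0$ which defines $\operatorname{Char} Z$ in the elliptic region is classically equivalent to the Rayleigh secular equation, so the Rayleigh speed $c_R=|\tau_R|/|\eta|$ can be located by a one-dimensional root search below the shear speed.
\begin{verbatim}
# Rayleigh secular equation for an isotropic half-space (classical form):
#   R(c) = (2 - c^2/cs^2)^2 - 4 sqrt(1 - c^2/cp^2) sqrt(1 - c^2/cs^2) = 0,
# with a unique root 0 < c_R < cs; then |tau_R| = c_R |eta| parametrizes
# Char Z over eta.  (R has a trivial double zero at c = 0, hence the
# bracket starts away from 0; this works for Poisson-type ratios.)
import numpy as np
from scipy.optimize import brentq

def R(c, cs, cp):
    x, y = (c / cs)**2, (c / cp)**2
    return (2.0 - x)**2 - 4.0 * np.sqrt(1.0 - y) * np.sqrt(1.0 - x)

cs, cp = 1.0, np.sqrt(3.0)     # Poisson solid, lam = mu
cR = brentq(R, 0.2 * cs, cs * (1.0 - 1e-12), args=(cs, cp))
print(cR / cs)                 # ~ 0.9194, the classical value
\end{verbatim}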
\section{Introduction} As the use of machine learning (ML) models across industry and society has grown dramatically in recent years, so too have concerns about the privacy of personal data that is used in training such models. It is well-documented that ML models may leak sensitive, confidential data, e.g. via model inversion attacks and membership-inference attacks~\cite{inversionfred, shokri2017membership, korolova2018facebook, nasr2019comprehensive, carlini2021extracting}. \textit{Differential privacy} (DP)~\cite{dwork2006calibrating} offers a rigorous promise that user data cannot be leaked, and a plethora of work has been devoted to DP ML and optimization~\cite{chaudhuri2008privacy, chaud, kifer2012private, duchi13, bst14, ullman2015private, wang2017ermrevisited, bft19, fkt20, lowy2021, lr21fl, cheu2021shuffle, asiL1geo, lowy2022NCFL}. Of particular importance is the fundamental problem of DP \textit{stochastic (convex) optimization} (S(C)O). In DP SO, we are given $n$ i.i.d. samples $X = (x_1, \ldots, x_n) \in \mathcal{X}^n$ from an unknown distribution $\mathcal{D}$, and the goal is to privately solve \begin{equation} \label{eq:SO} \vspace{-.02in} \min_{w \in \mathcal{W}}\left\{F(w) := \mathbb{E}_{x \sim \mathcal{D}} [f(w,x)] \right\} \end{equation} \vspace{-.02in} \normalsize for a given loss function $f: \mathcal{W} \times \mathcal{X} \to \mathbb{R}$ defined on a convex parameter domain $\mathcal{W} \subset \mathbb{R}^d$. The \textit{excess risk} (a.k.a. excess population loss) of a (randomized) algorithm $\mathcal{A}$ for solving~\cref{eq:SO} is $\mathbb{E} F(\mathcal{A}(X)) - \min_{w \in \mathcal{W}} F(w)$, where the expectation is taken over both the random draw of the data $X$ and the algorithm. In cases where $f(\cdot, x)$ is $L_f$-\textit{Lipschitz} for all data samples $x \in \mathcal{X}$ (i.e. the gradient of $f$ is uniformly bounded with $\sup_{w \in \mathcal{W}, x \in \mathcal{X}} \|\nabla_w f(w,x) \| \leq L_f$), optimal rates for DP SCO are known~\cite{bft19, fkt20, asiL1geo, bassily2021non, lr21fl}. While the assumption of uniformly Lipschitz loss is convenient for bounding the \textit{sensitivity}~\cite{dwork2006calibrating} of queries\footnote{Loosely speaking, sensitivity measures the amount by which the output of a query may change when one person's data $x$ is replaced by that of another person $x'$ in the \textit{worst-case}.} and applying/analyzing standard DP mechanisms, this assumption is often unrealistic. In practical applications such as finance~\cite{fama1963mandelbrot, ibragimov2015heavy}, insurance~\cite{pisarenko2010heavy}, biomedicine~\cite{woolson2011statistical}, sociology~\cite{markovich2008nonparametric}, internet traffic prediction~\cite{crovella1998heavy, hernandez2004variable}, and experimental sciences~\cite{balasubramanian2016discussion} to name a few, data frequently contains outliers or is unbounded, and may even be \textit{heavy-tailed} (i.e. some moments may be infinite). As a result, $L_f$ may be prohibitively large or infinite. For example, in linear regression, $f(w, x) = \frac{1}{2}(\langle w, x^{(1)} \rangle - x^{(2)})^2$ where $x^{(1)}$ is the feature data and $x^{(2)}$ is the label; hence $\nabla_w f(w,x) = x^{(1)}(\langle w, x^{(1)} \rangle - x^{(2)})$. In this case, $L_f = \infty$, \textit{even for ``well-behaved'' data distributions} such as Gaussian and compact $\mathcal{W}$. Similar observations can be made for other commonly used ML models (e.g. 
deep neural nets~\cite{lei2021sharper}), and the situation becomes even grimmer if the data distribution is heavy-tailed. For any of the aforementioned problems, existing bounds for DP Lipschitz SO (e.g. \cite{bft19, fkt20, asiL1geo, bassily2021non, lr21fl})--which scale with $L_f$--become vacuous, and DP techniques that are effective with the Lipschitz assumption (e.g. gradient/objective/output perturbation, exponential mechanism) cannot be directly applied since the model sensitivity is unbounded. However, even when $L_f = \infty$ or $L_f \gg 1$, it is usually the case that the $k$-th \textit{moment} of the stochastic gradients is reasonably small for some $k \geq 2$ (see~\cref{ass:boundednoncentral}): e.g. this is the case for Gaussian linear regression (see~\cref{example: gauss lin reg}). Thus, excess risk bounds that scale with the $k$-th moment (like those given in this paper) may be much smaller than bounds that scale with $L_f$. On the other hand, outlier data can also exacerbate privacy risks, as large ML models tend to memorize certain worst-case training examples, leaving these users especially vulnerable to inference attacks: e.g. see \cite{carlini2021extracting} for a training data extraction attack executed on the language model GPT-2. Thus, it is of great importance to develop effective methods for DP SO with (potentially unbounded) data containing outliers and/or heavy-tails.\footnote{Following~\cite{wx20, klz21}, we use the terminology ``heavy-tailed data'' throughout the paper. However, as discussed, our results are useful for a broader class of distributions than those which are typically considered ``heavy-tailed.'' Namely, we are also interested in data that contains outliers (even if it is bounded), or data that is unbounded but light-tailed (e.g. Gaussian), since $r \ll L_f$ in such cases (using notation of~\cref{ass:boundednoncentral}). } In this work, we consider two questions: \begin{center} \noindent\fbox{ \noindent \textit{I. What are the minimax optimal rates for (strongly) convex DP SO with heavy-tailed data?}} \\ \vspace{0.2cm} \noindent \fbox{ \noindent \textit{II. What utility guarantees are achievable for non-convex DP SO with heavy-tailed data?} } \\ \end{center} \vspace{-.04in} There has been some progress made in addressing the first question above: The work of~\cite{wx20} provided the first excess risk upper bounds for \textit{smooth}\footnote{Function $g$ is $\beta$-smooth if it is differentiable and its derivative $\nabla g$ is $\beta$-Lipschitz.} DP convex/strongly convex\footnote{Function $g: \mathcal{W} \to \mathbb{R}$ is \textit{$\mu$-strongly convex} ($\mu \geq 0$) if $g(w) \geq g(w') + \langle \nabla g(w'), w - w' \rangle + \frac{\mu}{2}\|w - w'\|^2 ~\forall ~w, w' \in \mathcal{W}$ and all subgradients $\nabla g(w') \in \partial g(w')$. If $\mu = 0,$ we say $g$ is \textit{convex.}} SO. The work of \cite{klz21} gave an improved, yet still suboptimal, upper bound for smooth convex loss, as well as lower bounds for the convex/strongly convex settings. Also, a \textit{nearly} optimal upper bound for \textit{smooth} strongly convex loss was \textit{asserted} in \cite[Theorem 5.6]{klz21}, but we identify gaps in their proof that do not seem to be easily fixable within their framework: see~\cref{app: wrong proofs}. In this work, we provide \textit{optimal algorithms for convex and strongly convex heavy-tailed DP SO}, resolving the first question (up to logarithmic factors). Our bounds hold even for \textit{non-smooth} loss functions. 
With regard to the second question, we provide the \textit{first algorithms and utility guarantees for heavy-tailed DP SO with non-convex loss} functions satisfying the Proximal-Polyak-Łojasiewicz condition~\cite{polyak, karimi2016linear}. We elaborate on our contributions in~\cref{sec: contributions}. \vspace{-.05in} \subsection{Preliminaries} \vspace{-.03in} Let $\mathcal{W}$ be a convex, compact set of $\ell_2$ diameter $D$. We assume~\cref{ass:boundednoncentral} throughout: \vspace{-.07in} \begin{assumption}[Used in this work] \label{ass:boundednoncentral} There exist $k \geq 2$ and $r > 0$ such that $\sup_{w \in \mathcal{W}} \mathbb{E} \left[\| \nabla f(w, x) \|_2^k\right]\leq r^k$, for all subgradients $\nabla f(w, x) \in \partial_w f(w, x)$. \vspace{-.1cm} \end{assumption} \noindent We always have $r \leq L_f = \sup_{w, x} \|\nabla f(w,x)\|$, but this inequality is often very loose: \vspace{-.07in} \begin{example} \label{example: gauss lin reg} For linear regression on a unit ball $\mathcal{W}$ with $2$-dimensional Gaussian data $(x^{(1)}, x^{(2)}) \sim \mathcal{N}_2(0, \Sigma^2)$ and $\text{Var}(x^{(1)}) = \text{Var}(x^{(2)}) = 1$, we have $r^k \leq 4 \ll L_f = \infty$, $\forall k \geq 2$. \end{example} \vspace{-.04in} For differentiable $f(\cdot, x)$, the works~\cite{wx20, klz21} assume:\footnote{The work of~\cite{klz21} assumes that $L \lesssim \gamma^{1/k} = 1$. On the other hand, \cite{wx20} assumes that $F$ is $\beta$-smooth and $\nabla F(w^{*}) = 0$ for some $w^{*} \in \mathcal{W}$, where $\mathcal{W}$ has diameter $D$; this implies $L \leq 2 \beta D$ (e.g. by the mean value theorem).} \vspace{-.07in} \begin{assumption}[Used in~\cite{wx20, klz21}] \label{ass:coordinatewise} There exist $k \geq 2$ and $\gamma > 0$ such that $\sup_{w \in \mathcal{W}} \mathbb{E} | \langle \nabla f(w,x) - \nabla F(w), e_j \rangle |^k \leq \gamma$, for all $j \in [d]$, where $e_j$ denotes the $j$-th standard basis vector in $\mathbb{R}^d$. Also, $L \triangleq \sup_{w \in \mathcal{W}}\| \nabla F(w)\| \leq 2 \sqrt{d} \gamma^{1/k}$. \end{assumption} \vspace{-.02in} \noindent For finite $k$, \cref{ass:boundednoncentral} and \cref{ass:coordinatewise} are weaker than assuming that $f(\cdot, x)$ is Lipschitz since they allow for the $p$-th moments to be unbounded for $\infty \geq p > k$. The following lemma (proved in~\cref{app: lemma scaling factors}) allows us to compare our results obtained under~\cref{ass:boundednoncentral} to the results in~\cite{wx20, klz21}, which require \cref{ass:coordinatewise}: \vspace{-.06in} \begin{lemma} \vspace{-.01in} \label{lem: comparing assumptions} Suppose~\cref{ass:coordinatewise} holds. Then, \cref{ass:boundednoncentral} holds with $r \leq 4 \sqrt{d} \gamma^{1/k}$. \vspace{-.02in} \end{lemma} \vspace{-.06in} Since our~\cref{ass:boundednoncentral} is implied by \cref{ass:coordinatewise}, the upper bounds that we obtain under \cref{ass:boundednoncentral} also hold (up to constants) if we grant \cref{ass:coordinatewise} instead (with the appropriate change in scaling factors $r \leftrightarrow \sqrt{d} \gamma^{1/k}$). \vspace{.04in} \noindent \textbf{Differential Privacy:} \textit{Differential privacy}~\cite{dwork2006calibrating} ensures that no adversary--even one with enormous resources and side knowledge (e.g. of the ML model)--can infer much more (from the output of the algorithm) about any individual who contributes training data than if that individual's data were not present. If two data sets $X$ and $X'$ differ in a single entry (i.e.
$d_{\text{hamming}}(X, X') = 1$), then we say that $X$ and $X'$ are \textit{adjacent} and denote $X \sim X'$. \begin{definition}[Differential Privacy] \label{def: DP} Let $\epsilon \geq 0, ~\delta \in [0, 1).$ A randomized algorithm $\mathcal{A}: \mathcal{X}^n \to \mathcal{W}$ is \textit{$(\epsilon, \delta)$-differentially private} (DP) if for all pairs of adjacent data sets $X, X' \in \mathcal{X}^n$ and all measurable subsets $S \subseteq \mathcal{W}$, we have $\mathbb{P}(\mathcal{A}(X) \in S) \leq e^\epsilon \mathbb{P}(\mathcal{A}(X') \in S) + \delta$. \end{definition} \noindent In this work, we focus on \textit{zero-concentrated differential privacy}~\cite{bun16}: \vspace{-.05in} \begin{definition}[Zero-Concentrated Differential Privacy (zCDP)] A randomized algorithm $\mathcal{A}: \mathcal{X}^n \to \mathcal{W}$ satisfies $\rho$-zero-concentrated differential privacy ($\rho$-zCDP) if for all pairs of adjacent data sets $X, X' \in \mathcal{X}^n$ and all $\alpha \in (1, \infty)$, we have $D_\alpha(\mathcal{A}(X) || \mathcal{A}(X')) \leq \rho \alpha$, where $D_\alpha(\mathcal{A}(X) || \mathcal{A}(X'))$ is the $\alpha$-R\'enyi divergence\footnote{For distributions $P$ and $Q$ with probability density/mass functions $p$ and $q$, $D_\alpha(P || Q) := \frac{1}{\alpha - 1}\ln \left(\int p(x)^{\alpha} q(x)^{1 - \alpha}dx\right)$~\cite[Eq. 3.3]{renyi}.} between the distributions of $\mathcal{A}(X)$ and $\mathcal{A}(X')$. \end{definition} \vspace{-.04in} \noindent zCDP is weaker than $(\epsilon, 0)$-DP, but stronger than $(\epsilon, \delta)$-DP ($\delta > 0$) in the following sense: \vspace{-.04in} \begin{proposition}\cite[Proposition 1.3]{bun16} \label{prop:bun1.3} If $\mathcal{A}$ is $\rho$-zCDP, then $\mathcal{A}$ is $(\rho + 2\sqrt{\rho \log(1/\delta)}, \delta)$-DP for any $\delta > 0$. \end{proposition} \vspace{-.02in} \noindent Thus, if $\epsilon \leq \sqrt{\log(1/\delta)}$, then any $\frac{\epsilon^2}{2}$-zCDP algorithm is $(2\epsilon\sqrt{\log(1/\delta)}, \delta)$-DP. Also, if $\epsilon \leq 2\log(1/\delta)$, then any $\frac{\epsilon^2}{8 \log(1/\delta)}$-zCDP algorithm is $(\epsilon, \delta)$-DP. \vspace{.2cm} Our algorithms use the \textit{Gaussian mechanism} to achieve zCDP: \begin{proposition}{\cite[Proposition 1.6]{bun16}} \label{prop: gauss} Let $q: \mathcal{X}^n \to \mathbb{R}$ be a query with $\ell_2$-sensitivity $\Delta := \sup_{X \sim X'}\|q(X) - q(X')\|$. Then the Gaussian mechanism, defined by $\mathcal{M}: \mathcal{X}^n \to \mathbb{R}$, $\mathcal{M}(X) := q(X) + u$ for $u \sim \mathcal{N}(0, \sigma^2)$, is $\rho$-zCDP if $\sigma^2 \geq \frac{\Delta^2}{2\rho}$. \end{proposition} The (adaptive) composition of zCDP algorithms is zCDP, with privacy parameters adding: \begin{lemma}{\cite[Lemma 2.3]{bun16}} \label{lem: composition} Suppose $\mathcal{A}: \mathcal{X}^n \to \mathcal{Y}$ satisfies $\rho$-zCDP and $\mathcal{A}': \mathcal{X}^n \times \mathcal{Y} \to \mathcal{Z}$ satisfies $\rho'$-zCDP (as a function of its first argument). Define the composition of $\mathcal{A}$ and $\mathcal{A}'$, $\mathcal{A}'': \mathcal{X}^n \to \mathcal{Z}$ by $\mathcal{A}''(X) = \mathcal{A}'(X, \mathcal{A}(X))$. Then $\mathcal{A}''$ satisfies $(\rho + \rho')$-zCDP. In particular, the composition of $T$ $\rho$-zCDP mechanisms is a $T\rho$-zCDP mechanism. \end{lemma} \subsection{Contributions and Related Work} \label{sec: contributions} Here we discuss our contributions in the context of related work.
See Figure~\ref{table: sum} for a summary of our results for the case $k=2$, and~\cref{app: related work} for a more thorough discussion of related work. \vspace{.2cm} \noindent \textbf{\underline{Optimal Rates for Non-Smooth (Strongly) Convex Losses} (\cref{sec: optimal rates}):}\\ We establish the following minimax optimal rates for (non-smooth) heavy-tailed DP SCO: \begin{theorem}[Informal, see~\cref{thm: localization convex}, \cref{thm: localization strongly convex}, \cref{thm: asymptotic optimality}, \cref{thm: convex lower bound}, \cref{thm: strongly convex lower bound}] Let $f(\cdot, x)$ be convex. Then, there is a $\frac{\epsilon^2}{2}$-zCDP algorithm $\mathcal{A}$ such that \small $\small \mathbb{E} F(\mathcal{A}(X)) - F^* = \widetilde{\mathcal{O}}\left( rD \left(\frac{1}{\sqrt{n}} + \left(\frac{\sqrt{d}}{\epsilon n} \right)^{(k-1)/k} \right) \right)$. \normalsize If $f(\cdot, x)$ is $\mu$-strongly convex, then \small $\mathbb{E} F(\mathcal{A}(X)) - F^* = \widetilde{\mathcal{O}}\left( \frac{r^2}{\mu} \left(\frac{1}{n} + \left(\frac{\sqrt{d}}{\epsilon n} \right)^{(2k-2)/k} \right) \right)$. \normalsize Further, these excess risk bounds are minimax optimal (up to logarithms) among all $\frac{\epsilon^2}{2}$-zCDP algorithms. \end{theorem} \vspace{-.05in} The previous state-of-the-art convex upper bound was suboptimal: $\mathcal{O}\left(r D \sqrt{\frac{d}{n}}\right)$ when $\epsilon \approx 1$~\cite[Theorem 5.4]{klz21}.\footnote{The bound in~\cite[Theorem 5.4]{klz21} for $k=2$ is stated in the notation of~\cref{ass:coordinatewise} and thus has an extra factor of $\sqrt{d}$, compared to the bound written here. We write their bound in terms of~\cref{ass:boundednoncentral}, replacing their $\gamma d$ by $r \sqrt{d}$.} Their result also required $f(\cdot, x)$ to be $\beta_f$-smooth for all $x \in \mathcal{X}$, which can be restrictive in the heavy-tailed setting: e.g. this implies that $f(\cdot, x)$ is $L_f$-Lipschitz with $L_f \leq 2\beta_f D$ if $\nabla f(w^{*}(x), x) = 0$ for some $w^{*}(x) \in \mathcal{W}$. Our strongly convex bound also significantly improves (in the minimax sense) over the best previous upper bound of~\cite[Theorem 2]{wx20}, which is $\Omega\left(\frac{(\beta/\mu)^2 d^3}{n \epsilon^4} \frac{\gamma^{2/k}}{\mu}\right)$ for $\beta$-smooth $F$ with condition number $\beta/\mu$.\footnote{The bound stated in~\cite[Theorem 2]{wx20} contains a factor of $F^*$ in place of $\frac{\gamma^{2/k}}{\mu}$. However, in the worst case (e.g. for the quadratic hard instance used to prove the lower bounds in \cref{thm: strongly convex lower bound}), we can have $F^* = \Omega(d \gamma^{2/k}/\mu)$.} Other upper bounds were \textit{asserted} in \cite[Theorems 5 and 7]{wx20} and \cite[Theorem 5.6]{klz21}, but we observe errors in the proofs of these results: In short, the mistake in these proofs is that Jensen's inequality is used in the wrong direction to claim that the $T$-th iterate of their algorithm $w_T$ satisfies $\mathbb{E}[\|w_T - w^{*}\|^2] \leq (\mathbb{E} \|w_T - w^{*}\|)^2$, which is false in general. Further, there seems to be no ``easy fix'' for this issue: see~\cref{app: wrong proofs} for more details. \begin{figure}[t] \centering \includegraphics[width=\textwidth] {heavy_tailed_table_arXiv_border.pdf} \vspace{-.3in} \caption{\footnotesize $\frac{\epsilon^2}{2}$-zCDP excess risk bounds for $k=2$, $\gamma = 1, r = \sqrt{d}$; we omit logarithms. $\kappa$ is the condition number of $F$. 
}\label{table: sum} \end{figure} Our algorithm (\cref{alg: localization}) for convex losses combines the iterative localization technique of~\cite{fkt20, asiL1geo} with a noisy \textit{clipped} subgradient method (\cref{alg: clipped GD}). With clipped (hence biased) stochastic subgradients and non-Lipschitz/non-smooth loss, the excess risk analysis of our algorithm is significantly harder than in the Lipschitz setting. Instead of the uniform convergence analysis used in~\cite{wx20, klz21}, we extend results about the \textit{stability}~\cite{devroye1979distribution, kearns1999algorithmic, bousquet2002stability, lei2020fine} and generalization error of (regularized) ERM to non-smooth, non-Lipschitz losses: see e.g. Propositions~\ref{prop: stability implies generalization} and~\ref{cor: reg ERM excess risk}. Existing results on the stability and generalization error of learning algorithms (e.g. \cite{shalev2009stochastic, lei2020fine}) were limited to smooth and/or Lipschitz loss. Combining our novel statistical learning results with a convergence guarantee for the subgradient method with biased, noisy subgradients (Lemma~\ref{lem: subgrad ERM bound}) yields an excess risk bound (\cref{thm: localization convex}) for our algorithm that is tight up to a factor of $\widetilde{\mathcal{O}}(\widetilde{R}_n/r)$ (for any $n \geq 1$); here $\widetilde{R}_n$ is a coefficient that depends on the expected \textit{empirical} moments. The final step (\cref{thm: asymptotic optimality}) is to prove that $\lim_{n \to \infty} \widetilde{R}_n \leq 4r$ under mild assumptions (e.g. subexponential subgradients). Our strongly convex bound (\cref{thm: localization strongly convex}) is obtained by a reduction to the convex case, drawing on~\cite{hk14, fkt20}. We also provide lower bounds (\cref{thm: convex lower bound} and~\cref{thm: strongly convex lower bound}) under \cref{ass:boundednoncentral} and~\cref{ass:coordinatewise}. Our lower bounds refine (to describe the dependence on $r, \gamma, D, \mu$), extend (to $k \gg 1$), and tighten (for the convex case) the lower bounds of \cite{klz21}. \vspace{.2cm} \noindent \textbf{\underline{Linear Time Algorithms for Smooth (Strongly) Convex Losses} (\cref{sec: linear time}):} For convex losses such that $F$ is $\beta$-smooth\footnote{In contrast to prior works~\cite{wx20, klz21}, \textit{we do not require $f(\cdot, x)$ to be $\beta$-smooth} for all $x$. Our assumption that $F$ is smooth is strictly weaker than the assumption that $f(\cdot, x)$ is smooth for all $x$.}, we provide a novel \textit{accelerated} DP algorithm for heavy-tailed SCO (\cref{alg: ACSA}), building on the AC-SA method of~\cite{ghadimilan1}. The runtime of our algorithm is linear in $n$ and its excess risk improves over the previous state-of-the-art (\textit{not linear time}) algorithm~\cite[Theorem 5.4]{klz21} in practical parameter regimes (e.g. $d \gtrsim n^{1/6}$). The excess risk of our algorithm is ``nearly'' optimal: e.g. when $k=2$, our bound is tight up to a factor of $\left(\frac{\epsilon n}{d^{3/2}}\right)^{1/18}$; in particular, it is tight in the parameter regime $d \gtrsim (\epsilon n)^{2/3}$. Specializing to ``sufficiently smooth'' $F$, our algorithm is \textit{optimal}: see Remark~\ref{rem: affine optimal}. To prove our upper bound, we give the first analysis of accelerated SGD with biased stochastic gradients.
Vanilla SGD with biased stochastic gradients was studied in~\cite{as21, asi2021private}; with acceleration, the analysis is more involved due to bias accumulation and the more complicated algorithm. For smooth strongly convex losses, acceleration results in excessive bias accumulation, so we propose a simple one-pass noisy clipped SGD method instead (\cref{alg: vanilla SGD}). Our algorithm attains excess risk that is nearly optimal up to a factor of $\widetilde{\mathcal{O}}((\beta/\mu)^{(k-1)/k})$, where $\beta/\mu$ is the condition number of $F$: see~\cref{thm: strongly convex smooth upper bound}. This bound significantly improves over the previous state-of-the-art bound of~\cite[Theorem 2]{wx20}. Our algorithm builds on~\cite{klz21}, but uses a lower-bias clipping mechanism (from~\cite{bd14}) and more careful choice of parameters. Crucially, we also provide a new analysis that avoids the error in the proof of~\cite[Theorem 5.6]{klz21}. \vspace{.2cm} \noindent \textbf{\underline{First Algorithm for Non-Convex (Proximal-PL) Losses} (\cref{sec: PL}):} We consider losses satisfying the \textit{Proximal Polyak-\L ojasiewicz (PPL) inequality} \cite{polyak, karimi2016linear} (Definition~\ref{def: Prox PL}), an extension of the classical PL inequality to the proximal setting, which covers important models like neural nets, linear/logistic regression, and LASSO~\cite{karimi2016linear, lei2021sharper}. We propose a DP proximal clipped SGD (\cref{alg: zCSDP SGD}) to attain nearly optimal excess risk. The excess risk of our algorithm \textit{nearly matches the optimal smooth, strongly convex} rate, having the optimal dependence on $n, d$, and $\epsilon$: see~\cref{thm: PL upper bound}. The proof of this result is difficult because it is unclear how to separate the privacy noise from the non-private terms in the proximal/non-convex setting. We prove~Proposition~\ref{lemma:extendsAS21Thm6} by building on~\cite{lowy2022NCFL}, which analyzed \textit{Lipschitz} PPL losses with \textit{unbiased} gradients. \vspace{.2cm} We also provide (in~\cref{app: SDP mean estimators}) the \textbf{first \textit{shuffle differentially private (SDP)}}~\cite{prochlo, cheu2019distributed} \textbf{algorithms for heavy-tailed SO}. Our SDP algorithms achieve the \textit{same utility as their central DP counterparts} (up to logarithmic factors), \textit{without requiring a trusted curator.} Thus, our algorithms may be useful in decentralized learning environments, such as federated learning, where training is orchestrated by a central server that individuals may not trust with their sensitive data~\cite{kairouz2019advances, lr21fl}. SDP SO with Lipschitz loss functions was considered in~\cite{lr21fl, cheu2021shuffle, lowy2022NCFL}. \section{Private Heavy-Tailed Mean Estimation Building Blocks} In each iteration of our SO algorithms, we need a way to privately estimate the mean $\nabla F(w_t) = \mathbb{E}_{x \sim \mathcal{D}}[\nabla f(w_t, x)]$. If $f$ is Lipschitz, then one can simply draw a random sample $x^t$ from the data set $X$ and add (zero-mean) noise to the stochastic gradient $\nabla f(w_t, x^t)$ to obtain a private mean estimator of $\nabla F(w_t)$: the $\ell_2$-sensitivity of stochastic gradient updates is bounded by $\sup_{x, x' \in \mathcal{X}} \| \nabla f(w_t, x) - \nabla f(w_t, x')\| \leq 2L_f$, so the Gaussian mechanism guarantees DP (by Proposition~\ref{prop: gauss}). 
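To make this recipe concrete in the Lipschitz case, the following minimal Python sketch (ours, purely for illustration; the function and variable names are not from any existing library) privatizes a minibatch gradient average, with the noise calibrated for $\rho$-zCDP via Proposition~\ref{prop: gauss}:
\begin{verbatim}
import numpy as np

def private_grad_mean(grads, L, rho, rng):
    # grads: (s, d) array of per-sample gradients, each with l2 norm <= L
    # (the L_f-Lipschitz case). Replacing one sample moves the average by
    # at most Delta = 2L/s in l2 norm, so the Gaussian mechanism with
    # sigma^2 = Delta^2 / (2 rho) is rho-zCDP.
    s, d = grads.shape
    sigma = (2.0 * L / s) / np.sqrt(2.0 * rho)
    return grads.mean(axis=0) + rng.normal(0.0, sigma, size=d)

rng = np.random.default_rng(0)
g = rng.normal(size=(32, 5))
g /= np.maximum(1.0, np.linalg.norm(g, axis=1, keepdims=True))  # norms <= 1
print(private_grad_mean(g, L=1.0, rho=0.5, rng=rng))  # 0.5-zCDP estimate
\end{verbatim}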
However, in the heavy-tailed setting that we consider, $L_f$ (and hence the sensitivity) may be unbounded, so noisy stochastic gradients are not DP. Thus, we \textit{clip} the stochastic gradients (to force the sensitivity to be bounded) before adding noise. Specifically, we invoke~\cref{alg: MeanOracle2} on a minibatch of $s$ stochastic gradients at each iteration of our algorithms. In~\cref{alg: MeanOracle2}, $\Pi_{C}(z) := \argmin_{y \in B_2(0, C)}\|y - z\|^2$ denotes the projection onto the centered $\ell_2$ ball of radius $C$ in $\mathbb{R}^d$. Lemma~\ref{lem: bias and variance of bd14} bounds the bias and variance of~\cref{alg: MeanOracle2}. \begin{algorithm}[ht] \caption{$\ell_2$ Clip $\texttt{MeanOracle1}(\{x_i\}_{i=1}^s; s; C; \frac{\epsilon^2}{2})$ \cite{bd14}} \label{alg: MeanOracle2} \begin{algorithmic}[1] \STATE {\bfseries Input:} $X = \{x_i\}_{i=1}^s$, $C>0$, $\epsilon > 0$. \FOR{$i \in [s]$} \STATE $z_i := \Pi_{C}(x_i)$. \ENDFOR \\ \STATE Set $\sigma^2 = \frac{4 C^2}{s^2 \epsilon^2}$ for $\frac{\epsilon^2}{2}$-zCDP. \STATE Draw $u \sim \mathcal{N}(0, \sigma^2 \mathbf{I}_d )$. \STATE $\widetilde{\nu} := \frac{1}{s}\sum_{i=1}^s z_i + u$. \STATE {\bfseries Output:} $\widetilde{\nu}$. \end{algorithmic} \end{algorithm} \begin{lemma}[\cite{bd14}] \label{lem: bias and variance of bd14} Let $\{x_i\}_{i=1}^s \sim \mathcal{D}^s$ be $\mathbb{R}^d$-valued random vectors with $\mathbb{E} x_i = \nu$ and $\mathbb{E}\|x_i\|^k \leq r^k$ for some $k > 1$. Denote the noiseless average of clipped samples by $\widehat{\nu} := \frac{1}{s}\sum_{i=1}^s \Pi_C(x_i)$. Then, $\|\mathbb{E} \widetilde{\nu} - \nu \| = \|\mathbb{E} \widehat{\nu} - \nu \| \leq \mathbb{E} \|\widehat{\nu} - \nu \| \leq \frac{r^k}{(k-1)C^{k-1}}$, and $\mathbb{E}\|\widetilde{\nu} - \mathbb{E} \widetilde{\nu}\|^2 = \mathbb{E} \| \widetilde{\nu} - \mathbb{E} \widehat{\nu}\|^2 \leq d\sigma^2 + \frac{r^2}{s}$. \end{lemma} \section{Optimal Rates for Non-Smooth (Strongly) Convex Losses} \label{sec: optimal rates} In this section, we establish the asymptotically optimal rates (up to logarithms) for DP SO with convex and strongly convex (non-Lipschitz) loss functions. First, in~\cref{sec: localization}, we present our algorithm and non-asymptotic upper bound for convex losses. Next, in \cref{sec: localization strongly}, we provide a non-asymptotic upper bound for strongly convex losses. In~\cref{sec: asymptotic}, we give asymptotic upper bounds, which are shown to be optimal by the lower bounds in~\cref{sec: lower bounds}. \subsection{Localized Noisy Clipped Subgradient Method for Convex Losses} \label{sec: localization} Our algorithm (\cref{alg: localization}) combines the iterative localization technique of~\cite{fkt20, asiL1geo} with a noisy \textit{clipped} subgradient method (\cref{alg: clipped GD}) to handle heavy-tailed data.\footnote{We assume without loss of generality that $n = 2^l$ for some $l \in \mathbb{N}$. If this is not the case, then one can throw out samples until it is the case; since the number of remaining samples is at least $n/2$, our excess risk bounds still hold up to a constant factor.} \begin{algorithm}[ht] \caption{Noisy $\ell_2$-Clipped Subgradient Method for Heavy-Tailed DP ERM} \label{alg: clipped GD} \begin{algorithmic}[1] \STATE {\bfseries Input:} Data $X \in \mathcal{X}^n$, iteration number $T$, stepsize $\eta$, clip threshold $C$, $w_0 \in \mathcal{W}$.
\FOR{$t \in \{0, 1, \cdots, T-1\}$} \STATE $\widetilde{\nabla} F_t(w_t) := \texttt{MeanOracle1}(\{\nabla f(w_t, x_i)\}_{i=1}^n; n; C; \frac{\epsilon^2}{2T})$ for subgradients $\nabla f(w_t, x_i) \in \partial_w f(w_t, x_i)$. \STATE $w_{t+1} = \Pi_{\mathcal{W}}\left[w_t - \eta \widetilde{\nabla} F_t(w_t) \right] $ \ENDFOR \\ \STATE {\bfseries Output:} $w_T$. \end{algorithmic} \end{algorithm} \vspace{-.1in} \begin{algorithm}[ht] \caption{Localized Noisy Clipped Subgradient Method for Heavy-Tailed DP SCO} \label{alg: localization} \begin{algorithmic}[1] \STATE {\bfseries Input:} Data $X \in \mathcal{X}^n$, stepsize $\eta$, clip thresh. $\{C_i\}_{i=1}^{\log_2(n)}$, iteration num. $\{T_i\}_{i=1}^{\log_2(n)}$. \STATE Initialize $w_0 \in \mathcal{W}$. Let $l := \log_2(n)$. \FOR{$i \in [l]$} \STATE Set $n_i = 2^{-i} n, \eta_i = 4^{-i} \eta$, and $\lambda_i = \frac{1}{\eta_i n_i}$. \STATE Draw new batch $\mathcal{B}_i$ of $n_i = |\mathcal{B}_i|$ samples from $X$ without replacement. \STATE Let $\widehat{F}_i(w) := \frac{1}{n_i} \sum_{j \in \mathcal{B}_i} f(w, x_j) + \frac{\lambda_i}{2}\|w - w_{i-1}\|^2$. \STATE Run~\cref{alg: clipped GD} on $\widehat{F}_i$ initialized at $w_{i-1}$ for $T_i$ iterations with clip threshold $C_i$, stepsize $\eta_i$, and noise $\sigma_i^2 = \frac{4C_i^2 T_i}{n_i^2 \epsilon^2}$. Let $w_i$ be the output of~\cref{alg: clipped GD}. \ENDFOR \\ \STATE {\bfseries Output:} $w_l$. \end{algorithmic} \end{algorithm} The main ideas of~\cref{alg: localization} are: \vspace{-.05in} \begin{enumerate} \vspace{-.05in} \item We run a \textit{multi-pass} noisy clipped subgradient method on a \textit{regularized} empirical loss: multiple passes allow us to reduce the noise variance in phase $i$ by a factor of $T_i$ (by appealing to Lemma~\ref{lem: composition} instead of parallel composition) and get a more accurate solution to the ERM subproblem; regularization makes the empirical loss strongly convex, which improves the \textit{stability} (hence generalizability) of ERM and speeds up convergence. \vspace{-.05in} \item In early phases (small $i$), when we are far away from the optimum $w^{*}$, we use more samples (larger $n_i$) and a large learning rate $\eta_i$ to make progress quickly; as $i$ increases, $w_i$ is closer to $w^{*}$, so fewer samples and a slower learning rate suffice. We also choose $T_i$ proportional to $n_i$ (see Remark~\ref{rem: computation}). \vspace{-.05in} \item Since step size $\eta_i$ shrinks (geometrically) faster than $n_i$, the effective variance of the privacy noise $\eta_i^2 \sigma_i^2$ decreases as $i$ increases, which prevents $w_{i+1}$ from moving too far away from $w_i$ (and hence from $w^{*}$). We further enforce this ``localization'' behavior by increasing the regularization parameter $\lambda_i$ over time, which also stabilizes the algorithm. \end{enumerate} Next, we will provide privacy and (non-asymptotic) excess risk guarantees for~\cref{alg: localization}. In order to precisely state the excess risk bound, we will need to introduce some notation. For a batch of data $X \in \mathcal{X}^m$, we define the $k$-th \textit{empirical moment} of $f(w, \cdot)$ by \vspace{-.05in} \small \begin{equation} \widehat{r}_m(X)^k = \sup_{w \in \mathcal{W}} \sup_{\{\nabla f(w, x_i) \in \partial_w f(w, x_i)\}} \frac{1}{m}\sum_{i=1}^m \|\nabla f(w, x_i) \|^k, \vspace{-.05in} \end{equation} \normalsize where the supremum is also over all subgradients $\nabla f(w, x_i) \in \partial_w f(w, x_i)$ in case $f$ is not differentiable.
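Before continuing, we note in the same illustrative spirit how \cref{alg: MeanOracle2} and one step of \cref{alg: clipped GD} might be rendered in Python (a minimal sketch; \texttt{grad\_f} and \texttt{proj\_W} are hypothetical helpers, not part of our formal development):
\begin{verbatim}
import numpy as np

def clip_l2(z, C):
    # Project each row of z onto the centered l2 ball of radius C.
    norms = np.maximum(np.linalg.norm(z, axis=1, keepdims=True), 1e-12)
    return z * np.minimum(1.0, C / norms)

def mean_oracle1(x, C, rho, rng):
    # Clip, average, and add Gaussian noise, as in Algorithm 1. The
    # clipped average has l2 sensitivity 2C/s, so sigma^2 = (2C/s)^2/(2 rho)
    # suffices; with rho = eps^2/2 this is sigma^2 = 4 C^2 / (s^2 eps^2).
    s, d = x.shape
    sigma = (2.0 * C / s) / np.sqrt(2.0 * rho)
    return clip_l2(x, C).mean(axis=0) + rng.normal(0.0, sigma, size=d)

# One noisy clipped subgradient step of Algorithm 2 (schematic):
#   g = mean_oracle1(np.stack([grad_f(w, xi) for xi in X]), C, rho, rng)
#   w = proj_W(w - eta * g)
\end{verbatim}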
For $X \sim \mathcal{D}^m$, we denote the $k$-th \textit{expected empirical moment} by \small \vspace{-.025in} \begin{equation} \widetilde{r}_m^k = \mathbb{E}[\widehat{r}_m(X)^k] = \mathbb{E}\left\{\sup_{w \in \mathcal{W}} \frac{1}{m}\sum_{i=1}^m \|\nabla f(w, x_i) \|^k \right\}, \vspace{-.025in} \end{equation} \normalsize and let $\widetilde{r}_m := (\widetilde{r}_m^k)^{1/k}$. Clearly, $\widetilde{r}_m^k \leq \sup_{w, x} \|\nabla f(w,x)\|^k = L_f^k$ for any $m \geq 1$, but this inequality is often very loose: e.g. in Example~\ref{example: gauss lin reg}, we have $\widetilde{r}_m^k \leq 4 \ll L_f = \infty$ for any $m$. Additionally, the expected empirical moments satisfy the following property: \begin{lemma} \label{lem: empirical moments} We have: $\widetilde{r}_1^k \geq \widetilde{r}_2^k \geq \widetilde{r}_4^k \geq \widetilde{r}_8^k \geq \cdots \geq r^k$. \end{lemma} \noindent Complete proofs of all results in this subsection are deferred to~\cref{app: localization}. Our excess risk upper bound will depend on a weighted average of the expected empirical moments for different batch sizes $m \in \{1, 2, 4, 8, \cdots, n\}$, with more weight being given to $\widetilde{r}_m$ for large $m$ (which are smaller, by Lemma~\ref{lem: empirical moments}). For any $n = 2^l$, define \vspace{-.1in} \small \begin{equation} \widetilde{R}_n^2 := \sum_{i=1}^{l} 2^{-i} \widetilde{r}_{n_i}^2, \vspace{-.1in} \end{equation} \normalsize where $n_i = 2^{-i} n$. With this notation, we can now state the main result of this subsection: \begin{theorem} \label{thm: localization convex} Let $f(\cdot, x)$ be convex for all $x$, and $\epsilon \leq n^{3/2} \sqrt{d}$. Then, there are algorithmic parameters such that~\cref{alg: localization} is $\frac{\epsilon^2}{2}$-zCDP, and (for absolute constant $A$) has excess risk \vspace{-.05in} \small \begin{equation*} \mathbb{E} F(w_l) - F^* \leq A \widetilde{R}_n D\left(\frac{1}{\sqrt{n}} + \left(\frac{\sqrt{d \ln(n)}}{\epsilon n}\right)^{\frac{k-1}{k}}\right). \end{equation*} \normalsize \vspace{-.05in} \end{theorem} \vspace{-.05in} The proof of~\cref{thm: localization convex} will consist of three main steps: i) we bound the empirical error of the noisy clipped subgradient subroutine (Lemma~\ref{lem: subgrad ERM bound}); ii) we prove that if an algorithm is \textit{on-average model stable} (see Definition~\ref{def: stability}), then it generalizes (Proposition~\ref{prop: stability implies generalization}), extending results from~\cite{shalev2009stochastic, lei2020fine} to non-smooth/non-Lipschitz losses; and iii) we bound the excess population loss of~\cref{alg: clipped GD} run on the regularized empirical objective (c.f. line 7 of~\cref{alg: localization}): see Proposition~\ref{cor: reg ERM excess risk}. By using iii) with the proof technique of~\cite{fkt20, asiL1geo}, we can obtain~\cref{thm: localization convex}. The following lemma extends the standard analysis of projected subgradient method to biased, noisy subgradient oracles: \begin{lemma} \label{lem: subgrad ERM bound} Let $\widehat{F}_X: \mathcal{W} \to \mathbb{R}$ be a $\lambda$-strongly convex empirical loss with $\small \widehat{r}_n(X)^2 \geq \sup_{w \in \mathcal{W}}\left\{\frac{1}{n} \sum_{i=1}^n \|\nabla f(w, x_i)\|^2\right\}$ \normalsize for all subgradients $\nabla f(w,x_i) \in \partial_w f(w, x_i)$, and let $\widetilde{\nabla} F_t(w_t) = \nabla \widehat{F}_X(w_t) + b_t + N_t$ be biased noisy subgradients of $\widehat{F}_X(w_t)$. Assume that the bias and noise satisfy $\|b_t\| \leq \hat{B}$ w.p. 
$1$, $\mathbb{E} N_t = 0$, $\mathbb{E}\|N_t\|^2 \leq \hat{\Sigma}^2$, $\forall t \in [T-1]$, and that $\{N_t\}_{t=1}^T$ are independent. Let $\hat{w} = \argmin_{w \in \mathcal{W}} \widehat{F}_X(w)$ and $\eta \leq \frac{2}{\lambda}$. Then, the output of~\cref{alg: clipped GD} satisfies \vspace{-.05in} \small \[ \mathbb{E}\|w_T - \hat{w}\|^2 \leq \exp\left(-\frac{\lambda \eta T}{2}\right)\|w_0 - \hat{w}\|^2 + \frac{4 \eta}{\lambda}\left(\widehat{r}_n(X)^2 + \hat{B}^2 + \hat{\Sigma}^2 \right) + \frac{4 \hat{B}^2}{\lambda^2}. \] \normalsize \vspace{-.05in} \end{lemma} \vspace{-.02in} Next, we will bound the generalization error of regularized ERM for convex loss functions. When the underlying loss $f(\cdot, x)$ is Lipschitz continuous or smooth, the generalization error of regularized ERM is understood~\cite{shalev2009stochastic, lei2020fine}. However, obtaining such a generalization error bound without Lipschitz continuity or smoothness of $f$ requires us to derive a few new results (which we hope will be useful in applications beyond DP SCO). First, we recall the notion of \textit{on-average model stability}~\cite{lei2020fine}: \begin{definition} \label{def: stability} Let $X = (x_1, \cdots, x_n)$ and $X' = (x'_1, \cdots, x'_n)$ be drawn independently from $\mathcal{D}$. For any $i \in [n]$, define $X^{i} = (x_1, \cdots, x_{i-1}, x'_i, x_{i+1}, \cdots, x_n)$. We say that a randomized algorithm $\mathcal{A}$ has on-average model stability $\alpha$ (or that $\mathcal{A}$ is $\alpha$-on-average model stable) if \small $ \mathbb{E}\left[\frac{1}{n} \sum_{i=1}^n \|\mathcal{A}(X) - \mathcal{A}(X^i)\|^2\right] \leq \alpha^2, $ \normalsize where the expectation is over the randomness of $\mathcal{A}$ and the draws of $X$ and $X'$. \end{definition} \noindent Definition~\ref{def: stability} is weaker than the notion of \textit{uniform stability}~\cite{bousquet2002stability}, which has been used in DP Lipschitz SO (e.g. \cite{bft19, lr21fl}). Without Lipschitz continuity of $f$, we need this weaker notion to prove our generalization error bounds. The main result in \cite{lei2020fine} showed that on-average model stable algorithms generalize well if $f(\cdot, x)$ is \textit{smooth}, even without Lipschitz continuity. Next, we show that neither smoothness nor Lipschitz continuity of $f$ is needed to guarantee generalizability: \begin{proposition} \label{prop: stability implies generalization} Let $f(\cdot, x)$ be convex for all $x$. Suppose $\mathcal{A}: \mathcal{X}^n \to \mathcal{W}$ is $\alpha$-on-average model stable. Let $\widehat{F}_X(w) := \frac{1}{n}\sum_{i=1}^n f(w, x_i)$ be an empirical loss. Then for any $\zeta > 0$, we have \small \[ \mathbb{E}[F(\mathcal{A}(X)) - \widehat{F}_X(\mathcal{A}(X))] \leq \frac{r^2}{2 \zeta} + \frac{\zeta}{2} \alpha^2. \] \normalsize \end{proposition} Using~Proposition~\ref{prop: stability implies generalization}, we can bound the generalization error and excess (population) risk of regularized ERM with convex loss (cf. line 7 of~\cref{alg: localization}): \begin{proposition} \label{cor: reg ERM excess risk} Let $f(\cdot, x)$ be convex $\forall x$. Fix any $w_{i-1} \in \mathcal{W}$ and denote $\hat{w}_i = \argmin_{w \in \mathcal{W}} \widehat{F}_i(w)$, where $\widehat{F}_i(w) := \frac{1}{n_i} \sum_{j \in \mathcal{B}_i} f(w, x_j) + \frac{\lambda_i}{2}\|w - w_{i-1}\|^2$ as in line 6 of~\cref{alg: localization}.
Then, for any $y \in \mathcal{W}$, we have: \vspace{-.05in} \small \[ \mathbb{E}[F(\hat{w}_i)] - F(y) \leq \frac{2r^2}{\lambda_i n_i} + \frac{\lambda_i}{2}\|y - w_{i-1}\|^2, \] \normalsize where the expectation is over both the random draws of $X$ from $\mathcal{D}$ and $\mathcal{B}_i$ from $X$. \end{proposition} \vspace{-.04in} With the pieces developed above, we can now sketch the proof of~\cref{thm: localization convex}: \begin{proof}[Sketch of the Proof of~\cref{thm: localization convex}] \noindent \textbf{Privacy:} Since the batches $\{\mathcal{B}_i\}_{i=1}^l$ are disjoint, it suffices to show that $w_i$ (produced by $T_i$ iterations of~\cref{alg: clipped GD} in line 7 of~\cref{alg: localization}) is $\frac{\epsilon^2}{2}$-zCDP~~$\forall i \in [l]$. The $\ell_2$ sensitivity of the clipped subgradient update is $\Delta = \sup_{w, X \sim X'} \|\frac{1}{n_i} \sum_{j=1}^{n_i} \Pi_{C_i}(\nabla f(w, x_j)) - \Pi_{C_i}(\nabla f(w, x'_j))\| \leq 2C_i/n_i$. Thus, the privacy guarantees of the Gaussian mechanism (Proposition~\ref{prop: gauss}) and the composition theorem for zCDP (Lemma~\ref{lem: composition}) imply that \cref{alg: localization} is $\frac{\epsilon^2}{2}$-zCDP. \noindent \textbf{Excess risk:} By combining Lemma~\ref{lem: subgrad ERM bound} with~Lemma~\ref{lem: bias and variance of bd14} and appropriate choices of $\eta$ and $T_i$, we get (for an absolute constant $A$): \vspace{-.1in} \small \begin{equation} \label{eq: 0ing} \small \mathbb{E}\|w_i - \hat{w}_i\|^2 \leq A\left(\frac{\eta_i}{\lambda_i}\left(\widetilde{r}_{n_i}^2 + \widetilde{B}_i^2 + \widetilde{\Sigma}_i^2 \right) + \frac{\widetilde{B}_i^2}{\lambda_i^2}\right) \leq A\left(\frac{\eta^2 n}{32^i}\left(\widetilde{r}_{n_i}^2 + \frac{d C_i^2 T_i}{\epsilon^2 n_i^2} + \frac{n \widetilde{r}_{n_i}^{2k}}{2^i C_i^{2k-2}}\right) \right). \end{equation} \normalsize Now, following the strategy used in the proofs of~\cite[Theorem 4.4]{fkt20} and \cite[Theorem 4]{asiL1geo}, we write \small $\mathbb{E} F(w_l) - F(w^{*}) = \mathbb{E}[F(w_l) - F(\hat{w}_l)] + \sum_{i=1}^l \mathbb{E}[F(\hat{w}_i) - F(\hat{w}_{i-1})]$, \normalsize where $\hat{w}_0 := w^{*}$. Using~\cref{eq: 0ing} and Lipschitz continuity of $F$ (which is implied by~\cref{ass:boundednoncentral}), we can bound the first term. To bound the sum (second term), we use Proposition~\ref{cor: reg ERM excess risk} to obtain (for the right choice of $\eta$) \vspace{-.1in} \small \begin{align*} \sum_{i=1}^l \mathbb{E}[F(\hat{w}_i) - F(\hat{w}_{i-1})] \leq \frac{4D^2}{\eta n} + \eta r^2 + 4r^2\sum_{i=2}^l \eta_i + \sum_{i=2}^l \frac{\mathbb{E}[\|\hat{w}_{i-1} - w_{i-1}\|^2]}{\eta_i n_i} \vspace{-.1in} \end{align*} \normalsize Using~\cref{eq: 0ing} to bound $\small \sum_{i=2}^l \frac{\mathbb{E}[\|\hat{w}_{i-1} - w_{i-1}\|^2]}{\eta_i n_i} \normalsize$ and carefully choosing $\eta$, $C_i$, and $T_i$ yields the result. \end{proof} \begin{remark}[Choice of $T_i$ and Computational Complexity] \label{rem: computation} To get optimal excess risk via~\cref{alg: localization}, $T_i = \widetilde{\Theta}\left(\frac{1}{\lambda_i \eta_i}\right) \lesssim n_i \ln\left(n \right)$ suffices (see full proof of~\cref{thm: localization convex} in~\cref{app: localization}). Thus, \cref{alg: localization} uses $\sum_{i=1}^l n_i T_i \lesssim \ln(n) n^2$ subgradient evaluations. 
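Indeed, since $T_i \lesssim n_i \ln(n)$ and $n_i = 2^{-i} n$, a quick check gives \small \[ \sum_{i=1}^{l} n_i T_i \lesssim \ln(n) \sum_{i=1}^{l} n_i^2 = n^2 \ln(n) \sum_{i=1}^{l} 4^{-i} \leq \frac{n^2 \ln(n)}{3}. \] \normalsize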
If one desires $(\epsilon, \delta)$-DP or $(\epsilon, \delta)$-SDP instead of zCDP, then the gradient complexity of~\cref{alg: localization} can be improved to $\mathcal{O}(n^{3/2} \sqrt{\ln(n)})$: see~\cref{app: localization} for details. \end{remark} \subsection{The Strongly Convex Case} \label{sec: localization strongly} Following~\cite{fkt20}, we use a folklore reduction to the convex case (detailed in~\cref{app: localization strongly}) in order to obtain the following upper bound via~\cref{thm: localization convex}: \begin{theorem} \label{thm: localization strongly convex} Let $\epsilon \leq n^{3/2} \sqrt{d}$ and assume $f(\cdot, x): \mathcal{W} \to \mathbb{R}$ is $\mu$-strongly convex for all $x \in \mathcal{X}$. Then, there is an $\frac{\epsilon^2}{2}$-zCDP algorithm $\mathcal{A}$ (based on~\cref{alg: localization}) with excess risk \small \vspace{-.1in} \begin{equation*} \mathbb{E} F(\mathcal{A}(X)) - F^* \leq \frac{A \widetilde{R}_{n/4}^2}{\mu}\left(\frac{1}{n} + \left(\frac{\sqrt{d \ln(n)}}{\epsilon n}\right)^{\frac{2k-2}{k}}\right). \end{equation*} \normalsize \end{theorem} \subsection{Asymptotic Optimality of~\cref{alg: localization}} \label{sec: asymptotic} We will show that $\widetilde{R}_n = \mathcal{O}(r)$ if the distribution of stochastic subgradients is subexponential. \begin{definition}[Subexponential Distribution] A random variable $Y$ is subexponential if there is an absolute constant $s > 0$ such that $\mathbb{P}(|Y| \geq t) \leq 2 \exp\left(-\frac{t}{s}\right)$ for all $t \geq 0$. For subexponential $Y$, we define $\|Y\|_{\psi_1} := \inf\left\{s > 0: \mathbb{P}(|Y| \geq t) \leq 2 \exp\left(-\frac{t}{s}\right) ~\forall~t \geq 0\right\}$. \end{definition} \noindent Essentially all (heavy-tailed) distributions that arise in practice are subexponential~\cite{mckay2019probability}. For such distributions, \cref{alg: localization} (and our algorithm for strongly convex losses, built on~\cref{alg: localization}) is asymptotically optimal (up to $\ln(n)$ factor), under mild conditions: \begin{theorem} \label{thm: asymptotic optimality} Let $f(\cdot, x)$ be convex. Suppose that $\widetilde{r}_1^k < \infty$, and that for all $w \in \mathcal{W}$, $Y_i = \|\nabla f(w, x_i)\|^k$ is subexponential with $E_n \geq \max_{i \in [n]} \left(\|Y_i\|_{\psi_1}\right)$. Assume that for sufficiently large $n$, we have $\sup_{w,x}\|\nabla f(w,x)\|^k \leq n^q r^k$ for some $q \geq 1$ and $\max\left(\frac{E_n}{r^k}, \frac{E_n^2}{r^{2k}}\right) \ln\left(\frac{3n D \beta}{4r}\right) \leq \frac{n}{d q}$, where $\|\nabla f(w,x) - \nabla f(w', x)\| \leq \beta \|w - w'\|$ for all $w, w' \in \mathcal{W}, x\in \mathcal{X}$, and subgradients $\nabla f(w,x) \in \partial_w f(w,x)$. Then, $\lim_{n \to \infty} \widetilde{R}_n \leq 4 r.$ Further, there exists $N \in \mathbb{N}$ such that for all $n \geq N$, the output of~\cref{alg: localization} satisfies \small \[ \small \mathbb{E} F(w_l) - F^* = \mathcal{O}\left(rD\left(\frac{1}{\sqrt{n}} + \left(\frac{\sqrt{d \ln(n)}}{\epsilon n} \right)^{\frac{k-1}{k}}\right) \right). \] \normalsize If $f(\cdot, x)$ is $\mu$-strongly convex, then the output of algorithm $\mathcal{A}$ (in~\cref{sec: localization strongly}) satisfies \small \[ \small \mathbb{E} F(\mathcal{A}(X)) - F^* = \mathcal{O}\left(\frac{r^2}{\mu}\left(\frac{1}{n} + \left(\frac{\sqrt{d \ln(n)}}{\epsilon n}\right)^{\frac{2k-2}{k}}\right)\right). 
\] \normalsize \end{theorem} The proof (in~\cref{app: asymptotic}) of the first claim combines a covering argument with Bernstein's inequality, Lemma~\ref{lem: empirical moments}, and Lebesgue's dominated convergence theorem. The second and third claims are immediate from the first claim and the non-asymptotic excess risk bounds provided earlier. While a bound on $\sup_{w,x}\|\nabla f(w,x)\|$ is needed in~\cref{thm: asymptotic optimality}, it can grow as fast as any polynomial in $n$ and only needs to hold for sufficiently large $n$. As $n \to \infty$, this assumption is trivially satisfied. Likewise,~\cref{thm: asymptotic optimality} depends only logarithmically on the Lipschitz parameter of the subgradients $\beta$, so the result still holds up to constant factors if, say, $\beta \leq n^p (r/D)$ as $n \to \infty$ for some $p\geq 1$. Crucially, our excess risk bounds do not depend on $L_f$ or $\beta$. Further, our bounds in~\cref{thm: localization convex} and~\cref{thm: localization strongly convex} hold even if $n$ is small and $L_f = \beta = \infty$. Optimality of our algorithm follows from the lower bounds in~\cref{sec: lower bounds}. \subsection{Lower Bounds } \label{sec: lower bounds} The work of~\cite{klz21} proved lower bounds under~\cref{ass:coordinatewise} that are tight (by our upper bounds in~\cref{sec: optimal rates}) in most parameter regimes for $\gamma = D = \mu = 1$ and $k = \mathcal{O}(1)$.\footnote{The lower bounds asserted in~\cite{klz21} only hold if $k = \mathcal{O}(1)$ since the moments of the Gaussian distribution that they construct grow exponentially/factorially with $k$.} Our (relatively modest) contribution in this subsection is: refining these lower bounds to display the correct dependence on $\gamma, r, D, \mu$; tightening the convex lower bound~\cite[Theorem 6.4]{klz21} in the regime $d > n$; and extending the proofs of~\cite[Theorems 6.1 and 6.4]{klz21} to $k \gg 1$. Our first lower bounds hold even for affine functions:\footnote{An affine function is a function that is linear in $w$: i.e. $\nabla^2_{ww} f(w,x) = 0$.} \begin{theorem}[Smooth Convex, Informal] \label{thm: convex lower bound} Let $\rho \leq d$. For any $\rho$-zCDP algorithm $\mathcal{A}$, there exist closed convex sets $\mathcal{W}, \mathcal{X} \subset \mathbb{R}^d$ such that $\|w - w'\| \leq 2D$ for all $w, w' \in \mathcal{W}$, a $\beta_f$-smooth, linear, convex (in $w$) loss $f: \mathcal{W} \times \mathcal{X} \to \mathbb{R}$, and distributions $\mathcal{D}$ and $\mathcal{D'}$ on $\mathcal{X}$ such that:\\ 1. \cref{ass:boundednoncentral} holds and if $X \sim \mathcal{D}^n$, then \small $\mathbb{E} F(\mathcal{A}(X)) - F^* = \Omega\left(r D \left(\frac{1}{\sqrt{n}} + \min\left\{1, \left(\frac{\sqrt{d}}{\sqrt{\rho} n}\right)^{\frac{k-1}{k}}\right\}\right)\right).$ \\ \normalsize \noindent 2. \cref{ass:coordinatewise} holds and if $X' \sim \mathcal{D'}^n$, then \small $\mathbb{E} F(\mathcal{A}(X')) - F^* = \Omega\left(\gamma^{1/k} D \left(\sqrt{\frac{d}{n}} + \sqrt{d}\min\left\{1, \left(\frac{\sqrt{d}}{\sqrt{\rho} n}\right)^{\frac{k-1}{k}}\right\}\right)\right).$ \normalsize \end{theorem} \noindent The proof (in \cref{app: lower bounds}) constructs a bounded (hence subexponential) distribution and an $(L_f \approx r)$-Lipschitz, $\beta_f$-smooth loss, easily satisfying the conditions in~\cref{thm: asymptotic optimality}. Our proof of~\cref{thm: convex lower bound} follows the proof framework of~\cite[Theorem 6.4]{klz21}. 
The main differences in our proof of part 2 of \cref{thm: convex lower bound} from the proof of \cite[Theorem 6.4]{klz21} (for $\gamma = D = 1$) are: 1) we construct a Bernoulli product distribution (built on~\cite[Example 7.7]{duchinotes}) instead of a Gaussian, which establishes a lower bound that holds for all $k \geq 2$ instead of just $k = \mathcal{O}(1)$; and 2) we choose a different parameter value (larger $p$ in the notation of the proof) in our application of Fano's method, which results in a tighter lower bound: the term $\min\{1, \sqrt{d/n}\}$ in \cite[Theorem 6.4]{klz21} gets replaced with $\sqrt{d/n}$.\footnote{Note that \cite[Theorem 6.4]{klz21} writes $\sqrt{d/n}$ for the first term. However, the proof (see Equation 16 in their paper) only establishes the bound $\min\{1, \sqrt{d/n}\}$.} Also, there exist parameter settings for which our lower bound is indeed strictly greater than the lower bound in \cite[Theorem 6.4]{klz21}: for instance, if $d > n > d/\rho$ and $k \to \infty$, then our lower bound simplifies to $\Omega(\sqrt{\frac{d}{n}})$. On the other hand, the lower bound in \cite[Theorem 6.4]{klz21} breaks as $k \to \infty$ (since the $k$-th moment of their Gaussian goes to infinity); however, even if it were extended to $k \to \infty$ (e.g. by replacing their Gaussian with our Bernoulli distribution), the resulting lower bound $\Omega(1 + \frac{d}{\sqrt{\rho} n})$ would still be smaller than the one we prove above. The proof of part 1 is similar but uses different scaling factors.\footnote{By Lemma~\ref{lem: comparing assumptions}, lower bounds under~\cref{ass:coordinatewise} imply lower bounds under~\cref{ass:boundednoncentral} with $\gamma^{1/k}$ replaced by $r/\sqrt{d}$. Nevertheless, we provide direct proofs under both assumptions for additional clarity.} Next, we provide lower bounds for smooth, strongly convex loss functions: \begin{theorem}[Smooth Strongly Convex, Informal] \label{thm: strongly convex lower bound} Let $\rho \leq d$. For any $\rho$-zCDP algorithm $\mathcal{A}$, there exist compact convex sets $\mathcal{W}, \mathcal{X} \subset \mathbb{R}^d$, a $\mu$-smooth, $\mu$-strongly convex (in $w$) loss $f: \mathcal{W} \times \mathcal{X} \to \mathbb{R}$, and distributions $\mathcal{D}$ and $\mathcal{D'}$ on $\mathcal{X}$ such that:\\ 1. \cref{ass:boundednoncentral} holds, and if $X \sim \mathcal{D}^n$, then \small $\mathbb{E} F(\mathcal{A}(X)) - F^* = \Omega\left(\frac{r^2}{\mu}\left(\frac{1}{n} + \min\left\{1, \left(\frac{\sqrt{d}}{\sqrt{\rho} n}\right)^{\frac{2k-2}{k}}\right\}\right)\right).$ \normalsize \\ 2. \cref{ass:coordinatewise} holds, and if $X' \sim \mathcal{D'}^n$, then \small $\mathbb{E} F(\mathcal{A}(X')) - F^* = \Omega\left(\frac{\gamma^{2/k}}{\mu}\left(\frac{d}{n} + d\min\left\{1, \left(\frac{\sqrt{d}}{\sqrt{\rho} n}\right)^{\frac{2k-2}{k}}\right\}\right)\right).$ \normalsize \end{theorem} \noindent Theorem 6.1 of~\cite{klz21} resembles part 2 of \cref{thm: strongly convex lower bound} with $\gamma = \mu = 1$. However, similar to the convex case, the proof of their result shows that $k = \mathcal{O}(1)$ is necessary. Also,~\cite{agarwal} provides non-private lower bounds under~\cref{ass:boundednoncentral} that resemble the first (non-private) terms in part 1 of our (convex and strongly convex) lower bounds, but their lower bounds do not imply tight lower bounds under~\cref{ass:coordinatewise}.
The first term in each of the minima in the above (convex and strongly convex) lower bounds can be attained by the algorithm that outputs any $w_0 \in \mathcal{W}$ and is trivially DP. Thus, \cref{thm: asymptotic optimality} is indeed tight (up to logarithms). Having resolved \textit{Question I}, next we will develop \textit{linear time} algorithms for smooth losses. \section{Linear Time Algorithms for Smooth (Strongly) Convex Losses} \label{sec: linear time} \subsection{Noisy Clipped Accelerated SGD for Smooth Convex Losses} \label{sec: convex} We propose \cref{alg: ACSA}, which builds on (non-private) AC-SA of~\cite{ghadimilan1}; its privacy and excess risk guarantees are given in~\cref{thm: convex ACSA one pass}. \begin{algorithm}[ht] \caption{Noisy Clipped Accelerated SGD (AC-SA) for Heavy-Tailed DP SCO} \label{alg: ACSA} \begin{algorithmic}[1] \STATE {\bfseries Input:} Data $X \in \mathcal{X}^n$, iteration number $T \leq n$, stepsize parameters $\{\eta_t \}_{t \in [T]}, \{\alpha_t \}_{t \in [T]}$ with $\alpha_1 = 1, \alpha_t \in (0,1)$~$\forall t \geq 2$. \STATE Initialize $w_0^{ag} = w_0 \in \mathcal{W}$ and $t = 1$. \FOR{$t \in [T]$} \STATE $w_t^{md} := (1- \alpha_t)w_{t-1}^{ag} + \alpha_t w_{t-1}$. \STATE Draw new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$. \STATE $\widetilde{\nabla} F_t(w_t^{md}) := \texttt{MeanOracle1}(\{\nabla f(w_t^{md}, x)\}_{x \in \mathcal{B}_t}; \frac{n}{T}; \frac{\epsilon^2}{2})$ \STATE $w_{t} := \argmin_{w \in \mathcal{W}}\left\{\alpha_t\langle \widetilde{\nabla} F_t(w_t^{md}), w\rangle + \frac{\eta_t}{2}\|w_{t-1} - w\|^2\right\}. $ \STATE $w_{t}^{ag} := \alpha_t w_t + (1-\alpha_t)w_{t-1}^{ag}.$ \ENDFOR \\ \STATE {\bfseries Output:} $w_T^{ag}$. \end{algorithmic} \end{algorithm} \begin{theorem}[Informal] \label{thm: convex ACSA one pass} Let $F$ be convex and $\beta$-smooth. Then, there are parameters such that \cref{alg: ACSA} is $\frac{\epsilon^2}{2}$-zCDP and (for an absolute constant $A$): \small \begin{equation} \label{eq: convex part 1} \small \expec F(w_T^{ag}) - F^* \leq A r D\left[ \frac{1}{\sqrt{n}} + \max\left\{ \left(\left(\frac{\beta D}{r}\right)^{1/4} \frac{\sqrt{d}}{\epsilon n} \right)^{\frac{4(k-1)}{5k-1}} , \left(\frac{\sqrt{d}}{\epsilon n}\right)^{\frac{k-1}{k}} \right\} \right]. \end{equation} \normalsize \end{theorem} \vspace{-.1in} \noindent The key ingredient used to prove~\cref{eq: convex part 1} is a novel convergence guarantee for AC-SA with \textit{biased}, noisy stochastic gradients: see Proposition~\ref{prop: ACSA generic} in~\cref{app: convex accel}. Combining Proposition~\ref{prop: ACSA generic} with Lemma~\ref{lem: bias and variance of bd14} and a careful choice of stepsizes, clip threshold, and $T$ yields the excess risk bound in~\cref{thm: convex ACSA one pass}. Privacy follows from Proposition~\ref{prop: gauss} and parallel composition~\cite{mcsherry2009privacy}. \begin{remark}[Optimal rate for ``sufficiently smooth'' convex functions] \label{rem: affine optimal} Notice that the upper bound in \cref{thm: convex ACSA one pass} scales with the smoothness parameter $\beta$. Thus, for sufficiently small $\beta$, the optimal rates (see \cref{thm: convex lower bound}) are attained. For example, when $k = 2$, the upper bound in \cref{eq: convex part 1} matches the respective lower bound in \cref{thm: convex lower bound} when \small $\small \beta \lesssim \frac{r}{D} \left(\frac{d^5}{\epsilon n}\right)^{1/18} \normalsize$; e.g. if $\beta$ and $D$ are constants and $d \geq (\epsilon n)^{1/5}$. 
In particular, for \textit{affine functions}--which were not addressed in prior works~\cite{wx20, klz21} since these works assumed $\nabla F(w^{*}) = 0$--we have $\beta = 0$, so that \cref{alg: ACSA} is optimal (up to constant factors) for all $k \geq 2$.\footnote{The assumption made in \cite{wx20, klz21} that $\nabla F(w^{*}) = 0$ is needed for the mean oracle of \cite{hol19}, which is used in \cite{wx20, klz21}; this assumption excludes affine functions. Also, note that the lower bound construction in~\cref{thm: convex lower bound} uses an affine function.} \end{remark} Having discussed the dependence on $\beta$, let us focus on understanding how the bound in \cref{thm: convex ACSA one pass} scales with $n$, $d$, and $\epsilon$. To this end, let us fix \small $ \small \beta = D = \gamma = 1 \normalsize$ and $\small r = \sqrt{d} \normalsize$ for simplicity. If $k=2$, then the bound in \cref{eq: convex part 1} simplifies to \small $\small \mathcal{O}\left(\sqrt{\frac{d}{n}} + \max\left\{\frac{d^{2/3}}{(\epsilon n)^{4/9}}, \frac{d^{3/4}}{\sqrt{\epsilon n}}\right\}\right),$ \normalsize whereas the lower bound in~\cref{thm: convex lower bound} (part 1) becomes \small $\small \Omega\left(\sqrt{\frac{d}{n}} + \frac{d^{3/4}}{\sqrt{\epsilon n}}\right) \normalsize$. Therefore, the bound in \cref{eq: convex part 1} is tight if $d^{3/2} \gtrsim \epsilon n$. For general $n, d, \epsilon$,~\cref{eq: convex part 1} is \textit{nearly} tight up to a multiplicative factor of \small $\small \left(\frac{\epsilon n}{d^{3/2}}\right)^{1/18} \normalsize$. By comparison, the previous state-of-the-art (\textit{not linear time}) bound for $\epsilon \approx 1$ was \small $\small \mathcal{O}\left(\frac{d}{\sqrt{n}}\right) \normalsize$~\cite[Theorem 5.4]{klz21}. Our bound~\cref{eq: convex part 1} improves over~\cite[Theorem 5.4]{klz21} when $\small d \gtrsim n^{1/6} \normalsize$, which is typical in practical ML applications. As $k \to \infty$,~\cref{eq: convex part 1} becomes $\small \mathcal{O}\left(\sqrt{\frac{d}{n}} + \left(\frac{d}{n}\right)^{4/5}\right) \normalsize$ for $\epsilon \approx 1$, which is strictly better than~\cite[Theorem 5.4]{klz21}. Also, the gradient complexity of our algorithm is $n$, whereas the complexity of the algorithm in~\cite{klz21} is $\mathcal{O}(n^2/d)$. \subsection{Noisy Clipped SGD for Strongly Convex Losses} \label{sec: strongly convex} Our algorithm for strongly convex losses (\cref{alg: vanilla SGD}) builds on the frameworks of~\cite{wx20, klz21}. The differences in our approach lie in the choice of algorithmic parameters (\texttt{MeanOracle}, step size, and iterate averaging weights), as well as in our \textit{analysis} of the algorithm. \begin{algorithm}[ht] \caption{ Noisy Clipped SGD for Heavy-Tailed DP SCO } \label{alg: vanilla SGD} \begin{algorithmic}[1] \STATE {\bfseries Input:} Data $X \in \mathcal{X}^n$, $T \leq n$, stepsizes $\{\eta_t\}_{t=0}^{T}$, averaging weights $\{\zeta_t\}_{t=0}^{T}$, $w_0 \in \mathcal{W}$. \FOR{$t \in \{0, 1, \cdots, T\}$} \STATE Draw new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$. \STATE $\widetilde{\nabla} F_t(w_t) := \texttt{MeanOracle1}(\{\nabla f(w_t, x)\}_{x \in \mathcal{B}_t}; \frac{n}{T}; \frac{\epsilon^2}{2})$ \STATE $w_{t+1} = \Pi_{\mathcal{W}}\left[w_t - \eta_t \widetilde{\nabla} F_t(w_t) \right] $ \ENDFOR \\ \STATE {\bfseries Output:} $\widehat{w}_T := \frac{1}{Z_T} \sum_{t=0}^{T} \zeta_t w_{t+1}$, where $Z_T = \sum_{t=0}^T \zeta_t$.
\end{algorithmic} \end{algorithm} \begin{theorem}[Informal] \label{thm: strongly convex smooth upper bound} Let $F$ be $\mu$-strongly convex and $\beta$-smooth with $ \frac{\beta}{\mu} \leq n/\ln(n)$. Then, there are algorithmic parameters such that \cref{alg: vanilla SGD} is $\frac{\epsilon^2}{2}$-zCDP and (for an absolute constant $A$): \small \begin{equation} \label{eq: smooth sc upper l2 clip} \small \mathbb{E}F(\widehat{w}_T) - F^* \leq \frac{Ar^2}{\mu}\left(\frac{1}{n} + \left(\frac{\sqrt{d (\beta/\mu) \ln(n)}}{\epsilon n}\right)^{\frac{2k-2}{k}} \right). \end{equation} \normalsize \end{theorem} \noindent The bound \cref{eq: smooth sc upper l2 clip} is optimal up to a factor of $\widetilde{\mathcal{O}}((\beta/\mu)^{(k-1)/k})$ and significantly improves over the best previous (correct) bound~\cite[Theorem 2]{wx20}. The proof of \cref{thm: strongly convex smooth upper bound} (given in~\cref{app: strongly convex}) relies on a novel convergence guarantee for projected SGD with biased noisy stochastic gradients (see Proposition~\ref{prop: strongly convex biased sgd}). Compared to the results in \cite{asi2021private} for convex DP ERM and \cite{as21} for non-private unconstrained PL losses, Proposition~\ref{prop: strongly convex biased sgd} is tighter, which is needed to obtain near-optimal excess risk: we leverage smoothness and strong convexity. Our new analysis also avoids the issue in the proofs of~\cite{wx20, klz21}. \section{Algorithm for Non-Convex Proximal-PL Loss Functions} \label{sec: PL} Suppose $f(w,x) = f^0(w,x) + f^1(w)$, where $f^0(\cdot, x)$ is differentiable (maybe non-convex) and $f^1$ is proper, closed, and convex (maybe non-differentiable) for all $x \in \mathcal{X}$. Assume $F(w) = F^0(w) + f^1(w) = \mathbb{E}_{x \sim \mathcal{D}}[f^0(w,x)] + f^1(w)$ satisfies the \textit{Proximal-PL} condition~\cite{polyak, karimi2016linear}: \begin{definition}[$\mu$-PPL] \label{def: Prox PL} Let $F: \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ be bounded from below, $F(w) = F^0(w) + f^1(w)$, where $F^0$ is $\beta$-smooth and $f^1$ is convex. We say $F$ satisfies the \textit{Proximal Polyak-\L ojasiewicz} inequality with parameter $\mu > 0$ if \begin{align*} \mu[F(w) - \inf_{w'} F(w')] &\leq - \beta \min_{y}\Big[\langle \nabla F^0(w), y - w \rangle + \frac{\beta}{2}\|y - w\|^2 + f^1(y) - f^1(w)\Big], ~\forall~w \in \mathbb{R}^d. \end{align*} \end{definition} \noindent Definition~\ref{def: Prox PL} generalizes the classical PL condition (take $f^1 = 0$), allowing for e.g. constrained optimization and non-smooth regularizers (depending on $f^1$)~\cite{polyak, karimi2016linear}. We propose \cref{alg: zCSDP SGD}, which runs in linear time, for PPL losses. Recall that the \textit{proximal operator} of a convex function $g$ is defined as \small \[ \small \texttt{\textup{prox}}_{\eta g}(z) := \argmin_{y \in \mathbb{R}^d}\left(\eta g(y) + \frac{1}{2}\|y - z\|^2 \right), ~\text{for}~\eta > 0. \] \normalsize The privacy and excess risk guarantees of \cref{alg: zCSDP SGD} are provided in \cref{thm: PL upper bound}. \begin{algorithm}[ht] \caption{ Noisy Clipped Proximal SGD for Heavy-Tailed DP SO } \label{alg: zCSDP SGD} \begin{algorithmic}[1] \STATE {\bfseries Input:} Data $X \in \mathcal{X}^n$, $T \leq n$, stepsizes $\{\eta_t\}_{t=0}^{T-1}$. \STATE Initialize $w_0 \in \mathcal{W}$. \FOR{$t \in \{0, 1, \cdots, T-1\}$} \STATE Draw new batch $\mathcal{B}_t$ (without replacement) of $n/T$ samples from $X$.
\STATE $\widetilde{\nabla} F_t^0(w_t) := \texttt{MeanOracle}(\{\nabla f^0(w_t, x)\}_{x \in \mathcal{B}_t}; \frac{n}{T}; \frac{\epsilon^2}{2})$ \STATE $w_{t+1} = \texttt{\textup{prox}}_{\eta_t f^1}\left(w_t - \eta_t \widetilde{\nabla} F_t^0(w_t) \right) $ \ENDFOR \\ \STATE {\bfseries Output:} $w_T$. \end{algorithmic} \end{algorithm} \begin{theorem} \label{thm: PL upper bound} Let $F = F^0 + f^1$ be $\mu$-PPL for $\beta$-smooth $F^0$, with $ \frac{\beta}{\mu} \leq n/\ln(n)$. Then, there are parameters such that \cref{alg: zCSDP SGD} is $\frac{\epsilon^2}{2}$-zCDP, and (for an absolute constant $A$): \small \[ \small \expec F(w_T) - F^* \leq \frac{Ar^2}{\mu}\left(\left(\frac{\sqrt{d}}{\epsilon n} \left(\frac{\beta}{\mu}\right) \ln(n) \right)^{\frac{2k-2}{k}} + \frac{(\beta/\mu) \ln(n)}{n}\right). \] \normalsize \end{theorem} The bound in \cref{thm: PL upper bound} nearly matches the smooth, \textit{strongly convex} lower bound in \cref{thm: strongly convex lower bound} up to the $\widetilde{\mathcal{O}}((\beta/\mu)^{(2k-2)/k})$ factor, and is attained without convexity. In particular, \cref{alg: zCSDP SGD} is nearly optimal.\footnote{Since any smooth, strongly convex function satisfies the PPL condition~\cite{karimi2016linear}, the lower bounds in~\cref{thm: strongly convex lower bound} also apply to the PPL function class considered here.} To prove~\cref{thm: PL upper bound}, we first derive a convergence guarantee for proximal SGD with generic biased, noisy stochastic gradients in terms of the bias and variance of the oracle (see Proposition~\ref{lemma:extendsAS21Thm6} in~\cref{app: PL}), and then apply this guarantee to \texttt{MeanOracle1} (\cref{alg: MeanOracle2}) with carefully chosen stepsizes, clip threshold, and $T$, using~Lemma~\ref{lem: bias and variance of bd14}. Proposition~\ref{lemma:extendsAS21Thm6} generalizes \cite[Theorem 6]{as21}--which provides a similar bound for the unconstrained, classical PL problem--to the proximal setting. However, the proof of Proposition~\ref{lemma:extendsAS21Thm6} is very different from the proof of \cite[Theorem 6]{as21}, since the proximal operator makes it difficult to bound the excess loss without convexity when the stochastic gradients are biased/noisy. Instead, our proof draws inspiration from the proof of \cite[Theorem 3.1]{lowy2022NCFL} (which considered \textit{Lipschitz} $f$ and \textit{unbiased} stochastic gradients). Specifically, we view each biased/noisy proximal evaluation as an \textit{objective perturbation}~\cite{chaud, kifer2012private} problem. Then, using techniques from the analysis of objective perturbation, we bound the difference between the errors of the biased, noisy stochastic proximal gradient steps and the unbiased noiseless proximal gradient steps, the latter of which can be bounded via the PPL inequality. Compared to the proof of~\cite[Theorem 3.1]{lowy2022NCFL}, here we need to carefully handle the bias term and bound the error without appealing to Lipschitz continuity of $f$. See~\cref{app: PL} for details. \section{Concluding Remarks and Open Questions} We considered the important problem of DP SO with (potentially unbounded) data containing outliers and loss functions that are not uniformly Lipschitz continuous. For (strongly) convex loss functions, we established the asymptotically optimal rates, which hold even for non-differentiable losses.
We also provided linear time algorithms for smooth losses with improved rates (compared to prior works) that are optimal in certain practical parameter regimes, but suboptimal in general. An interesting open question is: does there exist a linear time algorithm with optimal excess risk? We also initiated the study of non-convex heavy-tailed DP SO, showing that the optimal strongly convex rates can nearly be attained without convexity, via the proximal-PL condition. We leave the treatment of general non-convex losses for future work. Last, we provide shuffle DP variations of our algorithms in~\cref{app: SDP mean estimators}: the same excess risk bounds (attained by our zCDP algorithms) can be achieved without a trusted curator, which could be useful in applications such as federated learning~\cite{kairouz2019advances, lr21fl}. \section*{Acknowledgements} We would like to thank John Duchi, Larry Goldstein, and Stas Minsker for very helpful conversations and pointers related to our lower bounds and the proof of Lemma~\ref{lem:4.1}. We also thank the authors of~\cite{klz21} for clarifying some steps in the proof of their Theorem 4.1. \clearpage
\subsection{The transformation rules} \label{subsec:Rules} First, we introduce the following notion of a {\it stratification} for a set of clauses. Let $\mathbb N$ denote the set of natural numbers. A \emph{level mapping} is a function~$\ell\!:\mathit{Pred}\!\rightarrow\!\mathbb{N}$. For every predicate $p$, the natural number $\ell(p)$ is said to be the {\it level\/} of~$p$. Level mappings are extended to atoms by stating that the level $\ell(A)$ of an atom $A$ is the level of its predicate symbol. A clause \( H\leftarrow c, A_{1}, \ldots, A_{n} \) is {\it stratified with respect to the level mapping}~$\ell$ if, for \( i\!=\!1,\ldots ,n \), \(\ell (H)\geq \ell(A_i)\). A set $P$ of CHCs is {\it stratified~with respect to $\ell$} if all clauses of $P$ are stratified with respect to~$\ell$. Clearly, for every set~$P$ of CHCs, there exists a level mapping~$\ell$ such that $P$ is stratified with respect to $\ell$~\cite{Llo87}. A {\it transformation sequence from} $P_{0}$ {\it to} $P_{n}$ is a sequence $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ of sets of CHCs such that, for $i\!=\!0,\ldots,n\!-\!1,$ $P_{i+1}$ is derived from $P_i$, denoted $P_{i} \Rightarrow P_{i+1}$, by applying one of the following \mbox{rules~R1--R7}. We assume that the initial set $P_0$ is stratified with respect to~a given level mapping~$\ell$. \medskip The Definition Rule allows us to introduce new predicate definitions. \medskip \hrule \vspace*{1.5mm} \noindent {\bf(R1)~ Definition Rule.} Let $D$ be the clause $\textit{newp}(X_1,\ldots,X_k)\leftarrow c,A_1,\ldots\!,A_m$, where: (1)~\textit{newp} is a predicate symbol in $\textit{Pred\/}$ not occurring in the sequence $P_0\Rightarrow P_1\Rightarrow\ldots\Rightarrow P_i$ constructed so far, \mbox{(2)~$c$ is a constraint,} (3)~the predicate symbols of $A_1,\ldots,A_m$ occur in $P_0$, and (4)~$\{X_1,\ldots,X_k\}\subseteq \mathit{vars}(\{c,A_1,\ldots,A_m\})$. Then, by {\it definition} we get $P_{i+1}= P_i\cup \{D\}$. We extend the level mapping~$\ell$ by defining $\ell(\textit{newp})$ to be $\textit{max}\,\{\ell(A_i) \mid i=1,\ldots,m\}$. \smallskip \hrule \medskip For $j\!=\!0,\ldots, n$, by $\textit{Defs}_j$ we denote the set of clauses, called {\it definitions}, introduced by rule~R1 during the construction of the prefix $P_0\Rightarrow P_1\Rightarrow\ldots\Rightarrow P_j$ of the transformation sequence $P_0\Rightarrow P_1\Rightarrow\ldots\Rightarrow P_n$. Thus, $\textit{Defs}_0\!=\!\emptyset$ and, for $j\!=\!0,\ldots, n\!-\!1,$ $\textit{Defs}_j\!\subseteq\!\textit{Defs}_{j+1}$. Note that, by using rules R2--R7, one may replace a definition occurring in~$P_h$, for some $0\!<\!h\!<\!n$, and hence it may happen that $\textit{Defs}_{k}\!\not\subseteq\!P_{k}$, for some $k$ such that $h\!<\!k\!\leq\!n$. \medskip The Unfolding Rule consists in performing a symbolic computation step. \medskip \hrule \vspace*{1.5mm} \noindent {\bf (R2)~Unfolding Rule.} Let $C$: $H\leftarrow c,G_L,A,G_R$ be a clause in $P_i$, where $A$ is an atom. Without loss of generality, we assume that $\mathit{vars}(C)\cap\mathit{vars}(P_0)=\emptyset$. Let {\it Cls}: $\{K_{1}\leftarrow c_{1}, B_{1},~\ldots,~K_{m}\leftarrow c_{m}, B_{m}\}$, with $m\!\geq\!0$, be the set of clauses in $P_0$, such that: for $j=1,\ldots,m$, (1)~there exists a most general unifier~$\vartheta_j$ of $A$ and $K_j$, and {(2)~the conjunction of constraints $(c, c_{j})\vartheta_j$ is satisfiable.} Let $\mathit{Unf}(C,A,P_0)$ be the set $\{(H\leftarrow c, {c}_j,G_L, B_j, G_R) \vartheta_j \mid j=1, \ldots, m\}$ of clauses.
Then, by {\it unfolding~$C$ with respect to $A$}, we derive the set $\mathit{Unf}(C,A,P_0)$ and we get $P_{i+1}= (P_i\setminus\{C\}) \cup \mathit{Unf}(C,A,P_0)$. \smallskip \hrule \medskip When we apply rule R2, we say that, for $j=1, \ldots, m,$ the atoms in the conjunction $B_j \vartheta_j$ are {\it derived} from $A$, and the atoms in the conjunction $(G_L, G_R) \vartheta_j$ are {\it inherited} from the corresponding atoms in the body of $C$. \medskip The Folding Rule is a special case of an inverse of the Unfolding Rule. \nopagebreak \medskip \hrule \vspace*{1.5mm} \noindent {\bf (R3)~Folding Rule.} Let $C$: $H\leftarrow c, G_L,Q,G_R$ be a clause in $P_i$, and let $D$: $K \leftarrow d, B$ be a variant of a clause in $\textit{Defs}_i$. Suppose that: (1)~either $H$ is $\mathit{false}$ or \mbox{$\ell(H) \geq \ell(K)$,} and (2)~there exists a substitution~$\vartheta$ such that~\mbox{$Q\!=\! B\vartheta $} and $\mathbb D\models \forall(c \rightarrow d\vartheta)$. Then,\,by \textit{folding \( C\)\,using definition\,\( D\)}, we derive clause \(E \):~\( H\leftarrow c, G_{\!L}, K\vartheta, G_{\!R} \), and we get \( P_{i+1}= (P_{i}\setminus\{C\})\cup \{E \} \). \smallskip \hrule \medskip The Clause Deletion Rule removes a clause with an unsatisfiable constraint in its body. \medskip \hrule \vspace*{1.5mm} \noindent {\bf (R4)~Clause Deletion Rule.} Let $C$: $H\leftarrow c,G$ be a clause in $P_i$ such that the constraint~$c$ is unsatisfiable. Then, by {\it clause deletion} we get $P_{i+1} = P_i \setminus\{C\}$. \smallskip \hrule \medskip The Functionality Rule rewrites a functional conjunction of atoms by using Property ({\it Funct}). \medskip \hrule \vspace*{1.5mm} \noindent {\bf (R5)~Functionality Rule.} Let $C$: $H\leftarrow c, G_L,F(X,Y),F(X,Z), G_R$ be a clause in~$P_i$, where $F(X,Y)$ is a functional conjunction of atoms {from~$X$ to~$Y$} with respect to $\mathit{Definite}(P_0) \cup \mathit{Defs}_i$. Then, by \textit{functionality}, from~$C$ we derive~$D$: $H\!\leftarrow c, Y\!\!=\!Z, G_{\!L},F(X,\!Y),G_{\!R}$, and we get \( P_{i+1}= (P_{i}\setminus\{C\})\cup \{D \} \). \smallskip \hrule \medskip The Totality Rule rewrites a functional conjunction of atoms by using Property~({\it Total\/}). \medskip \hrule \vspace*{1.5mm} \noindent {\bf (R6)~Totality Rule.} Let $C$: $H\leftarrow c, G_L,F(X,Y),G_R$ be a clause in $P_i$ such that $Y \cap \mathit{vars}(H\leftarrow c, G_L,G_R) = \emptyset$ and $F(X,Y)$ is a total conjunction of atoms {from~$X$ to~$Y$} with respect to $\mathit{Definite}(P_0)\cup \mathit{Defs}_i$. Then, by \textit{totality}, from~$C$ we derive clause~$D$\,: $H\leftarrow c, G_L,G_R$, and we get \( P_{i+1}= (P_{i}\setminus\{C\})\cup \{D \} \). \smallskip \hrule \medskip As mentioned above, the functionality and totality properties hold by construction, and we do not need to prove them when applying rules~R5 and~R6. \medskip The Differential Replacement Rule replaces a conjunction of atoms by a new conjunction together with an atom defining a relation among the variables of those conjunctions. \medskip \hrule \smallskip \vspace*{1.5mm} \noindent {\bf (R7)~Differential Replacement Rule.} Let $C$: $H\leftarrow c, G_L,F(X;Y),G_R$ be a clause in $P_i$, and let $D$: $\mathit{diff}(Z) \leftarrow d, F(X;Y), R(V;W)$ be {a variant of} a definition clause in $\mathit{Defs}_i$, such that: \noindent (1)~$F(X;Y)$ and $R(V;W)$ are total, functional conjunctions with respect to $\mathit{Definite}(P_0)\cup \mathit{Defs}_i$, (2)~$W\cap \mathit{vars}(C)\!
=\!\emptyset$, (3)~$\mathbb{D}\models\forall (c\!\rightarrow\! d)$, and (4)~\mbox{$\ell(H)\!>\!\ell(\!\mathit{diff(Z)})$}. Then, by {\it differential replacement}, we derive clause~$E$: $H\!\leftarrow \!c, G_{\!L},R(V;\!W),$ $\mathit{diff}(Z), G_{\!R}$, and we get $P_{i+1}= (P_{i}\setminus\{C\}) \cup \{E \}$. \smallskip \hrule \medskip Note that in rule~R7 no assumption is made on the set $Z$ of variables, apart from the one deriving from the fact that $D$ is a definition, that is, $Z\! \subseteq\! {\mathit{vars}}(d) \cup X\cup Y \cup V \cup W.$ The transformation algorithm~${\mathcal R}$~for the removal of ADTs, which we will present in Section~\ref{sec:Strategy}, applies a specific instance of rule~R7 (see, in particular, the Diff-Introduce step). The general form of rule~R7 that we have now considered makes it easier to prove the Soundness and Completeness Theorems (see Theorems~\ref{thm:unsat-preserv} and~\ref{thm:sat-preserv}) that we will present below. \subsection{Soundness of the transformation rules} \label{subsec:soundness} Now we will extend to rules \mbox{R1--R7} some correctness results that have been proved for the transformation of (constraint) logic programs~\cite{EtG96,Fi&04a,Sek09,TaS86}. \begin{theorem}[Soundness of the Transformation Rules] \label{thm:unsat-preserv} Let $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ be a transformation sequence using rules {\rm{R1--R7}}. Suppose that the following condition holds\,$:$ \vspace{.5mm} \noindent\hangindent=8mm \makebox[8mm][l]{\rm \,(U)}for $i \!=\! 1,\ldots,n\!-\!1$, if $P_i \Rightarrow P_{i+1}$ by folding a clause in $P_i$ using a definition $D\!: H \leftarrow c,B$ in $\mathit{Defs}_i$, then, for some $j\! \in\!\{1,\ldots,i\!-\!1,i\!+\!1,\ldots, n\!-\!1\}$, $ P_{j}\Rightarrow P_{j+1}$ by unfolding $D$ with respect to an atom $A$ such that $\ell(H)=\ell(A)$. \vspace{.5mm} \noindent If $P_n$ is satisfiable, then $P_0$ is satisfiable. \end{theorem} Thus, to prove the satisfiability of a set~$P_0$ of clauses, it suffices to: (i)~construct a transformation sequence $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$, and then (ii)~prove that $P_n$ is satisfiable. The need for Condition (U) in Theorem~\ref{thm:unsat-preserv} can be shown by the following example. \vspace{-1mm} \begin{example}\label{ex:need-of-Cond-U} Let us consider the following initial set of clauses: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{0}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1. false :- p. 2. p. \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent By rule~R1 we introduce the definition: \vspace{.5mm} \hspace{9mm}{\tt \small 3. newp :- p.} \vspace{.5mm} \noindent and we get the set $P_{1}\!=\!\{${\tt \small 1,2,3}$\}$ of clauses. Then, by folding clause~{\tt \small 1} using definition~{\tt \small 3}, we get: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{2}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1f. false :- newp. 2. p. 3. newp :- p. \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent Again, by folding definition~{\tt \small 3} using the same definition~{\tt \small 3}, we get: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{3}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1f. false :- newp. 2. p. 3f. newp :- newp. \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent Now we have that $P_{3}$ is satisfiable (its least ${\mathbb D}$-model being \{{\tt \small p}\}), while $P_{0}$ is unsatisfiable. This fact is consistent with Theorem~\ref{thm:unsat-preserv}.
Indeed, the transformation sequence $P_{0}\Rightarrow P_{1}\Rightarrow P_{2}\Rightarrow P_{3}$ does not comply with Condition~(U) because, during that sequence, definition~{\tt \small{3}} has not been unfolded. \hfill $\Box$ \end{example} The following example shows that Condition~(4) for the application of rule~R7 cannot be dropped because, otherwise, Theorem~\ref{thm:unsat-preserv} does not hold. \vspace{-1mm} \begin{example} Let us consider the initial set of clauses: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{0}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1. false :- r(X,Y). 2. r(X,Y) :- f(X,Y). 3. f(X,Y) :- Y=0. \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent where {\tt \small f} and {\tt \small r} are predicates whose arguments are in the set~$\mathbb Z$ of integers. Let us assume that the level mapping $\ell$ is defined as follows: $\ell(${\tt \small f}$)\!=\!1$ and $\ell(${\tt \small r}$)\!=\!2$. Now, we apply rule R1 to introduce a new predicate~{\tt \small diff}, and we get: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{1}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1. false :- r(X,Y). 2. r(X,Y) :- f(X,Y). 3. f(X,Y) :- Y=0. 4. diff(X,W,Y) :- f(X,Y), r(X,W). \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent where, complying with rule R1, we set $\ell(${\tt \small{diff}}$)\!=\!2$. By applying rule~R7, even if Condition~(4) is not satisfied, we get: \vspace{1mm} \begin{minipage}[t]{8mm} $P_{2}$: \end{minipage} \begin{minipage}[t]{80mm} {\small{ \begin{verbatim} 1. false :- r(X,Y). 2r. r(X,Y) :- r(X,W), diff(X,W,Y). 3. f(X,Y) :- Y=0. 4. diff(X,W,Y) :- f(X,Y), r(X,W). \end{verbatim} }} \end{minipage} \vspace{1mm} \noindent Now, contrary to the conclusion of Theorem~\ref{thm:unsat-preserv}, we have that $P_{0}$ is unsatisfiable and $P_{2}$ is satisfiable, its least ${\mathbb Z}$-model being \{{\tt \small f(n,0)} $\mid$ {\tt \small n}\,$\in\!{\mathbb Z}\}$. Note that the other Conditions~(1), (2), and~(3) for applying rule~R7 do hold. In particular, the atoms {\tt \small f(X,Y)} and {\tt \small r(X,Y)} are total, functional atoms from~{\tt \small X} to~{\tt \small Y} with respect to $P_{0}$. \hfill $\Box$ \end{example} \medskip The rest of this section is devoted to the proof of Theorem~\ref{thm:unsat-preserv}. First, we recall and recast in our framework some definitions and facts taken from the literature~\cite{EtG96,TaS84,TaS86}. Besides the rules presented in Section~\ref{subsec:Rules} above, let us also consider the following rule R8 which, given a transformation sequence $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_i$, for some $i\!\geq\!0$, allows us to extend it by constructing a new set $P_{i+1}$ of CHCs such that $P_i\Rightarrow P_{i+1}$. \medskip \hrule \vspace*{1.5mm} \noindent \textbf{(R8)~Goal Replacement Rule.} Let $C$:~${H}\leftarrow c, c_{1}, {G}_{L}, {G}_{1}, {G}_{R}$ be a clause in~${P}_{i}$. If in clause~$C$ we \emph{replace} $c_1,G_1$ by $c_2,G_2$, we derive clause~$D$: ${H}\leftarrow c, c_{2}, {G}_{L}, {G}_{2}, {G}_{R}$, and we get $P_{i+1} = ({P}_{i}\setminus\{C\})\cup \{D\}$. \smallskip \hrule \medskip Now let us introduce two particular kinds of Goal Replacement Rule: (i)~{\it{body weakening}}, and (ii)~{\it{body strengthening}}.
First, for the Goal Replacement Rule, we need to consider the following two tuples of variables: \vspace*{1mm} $T_1 = \mathit{vars}(\{c_{1},{G}_{1}\}) \setminus \mathit{vars}(\{{H}, c, {G}_{L},{G}_{R}\})$,~~ and $T_2 = \mathit{vars}(\{c_{2},{G}_{2}\}) \setminus \mathit{vars}(\{{H}, c, {G}_{L},{G}_{R}\})$. \vspace*{1mm} \noindent The Goal Replacement Rule is said to be a {\em body weakening} if the following two conditions hold: \vspace*{1mm} \makebox[12mm][r]{(W.1)~~} $M(\textit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall\,( c_{1}\! \wedge \! {G}_{1} \rightarrow \exists T_2.\, c_{2}\! \wedge \! {G}_{2})$ \makebox[12mm][r]{(W.2)~~} $\ell(H)\!>\!\ell(\mathit{A})$, for every atom $A$ occurring in $G_2$ and not in $G_1$. \smallskip \noindent The Goal Replacement Rule is said to be a \emph{body strengthening} if the following condition holds: \vspace*{1mm} \makebox[12mm][c]{(S)~~} $M(\textit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall \, (c_{2}\! \wedge \! {G}_{2} \rightarrow \exists T_1.\, c_{1}\! \wedge \! {G}_{1})$. \vspace*{1mm} \smallskip Usually, in the literature, the Goal Replacement Rule is presented by considering the conjunction of Conditions~(W.1) and~(S), thereby considering the quantified equivalence $\forall \, ( (\exists T_1.\, c_{1}\! \wedge \! {G}_{1}) \leftrightarrow (\exists T_2.\, c_{2}\! \wedge \! {G}_{2}))$. We have split that equivalence into the two associated implications. This has been done because, when proving the Soundness result (see Theorem~\ref{thm:unsat-preserv} in this section) and the Completeness result (see Theorem~\ref{thm:sat-preserv} in Section~\ref{subsec:completeness}), it is convenient to present the preservation of the least $\mathbb D$-models in two parts, as specified by the following two theorems. \begin{theorem} \label{thm:cons} Let $D_0, \ldots, D_n$ be sets of definite CHCs and let \mbox{$D_0 \! \Rightarrow \! \ldots \! \Rightarrow \! D_n$} be a transformation sequence constructed using rules {\rm R1} $($Definition$)$, {\rm R2} $($Unfolding$)$, {\rm R3} $($Folding$)$, and {\rm R8} $($Goal Replacement$)$. Suppose that Condition~{\rm (U)} of Theorem~$\ref{thm:unsat-preserv}$ holds and all goal replacements are body weakenings. Then $M({D}_0\cup \mathit{Defs}_n)\subseteq M({D}_n)$. \end{theorem} \begin{theorem} \label{thm:pc} Let $D_0, \ldots, D_n$ be sets of definite CHCs and let \mbox{$D_0 \! \Rightarrow \! \ldots \! \Rightarrow \! D_n$} be a transformation sequence constructed using rules {\rm R1} $($Definition$)$, {\rm R2} $($Unfolding$)$, {\rm R3} $($Folding$)$, and {\rm R8} $($Goal Replacement$)$. Suppose that, for all applications of~{\rm R3}, Condition {\rm (E)} holds $($see Definition~$\ref{def:condR}$ in Section~$\ref{sec:Completeness}$$)$ and all goal replacements are body strengthenings. Then \(M( D_0\cup \mathit{Defs}_n)\supseteq M( D_n) \). \end{theorem} For the proof of Theorems~\ref{thm:cons} and~\ref{thm:pc} we refer to the results presented in the literature~\cite{EtG96,TaS84,TaS86}. The correctness of the transformation rules with respect to the least Herbrand model semantics was first proved in the landmark paper by Tamaki and Sato~\cite{TaS84}. In a subsequent technical report~\cite{TaS86}, the same authors extended that result by introducing the notion of the {\it level\/} of an atom, which we also use in this paper (see the notion defined at the beginning of Section~\ref{subsec:Rules}).
The use of atom levels allows less restrictive applicability conditions on the Folding and Goal Replacement Rules. Later, Etalle and Gabbrielli~\cite{EtG96} extended Tamaki and Sato's results to the \mbox{$\mathbb D$-model} semantics of {\em constraint logic programs} (in the terminology used in this paper, a constraint logic program is a set of definite constrained Horn clauses). There are three main differences between our presentation of the correctness results for the transformation rules and the presentation in the literature~\cite{EtG96,TaS84,TaS86}. First, as already mentioned, we kept the two Conditions~(E) and~(S), which guarantee the inclusion \(M(D_0\!\cup\! \mathit{Defs}_n)\!\supseteq\! M( D_n) \) (called {\em Partial Correctness} by Tamaki and Sato~\cite{TaS86}), separated from the three Conditions~(U), (W.1), and~(W.2), which guarantee the reverse inclusion \(M(D_0\cup\mathit{Defs}_n)\!\subseteq\! M( D_n) \). All five conditions together guarantee the equality \(M(D_0\!\cup\! \mathit{Defs}_n)\!=\!M( D_n) \) (called {\em Total Correctness} by Tamaki and Sato~\cite{TaS86}). Second, Tamaki and Sato's conditions for the correctness of the Goal Replacement Rule are actually more general than ours, as they use a well-founded relation which is based on atom levels and also on a suitable measure (called \mbox{\em weight-tuple measure}~\cite{TaS86}) of the successful derivations of an atom in $M( D_0\cup \mathit{Defs}_n)$. Our simpler conditions straightforwardly imply Tamaki and Sato's, and are sufficient for our purposes in the present paper. Third, Tamaki and Sato's papers~\cite{TaS84,TaS86} do not consider constraints, whereas Etalle and Gabbrielli's results for constraint logic programs do not consider Goal Replacement~\cite{EtG96}. However, Tamaki and Sato's proofs can easily be extended to constraint logic programs by simply dealing with atomic constraints as atoms with level 0 and assigning positive levels to all other atoms. \medskip From Theorem~\ref{thm:cons}, we get the following Theorem~\ref{thm:unsat}, which relates the satisfiability of sets of clauses obtained by applying the transformation rules to the satisfiability of the original sets of clauses. \begin{theorem} \label{thm:unsat} Let $P_0 \! \Rightarrow \! \ldots \! \Rightarrow \! P_n$ be a transformation sequence constructed using rules {\rm R1} $($Definition$)$, {\rm R2} $($Unfolding$)$, {\rm R3} $($Folding$)$, and {\rm R8} $($Goal Replacement$)$. Suppose that Condition~{\rm (U)} of Theorem~$\ref{thm:unsat-preserv}$ holds and all goal replacements are body weakenings. If $P_n$ is satisfiable, then $P_0$ is satisfiable. \end{theorem} \begin{proof} First, we observe that $P_0$ is satisfiable iff $P_0 \cup \textit{Defs}_n$ is satisfiable. Indeed, we have that: (i)~if~$\mathcal{M}$ is a $\mathbb D$-model of $P_0$, then the $\mathbb D$-interpretation $\mathcal{M} \cup \{\textit{newp}(a_1,\ldots,a_k) \mid \textit{newp}$ is a head predicate in $\textit{Defs}_n \mbox{ and } a_1,\ldots, a_k$ are ground terms$\}$ is a $\mathbb D$-model of $P_0 \cup \textit{Defs}_n$, and (ii)~if $\mathcal{M}$ is a $\mathbb D$-model of $P_0 \cup \textit{Defs}_n$, then all clauses of $P_0$ are true in $\mathcal{M}$, and hence $\mathcal{M}$ is a $\mathbb D$-model of $P_0$.
Then, let us consider a new transformation sequence $P'_0\Rightarrow \ldots \Rightarrow P'_n$ obtained from the sequence $P_0\Rightarrow \ldots \Rightarrow P_n$ by replacing each occurrence of \textit{false} in the head of a clause by a fresh predicate symbol, say $f$. $P'_0,\ldots,P'_n$ are sets of definite clauses, and thus, for $i=0,\ldots,n,$ $\textit{Definite}(P'_i)= P'_i$. The sequence \mbox{$P'_0\Rightarrow \ldots \Rightarrow P'_n$} satisfies the hypotheses of Theorem~\ref{thm:cons}, and hence $M(P'_0\cup \mathit{Defs}_n)\!\subseteq M(P'_n)$. We have that: \noindent \makebox[12mm][l]{}$P_n$ is satisfiable \noindent \makebox[12mm][l]{implies}{$P'_n\cup \{\neg f\}$ is satisfiable} \noindent \makebox[12mm][l]{implies}{$f\not\in M(P'_n)$} \noindent \makebox[12mm][l]{implies,}\,by Theorem~\ref{thm:cons}, $f\not\in M(P'_0 \cup \textit{Defs}_n)$ \noindent \makebox[12mm][l]{implies}{$P'_0 \cup \textit{Defs}_n\cup \{\neg f\}$ is satisfiable} \noindent \makebox[12mm][l]{implies}{$P_0 \cup \textit{Defs}_n$ is satisfiable} \noindent \makebox[12mm][l]{implies}{$P_0$ is satisfiable.} \hfill$\Box$ \end{proof} Now, in order to prove Theorem~\ref{thm:unsat-preserv} of Section~\ref{sec:TransfRules}, which states the soundness of rules R1--R7, we show that rules R4--R7 are all body weakenings. \medskip An application of rule~R4~(Clause Deletion), by which we delete clause~$C$: $H\leftarrow c,G$, whenever the constraint~$c$ is unsatisfiable, is equivalent to the replacement of the body of clause $C$ by {\it false}. Since $c$ is unsatisfiable, we have that: \smallskip $M(\textit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall\,(c \wedge G \rightarrow {\it false})$ \smallskip \noindent and Condition (W.1) of rule R8 holds. Also Condition~(W.2), that is: \vspace*{1mm} $\ell(H)\!>\!\ell(\mathit{A})$, for every atom $A$ occurring in ${\it false}$ \vspace*{1mm} \noindent trivially holds, because there are no atoms in ${\it false}$. Thus, the replacement of the body of clause $H\leftarrow c,G$ by {\it false} is a body weakening. \medskip Let us now consider rule~R5 (Functionality). Let $F(X,Y)$ be a conjunction of atoms that defines a functional relation from~$X$ to~$Y$, that is, Property (\textit{Funct}) of Section~\ref{sec:CHCs} holds for $F(X,Y)$. When rule~R5 is applied, whereby a conjunction $F(X,Y),F(X,Z)$ is replaced by the new conjunction $Y\!=\!Z,\ F(X,Y)$, we have that: \vspace*{1mm} $M(\mathit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall(F(X,Y) \wedge F(X,Z) \rightarrow Y\!=\!Z)$ \vspace*{1mm} \noindent and hence Condition~(W.1) of rule R8 holds. When this replacement is performed, also Condition~(W.2) trivially holds, and thus rule~R5 is a body weakening. \medskip An application of rule~R6 (Totality) replaces a conjunction $F(X,Y)$ by {\it true} (that is, the empty conjunction), which is implied by any formula. Hence Conditions (W.1) and (W.2) trivially hold, and rule~R6 is a body weakening. \medskip For rule~R7 (Differential Replacement) we prove the following lemma. Recall that by $F(X;Y)$ we denote a conjunction of atoms that defines a total, functional relation from~$X$ to~$Y$. \begin{lemma}\label{lemma:R7unsat} Let us consider a transformation sequence $P_{0}\Rightarrow \ldots \Rightarrow P_{i}$ and a clause $C${\rm :}~$H\leftarrow c, G_L,F(X;Y),G_R$ in $P_{i}$.
Let us assume that by applying rule~{\rm R7} on clause~$C$ using the definition clause \smallskip $D{\rm :}~\mathit{diff}(Z) \leftarrow d, F(X;Y), R(V;W),$ \smallskip \noindent where\,{\rm :} $(D1)$~$W \cap \mathit{vars}(C) = \emptyset,$ and $(D2)$~$\mathbb{D}\models\forall (c\rightarrow d),$ we derive clause \smallskip $E${\rm :} $H\leftarrow c, G_L,R(V;W), \mathit{diff}(Z), G_R$ \smallskip \noindent and we get the new set $P_{i+1} = (P_{i}\setminus\{C\}) \cup \{E \}$ of clauses. Then, \smallskip $M(\textit{Definite}(P_0) \cup \mathit{Defs}_i)\models \forall( c \wedge F(X;Y) \rightarrow \exists W.\,(R(V;W) \wedge \mathit{diff}(Z)))$. \end{lemma} \begin{proof} Let $\mathcal M$ denote $M(\textit{Definite}(P_0) \cup \mathit{Defs}_i)$. Since $R(V;W)$ is a total, functional conjunction from $V$ to $W$ with respect to~$\textit{Definite}(P_0) \cup \mathit{Defs}_i$, we have: \smallskip $\mathcal M\models \forall\,( c \wedge F(X;Y) \rightarrow \exists W.\, R(V;W))$ \hfill $(\alpha)$~~~~ \smallskip \noindent Since, by Condition~$(D1)$, none of the variables in $W$ occurs in~$C$, from definition~$D$, we get: \smallskip $\mathcal M \models \forall\,(d \wedge F(X;Y) \wedge R(V;W) \rightarrow \mathit{diff}(Z))$\hfill $(\beta)$~~~~ \smallskip \noindent From~$(\alpha)$, $(\beta)$, and Condition~$(D2)$, we get the thesis.\hfill $\Box$ \end{proof} \medskip From Lemma~\ref{lemma:R7unsat} it follows that rule~R7, which replaces in the body of clause~$C${\rm :} $H\leftarrow c, G_L,F(X;Y),G_R$ the conjunction~$F(X;Y)$ by the new conjunction $R(V;W)$, $\mathit{diff}(Z)$, is a body weakening, assuming that $\ell(H)\!>\!\ell(\mathit{diff}(Z))$. Recall that, since clause $D$ is a definition clause, we have that $\ell(\mathit{diff}(Z))\!\geq\!\ell(\mathit{R})$, and thus we have that $\ell(H)\!>\!\ell(\mathit{R})$. The following lemma summarizes the facts we have shown above about rules~R4--R7. \begin{lemma}\label{lemma:weakening} The applications of rules {\rm R4--R7} are all body weakenings. \end{lemma} Finally, having proved Lemma~\ref{lemma:weakening}, we can present the proof of Theorem~\ref{thm:unsat-preserv}. \medskip \noindent {\it Proof of Theorem~$\ref{thm:unsat-preserv}$}. Let $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ be a transformation sequence constructed using rules {\rm{R1--R7}}. Then, by Lemma~\ref{lemma:weakening}, that sequence can also be constructed by applications of rules {\rm{R1--R3}} together with applications of rule~R8 which are all body weakenings. Since, by the hypothesis of Theorem~\ref{thm:unsat-preserv}, Condition~(U) holds, by applying Theorem~\ref{thm:unsat} we get the thesis.~\hfill~$\Box$ \subsection{The ADT removal Algorithm~${\mathcal R}$} \label{subsec:algoR} Algorithm~${\mathcal R}$~(see Figure~\ref{fig:AlgoR}) removes ADT terms starting from the set~$\mathit{Gs}$ of goals in~$\mathit{Cls}$. Initially, those goals are all collected in the set~$\mathit{InCls}$. The set~$\mathit{Defs}$ collects the definitions of the new predicates introduced by applications of rule~R1 during the execution of Algorithm~${\mathcal R}$. Initially, we have that $\mathit{Defs}\!=\!\emptyset$. \begin{figure}[!ht] \vspace{-5mm} \noindent \hrulefill\nopagebreak \noindent {\bf Algorithm}~${\mathcal R}$\\ {\em Input}: A set $\mathit{Cls}$ of clauses and a level mapping $\ell$ of the predicates occurring in $\mathit{Cls}$. {\em Output}: A set $\mathit{TransfCls}$ of clauses that have basic types.
\vspace*{-2mm} \noindent \rule{2.0cm}{0.2mm} \noindent Let $\mathit{Cls} = \mathit{Ds} \cup \mathit{Gs}$, where $\mathit{Ds}$ is a set of definite clauses and $\mathit{Gs}$ is a set of goals; \noindent $\mathit{InCls}:=\mathit{Gs}$; \noindent $\mathit{Defs}:=\emptyset$; \noindent $\mathit{TransfCls}:=\emptyset;$ \noindent {\bf while} $\mathit{InCls}\!\neq\!\emptyset$ {\bf do} \hspace*{3mm}\vline\begin{minipage}{11.8cm} \makebox[4mm][l]{~$\scriptscriptstyle\blacksquare$}$\mathit{Diff\mbox{-}Define\mbox{-}Fold}(\mathit{InCls},\mathit{Defs}, \mathit{NewDefs},\mathit{FldCls});$ \makebox[4mm][l]{~$\scriptscriptstyle\blacksquare$}$\mathit{Unfold}(\mathit{NewDefs},\mathit{Ds},\mathit{UnfCls});$ \makebox[4mm][l]{~$\scriptscriptstyle\blacksquare$}$\mathit{Replace}(\mathit{UnfCls}, \mathit{Ds}, \mathit{RCls});$ \makebox[4.5mm][l]{}$\mathit{InCls}:=\mathit{RCls};$~~~ $\mathit{Defs}:=\mathit{Defs}\,\cup\mathit{NewDefs};$~~~ $\mathit{TransfCls}:=\mathit{TransfCls}\,\cup\mathit{FldCls};$ \end{minipage} \vspace*{1mm} \noindent \hrulefill \vspace*{-2mm} \caption{The ADT removal algorithm~${\mathcal R}$.\label{fig:AlgoR}} \vspace*{-6mm} \end{figure} \noindent Algorithm~${\mathcal R}$~iterates, in sequence, the following three procedures.~ \smallskip \noindent (1)~{\it Procedure $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$} introduces, by rule~R1, a set $\mathit{NewDefs}$ of suitable new predicate definitions. By applications of the Folding Rule~R3 and, possibly, of the Differential Replacement Rule~R7, using clauses in $\mathit{Defs} \cup \mathit{NewDefs}$, the procedure removes the ADT terms from the input set $\mathit{InCls}$ of clauses. The bodies (but not the heads) of the clauses in $\mathit{NewDefs}$ contain ADT terms, and thus they need to be transformed to remove those terms. \smallskip \noindent (2)~{\it Procedure $\mathit{Unfold}$} performs some steps of symbolic evaluation of the newly introduced definitions by applying the Unfolding Rule~R2 to the clauses occurring in $\mathit{NewDefs}$. \smallskip \noindent (3)~{\it Procedure $\mathit{Replace}$} removes clauses that have an unsatisfiable body by applying rule~R4, and also exploits the functionality and totality properties of the predicates by applying rules~R5 and~R6, respectively. \smallskip The clauses with ADTs obtained after the $\mathit{Replace}$ procedure and the new predicate definitions introduced at each iteration are added to $\mathit{InCls}$ and $\mathit{Defs}$, respectively. Algorithm~${\mathcal R}$~terminates when the set~$\mathit{InCls}$ of clauses becomes empty because no new definitions need to be introduced to perform folding steps. \smallskip Note that Algorithm~${\mathcal R}$~also takes as input a level mapping~$\ell$ for the predicates occurring in {\it Cls}. In our implementation, however, no function $\ell$ is actually provided and, instead, a suitable level mapping is constructed during the execution of the algorithm itself. We do this construction by following a general constraint-based approach for guaranteeing the correctness of logic program transformations~\cite{Pe&12a}. In particular, given an initially empty set~$L$ of constraints, each time~${\mathcal R}$~applies a transformation rule whose soundness depends on the satisfaction of a constraint on the predicate levels (see, in particular, the conditions in rules~R1, R3, R7, and Condition (U) in Theorem~\ref{thm:unsat-preserv} for R2), that constraint is added to the set~$L$.
For the soundness of Algorithm~${\mathcal R}$, it is required that, at the end of its execution, the set $L$ be satisfiable. A solution of~$L$ provides the level mapping~$\ell$ to be constructed. In order not to burden the presentation with too many technical details, we will not present here the actual constraint handling mechanism used for the construction of the function~$\ell$. \begin{example}[Reverse]\label{ex:rev1} Throughout this section we will use the {\it Reverse} example of Section~\ref{sec:IntroExample} as a running example for illustrating an application of the ADT removal algorithm~${\mathcal R}$. In that example, the set {\it Cls} of clauses given as input to~${\mathcal R}$~consists of clauses {\tt \small 1}--{\tt \small 9}, with ${\it Gs} = \{${\tt \small 1}$\}$ and ${\it Ds} = \{${\tt \small 2}$,\ldots,${\tt \small 9}$\}$. Thus, ${\it InCls}$ is initialized to \{{\tt \small 1}\}. We assume that the following level mapping $\ell$ is associated with the predicates occurring in clauses {\tt \small 1}--{\tt \small 9}: $\ell(${\tt \small append}$) = \ell(${\tt \small reverse}$) = 2$, and $\ell(${\tt \small snoc}$) = \ell(${\tt \small len}$) = 1$. \hfill $\Box$ \end{example} \subsection{Procedure $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$} \label{subsec:diff} In order to present the $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$ procedure used by Algorithm~${\mathcal R}$, first we introduce the following notions. Given a conjunction $G$ of atoms, by $\mathit{bvars}(G)$ we denote the set of variables in~$G$ that have a basic type. Similarly, by $\mathit{adt\mbox{-}vars}(G)$ we denote the set of variables in $G$ that have an ADT type. \begin{definition} We say that an atom $($or a clause$)$ {\em has basic types} if {\em all\/} its arguments $($or atoms, respectively$)$ have a basic type. An atom $($or a clause$)$ {\em has ADTs\/} if {\em at least one} of its arguments $($or atoms, respectively$)$ has an ADT type. \end{definition} \begin{definition} {Given a set $($or a conjunction$)$ $S$ of atoms, $\mathit{SharingBlocks}(S)$ denotes the partition of $S$ with respect to the reflexive, transitive closure, denoted~$\Downarrow_S$, of the relation~$\downarrow_S$ defined as follows. Given two atoms $A_1$ and $A_2$ in $S$, $A_1\! \downarrow_S\! A_2$ holds iff $\mathit{adt\mbox{-}vars}(A_1) \cap \mathit{adt\mbox{-}vars}(A_2)\!\neq\! \emptyset$.} The elements of the partition are called the {\em sharing blocks} of~$S$. We say that $S$ is {\em connected} if $\mathit{SharingBlocks}(S)=\{S\}$. \end{definition} \begin{definition} A {\em generalization} of a pair $(c_1,c_2)$ of constraints is a constraint, denoted $\alpha (c_1,c_2)$, such that $\mathbb D \models\forall (c_1 \rightarrow \alpha (c_1,c_2))$ and $\mathbb D \models\forall (c_2 \rightarrow \alpha (c_1,c_2))$. \end{definition} In particular, we consider the following generalization operator based on {\it widening}~\cite{CoH78,Fi&13a}. Suppose that $c_1$ is the conjunction $(a_1,\ldots,a_m)$ of atomic constraints. Then, $\alpha (c_1,c_2)$ is defined to be the conjunction of all $a_i$'s in $(a_1,\ldots,a_m)$ such that \mbox{$\mathbb D\!\models\!\forall (c_2\! \rightarrow\! a_i)$}. In order to improve the efficacy of generalization, when some of the $a_i$'s are {\it LIA} equalities, they are split into conjunctions of {{\it LIA}} inequalities before applying widening.
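For concreteness, the following Python sketch shows one possible implementation of the widening-based generalization operator described above. It is only an illustration of the definition: the record type {\tt Atomic} and the entailment oracle {\tt entails} (which, in an actual implementation, would be provided by a constraint solver) are hypothetical names introduced here for the example.

{\small
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Atomic:        # hypothetical representation of an atomic
    lhs: str         # LIA constraint of the form: lhs op rhs
    op: str          # one of '=', '<=', '>='
    rhs: str

def split_equalities(atoms):
    # Split each LIA equality l = r into l <= r and l >= r,
    # as done before widening to improve generalization.
    out = []
    for a in atoms:
        if a.op == '=':
            out += [Atomic(a.lhs, '<=', a.rhs),
                    Atomic(a.lhs, '>=', a.rhs)]
        else:
            out.append(a)
    return out

def widen(c1, c2, entails):
    # alpha(c1,c2): the conjunction of the atomic constraints a
    # of c1 such that c2 entails a. Since every kept a is also
    # entailed by c1, the result generalizes the pair (c1,c2).
    return [a for a in split_equalities(c1) if entails(c2, a)]

# Toy demo with a purely syntactic entailment oracle:
# alpha((X = 0), (X >= 0)) is X >= 0.
entails = lambda c, a: a in c
print(widen([Atomic('X', '=', '0')],
            [Atomic('X', '>=', '0')], entails))
\end{verbatim}
}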
\begin{definition}\label{def:projection} For any constraint $c$ and tuple $V$ of variables, the {\em projection} of~$c$ onto $V$ is a constraint $\pi(c,V)$ such that: $(i)$~$\mathit{vars}(\pi(c,V))\!\subseteq\! V$, and $(ii)$~$\mathbb D \models \forall (c\!\rightarrow\!\pi(c,V))$. \end{definition} In our implementation, $\pi(c,V)$ is computed by applying to the formula $\exists Y.\, c$, where \mbox{$Y\!=\! \mathit{vars}(c)\! \setminus\! V,$} a quantifier elimination algorithm for the {theories} of booleans and {\it rational} {(not integer)} numbers. This implementation is safe in our context, because it guarantees properties~$(i)$ and~$(ii)$ {of Definition~\ref{def:projection}}, and avoids relying on modular arithmetic, as usually done when eliminating quantifiers in {\it LIA}~\cite{Rab77}. \begin{definition}% For two conjunctions $G_1$ and $G_2$ of atoms, we say that $G_1$ {\em atomwise subsumes} $G_2$, denoted $G_1\Embedded G_2$, if $G_1$ is the conjunction $(A_1,\ldots,A_n)$ and there exists a subconjunction $(B_{1},\ldots, B_{n})$ of atoms of $G_2$ $($modulo reordering$)$ and substitutions $(\vartheta_{1},\ldots,\vartheta_{n})$ such that, for $i\!=\!1,\ldots,n,$ we have that $B_{i}\!=\!A_{i}\vartheta_{i}$. \end{definition} Now let us present the $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$ procedure (see Figure~\ref{fig:Diff}). At each iteration of the body of the {\bf for} loop, the $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$ procedure removes the ADT terms occurring in a sharing block $B$ of the body of a clause~$C\!:$ $H\!\leftarrow\! c, B, G'$ of $\mathit{InCls}$ (initially, $\mathit{InCls}$ is a set of goals whose head is {\it false}). This is done by possibly introducing some new definitions using the Definition Rule~R1 and applying the Folding Rule~R3. To allow folding, some applications of the Differential Replacement Rule~R7 may be needed. We have the following four cases. \begin{figure}[!ht] \noindent \hrulefill \nopagebreak \noindent {\bf Procedure $\mathit{Diff\mbox{-}Define\mbox{-}Fold}(\mathit{InCls},\mathit{Defs}, \mathit{NewDefs},\mathit{FldCls})$} \\ {\em Input}\/: A set {\it InCls} of clauses and a set {\it Defs} of definitions; \\ {\em Output}\/: A set {\it NewDefs} of definitions and a set $\mathit{FldCls}$ of clauses with basic types. \vspace{-2mm} \noindent \rule{2.0cm}{0.2mm} \noindent $\mathit{NewDefs} := \emptyset; \ \mathit{FldCls}:= \emptyset$; \noindent {\bf for} each clause $C$: $H\leftarrow c, G$ in $\mathit{InCls}$ {\bf do} \noindent \hspace{3mm}{\bf if} $C$ has basic types {\bf then} $\mathit{InCls}\! :=\! \mathit{InCls}\!\setminus\! \{C\}$;\ $\mathit{FldCls}:=\mathit{FldCls}\cup\{C\}$ \noindent \hspace*{3mm}{\bf else} \vspace{1mm} \hspace{6mm} \vline\hspace{1mm}\begin{minipage}{11.4cm} let $C$ be $H\leftarrow c, B, G'$ where $B$ is a sharing block in $G$ such that $B$ contains at least one atom that has ADTs; \\ $\bullet$ ({\bf Fold}) {\bf if} there is a clause $D$: $\mathit{newp}(U) \leftarrow d, B'$, which is a variant of a clause\\ \hspace*{3.7mm}in $\mathit{Defs}\cup \mathit{NewDefs}$, with $U\!=\!\textit{bvars}(\{d,B'\})$, such that: (i)~$B=B'\vartheta$, {for some\\ \hspace*{3.7mm}substitution $\vartheta$ {acting on $\mathit{adt\mbox{-}vars}(B')$ only,}} and (ii)~$\mathbb D \models\forall (c \rightarrow d)$, {\bf then}\\ \hspace*{3.7mm}\underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} fold} $C$ using $D$ and derive $E$: $H\!\leftarrow c, \mathit{newp}(U),\!
G'$; \\ $\bullet$ ({\bf Generalize}) {\bf else if} there is a clause $D$: $\mathit{newp}(U) \leftarrow d, B'$, which is a variant \\ \hspace*{3.7mm}of a clause in $\mathit{Defs}\cup \mathit{NewDefs}$, with $U\!=\!\textit{bvars}(\{d,B'\})$, such that: (i)~$B=B'\vartheta$, {\\ \hspace*{3.7mm}for some substitution $\vartheta$ acting on $\mathit{adt\mbox{-}vars}(B')$ only,} and (ii)~$\mathbb D \not\models\forall (c \rightarrow d)$, {\bf then} \\ \hspace*{4.5mm}\vline\hspace{1.5mm}\begin{minipage}{10.8cm} \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} introduce definition} $\mathit{GenD}$: $\mathit{genp}(U) \leftarrow \alpha(d,c), B'$ \\ \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} fold} $C$ using $\mathit{GenD}$ and derive $E$: $H\leftarrow c, \mathit{genp}(U), G'$; \\ $\mathit{NewDefs} := \mathit{NewDefs} \cup \{\mathit{GenD}\}$; \end{minipage}\vspace{.5mm} \\ $\bullet$ ({\bf Diff}-{\bf Introduce}) {\bf else if} there is a clause $D$:\,$\mathit{newp}(U) \leftarrow d, B'$, which is a\\\hspace*{3.9mm}variant of a clause in\,$\mathit{Defs}\,\cup \mathit{NewDefs}$, with $U\!=\!\textit{bvars}(\{d,\!B'\})$ and $B' \Embedded B$, {\bf then} \\ \hspace*{4.5mm}\vline\hspace{1.5mm}\begin{minipage}{10.8cm} take a maximal connected subconjunction $M$ of $B$, if any, such that: \hangindent=3mm \\ (i) $B\!=\!(M, F(X;Y))$, for some \raisebox{-.5mm}{\rule{0mm}{1mm}} non-empty conjunction $F(X;Y)$, (ii) $B'\vartheta=(M, R(V;W))$, {for some substitution $\vartheta$ acting on $\mathit{adt\mbox{-}vars}(B')$ only} and $W \cap \mathit{vars}(C) \!= \!\emptyset$, and (iii)~for every atom~$A$ in $F(X;Y)$, $\ell(H)>\ell(A)$; \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} introduce definition} \hangindent=15mm $\widehat{D}$: $\mathit{diff}(Z) \leftarrow \pi(c,X),F(X;Y),R(V;W)$ \hspace*{3mm}where $Z\!=\!\mathit{bvars}(\{F(X;Y),R(V;W)\})$; $\mathit{NewDefs} := \mathit{NewDefs} \cup \{\widehat{D}\}$; \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} replace} $F(X;Y)$ by $(R(V;W), \mathit{diff}(Z))$ in $C$, and derive clause\\ \hspace*{3mm}\rule{0mm}{2.9mm}$C'$: $H\leftarrow c, M, R(V;W), \mathit{diff}(Z), G'$; {\bf if} $\mathbb D \models\forall (c \rightarrow d)$ \hspace*{3mm}{\bf then}~\underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} fold} $C'$ using $D$ and derive $E$: $H\leftarrow c,\mathit{newp}(U), \mathit{diff}(Z), G'$; \hspace*{3mm}{\bf else} \hspace{.5mm}\vline\hspace{1.5mm}\begin{minipage}[t]{9.5cm}% \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} introduce\,definition}\,$\mathit{GenD}$:\,$\mathit{genp}(U) \!\leftarrow\!\alpha(d,c), B'$; \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} fold} $C'$ using $\mathit{GenD}$ and derive $E$: $H\leftarrow c, \mathit{genp}(U), \mathit{diff}(Z), G'$; $\mathit{NewDefs} := \mathit{NewDefs} \cup \{\mathit{GenD}\}$; \end{minipage} \end{minipage} \noindent $\bullet$ ({\bf Project}) {\bf else} \hspace*{4.5mm}\vline\hspace{1.5mm}\begin{minipage}{10.8cm} \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} introduce definition} $\mathit{ProjC}$: $\mathit{newp}(U) \leftarrow \pi(c,Z), B$ \ where $U\!=\!\mathit{bvars}(B)$ \\ and $Z$ are the input variables of a basic type in $B$; \underline{\raisebox{-.5mm}{\rule{0mm}{1mm}} fold} $C$ using $\mathit{ProjC}$ and derive clause $E$: $H\leftarrow c, \mathit{newp}(U), G'$; $\mathit{NewDefs} := \mathit{NewDefs} \cup \{\mathit{ProjC}\}$; \end{minipage} \end{minipage} \vspace{-1mm} \noindent \hspace{3mm}$\mathit{InCls}\! :=\!
(\mathit{InCls}\setminus \{C\}) \cup \{E\}$; \nopagebreak \noindent \hrulefill \vspace*{-1mm} \caption{The {\it Diff-Define-Fold} procedure. According to rule R1, the level of every new predicate~(either $\mathit{genp}$, or $\mathit{diff}$, or $\mathit{newp}$) introduced by the procedure is equal to the maximum level of the atoms occurring in the body of its definition. \label{fig:Diff}} \vspace*{-4mm} \end{figure} \smallskip \noindent $\bullet$ ({\bf Fold}). In this case, the ADT terms in $B$ can be removed by folding using a definition that has already been introduced. In particular, let us suppose that $B$ is an instance, via a substitution $\vartheta$, of the conjunction of atoms in the body of a definition $D$ introduced at a previous iteration of the {\it Diff-Define-Fold} procedure, and constraint~$c$ in~$C$ entails the constraint in~$D$. Since we have assumed that all terms of a basic type occurring in an atom are distinct variables (see Section~\ref{sec:CHCs}), and the variables of $D$ can be freely renamed, we require that $\vartheta$ acts on ADT variables only, and hence it is the identity on the variables of a basic type. A similar assumption is also made in the next two cases (Generalize) and (Diff-Introduce). Then, we remove the ADT arguments occurring in $B$ by folding $C$ using~$D$. Indeed, by construction, all variables in the head of every definition introduced by Algorithm~${\mathcal R}$~have a basic type. \vspace{1.5mm} \noindent $\bullet$ ({\bf Generalize}). Suppose that the previous case does not apply. Suppose also that there exists a definition $D$, introduced at a previous iteration of the {\it Diff-Define-Fold} procedure, such that the sharing block $B$ is an instance of the conjunction $B'$ of the atoms in the body of $D$ and, unlike the (Fold) case, the constraint~$c$~in $C$ does {\em not} entail the constraint $d$ in $D$. We introduce a new definition $\mathit{GenD}\!:$ $\mathit{genp}(U) \leftarrow \alpha(d,c), B'$, where: (i)~by construction, the constraint $\alpha(d,c)$ is a generalization of the pair $(d,c)$, and hence $c$ entails $\alpha(d,c)$, as required for folding, and (ii)~$U$ is the tuple of the variables of a basic type in $(d,B)$. Then, we remove the ADT arguments occurring in~$B$ by folding~$C$ using~$\mathit{GenD}$. \smallskip \noindent $\bullet$ ({\bf Diff}-{\bf Introduce}). Suppose that the previous two cases do not apply because the sharing block $B$ in clause~$C$ is not an instance of the conjunction of atoms in the body of any definition introduced at a previous iteration of the procedure. Suppose, however, that $B$ {\em partially matches} the body of an already introduced definition $D$: $\mathit{newp}(U) \leftarrow d, B'$, that is, (i)~$B\!=\!(M, F(X;Y))$, and (ii)~for some substitution $\vartheta$ acting on $\mathit{adt\mbox{-}vars}(B')$ only, $B'\vartheta\!=\!(M, R(V;W))$ (see Figure~\ref{fig:Diff} for details). Then, we introduce a difference predicate {\it diff\/} defined by the clause $\widehat{D}$: $\mathit{diff}(Z) \leftarrow \pi(c,X),F(X;Y),R(V;W),$ where $Z\!=\!\mathit{bvars}(\{F(X;Y),$ $ R(V;W)\})$ and, by rule~R7, we replace the conjunction $F(X;Y)$ by the new conjunction $(R(V;W), \mathit{diff}(Z))$ in the body of~$C$, thereby deriving $C'$. Finally, we remove the ADT arguments in~$B$ by folding $C'$ using either $D$ (if $c$ entails $d$) or a clause $\mathit{GenD}$ whose constraint is the generalization $\alpha(d,c)$ of the constraint~$d$ (if $c$ does {\em not} entail $d$) (again, see Figure~\ref{fig:Diff} for details).
\smallskip \noindent $\bullet$ ({\bf Project}). Suppose that none of the previous three cases applies. Then, we first introduce a new definition $\mathit{ProjC}$: $\mathit{newp}(U)\! \leftarrow\! \pi(c,Z), B$, where $U\!=\!\mathit{bvars}(B)$ and $Z$ are the input variables of basic types in $B$, and then we can remove the ADT arguments occurring in the sharing block $B$ by folding $C$ using $\mathit{ProjC}$. \begin{example}[Reverse, Continued]\label{ex:rev2} The body of goal~{\tt \small 1} (see Section~\ref{sec:IntroExample}) has a single sharing block, that is, \smallskip \noindent $B_1$:~~{\tt \small append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), len(Rs,N2)} \smallskip \noindent Indeed, we have that {\tt \small append(Xs,Ys,Zs)} shares a list variable with each of the atoms {\tt \small reverse(Zs,Rs)}, {\tt \small len(Xs,N0)}\!, and {\tt \small len(Ys,N1)}, and atom {\tt \small reverse(Zs,Rs)} shares a list variable with {\tt \small len(Rs,N2)}. None of the first three cases (Fold), (Generalize), or ({Diff}-{Introduce}) applies, because $\mathit{Defs}\cup \mathit{NewDefs}$ is the empty set. Thus, Algorithm~${\mathcal R}$~introduces the following new definition (see also Section~\ref{sec:IntroExample}): \vspace{-2mm} {\small \begin{verbatim} D1. new1(N0,N1,N2) :- append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), len(Rs,N2). \end{verbatim} } \vspace{-2mm} \noindent where: (i)~{\tt \small new1} is a new predicate symbol, (ii)~the body is the sharing block~$B_{1}$, (iii)~{\tt \small N0,N1,N2} are the variables of basic types in $B_{1}$, and (iv)~the constraint is the empty conjunction {\tt \small true}, that is, the projection of the constraint {\tt \small N2=\textbackslash=N0+N1} occurring in goal {\tt \small 1} onto the input variables of basic types in~$B_{1}$ (i.e., the empty set, as {\tt \small N0,N1,N2} are all output variables). In accordance with rule R1, we set $\ell(${\tt \small new1}$) = \mathit{max}\{\ell(${\tt \small append}$), \ell(${\tt \small reverse}$), \ell(${\tt \small len}$) \} = 2$. By folding, from goal~{\tt \small 1} we derive a new goal without occurrences of list variables: \smallskip \noindent {\tt \small{10. false :- N2=\textbackslash=N0+N1, new1(N0,N1,N2).}} \smallskip \noindent The presentation of this example will continue in Example~\ref{ex:rev-continued} (see Section~\ref{subsec:proc-unfold-replace}).~\hfill $\Box$ \end{example} \vspace{-6mm} \subsection{Procedures $\mathit{Unfold}$ and $\mathit{Replace}$} \label{subsec:proc-unfold-replace} The $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$ procedure may introduce new definitions with ADTs in their bodies (see, for instance, clause~{\tt \small D1} defining predicate {\tt \small new1} in Example~\ref{ex:rev2}). These definitions are added to {\it NewDefs} and transformed by the $\mathit{Unfold}$ and $\mathit{Replace}$ procedures. Procedure {\it Unfold} (see Figure~\ref{fig:unfoldProc}) repeatedly applies rule~R2 in two phases. In Phase~1 the procedure unfolds a given clause in {\it NewDefs} with respect to so-called {\em source} atoms in its body. Recalling that each atom is the relational translation of a function call, the source atoms represent innermost function calls in the functional expression corresponding to the clause body. The unfolding steps of Phase~1 may determine, by unification, the instantiation of some input variables. Then, in Phase~2 these instantiations are taken into account for performing further unfolding steps.
Indeed, the procedure selects for unfolding only atoms whose input arguments are instances of the corresponding arguments in the heads of their matching clauses. \begin{figure}[!ht] \noindent \hrulefill \noindent {\bf Procedure $\mathit{Unfold}(\mathit{NewDefs},\mathit{Ds},\mathit{UnfCls})$} \\ {\em Input}\/: A set $\mathit{NewDefs}$ of definitions and a set $\mathit{Ds}$ of definite clauses; \\ {\em Output}\/: A set $\mathit{UnfCls}$ of definite clauses. \vspace*{-2.5mm} \noindent \rule{2.0cm}{0.2mm} \noindent $\mathit{UnfCls} := \mathit{NewDefs}$; \vspace*{.5mm} \noindent Phase~1. \vline\vline\hspace{1.5mm}\begin{minipage}[t] {10.7cm} \noindent\hangindent=2mm - {For each clause $C$ in $\mathit{UnfCls}$, mark as unfoldable a set $S$ of atoms in the body of $C$ such that: (i)~there is an atom $A$ in $S$ with $\ell(H)\!=\!\ell(A)$, {where $H$ is the head of $C$,} and (ii)~all atoms in $S\setminus \{A\}$ are source atoms such that every source variable of the body of $C$ occurs in $S$;} \noindent\hangindent=2mm - {\bf while} there exists a clause $C$: $H \leftarrow c, {L}, A, {R}$\, in $\mathit{UnfCls}$, for some conjunctions $L$ and~$R$ of atoms, such that $A$ is an unfoldable atom {\bf do} \vspace*{0.5mm} \noindent \hspace{5mm}$\mathit{UnfCls}:=(\mathit{UnfCls}\setminus\{C\}) \cup \mathit{Unf(C,A,Ds)}$; \end{minipage} \vspace*{0.5mm} \hspace{13mm}\rule{2.0cm}{0.2mm} \noindent Phase~2. \vline\vline\hspace{1.5mm}\begin{minipage}[t] {10.7cm} - Mark as unfoldable all atoms in the body of each clause in $\mathit{UnfCls}$; \noindent \hangindent=2mm - {\bf while} there exists a clause $C$: \mbox{$H \leftarrow c, {L}, A, {R}$} in $\mathit{UnfCls}$, for some conjunctions $L$ and~$R$ of atoms, such that $A$ is a head-instance with respect to~{\it Ds} and $A$ is either unfoldable or descending {\bf do} \vspace*{0.5mm} \noindent \hspace{5mm}$\mathit{UnfCls}:=(\mathit{UnfCls}\setminus\{C\}) \cup \mathit{Unf(C,A,Ds)}$; \end{minipage} \vspace*{0.5mm} \noindent \hrulefill \vspace{-2mm} \caption{The {\it Unfold} procedure.\label{fig:unfoldProc}} \vspace*{-3mm} \end{figure} In order to present the {\it Unfold} procedure in a formal way, we need the following notions. \begin{definition} A variable $X$ occurring in a conjunction $G$ of atoms is said to be a {\em source variable} if it is an input variable for an atom in $G$ and not an output variable of any atom in $G$. An atom $A$ in a conjunction $G$ of atoms is said to be a {\em source atom} if all its input variables are source variables. \end{definition} For instance, in clause~{\tt \small 1} of Section~\ref{sec:IntroExample}, where the input variables of the atoms {\tt \small append(Xs,Ys,Zs)}, {\tt \small reverse(Zs,Rs)}, {\tt \small len(Xs,N0)}, {\tt \small len(Ys,N1)}, and {\tt \small len(Rs,N2)} are {\tt \small (Xs,Ys)}, {\tt \small Zs}, {\tt \small Xs}, {\tt \small Ys}, and {\tt \small Rs}, respectively, there are the following three source atoms: {\tt \small append(Xs\!,Ys\!,Zs)}, {\tt \small len(Xs\!,N0)}, and {\tt \small len(Ys\!,N1)}. These three atoms correspond to the innermost function calls which occur in the functional expression \mbox{{\tt \small len(reverse(append xs ys))} {\tt \small =}\hspace*{-1.6mm}{\tt \small /} {\tt \small (len xs)\,+\,(len ys)}} corresponding to the clause body.
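Once the input/output moding of the atoms is given, source atoms are easy to compute. The following Python sketch, in which the moded representation of atoms is a hypothetical encoding chosen only for this illustration, reproduces the example above.

{\small
\begin{verbatim}
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Atom:                    # hypothetical moded atom
    pred: str
    inputs: Tuple[str, ...]    # input variables
    outputs: Tuple[str, ...]   # output variables

def source_atoms(goal):
    # An atom is a source atom if none of its input variables
    # occurs as an output variable of an atom in the goal.
    out_vars = {v for a in goal for v in a.outputs}
    return [a for a in goal if not set(a.inputs) & out_vars]

# The body of clause 1: append(Xs,Ys,Zs), reverse(Zs,Rs),
# len(Xs,N0), len(Ys,N1), len(Rs,N2).
body = [Atom('append', ('Xs', 'Ys'), ('Zs',)),
        Atom('reverse', ('Zs',), ('Rs',)),
        Atom('len', ('Xs',), ('N0',)),
        Atom('len', ('Ys',), ('N1',)),
        Atom('len', ('Rs',), ('N2',))]
print([(a.pred, a.inputs) for a in source_atoms(body)])
# [('append', ('Xs','Ys')), ('len', ('Xs',)), ('len', ('Ys',))]
\end{verbatim}
}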
\begin{definition}\label{def:head-instance} An atom $A(X;Y)$ in the body of clause $C$: \mbox{$H \leftarrow c, {L}, A(X;Y), {R}$} is a {\em head-instance} with respect to~a set~{$\mathit{Ds}$} of clauses if, for every clause $K\leftarrow d, B$ in~{$\mathit{Ds}$} such that: $(1)$~there exists a most general unifier $\vartheta$ of $A(X;Y)$ and $K$, and $(2)$~the constraint $(c, d)\vartheta$ is satisfiable, we have that $\vartheta$ is a variable renaming for~$X$. \end{definition} Thus, $A(X;Y)$ is a head-instance if, for all clause heads $K$ in $\mathit{Ds}$, the input variables $X$ are not instantiated by unification with $K$. For instance, {with respect to the set $\{${\tt \small 2},\,{\tt \small 3}$\}$ of clauses of Section~\ref{sec:IntroExample}}, the atom {\tt \small append([X|Xs],Ys,Zs)} is a head-instance, while the atom {\tt \small append(Xs,Ys,Zs)} is~not. Recall that in a set {\it Cls} of clauses, predicate $p$ {\it immediately depends on} predicate $q$ if in\,{\it Cls} there is a clause of the form $p(\ldots) \leftarrow \ldots, q(\ldots), \ldots$ The {\it depends on} relation is the transitive closure of the {\it immediately depends on} relation~\cite{Apt90}. \begin{definition} Let $\prec$ be a well-founded ordering on tuples of terms such that, for all tuples of terms $t$ and $u,$ if $t\!\prec\! u$, then, for all substitutions $\vartheta,$ \mbox{$t\vartheta\!\prec\! u\vartheta$.} A predicate $p$ is {\em descending} with respect to~$\prec$ if, for all clauses $p(t;u) \leftarrow c,\, p_1(t_1;u_1),\ldots,p_n(t_n;u_n)$ and for $i\!=\!1,\ldots,n,$ if $p_i$ depends on $p$, then $t_i\!\prec\! t$. An atom is descending if its predicate is descending. \end{definition} The well-founded ordering $\prec$ we use in our implementation is based on the {\it subterm} relation and is defined as follows: {$(u_1,\ldots,u_m)\!\prec\! (v_1,\ldots,v_n)$ if for every~$u_i$ there exists $v_j$ such that $u_i$ is a (not necessarily strict) subterm of $v_j$, and there exists $u_i$ which is a {strict} subterm of some~$v_j$.} For instance, the predicates {\tt \small append}, {\tt \small reverse}, {\tt \small snoc}, and {\tt \small len} in our running example are all descending. To control the application of rule R2 in Phases~1 and~2 of the {\it Unfold} procedure, we mark as {\it unfoldable} some atoms in the body of a clause. If we unfold with respect to~atom~$A$ clause $C$: \mbox{$H \!\leftarrow c, L, A, R$}, then the marking of the clauses in $\mathit{Unf(C,A,Ds)}$ is {done} as follows: (i)~each atom derived from $A$ is not marked as unfoldable, and (ii)~each atom~$A''$ inherited from an atom~$A'$, different from~$A$, in the body of $C$ is marked as unfoldable iff $A'$ is marked as unfoldable. In Phase~1, for each clause $C$ in $\mathit{NewDefs}$ the procedure marks as unfoldable a non-empty set~$S$ of atoms in the body of $C$ consisting of: (i)~an atom $A$ such that $\ell(H)\!=\!\ell(A)$, where $H$ is the head of $C$, and (ii)~a set of source atoms (possibly including $A$) such that every source variable of the body of $C$ occurs in $S$. Then, the procedure unfolds with respect to all unfoldable atoms. Note that atom~$A$ exists because, by construction, when we introduce a new predicate during the $\mathit{Diff\mbox{-}Define\mbox{-}Fold}$ procedure, we set the level of the new predicate to the maximal level of an atom in the body of its definition. The unfolding with respect to~$A$ enforces Condition~(U) of Theorem~\ref{thm:unsat-preserv}, and hence the soundness of Algorithm~${\mathcal R}$.
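The subterm-based ordering $\prec$ defined above, which is used in Phase~2 for checking that atoms are descending, can be implemented directly. In the following Python sketch, terms are encoded as nested tuples whose first component is the function symbol, with variables and constants as plain strings; this encoding is hypothetical and chosen only for the illustration.

{\small
\begin{verbatim}
def subterms(t):
    # Terms are nested tuples ('f', arg1, ..., argn);
    # variables and constants are plain strings.
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def is_subterm(u, v):
    # u is a (not necessarily strict) subterm of v
    return any(u == s for s in subterms(v))

def precedes(ts, us):
    # (t1,...,tm) < (u1,...,un) iff every ti is a subterm of
    # some uj and some ti is a strict subterm of some uj.
    return (all(any(is_subterm(t, u) for u in us) for t in ts)
            and any(any(t != u and is_subterm(t, u) for u in us)
                    for t in ts))

# With [X|Xs] encoded as ('cons','X','Xs'), the input tuple of
# the recursive call of append precedes the input tuple of its
# head, so append is descending:
print(precedes(('Xs', 'Ys'), (('cons', 'X', 'Xs'), 'Ys')))  # True
\end{verbatim}
}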
In Phase~2, the instantiations of input variables determined by the unfolding steps of Phase~1 are {taken into account} for further applications of rule~R2. Indeed, clauses are unfolded with respect to atoms that are head-instances; in particular, unfolding with respect to head-instances that are descending atoms is repeated until no such atoms are left. The termination of the procedure {\it Unfold} is ensured by the following two facts: (i) if a clause $C$ has $n\, (\geq\!1)$ atoms marked as unfoldable, and clause~$C$ is unfolded with respect to an atom~$A$ that is marked as unfoldable, then each clause in $\mathit{Unf(C,A,Ds)}$ has $n\!-\!1$ atoms marked as unfoldable, and (ii)~since $\prec$ is a well-founded ordering, it is not possible to perform an infinite sequence of applications of the Unfolding Rule R2 with respect to descending atoms. \begin{example}[Reverse, Continued]\label{ex:rev-continued} The {\it Unfold} procedure marks as unfoldable atom {\tt \small append(Xs,Ys,Zs)} in the body of clause {\tt \small D1}; this atom has the same level as the head of the clause. Atom {\tt \small append(Xs,Ys,Zs)} is also a source atom containing all the input variables of the body of clause {\tt \small D1} (that is, {\tt \small Xs} and {\tt \small Ys}). Then, by unfolding clause {\tt \small D1} with respect to {\tt \small append(Xs,Ys,Zs)}, we get: \vspace{1mm} \noindent {\tt \small 11. new1(N0,N1,N2)\!\! :- reverse(Ys,Rs), len([],N0), len(Ys,N1), len(Rs,N2).} \noindent {\tt \small 12. new1(N0,N1,N2)\!\! :- append(Xs,Ys,Zs)\!,\! reverse([X|Zs],Rs)\!,\! len([X|Xs],N0)\!,} \noindent \hspace{36mm}{\tt \small len(Ys,N1), len(Rs,N2).} \vspace{1mm} \noindent Now, atoms {\tt \small len([],N0)}, {\tt \small reverse([X|Zs],Rs)}, and {\tt \small len([X|Xs],N0)} are all head-instances, and hence the procedure unfolds clauses {\tt \small 11} and {\tt \small 12} with respect to these atoms. We get: \vspace*{-2mm} {\small \begin{verbatim} 13. new1(N0,N1,N2) :- N0=0, reverse(Zs,Rs), len(Zs,N1), len(Rs,N2). 14. new1(N01,N1,N21) :- N01=N0+1, append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), snoc(Rs,X,R1s), len(R1s,N21). \end{verbatim} } \vspace*{-2mm} The presentation of this transformation will continue in Example~\ref{ex:rev-continued2} below. \hfill $\Box$ \end{example} \noindent {Procedure {\it Replace} (see Figure \ref{fig:replaceProc}) applies rules~R4, R5, and~R6 as long as possible. {\it Replace} terminates because each application of one of those rules decreases the number of atoms. } \begin{figure}[!ht] \noindent \hrulefill \noindent {\bf Procedure $\mathit{Replace}(\mathit{UnfCls},\mathit{Ds},\mathit{RCls})$} \\ {\em Input}\/: Two sets $\mathit{UnfCls}$ and $\mathit{Ds}$ of definite clauses; \\ {\em Output}\/: A set $\mathit{RCls}$ of definite clauses.
\vspace*{-2.5mm} \noindent \rule{2.0cm}{0.2mm} % \noindent $\mathit{RCls} := \mathit{UnfCls}$; % \vspace*{.5mm} \noindent \begin{minipage}[t] {11.8cm} \noindent\hangindent=4mm {\bf repeat} \\ {\bf if} there is a clause $C\in \mathit{RCls}$ such that rule R4 is applicable to $C$\\ \hspace*{3mm}{\bf then} $\mathit{RCls} := \mathit{RCls} \setminus \{C\}$; \smallskip \\ {\bf if} there is a clause $C\in \mathit{RCls}$ such that the Functionality Rule R5 is applicable to $C$ with respect to $\mathit{RCls}\cup \mathit{Ds}$, thus deriving a new clause $D$\\ \hspace*{3mm}{\bf then } $\mathit{RCls} := (\mathit{RCls} \setminus \{C\}) \cup \{D\}$; \smallskip\\ {\bf if} there is a clause $C\in \mathit{RCls}$ such that the Totality Rule R6 is applicable to $C$ with respect to $\mathit{RCls}\cup \mathit{Ds}$, thus deriving a new clause $D$\\ \hspace*{3mm}{\bf then } $\mathit{RCls} := (\mathit{RCls} \setminus \{C\}) \cup \{D\}$; \smallskip {\bf until} no rule in $\{$R4, R5, R6$\}$ is applicable to a clause in $\mathit{RCls}$ \end{minipage} \smallskip \noindent \hrulefill \vspace{-2mm} \caption{The {\it Replace} procedure.\label{fig:replaceProc}} \vspace*{-3mm} \end{figure} \begin{example}[Reverse, Continued]\label{ex:rev-continued2} Neither rule~R5 nor rule~R6 is applicable to clauses~{\tt \small 13} and~{\tt \small 14}. Thus, the first iteration of the body of the {\bf while-do} loop of Algorithm~${\mathcal R}$~terminates with $\mathit{InCls}\!=\!\{${\tt \small 13,14}$\},$ $\mathit{Defs}\!=\!\{${\tt \small D1}$\},$ and $\mathit{TransfCls}\!=\!\{${\tt \small 10}$\}$. Now, the second iteration starts off by executing the {\it Diff-Define-Fold} procedure. The procedure handles the two clauses {\tt \small 13} and {\tt \small 14} in $\mathit{InCls}$. \smallskip \noindent For clause {\tt \small 13}, the {\it Diff-Define-Fold} procedure applies case (Project). Indeed, the body of clause {\tt \small 13} has the following single sharing block: \smallskip \noindent $B_{13}$:~~{\tt \small reverse(Zs,Rs), len(Zs,N1), len(Rs,N2)} \smallskip \noindent and there is no clause $\mathit{newp}(V) \leftarrow d, B'$ in $\textit{Defs}\,\cup\textit{NewDefs}$ such that $B_{13}$ is an instance of $B'$. Thus, the procedure adds to \textit{NewDefs} the following new definition (see also Section~\ref{sec:IntroExample}): \smallskip \noindent {\tt \small D2. new2(N1,N2) :- reverse(Zs,Rs), len(Zs,N1), len(Rs,N2).} \smallskip \noindent and, by folding clause~{\tt \small 13}, we get: \smallskip \noindent {\tt \small 15. new1(N0,N1,N2) :- N0=0, new2(N1,N2).} \smallskip \noindent which has basic types and hence is added to {\it FldCls}. This clause is then added to the output set {\it TransfCls} (see Figure~\ref{fig:AlgoR}). \medskip \noindent For clause {\tt \small 14}, the {\it Diff-Define-Fold} procedure applies case (Diff-Introduce). Indeed, the body of clause {\tt \small 14} has the following single sharing block: \smallskip \noindent $B_{14}$:~~{\tt \small append(Xs,Ys,Zs)\!,} {\tt \small reverse(Zs,Rs)\!,}\ {\tt \small len(Xs,N0)\!,}\ {\tt \small len(Ys,N1)\!,}\, \hspace{3.7mm}{\tt \small snoc(Rs,X,R1s)\!,}\ {\tt \small len(R1s,N21)} \smallskip \noindent and we have that $B_1 \Embedded B_{14},$ where $B_1$ is the body of clause {\tt \small D1}, which is the definition introduced as explained in Example~\ref{ex:rev2} above.
The procedure constructs the conjunctions defined at Points (i)--(iii) of (Diff-Introduce) as follows: \smallskip \noindent $M$= ({\tt \small append(Xs,Ys,Zs)\!,} {\tt \small reverse(Zs,Rs)\!, len(Xs,N0)\!, len(Ys,N1)}), \noindent $F(X;Y)$\! =\! ({\tt \small snoc(Rs,X,R1s)\!,\;len(R1s,N21)}), ~where $X$={\tt \small (Rs,X)}, $Y$={\tt \small (R1s,N21)}, \noindent $R(V;W)$\! =\! {\tt \small len(Rs,N2)}, ~where $V$=\,{\tt \small (Rs)}, $W$=\,{\tt \small (N2)}. \smallskip \noindent In this example, $\vartheta$ is the identity substitution. Moreover, the condition on the level mapping~$\ell$ required in the {\it Diff-Define-Fold} Procedure of Figure~\ref{fig:Diff} is fulfilled because $\ell(${\small{\tt \small new1}}$)\!>\!\ell(${\small{\tt \small snoc}}$)$ and $\ell(${\small{\tt \small new1}}$)\!>\!\ell(${\small{\tt \small len}}$)$. Thus, the definition $\widehat{D}$ to be introduced~is: \vspace{-1mm} {\small \begin{verbatim} D3. diff(X,N2,N21) :- snoc(Rs,X,R1s), len(R1s,N21), len(Rs,N2). \end{verbatim} } \vspace{-1mm} \noindent Indeed, we have that: (i)~the projection $\pi(c,X)$ is $\pi(${\small{\tt \small N01=N0+1}},\,{\small{\tt \small (Rs,X)}}$)$, that is, the empty conjunction {\tt \small true}, (ii)~$F(X;Y),\, R(V;W)$ is the body of clause~{\tt \small D3}, and (iii)~the head variables {\tt \small N2}, {\tt \small X}, and {\tt \small N21} are the integer variables in that body. Note that: (i)~clause~{\tt \small D3} % is the one we have presented in Section~\ref{sec:IntroExample}, and (ii)~the relationship between the sharing blocks~$B_{1}$ and~$B_{14}$, which occur in the body of clauses~{\tt \small D1} and~{\tt \small 14}, respectively, formalizes the notion of mismatch between the bodies of clauses~{\tt \small D1} and~{\tt \small D1$^{\textstyle *}$} described in Section~\ref{sec:IntroExample}, because clause~{\tt \small 14} is the same as clause~{{\tt \small D1$^{\textstyle *}$}.} Then, by applying rule~R7 to clause~{\tt \small 14}, the conjunction `{\tt \small snoc(Rs,X,R1s)\!,} {\tt \small len(R1s\!,N21)\!}' can be replaced by the new conjunction\,`{\tt \small len(Rs\!,N2)\!,} {\tt \small diff\!(X\!,N2\!,N21)\!}', and we get the clause: \vspace{-1mm} {\small \begin{verbatim} 16. new1(N01,N1,N21) :- N01=N0+1, append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), len(Rs,N2), diff(X,N2,N21). \end{verbatim} } \vspace{-1mm} \noindent Finally, by folding clause~{\tt \small 16} using clause~{\tt \small D1}, we get the following clause: \vspace{-1mm} {\small \begin{verbatim} 17. new1(N01,N1,N21) :- N01=N0+1, new1(N0,N1,N2), diff(X,N2,N21). \end{verbatim} } \vspace{-1mm} \noindent which has no list arguments and hence is added to {\it FldCls}. This clause is then added to the output set {\it TransfCls}. \smallskip Algorithm ${\mathcal R}$~proceeds by applying the \textit{Unfold} and \textit{Replace} procedures to clauses~{\tt \small D2} and~{\tt \small D3}. Then, a final execution of the {\it Diff-Define-Fold} procedure allows us to fold all clauses with ADT terms and derive clauses with basic types, without introducing any new definition. Thus, ${\mathcal R}$~terminates and its output \textit{TransfCls} is equal (modulo variable renaming) to the set \textit{TransfRevCls} of clauses listed in Section~\ref{sec:IntroExample}. \hfill$\Box$ \end{example} \subsection{Termination of {Algorithm}~${\mathcal R}$} As discussed above, each execution of the {\it Diff-Define-Fold}, {\it Unfold}, and {\it Replace} procedures terminates.
However, Algorithm~${\mathcal R}$~might not terminate because new predicates may be introduced by {\it Diff-Define-Fold} at each iteration of the {\bf while}-{\bf do} of~${\mathcal R}$, and the loop-exit condition $\textit{InCls}= \emptyset$ might never be satisfied. Thus, {Algorithm}~${\mathcal R}$~terminates if and only if, during its execution, the {\it Diff-Define-Fold} procedure introduces a finite set of new predicate definitions. A way of achieving this finiteness property is to combine the use of a generalization operator for constraints (see Section~\ref{subsec:diff}) with a suitable generalization strategy for the conjunctions of atoms that can appear in the body of the definitions (see, for instance, the {\em most specific generalization} used by {\em conjunctive partial deduction}~\cite{De&99}). It should be noticed, however, that a badly designed generalization strategy could yield an ADT removal algorithm that often terminates but returns a set of unsatisfiable CHCs even when the initial clauses are satisfiable (in other terms, the transformation would often generate {\em spurious counterexamples}). The study of suitable generalization strategies and also the study of classes of CHCs for which a suitable modification of Algorithm~${\mathcal R}$~terminates are beyond the scope of the present paper. Instead, in Section~\ref{sec:Experiments}, we evaluate the effectiveness of Algorithm~${\mathcal R}$~from an experimental viewpoint. \subsection{Soundness of {Algorithm}~${\mathcal R}$} \label{subsec:soundR} The soundness of~${\mathcal R}$~follows from the soundness of the transformation rules, and hence we have the following result. \begin{theorem}[Soundness of {Algorithm}~${\mathcal R}$] \label{thm:soundness-AlgorithmR} Suppose that {Algorithm}~${\mathcal R}$~terminates for an input set $\mathit{Cls}$ of clauses, and let $\mathit{TransfCls}$ be the output set of clauses. Then, every clause in $\mathit{TransfCls}$ has basic types, and if $\mathit{TransfCls}$ is satisfiable, then $\mathit{Cls}$ is satisfiable. \end{theorem} \begin{proof} Each procedure used in {Algorithm}~${\mathcal R}$~consists of a sequence of applications of rules R1--R7. Moreover, the Unfold procedure ensures that each clause $H \leftarrow c,B$ introduced by rule R1 is unfolded with respect to an atom $A$ in $B$ such that $\ell(H)\!=\!\ell(A)$. Thus, Condition~(U) of the hypothesis of Theorem~\ref{thm:unsat-preserv} holds for any transformation sequence generated by {Algorithm}~${\mathcal R}$, and hence the thesis follows from Theorem~\ref{thm:unsat-preserv}. \hfill~$\Box$ \end{proof} \subsection{Completeness of the transformation rules} \label{subsec:completeness} Completeness may be affected by the use of rule~R3 or rule~R7, as shown by the following two examples. In these examples the variables range over the integers~${\mathbb Z}$ or the lists $\mathbb{L}$ of integers, according to their type. \begin{example}\label{ex:false-pos-fold} Let us consider the following set of clauses: \vspace{1mm} \noindent \begin{minipage}[t]{8mm} $P_{0}$: \end{minipage} \begin{minipage}[t]{50mm} {\small \begin{verbatim} 1. false :- Y>0, a([],Y). 2. a([],Y) :- Y=0. 3. a([H|T],Y) :- Y=1. \end{verbatim} } \end{minipage} \smallskip \noindent We introduce the following clause defining a new predicate by rule R1: \vspace{1mm} \hspace{4mm}{\tt \small{4. newp(Z) :- a(X,Z).}} \vspace{1mm} \noindent and we get $P_1\!=\!\{ ${\tt \small 1,2,3,4}\}.
Now, we unfold clause {\tt \small 4} and we derive the clauses: \vspace{1mm} \hspace{4mm}{\tt \small{5. newp(Y) :- Y=0.}} \hspace{4mm}{\tt \small{6. newp(Y) :- Y=1. }} \vspace{1mm} \noindent We get $P_2\!=\!\{${\tt \small 1,2,3,5,6}\}. Finally, we fold clause {\tt \small 1} using clause {\tt \small 4}, which belongs to $\textit{Defs}_2$, and we derive the set of clauses: \vspace{1mm} \noindent \begin{minipage}[t]{8mm} $P_{3}$: \end{minipage} \begin{minipage}[t]{50mm} {\small \begin{verbatim} 1f. false :- Y>0, newp(Y). 2. a([],Y) :- Y=0. 3. a([H|T],Y) :- Y=1. 5. newp(Y) :- Y=0. 6. newp(Y) :- Y=1. \end{verbatim} } \end{minipage} \vspace{1mm} \noindent Now, we have that $P_{0}$ is satisfiable (because {\tt \small a([],Y)} holds for {\tt \small Y=0} only), while $P_{3}$ is unsatisfiable (because {\tt \small newp(Y)} holds for {\tt \small Y=1}). Let us explain why, in this example, folding affects completeness. By applying the Folding Rule~R3 to clause~{\tt \small 1}, we have replaced atom {\tt \small a([],Y)} by atom {\tt \small newp(Y)}, which, by clause~{\tt \small 4}, is equivalent to {\tt \small $\mathtt{\exists}$X.a(X,Y)} (because~{\tt \small X} does not occur in the head of clause~{\tt \small 4}). Thus, folding is based on a substitution for the existentially quantified variable {\tt \small X} that is not the identity. Now, in $M(P_{1})$, which is \{{\tt \small a([],0)}, {\tt \small a([h|t],1)}, {\tt \small newp(0)}, {\tt \small newp(1)} $\mid$ {\tt \small h}\,$\in\!{\mathbb Z}$, {\tt \small t}\,$\in\!{\mathbb L}$\}, atoms {\tt \small a([],Y)} and {\tt \small newp(Y)} are {\it not equivalent}. Indeed, $M(P_{1})\models$ {\tt \small a([],Y)} $\rightarrow$ {\tt \small newp(Y)}, while $M(P_{1})\not\models$ {\tt \small newp(1)} $\rightarrow$ {\tt \small a([],1)}. \hfill $\Box$ \end{example} \begin{example}\label{ex:false-pos-diff} Let us consider the following set of clauses: \vspace{1mm} \noindent \begin{minipage}[t]{8mm} $P_{0}$: \end{minipage} \begin{minipage}[t]{50mm} {\small \begin{verbatim} 1. false :- Y>0, a(X), f(X,Y). 2. a([]). 3. f([],Y) :- Y=0. 4. f([H|T],Y) :- Y=1. 5. r(X,W) :- W=1. \end{verbatim} } \end{minipage} \smallskip \noindent We introduce the following clause defining the new predicate {\tt \small diff} by rule R1: \vspace{1mm} \hspace{4mm}{\tt \small{6. diff(W,Y) :- f(X,Y), r(X,W). }} \vspace{1mm} \noindent where: (i)~{\tt \small f(X,Y)} is a total, functional atom from {\small{\tt{X}}} to {\small{\tt{Y}}}, and (ii)~{\small{\tt{r(X,W)}}} is a total, functional atom from {\small{\tt{X}}} to {\small{\tt{W}}}. Thus, we get $P_{1}\!=\!\{${\tt \small{1,2,3,4,5,6}}$\}$ and $\mathit{Defs}_{1}\!=\!\{${\tt \small{6}}$\}$. By applying the Differential Replacement Rule~R7, from $P_{1}$ we derive the following set of clauses: \vspace{1mm} \noindent \begin{minipage}[t]{8mm} $P_{2}$: \end{minipage} \begin{minipage}[t]{90mm} {\small \begin{verbatim} 1r. false :- Y>0, a(X), r(X,W), diff(W,Y). 2. a([]). 3. f([],Y) :- Y=0. 4. f([H|T],Y) :- Y=1. 5. r(X,W) :- W=1. 6. diff(W,Y) :- f(X,Y), r(X,W). \end{verbatim} } \end{minipage} \smallskip \noindent Now, we have that $P_{0}$ is satisfiable (because {\tt \small a(X)} holds for {\tt \small X=[]} only, and {\tt \small f([],Y)} holds for {\tt \small Y=0} only), while $P_{2}$ is unsatisfiable (because the body of clause~{\tt \small 1r} holds for {\tt \small X=[]} and {\tt \small W=Y=1}). Let us now explain why, in this example, the application of rule~R7 affects completeness.
By applying rule~R7, we have replaced atom {\tt \small f(X,Y)} by the conjunction `{\tt \small r(X,W),\;diff(W,Y)}', which, by clause~{\tt \small 6} and the totality of {\tt \small r(X,W)} from~{\tt \small X} to {\tt \small W}, is implied by {\tt \small f(X,Y)}. However, in $M(P_{1})$, which is \{{\tt \small a([])}, {\tt \small f([],0)}, {\tt \small f([h|t],1)}, {\tt \small r(u,1)}, {\tt \small diff(1,0)}, {\tt \small diff(1,1)} $\mid$ {\tt \small h}\,$\in\!\!{\mathbb Z}$, {\tt \small t},\,{\tt \small u\,}$\in\!\!{\mathbb L}$\}, atom {\tt \small f(X,Y)} and the conjunction `{\tt \small r(X,W),\;diff(W,Y)}' are {\it not equivalent}. Indeed, we have that $M(P_{1}) \not\models$ ({\tt \small r([],1)}$\,\wedge\,${\tt \small diff(1,1)}) $\rightarrow$ {\tt \small f([],1)}. \indent {In particular, note that when applying rule~R7, we have replaced {\tt \small f(X,Y)}, which is functional from~{\tt \small X} to~{\tt \small Y}, by `{\tt \small r(X,W),\;diff(W,Y)}', which is not functional from {\tt \small X} to~{\tt \small (W,Y)} (indeed, for all {\tt \small u}$\,\in\!{\mathbb L}$, we have that both `{\tt \small r(u,1),\;diff(1,0)}' and `{\tt \small r(u,1),\;diff(1,1)}' do hold).} This is due to the fact that {\tt \small diff(W,Y)} is {\em not functional\/} from {\tt \small W} to {\tt \small Y}. \hfill $\Box$ \end{example} Now, we will give some sufficient conditions that guarantee that the transformation rules presented in Section~\ref{sec:TransfRules} are complete, in the sense that if a set $P_0$ of CHCs is transformed into a new set $P_n$ by $n$ applications of the rules and $P_0$ is satisfiable, then $P_n$ is also satisfiable. Thus, when those conditions hold, the converse of the Soundness Theorem~\ref{thm:unsat-preserv} holds. We consider the following Conditions~(E) and~(F) on the application of the Folding Rule~R3~\cite{EtG96,TaS84} and the Differential Replacement Rule~R7, respectively. \begin{definition}[Condition E] \label{def:condR} {\rm Let us assume that: (i)~we apply the Folding Rule~R3 for folding clause $C$: $H\leftarrow c, G_L,Q,G_R$ in $P_i$ using the definition~$D$: $K \leftarrow d, B$, and (ii)~$\vartheta$ is a substitution such that~\mbox{$Q\!=\! B\vartheta $} and $\mathbb D\models \forall(c \rightarrow d\vartheta)$. We say that this application of rule~R3 \textit{fulfills Condition} (E) if the following holds: \vspace{1mm}\noindent (E)~for every variable \( X\!\in\!\textit{vars}(\{d,B\})\!\setminus\!\textit{vars}(K)\), \noindent \hspace{6mm}E1.~\( X\vartheta \) is a variable not occurring in \( \{H,c,G_{L},G_{R}\} \)~ and \noindent \hspace{6mm}E2.~$X\vartheta$ does not occur in the term $Y\vartheta$, for any variable~$Y$ occurring\newline \hspace*{13mm}in $(d, B)$ and different from $X$. } \end{definition} In Condition~(F) below, we consider the particular case, which is the one of interest in this paper, when rule~R7 is applied within the \textit{Diff-Define-Fold} procedure.
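As a positive illustration of Condition~(E), consider the application of rule~R3 that derives clause~{\tt \small 17} from clause~{\tt \small 16} in Example~\ref{ex:rev-continued2}. There $\vartheta$ is the identity substitution, and the variables {\tt \small Xs}, {\tt \small Ys}, {\tt \small Zs}, and {\tt \small Rs}, which occur in the body of definition~{\tt \small D1} but not in its head, occur neither in the head of clause~{\tt \small 16}, nor in its constraint {\tt \small N01=N0+1}, nor in the atom {\tt \small diff(X,N2,N21)}, and hence Condition~(E1) holds. Condition~(E2) holds trivially, $\vartheta$ being the identity.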
\begin{definition}[Condition F]\label{def:condF}{\rm Let us assume that we apply the Differential Replacement Rule R7 to clause $C$: $H\leftarrow c, G_L, F(X;Y), G_R$ in~$P_i$ using the definition $\widehat{D}$: $\mathit{diff}(T_b,W_b,Y_b) \leftarrow d, F(X;Y), R(V;W)$ in $\mathit{Defs}_i$, where $T_b\!=\!\textit{bvars}(X\!\cup V)$, $W_b\!=\!\textit{bvars}(W)$, and $Y_b\!=\!\textit{bvars}(Y)$.} {\rm{We say that this application of rule~R7 \textit{fulfills Condition} (F) if the following holds: \smallskip\noindent \makebox[13mm][r]{(F)~F1.~}atom $\mathit{diff}(T_b,W_b,Y_b)$ is functional from $(T_b,W_b)$ to $Y_b$ with respect \noindent\hspace{13mm}to $\textit{Definite}(P_0) \cup \mathit{Defs}_i$, \noindent \makebox[13mm][r]{F2.~}$Y \cap (V \cup \textit{vars}(d)) = \emptyset$, and \noindent \makebox[13mm][r]{F3.~}$\mathit{adt}\mbox{-}\mathit{vars}(Y) \cap \mathit{adt}\mbox{-}\mathit{vars}(\{H,c, G_L, G_R\}) = \emptyset$. }} \end{definition} The following theorem guarantees that, if Conditions~(E) and~(F) hold, then the transformation rules R1--R7 are complete. \begin{theorem}[Completeness of the Transformation Rules] \label{thm:sat-preserv} Let $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ be a transformation sequence using rules~{\rm R1--R7}. Suppose that, for every application of~\,{\rm R3}, Condition {\rm (E)} holds, and for every application of~\,{\rm \/R7}, Condition {\rm (F)} holds. If $P_0$ is satisfiable, then $P_n$ is satisfiable. \end{theorem} Note that the applications of R3 and R7 in Examples~\ref{ex:false-pos-fold} and~\ref{ex:false-pos-diff} violate Conditions~(E) and~(F), respectively, and these facts explain why they affect completeness. Indeed, in Example~\ref{ex:false-pos-fold}, atom {\tt \small a([],Y)} in the body of clause~{\tt \small 1} is an instance of the body of clause~{\tt \small 4} via the substitution $\vartheta = \mbox{\tt \small {\{X/[],Z/Y\}}}$, and $\vartheta$ does not satisfy Condition~(E1). In Example~\ref{ex:false-pos-diff}, as mentioned above, {\tt \small diff(W,Y)} is not functional from {\tt \small W} to {\tt \small Y}, and hence Condition~(F1) is not fulfilled. For the proof of Theorem~\ref{thm:sat-preserv} we need some preliminary results. First, we prove the following theorem, which is the converse of Theorem~\ref{thm:unsat} and is a consequence of Theorem~\ref{thm:pc} reported in Section~\ref{subsec:soundness}. \begin{theorem} \label{thm:sat} Let $P_0 \! \Rightarrow \! \ldots \! \Rightarrow \! P_n$ be a transformation sequence constructed using rules {\rm R1} $($Definition$)$, {\rm R2} $($\!Unfolding$)$, {\rm R3} $($Folding$)$, and \,{\rm R8} $($\!Goal Replacement$)$. Suppose that, for all applications of\, {\rm R3}, Condition {\rm (E)} holds and all goal replacements are body strengthenings $($that is, they are applications of rule~{\rm R8} for which Condition~$(S)$ of Section~{\rm{\ref{subsec:soundness}}} holds$)$. If $P_0$ is satisfiable, then $P_n$ is satisfiable. \end{theorem} \begin{proof} As shown in the proof of Theorem~\ref{thm:unsat}, $P_0$ is satisfiable iff $P_0 \cup \textit{Defs}_n$ is satisfiable. As in that proof, we consider the transformation sequence $P'_0\Rightarrow \ldots \Rightarrow P'_n$ obtained from the sequence $P_0\Rightarrow \ldots \Rightarrow P_n$ by replacing each occurrence of \textit{false} in the head of a clause by a new predicate symbol $f$. $P'_0,\ldots,P'_n$ are sets of definite clauses, and thus for $i=0,\ldots,n,$ $\textit{Definite}(P'_i)= P'_i$.
The sequence $P'_0\Rightarrow \ldots \Rightarrow~P'_n$ satisfies the hypotheses of Theorem~\ref{thm:pc}, and hence \(M(P'_0\cup \mathit{Defs}_n)\supseteq M(P'_n) \). Thus, we have that: \noindent \makebox[12mm][l]{}$P_0$ is satisfiable \noindent {implies $P_0 \cup \textit{Defs}_n$ is satisfiable} \noindent implies $P'_0 \cup \textit{Defs}_n\cup \{\neg f\}$ is satisfiable \noindent {implies $f\not\in M(P'_0 \cup \textit{Defs}_n)$} \noindent implies, by Theorem~\ref{thm:pc}, $f\not\in M(P'_n)$ \noindent \makebox[49mm][l]{implies $P'_n\cup \{\neg f\}$ is satisfiable} \noindent implies $P_n$ is satisfiable. \hfill$\Box$ \end{proof} Now, in order to prove Theorem~\ref{thm:sat-preserv} of Section~\ref{sec:TransfRules}, we show that rules~R4--R7 are all body strengthenings. \medskip \noindent Rule R4 (Clause Deletion) is a body strengthening, as \smallskip $M(\textit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall\, ( {\it false} \rightarrow c \wedge G)$ \smallskip \noindent trivially holds. \medskip Now, let us consider rule~R5 (Functionality). Let $F(X,Y)$ be a conjunction of atoms that defines a functional relation from~$X$ to~$Y$. When rule R5 is applied, replacing the conjunction $F(X,Y),\ F(X,Z)$ by the conjunction $Y\!=\!Z,$ $F(X,Y)$, it is the case that \smallskip $M(\mathit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall\,(Y\!=\!Z \wedge F(X,Y) \rightarrow F(X,Y) \wedge F(X,Z))$ \smallskip \noindent Hence, Condition (S) holds and rule~R5 is a body strengthening. \medskip An application of rule R6 (Totality) replaces a conjunction $F(X,Y)$ by {\it true} (that is, the empty conjunction). When rule~R6 is applied, it is the case that, by Property~(\textit{Total\/}) of Section~\ref{sec:CHCs}, \smallskip $M(\mathit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall\,({\it true} \rightarrow \exists Y.\, F(X,Y))$ \smallskip \noindent Hence, rule~R6 is a body strengthening. \medskip Now we prove that, when Condition (F) of Definition~\ref{def:condF} holds, rule~R7 is a body strengthening. \begin{lemma}\label{lemma:R7sat} Let us consider the following clauses $C$, $\widehat{D}$, and $E$ used when applying rule~{\rm R7}\,$:$ \smallskip $C$: $H\leftarrow c,\, G_L,\, F(X;Y),\, G_R$ $\widehat{D}$: $\mathit{diff}(T_b,W_b,Y_b) \leftarrow d,\, F(X;Y),\, R(V;W)$ $E$: $H\leftarrow c,\, G_L,\, R(V;W),\, \mathit{diff}(T_b,W_b,Y_b),\, G_R$ \smallskip \noindent where $Y=(Y_a,Y_b)$, $Y_a=\mathit{adt}\mbox{-}\mathit{vars}(Y)$, and $Y_b=\textit{bvars}(Y)$. Let us assume that Conditions~{\rm (F1)} and~{\rm (F2)} of Definition~$\ref{def:condF}$ hold. Then, \smallskip \noindent ~~$M(\!\textit{Definite}(P_0) \cup \mathit{Defs}_i)\models \forall\,(c\, \wedge R(V;\!W) \wedge \mathit{diff}(T_b,\!W_b,\!Y_b)\rightarrow \exists Y_a.\, c\, \wedge\, F(X;\!Y)).$ \end{lemma} \begin{proof} % Let $\mathcal M$ denote $M(\textit{Definite}(P_0) \cup \mathit{Defs}_i)$. Let~$Y'=(Y_a,Y_b')$, where $Y_b'$ is obtained by renaming the variables in $Y_b$ with new variables of basic type. By the totality of $F(X;Y')$, we have: \smallskip $\mathcal M\models \forall\,(c \wedge R(V;W) \wedge \mathit{diff}(T_b,W_b,Y_b)\rightarrow \exists Y'.\, F(X;Y'))$ \smallskip \noindent By the definition of rule R7, $\mathbb{D}\models\forall (c\rightarrow d)$ holds, and we get: \smallskip $\mathcal M\models \forall\,(c \wedge R(V;W) \wedge \mathit{diff}(T_b,W_b,Y_b)\rightarrow \exists Y'.\, d \wedge F(X;Y')\wedge R(V;W))$ \smallskip \noindent Now, we have that $Y \cap (X \cup V\cup W \cup \mathit{vars}(d)) = \emptyset$.
Indeed, (i)~by the definition of rule R7, $W \cap \textit{vars}(C) =\emptyset$, (ii)~by the notation `$F(X;Y)$', we have that $\textit{vars}(X) \cap \textit{vars}(Y) =\emptyset $, and (iii) by Condition (F2), $Y \cap (V\cup \mathit{vars}(d)) = \emptyset$. Then, ${d} \wedge F(X;Y') \wedge R(V;W)$ is a variant of the body of clause~$\widehat{D}$, and since $Y'=(Y_a,Y_b')$, we get: \smallskip $\mathcal M\models \forall\,(c \wedge R(V;W) \wedge \mathit{diff}(T_b,W_b,Y_b) $ \hspace{2cm}$\rightarrow \exists Y_a,Y'_b.\, F(X;(Y_a,Y_b'))\wedge \mathit{diff}(T_b,W_b,Y'_b))$ \smallskip \noindent By Condition (F1), $\mathit{diff}(T_b,W_b,Y_b)$ is functional from $(T_b,W_b)$ to $Y_b$, and we have: \smallskip $\mathcal M\models \forall\,(c \wedge R(V;W) \wedge \mathit{diff}(T_b,W_b,Y_b)\rightarrow \exists Y_a,Y'_b.\, F(X;(Y_a,Y_b'))\wedge Y_b\!=\!Y'_b)$ \smallskip \noindent Thus, \smallskip $\mathcal M\models \forall\,(c \wedge R(V;W) \wedge \mathit{diff}(T_b,W_b,Y_b)\rightarrow \exists Y_a.\, F(X;(Y_a,Y_b)))$ \smallskip \noindent and, observing that $Y_a\cap \textit{vars}(c)=\emptyset$, we get the thesis. \hfill $\Box$ \end{proof} Now, in order to show that an application of rule R7 according to the hypotheses of Lemma~\ref{lemma:R7sat} (see also Definition~\ref{def:condF}) is an instance of a body strengthening, where in clauses~$C$ and~$D$ of rule R8 we consider $c\!=\!\mathit{true}$ and $c_{1}\!=\!c_{2}\!=\!c$, we have to show: \smallskip \makebox[12mm][c]{(S$_{c}$)} $M(\textit{Definite}(P_0)\cup \mathit{Defs}_i) \models \forall \, (c \wedge {G}_{2} \rightarrow \exists T_1.\, c \wedge {G}_{1})$ \smallskip \noindent where: $T_{1} = \mathit{vars}(c \wedge F(X;\!(Y_{a},Y_{b}))) \setminus \mathit{vars}(\{{H}, \mathit{true}, {G}_{L},{G}_{R}\})$ $G_{1}=F(X;(Y_{a},Y_{b}))$, and $G_{2}= R(V;W),\, \mathit{diff}(T_b,W_b,Y_b)$. \smallskip \noindent Now, by Lemma~\ref{lemma:R7sat}, we have: \smallskip \makebox[12mm][c]{(L)}~~$M(\!\textit{Definite}(P_0) \cup \mathit{Defs}_i)\models \forall\,(c\, \wedge G_{2} \rightarrow \exists Y_a.\, c\, \wedge\, F(X;\!(Y_{a},Y_{b})))$ \smallskip \noindent Since: (i)~by Condition~(F3) and the fact that the variables of $c$ are all of basic type, we have that $Y_a \subseteq T_{1}\!= \!\mathit{vars\/}(c \cup X \cup Y_{a} \cup Y_{b}) \setminus \mathit{vars\/}(\{H,\mathit{true},G_L,G_R\})$, and (ii)~the variables in $T_{1}\setminus Y_{a}$ are universally quantified in~(L), we have that~(L) implies~(S$_{c}$). This completes the proof that if Condition (F) of Definition~\ref{def:condF} holds, then rule R7 is a body strengthening. Thus, by taking into account also the facts we have proved above about rules~R4, R5, and R6, we have the following lemma. \begin{lemma}\label{lemma:strengthen} All the applications of rules {\rm R4, R5, R6}, and the applications of rule~{\rm R7} where Condition~{\rm (F)} holds $($see Definition~$\ref{def:condF})$, are body strengthenings. \end{lemma} \medskip Now we can present the proof of Theorem~\ref{thm:sat-preserv}. \smallskip \noindent {\it Proof of Theorem~$\ref{thm:sat-preserv}$}. Let $P_0 \! \Rightarrow \! \ldots \! \Rightarrow \! P_n$ be a transformation sequence using rules~{\rm{R1}}--{\rm{R7}}. Suppose that, for every application of {\rm R3}, Condition {\rm (E)} holds, and for every application of {\rm R7}, Condition {\rm (F)} holds. Thus, $P_0 \! \Rightarrow \! \ldots \! \Rightarrow \! P_n$ can also be constructed by applications of rules R1--R3 and applications of rule~R8 which, by Lemma~\ref{lemma:strengthen}, are all body strengthenings.
Then, the thesis follows from Theorem~\ref{thm:sat}. \hfill $\Box$ \subsection{Completeness of Algorithm~${\mathcal R}$} \label{subsec:completeAlgo} We have the following straightforward consequence of Theorem~\ref{thm:sat-preserv}. \begin{theorem}[Completeness of {Algorithm}~${\mathcal R}$] \label{thm:completeness-AlgorithmR} Suppose that {Algorithm}~${\mathcal R}$\ terminates for the input set $\mathit{Cls}$ of clauses, and let $\mathit{TransfCls}$ be the output set of clauses. Suppose also that all applications of rules {\rm R3} and \,{\rm R7} during the execution of Algorithm~\,${\mathcal R}$~fulfill Conditions~{\rm (E)} and~{\rm (F)}, respectively. If $\mathit{Cls}$ is satisfiable, then $\mathit{TransfCls}$ is satisfiable. \end{theorem} In practice, having constructed the transformation sequence $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$, where rule~R7 has been applied to the set $P_{i}$, with $0\!<\!i\!<\!n$, it is often more convenient to check the validity of Condition~(F) with respect to $\textit{Definite}(P_n)$, instead of $\textit{Definite}(P_0)\cup \mathit{Defs}_i$, as required by the hypotheses of Theorem~\ref{thm:sat-preserv}. Indeed, in the set $P_{n}$, predicate {\it diff\/} is defined by a set of clauses whose variables all have integer or boolean type, and hence in checking Condition~(F) we need not reason about predicates defined over ADTs. The following Proposition~\ref{prop:fun-preserv} guarantees that Theorem~\ref{thm:sat-preserv} holds even if we check Condition (F) with respect to $\textit{Definite}(P_n)$, instead of $\textit{Definite}(P_0)\cup \mathit{Defs}_i$. \begin{proposition}[Preservation of Functionality] \label{prop:fun-preserv} Let $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ be a transformation sequence using rules~{\rm R1--R7}. Suppose that Condition~{\rm (U)} of Theorem~$\ref{thm:unsat-preserv}$ holds. For $i\!=\!0,\ldots,n$, if an atom $A(X,Y)$ is functional from $X$ to $Y$ with respect to~$\textit{Definite}(P_n)$ and the predicate symbol of $A(X,Y)$ occurs in $\textit{Definite}(P_0)\cup \mathit{Defs}_i$, then $A(X,Y)$ is functional from~$X$ to~$Y$ with respect to~$\textit{Definite}(P_0)\cup \mathit{Defs}_i$. \end{proposition} \begin{proof} Let us suppose that $A(X,Y)$ is functional from $X$ to $Y$ with respect to $\textit{Definite}(P_n)$, that is, \smallskip $M(\textit{Definite}(P_n))\models \forall\,(A(X,Y) \wedge A(X,Z) \rightarrow Y\!=\! Z)$. \smallskip \noindent Then, for all (tuples of) ground terms $u$, $v$, and $w$, with $v\!\neq\! w,$ \smallskip $\{A(u,v),A(u,w)\} \not\subseteq M(\textit{Definite}(P_n))$ \smallskip \noindent Since $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ is a transformation sequence using rules R1--R7 such that Condition (U) of Theorem~\ref{thm:unsat-preserv} holds, by Theorem~\ref{thm:cons}, \smallskip $\{A(u,v),A(u,w)\} \not\subseteq M(\textit{Definite}(P_0) \cup \mathit{Defs}_n)$ \smallskip \noindent Hence, for $i\!=\!0,\ldots,n$, \smallskip $\{A(u,v),A(u,w)\} \not\subseteq M(\textit{Definite}(P_0) \cup \mathit{Defs}_i)$ \smallskip \noindent Thus, \smallskip $M(\textit{Definite}(P_0) \cup \mathit{Defs}_i)\models \forall\,(A(X,Y) \wedge A(X,Z) \rightarrow Y\!=\! Z)$.
\hfill $\Box$ \end{proof} \section{A Method for Checking the Satisfiability of CHCs through ADT Removal} \label{subsec:solvingCHCs} In this section we put together the results presented in Sections~\ref{sec:TransfRules}, \ref{sec:Strategy}, and~\ref{sec:Completeness} and we define a method for checking whether or not a set $P_0$ of CHCs is satisfiable. We proceed as follows: (i)~first, we construct a transformation sequence $P_0 \Rightarrow P_1 \Rightarrow \ldots \Rightarrow P_n$ using Algorithm~${\mathcal R}$, and (ii)~then we apply a CHC solver to $P_n$. If the solver is able to prove the satisfiability of~$P_n$, then, by Theorem~\ref{thm:soundness-AlgorithmR}, $P_0$ is satisfiable. If the solver proves the unsatisfiability of $P_n$ and Conditions~(E) and~(F) are both fulfilled during the execution of~${\mathcal R}$, then, by Theorem~\ref{thm:completeness-AlgorithmR}, $P_0$ is unsatisfiable. Now, (i)~Condition~(E) can be checked by simply inspecting the substitution computed when applying the Folding Rule~R3 during the {\it Diff-Define-Fold} procedure. (ii)~Condition~(F1), by Proposition~\ref{prop:fun-preserv}, can be checked by proving, for every difference predicate \textit{diff} that has been used in applying the Differential Replacement Rule~R7, the satisfiability of the following set of clauses \vspace{1mm} \noindent \hspace{6mm} $D_n ~\cup~ \{\textit{false}\leftarrow Y_1\!\neq \!Y_2, \ \textit{diff\/}(T,W,Y_1),\ \textit{diff\/}(T,W,Y_2)\}$, \vspace{1mm}\noindent where $D_n$ is the set of clauses defining \textit{diff\/} in $\textit{Definite}(P_n)$. (iii)~Finally, Conditions~(F2) and (F3) can be checked by inspecting, for every application of rule~R7, the clauses $C$ and $\widehat D$ involved in that application (see Definition~\ref{def:condF}). \smallskip {In previous sections, we have seen our method in action for proving the satisfiability of a set of CHCs. In the following example, we show how our method can be applied to prove the unsatisfiability of a set of CHCs.} \begin{example}\label{ex:func} Let us consider again our introductory example of Section~\ref{sec:IntroExample} where we started from the initial set {\it RevCls} consisting of clauses~{\tt \small 1}--{\tt \small 9}. Let us suppose that we want to check the satisfiability of the set \textit{RevCls}$^{*}$ of clauses that includes clauses~{\tt \small 2}--{\tt \small 9} and the following clause {\tt \small 1$^{\textstyle *}$}, instead of clause~{\tt \small 1}: \vspace{1mm} \noindent {\tt \small 1$^{\textstyle *}$}. ~{\tt \small false :- N2=\textbackslash=N0-N1, append(Xs,Ys,Zs), reverse(Zs,Rs), \hspace{16mm}len(Xs,N0), len(Ys,N1), len(Rs,N2).} \vspace{1mm} \noindent Clause~{\tt \small 1$^{\textstyle *}$}~differs from clause~{\tt \small 1} because of the constraint `{\tt \small N2=\textbackslash=N0-N1}', instead of `{\tt \small N2=\textbackslash=N0+N1}'. The set \textit{RevCls}$^{*}$ % is unsatisfiable because the body of clause~{\tt \small 1$^{\textstyle *}$} holds, in particular, for {\tt \small Xs=[]} and {\tt \small Ys=[Y]}, where {\tt \small Y} is any integer.
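Indeed, for {\tt \small Xs=[]} and {\tt \small Ys=[Y]} we get {\tt \small Zs=[Y]} and {\tt \small Rs=[Y]}, and hence {\tt \small N0=0}, {\tt \small N1=1}, and {\tt \small N2=1}, so that the constraint {\tt \small N2=\textbackslash=N0-N1} reduces to {\tt \small 1=\textbackslash=-1}, which holds. Thus, {\it false\/} is derivable from \textit{RevCls}$^{*}$.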
Algorithm~${\mathcal R}$~works for the input set $\textit{RevCls}^{*} =$ \{{\tt \small 1$^{\textstyle *}$}{\tt \small\!\!,2,...,9}\} exactly as described in Section~\ref{sec:Strategy} for the set $\textit{RevCls} = \{\mbox{\tt \small 1,2,...,9}\}$, except that clause {\tt \small 1$^{\textstyle *}$}, instead of clause {\tt \small 1}, is folded by using the definition: \vspace{-2mm} {\small \begin{verbatim} D1. new1(N0,N1,N2) :- append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), len(Rs,N2).\end{verbatim} } \vspace{-2mm} \noindent thereby deriving the following clause, instead of clause {\tt \small 10}: \vspace{1mm} \noindent {\tt \small 10$^{\textstyle *}$}. {\tt \small false :- N2=\textbackslash=N0-N1, new1(N0,N1,N2).} \vspace{1mm} \noindent Thus, the output of Algorithm~${\mathcal R}$~is \textit{TransfRevCls}$^{*}$ = \{{\tt \small 10$^{\textstyle *}$}{\tt \small\!\!,}{\tt \small 15,17,18,19,20,21}\} (for clauses {\tt \small 15,17,18,19,20,21} see Section~\ref{sec:IntroExample}). The CHC solver Eldarica proves that \textit{TransfRevCls}$^{*}$ is an unsatisfiable set of clauses. Now, in order to conclude that the input set \textit{RevCls}$^{*}$ is also unsatisfiable, we apply Theorem~\ref{thm:completeness-AlgorithmR} and Proposition~\ref{prop:fun-preserv}. We look at the transformation sequence constructed by Algorithm~${\mathcal R}$~and we check that both Conditions~(E) and (F) are fulfilled. Condition~(E) is fulfilled because each time we apply the folding rule, the substitution $\vartheta$ is the identity (see, in particular, the folding step for deriving clause~{\tt \small 10$^{\textstyle *}$}\ above, and also the folding steps in Example~\ref{ex:rev-continued2}). Now, let us check Condition~(F). % We have that clauses~$C$ and~$\widehat{D}$ occurring in Definition~\ref{def:condF} are clauses~{\tt \small 14} and~{\tt \small D3}, respectively. For the reader's convenience, we list them here: \vspace{-2mm} {\small \begin{verbatim} 14. new1(N01,N1,N21) :- N01=N0+1, append(Xs,Ys,Zs), reverse(Zs,Rs), len(Xs,N0), len(Ys,N1), snoc(Rs,X,R1s), len(R1s,N21). D3. diff(X,N2,N21) :- snoc(Rs,X,R1s), len(R1s,N21), len(Rs,N2). \end{verbatim} } \vspace{-2mm} \noindent With reference to Definition~\ref{def:condF} (see Example~\ref{ex:rev-continued2}), we have: \smallskip \noindent $F(X;Y)$\! =\! ({\tt \small snoc(Rs,X,R1s)\!,\;len(R1s,N21)}), \ \ $X$={\tt \small (Rs,X)}, \ \ $Y$={\tt \small (R1s,N21)}, \noindent $R(V;W)$\! =\! {\tt \small len(Rs,N2)}, \ \ $V$=\,{\tt \small (Rs)}, \ \ $W$=\,{\tt \small (N2)}. \smallskip \noindent Condition~(F1) requires that atom {\tt \small diff(X,N2,N21)} be functional from {\tt \small (X,N2)} to~{\tt \small N21} with respect to ${\mathit{Definite}}(\mathit{RevCls}^{*})$. In order to check this functionality, by Proposition~\ref{prop:fun-preserv}, it suffices to check the satisfiability of the set consisting of the following clause: \vspace{1mm} \noindent {\tt \small 22. false :- N21=\textbackslash=N22, diff(X,N2,N21), diff(X,N2,N22).} \vspace{1mm} \noindent together with clauses {\tt \small 20} and {\tt \small 21}, which define {\tt \small diff} in ${\mathit{Definite}}(\mathit{TransfRevCls}^{*})$. We recall them here: \vspace{1mm} \noindent {\tt \small 20. diff(X,N0,N1) :- N0=0, N1=1.} \noindent {\tt \small 21. diff(X,N0,N1) :- N0=N+1, N1=M+1, diff(X,N,M).}
\vspace{1mm} \noindent The CHC solver Eldarica proves the satisfiability of the set \{{\tt \small 20,21,22}\} of clauses by computing the following model: \vspace{1mm} {\tt \small diff(X,N2,N21) :- N21=N2+1, N2>=0.} \vspace*{1mm} \noindent Thus, Condition (F1) is fulfilled. Condition~(F2) requires that \{{\tt \small R1s,N21}\} $\cap$ \{{\tt \small Rs}\}$\,=\! \emptyset$, and Condition~(F3) requires that \{{\tt \small R1s}\} $\cap$ \{{\tt \small Xs,Ys,Zs,Rs}\}$\,=\! \emptyset$. They are both fulfilled. Therefore, by Theorem~\ref{thm:completeness-AlgorithmR}, we conclude, as desired, that the input set \textit{RevCls}$^{*}\!$ of clauses is unsatisfiable.\hfill $\Box$ \vspace*{-3mm} \noindent \end{example} \subsection{The workflow of the {\sc AdtRem} tool} \label{subsec:AdtRem-workflow} Our {\sc AdtRem} tool implements the satisfiability checking method % presented in Section~\ref{subsec:solvingCHCs} as follows. First, {\sc AdtRem} makes use of the VeriMAP system~\cite{De&14b} to perform the steps specified by Algorithm~${\mathcal R}$. It takes as input a set~$P_0$ of CHCs and, if it terminates, it produces as output a set~$P_n$ of CHCs that have basic types. Then, in order to show the satisfiability of $P_{0}$, {\sc AdtRem} invokes the Eldarica CHC solver to show the satisfiability of~$P_n$. If Eldarica proves that $P_n$ is satisfiable, then, by the soundness of the transformation Algorithm~${\mathcal R}$~(see Theorem~\ref{thm:soundness-AlgorithmR}), $P_0$ is satisfiable and {\sc AdtRem} returns the answer `\textit{sat}'. In particular, the implementation of the \textit{Unfold} procedure enforces Condition~(U) of Theorem~\ref{thm:unsat-preserv}, which indeed ensures the soundness of Algorithm~${\mathcal R}$. If Eldarica proves that $P_n$ is unsatisfiable by constructing a counterexample CEX, {\sc AdtRem} proceeds by checking whether or not Conditions (E) and (F), which guarantee the completeness of Algorithm~${\mathcal R}$, hold (see Theorem~\ref{thm:completeness-AlgorithmR}). In particular, during the execution of Algorithm~${\mathcal R}$, {\sc AdtRem} checks Condition (E) when applying the Folding Rule R3, and Conditions (F2) and (F3) when applying the Differential Replacement Rule~R7. {\sc AdtRem} marks all clauses in $P_n$ derived by a sequence of transformation steps where one of the Conditions (E), (F2), and (F3) is not satisfied. Then, {\sc AdtRem} looks at the counterexample CEX constructed by Eldarica to verify whether or not any instance of the marked clauses is used for the construction of CEX. If this is the case, it is not possible to establish the unsatisfiability of $P_0$ from the unsatisfiability of~$P_n$ (as completeness of Algorithm~${\mathcal R}$~may not hold) and hence {\sc AdtRem} returns the answer `\textit{unknown}' as the result of the satisfiability check for $P_{0}$. Otherwise, if {no instance of a marked clause} is used in CEX, then {\sc AdtRem} proceeds by checking Condition~(F1) of Definition~\ref{def:condF}.
{Recall that, by Proposition~\ref{prop:fun-preserv}, Condition~(F1) can be checked by inspecting the clauses defining the difference predicates in $\textit{Definite}(P_n)$.} To perform this check, for each difference predicate $\textit{diff}_k$ introduced by Algorithm~${\mathcal R}$~and occurring in CEX, {\sc AdtRem} produces {a clause, call it $\textit{fun-diff}_k$,} of the form: \smallskip $~\textit{false}\leftarrow$ $O_1\!\neq\!O_2,$ $\textit{diff}_k(I,O_1), \textit{diff}_k(I,O_2)$ \smallskip \noindent where $I$ and $O_i$, for $i=1,2,$ are the tuples of input and output variables, respectively, of $\textit{diff}_k(I,O_i)$. Then, {\sc AdtRem} runs Eldarica to check the satisfiability of $\bigcup_k\{\textit{fun-diff}_k\}\cup D_n$, where $D_n$ is the set of clauses defining $\textit{diff}_k$ in $\textit{Definite}(P_n)$ (see Example~\ref{ex:func}). Now, this satisfiability check may succeed (Case 1) or may not succeed (Case 2). In Case 1, Condition~(F1) holds and, by the completeness of Algorithm~${\mathcal R}$~(see Theorem~\ref{thm:completeness-AlgorithmR}), we have that $P_0$ is % unsatisfiable and {\sc AdtRem} returns the answer `\textit{unsat}'. In Case 2, Condition~(F1) cannot be shown, and {\sc AdtRem} returns the answer `\textit{unknown}' as the result of the satisfiability check for~$P_{0}$. \subsection{Benchmark suite} Our benchmark suite consists of 251 {verification} % problems over inductively defined data structures, such as lists, queues, heaps, and trees. Out of these 251 problems, 168 refer to properties that hold (\textit{valid} properties) and the remaining 83 refer to properties that do not hold (\textit{invalid} properties). The 168 problems specifying valid properties have been adapted from the benchmark suite considered by Reynolds and Kuncak~\cite{ReK15}, and originate from benchmarks used by various theorem provers, such as CLAM~\cite{IrB96}, HipSpec~\cite{Cl&13}, IsaPlanner~\cite{DiF03,Jo&10}, and Leon~\cite{Su&11}. In particular, we have considered Reynolds and Kuncak's `dtt' encoding where natural numbers are represented using the built-in SMT type {\it Int}. From those problems we have discarded: (i)~the ones that do not use ADTs, and (ii)~the ones that cannot be directly represented in Horn clause format. In order to compare our approach with that of Reynolds and Kuncak on a level playing field, since {\sc AdtRem} supports neither higher order functions nor user-provided lemmas, (i)~we have replaced higher order functions by suitable first order instances, and (ii)~we have removed all auxiliary lemmas from the formalization of the problems. We have also used LIA constraints, instead of the basic functions recursively defined over natural numbers, such as the {\it plus} function and {\it less-or-equal} relation, so that the solver can deal with them by using the LIA theory. The 83 problems specifying invalid properties have been obtained from those specifying valid properties by either negating the properties or modifying {the definitions} of the predicates on which the properties depend. The benchmark suite is available at {\small{\url{https://fmlab.unich.it/adtrem/}}}. \subsection{Experiments} We have performed the following experiments. \smallskip \noindent\hangindent=4mm 1. We have run the `cvc4+ig' configuration of the CVC4 solver extended with inductive reasoning and also the AdtInd solver on the 251 {verification} % problems in SMT-LIB format. \noindent\hangindent=4mm 2.
Then, we have translated each {verification} % problem into a set, call it $P_0$, of CHCs in the Prolog-like syntax supported by {\sc AdtRem} by using a modified version of the SMT-LIB parser of the ProB system~\cite{LeB03}. We have run Eldarica {v2.0.5}, which uses no induction-based mechanism for handling ADTs, to check the satisfiability of the SMT-LIB translation of $P_0$\footnote{We have also performed analogous experiments on the set of valid properties by using Z3-SPACER~\cite{Ko&13}, instead of Eldarica, as reported on an earlier version of this paper~\cite{De&20a}. The results of those experiments, which we do not report here, are very similar to those shown in Table~\ref{tab:Exper-Results}.}\!. \noindent\hangindent=4mm 3. Finally, we have run {\sc AdtRem} to check the satisfiability % of $P_0$. If {\sc AdtRem} returns `\textit{sat}', the property specified by $P_0$ is reported to be valid. If {\sc AdtRem} returns `\textit{unsat}', the property is reported to be invalid. \smallskip \noindent Experiments have been performed on an Intel Xeon CPU E5-2640 2.00GHz with 64GB RAM under CentOS and for each problem we have set a timeout limit of 300 seconds. \subsection{Evaluation of Results} The results of our experiments are summarized in the following four tables. \noindent\hangindent=3mm -- In Table~\ref{tab:Exper-Results} ({\it Solved problems}) we report the number of problems solved by each tool, also % classified by the type of property (valid or invalid). Columns 3--6 are labeled by the name of the tool and the last column reports the results of the `Virtual Best' tool, that is, the number of problems solved by at least one of the tools we have considered. \noindent\hangindent=3mm -- In Table~\ref{tab:Exper-Results-unique} ({\it Uniquely solved problems}) we report, for each tool, the number of uniquely solved problems, that is, the number of problems solved by that tool and not solved by any of the other tools. By definition, there are no problems uniquely solved by the `Virtual Best' tool. % \noindent\hangindent=3mm -- In order to assess the difficulty of the benchmark problems, we have computed, for each problem, the number of tools that are able to solve it. The results are reported in Table~\ref{tab:Exper-Results-howmany} ({\it Benchmark difficulty}). \noindent\hangindent=3mm -- In Table~\ref{tab:Exper-Results-adtrem} ({\it Termination of Algorithm~${\mathcal R}$~and effectiveness of {\sc AdtRem}}) we report the number of problems for which the ADT removal algorithm ${\mathcal R}$~terminates and the percentage of those problems solved by {\sc AdtRem}. \vspace*{-4mm} \begin{table}[!ht] \begin{center} \begin{tabular}{|@{\hspace{-2mm}}l@{\hspace{-3mm}}|r@{\hspace{1mm}}||r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}||r@{\hspace{1mm}}|} \hline {\parbox[center]{22mm}{\begin{center}Type of\\ properties\end{center}}} & {\parbox[top]{17mm}{\begin{center}Number of\\ problems\end{center}}} & {\parbox[top]{13mm}{\vspace*{-2mm}\center CVC4\!\\[-.5mm] with\!\\[-.5mm] induction\!\\[2mm]}} & {~AdtInd} & {~Eldarica} & {{ {\textsc{ AdtRem}}} } & {~Virtual Best} \\ \hline \hline % {~~~~Valid} & 168 & 75 & 62 & 12 & 115 & 127\,\\ {~~~~Invalid} & 83 & 0 & 1 & 75 & 61 & 83\,\\ \hline {~~~~Total} & 251 & 75 & 63 & 87 & 176 & 210\,\\ \hline \end{tabular} \vspace{2mm} \caption{{\small {\it Solved problems.} Number of problems solved by each tool. 
}}\label{tab:Exper-Results} \end{center} \vspace*{-8mm} \end{table} \vspace*{-2mm} Table~\ref{tab:Exper-Results} shows that, on our benchmark, {\sc AdtRem} compares favorably to all other tools we have considered. On problems with valid properties, {\sc AdtRem} performs better than solvers extended with inductive reasoning, such as CVC4 and AdtInd. {\sc AdtRem} also performs better than those two tools on problems with invalid properties, on which CVC4 and AdtInd show % poor results. This poor outcome may be due to the fact that those tools were designed with the aim of proving theorems rather than finding counterexamples to non-theorems. Table~\ref{tab:Exper-Results} also shows that the ADT removal performed by Algorithm~${\mathcal R}$, implemented by {\sc AdtRem}, considerably increases the overall effectiveness of the CHC solver Eldarica, without the need for any inductive reasoning support. In particular, Eldarica is able to solve {87} problems out of {251} {\em before} the application of Algorithm~${\mathcal R}$~(see Column `Eldarica'), while {\sc AdtRem} solves {176} problems by using Eldarica {\em after} the application of Algorithm~${\mathcal R}$~(see Column `{\sc AdtRem}'). The gain in effectiveness is very high on problems with valid properties, where Eldarica solves {12} problems out of {168}, while {\sc AdtRem} solves {115} problems by applying Eldarica after the removal of ADTs. On problems with invalid properties Eldarica is already very effective before the removal of ADTs and is able to solve {75} % problems out of {83}, whereas the number of problems solved by {\sc AdtRem} is only {61}, which are all the problems with invalid properties for which Algorithm~${\mathcal R}$~terminates (see Table~\ref{tab:Exper-Results-adtrem}). Note, however, that by inspecting the detailed results of our experiments (see {\small{\url{https://fmlab.unich.it/adtrem/}}}), we have found 8 problems with invalid properties that are solved by {\sc AdtRem} but not by Eldarica before ADT removal. \smallskip \begin{table}[!ht] \vspace*{-4mm} \begin{center} \begin{tabular}{|@{\hspace{-2mm}}l@{\hspace{-3mm}}|r@{\hspace{2mm}}||r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}||r@{\hspace{1mm}}|} \hline {\parbox[center]{22mm}{\begin{center}Type of\\ properties\end{center}}} & {\parbox[top]{16mm}{\begin{center}Number of\!\!\\ problems\!\!\end{center}}} & {\parbox[top]{13mm}{\vspace*{-2mm}\center CVC4\\[-.5mm] with\\[-.5mm] induction\!\\[2mm]}} & {~AdtInd} & {~Eldarica} & {{ {\textsc{ AdtRem}}} } & {~Virtual Best} \\ \hline \hline % {~~~~Valid} & 50 & 3 & 2 & 1 & 44 & -\,\\ {~~~~Invalid} & 29 & 0 & 0 & 22 & 7 & -\,\\ \hline {~~~~Total} & 79 & 3 & 2 & 23 & 51 & -\,\\ \hline \end{tabular} \vspace{2mm} \caption{{\small {\it Uniquely solved problems.} Number of problems uniquely solved by each tool. }}\label{tab:Exper-Results-unique} \end{center} \vspace*{-8mm} \end{table} Table~\ref{tab:Exper-Results-unique} shows further evidence that the overall performance of {\sc AdtRem} is higher than that of the other tools. Indeed, the number of problems solved only by {\sc AdtRem} is larger than the number of problems uniquely solved by any other tool. In particular, {\sc AdtRem} uniquely solves 51 problems (44 with valid properties, 7 with invalid properties) and Eldarica uniquely solves 23 problems (1 with a valid property, 22 with invalid properties).
\begin{table}[!ht] \vspace*{-4mm} \begin{center} \begin{tabular}{|@{\hspace{-2mm}}l@{\hspace{-3mm}}|r@{\hspace{2mm}}||r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{2mm}}|r@{\hspace{1mm}}|} \hline {\parbox[center]{21mm}{\begin{center}Type of\\ properties\end{center}}} & {\parbox[top]{16mm}{\begin{center}Number of\\ problems\end{center}}} & {~Unsolved} & {~\parbox[top]{13mm}{\begin{center}Uniquely \\ solved\end{center}}} & {~\parbox[top]{14mm}{\begin{center}Solved by \\ two tools\end{center}}} & {~\parbox[top]{15mm}{\begin{center}Solved by \\ three tools\end{center}}} & {~\parbox[top]{14mm}{\begin{center}Solved by \\ all tools\end{center}}} \\ \hline \hline % {~~~~Valid} & 168 & 41& 50&25&44&8\,\\ {~~~~Invalid} & 83 & 0 & 29&54&0&0\,\\ \hline {~~~~Total} & 251 & 41&79&79&44 & 8\,\\ \hline \end{tabular} \vspace{2mm} \caption{{\small {\it Benchmark difficulty.} Number of problems grouped by the number of tools that were able to solve them. }}\label{tab:Exper-Results-howmany} \end{center} \vspace*{-8mm} \end{table} Table~\ref{tab:Exper-Results-howmany} illustrates the degree of difficulty of the problems of the benchmark for the tools we have considered. Indeed, out of the 251 problems in the benchmark, 41 are solved by no tool, and only 8 problems are solved by all tools. \begin{table}[!ht] \begin{center} \begin{tabular}{|@{\hspace{-2mm}}l@{\hspace{-3mm}}|r@{\hspace{2mm}}||r@{\hspace{2mm}}|r@{\hspace{2mm}}|} \hline {\parbox[center]{24mm}{\begin{center}Type of\\ properties\end{center}}} & {\parbox[top]{18mm}{\begin{center}Number of\\ problems\end{center}}} & {\parbox[top]{20mm}{\begin{center} {Algorithm~${\mathcal R}$ \\~terminates} \end{center}}} & {\parbox[top]{26mm}{\begin{center}Percentage solved\\ by {\sc AdtRem} \end{center}}} \\ \hline \hline % {~~~~Valid} & 168 & 117 & $98\%$ \,\\ {~~~~Invalid} & 83 & 61 & $100\%$ \,\\ \hline {~~~~Total} & 251 & 178 & $99\%$~\,\\ \hline \end{tabular} \vspace{2mm} \caption{{\small {\it Termination of Algorithm~${\mathcal R}$~and effectiveness of {\sc AdtRem}.} Number of problems for which the ADT removal algorithm terminates and the percentage of those problems solved by {\sc AdtRem}. }}\label{tab:Exper-Results-adtrem} \end{center} \vspace*{-8mm} \end{table} Table~\ref{tab:Exper-Results-adtrem} shows that Algorithm~${\mathcal R}$~terminates quite often and, whenever it terminates, Eldarica is able to check the satisfiability of the derived set of clauses in almost all cases. Indeed, Algorithm~${\mathcal R}$~terminates on {178} problems out of 251, and {\sc AdtRem} solves 176 problems out of those {178}. Note that the use of the Differential Replacement Rule R7 {(a novel rule introduced in this paper)} has a positive effect on the termination of Algorithm~${\mathcal R}$. In order to assess this effect, we have implemented a modified version of the ADT removal algorithm~${\mathcal R}$, called~{$\mathcal{R}^{\circ}$}\!, which \textit{does not} introduce difference predicates. Indeed, in {$\mathcal{R}^{\circ}$} the {\it Diff-Introduce} case of the {\it Diff-Define-Fold} Procedure of Figure~\ref{fig:Diff} is never executed. We have applied Algorithm~{$\mathcal{R}^{\circ}$} to the 168 problems with valid properties and it terminated only on {94} of them, while ${\mathcal R}$~terminates on 117 (see Table~\ref{tab:Exper-Results-adtrem}). Details are available at {\small{\url{https://fmlab.unich.it/adtrem/}}}.
\smallskip The effectiveness of the induction-based solvers we have considered, namely CVC4 and AdtInd, may depend on the supply of suitable lemmas for proving the main conjecture, and also on the representation of the natural numbers. Indeed, further experiments we have performed (see {\small{\url{https://fmlab.unich.it/adtrem/}}}) show that, on problems with valid properties, CVC4 and AdtInd solve 102 (instead of 75) and 64 (instead of 62) problems, respectively, when auxiliary lemmas are added as extra axioms. If, in addition, we consider the `dti' encoding of the natural numbers~\footnote{In the `dti' encoding, natural numbers are represented using both the built-in type {\it Int} and the ADT inductive definition with the zero and successor constructors~\cite{ReK15}.}\!, CVC4 and AdtInd solve 139 and 59 problems, respectively. Our results show (see Table~\ref{tab:Exper-Results}) that in most cases {\sc AdtRem} needs neither those extra axioms nor that sophisticated encoding. \smallskip Finally, in Table~\ref{tab:examples} we report some problems with valid properties solved by {\sc AdtRem} that are not solved by CVC4 with induction, by AdtInd, or by Eldarica. CVC4 with induction and AdtInd are not able to solve those problems even if we take their formalizations with auxiliary lemmas and different encodings of the natural numbers. In Table~\ref{tab:examples2} we report problems with valid properties solved by CVC4 with induction, or by AdtInd, or by Eldarica, that are not solved by {\sc AdtRem}. \begin{table}[!ht] \vspace{-3mm} \begin{center} \begin{tabular}{|@{\hspace{1mm}}l@{\hspace{0mm}}|@{\hspace{2mm}}l@{\hspace{10mm}}|} \hline {\it Problem} & {\it Property} \\ \hline\hline CLAM goal4\hspace*{1.3cm} \rule{0mm}{3.5mm}$_{_{_{_{~}}}}$ & $\forall x.\ \mathit{len (append(x,\!x)) =} ~2 \ \mathit{len(x)}$ \\ \hline CLAM goal6\hspace*{1.3cm}\rule{0mm}{3.5mm}$_{_{_{_{~}}}}$ & $\forall x,y.\ \mathit{len (rev(append(x,\!y))) = len(x) + len(y)}$ \\ \hline IsaPlanner goal52 \rule{0mm}{3.5mm}$_{_{_{_{~}}}}$ & $\forall n,l.\ \mathit{count(n,\!l) = count(n, rev(l))}$ \\ \hline IsaPlanner goal80 \rule{0mm}{3.5mm}$_{_{_{_{~}}}}$ & $\forall l.\ \mathit{sorted (sort(l))}$ \\ \hline \hline \end{tabular}\vspace*{3mm} \caption{{\small Problems solved by {\sc AdtRem} and solved by neither {\rm CVC4 with induction, nor AdtInd, nor Eldarica.}}} \label{tab:examples} \end{center} \vspace*{-6mm} \end{table} \vspace*{-8mm} \begin{table}[!ht] % \begin{center} \begin{tabular}{|@{\hspace{1mm}}l@{\hspace{1mm}}|@{\hspace{1mm}}l@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{1mm}}|} \hline {\it Problem} & {\it Property} & {\it Solved by} \\ \hline\hline CLAM goal18 & $\forall x,\!y.\ \mathit{rev(append(rev(x),\! y))\!=\! append(rev(y),\!x)}$ & {\parbox[top]{18mm}{\vspace*{-2mm}\center CVC4 with\\[-.5mm] induction\!\\[1.2mm]}} \\ \hline CLAM goal76 \rule{0mm}{3.5mm}$_{_{_{_{~}}}}$& $\forall x,\!y.\ \mathit{append(revflat(x),\! y)\!=\! qrevaflat(x,\!y)}$ & AdtInd \\ \hline \\[-4mm] {\parbox[top]{22mm}{\rule{0mm}{3.5mm} Leon amortize-\\queue-goal3$_{_{_{_{~}}}}$}} & $\forall x.\ \mathit{len(qrev(x))\!=\! len(x)}$ & Eldarica \\ \hline \end{tabular}\vspace*{3mm} \caption{{\small Problems solved by {\rm CVC4 with induction or AdtInd or Eldarica} and not solved by {\sc AdtRem}.
}} \label{tab:examples2} \end{center} \vspace*{-6mm} \end{table} \vspace*{-5mm} \subsection{The workflow of the {\sc AdtRem} tool} \label{subsec:AdtRem-workflow} Our {\sc AdtRem} tool implements the satisfiability checking method presented in Section~\ref{subsec:solvingCHCs} as follows. First, {\sc AdtRem} makes use of the VeriMAP system~\cite{De&14b} to perform the steps specified by Algorithm~${\mathcal R}$. It takes as input a set~$P_0$ of CHCs and, if it terminates, it produces as output a set~$P_n$ of CHCs that have basic types. Then, in order to show the satisfiability of $P_{0}$, {\sc AdtRem} invokes the Eldarica CHC solver to show the satisfiability of~$P_n$. If Eldarica proves that $P_n$ is satisfiable, then, by the soundness of the transformation performed by Algorithm~${\mathcal R}$~(see Theorem~\ref{thm:soundness-AlgorithmR}), $P_0$ is satisfiable and {\sc AdtRem} returns the answer `\textit{sat}'. In particular, the implementation of the \textit{Unfold} procedure enforces Condition~(U) of Theorem~\ref{thm:unsat-preserv}, which indeed ensures the soundness of Algorithm~${\mathcal R}$. If Eldarica proves that $P_n$ is unsatisfiable by constructing a counterexample CEX, {\sc AdtRem} proceeds by checking whether or not Conditions (E) and (F), which guarantee the completeness of Algorithm~${\mathcal R}$, hold (see Theorem~\ref{thm:completeness-AlgorithmR}). In particular, during the execution of Algorithm~${\mathcal R}$, {\sc AdtRem} checks Condition (E) when applying the Folding Rule R3, and Conditions (F2) and (F3) when applying the Differential Replacement Rule~R7. {\sc AdtRem} marks all clauses in $P_n$ derived by a sequence of transformation steps where one of the Conditions (E), (F2), and (F3) is not satisfied. Then, {\sc AdtRem} inspects the counterexample CEX constructed by Eldarica to verify whether or not any instance of the marked clauses is used in the construction of CEX. If this is the case, it is not possible to establish the unsatisfiability of $P_0$ from the unsatisfiability of~$P_n$ (as the completeness of Algorithm~${\mathcal R}$~may not hold), and hence {\sc AdtRem} returns the answer `\textit{unknown}' as the result of the satisfiability check for $P_{0}$. Otherwise, if no instance of a marked clause is used in CEX, then {\sc AdtRem} proceeds by checking Condition~(F1) of Definition~\ref{def:condF}. Recall that, by Proposition~\ref{prop:fun-preserv}, Condition~(F1) can be checked by inspecting the clauses defining the differential predicates in $\textit{Definite}(P_n)$. To perform this check, for each differential predicate $\textit{diff}_k$ introduced by Algorithm~${\mathcal R}$~and occurring in CEX, {\sc AdtRem} produces a clause, call it $\textit{fun-diff}_k$, of the form: \smallskip $~\textit{false}\leftarrow$ $O_1\!\neq\!O_2,$ $\textit{diff}_k(I,O_1), \textit{diff}_k(I,O_2)$ \smallskip \noindent where $I$ and $O_i$, for $i=1,2,$ are the tuples of input and output variables, respectively, of $\textit{diff}_k(I,O_i)$. Then, {\sc AdtRem} runs Eldarica to check the satisfiability of $\bigcup_k\{\textit{fun-diff}_k\}\cup D_n$, where $D_n$ is the set of clauses defining $\textit{diff}_k$ in $\textit{Definite}(P_n)$ (see Example~\ref{ex:func}). Now, this satisfiability check may succeed (Case 1) or may not succeed (Case 2). In Case 1, Condition~(F1) holds and, by the completeness of Algorithm~${\mathcal R}$~(see Theorem~\ref{thm:completeness-AlgorithmR}), we have that $P_0$ is unsatisfiable and {\sc AdtRem} returns the answer `\textit{unsat}'.
In Case 2, Condition~(F1) cannot be shown, and {\sc AdtRem} returns the answer `\textit{unknown}' as the result of the satisfiability check for~$P_{0}$. \subsection{Benchmark suite} Our benchmark suite consists of 251 verification problems over inductively defined data structures, such as lists, queues, heaps, and trees. Out of these 251 problems, 168 refer to properties that hold (\textit{valid} properties) and the remaining 83 refer to properties that do not hold (\textit{invalid} properties). The 168 problems specifying valid properties have been adapted from the benchmark suite considered by Reynolds and Kuncak~\cite{ReK15}, and originate from benchmarks used by various theorem provers, such as CLAM~\cite{IrB96}, HipSpec~\cite{Cl&13}, IsaPlanner~\cite{DiF03,Jo&10}, and Leon~\cite{Su&11}. In particular, we have considered Reynolds and Kuncak's `dtt' encoding, where natural numbers are represented using the built-in SMT type {\it Int}. From those problems we have discarded: (i)~the ones that do not use ADTs, and (ii)~the ones that cannot be directly represented in Horn clause format. In order to compare our approach with that of Reynolds and Kuncak on a level playing field, since {\sc AdtRem} supports neither higher-order functions nor user-provided lemmas, (i)~we have replaced higher-order functions by suitable first-order instances, and (ii)~we have removed all auxiliary lemmas from the formalization of the problems. We have also used LIA constraints, instead of the basic functions recursively defined over natural numbers, such as the {\it plus} function and the {\it less-or-equal} relation, so that the solver can deal with them by using the LIA theory. The 83 problems specifying invalid properties have been obtained from those specifying valid properties by either negating the properties or modifying the definitions of the predicates on which the properties depend. The benchmark suite is available at {\small{\url{https://fmlab.unich.it/adtrem/}}}. \subsection{Experiments} We have performed the following experiments. \smallskip \noindent\hangindent=4mm 1. We have run the `cvc4+ig' configuration of the CVC4 solver extended with inductive reasoning, and also the AdtInd solver, on the 251 verification problems in SMT-LIB format. \noindent\hangindent=4mm 2. Then, we have translated each verification problem into a set, call it $P_0$, of CHCs in the Prolog-like syntax supported by {\sc AdtRem} by using a modified version of the SMT-LIB parser of the ProB system~\cite{LeB03}. We have run Eldarica v2.0.5, which uses no induction-based mechanism for handling ADTs, to check the satisfiability of the SMT-LIB translation of $P_0$\footnote{We have also performed analogous experiments on the set of valid properties by using Z3-SPACER~\cite{Ko&13}, instead of Eldarica, as reported in an earlier version of this paper~\cite{De&20a}. The results of those experiments, which we do not report here, are very similar to those shown in Table~\ref{tab:Exper-Results}.}\!. \noindent\hangindent=4mm 3. Finally, we have run {\sc AdtRem} to check the satisfiability of $P_0$. If {\sc AdtRem} returns `\textit{sat}', the property specified by $P_0$ is reported to be valid. If {\sc AdtRem} returns `\textit{unsat}', the property is reported to be invalid. \smallskip \noindent Experiments have been performed on an Intel Xeon CPU E5-2640 2.00GHz with 64GB RAM under CentOS, and for each problem we have set a timeout limit of 300 seconds.
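\smallskip \noindent To summarize the workflow of Section~\ref{subsec:AdtRem-workflow} in an executable form, the following Python sketch reproduces the top-level decision logic of {\sc AdtRem}. It is an illustration only: the helper functions passed as parameters are hypothetical placeholders for the components realized by VeriMAP and Eldarica, not the actual interface of the tool.
\begin{verbatim}
def adtrem_decision(P0, run_algorithm_R, eldarica_solve,
                    uses_marked_clause, check_F1):
    # Step 1: ADT removal (Algorithm R); clauses derived through
    # steps violating Conditions (E), (F2), or (F3) come back marked.
    Pn, marked = run_algorithm_R(P0)

    # Step 2: soundness -- if the ADT-free clauses are satisfiable,
    # so is the original set P0.
    result, cex = eldarica_solve(Pn)
    if result == 'sat':
        return 'sat'

    # Step 3: completeness may fail if the counterexample uses an
    # instance of a marked clause.
    if any(uses_marked_clause(cex, c) for c in marked):
        return 'unknown'

    # Step 4: check Condition (F1), i.e., functionality of the
    # differential predicates occurring in the counterexample.
    if check_F1(Pn, cex):
        return 'unsat'
    return 'unknown'
\end{verbatim}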
\section{Introduction} \label{sec:Intro} \input{1_Intro.tex} \section{A Motivating Example} \label{sec:IntroExample} \input{2_IntroEx.tex} \section{Constrained Horn Clauses} \label{sec:CHCs} \input{3_CHC-after.tex} \section{Transformation Rules for Constrained Horn Clauses} \label{sec:TransfRules} \input{4_Rules_Soundness.tex} \section{An Algorithm for ADT Removal} \label{sec:Strategy} \input{5_Transform.tex} \section{Preserving Completeness} \label{sec:Completeness} \input{6_Completeness.tex} \section{Experimental Evaluation} \label{sec:Experiments} \input{7_Experiments.tex} \section{Related Work and Conclusions} \label{sec:RelConcl} \input{8_RWConclusions.tex} \section*{Acknowledgements} \label{sec:Acknowledgements} \input{Acknowledgements.tex} \input{9_bibliography.bbl} \end{document}
\section{Introduction} \label{sec:introduction} Historical photographs provide a valuable source of information for researchers in several fields of science. The photographs of the two World Wars alone have been analyzed in archaeology \cite{WW_archaeology,WW_archaeologyII}, war history \cite{WW_ypres_salient,WW_warhistory2}, post-phenomenological geography \cite{WW_repeat_photography}, photojournalism \cite{WW_Lapland,WW_photojournalism2}, religion \cite{WW_religion}, landscape research \cite{WW_landscape}, history of photography \cite{WW_photohistory,WW_history2}, propaganda research \cite{WW_propaganda,WW_propaganda2}, and other fields. Such research efforts require the systematic analysis of large quantities of photographs, which is a laborious task taking a large part of the overall research time. State-of-the-art machine learning algorithms have the potential to significantly speed up this task and also to provide novel perspectives and directions for subsequent studies in different fields \cite{multimediaintelligence, cao2018recent, jiao2019survey}. Despite this potential, the use of machine learning has so far been very scarce in this context. Previous works in the field include applications of face recognition to assist in identifying persons in historical portrait photographs \cite{hist_face_recognition}, feature matching for geolocalization or target matching in historical repeat photography \cite{hist_match1,hist_match_dataset,hist_match2,hist_match3}, the application of marked point processes to the automatic detection of bomb craters in aerial wartime images \cite{hist_craters}, and a rudimentary classification of historical photographs into portraits, landscapes, group photographs, and buildings/architectural photography \cite{hist_classification}. Widespread exploitation of machine learning in research using historical photographs has not started yet. One reason for this may be that the researchers performing such research typically have a background far from information technology. Besides not being able to use the novel machine learning tools, many researchers in these fields may not even realize the potential of machine learning in their work. Therefore, we demonstrate in this paper how state-of-the-art machine learning algorithms can assist and provide new insight in historical photo analysis. As our case study, we concentrate on Finnish World War II photographs, while we use general algorithms and publicly available training data. Therefore, a similar analysis can be directly applied to any historical dataset. The Finnish army produced a unique and internationally significant database of photographs during the Winter War, Continuation War, and Lapland War in 1939-1945. This collection is known as the Finnish Wartime Photograph Archive \cite{SAarchive}, and it consists of almost 160,000 photographs captured by men who served in TK (Tiedotuskomppania = Information company) troops. The archive was digitized in the beginning of the 2010s and made publicly available in 2013. In its extent and historical significance, the Finnish Wartime Photograph Archive is comparable to the American Farm Security Administration/Office of War Information Photograph Collection \cite{FSAarchive}, which contains about 175,000 photos taken during the depression and drought of the 1930s and World War~II. One of the official tasks of the TK troops was to collect ethnographic records.
The Finnish Wartime Photograph Archive provides a unique cross-section of life, especially in the Eastern Karelia occupied by Finnish troops during the Continuation War \cite{SarkynytArki}. The archive provides a valuable source of information for historians, photojournalists, and other researchers searching for information on the life and sentiments behind the battles \cite{SAtutkimus}. However, the original photograph labeling typically provides only the date, the place, the photographer, and a brief description of the key content. Thousands of photographs lack even this basic contextual information, or it is incomplete. Moreover, not much of the content providing insight into the everyday life and sentiments of the people was originally described. Therefore, humanistic researchers have invested a considerable amount of time and effort to manually go through the collection and search for the information related to the studies at hand. In this paper, we show that machine learning algorithms can ease this kind of photo analysis, not only by helping to patch up gaps in the database but also by providing information that would be hard to obtain by manual inspection. Several hundred photographers captured the Finnish Wartime collection. However, most of them took only one or a few images, and just a few dozen photographers captured half of the images. While the photographers did not have the freedom to select their topics freely, each photographer still provides a subjective view of the events. The objects appearing in the photos, the scene setup, and the picture framing vary based on the professional background, personal training, and preferences of a photographer. Some of the photographers can be considered skillful photojournalists or artists, while others simply recorded the events with their cameras with a less experienced approach. Therefore, a better understanding of the differences between the individual TK photographers can provide deeper insight into the significance of the content and help researchers find the content they are looking for. In this paper, we exploit state-of-the-art machine learning algorithms to analyze the characteristics and differences of 23 active TK photographers. We examine the typical objects appearing in the photographs and the framing of the photos (i.e., close-ups vs. overall shots) for each photographer, and we evaluate how distinguishable the different photographers are. In this work, our contribution lies on the edge of historical photograph analysis and machine learning, allowing us to make a step towards automatically answering historical research questions and facilitating the work of historians. Rather than presenting a novel method from the machine learning perspective, we show how several common historical photograph analysis research problems can be addressed by utilizing modern machine learning techniques or by making minor modifications to them. Our proposed approaches allow automating the work of historians that currently requires a large amount of monotonous manual labor. This can provide a significant speed-up of the research process as well as the possibility to process considerably larger-scale data, and it allows historians to use their time on analyzing the higher-level meaning and consequences of the gathered results.
More specifically, \begin{itemize} \item we propose a pipeline for historical photograph analysis based on a combination of four state-of-the-art object detection methods; \item we show how the above-mentioned pipeline can be utilized for photo framing evaluation; \item we formulate the problem of photographer recognition and propose an approach for the quantitative assessment of the visual similarity of photographers, as well as for establishing unknown authorship of photographs, based on it; \item based on the performed experiments, we provide an analysis of the most prominent Finnish WW2 photographers selected from the Finnish Wartime Photograph Archive; \item we provide the obtained bounding box annotations, the code, and all the pretrained models obtained in our study to facilitate further research in this area.\footref{data} \end{itemize} We have structured the rest of the paper as follows: in Section~\ref{sec:mlhist}, we describe and discuss the tasks and methodologies adopted in this study in a general manner, understandable also without prior knowledge of machine learning. We give the technical details separately in Section~\ref{sec:methods}, discuss the obtained results in Section~\ref{sec:results}, and conclude the paper in Section~\ref{sec:conclusion}. \section{Machine Learning for historical photograph analysis} \label{sec:mlhist} In this work, we propose and evaluate several application areas in which machine learning can assist in the analysis of historical images and photographers, namely, the analysis of objects present in the scene, photo framing evaluation, photographer classification, and the assessment of their visual similarity. The selected tasks illustrate only a small fraction of the ways machine learning can help in historical photograph analysis; they were chosen based on their potential to provide a significant amount of useful information for researchers with a small amount of additional work that does not require a deeper understanding of the underlying methods. We provide all the code and models along with a detailed description of how to apply them to other historical photo archives\footnote{\label{data}We provide all code, models, and obtained data annotations along with a detailed description of how to use them at \url{github.com/katerynaCh/Finnish-WW2-photographers-analysis}. A permanent website will be created during the review process of this paper, which will host all information related to our research on this topic.}. We selected 23 Finnish war photographers for our experiments. The first 20 of them were the photographers with the highest total numbers of images in the Finnish Wartime Photograph Archive, and the remaining three were included because experts consider them interesting for photojournalistic research. The selected photographers, along with the number of photographs and the photographing period of each photographer, are listed in Table~\ref{tab:photographers}. The table also assigns the photographer IDs used in later tables and illustrations. The total number of photographs considered in our analysis is $59,021$. It is likely that most of the photographers captured a higher number of photographs than suggested here. This is because thousands of photos in the Finnish Wartime Photograph Archive still lack the name of the photographer.
As our analysis will help to differentiate the characteristics of the TK photographers, it may later contribute to suggesting names for at least some of the anonymous photographs. \begin{table}[tb] \caption{Selected photographers, the total number of photographs taken, and the photographing periods} \begin{tabular}{l|llcc} ID & Photographer & Total & Start date & End date \\ \hline 1 & Kim Borg & 3932 & 25 Jun 1941 & 29 Oct 1944\\ 2 & Tuovi Nousiainen & 3551 & 25 Jun 1941 & 19 Sep 1944\\ 3 & Ukko Ovaskainen & 3523 & 24 Jun 1941 & 05 Jul 1944\\ 4 & V\"ain\"o Hollming & 3391 & 25 Sep 1941 & 09 Sep 1944\\ 5 & Jarl Taube & 3181 & 25 Aug 1941 & 11 Jul 1944\\ 6 & Nils Helander & 3125 & 14 Sep 1941 & 16 Jun 1944\\ 7 & Pauli J\"anis & 2903 & 10 Apr 1942 & 27 Sep 1944\\ 8 & Oswald Hedenstr\"om & 2812 & 24 Jun 1941 & 23 Sep 1944\\ 9 & Esko Suomela & 2755 & 25 Jun 1941 & 20 Sep 1944\\ 10 & Tauno Norjavirta & 2734 & 27 Jun 1941 & 21 Sep 1944\\ 11 & Martin Persson & 2615 & 02 Sep 1941 & 31 Aug 1943\\ 12 & Kauko Kivi & 2585 & 24 Jun 1941 & 02 Jul 1944\\ 13 & Hugo Sundstr\"om & 2564 & 24 Jun 1941 & 06 Nov 1944\\ 14 & Vilho Uomala & 2543 & 24 Jun 1941 & 20 Oct 1944\\ 15 & Eino Nurmi & 2379 & 25 Jun 1941 & 20 Aug 1944\\ 16 & Holger Harrivirta & 2307 & 26 Jun 1941 & 06 Dec 1942\\ 17 & Olavi Aavikko & 2109 & 10 Sep 1941 & 22 Jul 1944\\ 18 & Uuno Laukka & 1989 & 10 Aug 1941 & 10 Oct 1944\\ 19 & Kalle Sj\"oblom & 1967 & 20 Jun 1941 & 04 Aug 1944\\ 20 & Pekka Kyytinen & 1962 & 05 Jul 1941 & 15 Jul 1944\\ 21 & Heikki Roivainen & 1721 & 12 Sep 1941 & 21 Jul 1942\\ 22 & Esko Manninen & 1699 & 04 Jul 1941 & 20 Apr 1944\\ 23 & Turo Kartto & 674 & 17 Aug 1941 & 24 May 1942\\ \end{tabular} \label{tab:photographers} \end{table} \subsection{Photo content analysis} The presence of specific objects in a scene can provide plenty of information about an image of that scene. For example, it can be used for anomaly/fault detection \cite{staar2019anomaly, sarikan2018anomaly}, increasing the autonomy of vehicles \cite{arnold2019survey}, and video surveillance \cite{huang2019deep}. To improve the performance of such scene analysis methods on new datasets, domain adaptation methods, which reduce the gap between the representation of the labeled training dataset and that of the unlabeled target dataset, can be utilized \cite{hedegaard2020supervised, wang2020target, wang2019domain}. In the context of historical photo analysis, when appropriate classes are used, the detected objects make it possible to determine the context of each photo, as well as the focus of each photographer. For example, photographs on which chairs are detected are likely to have been taken indoors, while photographs on which horses, boats, cars, or trains are present are more likely to be outdoor photos. The presence of objects such as skis can help determine the time of the year and, therefore, help establish the time period of unlabeled photographs. In turn, a high number of chairs, ties, and people in a photo is likely to indicate a photo of some official event. At the same time, photos of airplanes are likely to depict battle or near-battle areas. Besides the analysis of the context of each specific photo, the detected objects can help determine the main focus of each photographer by evaluating which types of objects are present in their photographs the most. For example, photographers having larger numbers of people, ties, and chairs are more likely to have been urban photographers, while those with more animals (e.g., dogs, horses) are more likely to have worked in rural or countryside areas.
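\smallskip \noindent As a concrete illustration of this kind of context inference, the following Python sketch derives coarse context tags from per-photo detection counts. The class names follow the MS-COCO labels used by our detectors, while the tagging rules are hand-written heuristics given purely as an example, not part of a trained model.
\begin{verbatim}
def context_tags(counts):
    """counts: dict mapping a COCO class name to its number of
    detections in one photo; returns a set of heuristic tags."""
    tags = set()
    if counts.get('chair', 0) > 0:
        tags.add('likely indoors')
    if any(counts.get(c, 0) > 0
           for c in ('horse', 'boat', 'car', 'train')):
        tags.add('likely outdoors')
    if counts.get('skis', 0) > 0:
        tags.add('winter')
    if (counts.get('person', 0) >= 5 and counts.get('tie', 0) > 0
            and counts.get('chair', 0) > 0):
        tags.add('formal event')
    if counts.get('airplane', 0) > 0:
        tags.add('battle or near-battle area')
    return tags

print(context_tags({'person': 8, 'chair': 6, 'tie': 3}))
# -> {'likely indoors', 'formal event'}
\end{verbatim}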
In our study, we propose to perform such analysis by leveraging the power of object detection methods, i.e., methods that are able to localize objects from a set of pre-specified classes in images. Moreover, we observe that labeling and training on a specific historical image dataset is often unnecessary, as by combining the outputs of several strong object detectors pre-trained on public modern datasets one can achieve representative results for many common object classes. We also provide the obtained bounding box annotations for the Finnish Wartime Photograph Archive to facilitate further research in this area\footnoteref{data}. \subsection{Photo framing evaluation} The framing of a photograph is one of the stylistic decisions a photographer has to make. It is one of the most effective ways to ensure visual variety in a group of photographs of a single situation. A traditional way of categorizing framings is to use the three types defined by Kobre \cite{Kobre}: overall shots, medium shots, and close-ups. A more detailed division of framings is widely used, e.g., in cinematic storytelling. According to this basic categorization, an overall shot sets the scene, showing where the event took place: inside, outside, country, city, land, sea, day, night, and so on. This shot defines the relative positions of the participants. A medium shot, on the other hand, should “tell the story” in one photograph by compressing the important elements into one image. It is shot close enough to see the actions of the participants, yet far enough away to show their relationship to one another and to the environment. Finally, a close-up adds drama by isolating one element and emphasizing it. In photographs of people, a close-up usually portrays a subject’s face. Measuring the ratio of different framings in a photographer’s works in a certain collection is one way to characterize his/her way of seeing. Here, we propose to take advantage of a combination of several object detectors for solving this task. To separate the different framing categories, we examine the photographs with detected people and consider the relative size of the largest bounding box, which usually corresponds to the person closest to the camera, with respect to the image size. \subsection{Photographer classification and visual similarity assessment} Another problem in this study that we propose to address by utilizing machine learning techniques is the assessment of the visual similarity of different photos and photographers. The ability to differentiate photographers based on the visual cues of photographs can assist in labeling the images whose author is unknown. To evaluate how distinguishable different photographers are, we select a subset of 12 photographers (4-Hollming, 5-Taube, 6-Helander, 7-J\"anis, 8-Hedenstr\"om, 9-Suomela, 12-Kivi, 14-Uomala, 15-Nurmi, 19-Sj\"oblom, 21-Roivainen, 22-Manninen) and use some of the photographs of each photographer to train a neural network to recognize the photographer. A neural network sequentially applies a set of transformations to the input images, transforming them into a representation that allows different photographers to be distinguished, after which a classification layer assigns the image to a certain photographer.
Therefore, besides directly establishing the authorship of unlabeled images, this approach yields a feature representation of the photographs in which visually similar images are located close to each other in the feature space, allowing us to assess the visual similarities of different photographers, as well as the similarities of specific images, quantitatively. For the quantitative analysis, i.e., establishing the extent to which the photographers are similar, we propose to utilize the Earth Mover's Distance \cite{emd} between the feature representations extracted from the penultimate layer of the neural network trained for photographer classification. \section{Methods Description} \label{sec:methods} This section provides technical details on the machine learning approaches utilized for performing the previously described analysis, including object detection, photo framing analysis, photographer classification, and the analysis of the visual similarity of photographs. \subsection{Photo content analysis and framing evaluation} \label{ssec:aggregation} For the analysis of objects present in the scenes of the photographs, we created a framework utilizing four state-of-the-art object detectors, namely the Single-Shot Detector (SSD) \cite{SSD}, You Only Look Once v3 (YOLOv3) \cite{yolo}, RetinaNet \cite{retinanet}, and Mask R-CNN \cite{he2017mask}. We combine the four object detectors because such a combination can result in improved detection accuracy and bounding box precision compared to the use of a single object detector, since a single detector generally fails to detect all the objects of interest in an image. In this case, the information obtained from four independent detectors can compensate for undetected objects of interest and therefore provide improved results. At the same time, combining the bounding box coordinates of objects detected by several object detectors can improve the precision of the bounding boxes. All models were pretrained on the MS-COCO dataset \cite{coco}, which contains 80 classes. Among those, we considered people, airplanes, boats, trains, cars, bicycles, skis, dogs, horses, chairs, and ties, as shown in Table~\ref{tab:detection}. The individual object detectors are described below in Sections III-A1 to III-A4; the pipeline for aggregating the information obtained from them is described in the following paragraphs. From each detector, we obtain a set of bounding boxes, each given by four coordinates and a class label with a corresponding confidence score. We discarded predictions with a confidence score below a certain threshold. This threshold was selected to be 0.7 for Mask R-CNN, 0.3 for RetinaNet, 0.5 for SSD, and 0.6 for YOLOv3. The thresholds were selected by manually investigating the effect of different scores of each detector on the overall detection results. Higher thresholds were selected for YOLOv3 and Mask R-CNN, as they tend to produce more false positives with higher scores in our setup. In order to determine the final bounding boxes, the results from the multiple detectors are aggregated. We investigate two combination approaches. In both approaches, the bounding boxes corresponding to the same object must first be identified. This is achieved as follows: first, the detected bounding boxes belonging to the same class are sorted according to their confidence scores. Then, the Jaccard similarities, also referred to as the Intersection over Union \cite{iou}, of each box with respect to the most confident bounding box of the class are calculated.
Intersection over Union is defined as the area of the intersection of two boxes divided by the area of their union: \begin{equation} IoU = \frac{\text{Area of overlap}}{\text{Area of union}} \end{equation} The bounding boxes having an IoU greater than a certain threshold $\theta$ are identified as belonging to the same object. In our experiments, we set $\theta$ to 0.1. Then, the most confident bounding box and the boxes whose IoU with it is higher than the threshold are identified as belonging to the same object, marked as processed, and removed from the bounding box list, so that each bounding box is matched with at most one object. The process then continues from the bounding box having the highest confidence score among those remaining after the previous step, until all bounding boxes have been processed. After this stage, we examined two options for combining the detections identified as belonging to the same object: either the bounding box with the highest confidence score can be selected, or the mean of each coordinate over all bounding boxes corresponding to the same object can be taken. The first approach raises issues related to the different scoring systems of different detectors, i.e., some detector might produce higher scores for all of its detections in general, while its bounding boxes might be less accurate. We also observed this heuristically, so in our experiments we follow the second approach of taking the mean value of each coordinate over all the detectors, and we observe that this generally results in more accurate positioning of the bounding box, although this cannot be evaluated quantitatively without ground-truth information. This process was applied to the bounding boxes of each class separately. We utilize the detections of the person class for photo framing evaluation, or, in other words, for estimating the distance from which a photo was taken. After combining the predictions of the detectors, we used the largest bounding box of the person class in our photo framing evaluation. The evaluation is based on the area occupied by the bounding box: if the bounding box occupies more than 65\% of the overall photograph, the photo is classified as a close-up; if it occupies 10-65\%, as a medium shot; and if it occupies less than 10\%, as an overall shot. In the remainder of this section, we provide details on the object detectors utilized in the above-described approach. \subsubsection{SSD} The first object detector applied was SSD \cite{SSD}, one of the most well-known single-shot detectors. The detector is based on the VGG-16 \cite{vgg} model pretrained on the ImageNet dataset \cite{imagenet}, which is used as a backbone feature extractor, followed by several convolutional layers that downsample the image and result in multiple feature maps. Using these feature maps from different layers, detection can be performed on multiple scales, while preserving the parameters across all scales, ensuring that both large and small objects are detected equally well. In addition, the single-shot approach results in high inference speed. SSD relies on the idea of default bounding boxes, meaning that, prior to training, several default bounding boxes are determined based on the number of feature maps to be used and the sizes of the feature maps. Bounding boxes are created for the aspect ratios $\{1,2,3,\frac{1}{2}, \frac{1}{3}\}$. During training, each groundtruth bounding box is associated with one of the default bounding boxes, determined by the highest Intersection over Union \cite{iou}.
This default bounding box becomes a positive example for the groundtruth box, while the others become negative examples. At each scale, a feature map of a different size is created and divided into grid cells. During inference, a set of default bounding boxes is evaluated for each cell of the feature map, and for each default bounding box, a shape offset is predicted along with the class probabilities for each class. Training is done with a combination of the localization loss, which is a Smooth L1 loss \cite{smoothl1} between the predicted box and the groundtruth box, and the confidence loss, which is the cross-entropy loss over the class confidences. In our experiments, we used images rescaled to the size of $512\times512$ pixels as input to the SSD detector. \subsubsection{YOLOv3} The second object detector used was YOLOv3 \cite{yolo}, which is in many ways similar to SSD: YOLO is a single-shot detector that makes predictions on multiple scales by performing detection on feature maps from different parts of the network. Prediction is done across three different scales, obtained by dividing the image size by 32, 16, and 8. YOLO relies on an ImageNet-pretrained Darknet-53 architecture that is used as a feature extractor backbone, and multiple convolutional layers are added on top of it. Similarly to SSD, the image is divided into grid cells, and each cell is responsible for detecting the objects whose centers are located within its boundaries. Each grid cell predicts several bounding boxes along with the corresponding class labels and confidence scores. Rather than predicting bounding box coordinates directly, YOLO predicts the offsets from a predetermined set of boxes, referred to as anchor boxes or prior boxes, where each box is represented by its width and height dimensions \cite{yolov2}. These anchor boxes are obtained by applying $k$-means clustering \cite{kmeans} on the width and height dimensions of the boxes in the training set with the distance defined as \begin{equation} d(box,centroid) = 1 - IoU(box, centroid), \end{equation} where both $box$ and $centroid$ are represented by two-dimensional vectors of width and height, $IoU$ stands for Intersection over Union, and $k = 9$ is chosen for the $k$-means clustering, resulting in 9 anchor boxes. For the calculation of $IoU$, we assume that the centers of the boxes are located at the same point. More specifically, for the model trained on the COCO dataset and $416\times416$ images, the anchor boxes are $(10\times13),(16\times30),(33\times23),(30\times61),(62\times45),(59\times119),(116\times90),(156\times198)$, and $(373\times326)$. For each detected bounding box, the class prediction is obtained by multi-label classification with separate logistic classifiers. During training, a loss comprised of a binary cross-entropy loss for object classification and a sum-of-squared-errors loss for bounding box prediction is used. YOLO operates on images of a fixed size, and for our experiments all images were rescaled to the size of $416\times416$ pixels. \subsubsection{RetinaNet} The RetinaNet \cite{retinanet} object detector is the third state-of-the-art object detector used in this work. The overall architecture of RetinaNet consists of a backbone network for feature extraction, namely a Feature Pyramid Network \cite{fpn} built on top of ResNet \cite{resnet}, and two subnetworks, one of which is responsible for object classification and the other for bounding box regression. Similarly to the previous detectors, the backbone network is pretrained on the ImageNet dataset.
Similarly to the other detectors discussed so far, RetinaNet performs detection on multiple scales and relies on a predefined set of anchor boxes. Here, for each scale, anchors of 3 aspect ratios $\{1:2, 1:1, 2:1\}$ and 3 sizes $\{2^0, 2^{\frac{1}{3}}, 2^{\frac{2}{3}}\}$ are used, resulting in 9 anchor boxes per scale level. The subnet for object classification is a small fully convolutional network whose parameters are shared between the different scale levels. The network is comprised of 3$\times$3 convolutional layers. For each spatial position, object class, and anchor box, a sigmoid activation function predicts the probability of the presence of an object of that class. Thus, this subnet has an output of size $W\times H \times A*K$, where $A$ is the number of anchor boxes, $K$ is the number of classes, and $W$ and $H$ are the width and height of the corresponding feature map. The bounding box regression subnet is a fully convolutional network that predicts four coordinates for each anchor box at each spatial location. The predicted coordinates correspond to the offset relative to the anchor. The main difference from the other detectors lies in the utilization of a new loss function, referred to as the Focal Loss, designed to address the issue of imbalanced classes in the object classification subnet: \begin{equation} FL(p_t) = -\alpha(1-p_t)^\gamma \log(p_t);\quad p_t = \begin{cases}p, & \text{if } y=1\\ 1-p, & \text{otherwise} \end{cases} \end{equation} where $y = \pm 1$ is the ground-truth binary class label for the evaluated class, $p$ is the estimated class probability, $\gamma$ is a focusing parameter, and $\alpha$ is a balancing parameter. As input to this detector, we rescaled the images preserving the aspect ratio, setting the size of the smaller side to 800 pixels while keeping the size of the larger side at 1333 pixels maximum. \subsubsection{Mask R-CNN} Mask R-CNN \cite{he2017mask} was the fourth detector used in this work. It is based on Faster R-CNN \cite{ren2015faster}, a region proposal based network consisting of two major blocks: a Region Proposal Network (RPN) that predicts the possible candidate locations of objects in the image, and a Region of Interest (RoI) classifier that extracts features of each candidate region proposed by the RPN, assigns class labels to them, and refines the bounding box locations. Mask R-CNN extends Faster R-CNN with the prediction of segmentation masks, which is performed in parallel with bounding box prediction. Mask R-CNN predicts a binary segmentation mask for each candidate region proposed by the RPN, resulting in $K$ masks of size $m\times m$ per RoI, where $K$ is the number of classes. The prediction is achieved by a Fully Convolutional Network. A per-pixel sigmoid is applied to the $m \times m$ mask output of the groundtruth class during training (i.e., only to the $c^{th}$ mask for an RoI with groundtruth class $c$), and the segmentation loss $L_{mask}$ is defined as an average binary cross-entropy loss. The total loss is defined as $L = L_{cls} + L_{box} + L_{mask}$, where $L_{cls}$ and $L_{box}$ are the classification and bounding box regression losses, respectively, defined in the same way as in the original Fast R-CNN \cite{smoothl1}. Faster R-CNN relies on the RoIPool operation for the extraction of small feature maps. RoIPool quantizes the float values of an RoI into discrete bins to fit the granularity of the feature map, followed by spatial partitioning of the RoI into several spatial bins, to which pooling is applied.
Such processing allows achieving a higher training speed while not affecting the performance much, as classification is robust to small translations. However, segmentation requires pixel-accurate processing, making it necessary to replace RoIPool. For this purpose, the RoIAlign layer was proposed, in which quantization is avoided: four locations are selected in each RoI bin and their values are computed using bilinear interpolation. It has been shown experimentally that the architecture with RoIAlign but without the mask segmentation component already outperforms Faster R-CNN on the bounding box prediction task, and multi-task training with segmentation pushes the precision even further. The architecture of Mask R-CNN consists of a convolutional backbone that is used for feature extraction and a head that is used for classification, bounding box prediction, and segmentation. In our setup, ResNet101 \cite{resnet} was used as the backbone and FPN \cite{fpn} as the head. An image size of $540 \times 960$ was used for processing. \subsection{Photographer recognition} The evaluation of the visual similarity of photographers and the prediction of the authorship of unknown photos are achieved in this work by formulating an appropriate classification problem. For recognizing the photographer of a photo, we applied a pretrained and finetuned convolutional neural network. The architecture used in this work is a modified VGG-19 architecture \cite{vgg}, pretrained on the ImageNet dataset. The modifications to the original architecture include the addition of Dropout layers, each keeping 50\% of the connections, after each pooling layer and after each of the last two fully-connected layers, as well as the addition of a randomly-initialized fully-connected layer with 1024 neurons, followed by another Dropout layer keeping 50\% of the connections. As the final step, a layer with 12 neurons and a softmax activation function is added. The Adam optimizer was used for training with a learning rate of $10^{-5}$, momentum decay rates of $0.9$ and $0.999$ for the first and second moment estimates, respectively, and a learning rate decay of $10^{-6}$. \begin{figure*} \includegraphics[height=0.2\linewidth]{Figures/35197_good.pdf} \includegraphics[height=0.2\linewidth]{Figures/59947_good.pdf} \includegraphics[height=0.2\linewidth]{Figures/162432_medium.pdf} \includegraphics[height=0.2\linewidth]{Figures/127688_bad.pdf} \caption{Examples of successful and erroneous object detection results. Histograms of the photographs shown here and in the following examples have been equalized. We show here also object classes not used in our analysis (e.g. cow). Photographers: U. Ovaskainen, K. Borg, K. Borg, P. Jänis. Source of photographs: SA-kuva.}\label{fig:examples} \end{figure*} In order to address the issue of imbalanced classes, a weighted loss was used during training, calculated as: \begin{equation} \mathcal{L} = -w_c\,\frac{1}{N}\sum_{i=1}^N \log\big(p(y_i \in C_{y_i})\big) \end{equation} \begin{equation} w_c = \frac{N}{N_c \times C}, \end{equation} where $N$ is the total number of training samples, $N_c$ is the number of training samples in class $c$, $C$ is the total number of classes \cite{king2001logistic}, and $p(y_i \in C_{y_i})$ denotes the predicted probability that the $i^{th}$ observation $y_i$ belongs to the class $C_{y_i}$.
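\smallskip \noindent As a concrete illustration of the weighting scheme above, the following sketch computes the per-class weights $w_c = N/(N_c \times C)$ from integer labels. The Keras-style \texttt{class\_weight} dictionary shown at the end is one common way of plugging such weights into training; it is given here as an assumption about the training setup rather than a description of our exact implementation.
\begin{verbatim}
import numpy as np

def class_weights(labels, num_classes):
    """Compute w_c = N / (N_c * C) for integer labels 0..C-1.
    Assumes every class occurs at least once in `labels`."""
    labels = np.asarray(labels)
    N, C = len(labels), num_classes
    counts = np.bincount(labels, minlength=C)   # N_c per class
    return {c: N / (counts[c] * C) for c in range(C)}

# Toy example: 6 samples, 3 classes; the rare class 2 gets the
# largest weight.
print(class_weights([0, 0, 0, 1, 1, 2], num_classes=3))
# {0: 0.666..., 1: 1.0, 2: 2.0}

# In Keras, such a dictionary can be passed directly, e.g.:
# model.fit(x_train, y_train,
#           class_weight=class_weights(y_train, 12), ...)
\end{verbatim}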
The training, validation, and test splits were selected randomly, while ensuring that photos taken on the same day by the same photographer are not divided between splits, as they likely contain very similar photographs of a single event. In our setup, 60\% of the photos were selected as the training set, 20\% as the validation set, and the rest as the test set. As a preprocessing step, we performed histogram equalization on the value component of each photo in the HSV space in order to improve the contrast of each photo. Then, we resized the images to the size of 224$\times$224 pixels. Training was done for 100 epochs with a batch size of 8 and categorical cross-entropy as the loss function. \subsection{Photographer visual similarity} In order to obtain a quantitative measure of the visual similarity between photographers, we extract features from the penultimate layer of the network trained for photographer recognition. Treating the set of features of each photographer as the signature of the corresponding probability distribution, we calculate the Earth Mover's Distance \cite{emd, pyemd} between these distributions. The Earth Mover's Distance is defined as the minimal cost needed for the transformation of one signature into the other, where the cost is based on some distance metric between two features. In our case, we utilize the Euclidean distance. The Earth Mover's Distance between two distributions $P$ and $Q$ is then formally defined as: \begin{equation} EMD(P,Q) = \frac{\sum^m_{i=1}{\sum^n_{j=1}f_{i,j}d_{i,j}}}{\sum^m_{i=1}{\sum^n_{j=1}f_{i,j}}}, \end{equation} where $m$ and $n$ are the sizes of the signatures of the corresponding distributions, $f_{i,j}$ denotes the optimal flow between samples $i$ and $j$ found by solving the corresponding network flow problem, and $d_{i,j}$ denotes the distance between samples $i$ and $j$ \cite{emd}. In order to visualize the relationships between the photos of different photographers, we utilize the same features that are used for calculating the photographer similarity. The resulting feature map has a high dimensionality, so for visualization purposes we exploit the t-distributed Stochastic Neighbor Embedding algorithm (t-SNE) \cite{tSNE}. t-SNE is a data visualization method for high-dimensional data that aims at mapping the data instances from the high-dimensional space to some low-dimensional space in which the similarities between instances are preserved. This is achieved by modelling the similarities between instances as conditional probabilities. In the high-dimensional space, the similarity between data instances $x_i$ and $x_j$ is represented by the probability of $x_j$ being selected as the nearest neighbor of $x_i$ if neighbors were selected proportionally to their probability density under a Gaussian distribution centered at $x_i$. In the low-dimensional space, the Student's t-distribution with one degree of freedom is used instead of the Gaussian distribution. Using a heavy-tailed distribution helps to model moderate distances in the high-dimensional space with much larger distances in the low-dimensional space, resulting in better results compared to other methods. The Kullback-Leibler divergence between these probability distributions is then minimized with gradient descent. The result of the visualization can be seen in Fig.~\ref{fig:tSNE}.
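\smallskip \noindent To make the similarity computation concrete, the following sketch computes the Earth Mover's Distance between the feature sets of two photographers and a two-dimensional t-SNE embedding of the pooled features. We use the POT library and scikit-learn here purely for illustration (our experiments rely on the pyemd implementation cited above), and the feature arrays are assumed to come from the penultimate layer of the trained classifier.
\begin{verbatim}
import numpy as np
import ot                              # POT: Python Optimal Transport
from sklearn.manifold import TSNE

def photographer_distance(feats_a, feats_b):
    """EMD between two (n_i x d) arrays of penultimate-layer
    features, with Euclidean ground distance."""
    a = ot.unif(len(feats_a))          # uniform weights over samples
    b = ot.unif(len(feats_b))
    M = ot.dist(feats_a, feats_b, metric='euclidean')
    return ot.emd2(a, b, M)            # cost of the optimal flow

# Toy example with random 1024-d "features" for two photographers.
rng = np.random.default_rng(0)
fa = rng.normal(size=(50, 1024))
fb = rng.normal(loc=0.5, size=(60, 1024))
print(photographer_distance(fa, fb))

# 2-D embedding of the pooled features for a scatter plot in the
# spirit of Fig. 8.
emb = TSNE(n_components=2, random_state=0).fit_transform(
    np.vstack([fa, fb]))
\end{verbatim}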
\section{Empirical Study, Results, and Discussion} \label{sec:results} In this section, we describe the experiments performed and discuss the obtained results for the analysis based on object detection, photo framing evaluation, photographer classification, and visual similarity assessment. \begin{figure*} \centering \begin{subfigure}[b]{0.31\textwidth} \includegraphics[height=\textwidth]{Figures/107372_close.pdf} \caption{A close-up photo} \end{subfigure} ~ \begin{subfigure}[b]{0.31\textwidth} \includegraphics[height=\textwidth]{Figures/31908_middist.pdf} \caption{A medium shot} \end{subfigure} ~ \begin{subfigure}[b]{0.31\textwidth} \includegraphics[height=\textwidth]{Figures/48375_overall.pdf} \caption{An overall shot} \end{subfigure} \caption{Example photographs of the different framing categories and the corresponding detection results. We show here also object classes not used in our analysis (e.g. elephant). Photographer: K. Borg. Source of photographs: SA-kuva.}\label{fig:ranges} \end{figure*} \begin{table*}[htbp] \caption{Ratio of photos with people, the number of people per such image, and occurrences of other object classes per 100 images for different photographers} \centering \begin{tabular}{l|c|cc|cccccccccc|} ID & Objects & Person & Persons & Airplanes & Boats & Trains & Cars & Bicycles & Skis & Dogs & Horses & Chairs & Ties \\ & per image & images & per image & \multicolumn{10}{|c|}{occurrences per 100 images} \\ \hline \hline 1 & 6.8 & 0.89 & 2.8 & 2.5 & 9.4 & 6.3 & 8.7 & 6.2 & 2.4 & 4.6 & 12.1 & 11.9 & \emph{8.0} \\ 2 & 7.9 & 0.89 & 3.6 & 2.4 & \emph{4.1} & \textbf{7.9} & 9.8 & 7.2 & 1.8 & 3.1 & 8.1 & 24.8 & 14.5 \\ 3 & 9.2 & 0.90 & 4.3 & 1.6 & 6.4 & 5.0 & \textbf{10.8} & 7.1 & 2.5 & 5.7 & 18.1 & 16.6 & 22.5 \\ 4 & 7.3 & 0.93 & 3.8 & 3.3 & 8.4 & \emph{1.6} & 6.2 & \emph{2.9} & \textbf{10.0} & 6.7 & 15.7 & \emph{8.4} & 13.0 \\ 5 & 7.4 & \textbf{0.95} & 4.6 & 1.6 & 4.5 & 4.2 & 9.2 & 3.9 & 4.0 & 4.2 & 10.5 & 21.5 & 13.3 \\ 6 & 7.8 & 0.90 & \emph{2.6} & \textbf{14.5} & 9.7 & 7.0 & 8.2 & 4.2 & 3.7 & 5.4 & 11.2 & 10.1 & 8.9 \\ 7 & 6.8 & 0.91 & 3.7 & 2.4 & \emph{4.3} & \emph{2.7} & 6.6 & 3.4 & 3.5 & 5.7 & 16.1 & 14.0 & 12.6 \\ 8 & 12.1 & 0.94 & \textbf{5.7} & 3.1 & 8.8 & 5.8 & 8.7 & 4.8 & \textbf{6.5} & \textbf{7.4} & 12.4 & 39.5 & \textbf{29.8} \\ 9 & 8.6 & 0.93 & 4.2 & 4.3 & \textbf{18.5} & \textbf{7.5} & 6.9 & 3.6 & 2.0 & 3.3 & \emph{5.2} & 19.4 & 19.5 \\ 10 & \emph{6.6} & \emph{0.85} & 3.6 & 2.0 & 8.4 & 4.3 & 9.5 & 3.8 & 1.3 & 4.0 & 13.4 & 9.0 & 13.1 \\ 11 & 7.6 & 0.91 & 3.2 & 2.8 & 8.8 & 5.9 & 10.0 & 3.8 & 5.4 & 5.0 & 6.9 & 21.3 & 10.3 \\ 12 & 9.8 & 0.92 & 4.0 & 7.4 & \textbf{15.7} & 6.0 & 9.1 & \textbf{11.0} & 6.2 & 7.0 & 19.1 & 12.8 & 9.6 \\ 13 & 9.8 & 0.90 & 5.0 & \emph{1.5} & 5.5 & 6.5 & 8.2 & 3.8 & 3.1 & \emph{3.0} & 7.9 & 36.4 & 27.0 \\ 14 & 6.7 & \emph{0.84} & 3.8 & 2.9 & 11.6 & 4.8 & 6.4 & 4.0 & 4.5 & 5.4 & 13.6 & \emph{7.3} & 9.6 \\ 15 & 9.3 & 0.94 & 4.0 & \textbf{11.8} & 7.6 & 4.1 & 8.1 & 4.3 & 1.8 & 5.3 & \textbf{27.0} & 16.1 & 12.2 \\ 16 & 6.7 & 0.89 & 3.2 & 3.1 & 9.7 & 4.8 & 10.2 & 3.4 & 2.1 & 5.1 & 16.7 & 8.7 & \emph{6.2} \\ 17 & 6.8 & 0.92 & 3.6 & 1.6 & 6.3 & 5.4 & 8.0 & 3.0 & 2.1 & 4.7 & 12.3 & 18.3 & 9.0 \\ 18 & 7.2 & 0.90 & 3.8 & 1.7 & 7.3 & 3.7 & \emph{6.1} & 4.8 & 4.5 & 5.5 & 9.9 & 16.8 & 14.5 \\ 19 & 11.8 & \textbf{0.98} & 4.1 & 2.5 & 4.6 & 4.1 & 8.5 & 4.9 & 1.4 & \emph{1.8} & \emph{3.8} & \textbf{53.7} & \textbf{39.8} \\ 20 & 7.3 & 0.88 & 3.5 & 1.7 & 12.5 & 4.2 & \emph{6.0} & 3.7 & 1.3 & 4.5 & 7.6 & 17.1 & 17.7 \\ 21 & 8.2 & 0.91 & \emph{2.7} & 2.0 & 7.0 & 7.1 & \textbf{13.9} &
\emph{2.4} & 5.6 & \textbf{8.5} & \textbf{21.8} & 8.9 & 10.5 \\ 22 & 14.8 & \textbf{0.95} & \textbf{6.1} & 3.4 & 11.7 & 5.3 & 6.8 & \textbf{8.0} & 1.3 & 4.4 & 10.0 & \textbf{79.1} & 27.0 \\ 23 & 8.1 & 0.94 & 3.1 & \emph{1.5} & 5.2 & 6.4 & 8.2 & 7.3 & \emph{1.0} & 5.0 & 17.8 & 15.5 & 17.5 \\ \hline Avg & 8.5 & 0.91 & 3.3 & 3.6 & 8.5 & 5.2 & 8.4 & 4.9 & 3.4 & 5.0 & 12.9 & 21.2 & 17.5 \end{tabular} \label{tab:detection} \end{table*} \subsection{Photo content analysis} We applied the pretrained object detection algorithms to detect the objects appearing in the images. Out of the available 80 object classes, we manually selected 11 relevant classes (people, airplanes, boats, trains, cars, bicycles, skis, dogs, horses, chairs, and ties). We also empirically checked that the detection quality for these classes was high. Some of the potentially interesting classes, e.g., cow, we discarded because many cow detections were actually horses, reindeer, or other objects. Also for the selected classes, the results should be considered only as indicative. When objects are clearly visible, they are typically well detected. However, there are cases where objects are missed or misidentified. A few examples of object detections are shown in Fig.~\ref{fig:examples}. \begin{figure*} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{Figures/close3.pdf} \caption{Percentage of close-ups} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{Figures/far3.pdf} \caption{Percentage of overall shots} \end{subfigure} \caption{Percentage of different framing categories among photographs with people (the rest of the photographs are considered as medium shots)} \label{fig:range_ratios} \end{figure*} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{Figures/test-cm.pdf} \caption{Confusion matrix for photographer recognition} \label{fig:confusion} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=1.9\columnwidth]{Figures/wasserstein.pdf} \caption{Similarity of the photographers (higher value denotes lower similarity).} \label{fig:wasserstein} \end{figure*} It is evident that the results do not provide exact object counts. Instead, we exploit the results to evaluate the relative numbers of occurrences of different objects in the photographs of each photographer. The object detection results for each photographer are given in Table~\ref{tab:detection}, where we report the ratio of images with people and the average number of persons in these images, as well as the average number of occurrences of other objects per 100 images for each photographer and the average number of objects of all classes per 100 images. For each object class, we highlight the values for the photographers with the most frequent (bold) and least frequent (italic) occurrences. We also provide the average frequency of each class over all photographers for reference. As expected, we observe from Table~\ref{tab:detection} that different photographers concentrated on different content: 19-Sj\"oblom has people in 98\% of his images, while 10-Norjavirta and 14-Uomala have people in less than 85\% of their images. 8-Hedenstr\"om and 22-Manninen have the highest average numbers of people in these images (i.e., counting only images with people), while 6-Helander and 21-Roivainen captured images with fewer people. 6-Helander and 15-Nurmi captured high numbers of airplanes, while 9-Suomela and 12-Kivi concentrated on boats.
Interestingly, 6-Helander also has a rather high occurrence of other types of vehicles (boats, trains), while 15-Nurmi focused predominantly on airplanes. In 21-Roivainen's photos, there are many animals (horses, dogs), showing that many of his photos were taken in rural environments rather than urban scenes. Based on our manual inspection, chair pictures are typically taken indoors, while ties are worn by high-ranking soldiers or wealthy people in urban conditions. Generally, the presence of these objects in a scene allows us to conclude that the photo was taken at some formal event. 19-Sj\"oblom, who has the highest ratio of photographs with people, and 22-Manninen, who has the highest average number of people in his pictures, also have the most chairs. At the same time, both of them have low rates of skis, dogs, and horses, supporting the claim that they were focusing on reporting formal events and photographing high-ranking military servants in urban environments. The connection between the occurrence of chairs and people is supported by the fact that 4-Hollming and 14-Uomala have the lowest chair rates. 14-Uomala also has a low ratio of people images, while 4-Hollming captured a high number of skiing photos, which shows that he photographed more outdoors. The occurrence of animals in his photos is rather high as well. \subsection{Photo framing evaluation} Photo framing evaluation is performed for the photographs on which people are present according to the output of the combination of object detectors. We manually defined two thresholds to divide such photographs into three classes: close-ups, medium shots, and overall shots. Fig.~\ref{fig:ranges} shows an example photograph belonging to each of these classes. Fig.~\ref{fig:range_ratios} shows how the photographs with people are divided into the different framing categories for each photographer (the percentages of close-ups and overall shots are shown; the remaining percentage corresponds to medium shots). The figure shows that 19-Sj\"oblom took proportionally the most close-ups and medium shots and the fewest overall shots. From the previous subsection, we know that he also had the highest ratio of photos with people, and the objects detected in his photographs profiled him as an urban photographer. We also observe that the other three photographers with the highest ratios of close-ups, i.e., 6-Helander, 7-J\"anis, and 21-Roivainen, have rather low chair and tie rates along with a low number of persons per image. At the same time, the number of images on which people are present is average for these photographers. These observations lead us to the conclusion that these photographers focused on portrait photographs when photos of people were taken. In addition, we observe that 2-Nousiainen and 14-Uomala captured proportionally the most overall shots. 14-Uomala also had the fewest people photographs in general and only a few chairs in his images, which leads us to conclude that he did mostly outdoor photography. The fact that the average number of objects in his photographs is also the lowest leads us to the same conclusion. These observations support each other, as overall shots are mainly outdoor images. 18-Laukka took the fewest close-ups, and the objects detected in his photos mainly profile him as a non-urban photographer, although the ratio of objects in each category is rather low. Interestingly, 4-Hollming had only a few overall shots, while the objects in his photographs profiled him as a non-urban outdoor photographer.
One possible explanation comes from the fact that he has the highest ratio of skis, which are more likely to be present in close-up or medium photographs of people taken outdoors. \begin{table}[tb] \caption{Classification accuracies of different photographers} \centering \begin{tabular}{l|c} Photographer ID & Accuracy \\ \hline 4-V\"ain\"o Hollming & 51.4\% \\ 5-Jarl Taube & 47.8\% \\ 6-Nils Helander & 57.1\% \\ 7-Pauli J\"anis & 42.6\% \\ 8-Oswald Hedenstr\"om & 49.5\% \\ 9-Esko Suomela & 20.1\% \\ 12-Kauko Kivi & 26.9\% \\ 14-Vilho Uomala & 26.8\% \\ 15-Eino Nurmi & 25.8\% \\ 19-Kalle Sj\"oblom & 50.4\% \\ 21-Heikki Roivainen & 69.7\% \\ 22-Esko Manninen & 35.5\% \\ \end{tabular} \end{table} \subsection{Photographer recognition} Following the described classification method for photographer recognition, we evaluated whether the trained network can be used to recognize the photographer of unseen photographs not used in training. Since each photographer has a certain number of duplicate images, here we split the photographs into training and test sets according to the capturing times, to ensure that photographs depicting the same event are not used for both training and testing. Overall, the network achieved a 41.1\% classification accuracy on the test set. The confusion matrix of the classification results is shown in Fig.~\ref{fig:confusion}, where the diagonal elements represent correctly classified samples. We see that the network was able to correctly classify a significant part of the photographs of each photographer. The photographer-specific recognition rates vary from 20.1\% for 9-Suomela to 69.7\% for 21-Roivainen. The recognition accuracy of each photographer is shown in Table 3. Comparing the recognition results with the earlier analysis of detected objects reveals that some of the best recognized photographers also have characteristic objects. 21-Roivainen (69.7\% accuracy) has the most dogs, horses, and cars in his pictures. 4-Hollming (51.4\%) has the highest number of skiing pictures and only a few chairs (i.e., many outdoor photos). 22-Manninen (35.5\%) had the highest average number of people in his people photos and the highest occurrence of chairs (i.e., indoor photos). 19-Sj\"oblom (50.4\%) captured photographs in urban environments. Some of the main confusions occur between 4-Hollming, 6-Helander, and 7-J\"anis. We observe that all three of them can be considered non-urban photographers with rather similar numbers of persons per image and relatively low numbers of chairs and ties. In addition, 5-Taube and 12-Kivi are confused with each other. 19-Sj\"oblom and 22-Manninen are often misclassified as 8-Hedenstr\"om; these are also the three photographers with the highest numbers of chairs and ties, i.e., the photographers that appear to be the most urban. Also, 9-Suomela is often misclassified as 12-Kivi; both of them have a high rate of boats and a rather high rate of trains. These observations support the conclusion that, besides establishing the authorship of photographs, the learned feature representation allows us to draw conclusions about the overall visual similarity of these photographers and the similarity of the styles of their photos. \begin{figure*}[htbp] \centering \includegraphics[width=1.9\columnwidth]{Figures/photographers.pdf} \caption{Visualization of the photograph similarities using the t-SNE algorithm and sample photographs with varying similarity.
Source of photographs: SA-kuva.} \label{fig:tSNE} \end{figure*} \subsection{Photographer visual similarity} We calculated the Earth Mover's Distance between all pairs of photographers to assess their similarity, as described in Section III-C. The results are shown in Fig.~\ref{fig:wasserstein}, where higher values indicate larger distances, i.e., lower similarity. The highest distances are highlighted in bold, and the lowest distances are underlined. We observe that the obtained results correspond closely to the misclassification rates of the photographers, i.e., photographers that are often misclassified as each other also have a low distance to each other. For example, this can be observed in the pairs 5-Taube and 12-Kivi; 15-Nurmi and 9-Suomela; 12-Kivi and 15-Nurmi. We also observe that for these photographers the similarities can be seen from the detected objects as well: both 15-Nurmi and 12-Kivi have similar numbers of objects per image and person images, and their rates of persons per image are equal. We observe the same for the pair 15-Nurmi and 9-Suomela. Moreover, for the photographers identified as distant from each other, we can see that their mutual misclassification rates are low, and the objects detected in their photographs provide a reasonable explanation for their differences. The three least similar pairs are 19-Sj\"oblom and 21-Roivainen; 21-Roivainen and 8-Hedenstr\"om; 19-Sj\"oblom and 4-Hollming. From Table 2, we can observe that 19-Sj\"oblom has the highest ratio of person images, the lowest number of dogs and horses, and the highest number of chairs and ties. At the same time, 21-Roivainen has the highest ratio of dogs and horses, and low numbers of chairs and ties. He also has an average number of person images and objects per image, while 19-Sj\"oblom has rather high ratios. Similar reasoning holds for the pair 4-Hollming and 19-Sj\"oblom, as 4-Hollming has low numbers of chairs and ties, while having rather high ratios of non-urban photos. Moreover, 4-Hollming has the highest ratio of skiing photos, meaning that he most likely had significantly more winter photos than the other photographers, making him visually more distinguishable. 21-Roivainen has the lowest ratio of persons per image, while 8-Hedenstr\"om has the highest. The average number of objects per image differs rather significantly as well. In addition, 8-Hedenstr\"om has high ratios of chairs and ties compared to 21-Roivainen. Another noticeable fact is that the photographers with many of the extreme values of detected objects, e.g., 19-Sj\"oblom (maximal ratio of person images, chairs, and ties, and minimal ratios of dogs and horses) and 21-Roivainen (maximal ratios of cars, dogs, and horses; minimal ratios of persons per image and bicycles), also have higher distances to all other photographers, which can be seen from the fact that the corresponding rows and columns are rather dark compared to the others. These facts support the meaningfulness of the proposed method for establishing photographers' similarity. We further examined the similarities and differences between the photographers by extracting the features learned by the classifier network for the test images and visualizing them with the t-SNE algorithm \cite{tSNE} in Fig.~\ref{fig:tSNE}. In the figure, the dots denote photographs and different colors correspond to different photographers.
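Such an embedding is straightforward to reproduce. The following is a minimal sketch, under the assumption that the penultimate-layer features of the trained classifier have already been extracted for the test photographs; the file names and t-SNE parameters are hypothetical placeholders.

\begin{verbatim}
# Sketch: 2-D t-SNE embedding of learned photograph features.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.load("features.npy")  # (n_photos, d), hypothetical file
labels = np.load("labels.npy")      # photographer id per photo

embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=4, cmap="tab20")
plt.savefig("tsne.png", dpi=300)
\end{verbatim}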
Some of the colors are clearly concentrated in certain spots, further confirming that different images are characteristic of different photographers. The similarities of adjacent images are also illustrated by the examples, showing that images located close to each other in the feature space are also visually similar, even if the photographers are different. It can be observed that the upper-right corner images represent landscapes, the lower-right corner shows images that contain many people in close proximity, and the upper-left corner contains images of multiple people located at a distance, where few other details are present. This further confirms the soundness of the proposed approach for obtaining feature representations and allows us to compare the visual similarity of images by different photographers. \section{Conclusion} \label{sec:conclusion} We showed that modern machine learning algorithms can help societal research on historical photo archives in many ways. In this paper, we applied state-of-the-art object detection models and neural network architectures to obtain statistics and characteristics of prominent Finnish World War II photographers. We examined the typical object categories in the photos of each photographer and analyzed the differences in their ways of capturing and framing people. Furthermore, we showed that a convolutional neural network was able to recognize photographers from their photos to some extent, leading to the conclusion that certain photos can be considered typical for a specific photographer. The confusion matrix of the photographer classifier revealed some similarities between the photographers. We are not aware of any prior works using machine learning for such photographer analysis. The obtained results will help historians, other researchers, and professionals using historical photo archives to analyze and compare the works of specific photographers. In this work, we used only publicly available pretrained object detection models and basic photograph information, i.e., the photographer, available for most photo archives. The pretrained models showed good performance on the historical gray-scale photographs even though they were pretrained on modern color photos. We also provide all code, models, and data annotations, along with a detailed description of how to use them. Thus, the same methods can easily be applied to other historical photo archives. At the same time, we note that, in a single paper, we can demonstrate only a tiny fraction of all currently available machine learning tools. The photographer analysis could be further enhanced, for example, by considering photographer intentions \cite{intention} and photo quality \cite{photoquality}; moreover, object detection performance could be improved by information fusion approaches \cite{chen2019deep} and by better detection of small objects \cite{cao2019improved}. Here, we only considered person detection, while person segmentation \cite{jiang2019end}, face detection with facial expression analysis \cite{uddin2017facial}, group-level emotion recognition \cite{yu2019group}, or age estimation \cite{al2019comprehensive} would open up further opportunities. Besides object-level analysis, scene recognition \cite{zheng2017multicriteria} would help to further characterize the photographers.
In the future, we will concentrate on issues requiring more specialized methods, such as recognizing object classes that appear only in Finnish historical photos or only during World War II. We aim to exploit the original textual photo descriptions to produce more complete object labeling as well as topic and event recognition \cite{zhang2017automatic}. This will help us solve one of the biggest challenges in analyzing wartime photos, namely separating the different statuses of the subjects: whether the people in the photographs are alive, wounded, or deceased. Such refined results can ultimately help us draw a more detailed picture of the aims, qualities, and characters of individual TK photographers. We aim to publish all our results in the archive to support different types of societal studies based on it. \bibliographystyle{plain}
\section{Introduction} Given a simple, connected graph $G$, the graph associahedron $\KG$ is a convex polytope whose face poset is based on the connected subgraphs of $G$ \cite{cd}. For special examples of graphs, the graph associahedra become well-known, sometimes classical polytopes. For instance, when $G$ is a path, a cycle, or a complete graph, $\KG$ results in the associahedron, cyclohedron, and permutohedron, respectively. A geometric realization was given in \cite{dev2}. Figure~\ref{f:kwexmp} shows $\KG$ when $G$ is a path and a cycle with three nodes, resulting in the 2D associahedron and cyclohedron. \begin{figure}[h] \includegraphics{KWtubes} \caption{Graph associahedra of the (a) path and (b) cycle with three nodes as underlying graphs.} \label{f:kwexmp} \end{figure} This polytope was first motivated by De Concini and Procesi in their work on ``wonderful'' compactifications of hyperplane arrangements \cite{dp}. In particular, if the hyperplane arrangement is associated to a Coxeter system, the graph associahedra $\KG$ appear as tilings of these spaces, where the underlying graph $G$ is the Coxeter graph of the system \cite{djs}. These compactified arrangements are themselves natural generalizations of the Deligne-Knudsen-Mumford compactification \M{n} of the real moduli space of curves \cite{dev1}. From a combinatorics viewpoint, graph associahedra arise in relation to positive Bergman complexes of oriented matroids \cite{arw}, along with studies of their enumerative properties \cite{prw}. Recently, Bloom has shown graph associahedra arising in results relating Seiberg-Witten Floer homology and Heegaard Floer homology \cite{blo}. Most notably, these polytopes have emerged as graphical tests on ordinal data in biological statistics \cite{mps}. It is not surprising to see $\KG$ in such a broad range of subjects. Indeed, the combinatorial and geometric structures of these polytopes capture and expose the fundamental concept of connectivity. Thus far, however, $\KG$ has been studied only for simple graphs $G$. The goal of this paper is to define and construct graph associahedra in a general context: finite pseudographs which are allowed to be disconnected, with loops and multiple edges. Most importantly, this induces a natural map between $\KG$ and $\KG'$, where $G$ and $G'$ are related by either edge contraction or edge deletion. Such an operation is foundational, for instance, to the Tutte polynomial of a graph $G$, defined recursively using the graphs $G/e$ and $G-e$, which itself specializes to the Jones polynomial of knots. An overview of the paper is as follows: Section~\ref{s:defns} supplies the definitions of the pseudograph associahedra along with several examples. Section~\ref{s:construct} provides a construction of these polytopes and polytopal cones from iterated truncations of products of simplices and rays. The connections to edge contractions (Section~\ref{s:contract}) and edge deletions (Section~\ref{s:delete}) are then presented. A geometric realization is given in Section~\ref{s:real}, used to relate pseudographs with loops to those without. Finally, proofs of the main theorems are given in Section~\ref{s:proof}. \begin{ack} The second author thanks Lior Pachter, Bernd Sturmfels, and the University of California at Berkeley for their hospitality during his 2009-2010 sabbatical where this work was finished. \end{ack} \section{Definitions} \label{s:defns} \subsection{} We begin with foundational definitions.
Although graph associahedra were introduced and defined in \cite{cd}, we start here with a blank slate. The reader is forewarned that the definitions here might not exactly match those from earlier works, since previous ones were designed to deal with just the case of simple graphs. \begin{defn} Let $G$ be a finite graph with connected components $G_1$, \ldots, $G_k$. \begin{enumerate} \item A \emph{tube} $t$ is a proper connected subgraph of $G$ that includes at least one edge between every pair of nodes of $t$ whenever such edges exist in $G$. \item Two tubes are \emph{compatible} if one properly contains the other, or if they are disjoint and cannot be connected by a single edge of $G$. \item A \emph{tubing} of $G$ is a set of pairwise compatible tubes that does not contain all of the tubes $G_1$, \ldots, $G_k$ simultaneously. \end{enumerate} \end{defn} \begin{exmp} The top row of Figure~\ref{f:tubings} shows examples of valid tubings, whereas the bottom row shows invalid ones. Part (e) fails since one edge between the bottom two nodes must be in the tube. The tubing in part (f) contains a non-proper tube of $G$. The two tubes of part (g) fail to be compatible since they can be connected by a single edge of $G$. And finally, the tubing of part (h) fails since it contains all the tubes of the connected components. \end{exmp} \begin{figure}[h] \includegraphics{tubings} \caption{The top row shows valid tubings and the bottom row shows invalid ones.} \label{f:tubings} \end{figure} \subsection{} Let $\red$ be the number of \emph{redundant edges} of $G$, the minimal number of edges we can remove to get a simple graph. We now state one of our main theorems. \begin{thm} \label{t:pseudo} Let $G$ be a finite graph with $n$ nodes and $\red$ redundant edges. The \emph{pseudograph associahedron} $\KG$ is of dimension $n-1+\red$ and is either \begin{enumerate} \item a simple convex polytope when $G$ has no loops, \ or \item a simple polytopal cone otherwise. \end{enumerate} Its face poset is isomorphic to the set of tubings of $G$, ordered under reverse subset containment. In particular, the codimension $k$ faces are in bijection with tubings of $G$ containing $k$ tubes. \end{thm} \noindent The proof of this theorem follows from the construction of pseudograph associahedra from truncations of products of simplices and rays, given by Theorem~\ref{t:trunc}. The following result allows us to only consider \emph{connected} graphs $G$: \begin{thm} \label{t:disconnect} Let $G$ be a disconnected pseudograph with connected components $G_1, G_2, \ldots, G_k$. Then $\KG$ is isomorphic to $\KG_1 \times \KG_2 \times \cdots \times \KG_k \times \Delta_{k-1}.$ \end{thm} \begin{proof} Any tubing of $G$ can be described as: \begin{enumerate} \item a listing of tubings $T_1 \in \KG_1, \ T_2 \in \KG_2, \ \ldots, \ T_k \in \KG_k$, \ and \item for each component $G_i$, either including or excluding the tube $T_i = G_i$. \end{enumerate} The second part of this description is precisely a tubing of the edgeless graph $H_k$ on $k$ nodes. But from \cite[Section 3]{dev2}, since $\K H_k$ is the simplex $\Delta_{k-1}$, we are done. \end{proof} We now pause to illustrate several examples. \begin{exmp} We begin with the 1D cases. Figure~\ref{f:1D-exmp}(a) shows the pseudograph associahedron of a path with two nodes. The polytope is an interval, seen as the classical 1D associahedron. Here, the interior of the interval, the maximal element in the poset structure, is labeled with the graph with no tubes.
Part (b) of the figure shows $\KG$ as a ray when $G$ is a loop. Note that we cannot have the entire loop as a tube since all tubes must be proper subgraphs. \end{exmp} \begin{figure}[h] \includegraphics{1D-exmp} \caption{Two 1D examples.} \label{f:1D-exmp} \end{figure} \begin{exmp} For some 2D cases, Figure~\ref{f:kwexmp} displays $\KG$ for a path and a cycle with three nodes as underlying graphs. Figure~\ref{f:2D-exmp}(a) shows the simplest example of $\KG$ for a graph with a multiedge, resulting in a square. The vertices of the square are labeled with tubings with two tubes, the edges with tubings with one tube, and the interior with no tubes. Figure~\ref{f:2D-exmp}(b) shows $\KG$, for $G$ an edge with a loop, as a polygonal cone, with three vertices, two edges, and two rays. We will explore this figure in further detail below. \end{exmp} \begin{figure}[h] \includegraphics{2D-exmp} \caption{Two 2D examples.} \label{f:2D-exmp} \end{figure} \begin{exmp} Three examples of 3D pseudograph associahedra are given in Figure~\ref{f:3D-exmp}. Since each of the corresponding graphs has 3 nodes and one multiedge, the dimension of the polytope is three, as given in Theorem~\ref{t:pseudo}. Since the graph in part (a) has two components, Theorem~\ref{t:disconnect} shows its associahedron to be the product of an interval with the square from Figure~\ref{f:2D-exmp}(a), resulting in a cube. The polyhedra in parts (b) and (c) can be obtained from iterated truncations of the triangular prism. Section~\ref{s:construct} brings these constructions to light. \end{exmp} \begin{figure}[h] \includegraphics{3D-exmp-2} \caption{Three 3D examples.} \label{f:3D-exmp} \end{figure} \subsection{} We close this section with an elegant relationship between permutohedra and two of the simplest forms of pseudographs. \begin{defn} The \emph{permutohedron} $\per_n$ is an $(n-1)$-dimensional polytope whose faces are in bijection with the strict weak orderings on $n$ letters. In particular, the $n!$ vertices of $\per_n$ correspond to all permutations of $n$ letters. \end{defn} \noindent The two-dimensional permutohedron $\per_3$ is the hexagon, and the polyhedron $\per_4$ is depicted in Figure~\ref{f:tonks-facet}(a). It was shown in \cite[Section 3]{dev2} that if $\Gamma_n$ is the complete graph on $n$ nodes, then $\K \Gamma_n$ becomes $\per_n$. \begin{prop} \label{p:permuto} Consider the simplest forms of pseudographs $G$: \begin{enumerate} \item If $G$ has two nodes and $n$ edges between them, then $\KG$ is isomorphic to $\per_n \times \Delta_1$. \item If $G$ has one node and $n$ loops, then $\KG$ is isomorphic to $\per_n \times \ray$, where $\ray$ is a ray. \end{enumerate} \end{prop} \begin{proof} Consider case (1): We view $\per_n$ as $\K\Gamma_n$ for the complete graph on $n$ nodes $\{v_1,\dots v_n\}$, and the interval $\Delta_1$ as $\K\Gamma_2$ for the complete graph on two nodes $\{b_1, b_2\}$. Let the nodes of $G$ be $\{a_1, a_2\}$ and its edges $\{e_1, \ldots, e_n\}$. We construct an isomorphism $\KG \to \K\Gamma_n \times \K\Gamma_2$ where a tube $G_t$ of $G$ maps to the tube $(\psi_1(t), \psi_2(t))$, where $\psi_1(t)$ is the connected subgraph of $\Gamma_n$ induced by the node set $\{v_i \suchthat e_i \in G_t\}$, and $\psi_2(t)$ is the node $\{b_i \suchthat a_i = G_t \}$. This proves the first result; the proof of case (2) is similar, replacing the two nodes of $G$ with one node. \end{proof} \begin{exmp} Figure~\ref{f:3D-permuto-flat}(a) shows a hexagonal prism, viewed as $\per_3 \times \Delta_1$.
It is the pseudograph associahedron of the graph with two nodes and three connecting edges. Part (b) shows a 2D projection of $\per_3 \times \ray$, the hexagonal cone of a graph with three loops. Indeed, as we will see later, the removal of a hexagonal facet in (a) yields the object in (b). \end{exmp} \begin{figure}[h] \includegraphics{3D-permuto-flat} \caption{(a) The hexagonal prism $\per_3 \times \Delta_1$ and (b) the planar projection of $\per_3 \times \ray$.} \label{f:3D-permuto-flat} \end{figure} \section{Constructions} \label{s:construct} \subsection{} There exists a natural construction of graph associahedra from iterated truncations of the simplex: For a connected, simple graph $G$ with $n$ nodes, let $\triangle_G$ be the $(n{-}1)$-simplex $\Delta_{n-1}$ in which each facet (codimension one face) corresponds to a particular node. Thus each proper subset of nodes of $G$ corresponds to a unique face of $\triangle_G$, defined by the intersection of the facets associated to those nodes. Label each face of $\triangle_G$ with the subgraph of $G$ induced by the subset of nodes associated to it. \begin{thm} \cite[Section 2]{cd} \label{t:graphasstrunc} For a connected, simple graph $G$, truncating the faces of $\triangle_G$ labeled by tubes, in increasing order of dimension, results in the graph associahedron $\KG$. \end{thm} Figure~\ref{f:d4} provides an example of this construction. It is worth noting two important features of this truncation. First, only certain faces of the \emph{original} base simplex $\triangle_G$ are truncated, not any new faces which appear after subsequent truncations. Second, the \emph{order} in which the truncations are performed follows a De Concini--Procesi framework \cite{dp}, where all the dimension $k$ faces are truncated before any $(k+1)$-dimensional faces. \begin{figure}[h] \includegraphics{d4} \caption{An iterated truncation of the simplex resulting in a graph associahedron.} \label{f:d4} \end{figure} \subsection{} We construct the pseudograph associahedron by a similar series of truncations of a base polytope. However, the truncation procedure is a delicate one: neither feature described above survives here. \begin{defn} Let $G$ be a pseudograph with $n$ nodes. Two (non-loop) edges of $G$ are in a \emph{bundle} if and only if they have the same pair of endpoints. Let $G_\US$ be the \emph{underlying simple graph} of $G$, created by deleting all the loops and replacing each bundle with a single edge.\footnote{This graph is uniquely defined up to graph isomorphism.} Figure~\ref{f:bundles}(a) shows an example of a pseudograph with 10 bundles and 4 loops, whereas part (b) shows its underlying simple graph. \end{defn} \begin{figure}[h] \includegraphics{bundles} \caption{(a) Pseudograph and (b) its underlying simple graph.} \label{f:bundles} \end{figure} Let $\{B_1, \ldots, B_k\}$ be the set of bundles of edges of $G$, let $b_i$ denote the number of edges in bundle $B_i$, and let $\lambda$ be the number of loops of $G$. Define $\triangle \hspace{-.11in} \triangle_G$ as the product $$\Delta_{n-1} \ \times \ \prod_{i=1}^{k} \, \Delta_{b_i -1} \ \times \ \ray^{\lambda}$$ of simplices and rays endowed with the following labeling on its faces: \begin{enumerate} \item Each \emph{facet} of the simplex $\Delta_{n-1}$ is labeled with a particular node of $G$, and each face of $\Delta_{n-1}$ corresponds to a proper subset of nodes of $G$, defined by the intersection of the facets associated to those nodes.
\item Each \emph{vertex} of the simplex $\Delta_{b_i - 1}$ is labeled with a particular edge of bundle $B_i$, and each face of $\Delta_{b_i - 1}$ corresponds to a subset of edges of $B_i$ defined by the vertices spanning the face. \item Each \emph{ray} $\ray$ is labeled with a particular loop of $G$. \item These labelings naturally induce a labeling on $\triangle \hspace{-.11in} \triangle_G$. \end{enumerate} The construction of \emph{graph} associahedra from truncations of the simplex involved only a labeling associated to the nodes of our underlying graph. Thus tubes of the graph are immediate, based on connected subgraphs containing certain nodes. The construction of \emph{pseudograph} associahedra, however, involves the complexity of issues relating both the nodes and the edges. This leads not only to a subtle choice of the faces of $\triangle \hspace{-.11in} \triangle_G$ to truncate, but also to a delicate ordering of the truncations of the faces. We begin by marking the faces of $\triangle \hspace{-.11in} \triangle_G$ which will be of interest in the truncation process: To each tube $G_t$ of the labeled pseudograph $G$, associate a labeling $S$ of nodes and edges of $G$ such that \begin{enumerate} \item all nodes of $G_t$ are in $S$, \item all edges of $G_t$ are in $S$, \item all bundles of $G$ not containing edges of $G_t$ are in $S$, and \item all loops not incident to any node of $G_t$ are in $S$. \end{enumerate} \begin{defn} A tube $G_t$ is \emph{full} if it is a collection of bundles of $G$ which contains all the loops of $G$ incident to the nodes of $G_t$. In other words, $G_t$ is an induced subgraph of $G$. \end{defn} \noindent Figure~\ref{f:label} shows examples of tubes of a graph $G$ and their associated labelings $S$. The two tubes on the top row are full, whereas the bottom four tubes are not. \begin{figure}[h] \includegraphics{label} \caption{Tubes and their corresponding labels in $\triangle \hspace{-.11in} \triangle_G$.} \label{f:label} \end{figure} \subsection{} We can now state our construction of $\KG$ from truncations, broken down into two steps: \begin{lem} \label{l:simpletrunc} Let $G$ be a connected pseudograph. Truncating the faces of $\triangle \hspace{-.11in} \triangle_G$ labeled with full tubes, in increasing order of dimension, constructs \begin{equation} \label{e:midtrunc} \K G_\US \ \times \ \prod_{i=1}^{k} \Delta_{b_i-1} \ \times \ \ray^{\lambda} \, . \end{equation} \end{lem} \begin{proof} A full tube consisting only of bundles maps to the $(b_i-1)$-face of $\Delta_{b_i-1}$. Thus truncating these faces has a trivial effect on that portion of the product. The result then follows immediately from Theorem~\ref{t:graphasstrunc}. \end{proof} As each face $f$ of $\triangle \hspace{-.11in} \triangle_G$ is truncated, those subfaces of $f$ that correspond to tubes but have not yet been truncated are removed. It is natural, however, to assign these defunct tubes to the combinatorial images of their original subfaces. Let $\triangle \hspace{-.11in} \triangle^*_G$ denote the truncated polytope of~\eqref{e:midtrunc}. \begin{thm} \label{t:trunc} Truncating the remaining faces of $\triangle \hspace{-.11in} \triangle^*_G$ labeled with tubes, in increasing order of the number of elements in each tube, results in the pseudograph associahedron $\KG$. \end{thm} \noindent This immediately implies the combinatorial result of Theorem~\ref{t:pseudo}. The proof of this theorem is given in Section~\ref{s:proof}.
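As a brief computational aside, the dimension count underlying Theorem~\ref{t:pseudo} is easy to mechanize. The following is a minimal sketch; the encoding of a pseudograph as a node list plus an edge multiset, with a pair $(u,u)$ denoting a loop, is our own convention for the sketch rather than notation from the text.

\begin{verbatim}
# Sketch: dimension of KG for a pseudograph given as nodes plus
# a list of edges; an edge (u, u) is a loop, and repeated pairs
# model bundles.
from collections import Counter

def kg_dimension(nodes, edges):
    loops = sum(1 for (u, v) in edges if u == v)
    bundles = Counter(tuple(sorted(e)) for e in edges if e[0] != e[1])
    redundant = loops + sum(b - 1 for b in bundles.values())
    return len(nodes) - 1 + redundant

# Three nodes and one bundle of two edges: dimension three.
print(kg_dimension([1, 2, 3], [(1, 2), (1, 2), (2, 3)]))  # -> 3
\end{verbatim}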
Notice the dimension of $\KG$ is the dimension of $\triangle \hspace{-.11in} \triangle_G$, which in turn equals $(n-1) + (b_1 -1) + \cdots + (b_k -1) + \lambda = n -1 +\red$, for $\red$ redundant edges, as claimed. \begin{exmp} We construct the pseudograph associahedron in Figure~\ref{f:3D-exmp}(b) from truncations. The left side of Figure~\ref{f:3D-trunc-label} shows the pseudograph $G$ along with a labeling of its nodes and bundles. \begin{figure}[h] \includegraphics{3D-trunc-label} \caption{A base polytope $\triangle \hspace{-.11in} \triangle_G$ and its labelings.} \label{f:3D-trunc-label} \end{figure} (Notice the edge from node 2 to node 3 is not labeled since the bundle associated to this edge is the trivial $\Delta_0$ point.) Thus the base polytope $\triangle \hspace{-.11in} \triangle_G$ is the product $\Delta_2 \times \Delta_1$, with the middle diagram providing the labeling on $\Delta_2$ and $\Delta_1$ from $G$. The right side of the figure shows the induced labeling of the vertices of $\triangle \hspace{-.11in} \triangle_G$ from the labeling of $G$. Figure~\ref{f:3D-trunc} shows the iterated truncation of $\triangle \hspace{-.11in} \triangle_G$ in order to arrive at $\KG$. \begin{figure}[h] \includegraphics{3D-trunc} \caption{Iterated truncations of $\triangle \hspace{-.11in} \triangle_G$ resulting in $\KG$ from Figure~\ref{f:3D-exmp}(b).} \label{f:3D-trunc} \end{figure} Lemma~\ref{l:simpletrunc} first requires truncating the faces of $\triangle \hspace{-.11in} \triangle_G$ labeled with full tubes. There are five such faces in this case, three square facets and two edges. Since the squares (labeled on the triangular prism on the left) are facets, their truncations do not change the topological structure of the resulting polyhedron. The truncation of the two edges is given in the central picture of Figure~\ref{f:3D-trunc}, yielding $\triangle \hspace{-.11in} \triangle^*_G$. This polytope is $\KG_\US \times \Delta_1$, a pentagonal ($K_4$) prism, as guaranteed by the lemma. Theorem~\ref{t:trunc} then requires truncations of the remaining faces labeled with tubes. There are four such faces, two triangular facets (which are two facets of $\triangle \hspace{-.11in} \triangle_G$, labeled on the left of Figure~\ref{f:3D-trunc}) and two edges, resulting in the polyhedron $\KG$ on the right. \end{exmp} \begin{exmp} Let $G$ be a pseudograph consisting of an edge with a loop attached at each node. Figure~\ref{f:3D-loop} shows the polyhedral cone $\Delta_1 \times \ray^2$ along with the labeling of its four facets. There are two full tubes, the front and back facets in (a), and thus their truncation does not alter the polyhedral cone. There are five other tubes to be truncated: two containing one element (a node), one with three elements (two nodes and an edge), and two with four elements (two nodes, one edge, one loop). By Theorem~\ref{t:trunc}, the truncation is performed in order of the number of elements in these tubes. Figure~\ref{f:3D-loop}(b) shows the truncation of the edges assigned to tubes with one node. Part (c) displays the result of truncating the edge labeled with the tube with three elements. \begin{figure}[h] \includegraphics{3D-loop} \caption{An iterated truncation of $\Delta_1 \times \ray^2$, resulting in a graph associahedron.} \label{f:3D-loop} \end{figure} \end{exmp} \begin{exmp} Figure~\ref{f:4D-ex1} displays a Schlegel diagram of the 4D tetrahedral prism $\Delta_3 \times \Delta_1$, viewed as the base polytope $\triangle \hspace{-.11in} \triangle_G$ of the pseudograph shown.
\begin{figure}[h] \includegraphics{4D-pseudo1} \caption{A tetrahedral prism $\triangle \hspace{-.11in} \triangle_G$ along with labeling of tubes for its six facets.} \label{f:4D-ex1} \end{figure} The six tubes of the pseudograph correspond to the six facets of $\triangle \hspace{-.11in} \triangle_G$. The top two tubes are identified with tetrahedra whereas the other four are triangular prisms. Figure~\ref{f:4D-ex2} shows the iterated truncations of $\triangle \hspace{-.11in} \triangle_G$ needed to convert it into the pseudograph associahedron $\KG$. \begin{figure}[h] \includegraphics{4D-pseudo2} \caption{An iterated truncation of the 4D tetrahedral prism, resulting in $\KG$.} \label{f:4D-ex2} \end{figure} The first row shows two edges and three squares of $\triangle \hspace{-.11in} \triangle_G$ being truncated, which are labeled with full tubes. The result, as promised by Lemma~\ref{l:simpletrunc}, is $\KG_\US \times \Delta_1$, a $K_5$ associahedral prism. We continue truncating as given by the bottom row, first two squares with three elements in their tubes, and then two pentagons with five elements in their tubes. It is crucial that the truncations be performed in this order, resulting in $\KG$ as the bottom-rightmost picture. \end{exmp} \section{Edge Contractions} \label{s:contract} We have shown that any finite graph $G$ induces a polytope $\KG$. Our interest now focuses on deformations of pseudograph associahedra as their underlying graphs are altered. This section is concerned with the contraction $G/e$ of an edge $e$, and the following section looks at edge deletions. \begin{defn} An edge (loop) $e$ is \emph{excluded} by a tube $G_t$ if $G_t$ contains the node(s) incident to $e$ but does not contain $e$ itself. \end{defn} \begin{defn} Let $G$ be a pseudograph, $G_t$ a tube, and $e=(v,v')$ an edge. Define $$\Phi_e(G_t) = \begin{cases} G_t & \ \ \ G_t \cap \{v,v'\} = \emptyset \\ G_t/e & \ \ \ e \in G_t \\ G_t/\{v,v'\} & \ \ \ \text{ $G_t$ excludes $e$ } \\ \emptyset & \ \ \ \text{ otherwise. } \end{cases}$$ This map extends to $\Phi_e: \KG \to \K(G/e)$, where given a tubing $T$ on $G$, $\Phi_e(T)$ is simply the set of tubes $\Phi_e(G_t)$ of $G/e$, for tubes $G_t$ in $T$. \end{defn} Figure~\ref{f:contract} shows examples of the map $\Phi_e$. The top row displays some tubings on graphs where the edge $e$ to be contracted is highlighted in red. The image of each tubing under $\Phi_e$ in $G/e$ is given below each graph. Notice that $\Phi_e$ is not surjective in general since the dimension of $\K(G/e)$ can be arbitrarily larger than that of $\KG$. For example, if $G$ is the complete bipartite graph $\Gamma_{2,n}$ with an extra edge $e$ between the two ``left'' nodes, then by Theorem~\ref{t:pseudo}, $\KG$ is of dimension $n+1$ whereas $\K(G/e)$ is of dimension $2n$. Although not necessarily surjective, $\Phi_e$ is a poset map, as we now show. \begin{figure}[h] \includegraphics{contract} \caption{Top row shows tubings on graphs, and the bottom row shows these tubings under the map $\Phi_e$, where the red edge $e$ has been contracted.} \label{f:contract} \end{figure} \begin{prop} \label{p:contract} For a pseudograph $G$ with edges $e$ and $e'$, $\Phi_e: \K G \to \K (G/e)$ is a poset map. Moreover, the composition of these maps is commutative: $\Phi_e \circ \Phi_{e'} = \Phi_{e'} \circ \Phi_{e}$. \end{prop} \begin{proof} For two tubings $T$ and $T'$ of $G$, assume $T \prec T'$. For any tube $G_t \in T'$, the tube $\Phi_e(G_t)$ is included in both $\Phi_e(T)$ and $\Phi_e(T')$.
Thus $\Phi_e(T) \prec \Phi_e(T')$, preserving the face poset structure. To check commutativity, it is straightforward to consider the 16 possible relationships of the edges $e$ and $e'$ with a given tube $G_t$ of $G$, four each as in the definition of $\Phi_e(G_t)$. For each possibility, the actions of $\Phi_e$ and $\Phi_{e'}$ commute. \end{proof} For any collection $E$ of edges of $G$, let $\Phi_E:\KG \to \K(G/E)$ denote the composition of the maps $\{\Phi_e ~|~e \in E\}.$ If $E$ is the set of edges of a connected subgraph $H$ of $G$, then contracting $E$ will collapse $H$ to a single node. The resulting graph $G/H$ is the \emph{contraction} of $G$ with respect to $H$. The following result describes the combinatorics of the facets of $\KG$ based on contraction. \begin{thm} \label{t:prod} Let $G$ be a connected pseudograph. The facet associated to the tube $G_t$ in $\KG$ is $$\KG_t \times M$$ where $M$ is the facet of $\K (G/G_t)$ associated to the single node of $G/G_t$ which $G_t$ collapses to. In other words, the contraction map $\Phi_E: \KG \to \K(G/G_t)$ restricted to the tubings containing $G_t$ is the canonical projection onto $M$. \end{thm} \begin{proof} Let $v$ be the single node of $G/G_t$ which $G_t$ collapses to. Given a tubing $T$ of the subgraph induced by $G_t$, and $T'$ a tubing of $G/G_t$ which contains the tube $\{v\}$, we define a map: $$\rho(T,T') \ = \ T \ \cup \ \{G_t\} \ \cup \ \{G_{t'} \in T' ~|~ v \notin G_{t'} \} \ \cup \ \{ (G_{t'}-v) \cup G_t ~|~ v \in G_{t'} \in T'\} \, .$$ This is an isomorphism from the Cartesian product to the facet of $\KG$ corresponding to the tube $G_t$, which can be checked to preserve the poset structure. \end{proof} The following corollary describes the relationship between a graph and its underlying simple graph, at the level of graph associahedra. \begin{cor} Let $G_\US$ be the underlying simple graph of a connected graph $G$ with $\red$ redundant edges. The corresponding facet of $\KG$ for the tube $G_\US$ is equivalent to $\KG_\US \times \per_\red$. \end{cor} \begin{proof} This follows immediately from Proposition~\ref{p:permuto} and Theorem~\ref{t:prod} above, since contracting the underlying simple graph $G_\US$ in $G$ gives us a bouquet of $\red$ loops. \end{proof} \begin{exmp} Figure~\ref{f:bouquet}(a) shows a graph $G$ with two nodes and seven edges, with one such edge $e$ highlighted in red. By Proposition~\ref{p:permuto}, we know the pseudograph associahedron $\KG$ is the permutohedral prism $\per_7 \times \Delta_1$. The tube given in part (b) corresponds, again by Proposition~\ref{p:permuto}, to the permutohedron $\per_6$. By the corollary above, we see $\per_6$ appearing as a codimension two face of $\per_7 \times \Delta_1$. Figure~\ref{f:bouquet}(c) shows a graph $G$ and its underlying simple graph $G_\US$, outlined in red, and redrawn in (d). The corresponding facet of the tube $G_\US$ in $G$ is the product of $\per_6$, coming from the tube of (b), and the pseudograph associahedron $\KG_\US$ of (d). \end{exmp} \begin{figure}[h] \includegraphics{bouquet} \caption{Relationships between permutohedra and underlying graphs.} \label{f:bouquet} \end{figure} \section{Edge Deletions} \label{s:delete} \subsection{} We now turn our focus from edge contractions $G/e$ to edge deletions $G-e$. Due to Theorem~\ref{t:disconnect}, we have had the luxury of assuming all our graphs to be connected, knowing that pseudograph associahedra of disconnected graphs are a trivial extension. In this section, since deleting edges may disconnect a graph, no connectivity assumptions are placed on the graphs.
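Before developing the deletion maps, the case analysis defining the contraction map $\Phi_e$ of the previous section can be made concrete. The following is a minimal sketch; the encoding of a tube as a pair of frozensets (nodes, edge identifiers) and the convention of naming the merged node $v$ are our own, and the compatibility bookkeeping for whole tubings is omitted.

\begin{verbatim}
# Sketch of Phi_e on a single tube.  A tube is (nodes, edge_ids),
# both frozensets so tubes stay hashable; e_id is the contracted
# edge, with endpoints v and w; the merged node is called v.
def phi_e(tube, e_id, v, w):
    nodes, eids = tube
    if v not in nodes and w not in nodes:
        return tube                        # tube untouched by e
    if e_id in eids:                       # contract e inside the tube
        return (nodes - {w} | {v}, eids - {e_id})
    if v in nodes and w in nodes:          # tube excludes e: identify v, w
        return (nodes - {w} | {v}, eids)
    return None                            # meets one endpoint: forgotten

def phi_e_tubing(tubing, e_id, v, w):
    images = (phi_e(t, e_id, v, w) for t in tubing)
    return {t for t in images if t is not None}
\end{verbatim}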
\begin{defn} A \emph{cellular surjection} from a polytope $P$ to a polytope $Q$ is a map $f$ between the face posets of $P$ and $Q$ which preserves the poset structure and is onto. That is, if $x$ is a subface of $y$ in $P$, then $f(x)$ is a subface of or equal to $f(y)$. It is a \emph{cellular projection} if it also has the property that the dimension of $f(x)$ is less than or equal to the dimension of $x$. \end{defn} In \cite{to}, Tonks found a cellular projection from the permutohedron to the associahedron. In this projection, a face of the permutohedron, represented by a \emph{leveled tree}, is taken to its underlying tree, which corresponds to a face of the associahedron. The insight of Loday and Ronco \cite{lr} is that this map gives rise to a Hopf algebraic projection, where the algebra of binary trees is seen to be embedded in the Malvenuto-Reutenauer algebra of permutations. Recent work by Forcey and Springfield \cite{fs} shows a fine factorization of the Tonks cellular projection through all connected graph associahedra, and then extends the projection to disconnected graphs. Several of these cellular projections through polytopes are also shown to be algebra and coalgebra homomorphisms. Here, we further extend the maps based on deletion of edges to all pseudographs, in anticipation of future usefulness in both geometric and algebraic applications. \begin{defn} Let $G_t$ be a tube of $G$, where $e$ is an edge of $G_t$. We say $e$ \emph{splits} $G_t$ into tubes $G_{t'}$ and $G_{t''}$ if $G_t - e$ results in two disconnected tubes $G_{t'}$ and $G_{t''}$ such that $$G_t \ = \ G_{t'} \cup G_{t''} \cup \{e\}.$$ \end{defn} \begin{defn} Let $G$ be a pseudograph, $G_t$ a tube, and $e$ an edge of $G$. Define $$\Theta_e(G_t) = \begin{cases} G_t & \ \ \ \text{ if $e \notin G_t$ } \\ G_t-e & \ \ \ \text{ if $e \in G_t$ and $e$ does not split $G_t$ } \\ \{G_{t'}, G_{t''}\} & \ \ \ \text{ if $e$ splits $G_t$ into compatible tubes $G_{t'}$ and $G_{t''}$ } \\ \emptyset & \ \ \ \text{ otherwise.} \end{cases}$$ This map extends to $\Theta_e: \KG \to \K(G - e)$, where given a tubing $T$ on $G$, $\Theta_e(T)$ is simply the set of tubes $\Theta_e(G_t)$ of $G-e$, for tubes $G_t$ in $T$. \end{defn} Roughly, as a single edge is deleted, the tubing under $\Theta$ is preserved ``up to connection.'' That is, if the nodes of a tube $G_t$ are no longer connected after the edge deletion, $\Theta_e(G_t)$ becomes the two tubes split by $e$, as long as these two tubes are compatible. Figure~\ref{f:tonks-vertex} shows \begin{figure}[h] \includegraphics{tonks-vertex} \caption{The projection $\Theta$ factored by graphs, from the complete graph to the path.} \label{f:tonks-vertex} \end{figure} maximal tubes on four different graphs, each corresponding to a vertex of its respective graph associahedron. As an edge gets deleted from each graph, progressing to the next, the map $\Theta$ shows how the tubing is factored through. In this particular case, a vertex of the permutohedron (a) is factored through to a vertex of the associahedron (d) through two intermediary graph associahedra. \begin{rem} For a tubing $T$ of $G$ and a \emph{loop} $e$ of $G$, we find that the contraction and deletion maps of $e$ agree; that is, $\Theta_e(T) = \Phi_e(T)$. \end{rem} \subsection{} We now prove that $\Theta$ is indeed a cellular surjection, as desired. A computational sketch of $\Theta_e$ is given first; the proposition following it is the analog of Proposition~\ref{p:contract} for edge deletions.
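The sketch below mirrors the case analysis of $\Theta_e$, using the same tube encoding as before; the global edge table and the defaulted compatibility test are hypothetical simplifications (a faithful implementation would check compatibility in $G-e$ as defined in Section 2).

\begin{verbatim}
# Sketch of Theta_e on a single tube.  EDGES is a toy global table
# {edge_id: (u, v)}; a tube is (nodes, edge_ids) with frozensets.
EDGES = {1: ("a", "b"), 2: ("b", "c"), 3: ("a", "b")}

def components(nodes, eids):
    """Connected pieces of the subgraph (nodes, eids), as tubes."""
    unseen, pieces = set(nodes), []
    while unseen:
        stack, comp = [unseen.pop()], set()
        while stack:
            u = stack.pop()
            comp.add(u)
            for eid in eids:
                a, b = EDGES[eid]
                for x, y in ((a, b), (b, a)):
                    if x == u and y in unseen:
                        unseen.discard(y)
                        stack.append(y)
        pieces.append((frozenset(comp),
                       frozenset(e for e in eids if set(EDGES[e]) <= comp)))
    return pieces

def theta_e(tube, e_id, compatible=lambda s, t: True):
    nodes, eids = tube
    if e_id not in eids:
        return [tube]                          # e is not in the tube
    pieces = components(nodes, eids - {e_id})
    if len(pieces) == 1:                       # e does not split the tube
        return [(nodes, eids - {e_id})]
    if len(pieces) == 2 and compatible(*pieces):
        return pieces                          # split into two tubes
    return []                                  # otherwise: forgotten
\end{verbatim}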
\begin{prop} \label{p:delete} For a pseudograph $G$ with edges $e$ and $e'$, $\Theta_e: \K G \to \K (G - e)$ is a cellular surjection. Moreover, the composition of these maps is commutative: $\Theta_e \circ \Theta_{e'} = \Theta_{e'} \circ \Theta_{e}$. \end{prop} \begin{proof} For two tubings $U$ and $U'$ of $G$, assume $U \prec U'$. For any tube $G_t \in U'$, the tube $\Theta_e(G_t)$ is included in both $\Theta_e(U)$ and $\Theta_e(U')$. Thus $\Theta_e(U) \prec \Theta_e(U')$, preserving the face poset structure. The map $\Theta$ is surjective since, given any tubing $U$ on $G-e$, we can find a preimage $T$ such that $U = \Theta_e(T)$ as follows: First consider all the tubes of $U$ as a candidate tubing of $G$. If it is a valid tubing, we have our $T$. If not, there must be a pair of tubes $G_{t'}$ and $G_{t''}$ in $U$ which are adjacent via the edge $e$ and for which there are no tubes containing either $G_{t'}$ or $G_{t''}$. Let $U_1$ be the result of replacing that pair in $U$ with the single tube $G_t = G_{t'} \cup G_{t''} \cup \{e\}$. If $U_1$ is a valid tubing of $G$, then let $T=U_1$. If not, continue inductively. To prove commutativity of the map composition, consider the image of a tubing of $G$ under either composition. A tube of $G$ that is a tube of both $G-e$ and $G-e'$ will persist in the image. Otherwise it will be split into compatible tubes, perhaps twice, or forgotten. The same smaller tubes will result regardless of the order of the splitting. \end{proof} \begin{rem} If $e$ is the only edge between two nodes of $G$, then $\Theta_e$ will be a cellular \emph{projection} between two polytopes or cones of the same dimension. Faces will only be mapped to faces of smaller or equal dimension. However, if $e$ belongs to a bundle with at least two edges, then $G-e$ is a tube of $G$. In this case, the map $\Theta_e$ projects all of $\KG$ onto a single facet of $\KG$, and there may be faces mapped to a face of larger dimension. An example of a deleted multiedge is given in Figure~\ref{f:tonks-multi}. \end{rem} \begin{figure}[h] \includegraphics{tonks-multi} \caption{The cellular surjection $\Theta_e$ for two different multiedges of $G$ from the example in Figure~\ref{f:3D-exmp}(b).} \label{f:tonks-multi} \end{figure} For any collection $E$ of edges of $G$, let $\Theta_E$ denote the composition of the projections $\{\Theta_e ~|~e \in E\}$. Let $\Gamma_n$ be the complete graph on $n$ numbered nodes, and let $E$ be the set of all edges of $\Gamma_n$ except for the path in consecutive order from nodes $1$ to $n$. Then $\Theta_E$ is equivalent to the Tonks projection \cite{fs}. Thus, by choosing any order of the edges to be deleted, there is a factorization of the Tonks cellular projection through various graph associahedra. An example of this, from the vertex perspective, was shown in Figure~\ref{f:tonks-vertex}. \begin{figure}[h] \includegraphics{tonks-facet} \caption{A factorization of the Tonks projection through 3D graph associahedra. The shaded facets correspond to the shown tubings, and are collapsed as indicated to respective edges. The permutohedron $P_4$ in (a), through a sequence of collapses, is transformed to the associahedron $K_5$ in (d).} \label{f:tonks-facet} \end{figure} The same map, from the facet viewpoint, is given in Figure~\ref{f:tonks-facet}. Part (a) shows the permutohedron $\per_4$, viewed as $\K\Gamma_4$. A facet of this polyhedron is highlighted, and below it is the tube associated to the facet.
Deleting the (red) edge in the tube, thereby splitting the tube into two tubes, corresponds to collapsing the quadrilateral face into an interval, shown in part (b). A similar process is outlined going from (b) to (c). Figure~\ref{f:tonks-facet}(c) shows three highlighted faces, each with a corresponding tube depicted below the polyhedron. These are the three possible tubes such that deleting the (red) edge of each tube produces a splitting of the tube into two compatible tubes. Such a split corresponds to the collapse of the three marked facets of (c), resulting in the associahedron shown in (d). \section{Realization} \label{s:real} \subsection{} Let $G$ be a pseudograph without loops. We now present a realization of $\KG$, assigning integer coordinates to each of its vertices. From Theorem~\ref{t:pseudo}, the vertices of $\KG$ are in bijection with the maximal tubings of $G$. For each such maximal tubing $T$, we first define a map $f_T$ on each edge of each \emph{bundle} of $G$. \begin{nota} Let $|G|$ denote the number of nodes and edges of $G$. For a tube $G_t$, let $V(t)$ denote the node set of $G_t$, and let $E(i,t)$ denote the set of edges of bundle $B_i$ in $G_t$. \end{nota} For a given tubing $T$, order the edges of each bundle $B_i$ by the number of tubes of $T$ that do \emph{not} contain each $e$ in $B_i$. Let $e(i,j)$ refer to the $j$-th edge in bundle $B_i$ under this ordering. Thus $e(i,j)$ is contained in more tubes than $e(i,j+1)$. Let $G_{e(i,j)}$ be the largest tube in $T$ that contains $e(i,j)$ but not $e(i,j+1)$. Note that $G_{e(i,b_i)}$ is the entire graph $G$. We assign a value $f_T$ to each edge in each bundle of $G$, as follows: $$f_T(e(i,j)) \ = \ \begin{cases} \ \ \displaystyle{c \ + \ \sum_{x=1}^{b_i-1} \bigg(\ 2 \left|G - G_{e(i,x)}\right| \ - \ 1 \ \bigg)} & \ \ \ \ j=1 \\[.3in] \ \ \displaystyle{c^{\; j-1} \cdot (c-1) \ - \ \bigg(\ 2 \left|G - G_{e(i,j-1)}\right| \ - \ 1 \ \bigg)} & \ \ \ \ j \neq 1 \\ \end{cases}$$ for the constant $c = |G|^2$. We assign $f_T(v)$ to each node of $G$ recursively by visiting each tube of $T$ in increasing order of size and ensuring that, summing over all nodes and edges $x \in G_t$, $$\sum_{x \in G_t} f_T(x) \ = \ c^{\; |V(t)|} \ + \ \sum_i c^{\; |E(i,t)|} \ + \ \left|G - G_t\right|^2 \, .$$ \begin{thm} \label{t:hull} Let $G$ be a pseudograph without loops, with an ordering $v_1, v_2, \ldots, v_n$ of its nodes, and an ordering $e_1, e_2, \ldots, e_k$ of its edges. For each maximal tubing $T$ of $G$, the convex hull of the points \begin{equation} \label{e:hull} \bigg(f_T(v_1), \ldots, f_T(v_n), f_T(e_1), \ldots, f_T(e_k)\bigg) \end{equation} in $\R^{n+k}$ yields the pseudograph associahedron $\KG$. \end{thm} \noindent The proof of this is given at the end of the paper. \subsection{} We now extend the realization above to pseudographs with loops. In particular, we show every pseudograph associahedron with loops can be reinterpreted as an open subcomplex of one without loops, via a subtle redescription of the loops. \begin{defn} For $G$ a connected pseudograph with loops, define an associated \emph{loop-free} pseudograph $G_{\LF}$ by replacing the set of loops attached to a node $v$ by a set of edges between $v$ and a new node $v'$. We call $v'$ a \emph{ghost node} of $G_\LF$. An example is given in Figure~\ref{f:loopfree}. \end{defn} \begin{figure}[h] \includegraphics{loopfree} \caption{A pseudograph $G$ and its associated loop-free version $G_{\LF}$.
The ghost nodes are shaded.} \label{f:loopfree} \end{figure} \begin{prop} \label{p:subcomplex} For a connected pseudograph $G$ with loops, the graph associahedron $\KG$ can be realized as an open subcomplex of $\KG_{\LF}$. \end{prop} \begin{proof} The canonical poset inclusion $\phi: \KG \to \KG_{\LF}$ replaces any loop of a tube by its associated edge in $G_{\LF}$. This clearly extends to an injection that preserves inclusion of tubes, revealing $\KG$ as a subposet of $\KG_\LF$. Moreover, since covering relations are preserved by $\phi$, $\KG$ is a connected subcomplex of $\K G_{\LF}$. Indeed, this subcomplex is homeomorphic to a half-space of dimension $n-1+\red$, where $\red$ is the number of redundant edges of $G_{\LF}$. To see this, note the only tubings not in the image of $\phi$ are those containing the singleton ghost tubes. In $\K G_{\LF}$, those singleton tubes represent a collection of pairwise adjacent facets since, by construction, the ghost nodes are never adjacent to each other. Therefore the image of $\phi$ is a solid polytope minus a union of facets, which itself is homeomorphic to a codimension one disk. \end{proof} \begin{cor} The compact faces of $\KG$ correspond to tubings which exclude all loops. \end{cor} \begin{proof} For any tubing $T$ in $\KG$ not excluding some loop, $\phi(T)$ will be compatible with the corresponding singleton ghost tube in $\KG_\LF$. \end{proof} As an added benefit of Theorem~\ref{t:hull} providing a construction of the polytope $\KG_\LF$, one gets a geometric realization of $\KG$ as a polytopal cone, for pseudographs $G$ with loops. The result is summarized below; the proof is provided at the end of the paper. Note that in addition to the combinatorial argument, we also see evidence that $\KG$ is conal: if the removal of one or more halfspaces creates a larger region with no new vertices, then that region must be unbounded. \begin{cor} \label{c:loopreal} The realization of $\KG$ is obtained from the realization of $\KG_\LF$ by removing the halfspaces associated to the singleton tubes of ghost nodes. \end{cor} \begin{exmp} If $G$ is a path with two nodes and one loop, then $G_\LF$ is a path with three nodes. Figure~\ref{f:2D-reveal}(a) shows the 2D associahedron $\KG_\LF$ from Figure~\ref{f:kwexmp}(a), where the rightmost node of the path $G_\LF$ can be viewed as a ghost node. Part (b) shows $\KG$ as seen in Figure~\ref{f:2D-exmp}(b). Notice that the facet of $\KG_\LF$ corresponding to the tube around the ghost node is removed in (a) to form the open subcomplex of (b). \end{exmp} \begin{figure}[h] \includegraphics{2D-exmp-reveal} \caption{(a) The polygon $\KG_\LF$ and (b) the polygonal cone $\KG$.} \label{f:2D-reveal} \end{figure} \begin{exmp} A 3D version of this phenomenon is provided in Figure~\ref{f:3D-loop-reveal}. Part (a) shows the 3D associahedron, viewed as the loop-free version $\KG_\LF$ of the pseudograph associahedron $\KG$ of part (b). Indeed, the two labeled facets of (a), associated to tubes around ghost nodes, are removed to construct $\KG$. The construction of $\KG$ via iterated truncations is given in Figure~\ref{f:3D-loop}.
\end{exmp} \begin{figure}[h] \includegraphics{3D-loop-reveal} \caption{(a) The associahedron $\KG_\LF$ and (b) the polyhedral cone $\KG$, where the faces of $\KG_\LF$ associated to tubes around ghost nodes have been removed.} \label{f:3D-loop-reveal} \end{figure} \begin{exmp} A similar situation can be seen in Figure~\ref{f:3D-permuto-flat}, part (a) showing the permutohedral prism $\KG_\LF$ and part (b) the cone $\KG$ after removing the back face of the prism. \end{exmp} \section{Proofs} \label{s:proof} \subsection{} The proof of Theorem~\ref{t:trunc} is now given, which immediately gives a proof of Theorem~\ref{t:pseudo}. We begin with a description of the structure of $\triangle \hspace{-.11in} \triangle^*_G$, the polytope given in~\eqref{e:midtrunc}. \begin{enumerate} \item Each face corresponds to a tubing consisting of full tubes $T^F$ and to a subset $S$ of the edges and loops of $G$. The set $S$ contains at least one edge of each bundle. \item The subset $S$ produces a tube $G_S$ that contains all the nodes of $G$ as well as $S$. \item The tubing for a given face is the \emph{intersection} of $T^F$ with $G_S$. \item A face with a tubing $T_a$ contains a face with a tubing $T_b$ if and only if $T_a^F \subset T_b^F$ and $G_{S_a} \supset G_{S_b}$. \item Given two faces with tubings $T_a$ and $T_b$, their intersection is the intersection of $T_a^F \cup T_b^F$ with $G_{S_a} \cap G_{S_b}$, assuming the former is a tubing and the latter is a tube. Otherwise the faces do not intersect. \end{enumerate} In order to describe the effect of truncation on these tubings, we define \emph{promotion}, an operation on sets of tubings that was developed in \cite[Section 2]{cd}. \begin{defn} The \emph{promotion} of a tube $G_t$ in a set of tubings $\tubingset{}$ means adding to $\tubingset{}$ the tubings $$\{T \cup \{G_t\} \suchthat T \in \tubingset{}, G_t \text{ is compatible with all } G_{t'} \in T \} \, .$$ Note that this $T$ may be empty. The new tubings are ordered such that $T \cup \{G_t\} \prec T$, and $T \cup \{G_t\} \prec T' \cup \{G_t\}$ if and only if $T \prec T'$ in $\tubingset{}$. \end{defn} All valid combinations of full tubes of $G$ already exist as faces of $\triangle \hspace{-.11in} \triangle^*_G$. They are also already ordered by containment. Therefore, we may first conclude from this definition that promoting the non-full tubes is sufficient to produce the set of all valid tubings of $G$, resulting in $\KG$. Given a polytope whose faces correspond to a set of tubings, promoting a tube $G_F$ is equivalent to truncating its corresponding face $F$ so long as the subset of tubings compatible with $G_F$ corresponds to the set of faces that properly intersect or contain $F$. Verifying this equivalence for each prescribed truncation is sufficient to prove the theorem. \begin{proof}[Proof of Theorem~\ref{t:trunc}] We may proceed by induction, relying on the description of $\triangle \hspace{-.11in} \triangle^*_G$ above and leaving the computations of intersections to the reader. Consider the polytope $P$ in which all the faces before $F$ in the prescribed order have been truncated. Suppose that until this point the promotions and truncations have been equivalent, that is, there is a poset isomorphism between the base polytope after a set of truncations and the set of base tubings after the corresponding tubes are promoted.
Note that in $P$, the faces that intersect (but are not contained in) $F$ are \begin{enumerate} \item faces that properly intersected or contained $F$ in $\triangle \hspace{-.11in} \triangle^*_G$, and \item faces corresponding to tubes promoted before $G_F$ and compatible with $G_F$. \end{enumerate} Since faces created by truncation inherit intersection data from both the truncated face and the intersecting face, we may include (by induction if necessary) any intersection of the above that exists in $P$. Conversely, the faces that do not intersect $F$ in $P$ are \begin{enumerate} \item faces that did not intersect $F$ in $\triangle \hspace{-.11in} \triangle^*_G$, \item faces that did intersect $F$ but whose intersection was contained in a face truncated before $F$ and was thus removed, \item faces corresponding to tubes promoted before $G_F$ but incompatible with $G_F$, and \item any intersection of the above that exists in $P$. \end{enumerate} We have given a description of when no intersection exists between two faces in $\triangle \hspace{-.11in} \triangle^*_G$, as case (1) above. Most tubings incompatible with $G_F$ can be shown to belong to such a group. Some tubes $G_t$ that intersect $G_F$ fall into case (2), where their intersection corresponds to $\{G_t , G_t \cap G_F\}$. It is contained in the face corresponding to $\{G_F \cap G_t\}$, a face found before $G_F$ in the containment order. Thus no intersection is present in $P$. The tubings compatible with $G_F$ correspond to the faces that properly intersect or contain $F$. Promoting $G_F$ and truncating $F$ will produce isomorphic face/tubing sets. The conclusion of the induction is that the prescribed truncations produce a polytope whose face poset is isomorphic to the set of tubings of $G$ after all non-full tubes have been promoted, resulting in $\KG$. \end{proof} \subsection{} We now provide the proof of Theorem~\ref{t:hull}. As before, let $G$ be a pseudograph without loops, and let $T$ be a maximal tubing of $G$. Moreover, let $conv(G)$ denote the polytope obtained from the convex hull of the points in Equation~\eqref{e:hull}. Close inspection reveals that $conv(G)$ is contained in the intersection of the hyperplanes defined by the equations: \begin{align*} h_V \ : \ \ & \sum_{v \in V} f_T(v) \ =\ c^{|V|}\\ h_{B_i} \ : \ \ & \sum_{e \in B_i} f_T(e) \ = \ c^{b_i} \end{align*} where $|V|$ is the number of nodes of $G$. To each tube $G_t \in T$, associate $$\Lambda(G_t) \ = \ c^{\; |V(t)|} \ + \ \sum_i c^{\; |E(i,t)|} \ + \ \left|G - G_t\right|^2 \, .$$ These $\Lambda$ functions define halfspaces which contain the vertices associated to that tube: $$h_t^+ \ : \ \ \sum_{x \in G_t} f_T(x) \ \geq \ \Lambda(G_t)\, .$$ Proving that $conv(G)$ has the same face poset as $\KG$ is mostly a matter of showing the equivalence of $conv(G)$ and the region $$\HY \ := \ h_V \ \cap \ \bigcap_i h_{B_i} \ \cap \ \bigcap_{G_t \in T} h_t^+ \, .$$ \begin{defn} Two tubes $G_a$ and $G_b$ of $G$ are \emph{bundle compatible} if for each $i$, one of the sets $E(i,a)$ and $E(i,b)$ contains the other. Note that the tubes of any tubing $T$ are pairwise (possibly trivially) bundle compatible. \end{defn} \begin{lem} \label{l:inequality} Let $G_{a}$ and $G_{b}$ be adjacent or properly intersecting bundle compatible tubes. Suppose their intersection is a set of tubes $\{G_{\wedge_i}\}$, while $G_{\vee}$ is a minimal tube that contains both. Let $E_\vee$ be the set of edges contained in $G_{\vee}$ but not in $G_{a}$ or $G_{b}$.
Then for any tubing $T$ containing $G_{\vee}$, $$\Lambda(G_{a}) \ < \ \Lambda(G_{\vee}) \ - \ \Lambda(G_{b}) \ + \ \sum_i \Lambda(G_{\wedge_i}) \ - \ \sum_{e \in E_\vee} f_T(e).$$ \begin{proof} The intersections with each bundle contribute equally to both sides. If $G_{\vee}$ contains more nodes than the others, then we simply note the dominance of the $c^{|V(\vee)|}$ term and place bounds on the remaining ones. If not, the sides are identical up to the $|G-G_t|^2$ terms, which provide the inequality. \end{proof} \begin{lem} \label{l:halfspace} For any tubing $T$, and any tube $G_t$, \begin{equation} \label{e:halfspace} \sum_{x \in G_t} f_T(x) \ \geq \ \Lambda(G_t) \end{equation} with equality if and only if $G_t \in T$. In particular, $conv(G) \subseteq \HY$, and only those vertices of $conv(G)$ that have $G_t$ in their tubing are contained in $h_t$. \end{lem} \begin{proof} If $G_t \in T$, the equality of Equation~\eqref{e:halfspace} follows directly from the definition of $f_T$. Suppose then that $G_t \notin T$. We proceed by induction on the size of $G_t$. First, produce a tube $G_{\sigma}$ which contains the same nodes as $G_t$, and the same size intersection with each bundle, but is bundle compatible with the tubes of $T$. Naturally $\Lambda(G_{\sigma})=\Lambda(G_t)$, but since $f_T$ is an increasing function over the ordered $e(i,j)$ edges of $G$, we get $$\sum_{x \in G_t} f_T(x) \ \geq \ \sum_{x \in G_\sigma} f_T(x)$$ with equality only if $G_t=G_{\sigma}$. Let $G_\vee$ be the smallest tube of $T$ that contains $G_\sigma$ (or all of $G$ if none exists). If $G_\vee = G_\sigma$ then the inequality above is strict and the lemma is proven. Otherwise the maximal subtubes $\{G_{\vee_i}\}$ of $G_\vee$ are disjoint, and each either intersects or is adjacent to $G_\sigma$. If we denote the intersections as $\{G_{\wedge_i}\}$ and the set of edges of $G_\vee$ contained in none of these subtubes by $E_\vee$, then as a set, $$G_\sigma \ = \ G_\vee \ - \ \bigcup_i G_{\vee_i} \ + \ \bigcup_i G_{\wedge_i} \ - \ E_\vee \, .$$ The tubes mentioned in the right hand side are all in $T$, except perhaps the intersections. Fortunately, the inductive hypothesis indicates that $$\sum_{x \in G_{\wedge_i}} f_T(x) \ \geq \ \Lambda( G_{\wedge_i})\, .$$ Thus we are able to rewrite and conclude $$\sum_{x \in G_\sigma} f_T(x) \ \geq \ \Lambda (G_\vee) \ - \ \sum_i \Lambda(G_{\vee_i}) \ + \ \sum_i \Lambda (G_{\wedge_i}) \ - \ \sum_{e \in E_\vee} f_T(e) \ > \ \Lambda(G_t)$$ by repeated applications of Lemma \ref{l:inequality}. \end{proof} \begin{lem} \label{l:halfspaceinpoly} $\HY \subseteq conv(G)$. \end{lem} \begin{proof} Particular halfspaces impose especially useful bounds on the values of certain coordinates within $\HY$. For instance, if $G_w$ is a full tube, then $$h_w^+ \ : \ \ \sum_{v \in V(w)} f_T(v) \ \geq \ c^{|V(w)|} + |G-G_w|^2 \, .$$ Choosing the maximal tube $G_x$ that intersects bundle $B_i$ in a particular subset of edges $X$ produces $$h_x^+ \ : \ \ \sum_{e \in X} f_T(e) \ \geq \ c^{|X|} + |G-G_x|^2 \, .$$ Applying these to single nodes and single edges gives a lower bound in each coordinate. The hyperplanes $h_V$ and $h_{B_i}$ supply upper bounds, so $\HY$ is bounded. Suppose $\HY - conv(G)$ is not empty. Since $conv(G)$ is convex, by construction, $\HY - conv(G)$ must have a vertex $v^*$ outside $conv(G)$, at the intersection of several $h_t$ hyperplanes. These hyperplanes correspond to a set $T^*$ of tubes of $G$.
This $T^*$ contains at least one pair of incompatible tubes $G_a$ and $G_b$, for otherwise it would be a tubing and $v^*$ would be in $conv(G)$. \begin{enumerate} \item If $G_a$ and $G_b$ are bundle incompatible in some bundle $B_i$, then we produce the maximal tube $G_u$ that intersects $B_i$ in $E(i,a) \cup E(i,b)$. As above, $G_u$ produces a bound on the $E(i,u)$ coordinates, yielding $$h_u^+ \ : \ \ \sum_{e \in E(i,u)} f_T(e) \ \geq \ c^{|E(i,u)|} + |G-G_u|^2 \, .$$ The halfspaces $h_w^+$ and $h_x^+$ above produce lower bounds on the sums of the vertex coordinates of $G_a$ and $G_b$. Subtracting these from $\Lambda (G_a)$ and $\Lambda(G_b)$ leaves a maximum of $$c^{|E(i,a)|} \ + \ \left|G-G_a\right|^2 \ + \ c^{|E(i,b)|} \ + \left|G-G_b\right|^2$$ for $\sum_{E(i,a)} f_T(e)$ and $\sum_{E(i,b)} f_T(e)$, which is insufficient for the $G_u$ requirement above. We conclude that $v^*$ is either outside $h_u^+$ or outside one of the halfspaces $h_w^+$ or $h_x^+$. Either way, $v^*$ is not in $\HY$. \item On the other hand, if $G_a$ and $G_b$ are bundle compatible, Lemma \ref{l:inequality} can be rearranged: $$\Lambda(G_\vee) \ > \ \Lambda(G_a) \ + \ \Lambda(G_b) \ - \ \sum_i \Lambda(G_{\wedge_i}) \ + \ \sum_{e \in E_{\vee}} f_T(e) \, .$$ Thus $v^*$ is either not in one of the $h_{\wedge_i}^+$ or not in $h_\vee^+$. Therefore $v^*$ is not in $\HY$. \end{enumerate} This contradiction proves the Lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:hull}] Lemmas~\ref{l:halfspace} and~\ref{l:halfspaceinpoly} show that $conv(G) = \HY$. Consider the map taking a tubing $T$ of $G$ to the face $$conv(G) \cap \bigcap_{G_t \in T} h_t$$ of $conv(G)$. By Lemma~\ref{l:halfspace}, each tubing maps to a face of $conv(G)$ containing a unique set of vertices. Each face is an intersection of hyperplanes that contains such a vertex (and hence corresponds to a subset of a valid tubing). Since it clearly reverses containment, this map is an order-reversing bijection. \end{proof} \begin{proof}[Proof of Corollary~\ref{c:loopreal}] The notation (and the entire reasoning) in this proof is imported from the proof of Lemma~\ref{l:halfspaceinpoly}. If $v$ is a ghost node, then it is not $G_w$, $G_x$ or $G_u$ for a pair of bundle incompatible tubes (since those tubes all have at least 2 nodes). It is also neither $G_\vee$ nor $G_{\wedge_i}$ for any pair of bundle compatible tubes. Thus $h_v^+$ excludes no intersection of hyperplanes. Its removal creates no new faces, and removes only those faces corresponding to tubings containing $v$. The identification of these faces is the canonical poset inclusion $\phi$ from the proof of Proposition~\ref{p:subcomplex}. \end{proof}
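The two inclusions established by Lemmas~\ref{l:halfspace} and~\ref{l:halfspaceinpoly} are easy to test computationally on small examples. The following Python sketch (SciPy is assumed; the point and halfspace data below are generic placeholders, \emph{not} the actual realization of Equation~\eqref{e:hull}) checks that a point set lies inside a proposed halfspace description and that every facet of its hull occurs among the proposed halfspaces, mirroring the two-step strategy used above:
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

# placeholder vertex coordinates (stand-ins for the points of Eq. (e:hull))
pts = np.array([[0.0, 0.0], [4.0, 0.0], [3.0, 3.0], [0.0, 4.0]])

# proposed halfspaces a.x >= b (stand-ins for h_t^+ and h_V, h_{B_i})
halfspaces = [(np.array([0.0, 1.0]), 0.0),      # y >= 0
              (np.array([1.0, 0.0]), 0.0),      # x >= 0
              (np.array([-3.0, -1.0]), -12.0),  # 3x + y <= 12
              (np.array([-1.0, -3.0]), -12.0)]  # x + 3y <= 12

# direction 1 (cf. Lemma l:halfspace): every point obeys every inequality
assert all(a @ p >= b - 1e-9 for p in pts for (a, b) in halfspaces)

# direction 2 (cf. Lemma l:halfspaceinpoly): every hull facet occurs among
# the proposed halfspaces, so the (bounded) region cannot exceed the hull
hull = ConvexHull(pts)
for row in hull.equations:           # each row [n, c] means n.x + c <= 0
    ok = any(np.allclose(row, np.append(-a, b) / np.linalg.norm(a))
             for (a, b) in halfspaces)
    assert ok, "hull facet missing from the proposed halfspace list"

print("on this example the hull and the halfspace region coincide")
\end{verbatim}
\bibliographystyle{amsplain}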
\section{Introduction} Let $G$ be a simple graph, that is, one with no loops or multiple edges. We write $V(G)$ for the vertex set of $G$ and $\delta(G)$ for the minimum degree of $G$. An \emph{edge-coloured graph} is a graph in which each edge is assigned a colour. We say such an edge-coloured $G$ is \emph{proper} if no two adjacent edges have the same colour. A subgraph $H$ of $G$ is \emph{rainbow} if all its edges have distinct colours. Rainbow subgraphs are also called totally multicoloured, polychromatic, or heterochromatic subgraphs. For a vertex $v$ of an edge-coloured graph $G$, the \emph{colour degree} of $v$ is the number of distinct colours on the edges incident with $v$. The smallest colour degree of all vertices in $G$ is the \emph{minimum colour degree of $G$} and is denoted by $\delta^c(G)$. Note that a properly edge-coloured graph $G$ with $\delta(G) \geq k$ has $\delta^c(G) \geq k$. In this paper, we are interested in rainbow matchings in edge-coloured graphs. The study of rainbow matchings began with a conjecture of Ryser~\cite{ryser}, which states that every Latin square of odd order contains a Latin transversal. Equivalently, for $n$ odd, every proper $n$-edge-colouring of $K_{n,n}$, the complete bipartite graph with $n$ vertices on each part, contains a rainbow perfect matching. In a more general setting, given a graph $H$, we wish to know if an edge-coloured graph $G$ contains a rainbow copy of $H$. A survey on rainbow matchings and other rainbow subgraphs in edge-coloured graphs can be found in \cite{kano}. From now on, we write $G$ for an edge-coloured graph (not necessarily proper) of order $n$. Li and Wang~\cite{wang2} showed that if $\delta^c(G)=k$, then $G$ contains a rainbow matching of size $\big\lceil \frac{5k-3}{12} \big\rceil$. They further conjectured that if $k\geq 4$, then $G$ contains a rainbow matching of size $\big\lceil \frac{k}{2} \big\rceil$. This bound is tight for properly edge-coloured complete graphs. LeSaulnier et al.~\cite{lesaul} proved that if $\delta^c(G) = k$, then $G$ contains a rainbow matching of size $\big\lfloor \frac{k}{2} \big\rfloor$. Furthermore, if $G$ is properly edge-coloured with $G\neq K_4$ or $|V(G)|\neq \delta(G) + 2$, then there is a rainbow matching of size $\big\lceil \frac{k}{2} \big\rceil$. The conjecture was later proved in full by Kostochka and Yancey~\cite{kostochka2}. What happens if we have a larger graph? Wang~\cite{wang1} proved that every properly edge-coloured graph $G$ with $\delta(G) = k$ and $|V(G)|\geq \frac{8k}{5}$ contains a rainbow matching of size at least $\big\lfloor \frac{3k}{5} \big\rfloor$. He then asked if there is a function, $f(k)$, such that every properly edge-coloured graph $G$ with $\delta(G)\geq k$ and $|V(G)|\geq f(k)$ contains a rainbow matching of size $k$. The bound on the size of the rainbow matching is sharp, as shown for example by any $k$-edge-coloured $k$-regular graph. If $f(k)$ exists, then we trivially have $f(k)\geq 2k$. In fact, $f(k) > 2k$ for even $k$, as there exists a $k \times k$ Latin square without any Latin transversal (see \cite{brualdi,wanless}). Diemunsch et al.~\cite{diemunsch1} gave an affirmative answer to Wang's question and showed that $f(k)\leq \frac{13}{5}k$. The bound was then improved to $f(k)\leq \frac{9}{2}k$ in~\cite{lo}, and shortly thereafter, to $f(k)\leq \frac{98}{23}k$ in~\cite{diemunsch2}. Kostochka, Pfender and Yancey~\cite{kostochka1} considered a similar problem with $\delta^c(G)$ in place of proper edge-colourings.
They showed that if $G$ is such that $\delta^c(G)\geq k$ and $n>\frac{17}{4}k^2$, then $G$ contains a rainbow matching of size $k$. Kostochka~\cite{kostochka} then asked: can $n$ be improved to a bound linear in $k$? In this paper, we show that $n\geq 4k-4$ is sufficient for $k \ge 4$. Furthermore, this implies that $f(k)\leq 4k-4$ for $ k \ge 4$. \begin{thm} \label{rainbowmatching} If $G$ is an edge-coloured graph on $n$ vertices with $\delta^c(G) \geq k$, then $G$ contains a rainbow matching of size $k$, provided $n\geq 4k-4$ for $k\geq 4$ and $n\geq 4k-3$ for $k\leq 3$. \end{thm} \section{Main Result} We write $[k]$ for $\{1,2,\ldots,k\}$. For an edge $uv$ in $G$, we denote by $c(uv)$ the colour of $uv$ and let the set of colours be $\mathbb{N}$, the set of natural numbers. The idea of the proof is as follows. By induction, $G$ contains a rainbow matching $M$ of size $k-1$. Suppose that $G$ does not contain a rainbow matching of size $k$. We are going to show that there exists another rainbow matching $M'$ of size $k-1$ in $V(G) \setminus V(M)$. Clearly, the colours of $M$ equal the colours of $M'$. If $n \geq 4k-3$, then there exists a vertex $z$ not in $M\cup M'$. Since $\delta^c(G) \ge k$, $z$ has a neighbour $w$ such that $zw$ does not use any colour of $M$. Hence, it is easy to deduce that $G$ contains a rainbow matching of size $k$. \begin{proof}[Proof of Theorem~\ref{rainbowmatching}] We proceed by induction on $k$. The theorem is trivially true for $k=1$. So fix $k>1$ and assume that the theorem is true for $k-1$. Let $G$ be an edge-coloured graph with $\delta^c(G) \geq k$ and $n=|V(G)|\geq 4k-4$ if $k\geq 4$ and $n\geq 4k - 3$ otherwise. Suppose that the theorem is false and so $G$ does not contain a rainbow matching of size $k$. By induction, there exists a rainbow matching $M = \{ x_iy_i : i \in [k-1]\} $ in $G$, say with $c(x_iy_i) = i$ for each $ i \in [k-1]$. Let $M'$ be another rainbow matching of size $s$ (which could be empty) in $G$ vertex-disjoint from $M$. Clearly $s \le k-1$ and the colours on $M'$ are a subset of $[k-1]$, as otherwise $G$ contains a rainbow matching of size $k$. Without loss of generality, we may assume that $M' = \{ z_iw_i : i \in [s] \}$ with $c(z_iw_i) = i$ for each $i \in [s]$. We further assume that $M$ and $M'$ are chosen such that $s$ is maximal. Now, let $W = V(G) \setminus V(M \cup M')$ and $S = \{x_i, y_i, z_i, w_i : i \in [s]\}$. Clearly, if there is an edge in $W$, it must have colour in $[s]$, as otherwise $G$ contains a rainbow matching of size $k$, or $s$ is not maximal. \textbf{Fact A} If $uw$ is an edge in $W$, then $c(uw) \in [s]$. Furthermore, if $uv$ is an edge with $u \in S$ and $v \in W$, then $c(uv) \in [k-1]$, otherwise $G$ contains a rainbow matching of size $k$. First, we are going to show that $s=k-1$. Suppose the contrary, that $s<k-1$. We then claim the following. \textbf{Claim} By relabelling the indices $i$ (in the interval $\{s+1,s+2,\ldots,k-1\}$) and swapping the roles of $x_i$ and $y_i$ if necessary, there exist distinct vertices $z_{k-1}$, $z_{k-2}$, \dots, $z_{s+1}$ in $W$ such that the following holds for $s+1 \le i \le k-1$: \begin{enumerate}[(a)] \item $y_iz_i$ is an edge and $c(y_iz_i) \notin [i]$. \item Let $T_i$ be the vertex set $\{x_j, y_j, z_j : i \le j \le k-1\}$. For any colour $j$, there exists a rainbow matching of size $k-i$ on $T_i$ which does not use any colour in $[i-1] \cup \{j\}$. \item Let $W_i = W \backslash \{z_i, z_{i+1}, \dots, z_{k-1} \}$.
If $x_i w$ is an edge with $w \in W_i$, then $ c(x_i w ) \in [s]$. \item If $uw$ is an edge with $u \in S$ and $w \in W_i$, then $c(uw) \in [i-1]$. \item If $uw$ is an edge with $u \in S \cup T_i \cup W$ and $w \in W_i$, then $c(uw) \in [i-1]$ or $u \in \{y_i, \dots, y_{k-1} \}$. \end{enumerate} \textit{Proof of Claim.} Let $W_k=W$ and $T_k=\emptyset$. Observe that parts (d) and (e) of the claim hold for $i = k$. For each $i = k-1, k-2, \dots, s+1$ in turn, we are going to find $z_i$ satisfying (a)--(e). Suppose that we have already found $z_{k-1}, z_{k-2}, \ldots, z_{i+1}$. Note that $|W_{i+1}| \ge n - 2(k-1) - 2s - (k-i-1) \ge 1$, so $W_{i+1} \ne \emptyset$. Let $z$ be a vertex in $W_{i+1}$. By the colour degree condition, $z$ must be incident with at least $k$ edges of distinct colours, and in particular, with at least $k-i$ distinctly coloured edges not using colours in $[i]$. Then, there exists a vertex $u\in \{x_j,y_j: s+1\leq j \leq i\}$ such that $uz$ is an edge with $c(uz)\notin [i]$, by part (e) of the claim for the case $i+1$. Without loss of generality, $u=y_i$ and we set $ z_i=z $. Part (b) of the claim is true for colour $j\neq i$, simply by taking the edge $x_iy_i$ together with a rainbow matching of size $k-i-1$ on $T_{i+1}$ which does not use any colour in $[i] \cup \{j\}$. For colour $j=i$, we take the edge $y_iz_i$ together with a rainbow matching of size $k-i-1$ on $T_{i+1}$ which does not use any colour in $[i] \cup \{c(y_iz_i)\}$. To show part (c) of the claim, let $x_i w$ be an edge for some $w \in W_i$. By part (b) of the claim for the case $i+1$, there exists a rainbow matching $M''$ of size $k-i-1$ on $T_{i+1}$ which does not use any colour in $[i]\cup \{c(y_iz_i)\}$. Set ${M}_0 = \{x_j y_j : j \in [i-1] \} \cup M'' \cup \{y_i z_i\}$. Then, $M_0$ is a rainbow matching of size $k-1$ vertex-disjoint from $M'$. Now, by considering the pair $(M_0,M')$ instead of $(M,M')$, we must have $c(x_i w) \in [s]$. Otherwise, $G$ contains a rainbow matching of size $k$ or $s$ is not maximal. Let $uw$ be an edge with $u \in S$, $w \in W_i$ and $ c(u w ) \notin [i-1]$. Pick a rainbow matching $M_u$ of size $s$ on $S \setminus \{u\}$ with colours $[s]$, and a rainbow matching $M_u'$ of size $k-i$ on $T_i$ which does not contain any colour in $[i-1]\cup \{c(uw)\}$. Then, $\{uw\} \cup M_u \cup M_u' \cup \{x_j y_j : s+1\leq j \leq i-1 \}$ is a rainbow matching of size $k$ in $G$, a contradiction. So $c(u w)\in [i-1]$ for any $u\in S$ and $w\in W_i$, showing part (d) of the claim. Part (e) of the claim follows easily from Fact~A, (c) and (d). This completes the proof of the claim. \qed Recall that $s<k-1$. So we have $|W_{s+1}| = n - 2(k-1) - 2s - (k-1-s) \geq k-1-s \geq 1$. Pick a vertex $w\in W_{s+1}$. By part (e) of the claim, every edge at $w$ either joins $w$ to a vertex in $\{y_{s+1},y_{s+2},\ldots,y_{k-1}\}$ or has colour in $[s]$. Hence, $w$ has colour degree at most $k-1$, which contradicts $\delta^c(G) \geq k$. Thus, we must have $s=k-1$ as claimed. In summary, we have $M = \{ x_iy_i : i \in [k-1]\}$ and $M' = \{ z_iw_i : i \in [k-1] \}$ with $c(x_iy_i) = i = c(z_iw_i) $ for $i\in [k-1]$. Now, if $n\geq 4k-3$, then $ V(G) \neq V(M \cup M')$. Pick a vertex $w\notin V(M \cup M')$; since $w$ has colour degree at least $k$, there exists a vertex $u$ such that $uw$ is an edge and $c(uw)\notin [k-1]$. It is easy to see that $G$ then contains a rainbow matching of size $k$, contradicting our assumption. Therefore, we may assume $n=4k-4$ and $k\geq 4$.
Since $\delta^c(G)\geq k$, any vertex $u\in \{x_1,y_1,z_1,w_1\}$ must have a neighbour $v$ such that $c(uv)\notin [k-1]$. If $v\notin \{x_1,y_1,z_1,w_1\}$, then $G$ contains a rainbow matching of size $k$. So, without loss of generality, $x_1 z_1$ and $y_1 w_1$ are edges in $G$ with $c(x_1 z_1),c(y_1 w_1)\notin [k-1]$. By symmetry, we may assume that for each $i\in [k-1]$, $x_i z_i$ and $y_i w_i$ are edges in $G$ with $c(x_i z_i), c(y_i w_i) \notin [k-1]$. As $\delta^c(G)\geq k\geq 4$, $x_1$ must have a neighbour $v\notin \{y_1,z_1,w_1\}$ with $c(x_1 v)\neq 1$. Without loss of generality, we may assume $v=z_j$ for some $j$ and $c(x_1 z_j) = 2$. Now, $\{x_1z_j, z_1 w_1, y_2 w_2\} \cup \{x_i y_i : i\in \{3,4,\ldots,k-1\}\}$ is a rainbow matching of size $k$ in $G$, which again is a contradiction. This completes the proof of the theorem. \end{proof} \section{Remarks} In Theorem~\ref{rainbowmatching}, the bound on $n$, the number of vertices, is sharp for $k=2,3$ (and trivially for $k=1$), as shown by a properly 3-edge-coloured $K_4$ for $k=2$ and by two disjoint copies of a properly 3-edge-coloured $K_4$ for $k=3$; a brute-force check of the $k=2$ example is sketched below. However, we do not know if the bound is sharp for $k\geq 4$. \begin{question} Given $k$, what is the minimum $n$ such that every edge-coloured graph $G$ of order $n$ with $\delta^c(G) = k$ contains a rainbow matching of size $k$? \end{question}
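The sharpness example for $k=2$ is small enough to verify by exhaustive search. The following sketch (in Python, used here purely for illustration; the colouring of $K_4$ by its three perfect matchings is the standard proper 3-edge-colouring) confirms that $\delta^c(K_4)=3$ while no rainbow matching of size $2$ exists:
\begin{verbatim}
from itertools import combinations

# proper 3-edge-colouring of K_4: each colour class is a perfect matching
colour = {(0, 1): 0, (2, 3): 0,
          (0, 2): 1, (1, 3): 1,
          (0, 3): 2, (1, 2): 2}

# minimum colour degree: every vertex sees all three colours
deg = min(len({c for e, c in colour.items() if v in e}) for v in range(4))
print("delta^c =", deg)  # prints 3

# exhaustive search for a rainbow matching of size 2
found = any(not (set(e) & set(f)) and colour[e] != colour[f]
            for e, f in combinations(colour, 2))
print("rainbow matching of size 2 exists:", found)  # prints False
\end{verbatim}
The search necessarily fails because every matching of size $2$ in $K_4$ is a perfect matching, and each perfect matching is monochromatic under this colouring. \subsection*{Acknowledgment} The authors thank Alexandr Kostochka for suggesting the problem during the `Probabilistic Methods in Graph Theory' workshop at the University of Birmingham, United Kingdom. We would also like to thank Daniela K\"uhn, Richard Mycroft and Deryk Osthus for organizing this nice event.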
\section{I. Introduction} The so-called Generalized Uncertainty Principle (GUP) is an immediate way to realize the old intuition about the existence of a fundamental minimal scale. In fact, a natural (Planckian) cut-off length has, in some (not yet understood) sense, to appear as soon as the smooth picture of the spacetime manifold breaks down, i.e. when the quantum effects are taken into account. Interest in the minimal length or GUP approach has been motivated by studies in perturbative string theory \cite{String}, and by considerations on the properties of black holes \cite{Mag} and de Sitter space \cite{Sny}. In particular, from the string theory point of view, a minimal observable length is a consequence of the fact that strings cannot probe distances below the string scale. However, in recent years, a large amount of work has been done in this active field in a wide variety of directions (see for example \cite{GUP1} and the references therein; for another application of the GUP approach to the minisuperspace dynamics, from a different point of view, see \cite{Vakili}). In this paper we address the question of the application of the GUP formalism in quantum cosmology. In particular, two cosmological models are discussed: i) the FRW ($k=0$) model with a massless scalar field and ii) the Taub model. For the discussion of the FRW model we refer to \cite{BM07a} and of the Taub model to \cite{BM07b}. In the first model (FRW) \cite{BM07a}, the scalar field is used as a ``relational time'' and only the scale factor is treated in the GUP formalism. As is well known, in the Wheeler-DeWitt (WDW) framework the unavoidable classical singularity cannot be solved and the wave packet follows a classical trajectory up to the ``initial'' singularity \cite{Ish,APS}. In the GUP approach, as we will see, the situation is very different. In fact the wave packet, peaked at late times (at energies much smaller than the Planck one), ``escapes'' from the classical trajectory in the dynamics toward the cosmological singularity. Therefore the probability density to find the Universe near the classical time where the singularity appears goes to zero and, in some sense, our quantum Universe approaches stationary states ``near the Planckian region''. In this sense the cosmological singularity is solved by the modified Heisenberg algebra. The second model (Taub) \cite{BM07b} is studied in the context of the ADM reduction of the dynamics. Such a representation allows us to regard one variable, namely the Universe volume, as a ``time'' for the dynamics. Therefore, only the physical degree of freedom of the system, which is related to the Universe anisotropy, will be treated in the GUP formalism. In the canonical case (WDW theory), the wave packets are peaked around the classical trajectories and, after the bounce on the potential wall, they fall into the cosmological singularity. On the other hand, in the GUP case, we obtain two remarkable results. i) The probability density to find the Universe is peaked ``near'' the potential wall and the wave packets show a stationary behavior. Therefore, the classical singularity is not probabilistically privileged. ii) The value of the anisotropy for which the probability amplitude is peaked corresponds to a quasi-isotropic Universe. The paper is organized as follows. In Sec. II the GUP framework is reviewed and Sec. III is devoted to discussing the application of this approach in quantum cosmology. In Sec.
IV and V the FRW model is studied in the WDW and GUP approaches, respectively, and a comparison between the results is given. Finally, in Secs. VI, VII and VIII the Taub model is presented and analyzed in the WDW and in the GUP framework, respectively. Concluding remarks follow. \section{II. Quantum mechanics in the GUP framework} In this section we briefly review some aspects and results of non-relativistic quantum mechanics with nonzero minimal uncertainties in position \cite{Kem}. In one dimension, we consider the Heisenberg algebra generated by $\bf q$ and $\bf p$ obeying the commutation relation (in $\hbar=c=1$ units) \be\label{modal} [{\bf q},{\bf p}]=i(1+\beta{\bf {p}}^2), \ee where $\beta$ is a ``deformation'' parameter. This commutation relation leads to the uncertainty relation \be\label{gup} \Delta q \Delta p\geq \frac 1 2\left(1+\beta (\Delta p)^2+\beta \langle{\bf p}\rangle^2\right), \ee which appears in perturbative string theory \cite{String}. The canonical Heisenberg algebra can be recovered in the limit $\beta=0$, and the generalization to more dimensions is straightforward, leading naturally to a ``noncommutative geometry'' for the space coordinates. It is immediate to verify that such a Generalized Uncertainty Principle (\ref{gup}) implies a finite minimal uncertainty in position $\Delta q_{min}=\sqrt\beta$. As is well known, the existence of a nonzero minimal uncertainty in position implies that there cannot be any physical state which is a position eigenstate. In fact, an eigenstate of an observable necessarily has vanishing uncertainty on it. To be more precise, let us assume the commutation relations to be represented on some dense domain $D\subset H$ in a Hilbert space $H$. In the canonical case, a sequence $\vert\psi_n\rangle\in D$ exists with position uncertainties decreasing to zero. On the other hand, in the presence of a minimal uncertainty $\Delta q_{min}>0$, it is no longer possible to find some $\vert\psi_n\rangle\in D$ such that \be \lim_{n\rightarrow\infty}\left(\Delta q\right)^2_{\vert\psi_n\rangle}=\lim_{n\rightarrow\infty}\langle\psi_n\vert({\bf q}-\langle\psi_n\vert{\bf q}\vert\psi_n\rangle)^2\vert\psi_n\rangle=0. \ee Although it is possible to construct position eigenvectors, they are only formal eigenvectors and not physical states. Let us now stress that this feature arises from the corrections to the canonical commutation relations and that, in general, a non-commutativity of the ${\bf q}_i$ will not imply a finite minimal uncertainty $\Delta q_{min}>0$. Therefore, in the GUP approach, we cannot work in configuration space, and some notion of position will be recovered below. The Heisenberg algebra (\ref{modal}) can be represented in the momentum space, where the $\bf q$, $\bf p$ operators act as \be\label{rep} {\bf p}\psi(p)=p\psi(p), \qquad {\bf q}\psi(p)=i(1+\beta p^2)\partial_p\psi(p), \ee on a dense domain $S$ of smooth functions. To recover information on positions we have to study the states that realize the maximally-allowed localization. Such states $\vert\psi^{ml}_{\zeta}\rangle$ of maximal localization, which are proper physical states around a position $\zeta$, have the properties $\langle\psi^{ml}_{\zeta}\vert {\bf q}\vert\psi^{ml}_{\zeta}\rangle=\zeta$ and $(\Delta q)_{\vert\psi^{ml}_{\zeta}\rangle}=\Delta q_{min}$.
These states are called maximally localized states because they obey the minimal uncertainty condition $\Delta q\Delta p=\vert\langle [{\bf q},{\bf p}]\rangle\vert/2$, and therefore the following equation holds \be \left({\bf q}-\langle{\bf q}\rangle + \frac{\langle [{\bf q},{\bf p}]\rangle}{2(\Delta p)^2}({\bf p}-\langle{\bf p}\rangle)\right){\vert\psi^{ml}_{\zeta}\rangle}=0, \ee which admits, in momentum space, the following solution\footnote{The absolutely minimal uncertainty in position $\Delta q_{min}=\sqrt\beta$, and thus also the maximal localization states, are obtained for $\langle{\bf p}\rangle=0$.} \be \psi^{ml}_{\zeta}(p)\sim\frac 1 {(1+ \beta p^2)^{1/2}} \exp\left(-i\frac{\zeta}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}p)\right), \ee where with $\sim$ we omit the normalization constant. As we can easily see, these states reduce to ordinary plane waves in the $\beta=0$ limit. As a last step, we can project an arbitrary state $\vert\psi\rangle$ on the maximally localized states $\vert\psi^{ml}_{\zeta}\rangle$ in order to obtain the probability amplitude for a particle being maximally localized around the position $\zeta$ (i.e. with standard deviation $\Delta q_{min}$). We call these projections the ``quasiposition wave function'' $\psi(\zeta)\equiv\langle\psi^{ml}_{\zeta}\vert\psi\rangle$; explicitly, we have \be\label{qwf} \psi(\zeta)\sim\int^{+\infty}_{-\infty}\frac{dp}{(1+\beta p^2)^{3/2}} \exp\left(i\frac{\zeta}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}p)\right)\psi(p). \ee This is nothing but a generalized Fourier transformation, where in the $\beta=0$ limit the ordinary position wave function $\psi(\zeta) = \langle\zeta\vert\psi\rangle$ is recovered.
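The transform (\ref{qwf}) is also straightforward to evaluate numerically. The following minimal sketch (in Python with NumPy/SciPy; the Gaussian momentum profile is an arbitrary test state and the truncated momentum grid is an assumption of the sketch) compares the $\beta>0$ quasiposition wave function with its ordinary Fourier ($\beta=0$) limit:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

p = np.linspace(-60.0, 60.0, 20001)   # truncated momentum grid
psi_p = np.exp(-p**2 / 2.0)           # arbitrary test state: a Gaussian

def quasiposition(zeta, beta):
    """Evaluate Eq. (qwf) by quadrature, up to normalization."""
    if beta == 0.0:
        kernel, weight = np.exp(1j * zeta * p), 1.0   # plane-wave limit
    else:
        kernel = np.exp(1j * zeta / np.sqrt(beta)
                        * np.arctan(np.sqrt(beta) * p))
        weight = (1.0 + beta * p**2) ** (-1.5)
    return trapezoid(weight * kernel * psi_p, p)

for zeta in (0.0, 1.0, 2.0, 4.0):
    print(zeta, abs(quasiposition(zeta, 0.1)), abs(quasiposition(zeta, 0.0)))
\end{verbatim}
As $\beta$ is lowered toward zero the two columns coincide, recovering the ordinary Fourier transform. \section{III. On the GUP in the minisuperspace dynamics} Let us discuss some aspects regarding the application of the GUP framework in quantum cosmology, i.e. in the context of a minisuperspace reduction of the dynamics. In fact, in such a theory, only a finite number of the gravitational degrees of freedom are invoked at the quantum level and the remainder are set to zero by the imposition of symmetries on the spatial metric. In particular, by requiring spatial homogeneity, the (gravitational) system is described by three degrees of freedom, i.e. the three scale factors of the Bianchi models. On the other hand, by also imposing isotropy, we deal with a one-dimensional mechanical system, i.e. the FRW models. Therefore, quantum cosmology is a quantum mechanical toy model (finite degrees of freedom) which is a simple arena in which to test ideas and constructions that can be introduced in the (not yet found) quantum general relativity. However, since at the classical level the Universe dynamics is described by such symmetric models, their quantization seems to be necessary to answer fundamental questions like the fate of the classical singularity, the inflationary expansion and the chaotic behavior of the Universe toward the singularity. In this respect, the GUP approach to quantum cosmology appears physically grounded. In fact, a generalized uncertainty principle can be immediately reproduced by deforming the canonical Heisenberg algebra. In other words, the GUP scheme relies on a modification of the canonical quantization prescriptions and, in this respect, it can be reliably applied to any dynamical system.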
Although such a deformed commutation relation, unlike the GUP itself, has not so far been derived directly from string theory, it is one possible way in which certain features of a more fundamental theory may manifest themselves in quantizing a cosmological model. \section{IV. The FRW model in the WDW approach} The canonical quantization (in the sense of the WDW theory) of the homogeneous, isotropic, flat ($k=0$) cosmological model with a massless scalar field is reviewed here (for more details see \cite{Ish,APS}). The Hamiltonian constraint for this model has the form \be\label{con} H_{grav}+H_{\phi}\equiv-9\kappa p_x^2x+\f3 {8\pi}\frac{p_{\phi}^2}{x}\approx0 \quad x\equiv a^3, \ee where $\kappa=8\pi G\equiv8\pi l_P^2$ is the Einstein constant and $a$ is the scale factor. In the classical theory, the phase space is $4$-dimensional, with coordinates $(x,p_x;\phi,p_{\phi})$. At $x=0$ the physical volume of the Universe goes to zero and the singularity appears. Since $\phi$ does not enter the expression of the constraint, $p_{\phi}$ is a constant of motion and therefore each classical trajectory can be specified in the $(x,\phi)$-plane. Thus $\phi$ can be considered as a relational time and the dynamical trajectory reads \be\label{clastra} \phi=\pm\frac 1 {\sqrt{24\pi\kappa}}\ln\left|\frac x {x_0}\right|+\phi_0, \ee where $x_0$ and $\phi_0$ are integration constants. In this equation, the plus sign describes an expanding Universe from the Big-Bang, while the minus sign a contracting one into the Big-Crunch. We now stress that the classical cosmological singularity is reached at $\phi=\pm\infty$ and that every classical solution of this model reaches the singularity. At the quantum level the Wheeler-DeWitt equation, associated to the constraint (\ref{con}), tells us how the wave function $\Psi(x,\phi)$ evolves as $\phi$ changes; in this respect we can regard the argument $\phi$ of $\Psi(x,\phi)$ as an ``emergent time'' and the scale factor as the real physical variable. In order to have an explicit Hilbert space, we perform the natural decomposition of the solution into positive and negative frequency parts. Therefore, the solution of this Wheeler-DeWitt equation has the well-known form \be\label{solcan} \Psi_\epsilon(x,\phi)=x^{-1/2}\left(Ax^{-i\gamma}+Bx^{i\gamma}\right)e^{i \sqrt{24\pi\kappa}\epsilon\phi}, \ee where $\gamma=\frac 12(4\epsilon^2-1)^{1/2}\geq0$ and $\epsilon^2$ is the eigenvalue of the operator $\Xi/24\pi\kappa$ defined below. Thus the spectrum is purely continuous and covers the interval $(\sqrt{3}/2l_P,\infty)$ \cite{Ish}. The wave function $\Psi_\epsilon(x,\phi)$ is of positive frequency with respect to the internal time $\phi$ and satisfies the positive frequency (square root) of the quantum constraint (\ref{con}); we deal with a Schr\"odinger-like equation $-i\partial_\phi\Psi=\sqrt{\Xi}\Psi$, where $\Xi\equiv24\pi\kappa\hat{x}\hat{p_x}^2\hat{x}$. In order to examine the behavior of the classical singularity at the quantum level we have to specify a general criterion for determining whether the quantized models actually collapse \cite{Got}. Unfortunately there is no such rigorous criterion yet. An early idea was to impose the condition that the wave function vanishes at the singularity $a=0$ \cite{DW}, but this boundary condition has little to do with quantum singularity avoidance. It seems better to study the expectation values of observables which classically vanish at the singularity.
In fact, $\vert\Psi(a=0,t)\vert^2$ is merely a probability density and thus, for example, one might have an evolving state that ``bounces'' (i.e. a nonsingular wave packet), even if $\vert\Psi(a=0,t)\vert\neq0$ for all $t$. On the other hand, if one could find a wave packet such that the probability $P_{\delta}\equiv\int_0^\delta\vert\Psi(a,t)\vert^2da\simeq0$ for $\delta$ a very small quantity, then one could reasonably claim to have a no-collapse situation. Let us now come back to the canonical FRW model. It is not difficult to see that, in this framework, the unavoidable classical singularity is not tamed by quantum effects. In fact, if one starts with a state localized at some initial time, then its peak moves along the classical trajectory and falls into the classical singularity. Additionally, from the eigenfunctions (\ref{solcan}) it is clear that the probability defined above diverges, indicating that the Wheeler-DeWitt formalism does not solve the classical singularity. \section{V. The FRW model in the GUP approach} In this section, we analyze the quantization of the FRW ($k=0$) model in the framework of the minimal length uncertainty relation \cite{BM07a}. As in the canonical case, let us decompose the solution of the ``generalized Wheeler-DeWitt equation'' into positive and negative frequency parts and focus on the positive frequency sector. Remembering that we have to work in momentum space, the wave function reads $\overline{\Psi}(p,\phi)=\Psi(p)e^{i \sqrt{24\pi\kappa}\epsilon\phi}$, where from now on $p\equiv p_x$. Thus, once we regard the scalar field as an ``emergent time'' for the quantum evolution, we treat in the ``generalized'' way only the real degree of freedom of the problem: the isotropic volume $x$. Therefore the quantum equation relative to the Hamiltonian constraint (\ref{con}), considering the representation (\ref{rep}), is the following \be\label{emu} \mu^2(1+\mu^2)^2\frac{d^2\Psi}{d\mu^2}+2\mu(1+\mu^2)(1+2\mu^2)\frac{d\Psi}{d\mu}+\epsilon^2\Psi=0, \ee where we have defined a dimensionless parameter $\mu\equiv\sqrt\beta p$. In order to integrate the above equation we introduce the variable $\rho\equiv\tan^{-1}\mu$, which maps the region $0<\mu<\infty$ to $0<\rho<\pi/2$. Then, performing another change of variables, $\xi\equiv\ln(\sin\rho)$ ($-\infty<\xi<0$), equation (\ref{emu}) reduces to \be \frac{d^2\Psi}{d\xi^2}+2\left(\frac{1+e^{2\xi}}{1-e^{2\xi}}\right)\frac{d\Psi}{d\xi}+\epsilon^2\Psi=0, \ee which can be explicitly solved and whose general solution reads \be \Psi_\epsilon(\xi)=C_1 e^{-\xi(1+\alpha)}\left(1+b_+e^{2\xi}\right)+C_2 e^{-\xi(1-\alpha)}\left(1+b_-e^{2\xi}\right), \ee where $\alpha=\sqrt{1-\epsilon^2}$ and $b_\pm=1\pm\alpha/(1\mp\alpha)$. At this point we have to analyze the ``quasiposition wave function'' relative to this problem in order to make a first comparison with the canonical case, in particular with the wave function (\ref{solcan}). In agreement with formula (\ref{qwf}) we have \begin{multline}\label{quapos} \Psi_\epsilon(\zeta)=\int_{-\infty}^0d\xi \exp{\left(\xi+i\zeta\tan^{-1}\left(\frac{e^\xi}{\sqrt{1-e^{2\xi}}}\right)\right)}\\ \times\left[C_1 e^{-\xi(1+\alpha)}\left(1+b_+e^{2\xi}\right)+C_2 e^{-\xi(1-\alpha)}\left(1+b_-e^{2\xi}\right)\right], \end{multline} where $\zeta$, in this case, is expressed in units of $\sqrt\beta$. Thus we can easily see that our ``quasiposition wave function'', i.e.
the probability amplitude for the particle (Universe) being maximally localized around the position $\zeta$, is nondiverging for all $\zeta$, as soon as we take the condition $C_1=0$. We stress that the canonical wave function (the function (\ref{solcan})) diverges at the classical singularity $x=0$. To get a better feeling for our quantum Universe we construct and examine the motion of wave packets. Let us now construct states peaked at late times \be\label{wp} \Psi(\zeta,t)=\int_0^\infty d\epsilon g(\epsilon)\Psi_\epsilon(\zeta)e^{i\epsilon t}, \ee where we have defined the dimensionless time $t=\sqrt{24\pi\kappa}\phi$. In the following we take $g(\epsilon)$ to be a Gaussian distribution peaked at some $\epsilon^\ast\ll1$, which corresponds to a packet peaked at energies much smaller than the Planck energy $1/l_P$ (we recall that $\epsilon\sim\mathcal{O}(\overline\epsilon\ l_P)$, where $\overline\epsilon$ has dimension $1$ in energy). The analytic computation of the integral (\ref{wp}) for the wave function (\ref{quapos}) is impossible to perform. So, in order to describe the motion of wave packets we have to evaluate (\ref{wp}) numerically. \begin{figure} \includegraphics[height=1.8in]{Fig1.eps} \caption{The points represent the result of the numerical integration and are fitted by a Lorentzian $L(t)=0.692/(t^2+31.564)$ having width, at the inflection point, $3.243$.} \end{figure} At first, we want to analyze the most interesting region, i.e. where $\zeta\simeq0$, which corresponds to the purely quantum region, where the physical volume is Planckian. In fact, if we put $\beta\sim\mathcal{O}(l_P^6)$, the minimal uncertainty in position is of the order of the Planckian volume. The ``quasiposition wave function'' (\ref{quapos}) can be expanded in order to give the probability density around $\zeta\simeq0$: $\vert\Psi(\zeta,t)\vert^2\simeq\vert A(t)\vert^2+\zeta^2\vert B(t)\vert^2$. Therefore, starting with a state peaked at some $\epsilon^\ast\ll1$, the probability density of finding the Universe ``around the Planckian region'' is $\vert A(t)\vert^2$, where $A(t)$ reads \be\label{At} A(t)=2C_2\int_0^\infty d\epsilon\frac{(1+2\sqrt{1-\epsilon^2})e^{-\frac{(\epsilon-\epsilon^\ast)^2}{2\sigma^2}+i\epsilon t}}{\sqrt{1-\epsilon^2}\left(3-\epsilon^2+3\sqrt{1-\epsilon^2}\right)} . \ee We evaluate the above integral numerically for $\epsilon^\ast=10^{-3}$, $\sigma^2=1/20$, and we take the constant $2C_2=1$. The probability density $\vert A(t)\vert^2$ is very well approximated by a Lorentzian function (see Fig. 1). As we can see from the picture, this curve is peaked around $t=0$. This value corresponds to the classical time at which $x(t)=x_0$ (in (\ref{At}) we consider $t_0=0$). Thus, for $x_0\sim\mathcal{O}(l_P^3)$, the probability density to find the Universe in a Planckian volume is peaked around the corresponding classical time. As a matter of fact, this probability density vanishes for $t\rightarrow-\infty$, where the classical singularity appears. This is what we mean when we claim that the classical cosmological singularity is solved by this model.
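For concreteness, a minimal numerical sketch of the evaluation of $\vert A(t)\vert^2$ is the following (in Python with SciPy; the substitution $\epsilon=\sin\theta$ removing the integrable endpoint singularity, and the truncation of the integral at $\epsilon=1$, where the integrand of (\ref{At}) develops a branch point and the Gaussian weight is already negligible, are both assumptions of this sketch rather than steps of the original computation):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

eps_star, sigma2 = 1.0e-3, 1.0 / 20.0   # parameters used in the text

def integrand(theta, t):
    """Integrand of Eq. (At) with 2*C_2 = 1, after the substitution
    eps = sin(theta), which removes the singularity at eps = 1."""
    eps, root = np.sin(theta), np.cos(theta)
    gauss = np.exp(-(eps - eps_star)**2 / (2.0 * sigma2))
    return (1.0 + 2.0 * root) * gauss * np.exp(1j * eps * t) \
        / (3.0 - eps**2 + 3.0 * root)

def A(t):
    # real and imaginary parts are integrated separately over [0, pi/2]
    re = quad(lambda th: integrand(th, t).real, 0.0, np.pi / 2.0)[0]
    im = quad(lambda th: integrand(th, t).imag, 0.0, np.pi / 2.0)[0]
    return re + 1j * im

for t in (0.0, 2.0, 5.0, 10.0):
    print(f"t = {t:5.1f}   |A(t)|^2 = {abs(A(t))**2:.4f}")
\end{verbatim}
\begin{figure} \includegraphics[height=1.8in]{Fig2.eps} \caption{The peaks of the probability density $\vert\Psi(\zeta,t)\vert^2$ are plotted as functions of $t$ and $\ln(\zeta)$.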
The points (resulting from the numerical computation) are fitted by a logarithm $0.050\ln(\zeta)+0.225$ for $\zeta\geq4$ and by a power law $0.067\zeta^{1.060}$ for $\zeta\in[0,4]$.} \end{figure} In order to describe the motion of the wave packet we evaluate $\vert\Psi(\zeta,t)\vert^2$ from the integral (\ref{wp}) of the wave function (\ref{quapos}). As before, we consider a wave packet initially peaked at late times and let it evolve numerically ``backward in time''. We use the same parameters as for the integration performed above. The result of the integration is that the probability density, at different fixed values of $\zeta$, is again very well approximated by a Lorentzian function. The width of this function remains essentially the same as the state evolves from large $\zeta$ ($10^3$) to $\zeta=0$. For each fixed $t$ the probability density is well fitted by a Lorentzian function and the width of these functions, also in this case, remains almost the same during the evolution. These states are sharply peaked for $\zeta\sim\mathcal{O}(1)$ (which in our units corresponds to $\zeta\sim\mathcal{O}(\sqrt\beta)\sim\mathcal{O}(l_P^3)$). The peaks of the Lorentzian functions, at different $\zeta$ values, move along the classically expanding trajectory (\ref{clastra}) for values of $\zeta$ larger than $\sim4$. Near the Planckian region, i.e. when $\zeta\in[0,4]$, we observe a modification of the trajectory of the peaks. In fact they follow a power law up to $\zeta=0$, reached in a finite time interval, and ``escape'' from the classical trajectory toward the classical singularity (see Fig. 2). The peaks of the Lorentzians at fixed time $t$ evolve very slowly, remaining close to the Planckian region. Such behavior indicates that the Universe approaches the cutoff volume in a stationary way, in accordance with Fig. 2. This peculiar behavior of our quantum Universe is different from that found in other approaches to the same problem. In fact, recently, it was shown how the classical Big-Bang is replaced by a Big-Bounce in the framework of Loop Quantum Cosmology (LQC) \cite{APS}. Intuitively, one might expect that the bounce, and thus the repulsive features of the gravitational field in the Planck regime, are consequences of a Planckian cut-off length. But this is not the case. As a matter of fact, we can observe from Fig. 2 that there is no bounce for our quantum Universe. The main difference between the two approaches resides in the quantum modification of the classical trajectory. In fact, in the LQC framework we observe a ``quantum bridge'' between the expanding and contracting Universes; in our approach, by contrast, the probability density of finding the Universe reaches the Planckian region in a stationary way. \vspace{0.2cm} Let us now summarize the main differences between the Wheeler-DeWitt and the Generalized Uncertainty Principle approaches to the flat FRW Universe filled by a massless scalar field. The first distinction resides at the probabilistic level, i.e. in the divergence of the wave function at the classical singularity. The second, and more relevant, difference concerns the dynamics of the wave packets toward the singularity. In particular, we have: i) The WDW wave function (\ref{solcan}) diverges at the classical singularity $a=0$. Therefore, the corresponding probability defined above diverges, i.e. \be P_{\delta}\equiv\int_0^\delta\vert\Psi_{WDW}(a,t)\vert^2da=\infty.
\ee On the other hand, the GUP wave function (\ref{quapos}) is nondiverging for all ``quasipositions'' $\zeta$, as soon as the condition $C_1=0$ is taken. In this respect we obtain \be P_{\delta}\equiv\int_0^\delta\vert\Psi_{GUP}(a,t)\vert^2da<\infty. \ee Therefore, already at this level, we can claim to have a no-collapse behavior for the quantum GUP Universe. Of course this is not the case for the WDW theory. ii) The semi-classical wave packets, in the WDW scheme, fall into the Big-Bang singularity. More precisely, it is possible to construct a wave packet which is peaked at late times, i.e. far from the Planckian region. Then, in the backward evolution toward the singularity, the wave packet continues to be peaked on the classical trajectory (\ref{clastra}) for all ``times'' and therefore cannot escape from the classical singularity. In this sense, the WDW approach does not resolve the singularity problem. On the other hand, the GUP wave packets do not fall into the singularity. In particular, at a given ``time'', they escape from the classical trajectory and the Universe exhibits a stationary behavior in approaching the Planckian volume. In this way, the classical singularity is solved by our model. \section{VI. The Taub model} The Taub model is a particular case of the Bianchi IX model. This model is (together with Bianchi VIII) the most general homogeneous cosmological model and its line element reads\footnote{From now on we work in $\hbar=c=16\pi G=1$ units.}, in the Misner parametrization \cite{Mis69}, \be ds^2=N^2dt^2-e^{2\alpha}\left(e^{2\gamma}\right)_{ij}\omega^i\otimes\omega^j, \ee where $N=N(t)$ is the lapse function and the left invariant 1-forms $\omega^i=\omega^i_adx^a$ satisfy the Maurer-Cartan equations $2d\omega^i=\epsilon^i_{jk}\omega^j\wedge\omega^k$. The variable $\alpha=\alpha(t)$ describes the isotropic expansion of the Universe and $\gamma_{ij}=\gamma_{ij}(t)$ is a traceless symmetric matrix $\gamma_{ij}=diag\left(\gamma_++\sqrt3\gamma_-,\gamma_+-\sqrt3\gamma_-,-2\gamma_+\right)$ which determines the shape change (the anisotropy) {\it via} $\gamma_\pm$. Since the determinant of the 3-metric is given by $h=\det e^{\alpha+\gamma_{ij}}=e^{3\alpha}$, it is easy to recognize that the classical singularity appears for $\alpha\rightarrow-\infty$. Performing the usual Legendre transformations, we obtain the Hamiltonian constraint for this model. As is well known \cite{Mis69,CGM}, the dynamics of the Universe toward the singularity is described by the motion of a two-dimensional particle (the two physical degrees of freedom of the gravitational field) in a dynamically-closed domain. In the Misner picture, such a domain depends on the time variable $\alpha$; therefore, to overcome this difficulty, the so-called Misner-Chitr\'e-like variables \cite{Chi} are introduced. In such a scheme the dynamically-allowed domain becomes independent of the new time variable. The next step is to perform the ADM reduction of the dynamics. This scheme relies on the idea of solving the classical constraint, with respect to a given momentum, before implementing some quantization algorithm. In this way, we obtain an effective Hamiltonian which depends only on the physical degrees of freedom of the system.
Moreover, it is possible to choose the so-called Poincar\'e representation in the complex upper half-plane \cite{KM97}, in which the ADM ``constraint'' becomes \be\label{huv} -p_\tau\equiv H_{ADM}^{IX}=v\sqrt{p_u^2+p_v^2}, \ee where $\tau$ is the new time variable and $u,v$ are related to the anisotropy variables $\gamma_\pm$. The Taub model is nothing but the Bianchi IX model in the $\gamma_-=0$ case \cite{RS}. The dynamics of this Universe is equivalent to the motion of a particle in a one-dimensional closed domain. Such a domain corresponds to taking only one of the three equivalent potential walls of the Bianchi IX model. It is not difficult to see that this particular case appears for $u=-1/2$ and therefore the ADM Hamiltonian (\ref{huv}) becomes \be\label{hv} H_{ADM}^T=vp_v, \ee with $v\in[1/2,\infty)$. The above Hamiltonian (\ref{hv}) can be further simplified by defining a new variable $x=\ln v$, and becomes \be\label{ht} H_{ADM}^T=p_x\equiv p, \ee which will be the starting point of the upcoming analysis. Let us stress that the classical singularity now appears for $\tau\rightarrow\infty$. \section{VII. Quantum Taub dynamics in the WDW scheme} In this section we focus our attention on the canonical quantum features of the Taub Universe, described by the Hamiltonian (\ref{ht}) with the boundary condition $x\in[x_0\equiv\ln(1/2),\infty)$. In particular, without discussing the computational details, we construct and analyze the motion of suitable wave packets in the $(\tau,x)$-plane. The result is plotted in Fig. 3 and the physical meaning of the configuration variable $x$ is clarified by the relation \be\label{anix} \gamma_+=\frac{e^{\tau-x}}{\sqrt3}\left(e^{2x}-\f34\right). \ee \begin{figure} \includegraphics[height=1.8in]{b0.eps} \caption{The evolution of the wave packets $\vert\Psi(\tau,x)\vert$ in the canonical case, i.e. $\beta=0$. The $x$ variable is in the $[x_0,5]$-interval.} \end{figure} As we can see from the picture, the wave packets follow the classical trajectories (for more details see \cite{BM07b}). The probability amplitude to find the particle (Universe) is peaked around these trajectories. In this respect no privileged regions arise, namely no dominant probability peaks appear in the ($\tau,x$)-plane. As a matter of fact, the ``incoming'' Universe ($\tau<0$) bounces against the potential wall at $x=x_0$ and then falls toward the classical singularity ($\tau\rightarrow\infty$). Therefore, as is well known, the Wheeler-DeWitt formalism is not able to shed light on the necessary quantum resolution of the classical cosmological singularity. As we will see in the next section, this picture is radically changed in the GUP framework. \section{VIII. Quantum Taub dynamics in the GUP scheme} Let us now analyze the quantum evolution of the Taub Universe in the deformed Heisenberg algebra formalism \cite{BM07b}. Namely, we perform a generalized quantization of this model based on the GUP approach. Let us stress that, from the ADM reduction of the dynamics, the variable $\tau$ is regarded as a time coordinate and therefore the conjugate couple ($\tau,p_\tau$) will be treated in a canonical way. In this way, considering the Hamiltonian (\ref{ht}), we deal with a Schr\"odinger-like equation \be\label{eqschtaub} i\partial_\tau\Psi(\tau,p)=\hat H_{ADM}^T\Psi(\tau,p). \ee As we have explained before, in the GUP approach we have lost all information about the position itself.
Therefore, the boundary condition has to be imposed on the ``quasiposition wave function'' (\ref{qwf}), in the sense that $\psi(\zeta_0)=0$ (with $\zeta_0=\langle\psi^{ml}_{\zeta}\vert x_0\vert\psi^{ml}_{\zeta}\rangle$, in agreement with the previous discussion). The solution of the eigenvalue problem resulting from (\ref{eqschtaub}) is the Dirac $\delta$-distribution $\psi_k(p)=\delta(p^2-k^2)$ and therefore the ``quasiposition wave function'' (\ref{qwf}) reads \begin{multline} \psi_k(\zeta)=\f1{k(1+\beta k^2)^{3/2}}\left[A\exp\left(i\frac{\zeta}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}k)\right)\right.+\\+\left.B\exp\left(-i\frac{\zeta}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}k)\right)\right], \end{multline} where $A$ and $B$ are integration constants. In this way, the boundary condition $\psi(\zeta_0)=0$ can be easily imposed; it fixes one constant, giving us the final form of the ``quasiposition'' eigenfunctions \begin{multline}\label{ef} \psi_k(\zeta)=\frac A{k(1+\beta k^2)^{3/2}}\left[\exp\left(i\frac{\zeta}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}k)\right)+\right.\\-\left.\exp\left(i\frac{(2\zeta_0-\zeta)}{\sqrt{\beta}} \tan^{-1}(\sqrt{\beta}k)\right)\right]. \end{multline} Let us construct and examine the evolution of wave packets. The analysis of the dynamics of such wave packets allows us to give a precise description of the evolution of the Taub Universe. They are superpositions of the eigenfunctions, i.e. \be\label{wapa} \Psi(\tau,\zeta)=\int_0^\infty dk A(k)\psi_k(\zeta)e^{-ik\tau}. \ee In the following we take $A(k)$ to be a Gaussian-like function \be\label{gau} A(k)=k(1+\beta k^2)^{3/2}e^{-\frac{(k-k_0)^2}{2\sigma^2}} \ee in order to simplify the explicit expression of the wave packets. The computation of (\ref{wapa}) for the eigenfunctions (\ref{ef}) is performed numerically, with the parameters chosen as follows: $k_0=1$ and $\sigma=4$. As we said, the parameter $\beta$, i.e. the presence of a nonzero minimal uncertainty in the configuration variable ($\Delta x_{min}=\sqrt\beta$), is responsible for the GUP effects on the dynamics. In fact, in the $\beta=0$ limit, the eigenfunctions (\ref{ef}) reduce to ordinary plane waves and the quasiposition $\zeta\rightarrow x$, i.e. we recover the WDW scheme. \begin{figure} \includegraphics[height=1.8in]{b001.eps} \includegraphics[height=1.8in]{b01.eps} \includegraphics[height=1.8in]{b1.eps} \caption{The evolution of the wave packets $\vert\Psi(\tau,\zeta)\vert$ in the GUP framework. The panels correspond to $\beta=0.01$, $\beta=0.1$ and $\beta=1$, respectively. For smaller $\beta$ the canonical case is recovered.} \end{figure} Therefore, in order to understand the alterations induced by the deformed Heisenberg algebra on the canonical Universe dynamics, we have to analyze different $\beta$-regions. In fact, when the ``deformation'' parameter $\beta$ becomes more and more important, i.e. when we are at a scale which allows us to appreciate the GUP effects, the evolution of the wave packets differs from the canonical case. In particular, we can distinguish between three different $\beta$-regimes: \begin{itemize} \item Let us first consider the ($\beta\sim\mathcal O(10^{-2})$)-region. In this regime the wave packets begin to spread, and constructive and destructive interference between the incoming and outgoing waves appears. The probability amplitude to find the Universe is still peaked around the classical trajectory, but ``not as much'' as in the canonical case.
\end{itemize} \begin{itemize} \item When this parameter becomes more important, i.e. $\beta\sim\mathcal O(10^{-1})$, we can no longer distinguish an incoming from an outgoing wave packet. At this level it is meaningless to speak about a wave packet which follows the classical trajectory. Moreover, the probability amplitude to find the Universe is, in some sense, peaked in a specific region of the ($\tau,\zeta$)-plane, i.e. for $\zeta\simeq0$. \end{itemize} \begin{itemize} \item As a last step, for $\beta\sim\mathcal O(1)$, a dominant probability peak ``near'' the potential wall appears. In this $\beta$-region there are also other small peaks for growing values of $\zeta$, but they are widely suppressed for bigger $\beta$. In this case, the motion of the wave packets shows a stationary behavior, i.e. they are independent of $\tau$. \end{itemize} Following this picture we are able to understand the GUP modifications to the WDW wave packet evolution. In fact, considering a sort of dynamics in the ``deformation'' parameter $\beta$, from small to ``big'' values of $\beta$, we can see how the wave packets ``escape'' from the classical trajectories and approach a stationary state close to the potential wall; this picture is plotted in Fig. 4. From this point of view, the classical singularity ($\tau\rightarrow\infty$) is widely suppressed probabilistically, because the probability to find the Universe is peaked just around the potential wall. Another feature to be considered is that the large anisotropy states are not privileged. In fact, the most probable states, as we can see from the picture, are those for $\zeta\simeq0$, i.e. from equation (\ref{anix}) we obtain $\vert\gamma_+\vert\simeq e^\tau/10$. Therefore, as far as predicting a quasi-isotropic Universe is concerned, the GUP wave packets exhibit a better behavior than those in the WDW theory.
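A minimal numerical sketch of the wave packets (\ref{wapa}) is reported below (in Python; note that the prefactor $k(1+\beta k^2)^{3/2}$ of (\ref{gau}) cancels against the denominator of (\ref{ef}), and the $k$-integration is truncated at a finite cutoff -- both simplifications are assumptions of this sketch):
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

beta, k0, sigma = 1.0, 1.0, 4.0        # parameters used in the text
zeta0 = np.log(0.5)                    # wall position: x_0 = ln(1/2)

k = np.linspace(1e-6, 40.0, 8001)      # truncated momentum grid
a_k = np.arctan(np.sqrt(beta) * k) / np.sqrt(beta)
gauss = np.exp(-(k - k0)**2 / (2.0 * sigma**2))

def packet(tau, zeta):
    """|Psi(tau, zeta)|; the prefactor k (1 + beta k^2)^(3/2) of A(k)
    cancels against the denominator of the eigenfunctions psi_k."""
    eig = np.exp(1j * zeta * a_k) - np.exp(1j * (2.0 * zeta0 - zeta) * a_k)
    return abs(trapezoid(gauss * eig * np.exp(-1j * k * tau), k))

# sample |Psi| on a coarse (tau, zeta) grid and locate the dominant peak
taus = np.linspace(-10.0, 10.0, 21)
zetas = np.linspace(zeta0, 5.0, 41)
vals = np.array([[packet(t, z) for z in zetas] for t in taus])
it, iz = np.unravel_index(np.argmax(vals), vals.shape)
print(f"dominant peak at tau = {taus[it]:.1f}, zeta = {zetas[iz]:.2f}")
\end{verbatim}
By construction $\Psi(\tau,\zeta_0)=0$, and lowering $\beta$ toward zero reproduces the travelling packets of the WDW scheme. \section{IX. Concluding Remarks} The effects of a modified Heisenberg algebra, which reproduces a GUP as it appears in studies on String Theory \cite{String}, on the Big-Bang singularity and on the Taub model have been shown. In the case of the flat FRW Universe, the evolution is performed with respect to the scalar field taken as an emergent time, and the model appears to be singularity-free. Furthermore, suitable wave packets were constructed and their dynamics toward the classical singularity analyzed. As a matter of fact, such a Universe shows a stationary behavior toward the Planckian region and no evidence for a Big-Bounce seems to come out. The dynamics of the Taub model, on the other hand, was investigated in terms of an internal variable related to the Universe isotropic volume. Also in this case the Universe exhibits a singularity-free behavior. As a matter of fact, the wave packets stop following the classical trajectories toward the singularity and a dominant peak (near the potential wall region) in the probability amplitude to find the Universe appears. Moreover, the large anisotropy states, i.e. those for $|\gamma_+|\gg1$ ($|x|\gg1$), are probabilistically suppressed. \bibliographystyle{aipproc}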
\section{Introduction} \label{intro} In recent years computer simulations have become an ever more important tool to study a large spectrum of physical problems, a trend supported by the increasing processor throughput and by advances in parallel computing research. Nonetheless, while the constraint due to the cost in computational time is gradually easing, other bottlenecks remain, such as the limited or expensive access to licensed (i.e. commercial) or otherwise closed-source codes, which are hard to verify and rigorously validate. While several promising open-source libraries and codes (Deal.II, UG, dune, GetFEM, just to cite a few) are now available, there is still an urgent need to develop specific tools and workflows to approach particular classes of problems. This is particularly problematic in the case of very complex physics or inherently multi-scale problems such as heterogeneous media where, despite the current growing trend in non-physical data-driven science, computer modelling still remains the only practical path to an improved knowledge and understanding and, eventually, to design and optimisation. These problems, in fact, often require not only efficient discretisation and solvers, but also specific features, and a wide range of pre- and post-processing tools. Porous media, in their wide interpretation, are a prominent category of multi-scale problems, and, as such, form the basis and the motivation of the model problems explored in this paper~\cite{joekar2016pore,perovic2017multi}. The purpose of this work is thus to present a series of tools and methods which can be used to approach a wide range of problems and applications in porous media, characterised by different geometrical descriptions and involving several transport phenomena and dynamics: from transport of dilute suspensions to heat transfer in porous structures. Ultimately, the aim of this paper is to propose a fully open-source simulation workflow, to show the validity and feasibility of a fully \textit{in-silico} investigation, and to introduce some specific novel methods and formulations to streamline the modelling and upscaling steps. We demonstrate this approach by solving three proof-of-concept problems. Although these are sometimes characterised by some ideal assumptions (e.g., geometrically simplified models, mostly linear and constant parameters, decoupled multi-physics), the tools presented in this work can be readily used for a number of important applications. \mat{We could cite some applications: batteries, catalytic reactors, filters, fluidised beds, CO2 storage, membranes?} Within the computational workflow mentioned above, we thus present here specific developments and advancements to: \begin{itemize} \item generate realistic packings of solid grains of arbitrary shapes, suitable for the modelling of both natural formations (e.g. aquifers) and artificial packings (e.g.
packed bed reactors), a class of unit operations central to process engineering~\citep{P003}; \item generate random arrangements of spheres or ellipsoids with a tunable porosity, which can be used to obtain realistic representations of solid dispersions with a variable amount of dispersed matter (as will be shown in this work); \item improve on the classic boundary conditions used in unsteady linear transport problems, to strongly reduce computational times in the investigation of the upscaled dynamics of a quasi-periodic problem, bringing a definite improvement over the ``naive'' methods, especially for problems characterised by very large scales of variation of the quantity of interest (e.g. advection-diffusion problems at very high P\'eclet numbers); this technique was already successfully used in a recent work of ours~\citep{boccardo2018robust}; \item efficiently define and solve closure cell problems for diffusion in heterogeneous anisotropic materials. \end{itemize} Other mathematical models and open-source simulation tools, based on the same computational platform, can also be profitably employed in conjunction with the methods explored here, to overcome some of the limitations. These include a quadrature-based method of moments to deal with the solution of PDF transport equations \citep{openqbmm} (e.g., evolution of particle populations or other additional internal coordinates such as transit time), and stochastic sampling techniques based on multi-level Monte Carlo methods to deal with uncertainty quantification and stochastic upscaling in heterogeneous porous media systems~\citep{icardi2016predictivity}. The structure of the paper is thus the following: in Section \ref{sec:eq} the theoretical bases of the investigated problems are laid down, given that the real-world test cases explored share the same theoretical background, fully or in part. Section~\ref{sec:numerics} summarises the numerical techniques, based on a finite-volume formulation. Finally, the actual computational workflow is detailed in Section \ref{sec:results} for each case: from the generation of the geometrical model, to the creation of a suitable mesh, to the presentation of the actual results. Specifically, the first part deals with transport of a solute in a porous medium composed of a periodic arrangement of solid grains, with added attention to the meshing strategies employed in the simulation. The second part explores fluid flow and heat transfer in a pipe where varying amounts of solid matter have settled on its bottom, focusing on the resulting effect on the heat transfer coefficient of the system. Lastly, another heat transfer problem is presented, this time considering the case of a solid dispersion, where the continuous matrix and the dispersed inclusions are characterised by different thermal diffusivities. In this way, the reader will be able to more effectively peruse this work, focusing on the application of interest and on the specific computational pipeline for its development. \section{Model equations} \label{sec:eq} Let us consider a generic two-phase heterogeneous material (cf. Fig.~1), denoted with $\Omega=\Omega_{1}\cup\Omega_{2}$, $\Gamma$ being the interface between them, and let us split the external boundary into the inlet $\partial\Omega_{-}$, outlet $\partial\Omega_{+}$, and lateral boundary $\partial\Omega_{\ell}$.
\input{squareDomain} In the following, we will consider a few simplifications, the chief of which is to assume a saturated porous medium, constituted by non-connected irregular solid grains immersed in a continuous phase, either liquid (i.e. a Newtonian incompressible fluid at room temperature) or solid (with different characteristics from the grains). \subsection{Flow field} The equations governing the fluid phase are the stationary Navier-Stokes equation\footnote{Body forces are neglected.} and the mass balance law: \be{Navier-Stokes} \rho (\nabla \mathbf{u}) \mathbf{u} = -\nabla p + \mu \Delta \mathbf{u} \ , \expandafter\end{equation} \be{mass-balance} \nabla \cdot \mathbf{u} = 0 \ , \expandafter\end{equation} where $\rho$ is the constant fluid density (kg m$^{-3}$), $\mathbf{u}$ is the effective fluid velocity (m s$^{-1}$), $p$ is the pressure (kg m$^{-1}$ s$^{-2}$) and $\mu$ is the fluid dynamic viscosity (kg m$^{-1}$ s$^{-1}$). When inertia is negligible, \eq{Navier-Stokes} reduces to the Stokes equation: \be{Stokes} -\nabla p + \mu \Delta \mathbf{u} = 0\ . \expandafter\end{equation} This case is usually referred to as ``creeping flow'', typically encountered in several environmental applications: \hl{all our simulations are performed in this range, in laminar conditions}. \subsection{Transport models} Coupled to the fluid phase, we generally solve a transport problem, assuming it does not have a back-coupling with the flow. This is true for small dilute particles or for solutes. When no flow is present (i.e. the continuous phase is a solid), there is no need to solve the flow field, although in many applications advection can be due to electrostatic forces or other physics, which we neglect here. \subsubsection{Eulerian equations} Working in an Eulerian framework, it is possible to model the concentration transport of solutes or of a dilute suspension of colloidal particles as: \be{Trasporto} \frac{\partial c}{\partial t} + \mathbf{u} \cdot \nabla c = \nabla \cdot (D \nabla c) \ , \expandafter\end{equation} where $c$ is the concentration (kg m$^{-3}$) and $D$ is the diffusion coefficient (m$^2$ s$^{-1}$); for the advective term we made use of the incompressibility condition \eq{mass-balance}. As said above, the interactions between solid and fluid phases are described with \textit{one-way coupling}, meaning that the fluid (and external forces) will affect the motion of the particles, but not vice versa. The diffusion coefficient for small particles can be estimated with the Stokes-Einstein equation: \be{Stokes-Einstein} D = \frac{\kappa_{\textrm{B}}T}{3 \pi \mu d_{\textrm{p}} }\ , \expandafter\end{equation} where $\kappa_\textrm{B}$ is the Boltzmann constant, $T$ is the temperature (K) and $d_\textrm{p}$ is the particle diameter (m). When dealing with solid phases (inside the grains or in both domains), $\mathbf{u}=0$ and $D$ is a solid diffusion coefficient (possibly a tensor). It is worth noting that an equation of the form of \eq{Trasporto} applies to heat transfer problems (studied in Sections \ref{sec:pipeSand} and \ref{sec:soliDisp}) as well. In this case, it is convenient to introduce the following notation: \begin{equation}\label{eq:heatTransfer} \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \nabla \cdot (\alpha \nabla T) \ , \end{equation} where $T$ is the fluid temperature (K) and the diffusive coefficient is now the thermal diffusivity (scalar or tensor) $\alpha$ (m$^2$ s$^{-1}$).
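To fix the orders of magnitude involved, the following minimal Python sketch evaluates \eq{Stokes-Einstein} for a colloidal particle in water; the temperature, viscosity, and particle diameter used here are illustrative assumptions, not values taken from the simulations presented later.
\begin{verbatim}
# Minimal sketch: Stokes-Einstein diffusion coefficient,
# D = kB*T / (3*pi*mu*d_p), all quantities in SI units.
import math

def stokes_einstein(T, mu, d_p):
    k_B = 1.380649e-23  # Boltzmann constant (J/K)
    return k_B * T / (3.0 * math.pi * mu * d_p)

T = 293.0      # temperature (K), assumed
mu = 1.0e-3    # dynamic viscosity of water (kg/m/s), assumed
d_p = 1.0e-6   # particle diameter (m), assumed
print(stokes_einstein(T, mu, d_p))  # ~4.3e-13 m^2/s
\end{verbatim}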
\subsection{Scalar quasi-periodic boundary conditions}\label{sec:bc} Eulerian transport problems, even in the case of periodic geometry and flow, cannot be solved with simple periodic conditions on a single periodic unit cell, either due to the intrinsic evolution in space and time of the equations, which requires a larger domain to be solved, or due to the non-conservative nature of the equations. However, when the equation is linear and we are solely interested in the asymptotic (long-time, infinitely far, self-similar) behaviour, we can reformulate the problem to find a stationary \textit{quasi-periodic solution} (up to a multiplicative or additive constant, depending on the nature of the problem). Dropping the time dependence, we can formulate a generic linear transport problem as \be{quasi-periodic} \div{(\mathfrak{L}c)}=\mathfrak{R}c+\mathfrak{F}\qquad \mbox{in}\quad\Omega \expandafter\end{equation} where $\mathfrak{L}$ is a flux operator (e.g., advection-diffusion fluxes), $\mathfrak{R}$ is a non-conservative operator (e.g., reactions, differential operators not in divergence form), and $\mathfrak{F}$ is a source term. These operators are all assumed to be linear in $c$ but possibly dependent on the space variable $\mathbf{x}$. We consider the following internal boundary conditions\footnote{On the porous matrix walls, in the case of a perforated domain. Otherwise, for dual domain problems, this has to be replaced with interface conditions and an additional equation for the other domain has to be considered.}: \be{quasi-periodic-bc} \mathfrak{L} c = \mathfrak{f}(c)\qquad \mbox{on}\quad\Gamma \expandafter\end{equation} with $\mathfrak f$ being a generic (space-dependent, possibly non-linear) flux at the wall. We aim to find a self-similar solution of \eq{quasi-periodic}, i.e., a solution in the smallest periodic cell with quasi-periodic external boundary conditions of the type\footnote{Depending on the form of the flux operator, the resulting PDE can be elliptic, parabolic or hyperbolic. We consider here conditions both for the solution and for its normal gradient, generally needed for elliptic PDEs.}: \be{quasi-periodic-bc2} c|_{\partial\Omega^-} = \phi(c|_{\partial\Omega^+}), \qquad \nabla_{n} c|_{\partial\Omega^-} = \psi(c|_{\partial\Omega^+}) \expandafter\end{equation} where $\phi$ and $\psi$ are linear functions and $\partial\Omega^\pm$ are two (geometrically opposite) periodic boundaries\footnote{In three-dimensional cubical elementary volumes, one can write separate equations for $x,y,z$ periodicity.}. It follows that, generally, $\psi(c)=\phi'(c)\nabla_{n} c$ to ensure the quasi-periodicity of the normal gradient. From simple compatibility conditions for the existence and uniqueness of solutions, we can distinguish the following cases: \begin{itemize} \item \textit{Conservative transport:} $\mathfrak{R}=\mathfrak{F}=\mathfrak{f}=0$. This setup is the one of interest for studying \textit{passive asymptotic dispersion} (or effective diffusion when there is no advection) properties, as what remains is a fully conservative equation. The quantity of interest to quantify the dispersion is the average of the local gradients $\nabla c$. However, for periodic conditions, only a trivial constant solution (with $\phi(c)=\psi(c)=c$) exists. Therefore, we have to re-introduce a fictitious non-conservative constant source $\mathfrak{f}=\mathfrak{f}_{0}=\mathfrak{L}(\mathbf{p}\cdot\mathbf{x})$, where $\mathbf{p}$ is the direction in which we want to study the transport ($x$ in this case). This source term is zero for pure diffusion.
The resulting equation now has a non-trivial solution up to an additive constant. We therefore select a quasi-periodicity of the type: \be{cons-bc} \phi(c)=c+\phi_{0}, \qquad \phi_{0}=\spatavg{c}_{\partial\Omega^+}-\spatavg{c}_{\partial\Omega^-} \expandafter\end{equation} where $\spatavg{\cdot}$ represents an averaging (surface- or flux-weighted) operator\footnote{In fact, considering a linear and periodic operator $\mathfrak{L}$, surface and flux-weighted averages are equivalent, i.e., $\spatavg{\mathfrak{L}c}=\spatavg{c}$.}, and the constant $\phi_{0}$ is a variable of the problem that has to be computed to counter-balance the volumetric source term. Without loss of generality, we can then assume $\spatavg{\phi(c|_{\partial\Omega^-})}=0$. We can then compute the local gradients (whose norm gives the dispersion coefficient), produced by the transport operator $\mathfrak{L}$ under the macroscopic gradient $\phi_{0}$, by subtracting the source term $\mathfrak{f}_{0}$ as $$ \frac{ \nabla c - \nabla{(\mathbf{p}\cdot \mathbf{x})}}{\phi_{0}} $$ \item \textit{Homogeneous equation:} $\mathfrak{F}=0$ and $\mathfrak{f}(c)=\mathfrak{f}_{1}c$. This is the case of linear bulk and surface reactions, where, in general, the solution is given up to a multiplicative constant. This suggests looking for quasi-periodic solutions with \be{hom-bc} \phi(c)={\phi_{1}}{c}, \qquad \phi_{1} = \frac{\spatavg{\mathfrak{L}c}_{\partial\Omega^-}}{\spatavg{\mathfrak{L}c}_{\partial\Omega^+}} =\frac{\spatavg{c}_{\partial\Omega^-}}{\spatavg{c}_{\partial\Omega^+}} \expandafter\end{equation} in such a way that the quasi-periodic boundary conditions balance the non-conservative terms and allow the identification of $\phi_{1}$ as an equivalent reaction rate (observing that it can be related to the volume-averaging of all non-conservative terms). \item \textit{General linear case:} Integrating \eq{quasi-periodic} over the volume, and considering a standard advection-diffusion operator $\mathfrak{L}=\mathbf{u}+{D}\nabla$, a linear bulk reaction $\mathfrak{R}$, a linear surface reaction $$ \mathfrak{f}(c)=\mathfrak{f}_{1}c+\mathfrak{f}_{0} $$ and a linear transformation $\phi(c)=\phi_{1}c+\phi_{0}$, the following conditions hold for $\phi_{1}$ \begin{eqnarray} \nonumber \phi_{1} &=& \frac{\spatavg{\mathfrak{L}c}_{\partial\Omega^-}}{\spatavg{\mathfrak{L}c}_{\partial\Omega^+}} = 1 + \mathfrak{R}\frac{\int_{\Omega}{c}}{\int_{\partial\Omega^+}{\mathfrak{L}c}}+ \mathfrak{f_{1}}\frac{\int_{\Gamma}{c}}{\int_{\partial\Omega^+}{\mathfrak{L}c}}\\ \label{eq:gen-bc} &=& 1 + \mathfrak{R}\frac{\spatavg{c}}{\spatavg{\mathfrak{L}c}_{\partial\Omega^+}}\frac{|\Omega|}{|\partial\Omega^+|}+ \mathfrak{f_{1}}\frac{\spatavg{c}_\Gamma}{\spatavg{\mathfrak{L}c}_{\partial\Omega^+}}\frac{|\Gamma|}{|\partial\Omega^+|} \end{eqnarray} and for $\phi_{0}$ \begin{eqnarray} \nonumber \phi_{0} &=& \mathfrak{F}\frac{\int_{\Omega}1}{\int_{\partial\Omega^-}\mathfrak{L}1}+ \mathfrak{f_{0}}\frac{\int_{\Gamma}1}{\int_{\partial\Omega^-}\mathfrak{L}1}\\ \label{eq:gen-bc2} &=& \mathfrak{F}\frac{1}{\spatavg{\mathbf{u}\cdot\mathbf{n}}_{\partial\Omega^-}}\frac{|\Omega|}{|\partial\Omega^-|}+ \mathfrak{f_{0}}\frac{1}{\spatavg{\mathbf{u}\cdot\mathbf{n}}_{\partial\Omega^-}}\frac{|\Gamma|}{|\partial\Omega^-|} \end{eqnarray} As can be seen from the last equality, the integrals can be rewritten highlighting the geometric factors $\frac{|\Omega|}{|\partial\Omega^-|}$ and $\frac{|\Gamma|}{|\partial\Omega^-|}$, and the mean concentration values $\spatavg{c}$ in the volume and on the surfaces.
If the flow field is incompressible, $\spatavg{\mathbf{u}\cdot\mathbf{n}}_{\partial\Omega^-}$ is equivalent to the mean (volumetric) velocity. \item \textit{Non-linear case:} For general non-linear operators, one has to find a function $\phi(c)$ such that $$ {\int_{\partial\Omega^+}{\mathfrak{L}\phi(c)}} = \int_{\partial\Omega^+}{\mathfrak{L}c} + {\int_{\Omega}{\mathfrak{R}c}}+ {\int_{\Gamma}{\mathfrak{f}(c)}} $$ In general, this problem is not easily solvable. One possibility is to assume a ``macroscopically'' linear reaction, i.e., $\phi$ to be linear. If $\phi(c)=\phi_{1}c$, the problem of finding $\phi_{1}$ becomes equivalent to a generalised non-linear eigenvalue problem. There is no guarantee, however, that the problem admits a solution. More generally, in future work, we will investigate a more general algorithm to find linear and non-linear functions $\phi$. \end{itemize} This proposed approach can be proven to be equivalent to the several cell problems formally derived by two-scale expansions \citep{hornung2012homogenization}, and it is particularly convenient in the cases where the phenomenon of interest would require a very long physical time or, especially, a very large spatial domain. This is true in most upscaling problems where, to derive upscaled equations, the asymptotic regime is often sought. Simulations on a smaller periodic domain with these sets of \textit{quasi-periodic boundary conditions} can significantly reduce the computational effort. It is interesting to notice that, in two-scale asymptotic homogenisation, this is usually overcome with the formal derivation of cell problems with periodic conditions. Our approach, despite being more phenomenological, is indeed equivalent and can also be applied to problems where a homogenisation limit does not exist. From the computational point of view, \eq{cons-bc}, \eq{hom-bc}, and \eq{gen-bc} have to be implemented via outer iterations, with $\phi_{1}$ and $\phi_{0}$ computed from the field $c$. In our implementation, outer iterations are always present anyway, to allow for non-linear terms, non-linear discretisation schemes, and explicit corrections for non-orthogonal cells. At each iteration, the boundary conditions are computed with the equations above. The iterations stop when the residuals of the equations fall below a certain threshold and, at the same time, the estimated quasi-periodic boundary conditions converge to a constant value. \section{Numerical discretisation}\label{sec:numerics} \subsection{Numerical schemes} \textsf{OpenFOAM} implements the finite volume method (FVM) \cite{FerzigerPeric2002} with a co-located grid arrangement. Internal values are stored at the cell centres, while boundary values are stored at the face centroids on the corresponding boundary cell faces. The Rhie-Chow approach \cite{RhieChow1983,shenImproved2001} is used to address the pressure-velocity decoupling observed when this grid arrangement is adopted. When computing fluxes, the value of each variable is computed at the centroid of each interior face through a reconstruction technique, following \cite{darwish_tvd_2003} (more details on the implementation can be found in \cite{jasak1996error}), as shown below. The implementation of numerical schemes for convection and diffusion follows the standard FVM approach \cite{FerzigerPeric2002,Darwish2016}. The convective term is discretised by applying Gauss' theorem.
If we consider a scalar $c$, advected with a velocity $\mathbf{u}$, and we indicate the volume of a computational cell with $V$, while $\partial V$ is its surface, we can write \begin{equation} \int_V \nabla \cdot \left( \mathbf{u} c \right) \textrm{d}V = \int_{\partial V} (\mathbf{u} c) \cdot \mathbf{n}\ \textrm{d}S \approx \sum_f c_f \mathbf{u}_f \cdot \mathbf{n}_f S_f. \end{equation} The values of both $\mathbf{u}_f$ and $c_f$ need to be evaluated at cell faces; $S_f$ is the surface area of the considered face and $\mathbf{n}_f$ is the associated outward unit vector. If a second-order TVD scheme is used, this is achieved by means of the reconstruction procedure illustrated in \cite{darwish_tvd_2003} (Sec. 2), in which the face value of a variable is obtained as \begin{equation} c_f = c_P + \frac{1}{2}\psi(r_f) (c_W - c_P), \end{equation} where $c_P$ is the value of $c$ at the cell centre (labelled $P$, as illustrated in Fig.~\ref{fig:cartoon}), $c_W$ the one at the downwind node, and $\psi(r_f)$ is the limiter function. The quantity $r_f$ is defined as \cite{darwish_tvd_2003} \begin{equation} r_f = \frac{c_P - c_E}{c_W - c_P}, \end{equation} where $c_E$ is the upwind value of $c$. The advantage of this procedure consists in its generality, which allows the first-order upwind scheme and the central differencing scheme to be recovered by setting $\psi(r_f) = 0$ and $\psi(r_f) = 1$, respectively \cite{darwish_tvd_2003}. The diffusion term is discretised considering the Laplacian operator \cite{OpenFoamProgrammersGuide} \begin{equation} \int_V \nabla \cdot \left( \Gamma \nabla c \right) \ \textrm{d} V = \int_{\partial V} \left( \Gamma \nabla c \right) \cdot \mathbf{n}\ \textrm{d}S \approx \sum_f \Gamma_f \left(\nabla c \right)_f \cdot \mathbf{n}_f S_f, \end{equation} with \begin{equation} \left(\nabla c \right)_f \cdot \mathbf{n}_f = \frac{c_E - c_P}{|\mathbf{d}|} \ \cos \theta + \mathcal{C}_\textrm{g}, \label{eq:FaceNGrad} \end{equation} where $\mathbf{d}$ is the vector joining the cell centres of the two adjacent cells. The term $\mathcal{C}_\textrm{g}$ is an explicit correction for non-orthogonality \cite{Darwish2016}. Gradients are computed either with Gauss' integration \begin{equation} \int_V \nabla c \ \textrm{d} V \approx \sum_f c_f \mathbf{n}_f S_f, \label{eq:GaussGrad} \end{equation} where the face values of $c$ are found using a linear reconstruction of the cell values on cell faces, or using the least-squares approach. Several implementations of the latter are available in \textsf{OpenFOAM}: the second-order least-squares approach, available in different flavours, either using the cell-centred values or the centre and nodal values for the calculation, and a fourth-order least-squares approach. The adoption of the Gauss gradient approach (Eq.~\eqref{eq:GaussGrad}) is only appropriate on regular meshes, where it has second-order accuracy (see, among others,~\cite{FerzigerPeric2002}). Additionally, this approach is sensitive to mesh anisotropy. Consequently, on meshes with varying cell size, mesh refinement and arbitrarily shaped cells, the least-squares approach is the recommended choice to ensure second-order accuracy. Thus, we adopt it when dealing with irregular meshes, as described in the following cases.
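To make the reconstruction above concrete, the following minimal Python sketch implements the TVD face-value formula with the limiter as a pluggable function; the van Leer limiter included here is a classical TVD choice used purely as an illustrative assumption, alongside the settings that recover the upwind and central schemes.
\begin{verbatim}
# Minimal sketch of the TVD face reconstruction:
#   c_f = c_P + 0.5 * psi(r_f) * (c_W - c_P)
#   r_f = (c_P - c_E) / (c_W - c_P)
# with c_P the cell-centre value, c_W the downwind and c_E the
# upwind value; psi=0 gives first-order upwind, psi=1 central.

def psi_upwind(r):
    return 0.0

def psi_central(r):
    return 1.0

def psi_van_leer(r):  # classical TVD limiter (illustrative choice)
    return (r + abs(r)) / (1.0 + abs(r))

def face_value(c_P, c_W, c_E, psi):
    dc = c_W - c_P
    if dc == 0.0:           # locally flat: every scheme returns c_P
        return c_P
    r_f = (c_P - c_E) / dc  # smoothness indicator
    return c_P + 0.5 * psi(r_f) * dc

# Example on a smooth increasing profile: c_E=1, c_P=2, c_W=4
for name, psi in [("upwind", psi_upwind),
                  ("central", psi_central),
                  ("van Leer", psi_van_leer)]:
    print(name, face_value(2.0, 4.0, 1.0, psi))
\end{verbatim}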
Another source of potential difficulties is the presence of the explicit non-orthogonal correction in the calculation of the face-normal gradient (Eq.~\eqref{eq:FaceNGrad}): being explicit, the correction term may lead to unbounded results when the non-orthogonality is too large, depending on the mesh quality and on the problem being solved, with a consequent loss of robustness of the scheme. The approach used in OpenFOAM is presented in~\cite{jasak1996error}. To avoid unboundedness, one can either apply the correction only when the mesh non-orthogonality is high, or exclude the correction altogether. A direct consequence of this is a reduction of the accuracy of the numerical scheme, which deteriorates when the non-orthogonality problem is particularly serious. \begin{figure}[h!] \centering \input{computational_cells.tex} \caption{Schematic representation of two computational cells. In this picture, the cell on the right (in blue) represents the upwind cell.} \label{fig:cartoon} \end{figure} \subsection{Meshing} Aside from the issues concerning the generation of the geometry, the inherent randomness which characterises most porous systems also affects the generation of the computational mesh. Far from being a marginal part of the workflow, a fundamental trade-off in the setup of the computational model has to be evaluated, weighing a numerically satisfactory mesh against the increasing cost in machine time of a finer discretisation, both for the mesh generation itself and for the consequent flow/transport problem. Studying a satisfactory portion of a porous medium (i.e.: a \textit{representative elementary volume}) means having to deal (in most cases) with a random system, which immediately excludes the possibility of employing structured (or block-structured) meshes, usually favoured for their efficiency. Moreover, one often has to deal with grids composed of a number of elements up to tens of millions \citep{icardi2014pore} or much more \citep{mosby2016computational}, making any kind of ``naked-eye'' inspection unfeasible. As a result, it is essential to develop a robust pipeline for both the generation of the mesh, which can be controlled by an a-priori discretisation strategy (not reliant on qualitative analysis of the resulting grid), and an efficient process of testing for error convergence based on the simulation results. Here, we have tested two different unstructured mesh generators, both present in the \textsf{OpenFOAM} suite: \textsf{snappyHexMesh} and \textsf{foamyHexMesh} (henceforth sHM and fHM, respectively, for brevity). More details on these tools are available in the Appendix, while a test case with the results of the meshing algorithms and their effects on the numerical solution is reported in Section~\ref{sec:mesh}. We note, for the interested reader, that some valid meshing alternatives can be found in \textsf{Cubit} or \textsf{Gmsh}~\cite{gmsh}, the latter being a freely available and consolidated tool. \section{Results} \label{sec:results} \subsection{Transport in periodic sphere packings}\label{sec:BCC} One of the fields which most commonly deals with the study of transport in porous media is filtration theory. For example, it has been applied to the investigation of aquifer contamination and remediation \citep{Krol2013,Rolle1,messina2015,Crevacore2016271}. Correspondingly, a lot of effort has been expended in the past decades to approach this problem, both experimentally and via theoretical investigation.
These kinds of problems are basically equivalent, in their simplest formulation, to linear advection-diffusion equations, for which upscaling techniques are robust and well known~\citep{hornung2012homogenization,bearBook}. However, despite the simplicity of the problem, some issues are still unsolved, such as the quantitative understanding of the role of geometrical parameters in transport processes or the effect of non-linearities. In recent years, computational studies have begun to complement the available analytical and empirical results, especially when the physical problem presents further difficulties (e.g.: mixing-controlled reactions, heterogeneous materials, etc.) which render an a-priori upscaling process exceedingly difficult, if not outright impossible~\citep{battiato2011applicability}. As mentioned elsewhere, the pre-processing step constitutes a big part of a CFD case of transport in porous media. While in the next sections we will explore the process of generating the geometrical model, here we will explore the issue of the choice of the optimal meshing strategy and its consequences on the convergence of the finite-volume scheme. To this end, we will consider here the Stokes flow over a unit cell of a periodic arrangement of spheres. Then, in Section \ref{sec:lagrang}, we turn to the upscaling step, and specifically to how to conveniently solve unsteady Eulerian transport equations and to an alternative Lagrangian approach to extract residence-time distribution curves for a solute transport problem. \subsubsection{Unstructured mesh generation}\label{sec:mesh} Tests were performed meshing a body-centred cubic structure, where the grains were spheres of equal size \citep{P006}, and the medium porosity was held fixed at $0.366$ (i.e.: close to the minimum porosity, but sufficiently high to avoid issues due to the meshing of contact points between adjacent spheres). Four different meshing strategies were then employed: \begin{itemize} \item \textsf{snappyHexMesh} with uniform cell size (sHM\_U); \item \textsf{snappyHexMesh} with one level of refinement (sHM\_R); \item \textsf{foamyHexMesh} with uniform cell size (fHM\_U); \item \textsf{foamyHexMesh} with one level of refinement (fHM\_R). \end{itemize} A representation of these different meshing strategies can be found in Fig.~\ref{fig:meshes}. When present, the refinement level is located around the grain surface, in order to allow a better discretisation of the boundary layers (of the momentum and concentration/temperature gradients). For each meshing strategy, meshes of different cell size were built, paying attention to both the mean cell size and the total number of cells. This latter parameter, see Table \ref{tab:1}, was used to analyse the grid convergence results. Technical details on the mesh generation are reported in the Appendix. Equations (\ref{eq:Navier-Stokes}) and (\ref{eq:mass-balance}) were solved imposing a pressure drop between two opposite faces of the domain (along the $x$-axis). The resulting (mean) velocity is thus comparable with environmental applications (Re $\approx$ 3). \begin{figure} \centering \includegraphics[width=.48\textwidth]{meshes.pdf} \caption{Comparison of four meshing strategies: sHM\_U (top left), sHM\_R (top right), fHM\_U (bottom left), fHM\_R (bottom right). Each mesh appearing in this picture has a total number of cells slightly below $5\times10^5$.} \label{fig:meshes} \end{figure} \begin{table} \caption{Total number of cells for ten increasingly fine meshes.
The four columns report the cell number for a uniform and a wall-refined meshing strategy, for \textsf{snappyHexMesh} and \textsf{foamyHexMesh} respectively.} \label{tab:1} \begin{tabular}{lrrrr} \hline\noalign{\smallskip} & sHM\_U & sHM\_R & fHM\_U & fHM\_R \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & 1296 & 1296 & 1278 & 914 \\ 2 & 3744 & - & 3768 & 2799 \\ 3 & 5952 & 4176 & 5991 & 4944 \\ 4 & 11376 & 8016 & 11509 & 10133 \\ 5 & 22800 & 21232 & 22792 & 22919 \\ 6 & 44976 & 48288 & 44987 & 46638 \\ 7 & 86352 & 84280 & 87856 & 90763 \\ 8 & 172944 & 168688 & 182910 & 169449 \\ 9 & 302880 & 309360 & 333254 & 280978 \\ 10 & 601352 & 606760 & 698629 & 501542 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsubsection{Fluid flow convergence study} Using the meshes described above, we solve the Stokes problem\footnote{We remind the reader that, although in the OpenFOAM implementation the equation actually solved is Navier-Stokes, given that \hl{the simulations are performed in laminar flow, this is equivalent to the simple Stokes problem.}}, Eqs.~(\ref{eq:Navier-Stokes}) and (\ref{eq:mass-balance}), and analyse the numerical performance of the finite-volume schemes. \begin{figure} \includegraphics[width=.48\textwidth]{gridConvergence4.pdf} \caption{Numerical test to verify the convergence properties of the solver for Stokes flow around the body-centred cubic packing (shown in the upper right corner). Relative error in the mean velocity with respect to the total degrees of freedom of the mesh: uniform mesh with third-order schemes (black~$\times$), uniform mesh with second-order schemes (red~$\circ$), refined mesh (green~$\Box$), unstructured Voronoi mesh (blue~$\Diamond$). Power-law curves are reported according to an equivalent cell size $h$.} \label{fig:convergence} \end{figure} In \fig{convergence} we can observe the effect of the various meshing strategies, combined with different schemes. The mean velocity (or, equivalently, the drag coefficient) is compared against a finer-resolution result (and against reference results in the literature \cite{khirevich2015coarse}). First of all, it is interesting to notice how, in the first part of the plot (i.e.: coarse grids), the error always decays approximately at second order. This is due to the fact that, in our meshing strategy, the surface of the sphere is not exactly preserved but is itself discretised. This can be easily estimated from simple geometrical arguments and the theory of elliptic operators. This ``geometrical error'' can instead behave very differently in the presence of significant boundary-layer effects (Navier-Stokes or boundary reactions). After this first regime, we can observe that, in general, the uniform meshes generated with \textsf{snappyHexMesh} perform significantly better, reaching second- and third-order convergence, according to the finite-volume scheme chosen. On the other hand, the refined grid sHM\_R loses the second-order accuracy due to the introduced non-orthogonality and skewness of the cells. As explained earlier, this can be corrected only with explicit terms that could cause unboundedness. Therefore we limit the corrections, and obtain a sub-optimal convergence rate. For this linear problem, the advantage of refining near the boundary does not pay off, as the computational savings are not enough to justify this deterioration of the numerical convergence.
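For completeness, the observed convergence orders discussed here can be extracted from pairs of successively refined meshes as in the minimal Python sketch below; in 3D the equivalent cell size scales as $h \sim N^{-1/3}$, with $N$ the total number of cells, and the error values used are hypothetical placeholders rather than the data of \fig{convergence}.
\begin{verbatim}
# Minimal sketch: observed convergence order from a refinement study,
#   p = log(e1/e2) / log(h1/h2),  with  h ~ N^(-1/3)  in 3D.
# The (cells, relative error) pairs below are hypothetical placeholders.
import math

def observed_order(N1, e1, N2, e2, dim=3):
    h1, h2 = N1 ** (-1.0 / dim), N2 ** (-1.0 / dim)
    return math.log(e1 / e2) / math.log(h1 / h2)

meshes = [(86352, 4.0e-3), (172944, 2.5e-3), (601352, 1.1e-3)]
for (N1, e1), (N2, e2) in zip(meshes, meshes[1:]):
    print(N1, "->", N2, ": p =", round(observed_order(N1, e1, N2, e2), 2))
\end{verbatim}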
Similarly, the unstructured meshing strategies based on Voronoi tessellation (fHM, both uniform and refined), despite being attractive for their robustness and simplicity, are characterised only by a linear decay of the error; please note that fHM\_R is not reported in \fig{convergence} for the sake of simplicity, as it behaves practically like fHM\_U. However, they seem to perform better for very coarse meshes, where the geometrical features can be better represented with a small number of Voronoi tiles. \subsubsection{Residence time distribution and breakthrough curves}\label{sec:lagrang} The solute transport problem can be solved either in a Lagrangian or in an Eulerian way, once the flow field has been computed (from now on, we will refer only to the results obtained with the finest uniform mesh with second-order schemes). For infinite P\'eclet numbers (purely hyperbolic equation), the transport can be fully characterised by the streamlines. This is particularly convenient since, within an Eulerian formulation, specific advection schemes and highly refined meshes would be needed to avoid significant artificial numerical diffusion or instabilities~\cite{benson2017comparison}. Streamlines are integrated with the \textsf{streamLine} post-processing tool available in OpenFOAM 4, and they are generated by seeding a cloud of points that can be located anywhere within the domain. In particular, we seeded points on the inlet face of the domain, perpendicular to the flow direction. Points can be placed either randomly or on a regular grid; the former strategy should be preferred to provide statistically reliable samples. In the presence of finite (possibly very small) diffusion, a linear drift-diffusion It\^o stochastic differential equation should instead be solved. This requires more sophisticated and expensive random-walk algorithms. These are usually coupled to the fluid flow equations, allowing a possible coupling between the solute concentration and the fluid. Despite the higher accuracy of Lagrangian methods for advection-dominated transport, the Lagrangian discretisation of more complex non-linear advection-diffusion-reaction problems and the coupling with the fluid flow can be non-trivial and give rise to significant statistical errors due to the finite number of trajectories. Here we follow 1000 particle trajectories from the inlet to the outlet and compute the cumulative distribution of arrival (or residence) times $f(t)$ in the pure advection regime, and compare it with Eulerian simulations at increasing P\'eclet numbers. The cumulative distribution of the arrival times of a random ensemble of particles initially positioned at the inlet is equivalent to the time integral of the concentration of particles at the outlet for an inlet concentration delta-distributed in time (the ``response'' function of an impulse at time $t=0$); this impulse response is, in turn, the time derivative of the so-called ``breakthrough curve'', i.e. the response to a Heaviside function. Having established a connection between Eulerian and Lagrangian simulations in the non-reacting case, what we want to study is the appearance of ``anomalous transport'' characteristics when the diffusion is small enough not to allow for significant mixing, resulting in particles following preferential directions and staying for long times on the same streamline. This, apart from very special cases (e.g. a Gaussian-distributed flow field), generates non-Fickian transport behaviour.
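A minimal Python sketch of this post-processing step is given below: it converts a set of Lagrangian arrival times into the empirical cumulative distribution $f(t)$ and the residual concentration $1-f(t)$ plotted in \fig{RTD}; the heavy-tailed synthetic sample used here is an assumed stand-in for the actual trajectory data.
\begin{verbatim}
# Minimal sketch: empirical residence-time distribution from Lagrangian
# arrival times. The synthetic heavy-tailed sample mimics preferential
# trajectories; it is not the simulation data of Fig. (RTD).
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                                      # advective front time
arrival = tau * (1.0 + rng.pareto(2.5, 1000))  # synthetic arrival times

t = np.sort(arrival)
f = np.arange(1, t.size + 1) / t.size          # empirical CDF f(t)
residual = 1.0 - f                             # residual concentration

# power-law tailing appears as a straight line in log-log coordinates
for frac in (0.5, 0.9, 0.99):
    k = int(frac * t.size) - 1
    print("t/tau =", round(t[k] / tau, 2), " 1-f =", round(residual[k], 3))
\end{verbatim}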
Such non-Fickian behaviour is a well-known problem in porous media which has received significant attention (see, for example, \cite{Dentz:Carrera:2007} and \cite{DentzGouzeAWR2012}). It can be easily understood if one tries to fit the three-dimensional results with an equivalent one-dimensional linear advection-diffusion equation (with constant velocity, equal to the mean velocity, and unknown diffusion\footnote{In this case, it would be more correct to talk about hydro-dynamical \textit{dispersion}, as the effective diffusion is caused both by molecular diffusion and by mechanical dispersion due to the heterogeneous velocity field.}). For the purpose of this section, simulations were performed on a face-centred cubic structure (made of spherical grains of equal size) with porosity 0.40. In \fig{RTD} the residual concentration $1-f(t)$ is shown for Lagrangian streamlines (equivalent to an infinite P\'eclet number) and for P\'eclet numbers ranging from 10 to 1000, against the dimensionless advective time $t/\tau$, where $\tau$ is the arrival time of the advective front (i.e., considering the mean velocity). At first, a uniform (across the inlet plane) boundary condition has been tested (cf. \fig{RTD}, continuous lines). Here it can be seen that for $Pe=10$ the transport is still dominated by diffusion, with a first arrival time lower than the front arrival time. Then, for higher P\'eclet numbers, the curves start to show a significant power-law ``tailing'', while the solution of a 1D advection-diffusion equation would predict an exponential decay. This is due to the incomplete mixing and to the preferential trajectories. Although it is not among the objectives of the paper to analyse the anomalous transport characteristics, it is interesting to notice here that a significantly different behaviour is observed when the technique and boundary conditions described in Section~\ref{sec:bc} are used (dashed lines). In fact, when the non-reactive stationary quasi-periodic problem is solved within the porous space, an ``asymptotic'' self-similar profile is obtained. When using this asymptotic profile as the boundary condition at the inlet, the resulting transport becomes clearly ``Fickian'', with exponential tails. This is because the profile found by the quasi-periodic ``closure'' represents the profile for which the stationary outlet concentration is uniform. Therefore, it can be considered as an inverse problem of finding the profile that cancels out the preferential trajectories. The black solid line finally represents the arrival (or transit, or residence) times of purely advective Lagrangian trajectories. As can be seen, the first arrival times match well with the high-P\'eclet case, but the absence of diffusion causes a very significant tailing with a power-law behaviour. \begin{figure} \includegraphics[width=.5\textwidth]{curve3_v2.pdf} \caption{Breakthrough curves $f(t)=\spatavg{c}_{u_{x}}$ for a single face-centred cubic cell computed from Eulerian simulations at finite P\'eclet numbers (respectively in red, green, and blue for Pe = 10, 100, and 1000) compared with the Lagrangian residence time distribution (in black). For the Eulerian cases, the continuous lines result from employing the classic boundary conditions, while the dashed lines are the results employing the scalar quasi-periodic boundary conditions described in Section~\ref{sec:bc}.
} \label{fig:RTD} \end{figure} \subsection{Flow over porous surfaces}\label{sec:pipeSand} In this section, we will explore the case of flow and heat transfer in a pipe containing, for a small portion of its volume, a number of spherical solid grains settled on its lower side. Heat transfer in pipes, and the consequent evaluation of the heat transfer coefficient, has been extensively studied~\citep{Bird1960,incropera2007fundamentals,choi1997momentum,manes2009turbulence,mossner2015}; in this work, we want to show how the presence of randomly arranged solid matter affects fluid flow and heat transfer. We compare the results with the analogous ``clean'' cases where no deposited matter is present. In particular, this section will serve as a proof of concept for the methodology we employed for the generation of the geometry comprising the settled matter (as described in the following paragraphs), and for the ease with which it is possible to explore the geometrical parameter space and the operating conditions. Then, an overview of the numerical details of the performed simulations, as well as the results obtained, is presented. \subsubsection{Geometry generation} As mentioned, all the geometrical models used in this work were built \textit{in-silico}, avoiding the costly step of obtaining experimental samples and adapting them for use in CFD codes. Specifically, the case study presented in this section was created with \textsf{Blender}, a free and open-source software package used in computer graphics applications, which can be used to perform rigid-body simulations. The settled matter is represented by uniformly-sized spheres, chosen with a diameter equal to one-twentieth of the pipe diameter. In the \textsf{Blender} simulation setup, the grains are first located above the container (which is at this stage represented by just the bottom half of the pipe, to let the granular matter settle into it), taking care to add randomness to their initial positions in order to avoid unwanted ``arranged'' structures in the final packing. Then, the solid grains are free to fall under the effect of gravity, which is the only external force. The final state of the system is given by the solution of the rigid-body problem, where the solid grains are treated as non-deformable and impenetrable entities\footnote{More precisely, \textit{rigid bodies} are defined as solids possessing an infinite repulsive potential on their surface.}, and the only interactions considered are instantaneous collisions and sliding. After reaching the final state, identified by the absence of relative motion between all the solid grains and the pipe, the geometric model is exported for use in the CFD code (substituting the half-pipe with the complete pipe for the simulations). This kind of methodology has been successfully used both to represent packings of catalyst particles in chemical reactors~\citep{P003,boccardo2019fine} and in digital rock physics applications~\citep{icardi2016predictivity}: the numerical details of the rigid-body simulations can be found therein. With the process just described, we created three different geometries, each characterised by an (increasing) volume of settled matter, roughly corresponding to an average height, $H_g$, of the settled packings equal to two, three, and four grain diameters respectively. In all cases, the pipe length is equal to three times the pipe diameter. The spatial domain was discretised with a sHM\_R strategy.
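For orientation, the sketch below shows the kind of \textsf{Blender} Python (bpy) script used for such a rigid-body settling setup; operator names and defaults vary between Blender versions, and all names, counts, and sizes (e.g. the \texttt{HalfPipe} object) are hypothetical illustrations rather than a tested recipe.
\begin{verbatim}
# Minimal sketch (to be run inside Blender): drop spheres into a
# half-pipe under gravity via the rigid-body engine. Object names,
# counts and sizes are illustrative assumptions.
import random
import bpy

D_pipe = 1.0
d_grain = D_pipe / 20.0            # grain diameter: 1/20 pipe diameter

# half-pipe container (assumed already modelled) as a passive collider
container = bpy.data.objects["HalfPipe"]      # hypothetical object name
bpy.context.view_layer.objects.active = container
bpy.ops.rigidbody.object_add()
container.rigid_body.type = 'PASSIVE'
container.rigid_body.collision_shape = 'MESH'

# spheres seeded above the container with randomised positions
for i in range(400):
    loc = (random.uniform(-0.4, 0.4) * D_pipe,
           random.uniform(0.0, 3.0) * D_pipe,
           (1.0 + 0.05 * i) * D_pipe)         # staggered drop heights
    bpy.ops.mesh.primitive_uv_sphere_add(radius=d_grain / 2.0,
                                         location=loc)
    bpy.ops.rigidbody.object_add()
    bpy.context.active_object.rigid_body.type = 'ACTIVE'

# let gravity act: step through frames until the grains come to rest
for frame in range(1, 500):
    bpy.context.scene.frame_set(frame)
\end{verbatim}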
An in-depth description of the use of the sHM mesh generator in the case of random packings, together with an analysis of mesh convergence for the study of advection-transport problems, can be found in our earlier work~\citep{IBMTS2014}. \subsubsection{Numerical details} The first step, as for the case described in the previous section, was to solve the equations for the motion of the fluid inside the pipe. We thus solved Eqs.~(\ref{eq:Navier-Stokes}) and (\ref{eq:mass-balance}) in the laminar regime by setting a constant pressure drop between the inlet and the outlet of the domain, corresponding to a Reynolds number Re $\approx$ 1 (we used again second-order schemes). A snapshot of the velocity contour plot inside the pipe can be found in Fig.~\ref{fig:pS}, together with a representation of the slower fluid streamlines inside the volume occupied by the settled granular matter. A simulation of heat transfer was then performed solving Eq.~\eqref{eq:heatTransfer}, setting a constant fluid temperature at the inlet, equal to $T=293$~K, and a zero flux at the outlet. A Dirichlet condition of constant temperature $T=343$~K was set on the pipe wall, while a zero thermal flux (corresponding to $\nabla_n T=0$) was set on the grains. This choice of boundary conditions is equivalent to assuming a situation of perfect thermal equilibrium between the flowing fluid and the solid grains: this is justified in the limiting case of infinitely fast heat transport in the solid, if one considers the long-time, steady-state solution of the problem. A snapshot of the contour plots of the fluid temperature on the median length-wise section of the pipe\footnote{More precisely, taken on a plane parallel to both the main flow direction and the direction of gravity, positioned on the pipe axis.} is shown in Fig.~\ref{fig:pS-contourT}, which shows the amount of solid matter in each case and the slight difference in the thermal boundary layer at the bottom. In the next section, we will instead explore the problem of pure heat transfer in solids, between a continuous matrix and a disperse solid phase, characterised by two different thermal diffusivity values: these two cases are meant to represent, in a way, two opposite situations in the study of heat transfer in heterogeneous materials. \begin{figure}[h!] \includegraphics[width=.5\textwidth]{pS.png} \caption{Velocity contour plot inside the pipe and fluid streamlines (coloured by fluid velocity) inside the granular portion of the domain, showing the increasing distortion of the fluid paths moving from the bulk towards the bottom of the pipe.} \label{fig:pS} \end{figure} \subsubsection{Heat transfer coefficient} Once the steady-state solution of the advection-diffusion problem has been obtained, the heat fluxes along the main flow direction are extracted in order to calculate the local heat transfer coefficient $h_{loc}$ \citep{Bird1960}, evaluated at a number of successive planes orthogonal to the main flow direction (i.e. the pipe axis): \begin{equation}\label{eq:h_loc} h_{loc}=\dfrac{q \rho C_p}{\pi D}\dfrac{d T_{bulk}(x)}{d x}\dfrac{1}{T_{wall}-T_{bulk}} \end{equation} where $q$ (m$^3$ s$^{-1}$) is the volumetric flow rate, $\rho$ (kg m$^{-3}$) the fluid density, $C_p$ (J kg$^{-1}$ K$^{-1}$) its specific heat capacity, $x$ (m) the distance of the averaging plane from the inlet boundary, and $T_{wall}$ and $T_{bulk}$ (K) respectively the temperature on the wall of the pipe and the local average of the fluid bulk temperature. \begin{figure}[h!]
\includegraphics[width=.5\textwidth]{pipeSand.png} \caption{Temperature contour plots on an axial plane of the pipe for the cases with an average granular matter height from the bottom of the pipe equal to roughly four, three, and two grain diameters (from top to bottom), and for the ``clean'' pipe (bottom).} \label{fig:pS-contourT} \end{figure} \begin{figure}[h!] \includegraphics[width=.5\textwidth]{nusselt.pdf} \caption{Heat transfer coefficient $h_{loc}$ as a function of the distance from the inlet (in metres): values for the ``clean'' pipe (black~$\times$, line at the top) and for granular matter heights from the bottom of the pipe equal to roughly two, three, and four grain diameters (respectively red~$\circ$, green~$\Box$, and blue~$\Diamond$ curves, in descending order in the graph).} \label{fig:h_loc} \end{figure} The $h_{loc}$ results for each case are shown in Fig.~\ref{fig:h_loc}, compared with the heat transfer coefficient of the ``clean'' pipe devoid of settled matter. As can be seen, the presence of granular solids at the bottom of the pipe causes two effects. The first is a reduction of the total heat transfer coefficient (and consequently of the Nusselt number), which decreases as the volume of settled particles increases: this is due to the reduction of the contribution of advective heat transport with respect to conductive transport, caused by the much lower fluid velocities in the inter-granular portion of the domain. Then, it can also be noticed that the presence of randomly arranged grains has an effect on the dynamics of reaching an asymptotic regime: while in the ``clean'' pipe case the heat transfer coefficient reaches a plateau not far from the inlet (as expected), for the cases with granular matter a decreasing trend is clearly still present even close to the domain outlet. \subsection{Heat transfer in heterogeneous materials} \label{sec:soliDisp} In the preceding section, we only considered the effect of the presence of a granular solid on flow and heat transfer in pipes, neglecting the heat transfer between the bulk of the fluid (or the pipe wall) and the grains themselves. In this section, we will instead explore a case where no fluid flow is present, investigating heat transfer between two solid phases with different thermal diffusivities $\alpha_{1}$ and $\alpha_{2}$, the first phase constituting a continuous matrix and the second phase dispersed (as solid inclusions) in the first; moreover, the diffusivity in the continuous matrix is anisotropic, having different values for its longitudinal and transversal components. \subsubsection{Extended Jodrey-Tory algorithm (EJT)} For this application, a periodic extension of the classical Jodrey-Tory algorithm \cite{jodrey1981computer} has been implemented to deal with randomly oriented ellipsoids. Despite not being computationally optimal for low-porosity (high-density) materials, it can easily generate random packings with a specific porosity and a specific overlapping between grains, as well as clustering.
The new algorithm has been implemented in the open-source repository \textsf{mlmc-porescale}~\citep{icardi2016predictivity}, and can be sketched as follows: \begin{enumerate} \item The user specifies a domain size (including periodicity information), a target porosity, the minimum degree of overlapping $\theta$ ($\theta<1$ allows overlapping, $\theta>1$ prescribes a certain distance from grain to grain), a maximum distance between grains $\Theta>\theta$ (enforced on clusters of $N_{\Theta}$ particles), and a maximum number of particles and of displacement iterations. Furthermore, geometrical statistics about the ellipticity of the grains and their size have to be provided. In the simplest (isotropic) case, a single random variable is selected (log-normal, truncated Gaussian or uniform) to describe the lengths of the canonical axes of the ellipsoids. \item Random ellipsoids are generated until they reach the desired porosity (not considering overlapping). For each ellipsoid, a random unitary matrix $C$ is sampled~\citep{ozols2009generate} according to the axes lengths and orientation statistics. This is then decomposed as $C=C_{1}C_{2}$ into a rigid rotation part $C_{1}\in SO(3)$ and a diagonal part $C_{2}$ (responsible for scaling). \item Once all ellipsoids have been randomly placed in the box, a greedy-type algorithm is executed: at each iteration, the pair-wise distances are computed and the first $N_{moves}$ pairs, characterised by the largest overlapping ratio, are detached along the pair-wise distance axis. The displacement length controls the convergence properties of the algorithm, but its optimal value strongly depends on the porosity of the packing. Usually a relative displacement of $0.2-0.5$ times the particle size is chosen. \item If a maximum distance is specified, the same type of move is applied to shorten the pair-wise distance. Here, however, this can only be applied to a specific cluster of particles selected as the closest ones. \item The moves are applied iteratively until all the pair-wise distances satisfy the criteria. Periodicity is ensured by implementing the appropriate toric metric to compute the distances. \end{enumerate} \hl{It has to be noted that there is no guaranteed stopping point for the algorithm, and the total number of moves of the greedy algorithm described in step 3 is fixed as an upper bound. For a given available volume and number of ellipsoids, in the current implementation of the code, one needs to choose the overlapping layer appropriately, and large enough for a packing solution to exist. In the future, however, simple adaptivity could be implemented to override the settings for overlaps or for ellipsoid size, so as to converge faster to a final arrangement.} An example of 200 touching ($\theta=1,\,\Theta=\infty$) ellipsoids in a periodic arrangement with porosity 0.45 is shown in \fig{eJT}. \begin{figure}[h!] \includegraphics[width=.5\textwidth]{snapshot00.png} \caption{Example of random ellipsoids with random (statistically uniform in $SO(3)$) orientation, generated with the Extended Jodrey-Tory algorithm (EJT).} \label{fig:eJT} \end{figure} \subsubsection{Effective diffusion} The micro-scale heat transfer problem to be solved is \eq{heatTransfer} with $\mathbf{u}=0$ and $$ \alpha(\mathbf{x})= \begin{cases} \mbox{diag}(0.1, 10, 10) \quad \mbox{for}\; \mathbf{x}\in\Omega_{1}\;,\\ \mbox{diag}(1, 1, 1) \quad \mbox{for}\; \mathbf{x}\in\Omega_{2}\;, \end{cases} $$ where $\mbox{diag}$ denotes the diagonal matrix constructed from the given vector.
This represents the case of a continuous phase made of highly conductive layers (e.g., graphene, therefore with large conductivity in only two directions), reinforced with ellipsoidal inclusions with isotropic conductive properties. In \fig{eJT-mesh2}, the periodic structure (with the same properties as the one in \fig{eJT} but with up to a 20\% overlapping allowed) is shown with the underlying locally refined mesh (sHM\_R). We have shown above that the local refinements destroy the second-order convergence of the numerical schemes; however, due to the discontinuous diffusion coefficient, second-order convergence would not be achievable with our numerical schemes anyway. \begin{figure}[h!] \includegraphics[width=.5\textwidth]{mesh_solid2.png} \caption{Representation of the complete geometry, showing the mesh of the two distinct domains, with the dispersed solid inclusions generated with the Extended Jodrey-Tory algorithm highlighted as transparent red volumes.} \label{fig:eJT-mesh2} \end{figure} To compute the effective (homogenised) diffusion coefficient, one may follow a volume-averaging strategy, applying the quasi-periodic BCs in \eq{cons-bc} (results shown in Fig.~\ref{fig:eJT-mesh3}). This is more convenient than solving the homogenisation cell problem, where the derivative of the discontinuous conductivity appears as a source term of the equation~\citep{hornung2012homogenization}. The resulting effective diffusion is, in this case: $$ \alpha_{eff}=\begin{pmatrix}4.955 & \epsilon & \epsilon\\ \epsilon & 39.86 & \epsilon\\ \epsilon & \epsilon & 38.41\end{pmatrix} $$ where $\epsilon$ represents small coefficients $\approx10^{-3}$. As expected, the resulting effective diffusion is again anisotropic. \begin{figure}[h!] \includegraphics[width=.5\textwidth]{Tsolid.png} \caption{Local temperature fluctuations $c-\mathbf{p}\cdot\mathbf{x}$ for the quasi-periodic cell problem. The volume average of their gradient gives the effective diffusion coefficient.} \label{fig:eJT-mesh3} \end{figure} \section{Conclusions} In this work, we have shown the computational challenges involved in the simulation of transport problems in the field of porous media or, more generally, in problems which feature multiple \textit{spatial} scales, or whose evolution takes place over multiple \textit{temporal} scales: examples of which we have respectively provided in the form of a pipe with small-size granular sedimentation, and of the study of solute transport at very high P\'eclet numbers. Moreover, we have also presented applications of a number of third-party open-source codes and showcased our freely available extensions. These range from tools used to quickly and robustly generate a wide variety of three-dimensional realistic porous media models, to innovative computational setups providing a great decrease of the computational expense in quasi-periodic cases~\cite{P006}. In the first case, for example, an extended Jodrey-Tory algorithm has been developed to generate random packings of ellipsoids, while, in the latter, a simple steady-state closure problem, fully consistent with volume averaging and homogenisation, is derived for time-dependent transport with linear (and constant in time) transport parameters. The quasi-periodic setting then makes it possible to solve a single periodic cell, whereas a Representative Elementary Volume, in the case of advection-diffusion, depends on the P\'eclet number of the system and may include tens or hundreds of periodic cells.
Given our focus on free and open-source software, and having primarily chosen, due to its robustness and flexibility, the \textsf{OpenFOAM} platform to discretise our models, we have illustrated the main features of the meshing and of the discretisation schemes underlying its classical finite-volume approach. We have explored the alternatives provided in the \textsf{OpenFOAM} suite regarding the choice of the meshing pre-processors, providing an in-depth analysis of the convergence order of the relative error of the two approaches, while also exploring different meshing strategies. Again, this is done with the objective of providing guidance and an initial benchmark with respect to the pre-processing part of the numerical simulation, which probably has the greatest impact on the accuracy of the final results of the model. The simulations presented, although representing three-dimensional, realistic, and physically meaningful systems, do not represent real large-scale structures and therefore do not require any sophisticated adaptivity or parallelism. In this work, therefore, we have chosen to limit our investigation to showing the difference between two radically different body-fitted approaches, a Cartesian-based cut-cell approach and a Voronoi unstructured approach, presenting the differences in accuracy between the two, \hl{with the focus on building a general-purpose package for medium-scale simulations.} \textsf{OpenFOAM} has, however, a very flexible parallel structure and, as such, most of our solvers are inherently parallel, with some extra functionality (particularly for I/O and averaging operations) implemented in a parallel way. Depending on the overall size of the geometry, the computation can easily be parallelised, with good scaling properties, up to a few hundred cores. \hl{It has to be noted that, while OpenFOAM can easily be parallelised on many thousands of cores (although, depending on the specific solver, losing the optimal scaling), in our experience a multi-scale micro-macro offline coupling could be a more viable and robust option to deal with porous media.} Finally, the aggregate results of this work demonstrate how it is possible to build a completely computational pipeline for the study of a variety of physical problems: from the generation of a suitable and realistic geometric model, to the actual simulation run, to the extraction of the relevant data. This has already found many applications in the field of environmental and chemical engineering and will be extended to other applications, for example in energy storage. This emphasis is meant and presented both as an alternative of free and open-source software to licensed and ``closed'' software, and of \textit{in-silico} methods of reconstruction of the geometry of the porous medium as opposed to more costly (and often less available) methods such as micro-computed tomography or X-ray imaging. In both cases, the clear purpose is to work towards the development of open scientific frameworks providing easily reproducible and verifiable results that link rigorous mathematical models and numerics with physics and engineering applications. \begin{acknowledgements} MI acknowledges the financial support of AVL and the computational resources provided by the Warwick Centre for Scientific Computing. \end{acknowledgements} \bibliographystyle{spmpsci}
\section{INTRODUCTION}\label{Section 1} The nabla Laplace transform is a powerful tool to convert a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation \cite{Hein:2011PAMJ}. Similar to the classic $Z$ transform, it can be considered as a discrete-time generalization of the Laplace transform \cite{Jarad:2012ADE,Ortigueira:2013IFAC,Ortigueira:2016CNSNS}. It is crucial for a wide range of applications, whenever signal sampling, discrete treatment, or backward differences are involved. Especially in the fractional order case, the nabla Laplace transform technique is much more convenient and effective than the $Z$ transform \cite{Abdeljawad:2012ADE,Jonnalagadda:2016IJNS,Abdeljawad:2017RMP}. The basic idea now known as the nabla Laplace transform was inspired by Bohner and Peterson \cite{Bohner:2001Book}, and it was formally introduced and investigated in 2009 by At{\i}c{\i} and Eloe as a way to treat fractional finite difference equations \cite{Atici:2009EJQTDE}. It gives a tractable way to solve linear time-invariant fractional backward difference equations with sampling period $h=1$. It was also dubbed ``the $N$-transform'' for short. Afterwards, the sampling period $h$ was generalized to a positive real number, and a sampling-based nabla Laplace transform was developed accordingly \cite{Cheng:2011Book}. For the constructed nabla Laplace transform, some important properties were presented, e.g., linearity, shifting in the time domain, and the convolution theorem \cite{Mohan:2014CMS}. Afterwards, several novel properties were derived and then applied in the nabla fractional calculus (see Chapter 3 of the famous monograph \cite{Goodrich:2015Book}). In \cite{Wei:2018ArXiva}, we established the initial/final value theorem and the stability criterion, and then applied these properties to analyze the monotonicity and overshoot properties of the zero-input system response. To further explore the properties of such a handy tool, we made a comprehensive survey of the existing results, and 14 innovative properties were proposed subsequently \cite{Wei:2019FDTA}. By using this tool, six kinds of infinite dimensional frequency distributed models were equivalently derived to describe a nabla fractional order system \cite{Wei:2019CNSNS}. To obtain the time-domain sequence $f(k)$ of a given function $F(s)$ in the frequency domain, a rational approximation approach was proposed in \cite{Wei:2019AJC}. Though this method is effective, it always introduces an approximation error. To get the exact value, a method generalized from the initial/final value theorem was derived in \cite{Wei:2018ArXiva}. However, this method needs to calculate a limit for each $k$ in $f(k)$. To obtain the explicit expression of $f(k)$, we developed the inverse nabla Laplace transform in the form of a contour integral \cite{Wei:2019AJC}. In general, however, it is difficult or even impossible to calculate such a contour integral directly. Additionally, it is well known that the residue calculation method and the partial fraction expansion method perform well in solving similar problems for the inverse Laplace transform and the inverse $Z$ transform. Motivated by this, we will develop these two analytical methods for the inverse nabla Laplace transform. The outline of the rest of the paper is as follows. In Section \ref{Section 2}, the basic definition of the nabla Laplace transform is reviewed. In Section \ref{Section 3}, two methods are designed and discussed to evaluate the contour integral effectively.
In Section \ref{Section 4}, two typical examples are provided to show the feasibility and effectiveness of the developed methods. Finally, some conclusions are drawn in Section \ref{Section 5}. \section{PRELIMINARIES}\label{Section 2} In this section, some basic definitions and concepts for the nabla Laplace transform are provided. Afterwards, the objective of this work is restated. The nabla Laplace transform of a sequence $f: \mathbb{N}_{a+1}\to \mathbb{R}$ is defined by \cite{Atici:2009EJQTDE} \begin{equation}\label{Eq1} {\textstyle {\mathscr N}_a\left\{ {f\left( k \right)} \right\} \triangleq \sum\nolimits_{k = 1}^{+\infty} {{{\left( {1 - s} \right)}^{k - 1}}f\left( k+a \right)},} \end{equation} where $s\in\mathbb{C}$, $\mathbb{N}_{a+1}\triangleq\{a+1,a+2,a+3,\cdots\}$, $a\in\mathbb{R}$. More precisely, such a transform should be called the sampling-free nabla Laplace transform, since the sampling period is assumed to be 1 \cite{Wei:2019FDTA}. The region of convergence for $F(s)={{\mathscr N}_a}\left\{ {f\left( k \right)} \right\} $ is defined as the set of points in the complex plane for which the infinite series converges \cite{Goodrich:2015Book} \begin{eqnarray}\label{Eq2} {\textstyle {\rm{ROC}} \triangleq \left\{ {s:\big| {\sum\nolimits_{k = 1}^{ + \infty } {{{\left( {1 - s} \right)}^{k - 1}}f\left( {k + a} \right)} } \big| < + \infty } \right\} .} \end{eqnarray} The inverse nabla Laplace transform can be written as \cite{Wei:2019AJC} \begin{equation}\label{Eq3} {\textstyle {\mathscr N}_a^{-1}\left\{ {F\left( s \right)} \right\} \triangleq \frac{1}{{2\pi {\rm{j}}}}\oint_c {F\left( s \right){{(1 - s)}^{ - k + a}}{\rm{d}}s} ,} \end{equation} where $k \in {\mathbb{N}_{a + 1}}$ and $c$ is a closed curve encircling the point $(1,{\rm j}0)$ clockwise and lying in the region of convergence of $F(s)$ (defined in (\ref{Eq2})). In this paper, the objective is to develop methods to obtain the sequence $f(k)$ with $k\in\mathbb{N}_{a+1}$ from its nabla Laplace transform $F(s)={\mathscr N}_a\left\{ {f\left( k \right)} \right\} $ and the corresponding region of convergence ${\rm ROC}$. \section{MAIN RESULTS}\label{Section 3} This section extends the conventional residue calculation method and the partial fraction expansion method to the nabla Laplace domain. After carefully presenting these methods, their advantages and disadvantages are evaluated. \subsection{Residue calculation method} Based on the residue theorem \cite{Stein:2010Book}, the contour integral in equation (\ref{Eq3}) can be calculated accordingly.
When the finite poles of ${F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}}$ inside the curve $c$ are $\{s_m\}$, the sequence $f(k)$ with $k\in\mathbb{N}_{a+1}$ can be computed as the opposite of the sum of all residues at the poles $\{s_m\}$, i.e., \begin{equation}\label{Eq4} {\textstyle f\left( k \right) = - \sum\limits_m {{\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},{s_m}} \big]} .} \end{equation} If $s_m$ is a pole of order $N$ with $N\in\mathbb{Z}_+$, then the residue at $s_m$ is equal to \begin{eqnarray}\label{Eq5} {\textstyle \begin{array}{l} {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},{s_m}} \big] \\ = \frac{1}{{\left( {N - 1} \right)!}}\mathop {\lim }\limits_{s \to {s_m}} {\frac{{{{\rm{d}}^{N - 1}}}}{{{\rm{d}}{s^{N - 1}}}}{{\left( {s - {s_m}} \right)}^N}F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}} \end{array}.} \end{eqnarray} If $s_m$ is a simple pole, then \begin{eqnarray}\label{Eq6} {\textstyle \begin{array}{l} {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},{s_m}} \big] \\ = \mathop {\lim }\limits_{s \to {s_m}} \left( {s - {s_m}} \right)F\left( s \right){\left( {1 - s} \right)^{ - k + a}} \end{array}.} \end{eqnarray} When the finite poles of ${F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}}$ outside the curve $c$ are $\{s_n\}$, the sequence $f(k)$ with $k\in\mathbb{N}_{a+1}$ can be computed by \begin{equation}\label{Eq7} {\textstyle f\left( k \right) = \sum\limits_n {{\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},{s_n}} \big]},} \end{equation} where the residues at the poles $\{s_n\}$ can be calculated as in the previous discussion. Note that the formulae in (\ref{Eq4}) and (\ref{Eq7}) differ from those of the inverse $Z$ transform, since the curve $c$ is traversed clockwise rather than anticlockwise. As a result, the sign should be handled carefully when applying the residue theorem. It is worth emphasizing that for a finite valued sequence $f(k)$, $s=1$ cannot be a pole of $F(s)$, since $f\left( {a + 1} \right) = \mathop {\lim }\limits_{s \to 1} F\left( s \right)\neq \infty$ follows from the initial value theorem \cite{Wei:2019FDTA}. \subsection{Partial fraction expansion method} If the considered function $F(s)$ can be expressed as the following sum of fractions \begin{equation}\label{Eq8} {\textstyle F\left( s \right) = \sum\nolimits_{i = 1}^n {\frac{{{r_i}}}{{s - {s_i}}}} ,} \end{equation} where the coefficient ${r_i} = \mathop {\lim }\limits_{s \to {s_i}} \left( {s - {s_i}} \right)F\left( s \right)$, then the corresponding sequence satisfies \begin{equation}\label{Eq9} {\textstyle f\left( k \right) = \sum\nolimits_{i = 1}^n {\frac{{{r_i}}}{{{{\left( {1 - {s_i}} \right)}^{k - a}}}}} ,k\in\mathbb{N}_{a+1}.} \end{equation} In equation (\ref{Eq8}), $F(s)$ has only simple poles $s_i$, $i=1,2,\cdots,n$.
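To make this route concrete, the following is a minimal numerical sketch (our illustration, not code from the cited references) that evaluates (\ref{Eq9}) once the coefficients $r_i$ and the simple poles $s_i$ are known; the function name and the test values are purely illustrative.
\begin{verbatim}
import numpy as np

# Evaluate f(k) = sum_i r_i / (1 - s_i)**(k - a), i.e., formula (Eq. 9),
# assuming F(s) has already been decomposed into simple-pole fractions.
def inverse_nlt_simple_poles(r, s_poles, k, a=0):
    r = np.asarray(r, dtype=complex)
    s_poles = np.asarray(s_poles, dtype=complex)
    return complex(np.sum(r / (1.0 - s_poles) ** (k - a)))

# The simple-pole part of Example 1 in Section 4, 1/(s-2) - 1/(s+1),
# contributes (-1)**(k-a) - 2**(a-k); check at k = 3, a = 0:
print(inverse_nlt_simple_poles([1.0, -1.0], [2.0, -1.0], k=3))  # (-1.125+0j)
\end{verbatim}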
If there exists a multiple pole $\lambda$, i.e., \begin{equation}\label{Eq10} {\textstyle F\left( s \right) = \sum\nolimits_{i = 1}^{n - N} {\frac{{{r_i}}}{{s - {s_i}}}} + \sum\nolimits_{i = 1}^N {\frac{{{q_i}}}{{{{\left( {s - \lambda } \right)}^i}}}},} \end{equation} where $q_i$, $i=1,2,\cdots,N$ can be calculated via ${q_i} = \mathop {\lim }\limits_{s \to \lambda } \frac{1}{{\left( {N - i} \right)!}}\frac{{{{\rm{d}}^{N - i}}}}{{{\rm{d}}{s^{N - i}}}}{\left( {s - \lambda } \right)^N}F\left( s \right)$, then the sequence $f(k)$, $k\in\mathbb{N}_{a+1}$, follows \begin{eqnarray}\label{Eq11} {\textstyle f\left( k \right) = \sum\nolimits_{i = 1}^{n - N} {\frac{{{r_i}}}{{{{\left( {1 - {s_i}} \right)}^{k - a}}}}} + \sum\nolimits_{i = 1}^N {\frac{{{q_i}{{\left( {k - a} \right)}^{\overline {i - 1} }}}}{{\left( {i - 1} \right)!{{\left( {1 - \lambda } \right)}^{k - a + i - 1}}}}} .} \end{eqnarray} Likewise, if the function $F(s)$ can be written as a sum of familiar terms, then the desired sequence can be obtained. For example, if the following equation holds \begin{equation}\label{Eq12} {\textstyle F\left( s \right) = \sum\nolimits_{i = 1}^n {\frac{{{r_i}{s^{{\alpha _i} - {\beta _i}}}}}{{{s^{{\alpha _i}}} - {s_i}}}} ,} \end{equation} then one has \begin{equation}\label{Eq13} {\textstyle f\left( k \right) = \sum\nolimits_{i = 1}^n {{r_i}{{\mathcal F}_{{\alpha _i},{\beta _i}}}\left( {{s_i},k,a} \right)} ,} \end{equation} where $\alpha_i\in\mathbb{R}$, $\beta_i\in\mathbb{R}_+$ and $k\in\mathbb{N}_{a+1}.$ This method can also be seen as the look-up table method. For convenience, with the help of some fundamental properties of the nabla Laplace transform in \cite{Wei:2019FDTA}, 16 commonly used sequences and their nabla Laplace transforms are provided in Table \ref{Table 1}. \begin{table*} \centering \caption{Nabla Laplace Transform Pairs.} \label{Table 1} \begin{tabular}{c l l l} \hhline number&$f(k)$, $k\in\mathbb{N}_{a+1}$, $a\in\mathbb{R}$ & $F(s)={\mathscr N}_a\left\{ {f\left( k \right)} \right\} $ & {\rm ROC}\\ \hline 1&${\delta \left( {k-a - 1} \right)}$ & 1 & $s\in\mathbb{C}$\\ 2&${u\left( {k-a - 1} \right)}$ &$\frac{{\rm{1}}}{s}$&$\left| {1 - s} \right| < 1$\\ 3&${{k - a} }$&$\frac{{1}}{{{s^2}}}$&$\left| {1 - s} \right| < 1$\\ 4&${{\gamma ^{k - a - 1}}}$&$\frac{1}{{1 - \gamma + \gamma s}}$&$\left| {1 - s} \right|\left| \gamma \right| < 1$, $\gamma\neq0$\\ 5&$\frac{{{\left( {k - a} \right)}^{ \overline \alpha }}}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{s^{\alpha + 1}}}}$&$\left| {1 - s} \right| < 1,\alpha \in \mathbb{C},\alpha \notin {\mathbb{Z}_- }$\\ 6&${{\gamma ^{k - a - 1}}}\frac{{\left( {k - a} \right)}^{ \overline \alpha }}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{{\left( {1 - \gamma + \gamma s} \right)}^{\alpha + 1}}}}$&$\left| {1 - s} \right|\left| \gamma \right| < 1$, $\gamma\neq0$\\ 7&${\frac{1}{{{{\left( {1 - \lambda } \right)}^{k - a}}}}}$&$\frac{1}{{s - \lambda }}$&$\left| {1 - s} \right| < \left| {1 - \lambda } \right|,\lambda \ne 1$\\ 8&$\frac{{{{\left( {k - a} \right)}^{ \overline {N-1} }}}}{{\left( {N - 1} \right)!{{\left( {1 - \lambda } \right)}^{k - a + N - 1}}}}$&$\frac{1}{{{{\left( {s - \lambda } \right)}^N}}}$&$\left| {1 - s} \right| < \min \left\{ {\left| {1 - \lambda } \right|,1} \right\},\lambda \ne 1$, $N\in\mathbb{Z}_+$\\ 9&${{{\mathcal F}_{\alpha ,\beta }}\left( {\lambda ,k,a} \right)}$&$\frac{{{s^{\alpha - \beta }}}}{{{s^\alpha } - \lambda }}$&$\left| {1 - s} \right| < 1,\left| \lambda \right| < {\left| s \right|^\alpha }$, $\alpha,\beta \in {\mathbb{R}_ + }$\\ 10&${\left( {k - a - 1} \right){{\mathcal F}_{\alpha ,\alpha }}\left( {\lambda ,k,a} \right)}$&$\frac{{\alpha {s^{\alpha - 1}}\left( {1 - s} \right)}}{{{{\left( {{s^\alpha } - \lambda } \right)}^2}}}$&$\left| {1 - s} \right| < 1,\left| \lambda \right| < {\left| s \right|^\alpha },\alpha \in \mathbb{R}_+$\\ 11&${{{\rm{e}}^{ - \lambda (k - a - 1)}}}$&$\frac{1}{{1 - {{\rm{e}}^{ - \lambda }}\left( {1 - s} \right)}}$&$\left| {1 - s} \right| < {{\rm{e}}^\lambda }$\\ 12&${{\gamma ^{k - a - 1}}{{\rm{e}}^{ - \lambda (k - a - 1)}}}$&$\frac{1}{{1 - \gamma {{\rm{e}}^{ - \lambda }}\left( {1 - s} \right)}}$&$\left| {1 - s} \right|\left| \gamma \right| < {{\rm{e}}^\lambda },\gamma \ne 0$\\ 13&${\sin \left( {\omega (k - a - 1)} \right)}$&$\frac{{\sin \left( \omega \right)\left( {1 - s} \right)}}{{1 - 2\cos \left( \omega \right)\left( {1 - s} \right) + {{\left( {1 - s} \right)}^2}}}$&$\left| {1 - s} \right| < 1$\\ 14&${\cos \left( {\omega (k - a - 1)} \right)}$&$\frac{{1 - \cos \left( \omega \right)\left( {1 - s} \right)}}{{1 - 2\cos \left( \omega \right)\left( {1 - s} \right) + {{\left( {1 - s} \right)}^2}}}$&$\left| {1 - s} \right| < 1$\\ 15&${\sinh \left( {\omega (k - a - 1)} \right)}$&$\frac{{\sinh \left( \omega \right)\left( {1 - s} \right)}}{{1 - 2\cosh \left( \omega \right)\left( {1 - s} \right) + {{\left( {1 - s} \right)}^2}}}$&$\left| {1 - s} \right| < \min \left\{ {{{\rm{e}}^\omega },{{\rm{e}}^{ - \omega }}} \right\}$\\ 16&${\cosh \left( {\omega (k - a - 1)} \right)}$&$\frac{{1 - \cosh \left( \omega \right)\left( {1 - s} \right)}}{{1 - 2\cosh \left( \omega \right)\left( {1 - s} \right) + {{\left( {1 - s} \right)}^2}}}$&$\left| {1 - s} \right| < \min \left\{ {{{\rm{e}}^\omega },{{\rm{e}}^{ - \omega }}} \right\}$\vspace{3pt}\\ \hhline \end{tabular} \end{table*} A brief statement should be made in advance to facilitate understanding of Table \ref{Table 1}. $\delta \left( {n} \right) \triangleq \left\{ \begin{array}{l} 1,n = 0\\ 0,n \ne 0 \end{array} \right.$ is the discrete-time unit impulse function. $u\left( {n} \right) \triangleq \left\{ \begin{array}{l} 1,n \ge 0\\ 0,n < 0 \end{array} \right.$ is the discrete-time unit step function. ${(k-a)}^{\overline{\alpha}}\triangleq\frac{\Gamma(k-a+\alpha)}{\Gamma(k-a)}$ is the rising function. ${{\mathcal F}_{\alpha ,\beta }}\left( {\lambda ,k,a} \right) \triangleq \sum\nolimits_{i = 0}^{ + \infty } {\frac{{{\lambda ^i}}}{{\Gamma \left( {i\alpha + \beta } \right)}}{{\left( {k - a} \right)}^{\overline {i\alpha + \beta - 1} }}} $ is the discrete-time Mittag--Leffler function. ${\rm sinh}(\cdot)$ is the hyperbolic sine function and ${\rm cosh}(\cdot)$ is the hyperbolic cosine function. Besides, it can be calculated that ${{\mathscr N}_a}\left\{ {u\left( {k-a} \right)} \right\} = \frac{1}{s}={{\mathscr N}_a}\left\{ {u\left( {k-a-1} \right)} \right\} $, since we can only obtain the value of $f(k)$ with $k\in\mathbb{N}_{a+1}$ from $F(s)$ and ${u\left( {k-a} \right)}$ is exactly equal to ${u\left( {k-a-1} \right)}$ for any $k\in\mathbb{N}_{a+1}$. For the purpose of comparison, more results are provided in Table \ref{Table 2} and Table \ref{Table 3}. More specifically, Table \ref{Table 2} gives the generalized Laplace transform pairs.
The transform and its inverse transform are defined as \cite{Wei:2018ArXivb} \begin{equation}\label{Eq14} {\textstyle {{\mathscr L}_a}\left\{ {f\left( t \right)} \right\} \triangleq \int_a^{ + \infty } {{{\rm{e}}^{ - s\left( {t - a} \right)}}f\left( t \right){\rm{d}}t} ,} \end{equation} \begin{equation}\label{Eq15} {\textstyle {{\mathscr L}_a^{-1}}\left\{ {F\left( s \right)} \right\} \triangleq \frac{1}{{2\pi {\rm{j}}}}\int_{\beta - {\rm{j}}\infty }^{\beta + {\rm{j}}\infty } { {{\rm{e}}^{s\left( {t - a} \right)}}{F\left( s \right)}{\rm{d}}s},} \end{equation} where $\beta$ is a real number such that the contour path of integration is in the region of convergence of $F(s)$. When the sampling time $h$ is introduced in the nabla Laplace transform, the relationship between ${{\mathscr N}_a}\left\{ \cdot \right\}$ and ${{\mathscr L}_a}\left\{ \cdot \right\}$ can be derived (see \cite{Cheng:2011Book,Ortigueira:2016CNSNS}). The continuous-time Mittag--Leffler function is defined as ${{\mathcal E}_{\alpha ,\beta }}\left( {\lambda ,t,a} \right) \triangleq \sum\nolimits_{i = 0}^{ + \infty } {\frac{{{\lambda ^i}}}{{\Gamma \left( {i\alpha + \beta } \right)}}{{\left( {t - a} \right)}^{i\alpha }}} $. Table \ref{Table 3} gives the generalized Z-transform pairs. The transform and its inverse transform are defined as \begin{equation}\label{Eq16} {\textstyle {\mathscr Z}_a \left\{ {f\left( {k} \right)} \right\} \triangleq \sum\nolimits_{k = 0}^{ + \infty } {z^{-k}f\left( {k+a} \right) } , } \end{equation} \begin{equation}\label{Eq17} {\textstyle {\mathscr Z}_a^{ - 1}\left\{ {F\left( z \right)} \right\} \triangleq \frac{1}{{2\pi {\rm{j}}}}\oint_c {{z^{k - a - 1}}F\left( z \right){\rm{d}}z} ,} \end{equation} where $c$ is a closed curve encircling the point $(0,{\rm j}0)$ anticlockwise and located in the region of convergence of $F(z)$ (see \cite{Wei:2019AJC}). Defining $g(k)=f(k+1)$, $k\in\mathbb{N}_{a}$, and $z^{-1}=1-s$, one has ${\mathscr Z}_a \left\{ {g\left( {k} \right)} \right\} ={\mathscr N}_a \left\{ {f\left( {k} \right)} \right\} $.
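As a quick sanity check of these definitions, the short script below (our illustration, not part of the cited works) numerically verifies pair 7 of Table \ref{Table 1} directly from definition (\ref{Eq1}), and then reproduces the same value through the stated relation ${\mathscr Z}_a\{g(k)\}={\mathscr N}_a\{f(k)\}$; the truncation length and test values are arbitrary choices.
\begin{verbatim}
import numpy as np

# Pair 7 of Table 1: f(k) = 1/(1-lam)**(k-a)  <->  F(s) = 1/(s-lam),
# valid for |1-s| < |1-lam|. Here a = 0, lam = -0.5, s = 0.4+0.2j.
a, lam, s = 0, -0.5, 0.4 + 0.2j
k = np.arange(1, 2001)                    # truncate the infinite series
f = 1.0 / (1.0 - lam) ** (k - a)
N = np.sum((1.0 - s) ** (k - 1) * f)      # definition (Eq. 1)
print(N, 1.0 / (s - lam))                 # both approx 1.0588-0.2353j

# Generalized Z transform (Eq. 16) of g(k) = f(k+1) with z**-1 = 1 - s:
# the f array already lists f(1), f(2), ..., i.e., g(0), g(1), ...
Z = np.sum((1.0 - s) ** np.arange(2000) * f)
print(Z)                                  # equals N, as stated above
\end{verbatim}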
\begin{table*} \centering \caption{Generalized Laplace Transform Pairs.} \label{Table 2} \begin{tabular}{c l l l} \hhline number&$f(t)$, $t\ge a$, $a\in\mathbb{R}$ & $F(s)={\mathscr L}_a\left\{ {f\left( t \right)} \right\} $ & {\rm ROC}\\ \hline 1&${\delta \left( {t - a} \right)}$ & 1 & $s\in\mathbb{C}$\\ 2&${u\left( {t-a} \right)}$ &$\frac{{\rm{1}}}{s}$&${\rm Re} \left\{ s \right\} > 0$\\ 3&${{t - a} }$&$\frac{{1}}{{{s^2}}}$&${\rm Re} \left\{ s \right\} > 0$\\ 4&${{\gamma ^{t - a}}}$&$\frac{1}{{ s-\ln\gamma}}$&${\rm Re} \left\{ s \right\} > \ln \gamma$, $\gamma > 0$\\ 5&$\frac{{{\left( {t - a} \right)}^{\alpha }}}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{s^{\alpha + 1}}}}$&${\mathop{\rm Re}\nolimits} \left\{ s \right\} > 0$, ${\rm Re} \left\{ \alpha \right\} > - 1$\\ 6&${{\gamma ^{t - a }}}\frac{{\left( {t - a} \right)}^{\alpha }}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{{\left( {s - \ln\gamma} \right)}^{\alpha + 1}}}}$&${\rm Re} \left\{ s \right\} > \ln \gamma $, $\gamma > 0$, ${\rm Re}\left\{ \alpha \right\} > - 1$\\ 7&${{{\rm{e}}^{ \lambda (t - a)}}}$&$\frac{1}{{s - \lambda }}$&${\rm Re} \left\{ s \right\} > \lambda $\\ 8&$\frac{{{{\left( {t- a} \right)}^{N - 1}}}}{{\left( {N - 1} \right)!}}{{\rm{e}}^{ \lambda (t - a)}}$&$\frac{1}{{{{\left( {s - \lambda } \right)}^N}}}$&${\rm Re} \left\{ s \right\} > \lambda$, $N\in\mathbb{Z}_+$\\ 9&${\left( {t - a } \right)^{\beta-1}{{\mathcal E}_{\alpha ,\beta }}\left( {\lambda ,t,a} \right)}$&$\frac{{{s^{\alpha - \beta }}}}{{{s^\alpha } - \lambda }}$&${\rm Re} \left\{ s \right\} > 0$, $\left| \lambda \right| < {\left| s \right|^\alpha }$, $\alpha ,\beta \in {\mathbb{R}_ + }$\\ 10&${\left( {t - a } \right)^\alpha{{\mathcal E}_{\alpha ,\alpha }}\left( {\lambda ,t,a} \right)}$&$\frac{{\alpha {s^{\alpha - 1}}}}{{{{\left( {{s^\alpha } - \lambda } \right)}^2}}}$&${\rm Re} \left\{ s \right\} > 0$, $\left| \lambda \right| < {\left| s \right|^\alpha }$, $\alpha \in {\mathbb{R}_ + }$\\ 11&${{{\rm{e}}^{ - \lambda (t - a )}}}$&$\frac{1}{s+\lambda}$&${\rm Re} \left\{ s \right\} > -\lambda $\\ 12&${{\gamma ^{t - a }}{{\rm{e}}^{ - \lambda (t - a)}}}$&$\frac{1}{s+\lambda-\ln \gamma}$&${\rm Re} \left\{ s \right\} > \ln \gamma - \lambda $, $\gamma > 0$\\ 13&${\sin \left( {\omega (t - a )} \right)}$&$\frac{\omega}{s^2+\omega^2}$&${\rm Re} \left\{ s \right\} > 0$\\ 14&${\cos \left( {\omega (t - a)} \right)}$&$\frac{s}{s^2+\omega^2}$&${\rm Re} \left\{ s \right\} > 0$\\ 15&${\sinh \left( {\omega (t - a)} \right)}$&$\frac{\omega}{s^2-\omega^2}$&${\rm Re} \left\{ s \right\} > \left| \omega \right|$\\ 16&${\cosh \left( {\omega (t - a)} \right)}$&$\frac{s}{s^2-\omega^2}$&${\rm Re} \left\{ s \right\} > \left| \omega \right|$\vspace{3pt}\\ \hhline \end{tabular} \end{table*} \begin{table*} \centering \caption{Generalized Z-Transform Pairs.} \label{Table 3} \begin{tabular}{c l l l} \hhline number&$f(k)$, $k\in\mathbb{N}_{a+1}$, $a\in\mathbb{R}$ & $F(z)={\mathscr Z}_a\left\{ {f\left( k \right)} \right\} $ & {\rm ROC}\\ \hline 1&${\delta \left( {k-a } \right)}$ & 1 & $z\in\mathbb{C}$\\ 2&${u\left( {k-a } \right)}$ &$\frac{{\rm{1}}}{1-z^{-1}}$&$| z^{-1}| < 1$\\ 3&${{k - a+1} }$&$\frac{1}{{{(1-z^{-1})^2}}}$&$| z^{-1}| < 1$\\ 4&${{\gamma ^{k - a}}}$&$\frac{1}{{1 - \gamma z^{-1}}}$&$| z^{-1}|\left| \gamma \right| < 1$, $\gamma\neq0$\\ 5&$\frac{{{\left( {k - a+1} \right)}^{\bar \alpha }}}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{(1-z^{-1})^{\alpha + 1}}}}$&$| z^{-1}| < 1,\alpha \in \mathbb{C},\alpha \notin {\mathbb{Z}_- }$\\ 6&${{\gamma ^{k - a }}}\frac{{\left( {k - a+1} \right)}^{\bar \alpha }}{\Gamma \left( {\alpha + 1} \right)}$&$\frac{1}{{{{( {1 - \gamma z^{-1}} )}^{\alpha + 1}}}}$&$| z^{-1}|\left| \gamma \right| < 1$, $\gamma\neq0$\\
7&${\frac{1}{{{{\left( {1 - \lambda } \right)}^{k - a+1}}}}}$&$\frac{1}{{1-z^{-1} - \lambda }}$&$| z^{-1}| < \left| {1 - \lambda } \right|,\lambda \ne 1$\\ 8&$\frac{{{{\left( {k - a+1} \right)}^{{\overline {N - 1}} }}}}{{\left( {N - 1} \right)!{{\left( {1 - \lambda } \right)}^{k - a + N}}}}$&$\frac{1}{{{{( {1-z^{-1} - \lambda } )}^N}}}$&$| z^{-1}| < \min \left\{ {\left| {1 - \lambda } \right|,1} \right\},\lambda \ne 1$, $N\in\mathbb{Z}_+$\\ 9&${{{\mathcal F}_{\alpha ,\beta }}\left( {\lambda ,k+1,a} \right)}$&$\frac{{{(1-z^{-1})^{\alpha - \beta }}}}{{{(1-z^{-1})^\alpha } - \lambda }}$&$| z^{-1}| < 1,\left| \lambda \right| < {| 1-z^{-1}|^\alpha },\alpha ,\beta \in {\mathbb{R}_ + }$\\ 10&${\left( {k - a } \right){{\mathcal F}_{\alpha ,\alpha }}\left( {\lambda ,k+1,a} \right)}$&$\frac{{\alpha {(1-z^{-1})^{\alpha - 1}}z^{-1}}}{{{{[ {{(1-z^{-1})^\alpha } - \lambda } ]}^2}}}$&$| z^{-1}| < 1,\left| \lambda \right| < {| 1-z^{-1}|^\alpha },\alpha \in \mathbb{R}_+$\\ 11&${{{\rm{e}}^{ - \lambda (k - a)}}}$&$\frac{1}{{1 - {{\rm{e}}^{ - \lambda }}z^{-1}}}$&$| z^{-1}| < {{\rm{e}}^\lambda }$\\ 12&${{\gamma ^{k - a }}{{\rm{e}}^{ - \lambda (k - a)}}}$&$\frac{1}{{1 - \gamma {{\rm{e}}^{ - \lambda }}z^{-1}}}$&$| z^{-1}|\left| \gamma \right| < {{\rm{e}}^\lambda },\gamma \ne 0$\\ 13&${\sin \left( {\omega (k - a)} \right)}$&$\frac{{\sin \left( \omega \right)z^{-1}}}{{1 - 2\cos \left( \omega \right)z^{-1} + z^{-2}}}$&$| z^{-1}| < 1$\\ 14&${\cos \left( {\omega (k - a)} \right)}$&$\frac{{1 - \cos \left( \omega \right)z^{-1}}}{{1 - 2\cos \left( \omega \right)z^{-1} + z^{-2}}}$&$| z^{-1}| < 1$\\ 15&${\sinh \left( {\omega (k - a )} \right)}$&$\frac{{\sinh \left( \omega \right)z^{-1}}}{{1 - 2\cosh \left( \omega \right)z^{-1} + z^{-2}}}$&$| z^{-1}| < \min \left\{ {{{\rm{e}}^\omega },{{\rm{e}}^{ - \omega }}} \right\}$\\ 16&${\cosh \left( {\omega (k - a)} \right)}$&$\frac{{1 - \cosh \left( \omega \right)z^{-1}}}{{1 - 2\cosh \left( \omega \right)z^{-1} + z^{-2}}}$&$| z^{-1}| < \min \left\{ {{{\rm{e}}^\omega },{{\rm{e}}^{ - \omega }}} \right\}$\vspace{3pt}\\ \hhline \end{tabular} \end{table*} \subsection{Suitability and limitations} Both developed methods can obtain the exact inverse nabla Laplace transform of a given $F(s)$ with a specified ROC. Nevertheless, every coin has two sides: they are generally applicable to $F(s)$ with a rational polynomial expression, and therefore the limits of their applicability should be pointed out clearly. Nowadays, fractional calculus plays a critical role in many theoretical and practical scenarios. When the fractional sum or fractional difference of a sequence $f(k)$ is considered, fractional order polynomials of the nabla Laplace variable `$s$' emerge in the expression of $F(s)$, such as $\frac{1}{s^\alpha-\lambda}$, $\frac{1}{(s-\lambda)^\alpha}$ and $\frac{1}{(s^2+1)^\alpha}$. For the residue calculation method, if $F(s)=\frac{1}{s^\alpha-\lambda}$ with $\alpha\in(0,1)$, then there are infinitely many poles outside the closed curve $c$, especially when $\alpha$ is irrational, and $s=1$ is a pole of order $k-a$. Therefore, it is difficult to compute $f(k)$ from the poles either inside or outside $c$.
Likewise, if $F(s)=\frac{1}{(s-\lambda)^\alpha}$ or $F(s)=\frac{1}{(s^2+1)^\alpha}$ with $\alpha\in(0,1)$ is considered, a similar problem follows immediately, i.e., the poles of $F(s)$ are of fractional order $\alpha$. As a result, neither (\ref{Eq5}) nor (\ref{Eq6}) works in this case. By adopting the basic properties of the nabla Laplace transform, such as linearity, time advance, time delay, right shifting, left shifting, scaling in the frequency domain, differentiation in the frequency domain, integration in the frequency domain, accumulation, convolution and multiplication, etc., some transform pairs $f(k)\leftrightarrow F(s)$ can be found. Then, by combining the obtained transform pairs, like those in Table \ref{Table 1}, with the partial fraction method, the inverse transforms of some fractional order polynomials can be computed. Moreover, some special functions like $\frac{1}{{\rm e}^{s}-\lambda}$, $\frac{1}{{\rm log}(s^2+\lambda)}$, $\frac{\Gamma(s)}{\Gamma(s+\lambda)}$, $\frac{1}{{\rm tan}(1/s)}$, $\frac{1}{{\rm tanh}(s)}$ and $\frac{1}{{\rm sinh}(\sqrt{s})\sqrt{s}}$ will also appear under particular circumstances, for which the two proposed methods may fail. In other words, it is both an opportunity and a challenge to handle such complicated irrational $F(s)$. Analytical or numerical techniques for Laplace transform inversion \cite{Cohen:2007Book} could provide much inspiration here. \section{EXAMPLE STUDIES}\label{Section 4} In this section, two illustrative examples are presented to further evaluate the theoretical approaches. {\bf Example 1.} $F\left( s \right) = \frac{9}{{{{\left( {s + 1} \right)}^2}\left( {s - 2} \right)}}$, $\left| {1 - s} \right| < 1$. \begin{enumerate}[(i)] \item Compute $f(k)$, $k\in\mathbb{N}_{a+1}$ with the residue calculation method. The poles of $F\left( s \right){\left( {1 - s} \right)^{ - k+a}}$ can be found as ${s_1} = 2,{s_2} = -1,{s_3} = 1$, where $s_1$ is a simple pole, $s_2$ is a pole of order 2, and $s_3$ is a pole of order $k-a$. Selecting a closed curve $c$ in the region of convergence $\left| {1 - s} \right| < 1$, the poles $s_1$, $s_2$ lie outside the curve and $s_3$ lies inside. By applying formula (\ref{Eq4}), it follows that \[ \begin{array}{l} f\left( k \right)\\ = - {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{- k +a }},1} \big]\\ = \frac{{ - 1}}{{\left( {k - a - 1} \right)!}}\mathop {\lim }\limits_{s \to 1} \frac{{{{\rm{d}}^{k - a - 1}}}}{{{\rm{d}}{s^{k - a - 1}}}}\big[ {{{\left( {s - 1} \right)}^{k - a}}F\left( s \right){{\left( {1 - s} \right)}^{ - k+a}}} \big]\\ = \frac{{{{\left( { - 1} \right)}^{k - a + 1}}}}{{\left( {k - a - 1} \right)!}}\mathop {\lim }\limits_{s \to 1} \frac{{{{\rm{d}}^{k - a - 1}}}}{{{\rm{d}}{s^{k - a - 1}}}}\big[ {\frac{1}{{s - 2}} - \frac{1}{{s + 1}} - \frac{3}{{{{\left( {s + 1} \right)}^2}}}} \big]\\ = \mathop {\lim }\limits_{s \to 1} \big[ {\frac{1}{{{{\left( {s - 2} \right)}^{k - a}}}} - \frac{1}{{{{\left( {s + 1} \right)}^{k - a}}}} - \frac{{3\left( {k - a} \right)}}{{{{\left( {s + 1} \right)}^{k - a + 1}}}}} \big]\\ = {\left( { - 1} \right)^{k - a}} - {2^{a - k}} + 3\left( {a - k} \right){2^{a - k - 1}}.
\end{array}\] If we use formula (\ref{Eq7}) instead, together with (\ref{Eq6}) for the simple pole at $2$ and (\ref{Eq5}) with $N=2$ for the double pole at $-1$, then \[\begin{array}{rl} f\left( k \right) =\hspace{-6pt}& {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},2} \big]\\ \hspace{-6pt}&+ {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}, - 1} \big]\\ =\hspace{-6pt}& \mathop {\lim }\limits_{s \to 2} \big[ {\left( {s - 2} \right)F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}} \big]\\ \hspace{-6pt}&+ \mathop {\lim }\limits_{s \to - 1} \frac{{\rm{d}}}{{{\rm{d}}s}}\big[ {{{\left( {s + 1} \right)}^2}F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}} \big]\\ =\hspace{-6pt}& {\left( { - 1} \right)^{k - a}} - {2^{a - k}} + 3\left( {a - k} \right){2^{a - k - 1}}. \end{array}\] \item Compute $f(k)$, $k\in\mathbb{N}_{a+1}$ with the partial fraction method. After a simple mathematical deduction, one has \[{\textstyle F\left( s \right) = \frac{1}{{s - 2}} - \frac{1}{{s + 1}} - \frac{3}{{{{\left( {s + 1} \right)}^2}}}.}\] With the aid of Table \ref{Table 1}, one obtains \[\begin{array}{l} f\left( k \right)\\ = {\mathscr N}_a^{ - 1}\big\{ {\frac{1}{{s - 2}}} \big\} - {\mathscr N}_a^{ - 1}\big\{ {\frac{1}{{s + 1}}} \big\} - {\mathscr N}_a^{ - 1}\big\{ {\frac{3}{{{{\left( {s + 1} \right)}^2}}}} \big\}\\ = \frac{1}{{{{\left( {1 - 2} \right)}^{k - a}}}} - \frac{1}{{{{\left( {1 + 1} \right)}^{k - a}}}} - \frac{{3\left( {k - a} \right)}}{{{{\left( {1 + 1} \right)}^{k - a + 1}}}}\\ = {\left( { - 1} \right)^{k - a}} - {2^{a - k}} + 3\left( {a - k} \right){2^{a - k - 1}}. \end{array}\] \end{enumerate} The first method seems more complex than the second one, since some arithmetic operations have already been done when building Table \ref{Table 1}. Nonetheless, both established methods solve the problem exactly. {\bf Example 2.} $F\left( s \right) = \frac{{0.2{s^{0.2}} - 0.3}}{{{s^{1.2}} - 0.2{s^{0.7}} - 0.3{s^{0.5}} + 0.06}}$, $\left| {1 - s} \right| < 1$ and $\left|s \right|>0.3^{10/7}$. \begin{enumerate}[(i)] \item Compute $f(k)$, $k\in\mathbb{N}_{a+1}$ with the residue calculation method. The poles of $F\left( s \right){\left( {1 - s} \right)^{ - k+a}}$ can be found as ${s_1} = 0.04,{s_2} = 0.3^{10/7}{{\rm{e}}^{{\rm{j}}20\pi i/7}},{s_3} = 1$, where $s_1$ is a simple pole and $s_3$ is a pole of order $k-a$. Though each $s_2$ is a simple pole, different values of $i$ correspond to different poles $s_2$; in fact, the number of such poles is infinite. Selecting a closed curve $c$ in the region of convergence, the poles $s_1$, $s_2$ lie outside the curve and $s_3$ lies inside. To avoid dealing with the infinite number of poles, we utilize the pole inside the curve $c$. Proceeding this way, one has \[\begin{array}{l} f\left( k \right) \\ = - {\rm{Res}}\big[ {F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}},1} \big]\\ = \frac{{ - 1}}{{\left( {k - a - 1} \right)!}}\mathop {\lim }\limits_{s \to 1} \frac{{{{\rm{d}}^{k - a - 1}}}}{{{\rm{d}}{s^{k - a - 1}}}}\big[ {{{\left( {s - 1} \right)}^{k - a}}F\left( s \right){{\left( {1 - s} \right)}^{ - k + a}}} \big]\\ = \frac{{{{\left( { - 1} \right)}^{k - a + 1}}}}{{\left( {k - a - 1} \right)!}}\mathop {\lim }\limits_{s \to 1} \frac{{{{\rm{d}}^{k - a - 1}}}}{{{\rm{d}}{s^{k - a - 1}}}}\big[\frac{{0.2{s^{0.2}} - 0.3}}{{{s^{1.2}} - 0.2{s^{0.7}} - 0.3{s^{0.5}} + 0.06}}\big] \end{array}\] Admittedly, it is difficult to provide a compact expression for $\frac{{{{\rm{d}}^{k - a - 1}}}}{{{\rm{d}}{s^{k - a - 1}}}}\big[\frac{{0.2{s^{0.2}} - 0.3}}{{{s^{1.2}} - 0.2{s^{0.7}} - 0.3{s^{0.5}} + 0.06}}\big]$. In other words, the residue calculation method cannot solve this problem effectively.
\item Compute $f(k)$, $k\in\mathbb{N}_{a+1}$ with the partial fraction method. The considered function $F(s)$ can be equivalently rewritten as \[{\textstyle F\left( s \right) = \frac{1}{{{s^{0.5}} - 0.2}} - \frac{{{s^{0.2}}}}{{{s^{0.7}} - 0.3}}}.\] Similarly, using Table \ref{Table 1} yields \[\begin{array}{rl} f\left( k \right) =\hspace{-6pt}& {\mathscr N}_a^{ - 1}\big\{ {\frac{1}{{{s^{0.5}} - 0.2}}} \big\} - {\mathscr N}_a^{ - 1}\big\{ {\frac{{{s^{0.2}}}}{{{s^{0.7}} - 0.3}}} \big\}\\ =\hspace{-6pt}& {{\mathcal F}_{0.5,0.5}}\left( {0.2,k,a} \right) - {{\mathcal F}_{0.7,0.5}}\left( {0.3,k,a} \right), \end{array}\] which shows that the partial fraction method can recover the sequence $f(k)$ from some special irrational $F(s)$. Generally speaking, it is essential to decompose $F(s)$ into basic elements and to build a more detailed table of nabla Laplace transform pairs that includes these elements. \end{enumerate} \section{CONCLUSIONS}\label{Section 5} In this paper, the conventional residue calculation method and the partial fraction expansion method have been extended to compute the inverse nabla Laplace transform. To the best of our knowledge, this is the first time that two practical methods have been given that avoid calculating the contour integral directly. Although the exact solution can be achieved, the developed methods have limitations, and more related methods are expected for the irrational case. It is hoped that this paper will be a useful tool for all those who use nabla Laplace transforms in their work.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Conclusion} \label{sec:conclude} In this paper, we presented a simple, efficient, and powerful pooling scheme -- SVM pooling -- for video representation learning. We cast the pooling problem in a multiple instance learning framework, and seek to learn useful decision boundaries on video features against background/noise features. We provide an efficient scheme that jointly learns these decision boundaries and the action classifiers on them. Extensive experiments were showcased on eight challenging benchmark datasets, demonstrating state-of-the-art performance. Given the challenging nature of these datasets, we believe the benefits afforded by our scheme are a significant step towards the advancement of recognition systems designed to represent sets of images or videos. \section{End-to-End CNN Learning} \label{sec:e2e} In this section, we address the problem of training a CNN end-to-end with SVM pooling as an intermediate layer -- the main challenge is to derive the gradients of SVMP for efficient backpropagation. This challenge is amplified by the fact that we use the parameters of the decision hyperplane to generate our pooling descriptor, and this hyperplane is obtained via a non-differentiable argmin function (refer to~\eqref{eq:mil}). Fortunately, there is well-developed theory addressing such cases using the implicit function theorem~\cite{dontchev2009implicit}, as well as several recent works towards this end in the CNN setting~\cite{gould2016differentiating}. We follow these approaches and derive the gradients of SVMP below. \subsection{Discriminative Pooling Layer} In Figure~\ref{fig:dpl}, we describe two ways to insert the discriminative pooling layer into the CNN pipeline, namely (i) inserting SVMP at some intermediate layer and (ii) inserting SVMP at the end of the network just before the final classifier layer. While the latter pools lower-dimensional features and makes the gradient computation faster (as will be clear shortly), the last layer might contain only discriminative action features and might miss other spatio-temporal features that could be useful for discriminative pooling. This is in line with our experimental observations in Section~\ref{sec:exp}, which suggest that applying discriminative pooling after the pool5 or fc6 layers is significantly more useful than after the fc8 layer. These considerations lead us to prefer the first option, inserting the pooling layer between intermediate layers of the CNN. Figure~\ref{fig:dpl} also provides the gradients that need to be computed for back-propagation in either case. The only new component of this gradient is that for the argmin problem of pooling, which we derive below.
\subsection{Gradient Derivations for SVMP} Assume a CNN $f$ taking a sequence $S$ as input. Let $f_{L}$ denote the $L$-th CNN layer and let $X_{L}$ denote the feature maps generated by this layer for all frames in $S$. We assume these features go into an SVMP pooling layer, which produces as output a descriptor $w$ (using a precomputed set of negative feature maps); this descriptor is then passed to subsequent CNN layers for action classification. Mathematically, let $g(z) = \argmin_{w} \svmp(X_{L-1})$ define the SVM pooling layer, which we redefine using a squared hinge loss as: \begin{equation} \svmp(X_{L-1})=\frac{1}{2}\enorm{w}^2+ \frac{\lambda}{2}\sum_{z\in X_{L-1}}\!\!\max\left(0,\theta(z;\eta)w^Tz-1\right)^2.\nonumber \end{equation} As is by now clear, with regard to a CNN learning setup, we are dealing with a bilevel optimization problem here -- that is, optimizing for the CNN parameters via stochastic gradient descent in the outer optimization, which requires the gradient of an argmin inner optimization with respect to its optimum, i.e., we need to compute the gradient of $g(z)$ with respect to the data $z$. By applying Lemma 3.3 of~\cite{gould2016differentiating}, this gradient of the argmin at an optimum SVMP solution $w^*$ can be shown to be the following: \begin{equation} \nabla_{z} g(z)|_{w=w^*} = -\nabla_{ww} \svmp(X_{L-1})^{\!\!-1} \nabla_{zw} \svmp(X_{L-1}),\nonumber \end{equation} where the first term captures the inverse of the Hessian evaluated at $w^*$ and the second term is the second-order derivative with respect to $z$ and $w$. Substituting for the components, we have the gradient at $w=w^*$ as: {\small \begin{align} -\!\!\left(\!\eye{}\!\!+\!\!\lambda\hspace*{-0.5cm}\sum_{\forall j:\theta_jw^Tz_j >1}\hspace*{-0.5cm} (\theta_jz_j)(\theta_jz_j)^T\right)^{\!\!\!\!-1}\!\!\!\left[\lambda\hspace*{-0.3cm}\sum_{\forall j: \theta_jw^Tz_j >1} \hspace*{-0.6cm}\text{D }(\theta_j^2w^Tz_j\!-\!\theta_j)\!+\!\theta_j^2wz_j^T\!\!\right] \label{eq:bilevel} \end{align} } where for brevity, we use $\theta_j = \theta(z_j; \eta)$, and $\text{D}$ is a diagonal matrix whose $i$-th diagonal entry is $D_{ii}=\theta_i^2w^Tz_i-\theta_i$. \section{Experiments} \label{sec:exp} In this section, we explore the utility of discriminative pooling on several vision tasks, namely (i) action recognition using video and skeletal features, (ii) localizing actions in videos, (iii) image set verification, and (iv) recognizing dynamic texture videos. We introduce the respective datasets and experimental protocols next. \subsection{Datasets} \noindent\textbf{HMDB-51~\cite{kuehne2011hmdb} and UCF-101~\cite{soomro2012ucf101}:} are two popular benchmarks for video action recognition. Both datasets consist of trimmed videos downloaded from the Internet. HMDB-51 has 51 action classes and 6766 videos, while UCF-101 has 101 classes and 13320 videos. Both datasets are evaluated using 3-fold cross-validation, and mean classification accuracy is reported. For these datasets, we analyze different combinations of features on multiple CNN frameworks. \noindent\textbf{Charades~\cite{sigurdsson2016hollywood}:} is an untrimmed and multi-action dataset, containing 11,848 videos split into 7,985 for training, 1,863 for validation, and 2,000 for testing. It has 157 action categories, several of which are fine-grained. In the classification task, we follow the evaluation protocol of~\cite{sigurdsson2016hollywood}, using the output probability of the classifier as the score of the sequence.
In the detection task, we follow the `post-processing' protocol described in~\cite{Sigurdsson_2017_CVPR}, which uses the averaged prediction score of a small temporal window around each temporal pivot. Using the provided two-stream fc7 feature\footnote{http://vuchallenge.org/charades.html}, we evaluate the performance on both tasks using mean average precision (mAP) on the validation set. \noindent\textbf{Kinetics-600~\cite{kay2017kinetics}:} is one of the largest datasets for action recognition. It consists of 500K trimmed video clips over 600 action classes with at least 600 video clips in each class. Each video clip is at least 10 seconds long with a single action class label. We apply our SVMP scheme on the CNN features (2048-D) extracted from the I3D network~\cite{carreira2017quo}. \noindent\textbf{MSR Action3D~\cite{li2010action} and NTU-RGBD~\cite{shahroudy2016ntu}:} are two popular action datasets providing 3D skeleton data. Specifically, MSR Action3D has 567 short sequences with 10 subjects and 20 actions, while NTU-RGBD has 56,000 videos and 60 actions performed by 40 people from 80 different viewpoints. NTU-RGBD is by far the largest public dataset for depth-based action recognition. To analyze the performance of SVMP on non-linear features, we use a Lie algebra encoding of the skeletal data, as proposed in~\cite{vemulapalli2014human}, for the MSR dataset. As for NTU-RGBD, we use a temporal CNN as in~\cite{kim2017interpretable}, but with SVMP instead of its global average pooling. \noindent\textbf{Public Figures Face Database (PubFig)~\cite{kumar2009attribute}:} contains 60,000 real-life images of 200 people. All the images are collected directly from the Internet without any post-processing, which makes the images in each fold exhibit large variations in lighting, backgrounds, and camera views. Unlike video-based datasets, PubFig images are non-sequential. To generate features, we fine-tune a ResFace-101 network~\cite{masi16dowe} on this dataset and follow the evaluation protocol of~\cite{hayat2015deep}. \noindent\textbf{YUP++ dataset~\cite{feichtenhofer2017temporal}:} is a recent dataset for dynamic scene understanding. It has 20 scene classes, such as Beach, Fireworks, Waterfall, Railway, etc. There are 60 videos in each class. Half of the videos are recorded by a static camera and the other half by a moving camera. Accordingly, it is divided into two sub-datasets, \textit{YUP++ moving camera} and \textit{YUP++ static camera}. We use the latest Inception-ResNet-v2 model~\cite{szegedy2017inception} to generate features (from the last dense layer) from RGB frames and evaluate the performance according to the setting in~\cite{feichtenhofer2017temporal}, which uses a 10/90 train-test ratio. \begin{figure}[htbp] \begin{center} \subfigure[]{\label{subfig:1}\includegraphics[width=0.49\linewidth,clip]{figure/figure_eta.eps}} \subfigure[]{\label{subfig:2}\includegraphics[width=0.49\linewidth,clip]{figure/figure3.eps}} \subfigure[]{\label{subfig:3}\includegraphics[width=0.49\linewidth,clip]{figure/figure4.eps}} \subfigure[]{\label{subfig:4}\includegraphics[width=0.49\linewidth,clip]{figure/figure5.eps}} \end{center} \caption{Analysis of the parameters used in our scheme. All experiments use VGG features from fc6 dense layer. See text for details.} \label{fig:all_plots} \end{figure} \subsection{Parameter Analysis} In this section, we analyze the influence of each of the parameters in our scheme.
\noindent\textbf{Selecting Negative Bags:} An important step in our algorithm is the selection of the positive and negative bags in the MIL problem. We randomly sample the required number of frames (say, 50) from each sequence/fold in the training/testing set to define the positive bags. In terms of the negative bags, we need to select samples that are unrelated to the ones in the positive bags. We explored four different negatives in this regard to understand the impact of this selection, conducting these experiments on the HMDB-51 (and UCF-101) datasets. We considered the following choices for the negative bags: clips from (i) the ActivityNet dataset~\cite{caba2015activitynet} unrelated to HMDB-51, (ii) the UCF-101 dataset unrelated to HMDB-51, (iii) the Thumos Challenge background sequences\footnote{http://www.thumos.info/home.html}, and (iv) synthesized random white noise image sequences. For (i) and (ii), we use 50 frames each from randomly selected videos, one from every unrelated class, and for (iv) we use 50 synthesized white noise images and a randomly generated stack of optical flow images. Specifically, for the latter, we pass white noise RGB images to the same CNN models and extract the features from the last fully-connected layer. As for hand-crafted or geometry features used in our other experiments (such as action recognition on human pose sequences), we directly use the white noise as the negative bag. As shown in Figure~\ref{subfig:1}, the white noise negative shows better performance for both lower and higher values of the $\eta$ parameter. To understand this trend, in Figure~\ref{fig:tsne}, we show TSNE plots visualizing the deep CNN features for the negative bag variants. Given that the CNNs are trained on real-world image data and we extract features from the layer before the last linear layer, it is expected that these features be linearly separable (as seen in Figures~\ref{fig:thumos2} and~\ref{fig:ucf2}). However, we believe using random noise inputs may be activating combinations of filters in the CNN that are never co-activated during training, resulting in features that are highly non-linear (as Figure~\ref{fig:whitenoise} shows). Thus, requiring SVMP to learn linear/non-linear decision boundaries that classify video features against these ``noise'' features perhaps forces the optimizer to select those dimensions in the inputs (positive bag) that are more correlated with actions in the videos, thereby making the descriptor more useful for classification. In Figure~\ref{fig:4}, we show the TSNE visualizations of SVMP descriptors compared to average pooling and max pooling on data from 10 classes of the HMDB-51 dataset. The visualization shows that SVMP leads to better-separated clusters, substantiating that SVMP is learning discriminative representations.
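For clarity, the snippet below gives a minimal sketch (our illustration, not the authors' released code) of this pooling step for a single video, treating every frame in the positive bag as positive (i.e., ignoring the MIL selection ratio $\eta$) and using a white-noise negative bag; the function name and feature sizes are illustrative, and scikit-learn's LinearSVC (whose default squared hinge loss matches the objective used earlier) stands in for the SVM solver.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

# One SVMP descriptor: separate a video's frame features (positive bag)
# from white-noise features (negative bag) with a max-margin hyperplane
# and keep the hyperplane parameters w as the video representation.
def svmp_descriptor(video_feats, n_neg=50, C=10.0, seed=0):
    rng = np.random.RandomState(seed)
    noise = rng.randn(n_neg, video_feats.shape[1])   # white-noise negatives
    X = np.vstack([video_feats, noise])
    y = np.hstack([np.ones(len(video_feats)), -np.ones(n_neg)])
    return LinearSVC(C=C).fit(X, y).coef_.ravel()    # w is the descriptor

# e.g., 50 frames of 4096-D fc6 features -> one 4096-D video descriptor
w = svmp_descriptor(np.random.randn(50, 4096))
\end{verbatim}
The default $C=10$ above follows the hyperparameter study reported next.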
\begin{figure}[htbp] \centering \subfigure[Thumos]{\label{fig:thumos2}\includegraphics[width=0.3\linewidth,clip]{figure/thumo2.eps}} \subfigure[UCF101]{\label{fig:ucf2}\includegraphics[width=0.3\linewidth,clip]{figure/ucf2.eps}} \subfigure[White Noise]{\label{fig:whitenoise}\includegraphics[width=0.3\linewidth,clip]{figure/whitenoise2.eps}} \caption{T-SNE plots of positive (blue) and negative bags (red) when using negatives from: (a) Thumos, (b) UCF101, and (c) white noise.} \label{fig:tsne} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=0.9\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/points.eps} \end{center} \caption{T-SNE visualizations of SVMP and other pooling methods on sequences from the HMDB51 dataset (10 classes used). From left to right, Average Pooling, Max Pooling, and SVMP.} \label{fig:4} \end{figure} \noindent\textbf{Choosing Hyperparameters:} The three important parameters in our scheme are (i) the $\eta$ deciding the quality of an SVMP descriptor, (ii) $C_1=C$ used in Algorithm~\ref{alg3} when finding SVMP per sequence, and (iii) the sizes of the positive and negative bags. To study (i) and (ii), we plot in Figures~\ref{subfig:3} and~\ref{subfig:1}, for the HMDB-51 dataset, the classification accuracy when $C$ is increased from $10^{-4}$ to $10^{4}$ in steps and when $\eta$ is increased from 0 to 100\%, respectively. We repeat this experiment for all the different choices of negative bags. As is clear, increasing these parameters reduces the training error, but may lead to overfitting. However, Figure~\ref{subfig:2} shows that increasing $C$ increases the accuracy of the SVMP descriptor, implying that the CNN features are already equipped with discriminative properties for action recognition. However, beyond $C=10$, a gradual decrease in performance is witnessed, suggesting overfitting to bad features in the positive bag. Thus, we use $C=10$ (and $\eta=0.9$) in the experiments to follow. To decide the bag sizes for MIL, we plot in Figure~\ref{subfig:2} the performance against increasing size of the positive bag, while keeping the negative bag size at 50, and vice versa; i.e., for the red line in Figure~\ref{subfig:2}, we fix the number of instances in the positive bag at 50. We see that the accuracy rises with the cardinality of the negative bag. A similar trend, albeit less prominent, is seen when we repeat the experiment with the negative bag size, suggesting that about 30 frames per bag is sufficient to get a useful descriptor. \noindent\textbf{Running Time:} In Figure~\ref{subfig:4}, we compare the time it took on average to generate SVMP descriptors for an increasing number of frames in a sequence on the UCF101 dataset. For comparison, we plot the running times for some of the recent pooling schemes such as rank pooling~\cite{bilen2016dynamic,fernando2015modeling} and the Fisher vectors~\cite{wang2013action}. The plot shows that while our scheme is slightly more expensive than standard Fisher vectors (using the VLFeat\footnote{http://www.vlfeat.org/}), it is significantly cheaper to generate SVMP descriptors than some of the recent popular pooling methods. For a fair comparison, we use publicly available SVM code in both SVMP and rank pooling. \subsection{Experiments on HMDB-51 and UCF-101} Following recent trends, we use a two-stream CNN model in two popular architectures, the VGG-16 and the ResNet-152~\cite{feichtenhofer2016convolutional,simonyan2014very}.
For the UCF101 dataset, we directly use publicly available models from~\cite{feichtenhofer2016convolutional}. For the HMDB dataset, we fine-tune a two-stream VGG/ResNet model trained for the UCF101 dataset. \noindent\textbf{SVMP Optimization Schemes:} We proposed three different optimization strategies for solving our formulation (Section~\ref{opti_solution}). The enumerative solution is trivial and impractical. Thus, we will only compare Algorithms~\ref{alg2} and~\ref{alg3} in terms of performance and efficiency. In Table~\ref{algori_compari}, we compare the two on fc6 features from a VGG-16 model. It is clear that the alternating solution is slightly better than the parameter-tuning solution; however, it is also more computationally expensive. Considering efficiency, especially for large-scale datasets, we use the parameter-tuning solution in the following experiments. \noindent\textbf{SVMP on Different CNN Features:} We generate SVMP descriptors from different intermediate layers of the CNN models and compare their performance. Specifically, features from each layer are used as the positive bags, and SVMP descriptors are computed using Alg.~\ref{alg2} against the chosen set of negative bags. In Table~\ref{table:1}, we report results on split-1 of the HMDB dataset and find that the combination of fc6 and pool5 gives the best performance for the VGG-16 model, while pool5 features alone show good performance using ResNet. We thus use these feature combinations for the experiments to follow. \begin{table}[] \centering \caption{Comparison between Algorithms~\ref{alg2} and~\ref{alg3} in HMDB-51 split-1.} \label{algori_compari} \begin{tabular}{lcc} \hline Method & Accuracy & Avg. Time (sec)/Video\\ \hline Alternating Algorithm (Alg.~\ref{alg2}) & \textbf{69.8\%} & 2.4\\ Parameter-tuning Algorithm (Alg.~\ref{alg3}) & 69.5\% &\textbf{0.2} \end{tabular} \end{table} \begin{table}[] \centering \caption{Comparison of SVMP descriptors using various CNN Features on HMDB split-1.} \label{table:1} \begin{tabular}{lcc} \hline Feature/ & Accuracy & Accuracy when\\ model & independently & combined with:\\ \hline pool5 (vgg-16) & 57.9\% & \textbf{63.8\%} (fc6) \\ fc6 (vgg-16) & 63.3\% & - \\ fc7 (vgg-16) & 56.1\% & 57.1\% (fc6) \\ fc8 (vgg-16) & 52.4\% & 58.6\% (fc6) \\ softmax (vgg-16) & 41.0\% & 46.2\% (fc6) \\ \hline pool5 (ResNet-152) & \textbf{69.5\%} & - \\ fc1000 (ResNet-152) & 61.1\% & 68.8\% (pool5) \end{tabular} \end{table} \noindent\textbf{Linear vs Non-Linear SVMP:} We analyze the complementary nature of SVMP and its non-linear extension NSVMP (using a homogeneous kernel) on HMDB-51 and UCF-101 split-1. The results are provided in Table~\ref{table:3}, and clearly show that the combination leads to significant improvements consistently on both datasets. \noindent\textbf{End-to-End Learning and Ordered-SVMP:} In Table~\ref{table:4}\footnote{All experiments in Table~\ref{table:4} use the same input features.}, we compare to the end-to-end learning setting as described in Section~\ref{sec:e2e}. For end-to-end learning, we insert our discriminative pooling layer after the 'fc6' layer in the VGG-16 model and the 'pool5' layer in the ResNet model. We also present results when incorporating the temporal ordering constraint (TC) into the SVMP formulation to build ordered-SVMP. From the results, it appears that although the soft-attention scheme performs better than average pooling, it is inferior to SVMP itself, which is unsurprising given that it does not use a max-margin optimization.
Further, our end-to-end SVMP layer is able to achieve similar (but slightly inferior) performance to SVMP, which perhaps is due to the need to approximate the Hessian. As the table shows, we found that the temporal ranking is indeed useful for improving the performance of na\"ive SVMP. Thus, in the following experiments, we use SVMP with temporal ranking for all video-based tasks. \begin{table}[] \centering \caption{Comparison between SVMP and NSVMP on split-1.} \label{table:3} \begin{tabular}{l|ll|ll} \hline \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{HMDB-51} & \multicolumn{2}{c}{UCF-101} \\ \hline & VGG & ResNet &VGG & ResNet \\\hline linear-SVMP & 63.8\% & 69.5\% &91.6\% & 92.2\% \\ nonlinear-SVMP & 64.4\% & 69.8\% &92.0\% & 93.1\% \\ Combination & \textbf{66.1\%}& \textbf{71.0\%} &\textbf{92.2\%} &\textbf{94.0\%} \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Comparison to standard pooling methods on split-1. TC is short for Temporal Constraint, E2E is short for end-to-end learning.} \label{table:4} \begin{tabular}{l|ll|ll} \hline \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{HMDB-51} & \multicolumn{2}{c}{UCF-101} \\ \hline & VGG & ResNet &VGG & ResNet \\\hline Spatial Stream-AP\cite{feichtenhofer2016spatiotemporal,feichtenhofer2016convolutional} & 47.1\% & 46.7\% &82.6\% & 83.4\% \\ Spatial Stream-SVMP & 58.3\% & 57.4\% &85.7\% &87.6\% \\ Spatial Stream-SVMP(E2E) & 56.4\% & 55.1\% &83.2\% &85.7\% \\ Spatial Stream-SVMP+TC & \textbf{59.4\%} & \textbf{57.9\%} &\textbf{86.6\%} & \textbf{88.9\%} \\ \hline Temporal Stream-AP \cite{feichtenhofer2016spatiotemporal,feichtenhofer2016convolutional} & 55.2\% & 60.0\% &86.3\% & 87.2\% \\ Temporal Stream-SVMP & 61.8\% &65.7\% &88.2\% & 89.8\% \\ Temporal Stream-SVMP(E2E) & 58.3\% &63.2\% &87.1\% & 87.8\% \\ Temporal Stream-SVMP+TC & \textbf{62.6\%} &\textbf{67.1\%} &\textbf{88.8\%} & \textbf{90.9\%} \\ \hline Two-Stream-AP \cite{feichtenhofer2016spatiotemporal,feichtenhofer2016convolutional} & 58.2\%& 63.8\% &90.6\% &91.8\% \\ Two-Stream-SVMP &66.1\% &71.0\% &92.2\% &94.2\% \\ Two-Stream-SVMP(E2E) &63.5\% &68.4\% &90.6\% &92.3\% \\ Two-Stream-SVMP+TC &\textbf{67.2\%}&\textbf{71.3\%} &\textbf{92.5\%} &\textbf{94.8\%} \end{tabular} \end{table} \noindent\textbf{SVMP Image:} In Figure~\ref{fig:5}, we visualize the SVMP descriptor when applied directly to raw video frames. We compare the resulting image against those from other schemes, such as the dynamic images of~\cite{bilen2016dynamic}. It is clear that SVMP captures the essence of action dynamics in more detail. To understand the action information present in these images, we trained an action classifier directly on these images, as is done on dynamic images in~\cite{bilen2016dynamic}. We use the BVLC CaffeNet~\cite{jia2014caffe} as the CNN -- the same as the one used in~\cite{bilen2016dynamic}. The results are shown in Table~\ref{svmpimage} on split-1 of JHMDB (a subset of HMDB-51, containing 21 classes) and UCF-101. As is clear, SVMP images are seen to outperform \cite{bilen2016dynamic} by a significant margin, suggesting that SVMP captures more discriminative and useful action-related features. However, we note that in contrast to dynamic images, our SVMP images do not intuitively look like motion images; this is perhaps because our scheme captures different information related to the actions, and we do not use smoothing (via running average) when generating them. The use of random noise features as the negative bag may be adding additional artifacts.
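A hedged sketch of producing such an SVMP image is given below: the SVM is fit on the vectorized raw frames of a clip against white-noise frames, and the hyperplane $w$ is reshaped back into image form; the sizes and the final rescaling for display are our assumptions, not details specified by the text above.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

# An "SVMP image": separate the T vectorized RGB frames of a clip from
# T white-noise frames, then reshape the hyperplane w into an H x W x 3
# array and rescale it to [0, 1] for display.
def svmp_image(frames, C=10.0, seed=0):              # frames: (T, H, W, 3)
    T, H, W, Ch = frames.shape
    rng = np.random.RandomState(seed)
    X = np.vstack([frames.reshape(T, -1), rng.randn(T, H * W * Ch)])
    y = np.hstack([np.ones(T), -np.ones(T)])
    w = LinearSVC(C=C).fit(X, y).coef_.reshape(H, W, Ch)
    return (w - w.min()) / (w.max() - w.min() + 1e-8)
\end{verbatim}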
\begin{figure}[] \begin{center} \includegraphics[width=0.75\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/Figures.eps} \end{center} \caption{Visualizations of various pooled descriptors.} \label{fig:5} \end{figure} \begin{table}[] \centering \caption{Recognition rates on split-1 of JHMDB and UCF-101.} \label{svmpimage} \begin{tabular}{lcc} \hline Datasets & JHMDB & UCF-101\\ \hline Mean image&31.3\% &52.6\%\\ Max image &28.6\% &48.0\%\\ Dynamic image~\cite{bilen2016dynamic} & 35.8\% & 57.2\%\\ SVMP image & \textbf{45.8\%} &\textbf{65.4\%} \end{tabular} \end{table} \subsection{Action Recognition at Large Scale} Kinetics-600 is one of the largest state-of-the-art datasets for action recognition on trimmed videos. For this experiment, we use the I3D network~\cite{carreira2017quo} (using the Inception-V3 architecture) as the baseline feature generator. This model is pre-trained on the ImageNet dataset~\cite{krizhevsky2012imagenet} and takes stacks of 64 consecutive frames as inputs. Specifically, we extract the CNN features from the second-to-last layer (Mix5c) and apply average pooling to reduce the feature from $4\times7\times7\times1024$ into a 1024-D vector for each 64-frame chunk of RGB frames. For each video clip, we use a sliding window to generate a sequence of such features with a window size of 64 and a temporal stride of 8 frames. Then, we apply our proposed SVMP to generate video descriptors for action recognition. In Table~\ref{kinectic}, we make comparisons with the baseline result on the validation set of Kinetics-600, which indicates that SVMP brings clear improvements even in the large-scale setting. \begin{table}[] \centering \caption{Comparisons on Kinetics-600 dataset using I3D feature.} \label{kinectic} \begin{tabular}{lc} \hline Method & Accuracy\\\hline AP~\cite{carreira2018short} & 71.9\% \\ MP &67.8\% \\ SVMP &\textbf{73.5\%}\\ \end{tabular} \end{table} \subsection{Action Recognition/Detection in Untrimmed Videos} We use the Charades untrimmed dataset for this task, with the publicly available two-stream VGG features from the fc7 layer. We trained our models on the provided training set (7,985 videos), and report results (mAP) on the provided validation set (1,863 videos) for the tasks of action classification and detection. In the classification task, we concatenate the two-stream features and apply a sliding window pooling scheme to create multiple descriptors. Following the evaluation protocol in~\cite{sigurdsson2016hollywood}, we use the output probability of the classifier as the score of the sequence. In the detection task, we consider the evaluation method with post-processing proposed in~\cite{Sigurdsson_2017_CVPR}, which uses the averaged prediction score of a temporal window around each temporal pivot. Instead of average pooling, we apply SVMP. From Table~\ref{table:5}, it is clear that SVMP improves performance against other pooling schemes by a significant margin; the reason for this is perhaps the following. During training, we use trimmed video clips; however, when testing, we extract features from every frame/clip in the untrimmed test video. As the network has seen only action-related frames during training, features from background frames may result in arbitrary predictions, and average or max pooling on those features would hurt performance.
When optimizing the binary classification problem between positive and negative bags for SVMP, the decision boundary would capture the most discriminative data support, leading to a better summary of the useful features and thus to improved performance. \begin{table}[] \centering \caption{Comparisons on Charades dataset.} \label{table:5} \begin{tabular}{lccc} \hline Tasks & AP & MP & SVMP\\\hline Classification (mAP) & 14.2\% & 15.3\% & \textbf{26.3\%} \\ Detection (mAP) & 10.9\% & 9.2\% &\textbf{15.1\%}\\ \end{tabular} \end{table} \subsection{SVMP Evaluation on Other Tasks} In this section, we provide comprehensive evaluations justifying the usefulness of SVMP on non-video datasets and non-action tasks. We consider experiments on image set recognition, skeleton-sequence-based action recognition, and dynamic texture understanding. \textbf{MSR Action3D:} In this experiment, we explore the usefulness of SVMP on non-linear geometric features. Specifically, we chose the scheme of Vemulapalli et al.~\cite{vemulapalli2014human} as the baseline, which generates Lie-algebra-based skeleton encodings for action recognition. While they resort to a dynamic time warping kernel for the subsequent encoded skeleton pooling, we propose to use SVMP instead. We use random noise with the dataset mean and standard deviation as the negative bag, which achieves better performance. \textbf{NTU-RGBD:} On this dataset, we apply our SVMP scheme on the skeleton-based CNN features. Specifically, we use~\cite{kim2017interpretable} as the baseline, which applies a temporal CNN with residual connections on the vectorized 3D skeleton data. We replace the global average pooling layer in~\cite{kim2017interpretable} with our SVM pooling layer, as sketched below. For the evaluation, we adopt the official cross-view and cross-subject protocols. Interestingly, we also explore whether the dimensionality of the feature points affects SVMP performance: using feature points with dimensions ranging from 150 to 4096, we find that SVMP is sensitive mainly to the number of data points (as observed in the Charades experiment) and not to the dimensionality. \textbf{PubFig:} In this task, we evaluate the use of SVMP for image set representation. We follow the evaluation setting in~\cite{hayat2015deep} and create the descriptors for training and testing by applying SVMP over ResFace-101~\cite{masi16dowe} features from every image in the PubFig dataset. Unlike the video-based tasks, all input features in this setting are useful and represent the same person; however, their styles vary significantly, which implies that the CNN features may be very different even when they are from the same person. This further demands that SVM pooling find discriminative feature dimensions that are correlated with the person's identity and invariant to such variations. \textbf{YUP++:} To investigate our SVMP scheme on deeper architectures, we use features from the latest Inception-ResNet-v2 model~\cite{szegedy2017inception}, which has achieved state-of-the-art performance on the 2015 ILSVRC challenge. Specifically, we extract the RGB frames from videos and divide them into training and testing splits according to the setting in~\cite{feichtenhofer2017temporal} (using a 10/90 train-test ratio). As with standard image-based CNNs, the clip-level label is used to train the network on every frame.
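The drop-in nature of the pooling replacement mentioned above for NTU-RGBD can be sketched as follows (our illustration, not the code of~\cite{kim2017interpretable}); the interface is the same whether the per-timestep features are 150-D or 4096-D.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

# Pool a (T, d) block of per-timestep features into one d-D vector; "avg"
# is the baseline global average pooling, "svmp" the proposed replacement
# (a condensed copy of the svmp_descriptor sketch shown earlier).
def pool_sequence(feats, method="svmp", C=10.0, seed=0):
    if method == "avg":
        return feats.mean(axis=0)
    noise = np.random.RandomState(seed).randn(*feats.shape)
    X = np.vstack([feats, noise])
    y = np.hstack([np.ones(len(feats)), -np.ones(len(feats))])
    return LinearSVC(C=C).fit(X, y).coef_.ravel()

video_repr = pool_sequence(np.random.randn(64, 150))  # 64 steps, 150-D
\end{verbatim}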
\begin{table}[tbp] \centering \caption{Accuracy comparison on different subsets of HMDB-51 (H) and UCF-101 (U) split-1 using I3D+ features.} \label{tab:11} \begin{tabular}{l|l|l|l|l|l} \hline Min \# of frames & 1 & 80 & 140 & 180 & 260 \\ \hline \# of classes (H) & 51 & 49 & 27 & 21 & 12 \\ \hline \# of classes (U) & 101 & 101 & 95 & 82 & 52 \\ \hline I3D (H) & 79.6\% & 81.8\% & 84.1\% & 78.0\% & 77.3\% \\ \hline SVMP (H) & \textbf{80.0\%} & \textbf{82.9\%} & \textbf{84.8\%} & \textbf{85.1\%} & \textbf{86.8\%} \\ \hline I3D (U) & 98.0\% & 98.0\% & 98.0\% & 95.9\% & 93.8\% \\ \hline SVMP (U) & \textbf{98.4\%} & \textbf{98.9\%} & \textbf{99.3\%} & \textbf{98.5\%} & \textbf{97.3\%} \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Comparison to the state of the art on each dataset, following the official evaluation protocol of each dataset.} \label{table:10} \scalebox{0.95}{ \begin{tabular}{@{}l|c|c@{}}\hline \multicolumn{3}{c}{HMDB-51 \& UCF-101 (accuracy over 3 splits)} \\\hline Method & HMDB-51 & UCF-101 \\\hline Temporal segment networks~\cite{Wang2016} & 69.4\% & 94.2\% \\ AdaScan~\cite{Kar_2017_CVPR} & 54.9\% & 89.4\% \\ AdaScan + IDT + C3D~\cite{Kar_2017_CVPR} & 66.9\% & 93.2\% \\ ST ResNet~\cite{feichtenhofer2016spatiotemporal} & 66.4\% & 93.4\% \\ ST ResNet + IDT~\cite{feichtenhofer2016spatiotemporal} & 70.3\% & 94.6\% \\ ST Multiplier Network~\cite{feichtenhofer2017spatiotemporal} & 68.9\% & 94.2\% \\ ST Multiplier Network + IDT~\cite{feichtenhofer2017spatiotemporal} & 72.2\% & 94.9\% \\ Hierarchical rank pooling~\cite{fernando2016discriminative} &65.0\% &90.7\% \\ Two-stream I3D~\cite{carreira2017quo} & 66.4\% & 93.4\% \\ Two-stream I3D+ (Kinetics 300k)~\cite{carreira2017quo} & 80.7\% & 98.0\% \\\hline Ours (SVMP) & 71.3\% & 94.6\% \\ Ours (SVMP+IDT) & \textbf{72.6\%} & \textbf{95.0\%} \\ Ours (I3D+) & \textbf{81.8\%} & \textbf{98.5\%} \\\hline \multicolumn{3}{c}{Kinetics-600} \\\hline Method & \multicolumn{2}{c}{Accuracy} \\\hline I3D RGB~\cite{carreira2018short} & \multicolumn{2}{c}{71.3\%} \\ Second-order Pooling~\cite{cherian2017second} & \multicolumn{2}{c}{54.7\%} \\\hline Ours (SVMP) & \multicolumn{2}{c}{\textbf{73.5\%}} \\\hline \multicolumn{3}{c}{Charades (mAP)} \\\hline Method & Classification & Detection \\\hline Two-stream~\cite{simonyan2013deep} & 14.3\% & 10.9\% \\ ActionVlad + IDT~\cite{girdhar2017actionvlad} & 21.0\% & - \\ Asynchronous Temporal Fields~\cite{Sigurdsson_2017_CVPR} & 22.4\% & 12.8\% \\\hline Ours (SVMP) & 26.3\% & 15.1\% \\ Ours (SVMP+IDT) & \textbf{27.4\%} & \textbf{16.3\%} \\\hline \multicolumn{3}{c}{MSR-Action3D} \\\hline Method & \multicolumn{2}{c}{Accuracy} \\\hline Lie Group~\cite{vemulapalli2014human} & \multicolumn{2}{c}{92.5\%} \\ ST-LSTM + Trust Gate~\cite{liu2017skeleton} & \multicolumn{2}{c}{94.8\%} \\\hline Ours (SVMP) & \multicolumn{2}{c}{\textbf{95.5\%}} \\\hline \multicolumn{3}{c}{NTU-RGBD} \\\hline Method & Cross-Subject & Cross-View \\\hline Res-TCN~\cite{kim2017interpretable} & 74.3\% & 83.1\% \\ ST-LSTM + Trust Gate~\cite{liu2017skeleton} & 69.2\% & 77.7\% \\\hline Ours (SVMP) & \textbf{79.4\%} & \textbf{87.6\%} \\\hline \multicolumn{3}{c}{PubFig} \\\hline Method & \multicolumn{2}{c}{Accuracy}\\\hline Deep Reconstruction Models~\cite{hayat2015deep} & \multicolumn{2}{c}{89.9\%} \\ ESBC~\cite{hayat2017empowering} & \multicolumn{2}{c}{98.6\%} \\\hline Ours (SVMP) & \multicolumn{2}{c}{\textbf{99.3\%}} \\\hline \multicolumn{3}{c}{YUP++} \\\hline Method & Stationary & Moving \\\hline Temporal Residual
Networks~\cite{feichtenhofer2017temporal}&92.4\% & 81.5\% \\\hline Ours (SVMP)&\textbf{92.9\%}& \textbf{84.0\%} \\\hline \end{tabular}} \end{table} \subsection{Comparisons to the State of the Art} In Table \ref{table:10}, we compare our best results against the state of the art on each dataset using the standard evaluation protocols. For a fair comparison, we also report SVMP combined with hand-crafted features (IDT-FV)~\cite{wang2013dense} for HMDB-51. Our scheme outperforms other methods on all datasets by 1--4\%. For example, on HMDB-51, our results are about 2--3\% better than the next best method without IDT-FV. On Charades, we outperform previous methods by about 3\%, while faring well on the detection task against~\cite{Sigurdsson_2017_CVPR}. We also demonstrate a significant performance improvement (about 3--4\%) on NTU-RGBD and marginally better performance on the MSR Action3D dataset for skeleton-based action recognition. Our results are superior (by 1--2\%) on the PubFig and YUP++ datasets. We further analyze the benefits of combining I3D+ with SVMP (instead of its proposed average pooling) on both the HMDB-51 and UCF-101 datasets using the settings in~\cite{carreira2017quo}. However, we find that the improvement over average pooling in I3D+ is not significant; we believe this is because learning the SVMP descriptor implicitly solves a learning problem, which requires a sufficient number of training samples, i.e., frames in the sequence. The I3D network uses 64-frame chunks as one sample, thereby reducing the number of samples available to SVMP and leading to sub-optimal learning. We analyze this hypothesis in Table~\ref{tab:11}; each column in this table reports performance on a data subset, filtered by the minimum number of frames in its sequences. As is clear from the table, while SVMP performs on par with I3D+ when the sequences are shorter, it demonstrates significant benefits on subsets with longer sequences. \section{Introduction} \label{sec:intro} \IEEEPARstart{W}{e} are witnessing an astronomical increase in video data around us. This data deluge has brought the problem of effective video representation -- specifically, of its semantic content -- to the forefront of computer vision research. The resurgence of convolutional neural networks (CNNs) has enabled significant progress on several problems in computer vision~\cite{he2016deep,he2017mask} and is now pushing forward the state of the art in action recognition and video understanding as well~\cite{carreira2017quo,feichtenhofer2017spatiotemporal,Hu_2018_ECCV,zhou2017temporalrelation}. Even so, current solutions for video representation are still far from being practically useful, arguably due to the volumetric nature of this data modality and the complex nature of real-world human actions. \begin{figure} \begin{center} \includegraphics[width=1\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/SVMP_concept.eps} \end{center} \caption{An illustration of our discriminative pooling scheme. Our main idea is to learn a representation for the positive bag (left) of CNN features from the video of interest. To extract useful features from this video, we use a negative bag (right) of features from videos that are known to contain irrelevant/noise features.
The representation learning problem is cast as a binary (non-)linear classification problem in an SVM setting; the hyperplane found via the optimization (which is a linear combination of support vectors) is used as the representation of the positive bag, which we call the~\emph{SVM pooled descriptor}.} \label{fig:1} \end{figure} Using effective architectures, CNNs are often found to extract features from images that perform well on recognition tasks. Leveraging this know-how, deep learning solutions for video action recognition have so far been straightforward extensions of image-based models~\cite{simonyan2014two,ji20133d,zhou2017temporalrelation}. However, applying such models directly to video data is not easy, as a video can be arbitrarily long; to address this, a CNN may need to be scaled up by yet another dimension of complexity, which could increase the number of parameters sharply. This demands more advanced computational infrastructure and greater quantities of clean training data~\cite{carreira2017quo,monfort2018moments}. To overcome this problem, the trend has been to convert the video data into short temporal segments consisting of one to a few frames, on which existing image-based CNN models are trained. For example, in the popular two-stream model~\cite{feichtenhofer2016convolutional, simonyan2014two, simonyan2014very, wang2015action, wangtwo}, the CNNs are trained to independently predict actions from short video clips (consisting of single frames or stacks of about ten optical flow frames), or from snippets of about 64 frames as in the recent I3D architecture~\cite{carreira2017quo}; these predictions are then pooled to generate a prediction for the full sequence -- typically using average/max pooling. While average pooling gives equal weights to all the predictions, max pooling may be sensitive to outliers. There have also been recent approaches that learn representations over features produced by, say, a two-stream model, such as the temporal relation networks of Zhou et al.~\cite{zhou2017temporalrelation}; rank pooling and its variants (Bilen et al.~\cite{bilen2016dynamic}, Fernando et al.~\cite{fernando2015modeling}, and Cherian et al.~\cite{cherian2018non,grp}) that capture the action dynamics; higher-order statistics of CNN features (Cherian et al.~\cite{cherian2017second,cherian2017higher}); and CNN features along motion trajectories (Wang et al.~\cite{wang2015action}) and temporal segments (Wang et al.~\cite{Wang2016}), to name a few. However, none of these methods avoids learning meaningless information from the noise/background within the video; explicitly modeling this noise and demonstrating the benefits of doing so are the main contributions of this paper. To this end, we observe that not all predictions on the short video snippets are equally informative, yet some of them must be~\cite{schindler2008action}. This allows us to cast the problem in a multiple instance learning (MIL) framework, where we assume that some of the features in a given sequence are indeed useful, while the rest are not. We take all the CNN features from a sequence (containing both the good and the bad features) to represent a positive bag, and CNN features from unrelated video frames or synthetically generated random noise frames to represent a negative bag. We would ideally want the features in the negative bag to correlate well with the uninformative features in the positive bag.
We then formulate a binary classification problem of separating as many good features as possible in the positive bag using a discriminative classifier (we use a support vector machine (SVM) for this purpose). The decision boundary of the classifier thus learned is then used as a descriptor for the entire video sequence, which we call the SVM Pooled (SVMP) descriptor. To accommodate the fact that we are dealing with temporally-ordered data in the positive bag, we also explore learning our representations with partial ordering relations. An illustration of our SVMP scheme is shown in Figure~\ref{fig:1}. Our SVMP scheme/descriptor shares several properties with standard pooled descriptors, but also offers several important advantages. For example, similar to other pooling schemes, SVM pooling results in a compact and fixed-length representation of videos of arbitrary length. Differently, however, our pooling assigns different weights to different features, and may thus be seen as a form of weighted average pooling that filters out features that are perhaps irrelevant for action recognition. Further, given that our setup uses a max-margin encoding of the features, the pooled descriptor is relatively stable with respect to data perturbations and outliers. Our scheme is agnostic to the feature extractor part of the system; for example, it could be applied to the intermediate features from any CNN model, or even to hand-crafted features. Moreover, the temporal dynamics of actions are explicitly encoded in the formulation. The scheme is fast to implement using publicly available SVM solvers, and can also be trained in an end-to-end manner within a CNN setup. To evaluate our SVMP scheme, we provide extensive experiments on various datasets spanning a diverse set of tasks, namely action recognition and forecasting on HMDB-51~\cite{kuehne2011hmdb}, UCF-101~\cite{soomro2012ucf101}, Kinetics-600~\cite{kay2017kinetics} and Charades~\cite{sigurdsson2016hollywood}; skeleton-based action recognition on MSR Action3D~\cite{li2010action} and NTU-RGBD~\cite{shahroudy2016ntu}; image-set verification on the PubFig dataset~\cite{kumar2009attribute}; and video-texture recognition on the YUP++ dataset~\cite{feichtenhofer2017temporal}. We outperform standard pooling methods on these datasets by a significant margin (3--14\%) and demonstrate superior performance against state-of-the-art results by 1--5\%. Before moving on, we summarize below the main contributions of this paper: \begin{itemize} \item We introduce the concept of multiple instance learning (MIL) into a binary SVM classification problem for learning video descriptors. \item We propose SVM pooling, which captures and summarizes the discriminative features in a video sequence while explicitly encoding the action dynamics. \item We explore variants of our optimization problem and present progressively cheaper inference schemes, including a joint pooling and classification objective, as well as an end-to-end learnable CNN architecture. \item We demonstrate the usefulness of our SVMP descriptor by applying it on eight popular vision benchmarks spanning diverse input data modalities and CNN architectures. \end{itemize} \section{Proposed Method} \label{sec:setup} In this section, we first describe the problem of learning SVMP descriptors and introduce three different ways to solve it.
Before proceeding, we provide a graphical snapshot of our main idea and problem setup in Figure~\ref{fig:2}. Frames (or flow images) in the positive and negative bags are first passed through a CNN model for feature generation. These features are then passed to our SVMP module, which learns (non-linear) hyperplanes separating the features from the positive bag against the ones from the negative bag, the latter being assumed fixed for all videos. These hyperplane representations are then used to train an action classifier at the video level. In the following, we formalize these ideas concretely. \subsection{Problem Setup} \label{sec:problem_setup} Let us assume we are given a dataset of $N$ video sequences $\mathcal{X}^{\small{+}}_{} = \set{\pseq{1}, \pseq{2},\cdots, \pseq{N}}$, where each $\pseq{i}$ is a set of frame-level features, i.e., $\pseq{i}=\set{\pfeat{i}{1}, \pfeat{i}{2}, \cdots, \pfeat{i}{n}}$, each $\pfeat{i}{k}\in\reals{p}$. We assume that each $\pseq{i}$ is associated with an action class label $\ypseq{i}\in\set{1,2,\cdots, d}$. Further, the $+$ sign denotes that the features and the sequences represent a positive bag. We also assume that we have access to a set of sequences $\mathcal{X}^{\small{-}}_{}=\set{\nseq{1}, \nseq{2},\cdots \nseq{M}}$ belonging to actions different from those in $\mathcal{X}^{\small{+}}_{}$, where each $\nseq{j}=\set{\nfeat{j}{1}, \nfeat{j}{2}, \cdots, \nfeat{j}{n}}$ contains the features associated with a negative bag, each $\nfeat{j}{k}\in\reals{p}$. For simplicity, we assume all sequences have the same number $n$ of features. Further note that our scheme is agnostic to the type of features, i.e., the features may come from a CNN or may be hand-crafted. Our goals are two-fold, namely (i) to learn a classifier decision boundary for every sequence in $\mathcal{X}^{\small{+}}_{}$ that separates a fraction $\eta$ of its features from the features in $\mathcal{X}^{\small{-}}_{}$, and (ii) to learn video-level classifiers on the classes in the positive bags, which are represented by the learned decision boundaries from (i). In the following, we provide a multiple instance learning formulation for achieving (i), and a joint objective combining (i) and (ii). Before presenting our scheme, however, it may be useful to gain some insight into its main motivations. As alluded to above, given the positive and negative bags, our goal is to learn a linear (or non-linear) classification boundary that separates the two bags while correctly classifying at least a fraction $\eta$ of the positive bag -- this classification boundary is used as the descriptor for the positive bag. Referring to the conceptual illustration in Figure~\ref{fig:3a}, when no negative bag is present, there are several ways to find a decision hyperplane in a max-margin setup that could potentially satisfy the $\eta$ constraint. However, there is no guarantee that these hyperplanes are useful for action recognition. Instead, by introducing a negative bag, which almost certainly contains irrelevant features, it becomes easier for the decision boundary to separate the useless features from the rest, the latter containing the useful action-related features, as shown in Figure~\ref{fig:3b}. This is precisely our intuition for proposing this scheme.
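The algorithms in Section~\ref{opti_solution} repeatedly solve a standard SVM between a positive bag and the fixed negative bag. As a purely illustrative sketch of this core step (the function name is our own; we assume NumPy arrays and scikit-learn's \texttt{LinearSVC} as one of the publicly available solvers mentioned earlier):
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def svm_pool(pos_feats, neg_feats, C=1.0):
    # Fit a max-margin hyperplane separating the positive bag
    # (label +1) from the fixed negative bag (label -1) and
    # return its parameters [w, b] as the bag's descriptor.
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)),
                   -np.ones(len(neg_feats))])
    clf = LinearSVC(C=C).fit(X, y)
    return np.append(clf.coef_.ravel(), clf.intercept_)
\end{verbatim}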
\begin{figure}[htbp] \subfigure[]{\label{fig:3a}\includegraphics[width=4cm,trim={4cm 2cm 15cm 6cm},clip]{./figure/svmp_pami_illust_1.eps}} \subfigure[]{\label{fig:3b}\includegraphics[width=4cm,trim={4cm 2cm 15cm 6cm},clip]{./figure/svmp_pami_illust_2.eps}} \caption{An illustration of our overall idea. (a) The input data points and the plausible hyperplanes satisfying some $\eta$ constraint; (b) when noise $\mathcal{X}^{-}$ is introduced (green dots), it helps identify noisy features/data dimensions, towards producing a hyperplane $w$ that separates useful data from noise while satisfying the $\eta$ constraint.} \label{fig:3} \end{figure} \subsection{Learning Decision Boundaries} As described above, our goal in this section is to generate a descriptor for each sequence $\pseq{}\in\mathcal{X}^{\small{+}}_{}$; we define this descriptor to be the learned parameters of a hyperplane that separates the features $\pfeat{}{}\in\pseq{}$ from all features in $\mathcal{X}^{\small{-}}_{}$. We do not require that all $\pfeat{}{}$ can be separated from $\mathcal{X}^{\small{-}}_{}$ (since several of them may belong to a background class); instead, we assume that at least a fixed fraction $\eta$ of them is classifiable. Mathematically, suppose the tuple $(w_i,b_i)$ represents the parameters of a max-margin hyperplane separating some of the features in a positive bag $\pseq{i}$ from all features in $\mathcal{X}^{\small{-}}_{}$; then we cast the following objective, which is a variant of the sparse MIL (SMIL)~\cite{bunescu2007multiple}, normalized set kernel (NSK)~\cite{gartner2002multi}, and $\propto$-SVM~\cite{yu2013propto} formulations: \begin{align} \label{eq:mil} &\!\!\argmin_{w_i\in\reals{p},b_i\in\reals{},\zeta\geq 0} P1(w_i,b_i) := \frac{1}{2}\enorm{w_i}^2 + C_1\sum_{k=1}^{(M+1)n}\zeta_k\\ &\subjectto\ \theta(\mathbf{x};\eta)\left(w_i^T\mathbf{x}+b_i\right) \geq 1 - \zeta_k\\ \label{eq:5}&\theta(\mathbf{x};\eta)= -1, \forall \mathbf{x} \in \left\{\pseq{i}\bigcup \mathcal{X}^{\small{-}}_{}\right\}\backslash \hpseq{i}\\ \label{eq:6}&\theta(\mathbf{\hat{x}}; \eta) = 1, \forall \mathbf{\hat{x}} \in\hpseq{i} \\ &\card{\hpseq{i}} \geq \eta \card{\pseq{i}}. \quad\quad \label{eq:ratio-constraint} \end{align} In the above formulation, we assume that there is a subset $\hpseq{i}\subset\pseq{i}$ that is classifiable, while the rest of the positive bag need not be, as captured by the ratio constraint in~\eqref{eq:ratio-constraint}. The variables $\zeta$ capture the non-negative slacks weighted by a regularization parameter $C_1$, and the function $\theta$ provides the label of the respective features. Unlike the SMIL or NSK objectives, which assume the individual features $\mathbf{x}$ are summable, our problem is non-convex due to the unknown set $\hpseq{}$. However, this is not a serious deterrent to the usefulness of our formulation; it can be tackled as described in the sequel, as supported by our experimental results. As our formulation is built on an SVM objective, we call this specific discriminative pooling scheme~\emph{SVM pooling}, and formally define the descriptor for a sequence as: \begin{definition}[SVM Pooling Desc.] \label{def:svmp} Given a sequence $X$ of features $\mathbf{x}\in\reals{p}$ and a negative dataset $\mathcal{X}^{\small{-}}_{}$, we define the~\emph{SVM Pooling} (SVMP) descriptor as $\svmp(X) = [w,b]^T\in\reals{p+1}$, where the tuple $(w,b)$ is obtained as the solution of problem $P1$ defined in~\eqref{eq:mil}.
\end{definition} \subsection{Optimization Solutions} \label{opti_solution} The problem $P1$ can be posed as a mixed-integer quadratic program (MIQP), which is unfortunately known to be NP-hard~\cite{lazimy1982mixed}. The problem $P1$ is also non-convex due to the proportionality constraint $\eta$ and the fact that the labels $\theta(\mathbf{x};\eta)$ are unknown. Towards a practically useful approximate solution circumventing these difficulties, we present three optimization strategies below. \subsubsection{Exhaustive Enumeration} A na\"ive way to solve problem $P1$ is to enumerate all possible $\theta(\mathbf{x};\eta)$ that meet a given $\eta$ constraint, which reduces $P1$ to a classical SVM problem for each instantiation of the plausible $\theta$ assignments. In such a setting, for a given sequence, we can rewrite~\eqref{eq:mil} as: \begin{align} \argmin_{w_i\in\reals{p},b_i\in\reals{},\zeta\geq 0} &\frac{1}{2}\enorm{w_i}^2 + C_1\!\!\!\sum_{k=1}^{(M+1)n}\zeta_k\notag \\ &+ \max(0,1-\zeta_k-\theta(\mathbf{x};\eta)(w_i^T\mathbf{x}+b_i)), \end{align} where the constraints are included via the hinge loss. Once these subproblems are solved, we can compare the optimal solutions for the various subsets of the positive bag and pick the solution with the smallest objective value. As is apparent, this na\"ive strategy becomes problematic for longer sequences or when $\eta$ is not suitably chosen. \subsubsection{Alternating Algorithm} \label{alternating} This is a variant of the scheme proposed in~\cite{yu2013propto}. Instead of enumerating all possible $\theta(\mathbf{x};\eta)$ as above, the main idea here is to alternately fix either $\theta(\mathbf{x};\eta)$ or $[w,b]$ and optimize the other. The detailed procedure is shown in Alg.~\ref{alg2}. \begin{algorithm} \SetAlgoLined \KwIn{$\pseq{}$, $\mathcal{X}^{\small{-}}_{}$, $\eta$} $Initialize\ \theta\ according\ to\ \eta$\; \Repeat{$Reduction\ is\ smaller\ than\ a\ threshold\ (10^{-4})$}{$Fix\ \theta\ to\ solve\ [w,b]\leftarrow\svm(\pseq{},\ \mathcal{X}^{\small{-}}_{},\ \theta)$\; $Fix\ [w,b]\ to\ solve\ \theta\colon$ $\ \ Reinitialize\ \theta_i \leftarrow -1,\forall i\in(1,n)$\; \ \ \For{$i=1\ \to\ n$}{$Set\ \theta_i \leftarrow 1;$\\ $record\ the\ reduction\ of\ the\ objective$} $\ \ Sort\ and\ select\ the\ top\ R\ reductions,\ R=\eta n$\; $\ \ Get\ \theta\ according\ to\ the\ sorting$;} \KwRet{$[w,b]$} \caption{Alternating solution to the MIL problem $P1$} \label{alg2} \end{algorithm} In Algorithm~\ref{alg2}, fixing $\theta$ to solve for $[w,b]$ is a standard SVM problem, as in the enumeration algorithm above. When fixing $[w,b]$ to solve for $\theta$, we apply a strategy similar to~\cite{yu2013propto}; i.e., we initialize all labels in $\theta$ to $-1$, then turn each $\theta_i$ to $+1$ and record the reduction in the objective. Next, we sort these reductions and select the top $R$, where $R=\eta n$; a larger reduction implies a smaller objective. These top-$R$ $\theta_i$ are then set to $+1$ in $\theta$. While there is no theoretical guarantee that this scheme converges to a fixed point, empirically we observe useful convergence, which we detect via a suitable threshold. \subsubsection{Parameter-Tuning Algorithm} As is clear, both of the above schemes may be computationally expensive in general. We note that the regularization parameter $C_1$ in~\eqref{eq:mil} controls the magnitude of the slack variables $\zeta$, thereby influencing the training error rate.
A smaller value of $C_1$ allows more data points to be misclassified. If we assume that useful features from the sequences are more easily classifiable than background features, then a smaller value of $C_1$ could help find the decision hyperplane easily (further assuming the negative bag is suitably chosen). However, the correct value of $C_1$ depends on each sequence. Thus, in Algorithm~\ref{alg3}, we propose a heuristic scheme that finds the SVMP descriptor for a given sequence $\pseq{}$ by iteratively tuning $C_1$ such that at least a fraction $\eta$ of the features in the positive bag is classified as positive. \begin{algorithm} \SetAlgoLined \KwIn{$\pseq{}$, $\mathcal{X}^{\small{-}}_{}$, $\eta$} $C_1 \leftarrow \epsilon,\ \lambda > 1$\; \Repeat{$\frac{\card{\hpseq{}}}{\card{\pseq{}}}\geq \eta$} { $C_1 \leftarrow \lambda C_1$\; $[w,b] \leftarrow \argmin_{w,b} \svm(\pseq{},\ \mathcal{X}^{\small{-}}_{},\ C_1)$\; $\hpseq{} \leftarrow \set{\mathbf{x}\in\pseq{}\ |\ w^T\mathbf{x}+b \geq 0}$\; } \KwRet{$[w,b]$} \caption{Parameter-tuning solution for the MIL problem $P1$} \label{alg3} \end{algorithm} \emph{A natural question here is: how optimal is this heuristic?} Note that each step of Algorithm~\ref{alg3} solves a standard SVM objective. Suppose we have an oracle that gives us a fixed value $C$ for $C_1$ that works for all action sequences for a fixed $\eta$. As is clear, there could be multiple combinations of data points in $\hpseq{}$ that satisfy this $\eta$ (as we explored in the enumeration algorithm above). If $\hpseq{p}$ is one such $\hpseq{}$, then $P1$ using $\hpseq{p}$ is just the SVM formulation and is thus convex. Differently from the previous algorithms, in Alg.~\ref{alg3} we adjust the SVM classification rate to $\eta$, which is easier to implement. Assuming we find a $C_1$ that satisfies the $\eta$-constraint using $P1$, then due to the convexity of the SVM, it can be shown that the optimal objective of $P1$ will be the same in both cases (exhaustive enumeration and our proposed regularization adjustment), albeit the solution $\hat{X}_p^+$ might differ (there could be multiple solutions). \subsection{Nonlinear Extensions} In problem $P1$, we assume a linear decision boundary when generating SVMP descriptors. However, looking back at our solutions in Algorithms~\ref{alg2} and~\ref{alg3}, it is clear that we are dealing with standard SVM formulations to solve our relaxed objectives. In light of this, instead of using linear hyperplanes for classification, we may use nonlinear decision boundaries, applying the kernel trick to embed the data in a Hilbert space for a better representation. Assuming $\mathcal{X}=\mathcal{X}^{\small{+}}_{}\cup\mathcal{X}^{\small{-}}_{}$, by the Representer theorem~\cite{smola1998learning}, it is well-known that for a kernel $K:\mathcal{X}\times \mathcal{X}\rightarrow \reals{}_+$, the decision function $f$ for the SVM problem $P1$ will be of the form: \begin{equation} f(.) = \sum_{\mathbf{x}\in\pseq{}\cup\mathcal{X}^{\small{-}}_{}}\alpha_{\mathbf{x}} K(., \mathbf{x}), \label{eq:ksvm} \end{equation} where $\alpha_{\mathbf{x}}$ are the parameters of the non-linear decision boundary. However, from an implementation perspective, such direct kernelization may be problematic, as we would need to store the training set to construct the kernel.
We avoid this issue by restricting our formulation to homogeneous kernels~\cite{vedaldi2012efficient}, as such kernels have explicit linear feature map embeddings on which a linear SVM can be trained directly. This leads to exactly the same formulation as in~\eqref{eq:mil}, except that our features $\mathbf{x}$ are now obtained via a homogeneous kernel map. In the sequel, we call such a descriptor a~\emph{nonlinear SVM pooling} (NSVMP) descriptor. \subsection{Temporally-Ordered Extensions} \label{osvmp} In the formulations proposed above, there are no explicit constraints enforcing the temporal order of the features in the SVMP descriptor. This is because, in the above formulations, we assume that the features themselves already capture the temporal order; for example, the temporal stream in a two-stream model is trained on a densely-sampled stack of consecutive optical flow frames. However, motivated by several recent works~\cite{bilen2016dynamic,grp,fernando2015modeling,dynamic_flow}, we extend Equation~\eqref{eq:mil} by including ordering constraints: \begin{equation} w^T\pfeat{i}{j} + \delta \leq w^T\pfeat{i}{k}, \quad \forall j<k; \pfeat{i}{j}, \pfeat{i}{k} \in \hpseq{i} \end{equation} where we reuse the notation defined above and define $\delta>0$ as a margin enforcing the order. In the sequel, we use this temporally-ordered variant of SVMP for our video representation. Note that with the ordering constraints enforced, it is difficult to use the enumerative or alternating schemes for finding the SVMP descriptors; instead, we use Alg.~\ref{alg3}, replacing the SVM solver with a custom solver~\cite{manopt}.
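For concreteness, a minimal sketch of the parameter-tuning scheme in Alg.~\ref{alg3} is given below, built on the illustrative \texttt{svm\_pool} helper from Section~\ref{sec:problem_setup}; the values of $\epsilon$, $\lambda$, and the iteration cap are our own assumptions rather than prescribed settings:
\begin{verbatim}
def svmp_descriptor(pos_feats, neg_feats, eta,
                    eps=1e-4, lam=2.0, max_iter=50):
    # Alg. 3 (sketch): grow C_1 geometrically until at least a
    # fraction eta of the positive bag is classified positive,
    # then return the hyperplane [w, b] as the descriptor.
    C = eps
    for _ in range(max_iter):
        C *= lam
        wb = svm_pool(pos_feats, neg_feats, C=C)
        w, b = wb[:-1], wb[-1]
        if ((pos_feats @ w + b) >= 0).mean() >= eta:
            break
    return wb
\end{verbatim}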
\section{Related Work} \label{sec:related_work} The problem of video representation learning has received significant interest over the past decades. We therefore restrict our literature review to some of the more recent methods, and refer the interested reader to excellent surveys on the topic, such as~\cite{herath2017going,poppe2010survey,aggarwal2011human}. \subsection{Video Representation Using Shallow Features} Traditional methods for video action recognition typically use hand-crafted local features, such as dense trajectories, HOG, and HOF~\cite{wang2011action}, which model videos by combining dense sampling with feature tracking.
However, camera motion, which is inherent to many videos, usually results in a non-static background and hurts the quality of the features. To tackle this problem, Wang et al.~\cite{wang2013action} improved the performance of dense trajectories by removing background trajectories and warping the optical flow. On top of improved dense trajectories, high-level representations have been designed by pooling appearance and flow features along these trajectories, and have been found useful for capturing human actions. For example, Sadanand et al.~\cite{sadanand2012action} propose Action Bank, which represents actions using a bank of individual action detectors sampled in semantic and viewpoint space. Similarly, the bag-of-words model~\cite{sivic2003video}, Fisher vectors~\cite{perronnin2010improving}, and VLAD~\cite{jegou2012aggregating} are mid-level representations built on such hand-crafted features, with the aim of summarizing local descriptors into a single vector representation. Peng et al.~\cite{peng2016bag} present a detailed survey of these ideas. In comparison to these classic representation learning schemes, our proposed setup is grounded on discriminatively separating useful data from the rest. \subsection{Video Representation Using Deep Features} With the resurgence of deep learning methods for object recognition~\cite{krizhevsky2012imagenet}, there have been several attempts to adapt these models to action recognition. Recent practice is to feed the video data, including RGB frames, optical flow subsequences, and 3D skeleton data, into a deep (recurrent) network trained in a supervised manner. Successful methods following this approach are the two-stream models and their extensions~\cite{feichtenhofer2017spatiotemporal,feichtenhofer2017temporal,hayat2015deep,kim2017interpretable,simonyan2014two}. As is apparent from the name, such models have two streams: a spatial stream that captures appearance information from RGB frames, and a temporal stream that learns motion dynamics from stacked optical flow; an early or late fusion strategy is then applied to predict the final label. Although the architectures of these networks differ, the core idea is to split the video into short clips and embed them into a semantic feature space, and then to recognize the actions either by aggregating the individual per-clip features using some statistic (such as max or average) or by directly training a CNN-based end-to-end classifier~\cite{feichtenhofer2017spatiotemporal}. While the latter schemes are appealing, they usually need to store the feature maps from all the frames in memory, which may be prohibitive for longer sequences. Moreover, this kind of training strategy may fail to capture the long-term dynamics in the video sequence. To tackle this problem, recurrent models~\cite{baccouche2011sequential,donahue2015long,du2015hierarchical,li2016action,srivastava2015unsupervised,yue2015beyond} have been proposed, which use long short-term memory (LSTM) or gated recurrent units (GRU) to embed the temporal relationships among frames via logistic gates and hidden states. However, recurrent neural networks are usually hard to train~\cite{pascanu2013difficulty} due to the exploding and vanishing gradient problems. The Temporal Segment Network (TSN)~\cite{Wang2016} and the Temporal Relation Network (TRN)~\cite{zhou2017temporalrelation} provide alternative solutions that are easier to train.
Another promising solution is to use 3D convolutional filters~\cite{carreira2017quo,tran2015learning,wang2018non,zhou2018mict,wang2017appearance}. Compared to 2D filters, 3D filters can capture both the spatial and the temporal video structure. However, feeding an entire video into the CNN may be computationally prohibitive. Further, 3D kernels bring more parameters into the architecture and, as a result, may demand large and clean datasets for effective training~\cite{carreira2017quo}. While an effective CNN architecture that can extract useful action-related features is essential for progress in video understanding, we focus on the other aspect of the problem -- that is, given a CNN architecture, how well can we summarize the features it produces to improve action recognition. To this end, our efforts in this paper can be seen as complementary to these recent approaches. \subsection{Video Representation Using Pooling Schemes} Typically, pooling schemes consolidate input data into compact representations based on some statistic that summarizes the useful content. For example, average and max pooling capture zeroth- and first-order statistics. There are also works that use higher-order pooling, such as Cherian and Gould~\cite{anoop_secondorder} using second-order statistics, Cherian et al.~\cite{cherian2017higher} using third-order statistics, and Girdhar et al.~\cite{Girdhar_17a_ActionVLAD} proposing a video variant of the VLAD encoding, which is approximately a mixture model. A recent trend in pooling schemes, which we also follow in this paper, is to use the parameters of a data-modeling function as the representation. For example, rank pooling~\cite{fernando2015modeling} uses the parameters of a support vector regressor as the video representation. In Bilen et al.~\cite{bilen2016dynamic}, rank pooling is extended towards an early frame-level fusion, dubbed~\emph{dynamic images}; Wang et al.~\cite{dynamic_flow} extend this idea to optical flow, which they call the \emph{dynamic flow} representation. Cherian et al.~\cite{grp} generalized rank pooling to include multiple hyperplanes as a subspace, enabling a richer characterization of the spatio-temporal details of the video. This idea was further extended to non-linear feature representations via kernelized rank pooling in \cite{cherian2018non}. However, while most of these methods optimize a rank-SVM-based regression formulation, our motivation and formulation are different: we use the parameters of a binary SVM as the video-level descriptor, trained to classify the frame-level features against a pre-selected (but arbitrary) bag of negative features. Related works are the Exemplar-SVMs~\cite{malisiewicz2011ensemble,willems2009exemplar,zepeda2015exemplar}, which learn feature filters per data sample and then use these filters for feature extraction. In contrast, we use the decision boundary of the SVM as the video-level descriptor, separating as many discriminative features as possible in each sequence while implicitly encoding their temporal order. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.7\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/pipeline.eps} \end{center} \caption{Illustration of our SVM Pooling scheme.
(i) Extraction of frames from videos, (ii) conversion of frames $f$ into features $x$, (iii) learning of the decision boundary $w$ from the features $x$, and (iv) use of $w$ as the video descriptor.} \label{fig:2} \end{figure*} \subsection{Multiple Instance Learning} An important component of our algorithm is the MIL scheme, which is a popular data selection technique~\cite{cinbis2017weakly,li2015multiple,wu2015deep,yi2016human,zhang2015self}. In the context of video representation, schemes similar in motivation have been suggested before. For example, Satkin and Hebert~\cite{satkin2010modeling} explore the effect of temporally cropping videos to the regions of actions, but assume these regions are contiguous. Nowozin et al.~\cite{nowozin2007discriminative} represent videos as sequences of discretized spatiotemporal sets and reduce the recognition task to a max-gain sequence-finding problem on these sets using an LPBoost classifier. Similar to ours, Li et al.~\cite{li2013dynamic} propose an MIL setup for complex activity recognition using a dynamic pooling operator -- a binary vector that selects input frames to be part of an action -- which is learned by reducing the MIL problem to a set of linear programs. Chen and Nevatia~\cite{sun2014discover} propose a latent-variable-based model to explicitly localize discriminative video segments where events take place. Vahdat et al.~\cite{vahdat2013compositional} present a compositional model for video event detection, formulated using a multiple-kernel-learning-based latent SVM. While all these schemes share motivations similar to ours, we cast our MIL problem in the setting of normalized set kernels~\cite{gartner2002multi} and reduce the formulation to a standard SVM setup that can be solved rapidly. In the $\propto$-SVMs of Yu et al.~\cite{lai2014video,yu2013propto}, the positive bags are assumed to have a fixed fraction of positives, a criterion we also assume in our framework. However, the negative bag selection, the optimization setup, and our goals are different; specifically, our goal is to learn a video representation for any subsequent task, including recognition, anticipation, and detection, while the framework in~\cite{lai2014video} is designed for event detection. Moreover, we generate the negative bag using CNN features obtained by feeding random noise images to the network. The current paper is an extension of our published conference paper~\cite{SVMP} and differs in the following ways. Apart from a more elaborate literature survey, we also provide extensions of our pooling scheme, specifically by incorporating temporal-ordering constraints. We provide detailed derivations of our end-to-end pooling variant. We further present elaborate experiments on five more datasets in addition to the three datasets used in~\cite{SVMP}, including a large-scale action recognition experiment using the recently proposed Kinetics-600 dataset.
\section{Introduction} With the swift progress in computer vision, surveillance systems using static cameras have become promising technologies for advanced tasks such as behavior analysis \mbox{\cite{Zhang2020}}, object segmentation \mbox{\cite{Ammar2020}} and motion analysis \cite{Park2018}, \cite{Bilge2017}. Among their various functionalities, background modeling is a pivotal component for properly understanding scene dynamics, enabling the extraction of attributes of interest. An ideal background is a scene containing only stationary objects and components that are not of interest to the system (e.g., streets, houses, trees). Thus, by comparing visual inputs with the background, the desired objects, called foregrounds (e.g., cars, pedestrians), can be localized for further analysis. As real-life scenarios involve various dynamics, such as illumination changes, scene motion, or bootstrapping, many approaches to constructing the background have been proposed \cite{Bouwmans2014}. One of the most prominent approaches to the background modeling problem employs pixel-based statistical frameworks such as Gaussian Mixture Models (GMM) \cite{Stauffer1999}, \cite{Zivkovic2004}, \cite{Martins2018}, \cite{Ha2020}. These methods are based on the hypothesis that background intensities appear predominantly throughout a scene, and they construct explicit mathematical structures to exploit this dominance. In addition, an important property of such approaches is their adaptability to the changing conditions of real-world scenarios, even under illumination changes (e.g., moving clouds), view noise (e.g., rain or snow drops) and implicit motions (e.g., a moving body of water). However, the methods' correctness does not generalize when the hypothesis fails, as under the appearance of stopped objects or high degrees of view noise (e.g., camera shaking, abrupt view changes), producing corrupted backgrounds that often lead to poor estimations of the foreground. Furthermore, these statistical schemes still follow a sequential processing paradigm that under-utilizes modern parallel processing units in the presence of big data. On the other hand, riding the wave of increasingly specialized processing units for large-scale data, Deep Neural Networks (DNNs) have emerged as a prominent pattern matching and visual prediction mechanism. Deep learning approaches to the motion detection problem are rapidly demonstrating their effectiveness, not only in utilizing the tremendous sets of processing cores of modern parallel computing technologies, but also in producing highly accurate predictions learned from data. However, typical DNN architectures that can actually produce highly accurate results are computationally expensive, especially those providing solutions to the problems of background modeling and foreground detection. Furthermore, the DNNs in the literature suffer from two primary shortcomings: \textit{A requirement for a huge-scale dataset of labeled images}: DNN-based models for motion detection exploit weak statistical regularities between input sequences of images and annotated background scenes. Thus, to generalize to all practical real-life scenarios, a prohibitively large dataset covering those scenarios and effects is needed.
With few training labels available in video sequences for building generalized background models \mbox{\cite{Kalsotra2019}}, there are currently no universal data-driven experiments to ensure that the scenes' true properties are appropriately represented. \textit{A prevailing failure on contextual variation}: Recently, foreground segmentation has been considered from the perspective of binary classification schemes. DNN-based approaches have proposed minimizing a sum-of-squares or cross-entropy error function to reflect the true objective of the motion analysis problem as closely as possible. In this approach, models are usually trained to represent the semantic properties of the training sets, when the actual aim is to generalize well to the specific target video sequences of experimental datasets. This conditional average will be inadequate for the various unseen contextual semantics and dynamics that might occur in the real world \mbox{\cite{Bouwmans2019, Geoffrey2015}}. In other words, DNN-based methods usually perform well on experimental datasets for background modeling and change detection but can still fail on unseen situations in real-world scenarios. Nevertheless, the DNN-based approach is particularly promising, as the literature has rapidly demonstrated such networks' ability to approximate functions up to arbitrary accuracy within highly parallelizable architectures. In other words, we can exploit this parallelizable capability to approximate the mechanism behind the optimization of a GMM, in a way that accelerates the construction of statistical model estimates of our data using modern parallel computing technologies. Hence, it becomes possible to efficiently exploit the characteristics of GMM-based background models, which are clear and consistent with their mathematical framework, for functional extension, i.e., tackling stopped objects and high-degree view shifts via DNNs' common data-driven effectiveness. In this article, to address the issues of DNNs while also utilizing their benefits, we incorporate the mathematics of statistical GMM modeling into our processes, and introduce a novel, lightweight, dual framework of two convolutional neural networks (CNNs): (1) the \textbf{C}onvolutional \textbf{D}ensity \textbf{N}etwork of \textbf{G}aussian \textbf{M}ixtures (\textbf{CDN-GM}) for the task of generalistically modeling backgrounds; and (2) the \textbf{M}otion \textbf{E}stimation with \textbf{D}ifferencing \textbf{A}pproximation via \textbf{L}earning on a convolutional network (\textbf{MEDAL-net}), for context-driven foreground extraction. Specifically, our contributions are three-fold, summarized as follows: Firstly, by leveraging existing technologies and taking inspiration from Bishop \cite{Bishop1995}, we propose CDN-GM, a feed-forward, highly parallelizable CNN representing a conditional probability density function that models the temporal history of each pixel location, in the first pipeline of the proposed framework. In this architecture, conditioned on pixel-wise vectors of intensity values across a time period, the network approximates a Gaussian-mixture statistical mapping function to efficiently model their underlying multimodal distributions. Accordingly, at each pixel, the mixture is characterized by the weighted combination of its Gaussian components, where each component captures and highlights a context-relevant range of pixel values via its mean and variance.
Thus, from pixel-level statistical models of the data as Gaussian mixtures, backgrounds are extracted from the most informative components, resulting in a compressed, lightweight, and efficient architecture. Secondly, with the goal of modeling the underlying generator of the data, we propose a loss function in the manner of unsupervised learning. This loss function serves to direct the proposed CDN-GM's architectural parameters towards approximating the mathematical structure behind GMM-driven modeling of the data with expectation maximization. Thus, the resulting inferences consist of mixtures of Gaussian components describing the data, and the most likely background description of the observed data can be produced when the trained network is subsequently presented with new input values. In conjunction with CDN-GM, the proposed background modeling architecture not only achieves a higher degree of interpretability compared to the idea of estimating an implicit hidden function in previous neural network methods, but also gains a better capability of adaptation under contextual dynamics through statistical learning, as it is able to utilize a virtually inexhaustible amount of data while incorporating expectation maximization into the neural-network parameters. Thirdly, in the latter pipeline of the proposed framework, we design a compact convolutional auto-encoder for context-driven foreground extraction called MEDAL-net, which simulates a context-driven difference mapping between input frames and their corresponding background scenes. This is greatly encouraged because, even though real-life scenarios involve various degrees of contextual variation that no existing mathematical framework can completely capture, we can construct consistent GMM-driven background models of those variations with CDN-GM to provide semantic understanding of the scene. Thus, we are able to make good use of information from image features produced by the first, background modeling module, and even from features seemingly corruptive to motion extraction (e.g., stopped objects), in formulating foreground extraction from raw inputs, resulting in a very lightweight and efficient structure with high accuracy. The network is trained in a supervised manner in such a way that it maintains good generalization to various views, and even to unseen situations with similar scene dynamics. The organization of this paper is as follows: Section \ref{section:related-works} summarizes recent approaches in background initialization and foreground segmentation. The proposed method is described in Section \ref{section:proposed-method}. Experimental evaluations are discussed in Section \ref{section:experiments-and-discussion}. Finally, our conclusion and motivations for future work are presented in Section \ref{section:conclusion}. \section{Related Works} \label{section:related-works} The new era of video analysis has witnessed a proliferation of methods that concentrate on background modeling and foreground detection. Prior studies in recent decades have been surveyed from various feature perspectives \cite{Bouwmans2014a, Bouwmans2019, Garcia-Garcia2020}. Among published methods that meet the requirements of robustness, adaptation to scene dynamics, memory efficiency, and real-time processing, two promising approaches to background subtraction are statistical methods and neural-network-based models.
Statistical studies aim to characterize the history of pixel intensities with a probabilistic model. On the other hand, neural-network-driven approaches implicitly estimate a mapping between an input sequence of observed scenes and hand-labeled background/foreground images based on non-linear regularities. In statistical approaches, the pixels' visual features are modeled with an explainable probabilistic foundation at either the pixel level or the region level, from temporal and spatial resolution perspectives. In recent decades, a variety of statistical models have been proposed to resolve the problem of background initialization. Stauffer and Grimson \cite{Stauffer1999} proposed a pioneering work that handled gradual changes in outdoor scenes using a pixel-level GMM with a sequential K-means distribution matching algorithm. To enhance the foreground/background discrimination ability with respect to scene dynamics, Pulgarin-Giraldo \textit{et al.} \cite{PulgarinGiraldo2017} improved the GMM with a contextual sensitivity that uses a Least Mean Squares formulation to update the parameter estimation framework. Validating the robustness of background modeling under a high degree of dynamic scene changes, Ha \textit{et al.} \cite{Synh2018} proposed a GMM with a high-variation removal module based on entropy estimation. To enhance performance, Lu \textit{et al.} \cite{Lu2018} applied a median filter to the input frame to reduce its spatial dimension before initializing its background. To address the sequential bottleneck of statistical methods in pixel-wise learning, an unsupervised, tensor-driven GMM framework was proposed by Ha \textit{et al.} \cite{Ha2020} with a balanced trade-off between satisfactory foreground masks and exceptional processing speed; however, the approach has a number of parameters that require extensive manual tuning. In addition to GMMs, Cauchy Mixture Models (CMM) were exploited to detect foreground objects by eliminating noise and capturing periodic perturbations under varying lighting conditions and dynamic scenarios \cite{Sowmiya2019}. Overall, statistical models were developed with explicit probabilistic hypotheses to sequentially model the correlation of historical observations at each image point or pixel block, combined with a global thresholding approach to extract the foreground. This global thresholding technique for foreground detection usually leads to a compromise between segregating slow-moving objects and rapidly adapting to sudden scene changes within short-term measurement. This trade-off often damages image-background subtraction in multi-contextual scenarios, which is a sensitive concern in motion estimation. Hence, for foreground segmentation from background modeling, it is critical to improve frame differencing against the constructed background scene with a better approximation mechanism, and to utilize parallel technologies. Recently, there have also been many attempts to apply DNNs to the background subtraction and background modeling problems with supervised learning. Inspired by LeNet-5 \cite{Lecun1998}, used for handwritten digit recognition, one of the earliest efforts to subtract the background from the input image frame was made by Braham \textit{et al.} \cite{Braham2016}. This work explores the potential of visual features learned by hidden layers for foreground-background pixel classification.
Similarly, Wang \textit{et al.} \cite{Wang2017} proposed a deep CNN trained on only a small subset of frames, since there is large redundancy in videos taken by surveillance systems. The model requires a hand-labeled segmentation of moving regions as an indicator in the observed scenes. Lim \textit{et al.} \cite{Lim2017} constructed an encoder-decoder architecture with the encoder inherited from VGG-16 \cite{Simonyan2015}. The proposed encoder-decoder network takes a video frame, along with its corresponding grayscale background and its previous frame, as inputs to compute their latent representations, and deconvolves these latent features into a foreground binary map. Another method is DeepBS \cite{Babaee2018}, proposed by Babaee \textit{et al.}, which computes the background model using both SuBSENSE \cite{Charles2015} and the Flux Tensor method \cite{Wang2014a}. The authors extract the foreground mask by feeding a small patch of the current video frame and its corresponding background into the CNN, and the mask is later post-processed to give the result. Nguyen \textit{et al.} proposed a motion feature network \mbox{\cite{Nguyen2019}} to exploit motion patterns by encoding motion features from small samples of images. The experimental results showed that the network obtained promising results and performed well on unseen data sequences. Another method, FgSegNet \cite{Lim2018}, used a triplet convolutional autoencoder to learn multi-scale hidden representations for motion mask extraction of the observed scenes. Recently, there is also work from Chen \textit{et al.} \cite{Chen2019} which aims to exploit high-level spatial-temporal features with a deep pixel-wise attention mechanism and convolutional long short-term memory (ConvLSTM). \begin{figure*}[!t] \vspace{-3mm} \centering \subfloat{\includegraphics[width=0.9\textwidth]{Fig-01-CDN-FDN-overview.png}} \caption{The overview of the proposed method for background modeling and foreground detection} \label{figure:01} \vspace{-3mm} \end{figure*} All things considered, most neural-network-based methods benefit from a significant number of weak statistical regularities in associative mapping, where the aim is to learn a transformation from an input batch of consecutive frames to the target hand-labeled foreground or background. There is little evidence that this supervised learning approach lets DNNs capture true properties of the observed scenes beyond the sampling peculiarities of the training datasets, or generalize to the varying degrees of contextual dynamics in the real world. Furthermore, recent CNN methods do not ensure real-time performance, which is a crucial requirement for any practical system. However, the ability of CNNs to exploit the parallelism of modern hardware very efficiently, and their effective use of data for high-accuracy prediction, remain appealing to investigate. Therefore, in this work, we propose a scheme of two compact CNNs with a couple of strategies. First, grounded on a probabilistic model, the former network models a conditional density function by exploiting temporal information to construct background scenes. Second, the latter CNN-based encoder-decoder approximates frame-background differencing to extract moving regions. \section{The Proposed Method} \label{section:proposed-method} As shown in the overview in Fig.
\ref{figure:01}, the primary goal of our proposed framework is to address the previously listed problems of DNNs and statistical methods, by adaptively acquiring the underlying properties of a sequence of images to construct the corresponding background scenes with CDN-GM (the left subfigure), and extracting foregrounds of interest through data-driven learning with MEDAL-net (the lower right subfigure). Following pixel-wise temporal data reformation (the upper right subfigure), a batch of video frames is decomposed into a sequence of pixel histories from which CDN-GM estimates each pixel's true background intensity. After the background image is reconstructed from the output intensity sequence of CDN-GM, the input frame is concatenated with the background along the channel dimension to estimate the final segmentation map. This concatenation before the foreground extraction step provides the information needed to engender a context-driven difference mapping within MEDAL-net, rather than memorizing a single-valued mapping between input frames and labeled foregrounds. The difference-mapping idea effectively limits MEDAL-net's parameter search space, while enabling our proposed foreground extraction network to be more robust against various real-world motion dynamics. \subsection{Convolution Density Network of Gaussian Mixture} \label{subsection:convolution-density-network-of-gaussian-mixture} \begin{figure*}[!b] \vspace{-3mm} \centering \subfloat{\includegraphics[width=0.9\textwidth]{Fig-02-CDN-architecture.png}} \caption{The proposed architecture of Convolution Density Network of Gaussian Mixture Model} \label{figure:02} \vspace{-3mm} \end{figure*} Following Zivkovic's study \cite{Zivkovic2004}, let $\boldsymbol{\chi}_c^T = \{ {{\bf{x}}_1},{{\bf{x}}_2},\ldots,{{\bf{x}}_T}\,|\,{{\bf{x}}_i} \in {[0,255]^c}\}$ be the time series of the $T$ most recently observed color signals of a pixel, where $c$ is the dimension of the vector ${{\bf{x}}_i}$ in the color space. The distribution of pixel intensities ${{\bf{x}}_i}$ can then be modeled by a linear combination of $K$ probabilistic components $\boldsymbol\theta_k$ and their corresponding conditional probability density functions $P({{{\bf{x}}_i}}|{\boldsymbol\theta_k})$. The marginal probability $P({{{\bf{x}}_i}})$ of the mixture is defined as: \begin{equation} \label{eq:marginal-probability} P({{\bf{x}}}) = \sum\limits_{k = 1}^K {P({\boldsymbol\theta_k})} P({{\bf{x}}}|{\boldsymbol\theta_k}) = \sum\limits_{k = 1}^K {{\pi _k}} P({{\bf{x}}}|{\boldsymbol\theta_k}) \end{equation} \noindent where the ${{\pi _k}}$ are non-negative mixing coefficients summing to unity, each representing the likelihood of occurrence of the probabilistic component ${\boldsymbol\theta_k}$. Because of the multimodality of observed scenes, the intensity of target pixels is assumed to be normally distributed within a finite mixture. In the RGB space of the analyzed videos, the color channels of $\textbf{x}_i$ are assumed to be independently distributed and describable with a common variance $\sigma _k$, which avoids costly matrix inversions, as indicated in \cite{Stauffer1999}.
Hence, the multivariate Gaussian distribution can be re-formulated as: \begin{equation} \label{eq:simple-multivariate-gaussian-mixture} \begin{aligned} P({{\bf{x}}}|{\boldsymbol\theta_k}) &= \mathcal{N}({{\bf{x}}}|{\boldsymbol\mu _k},{\sigma_k}) \\ &= \frac{1}{{\sqrt {{{(2\pi )}^c}{\sigma_k^c}} }}\exp \left( { - \frac{\parallel {{\bf{x}}} - {{\boldsymbol{\mu }}_k}{\parallel ^2}}{2\sigma_k}} \right) \end{aligned} \end{equation} \noindent where ${\boldsymbol\mu _k}$ is the estimated mean and $\sigma _k$ is the estimated universal covariance of the examined color channels in the $k^{th}$ Gaussian component. From this hypothesis, we propose in this work a convolutional neural network architecture, called Convolutional Density Network of Gaussian Mixtures (CDN-GM), which employs a set of non-linear transformations ${f_\theta }\left( \cdot \right)$ to formulate a conditional GMM density function of $\bf{x}$ given a set of randomly selected, vectorized data points $\boldsymbol{\chi}^T_c$: \begin{equation} \label{eq:formulate-equation} {\bf{y}}_T = {f_\theta } (\boldsymbol{\chi}^T_c) \sim P({{\bf{x}}}\,|\,\boldsymbol{\chi}^T_c) \end{equation} The ability of multilayer neural networks trained with an optimization algorithm to learn complex, high-dimensional, non-linear mappings from large collections of examples makes them well suited to pattern recognition, as they gather relevant information from the input while eliminating irrelevant variability. With respect to prediction problems, however, the conditional average represents only a very limited statistic. In applicable contexts, it is considerably more beneficial to obtain a complete description of the probability distribution of the target data. In this work, we incorporate the mixture density model into a convolutional neural network, instead of a multi-layer perceptron as done by Bishop in the original work on mixture density models \cite{Bishop1995}. In the proposed scheme, the network itself learns to act as a feature extractor that formulates statistical inferences on temporal series of intensity values. First, regarding recently proposed CNN methods, the local connectivity of convolution layers encourages a CNN to learn common visual patterns in a local region of images. Intuitively, a background image contains the intensities presented most frequently in the sequence of observed scenes. Hence, in CDN-GM, we take advantage of this mechanism to extract the intensity value most likely to arise in the background image by considering the temporal arrangement. Second, the memory required to store many weights may rule out certain hardware implementations. In convolutional layers, shift invariance is automatically obtained by forcing the replication of weight configurations across space. Hence, the weight-sharing scheme in the proposed CNN reduces the number of parameters, making CDN-GM lighter and allowing the parallel processing of multiple pixel-wise analyses within a batch of video frames. The architecture of CDN-GM contains seven learned layers, not counting the input -- two depthwise convolutional, two convolutional, and three dense layers. Our network is summarized in Fig. \ref{figure:02}. The input of the proposed network is the time series of color intensities at each pixel, which is analyzed with non-complete connection schemes in four convolution layers along the temporal dimension.
Finally, the feature map of the last convolution layer is connected to three differently configured dense layers, forming the three-fold output of the network that presents the kernel parameters of the Gaussian Mixture Model. The main goal of CDN-GM is to construct a CNN architecture that represents a multivariate mapping in the form of a Gaussian Mixture Model through offline learning. With the simulated probabilistic function, we aim to model the description of the most likely background scenes from the actually observed data. In other words, the regularities captured by the proposed CNN should cover a generalized representation of the intensity series of a set of consecutive frames at pixel level. To this end, instead of using a separate GMM for each pixel-wise statistical learning task, we use a single GMM to formulate the temporal history of all pixels in the whole image. Accordingly, the CDN-GM architecture is extended through a spatial extension of the temporal data at image points, with the scheme defined in Table \ref{table:CDN-architecture}. \begin{table}[!t] \centering \caption{Architecture of Convolutional Density Network} \label{table:CDN-architecture} \begin{tabular}{lll} \toprule \multicolumn{1}{c}{\textbf{Type / Stride}} & \multicolumn{1}{c}{\textbf{Filter Shape}} & \multicolumn{1}{c}{\textbf{Output Size}} \\ \hline Input & - & $(H*W) \times 1 \times T \times 3 $ \\ \hline Conv dw / s7 & $1 \times 7 \times 1 $ dw & $(H*W) \times 1 \times 35 \times 3 $ \\ \hline Conv / s1 & $ 1 \times 1 \times 3 \times 7 $ & $(H*W) \times 1 \times 35 \times 7 $ \\ \hline Conv dw / s7 & $ 1 \times 7 \times 7 $ dw & $(H*W) \times 1 \times 5 \times 7 $ \\ \hline Conv / s1 & $ 1 \times 1 \times 7 \times 7 $ & $(H*W) \times 1 \times 5 \times 7 $ \\ \hline Dense / s1 & $K \times C $ & $(H*W) \times K \times d $ \\ \hline Dense / s1 / Softmax & $K$ & $(H*W) \times K $ \\ \hline Dense / s1 & $K$ & $(H*W) \times K $ \\ \bottomrule \end{tabular} \vspace{-3mm} \end{table} The network output ${\bf{y}_T}$, whose dimension is $\left( {c + 2} \right) \times K$, is partitioned into three portions ${\bf{y}}_\mu \left( \boldsymbol{\chi}^T_c \right)$, ${\bf{y}}_\sigma \left( \boldsymbol{\chi}^T_c \right)$, and ${\bf{y}}_\pi \left( \boldsymbol{\chi}^T_c \right)$, corresponding to the latent variables of the GMM: \begin{equation} \label{eq:output-formulate} \begin{aligned} {\bf{y}_T} &= [{\bf{y}}_\mu \left( \boldsymbol{\chi}^T_c \right), {\bf{y}}_\sigma \left( \boldsymbol{\chi}^T_c \right), {\bf{y}}_\pi \left( \boldsymbol{\chi}^T_c \right)] \\ &= [{\bf{y}}_\mu ^1, \ldots ,{\bf{y}}_\mu ^K,{\bf{y}}_\sigma ^1, \ldots ,{\bf{y}}_\sigma ^K,{\bf{y}}_\pi ^1, \ldots ,{\bf{y}}_\pi ^K] \end{aligned} \end{equation} With our goal of formulating the GMM, we impose a different restriction on each of the three-fold outputs of the network: \begin{itemize} \item First, as the mixing coefficients $\pi_k$ indicate the proportion of data accounted for by mixture component $k$, they must be non-negative and sum to unity, forming a valid discrete probability distribution.
To enforce this constraint, we activate the corresponding network outputs with a softmax function: \begin{equation} \label{eq:softmax-function} \pi_k(\boldsymbol{\chi}^T_c) = \frac{{\exp ({\bf{y}}_\pi ^k)}}{{\sum\nolimits_{l = 1}^K {\exp ({\bf{y}}_\pi ^l)} }} \end{equation} \item Second, in realistic scenarios, the measured intensity of the observed image signals may fluctuate due to a variety of factors, including illumination transformations, dynamic contexts and bootstrapping. In order to conserve the estimated background, we restrict the variance of each component to the range $[\bar \sigma_{min}, \bar \sigma_{max}]$, so that no component spreads over the entire color space or collapses onto one single color cluster: \begin{equation} \label{eq:variance-constraint} \sigma _k (\boldsymbol{\chi}^T_c) = \frac{\bar\sigma_{min} \times (1-\hat\sigma_k) + \bar\sigma_{max} \times \hat\sigma_k }{255} \end{equation} \noindent where $\sigma _k (\boldsymbol{\chi}^T_c)$ is normalized to the range $[0,1]$ by the maximum color intensity value, 255; and $\hat\sigma_{k}$ is the normalized variance obtained through a hard-sigmoid activation of the output neurons ${\bf{y}}_\sigma $ that correspond to the variances: \begin{equation} \label{eq:hard-sigmoid-sigma} {{\hat \sigma }_k}(\boldsymbol{\chi}^T_c) = \max \left[ {0,\min \left( {1,\frac{{{\bf{y}}_\sigma ^k + 1}}{2}} \right)} \right] \end{equation} \noindent In this work, we adopt the hard-sigmoid function because of its piecewise-linear property and its correspondence to a bounded form of the rectified linear unit (ReLU). Furthermore, it was shown to be more efficient in both software and specialized hardware implementations by Courbariaux \textit{et al.} \cite{Courbariaux2015}. \item Third, the mean of the probabilistic mixture is considered on a normalized RGB color space, where the intensity values lie in the range $[0,1]$, so that they can be matched against the correspondingly normalized input. Similar to the normalized variance $\hat\sigma_{k}$, the mixture mean is standardized from the corresponding network outputs with a hard-sigmoid function: \begin{equation} \label{eq:hard-sigmoid-mu} \mu_k (\boldsymbol{\chi}^T_c) = \max \left[ {0,\min \left( {1,\frac{{{\bf{y}}_\mu ^k + 1}}{2}} \right)} \right] \end{equation} \end{itemize} From the proposed CNN, we extract the periodical background image for each block of pixel-wise time-series data over a period of $T$. This is done by selecting the means whose corresponding distributions are the most high-weighted and low-spread. To gauge the importance of a component in the mixture, we use the ratio ${\pi _{k'}({\boldsymbol{\chi }}_c^T)}/{\sigma _{k'}({\boldsymbol{\chi }}_c^T)}$. Weighting the components of the mixture at each pixel in this way values high-weighted, low-spread distributions, thereby spotlighting the most significant distribution contributing to the construction of the background:
\begin{equation} \label{equation:13} BG(\boldsymbol{\chi}^T_c) = \max_{k \in \left[ {1,K} \right]} \left( \mu_{k} \cdot \hat{BG}_{k,T} \right) \end{equation} where the background mapping at each pixel is defined as: \begin{equation} \label{equation:14} {\hat{BG}}_{k,T}({\boldsymbol{\chi}}^T_c) = \begin{cases} 1, & \text{if } \mathop{\arg\max}\limits_{k'} \left[ \pi_{k'}({\boldsymbol{\chi}}^T_c) \, / \, \sigma_{k'}({\boldsymbol{\chi}}^T_c) \right] = k \\ 0, & \text{otherwise} \end{cases} \end{equation} \subsection{The unsupervised loss function of CDN-GM} \label{subsection:loss-function} In practice, particularly in each real-life scenario, the background model must capture multiple degrees of dynamics, made more challenging by the fact that scene dynamics may also change gradually under external effects (e.g. lighting deviations). These effects convey the latest information regarding contextual deviations that may constitute new background predictions. Therefore, the modeling of backgrounds must not only take into account the various degrees of dynamics across the imaging pixels of the data source, but must also adaptively update its predictions with respect to semantic changes. Equivalently, in order to approximate a statistical mapping function for background modeling, the proposed neural network has to be capable of approximating a conditional probability density function, thereby estimating a multi-modal distribution conditioned on its time-wise latest raw imaging inputs. The criteria for the neural statistical function can be summarized as follows: \begin{itemize} \item As a metric for estimating distributions, the input data sequence must not be weighted by its temporal order. \item Taking adaptiveness into account, the neural probabilistic density function must continuously interpolate predictions in evolving scenes upon reception of new data. \item The neural network function has to be generalizable, such that its model parameters are not dependent on specific learning datasets. \end{itemize} Hence, satisfying the prescribed criteria, we propose a loss function capable of directing the model's parameters towards adaptively capturing the conditional distribution of the data inputs, thereby approximating a statistical mapping function in a technologically parallelizable form. At every single pixel, the proposed CNN estimates the probabilistic density function of the provided data using its GMM parameters.
Specifically, given the set $\boldsymbol{\chi}^T_c$ of randomly selected, vectorized data points, it is possible to retrieve the continuous conditional distribution of the data target $\bf{x}$ with the following functions: \begin{equation} \label{eq:sss} P({{\bf{x}}}) = \sum\limits_{k = 1}^K {{\pi _k}}({\boldsymbol{\chi}^T_c}) \cdot \mathcal{N}({{\bf{x}}}|{\boldsymbol\mu _k},{\sigma_k}) \end{equation} \noindent where the general disposition of this distribution is approximated by a finite mixture of Gaussians, whose values depend on our learnable neural variables; consistently with Eq. (\ref{eq:simple-multivariate-gaussian-mixture}), \begin{equation} \label{eq:neural-gaussian} \mathcal{N}({{\bf{x}}}|{\boldsymbol\mu _k},{\sigma_k}) = \frac{1}{\sqrt{(2\pi)^c \, \sigma_k({\boldsymbol{\chi}^T_c})^c}} \exp \left( - \frac{\left\| {{\bf{x}}} - {\boldsymbol\mu}_k({\boldsymbol{\chi}^T_c}) \right\|^2}{2\,\sigma_k({\boldsymbol{\chi}^T_c})} \right) \end{equation} In our proposed loss function, the data distribution to be approximated is the set of data points relevant to background construction. This follows from the loss function's purpose, which is to direct the neural network's variables towards a generalized, universal statistical mapping function. Furthermore, even with constantly evolving scenes where the batches of data values also vary, this loss measure constitutes a fair weighting of the sequence of inputs. It is designed to capture various pixel-wise dynamics over a video scene and to encompass even unseen perspectives by exploiting broad data coverage across multiple scenarios. In other words, the order of the network's inputs does not matter upon loading, as is proper for any statistical function estimating a distribution. For modeling tasks, we seek to establish a universal multi-modal statistical mapping function on the RGB color space, which requires optimizing the loss not just on any single pixel, but over $b$ blocks of time-series image intensity data, aggregated into a single sum: \begin{equation} \vspace{-2mm} \label{eq:los-funct} {\cal L} = \sum\limits_{i = 1}^{b} {\sum\limits_{j = 1}^{T} {\cal L}_{j}^{(i)} } \end{equation} \begin{equation} \label{eq:log-of-mixtures} \textnormal{where}\quad\quad\quad{\cal L}_{j}^{(i)} = - \ln \left( {\sum\limits_{k = 1}^K {\pi _k^{(i)}} {\cal N}({{\bf{x}}_j}|{\boldsymbol{\mu }}_k^{(i)},\sigma _k^{(i)})} \right) \end{equation} \noindent where ${\bf{x}}_j$ is the $j^{th}$ element of the $i^{th}$ time series ${\boldsymbol{\chi}^{T,(i)}_c}$ of pixel values, and $\pi^{(i)}$, $\mu ^{(i)}$, and $\sigma ^{(i)}$ are respectively the mixing coefficients, means, and variances that jointly model the distribution of ${\boldsymbol{\chi}^{T,(i)}_c}$ in the GMM. We define ${\cal L}_{j}^{(i)}$ as the error function for our learned estimate of an observed data point ${\bf{x}}_j$, given the locally relevant dataset ${\boldsymbol{\chi}^{T,(i)}_c}$; it is the negative of the statistical log-likelihood. Hence, by minimizing this loss measure, we essentially maximize the likelihood of the history of pixel intensities at each pixel position under the GMM-based neural probabilistic density function $P({{\bf{x}}})$.
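To make the objective concrete, Eq. (\ref{eq:los-funct})--(\ref{eq:log-of-mixtures}) can be evaluated directly from the three partitions of the network output. The following NumPy fragment is a minimal sketch of ours, assuming the shared-variance component density of Eq. (\ref{eq:simple-multivariate-gaussian-mixture}); it illustrates the loss computation only and is not the CDN-GM training code:
\begin{verbatim}
import numpy as np

def gmm_nll_loss(x, pi, mu, sigma):
    """Negative log-likelihood of pixel histories under a GMM.
    x:     (b, T, c) pixel histories, normalized to [0, 1]
    pi:    (b, K)    mixing coefficients (softmax outputs)
    mu:    (b, K, c) component means in [0, 1]
    sigma: (b, K)    shared per-channel variances
    """
    c = x.shape[-1]
    # squared distance of every observation to every component mean
    d2 = ((x[:, :, None, :] - mu[:, None, :, :]) ** 2).sum(-1)  # (b, T, K)
    # component densities with a common variance per channel
    pdf = np.exp(-d2 / (2 * sigma[:, None, :])) \
          / np.sqrt((2 * np.pi) ** c * sigma[:, None, :] ** c)  # (b, T, K)
    mix = (pi[:, None, :] * pdf).sum(-1)                        # (b, T)
    return -np.log(mix + 1e-12).sum()

rng = np.random.default_rng(0)
loss = gmm_nll_loss(rng.random((2, 35, 3)), np.full((2, 4), 0.25),
                    rng.random((2, 4, 3)), np.full((2, 4), 0.05))
\end{verbatim}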
Employing stochastic gradient descent on the negative logarithmic function ${\cal L}_{j}^{(i)}$ not only yields monotonic decreases, which are steep close to zero, but upon convergence it also drives the proposed neural function towards an optimized mixture-of-Gaussians probability density function. In addition, since our loss function depends entirely on the input and the output of the network (i.e., without external data labels), the proposed work can be considered an unsupervised approach: the objective of the network is to maximize the likelihood of its output on the data itself, not against any external labels. With this loss function, optimizing the network to generalize on new data is possible on the fly, without any data labeled manually by humans. The key question is whether the neural network can learn to optimize this loss function with the standard stochastic gradient descent algorithm and \textit{back-propagation}. This is only achievable if we can obtain suitable expressions for the partial derivatives of the error $\mathcal{L}$ with respect to the outputs of the network. As described in the previous section, ${\bf{y}}_\mu $, ${\bf{y}}_\sigma $, and ${\bf{y}}_\pi $ denote the CDN-GM outputs that correspond to the latent variables of the GMM. The partial derivative ${\partial \mathcal{L}_j^{(i)}}/{\partial {{\bf{y}}^{(k)}}}$ can be evaluated for a particular pattern and then summed to produce the derivative of the error function $\cal L$. To simplify the analysis of the derivatives, it is convenient to introduce the following notation for the posterior probability of component $k$ in the mixture, given by Bayes' theorem: \begin{equation} \label{eq:convenient-notation} \Pi _k^{(i)} = \frac{{\pi _k^{(i)}\mathcal{N}({{\mathbf{x}}_j}|{{\boldsymbol\mu }}_k^{(i)},\sigma _k^{(i)})}}{{\sum\limits_{l = 1}^K {\pi _l^{(i)}} \mathcal{N}({{\mathbf{x}}_j}|{{\boldsymbol\mu }}_l^{(i)},\sigma _l^{(i)})}} \end{equation} First, we consider the derivatives of the loss function with respect to the network outputs ${\bf{y}}_{\pi}$ that correspond to the mixing coefficients $\pi_k$. Using Eq. (\ref{eq:log-of-mixtures}) and (\ref{eq:convenient-notation}), we obtain: \begin{equation} \label{eq:partial-deriv-loss-wrt-mixture} \frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial \pi _k^{(i)}}} = -\frac{{\Pi _k^{(i)}}}{{\pi _k^{(i)}}} \end{equation} \noindent From this expression, we see that $\pi_k^{(i)}$ explicitly depends on ${\bf{y}}^{(l)}_{\pi}$ for $l=1,2,\ldots,K$, since $\pi_k^{(i)}$ results from the softmax mapping of ${\bf{y}}^{(l)}_{\pi}$ indicated in Eq. (\ref{eq:softmax-function}). We continue with the partial derivative of $\pi_k^{(i)}$ with respect to a particular network output ${\bf{y}}^{(l)}_{\pi}$, which is \begin{equation} \label{eq:mixture-softmax-deriv} \frac{{\partial \pi _k^{(i)}}}{{\partial {\bf{y}}^{(l)}_{\pi} }} = \begin{cases} \pi _k^{(i)}(1 - \pi _l^{(i)}), & \text{if } k = l \\ - \pi _l^{(i)}\pi _k^{(i)}, & \text{otherwise} \end{cases}
\end{equation} \noindent By chain rule, we have \begin{equation} \label{eq:loss-wrt-output-chain-rule-deriv} \frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial {\bf{y}}^{(l)}_{\pi} }} = \sum\limits_k {\frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial \pi _k^{(i)}}}} \frac{{\partial \pi _k^{(i)}}}{{\partial {\bf{y}}^{(l)}_{\pi} }} \end{equation} \noindent From Eq. (\ref{eq:convenient-notation}), (\ref{eq:partial-deriv-loss-wrt-mixture}), (\ref{eq:mixture-softmax-deriv}), and (\ref{eq:loss-wrt-output-chain-rule-deriv}), we then obtain \begin{equation} \label{eq:loss-wrt-mixture-output-unit} \frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial {{\bf{y}}^{(l)}_{\pi}}}} = \pi _l^{(i)} - \Pi _l^{(i)} \end{equation} For ${\bf{y}}^{(k)}_{\sigma}$, we make use of Eq. (\ref{eq:simple-multivariate-gaussian-mixture}), (\ref{eq:variance-constraint}), (\ref{eq:hard-sigmoid-sigma}), (\ref{eq:log-of-mixtures}), and (\ref{eq:convenient-notation}), by differentiation, to obtain \begin{equation} \label{eq:derivative-of-var-out} \frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial {\bf{y}}^{(k)}_{\sigma} }} = \frac{{3.2}}{{255}}\Pi _k^{(i)}\left( {\frac{c}{2}\sqrt {{{(2\pi )}^c}{{(\sigma _k^{(i)})}^{c + 2}}} - \frac{{\parallel {{\mathbf{x}}_j} - {{\boldsymbol{\mu }}_k}{\parallel ^2}}}{{2{{(2\pi )}^c}{{(\sigma _k^{(i)})}^{c + 2}}}}} \right) \end{equation} for $-2.5 < {\bf{y}}^{(k)}_{\sigma} < 2.5$, owing to the piecewise definition of the hard-sigmoid activation function. Finally, for ${\bf{y}}^{(k)}_{\mu}$, let $\mu_{k,l}^{(i)}$ be the $l^{th}$ element of the mean vector, where $l$ is an integer in $[0,c)$, and suppose that $\mu_{k,l}^{(i)}$ corresponds to the network output ${\bf{y}}^{(k)}_{\mu}$. We obtain the derivative with respect to ${\bf{y}}^{(k)}_{\mu}$ by taking Eq. (\ref{eq:simple-multivariate-gaussian-mixture}), (\ref{eq:hard-sigmoid-mu}), (\ref{eq:log-of-mixtures}), and (\ref{eq:convenient-notation}) into the differentiation process: \begin{equation} \label{eq:derivative-of-mix-out} \frac{{\partial \mathcal{L}_j^{(i)}}}{{\partial {\bf{y}}^{(k)}_{\mu} }} = 0.2 \times \Pi _k^{(i)}\frac{{{x_{j,l}} - \mu _{k,l}^{(i)}}}{{\sigma _k^{(i)}}} \end{equation} \noindent for $-2.5 < {\bf{y}}^{(k)}_{\mu} < 2.5$. Given Eq. (\ref{eq:loss-wrt-mixture-output-unit}), (\ref{eq:derivative-of-var-out}), and (\ref{eq:derivative-of-mix-out}), when CDN-GM performs data-driven learning individually on each video sequence using the Adam optimizer with learning rate $\alpha$, the training process regulates the latent parameters of the mixture model by minimizing the negative log-likelihood function. Hence, once the proposed model has been trained on video sequences, the network can predict the conditional density function of the target background, which is a statistical description of the time-series data at each image point, from which the foreground mask is then segmented. The primary conceptualization of the model is to address the aforementioned problems of DNNs by adaptively acquiring, online, the underlying properties of a sequence of images to construct the corresponding background scenes at concrete moments, rather than memorizing a single-valued mapping between input frames and labelled backgrounds.
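As a sanity check on the derivation above, the closed form in Eq. (\ref{eq:loss-wrt-mixture-output-unit}) can be compared against a numerical gradient. The sketch below is our own, for a single data point with stand-in component densities, and confirms $\partial \mathcal{L}_j^{(i)}/\partial {\bf{y}}^{(l)}_{\pi} = \pi_l^{(i)} - \Pi_l^{(i)}$:
\begin{verbatim}
import numpy as np

def softmax(y):
    e = np.exp(y - y.max())
    return e / e.sum()

def nll(y_pi, pdf):
    # single-point loss: -ln(sum_k pi_k * N_k)
    return -np.log(np.dot(softmax(y_pi), pdf))

K = 4
rng = np.random.default_rng(1)
y_pi = rng.normal(size=K)            # raw outputs for the mixing weights
pdf = rng.uniform(0.1, 1.0, size=K)  # stand-in densities N(x | mu_k, sigma_k)

pi = softmax(y_pi)
Pi = pi * pdf / np.dot(pi, pdf)      # posterior responsibilities
analytic = pi - Pi                   # closed-form gradient

eps = 1e-6                           # central finite differences
numeric = np.array([(nll(y_pi + eps * np.eye(K)[l], pdf)
                     - nll(y_pi - eps * np.eye(K)[l], pdf)) / (2 * eps)
                    for l in range(K)])
assert np.allclose(analytic, numeric, atol=1e-7)
\end{verbatim}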
\subsection{Foreground Segmentation with Non-linearity Differencing} \label{subsection:foreground-segmentation} In this section, we describe our proposed convolutional auto-encoder, called MEDAL-net, which simulates non-linear frame-background differencing for foreground detection. Traditionally, thresholding schemes are employed to find the salient differences between an imaging input and its corresponding static view in order to segment motion. For example, Stauffer and Grimson \cite{Stauffer1999} employed variance thresholding on background-input pairs by modeling the static view with the Gaussian Mixture Model. While the experimental results suggest a certain degree of applicability due to its simplicity, the approach lacks flexibility, as the background model is usually not static and may contain various motion effects such as occlusions, stopped objects, and shadow effects. In practice, a good design of a difference function between the current frame and its background must be capable of facilitating motion segmentation across a plethora of scenarios and effects. However, for the countless scenarios in real life, each with its own unique image features and motion behaviors, there is as yet no explicit mathematical model general enough to cover them all. Because effective subtraction requires a high degree of non-linearity to compose a model of the underlying mathematical framework of many scenarios, following the universal approximation theorem \cite{Hornik1989}, we design a technologically parallelizable neural function to approximate such a framework. Specifically, we use a CNN to construct a foreground segmentation network. The motivation is further supported by two observations: \begin{itemize} \item Convolutional Neural Networks have long been known for their effectiveness in approximating non-linear functions with arbitrary accuracy. \item Convolutional Neural Networks are capable of balancing speed and generalization accuracy, especially when given an effective design and enough representative training data. \end{itemize} However, recent works exploiting CNNs in motion estimation still generate heavyweight models that are computationally expensive and unsuitable for real-world deployment. In our proposed work, we feed a pair consisting of the current video frame and its corresponding background as the input to the neural function and extract a motion estimate. By combining this with a suitable learning objective, we explicitly provide the neural function with enough information to mold itself into a context-driven non-linear difference function, thereby restricting the model's behavior and its search directions. This also allows us to scale down the network's parameter size, width, and depth to focus on learning representations while maintaining generalization for unseen cases. As empirically shown in the experiments, the proposed architecture is lightweight in terms of the number of parameters, and is also extremely resource-efficient, e.g. compared to FgSegNet \cite{Lim2018}.
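At inference time, the intended behavior can be summarized in a few lines: concatenate each frame with its CDN-GM background along the channel axis, run the network, and binarize the resulting probability map (the thresholding is formalized in Eqs. (\ref{eq:F-function}) and (\ref{eq:segmap-Yij}) below). The wrapper below is a hypothetical sketch of ours, and the toy stand-in network exists only to make the fragment runnable; it is not MEDAL-net:
\begin{verbatim}
import numpy as np

def extract_foreground(frames, backgrounds, net, eps=0.5):
    """frames, backgrounds: (N, H, W, 3) arrays in [0, 1];
    net: any callable mapping (N, H, W, 6) -> (N, H, W, 1)."""
    x = np.concatenate([frames, backgrounds], axis=-1)  # (N, H, W, 6)
    prob = net(x)                                  # probability map
    return (prob[..., 0] >= eps).astype(np.uint8)  # 1 = foreground

# toy stand-in: mean absolute frame-background difference per pixel
toy_net = lambda x: np.abs(x[..., :3] - x[..., 3:]).mean(-1, keepdims=True)
masks = extract_foreground(np.random.rand(2, 8, 8, 3),
                           np.random.rand(2, 8, 8, 3), toy_net, eps=0.3)
\end{verbatim}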
\begin{table}[!b] \vspace{-5mm} \centering \caption{Body Architecture of MEDAL-net} \label{table:MEDAL-architecture} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lll} \toprule \multicolumn{1}{c}{\textbf{Type / Stride}} & \textbf{Filter shape} & \textbf{Output size} \\ \hline Input & - & N x H x W x 6 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x H x W x 6 \\ \hline Conv / s1 / ReLU & 1 x 1 x 6 x 16 & N x H x W x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x H x W x 16 \\ \hline Conv / s1 / ReLU & 1 x 1 x 16 x 16 & N x H x W x 16 \\ \hline Max pool / s2 & 2 x 2 x 1 & N x (H / 2) x (W / 2) x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x (H / 2) x (W / 2) x 16 \\ \hline Conv / s1 / ReLU & 1 x 1 x 16 x 16 & N x (H / 2) x (W / 2) x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x (H / 2) x (W / 2) x 16 \\ \hline Conv / s1 / ReLU & 1 x 1 x 16 x 16 & N x (H / 2) x (W / 2) x 16 \\ \hline Max pool / s2 & 2 x 2 x 1 & N x (H / 4) x (W / 4) x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x (H / 4) x (W / 4) x 16 \\ \hline Conv / s1 & 1 x 1 x 16 x 16 & N x (H / 4) x (W / 4) x 16 \\ \hline InstanceNorm / ReLU & - & N x (H / 4) x (W / 4) x 16 \\ \hline Upsampling & - & N x (H / 2) x (W / 2) x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x (H / 2) x (W / 2) x 16 \\ \hline Conv / s1 & 1 x 1 x 16 x 16 & N x (H / 2) x (W / 2) x 16 \\ \hline InstanceNorm / ReLU & - & N x (H / 2) x (W / 2) x 16 \\ \hline Upsampling & - & N x H x W x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x H x W x 16 \\ \hline Conv / s1 & 1 x 1 x 16 x 16 & N x H x W x 16 \\ \hline InstanceNorm / ReLU & - & N x H x W x 16 \\ \hline DW conv / s1 & 3 x 3 x 1 & N x H x W x 16 \\ \hline Conv / s1 / Hard Sigmoid & 1 x 1 x 16 x 1 & N x H x W x 1 \\ \bottomrule \end{tabular}} \end{table} \subsubsection{Architectural design} \begin{figure*}[!b] \vspace{-2mm} \centering \subfloat{\includegraphics[width=0.95\textwidth]{Fig-03-FDN-architecture.png}} \caption{The proposed architecture of MEDAL-net grounded on convolutional autoencoder for foreground detection} \label{figure:03} \vspace{-2mm} \end{figure*} The overall flow of MEDAL-net is shown in Fig. \ref{figure:03}. We employ an encoder-decoder design for our segmentation function. With this approach, data inputs are compressed into a low-dimensional latent space of learned informative variables in the encoder, and the encoded feature map is then passed to the decoder, which generates foreground masks. In our design, we make full use of the depthwise separable convolutions introduced in MobileNets \cite{Howard2017}, so that our method is suitable for mobile vision applications. Because this type of layer significantly scales down the number of convolutional parameters, we reduced the number of parameters of our network by approximately 81.7\% compared to using only standard 2D convolutions, rendering a lightweight network of around 2,800 parameters. Interestingly, even with such a small set of parameters, the network does not lose its ability to generalize predictions at high accuracy. Our architecture also employs normalization layers, but only in the decoder. This design choice avoids the loss of information when projecting the contextual differences of background-input pairs into the latent space via the encoder, while using normalization to boost the decoder's learning. The architecture of the proposed model is described in Table \ref{table:MEDAL-architecture}.
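Each encoder stage in Table \ref{table:MEDAL-architecture} pairs a $3 \times 3$ depthwise convolution with a $1 \times 1$ pointwise convolution and ReLU. The following is a minimal sketch of one such stage, assuming a \texttt{tf.keras} implementation; it is our own illustration whose layer sizes follow the table, not the released model code:
\begin{verbatim}
import tensorflow as tf

def dw_separable_block(x, filters):
    # 3x3 depthwise convolution followed by a 1x1 pointwise
    # convolution with ReLU, as in the encoder rows of the table
    x = tf.keras.layers.DepthwiseConv2D(3, padding='same')(x)
    return tf.keras.layers.Conv2D(filters, 1, activation='relu')(x)

inp = tf.keras.Input(shape=(None, None, 6))  # frame + background
x = dw_separable_block(inp, 16)
x = dw_separable_block(x, 16)
x = tf.keras.layers.MaxPool2D(2)(x)          # first downsampling stage
encoder_stub = tf.keras.Model(inp, x)
\end{verbatim}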
\paragraph{Encoder} The encoder can be thought of as a folding function that projects the loaded data into an information-rich, low-dimensional feature space. In our architecture, the encoder takes as inputs pairs of video frames and their corresponding backgrounds, concatenated along the depth dimension. Specifically, the background image estimated by CDN-GM is concatenated with the imaging signals so that raw information is preserved for the neural network to freely learn to manipulate. Moreover, with the background image also in its raw form, context-specific scene dynamics (e.g. moving waves, camera jittering, intermittent objects) are also captured. Thus, as backgrounds are combined with input images to formulate predictions, MEDAL-net may further learn to recognize motions that are innate to a scene, thereby selectively segmenting motions of interest based on the context. In addition, by explicitly providing the pair of the current input frame and its background image for foreground segmentation, our network essentially constructs a simple difference function that can extend its behavior to accommodate contextual effects. Thus, we theorize that approximating this neural difference function does not require an enormous number of parameters; in other words, it is possible to reduce the number of layers and the weights' size of the foreground extraction network and still accomplish the task. Hence, the encoder consists of only a few convolutional layers, with two max-pooling layers for downsampling contextual attributes into a feature-rich latent space. \paragraph{Decoder} The decoder of our network serves to unfold the encoded feature map into the foreground space, using convolutional layers with two upsampling layers to restore the original resolution of the input data. To facilitate faster training and better estimation of the final output, we engineered the decoder to include instance normalization, which has been reported to be more efficient than batch normalization \cite{Ulyanov2017}. Using upsampling to expand the latent tensors, the decoder also employs convolutional layers to induce non-linearity, like the encoder. The final output of the decoder is a grayscale probability map in which each pixel's value represents the chance that the pixel belongs to a foreground object. This map is the learned motion segmentation result, with pixel-wise confidence scores determined from each pixel's neighborhood and scene-specific variations. In our design, we use the hard-sigmoid activation function because it allows faster gradient propagation, which results in less training time. At inference time, the final segmentation result is a binary image obtained by placing a constant threshold on the generated probability map. Specifically, suppose $\mathbf{X}$ is a probability map of size $N \times H \times W \times 1$, and let the set $F$ be defined as: \begin{equation} \label{eq:F-function} F = \left\{ {(x,y,z)\,|\,{{\bf{X}}_{x,y,z,0}} \ge \epsilon} \right\} \end{equation} \noindent where $x \in [0,N)$, $y \in [0,H)$, $z \in [0,W)$, and $\epsilon$ is an experimentally determined parameter. In other words, $F$ is the set of indices of $\mathbf{X}$ that satisfy the threshold $\epsilon$.
The segmentation map $\hat{\mathbf{Y}}$ of size $N \times H \times W$ is obtained by: \begin{equation} \label{eq:segmap-Yij} \hat{\mathbf{Y}}_{i,j,k} = \begin{cases} 1, & (i,j,k) \in F \\ 0, & \text{otherwise} \end{cases} \end{equation} \noindent where 1 represents indices classified as foreground, and 0 represents background indices. \subsubsection{Training} \paragraph{Data preparation} The training dataset for MEDAL-net is carefully chosen by hand so that it maintains the balance between background labels and foreground labels, since imbalanced data would increase the model's likelihood of overfitting. We choose just 200 labeled ground truths to train the model. This is only up to 20\% of the number of labeled frames for some sequences in CDnet, and 8.7\% of CDnet's labeled data overall. During training, the associated background of each chosen frame is generated directly by CDN-GM; because of the manually chosen input-label pairs, MEDAL-net is trained separately from CDN-GM. \paragraph{Training procedure} We penalize the output of the network using the cross-entropy loss function commonly used for segmentation tasks, as the goal of the model is to learn a Dirac-delta-like distribution for each pixel: \begin{equation} \label{eq:Loss-MEDAL-net} \begin{aligned} L= - \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{H} \sum_{k=1}^{W} [ & \mathbf{Y}_{i,j,k} \log(\mathbf{X}_{i,j,k,0}) \\ & + (1 - \mathbf{Y}_{i,j,k}) \log(1 - \mathbf{X}_{i,j,k,0})] \end{aligned} \end{equation} \noindent where $\mathbf{Y}$ is the corresponding target set of foreground binary masks for the batch of predicted foreground probability maps $\mathbf{X}$. The network is trained for about 1000 epochs on each sequence in CDnet using the Adam optimizer with a learning rate of 0.005. With this straightforward learning objective applied to our CNN, the designed architecture learns not only pixel-wise motion estimates on the training set; it is also taught to recognize the inherent dynamics in its data and to act as a context-driven neural difference function that accurately interpolates region-wise foreground predictions for unseen perspectives. \begin{table*}[t!]
\caption{F - measure comparisons over all of eleven categories in the CDnet 2014 dataset} \label{tab:cdnet-fmeasure} \centering \resizebox{1.0\textwidth}{!}{ \begin{tabular}{c|llllllllllll} \toprule \multicolumn{1}{c|}{\textbf{~}} & \multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textit{\textbf{BDW}}} & \multicolumn{1}{c}{\textit{\textbf{LFR}}} & \multicolumn{1}{c}{\textit{\textbf{NVD}}} & \multicolumn{1}{c}{\textit{\textbf{PTZ}}} & \multicolumn{1}{c}{\textit{\textbf{THM}}} & \multicolumn{1}{c}{\textit{\textbf{SHD}}} & \multicolumn{1}{c}{\textit{\textbf{IOM}}} & \multicolumn{1}{c}{\textit{\textbf{CJT}}} & \multicolumn{1}{c}{\textit{\textbf{DBG}}} & \multicolumn{1}{c}{\textit{\textbf{BSL}}} & \multicolumn{1}{c}{\textit{\textbf{TBL}}} \\ \midrule \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Unsupervised}}} & GMM -- S \& G & 0.7380 & 0.5373 & 0.4097 & 0.1522 & 0.6621 & 0.7156 & 0.5207 & 0.5969 & 0.6330 & 0.8245 & 0.4663 \\ & GMM -- Zivkovic & 0.7406 & 0.5065 & 0.3960 & 0.1046 & 0.6548 & 0.7232 & 0.5325 & 0.5670 & 0.6328 & 0.8382 & 0.4169 \\ & SuBSENSE & \textcolor{green}{$0.8619_{(2)}$} & 0.6445 & \textcolor{blue}{$0.5599_{(3)}$} & \textcolor{blue}{$0.3476_{(3)}$} & \textcolor{blue}{$0.8171_{(3)}$} & \textcolor{blue}{$0.8646_{(3)}$} & 0.6569 & \textcolor{green}{$0.8152_{(2)}$} & 0.8177 & \textcolor{red}{$0.9503_{(1)}$} & \textcolor{green}{$0.7792_{(2)}$} \\ & PAWCS & 0.8152 & \textcolor{blue}{$0.6588_{(3)}$} & 0.4152 & \textcolor{red}{$0.4615_{(1)}$} & \textcolor{red}{$0.9921_{(1)}$} & \textcolor{green}{$0.8710_{(2)}$} & \textcolor{blue}{$0.7764 _{(3)}$} & \textcolor{blue}{$0.8137_{(3)}$} & \textcolor{red}{$0.8938_{(1)}$} & \textcolor{blue}{$0.9397_{(3)}$} & 0.6450 \\ & TensorMoG & \textcolor{red}{$0.9298_{(1)}$} & \textcolor{green}{$0.6852_{(2)}$} & \textcolor{green}{$0.5604_{(2)}$} & 0.2626 & 0.7993 & \textcolor{red}{$0.9738_{(1)}$} & \textcolor{red}{$0.9325_{(1)}$} & \textcolor{red}{$0.9325_{(1)}$} & 0.6493 & \textcolor{green}{$0.9488_{(2)}$} & \textcolor{red}{$0.8380_{(1)}$} \\ & BMOG & 0.7836 & 0.6102 & 0.4982 & 0.2350 & 0.6348 & 0.8396 & 0.5291 & 0.7493 & 0.7928 & 0.8301 & 0.6932 \\ & FTSG & 0.8228 & 0.6259 & 0.5130 & 0.3241 & 0.7768 & 0.8535 & \textcolor{green}{$0.7891_{(2)}$} & 0.7513 & \textcolor{green}{$0.8792_{(2)}$} & 0.9330 & 0.7127 \\ & SWCD & \textcolor{blue}{$0.8233_{(3)}$} & \textcolor{red}{$0.7374_{(1)}$} & \textcolor{red}{$0.5807_{(1)}$} & \textcolor{green}{$0.4545_{(2)}$} & \textcolor{green}{$0.8581_{(2)}$} & 0.8302 & 0.7092 & 0.7411 & \textcolor{blue}{$0.8645_{(3)}$} & 0.9214 & \textcolor{blue}{$0.7735_{(3)}$} \\ \midrule \parbox[t]{2mm}{\multirow{1}{*}{\rotatebox[origin=c]{90}{$\ast$}}} & \textbf{CDN-MEDAL-net} & \textbf{0.9045} & \textbf{0.9561} & \textbf{0.8450} & \multicolumn{1}{c}{-} & \textbf{0.9129} & \textbf{0.8683} & \textbf{0.8249} & \textbf{0.8427} & \textbf{0.9372} & \textbf{0.9615} & \textbf{0.9187} \\ \midrule \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Supervised}}} & FgSegNet\_S & \textcolor{green}{$0.9897_{(2)}$} & \textcolor{green}{$0.8972_{(2)}$} & \textcolor{green}{$0.9713_{(2)}$} & \textcolor{red}{$0.9879_{(1)}$} & \textcolor{red}{$0.9921 _{(1)}$} & \textcolor{blue}{$0.9937_{(3)}$} & \textcolor{blue}{$0.9940_{(3)}$} & \textcolor{green}{$0.9957_{(2)}$} & \textcolor{green}{$0.9958_{(2)}$} & \textcolor{red}{$0.9977_{(1)}$} & 0.9681 \\ & FgSegNet & \textcolor{blue}{$0.9845_{(3)}$} & \textcolor{blue}{$0.8786_{(3)}$} & \textcolor{blue}{$0.9655_{(3)}$} & \textcolor{blue}{$0.9843_{(3)}$} & \textcolor{blue}{$0.9648_{(3)}$} & 
\textcolor{green}{$0.9973_{(2)}$} & \textcolor{red}{$0.9958_{(1)}$} & \textcolor{blue}{$0.9954_{(3)}$} & \textcolor{blue}{$0.9951_{(3)}$} & \textcolor{blue}{$0.9944_{(3)}$} & \textcolor{green}{$0.9921_{(2)}$} \\ & FgSegNet\_v2 & \textcolor{red}{$0.9904_{(1)}$} & \textcolor{red}{$0.9336_{(1)}$} & \textcolor{red}{$0.9739_{(1)}$} & \textcolor{green}{$0.9862_{(2)}$} & \textcolor{green}{$0.9727_{(2)}$} & \textcolor{red}{$0.9978_{(1)}$} & \textcolor{green}{$0.9951_{(2)}$} & \textcolor{red}{$0.9971_{(1)}$} & \textcolor{red}{$0.9961_{(1)}$} & \textcolor{green}{$0.9952_{(2)}$} & \textcolor{red}{$0.9938_{(1)}$} \\ & Cascade CNN & 0.9431 & 0.8370 & 0.8965 & 0.9168 & 0.8958 & 0.9414 & 0.8505 & \textcolor{blue}{$0.9758_{(3)}$} & 0.9658 & 0.9786 & 0.9108 \\ & DeepBS & 0.8301 & 0.6002 & 0.5835 & 0.3133 & 0.7583 & 0.9092 & 0.6098 & 0.8990 & 0.8761 & 0.9580 & 0.8455 \\ & STAM & 0.9703 & 0.6683 & 0.7102 & 0.8648 & 0.9328 & 0.9885 & 0.9483 & 0.8989 & 0.9155 & 0.9663 & \textcolor{blue}{$0.9907_{(3)}$} \\ \bottomrule \end{tabular}} \resizebox{1.0\textwidth}{!}{\begin{tabular}{cccccc} \multicolumn{6}{p{20cm}}{\footnotesize ${}^{\ast}$Semi-Unsupervised; Experimented scenarios include bad weather (\textit{BDW}), low frame rate (\textit{LFR}), night videos (\textit{NVD}), pan-tilt-zoom (\textit{PTZ}), turbulence (\textit{TBL}), baseline (\textit{BSL}), dynamic background (\textit{DBG}), camera jitter (\textit{CJT}), intermittent object motion (\textit{IOM}), shadow (\textit{SHD}), and thermal (\textit{THM}). In each column, \textcolor{red}{$Red_{(1)}$} is for the best, \textcolor{green}{$Green_{(2)}$} {is for} the second best, and~\textcolor{blue}{$Blue_{(3)}$} {is for} the third best.} \end{tabular}} \end{table*} \begin{figure*}[!t] \vspace{-5mm} \setlength\tabcolsep{1.5pt} \renewcommand{\arraystretch}{2.7} \resizebox{0.83\textwidth}{!}{ \begin{tabular*}{1.0\textwidth}{@{}rcccccccccccccccc@{}} & ($\star$) & ($\diamond$) & (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) & (j) & (k) & (l) \\ \arrayrulecolor{red}\cline{11-11} \rule{0pt}{25pt}\parbox[t]{2mm}{\rotatebox[origin=c]{90}{BDW}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/badWeather/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/badWeather/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/badWeather/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/badWeather/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/badWeather/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/badWeather/fg.png} & \\ 
\parbox[t]{2mm}{\rotatebox[origin=c]{90}{BSL}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/baseline/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/baseline/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/baseline/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/baseline/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/baseline/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/baseline/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{CJT}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/cameraJitter/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/cameraJitter/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/cameraJitter/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/cameraJitter/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/cameraJitter/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/cameraJitter/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{DBG}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/dynamicBackground/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/dynamicBackground/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/dynamicBackground/fg.png} & \includegraphics[align=c, 
width=0.08\textwidth, height = 37pt]{Methods/PAWCS/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/dynamicBackground/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/dynamicBackground/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/dynamicBackground/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/dynamicBackground/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{IOM}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/intermittentObjectMotion/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/intermittentObjectMotion/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/intermittentObjectMotion/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/intermittentObjectMotion/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/intermittentObjectMotion/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/intermittentObjectMotion/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{LFR}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/lowFramerate/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/lowFramerate/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 
37pt]{Methods/SWCD/lowFramerate/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/lowFramerate/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/lowFramerate/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/lowFramerate/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{NVD}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/nightVideos/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/nightVideos/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/nightVideos/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/nightVideos/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/nightVideos/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/nightVideos/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{SHD}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/shadow/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/shadow/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/shadow/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/shadow/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/shadow/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/shadow/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{THM}} & \includegraphics[align=c, 
width=0.08\textwidth, height = 37pt]{input_and_gt/thermal/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/thermal/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/thermal/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/thermal/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/thermal/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/thermal/fg.png} & \\ \parbox[t]{2mm}{\rotatebox[origin=c]{90}{TBL}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/turbulence/input.jpg} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{input_and_gt/turbulence/gt.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_S_G/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/GMM_ZV/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SUBSENSE/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/PAWCS/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/BMOG/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FTSG/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/SWCD/turbulence/fg.png} & \Thickvrule{\includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CDN-MEDAL-net/turbulence/fg.png}} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_S/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/FgSegNet_v2/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/CascadeCNN/turbulence/fg.png} & \includegraphics[align=c, width=0.08\textwidth, height = 37pt]{Methods/DeepBS/turbulence/fg.png} & \\[2ex] \arrayrulecolor{red}\cline{11-11} \end{tabular*}} \caption{Visual quality comparison for foreground detection on all video sequences in eleven categories in CDnet 2014. 
The columns include: ($\star$) input frame, ($\diamond$) corresponding groundtruth foreground, (a) GMM -- S \& G, (b) GMM -- Zivkovic, (c) SuBSENSE, (d) PAWCS, (e) BMOG, (f) FTSG, (g) SWCD, (h) CDN-MEDAL-net, (i) FgSegNet\_S, (j) FgSegNet\_v2, (k) Cascade CNN, (l) DeepBS.} \label{fig:cdnet2014} \end{figure*} \begin{table*}[!b] \vspace{-3mm} \setcounter{table}{4} \caption{F-measure comparisons over the six sequences of the Wallflower dataset with model parameters tuned on CDnet-2014} \label{tab:wallflower-fmeasure} \centering \begin{tabular}{l|l|llllll} \toprule & \multicolumn{1}{c|}{\textbf{Method}} & \textit{\textbf{Bootstrap}} & \textit{\textbf{LightSwitch}} & \textit{\textbf{WavingTrees}} & \textit{\textbf{Camouflage}} & \textit{\textbf{ForegroundAperture}} & \textit{\textbf{TimeOfDay}} \\ \midrule \parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{${}^{\mp}$UnS.}}} & \multicolumn{1}{l|}{GMM -- Stauffer \& Grimson} & 0.5306 & 0.2296 & \textbf{0.9767} & 0.8307 & 0.5778 & 0.7203 \\ & SuBSENSE & 0.4192 & 0.3201 & 0.9597 & 0.9535 & 0.6635 & 0.7107 \\ \midrule \parbox[t]{2mm}{\multirow{1}{*}{\rotatebox[origin=c]{90}{$\ast$}}} & \multirow{1}{*}{\textbf{CDN-MEDAL-net}} & \textbf{0.7680} & 0.5400 & 0.8156 & 0.9700 & \textbf{0.8401} & \textbf{0.7429} \\ \midrule \parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{${}^{\mp}$Sup.}}} & DeepBS & 0.7479 & 0.6114 & 0.9546 & \textbf{0.9857} & 0.6583 & 0.5494 \\ & STAM & 0.7414 & \textbf{0.9090} & 0.5325 & 0.7369 & 0.8292 & 0.3429 \\ \bottomrule \end{tabular} \resizebox{0.93\textwidth}{!}{\begin{tabular}{cccccc} \multicolumn{6}{p{17cm}}{\footnotesize ${}^{\ast}$Semi-Unsupervised; ${}^{\mp}$UnS. = Unsupervised and Sup. = Supervised; In each column, \textbf{Bold} is for the best within each scenario.} \end{tabular}}{} \end{table*} \begin{table}[t!]
\setcounter{table}{3} \caption{Results of the quantitative evaluation on the CDnet 2014 dataset} \label{tab:quantitative-evaluation-cdnet-2014} \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|llllll} \toprule & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{1}{c}{\textit{\textbf{Average}}} & \multicolumn{1}{c}{\textit{\textbf{Average}}} & \multicolumn{1}{c}{\textit{\textbf{Average}}} & \multicolumn{1}{c}{\textit{\textbf{Average}}} & \multicolumn{1}{c}{\textit{\textbf{Average}}} \\ & & \multicolumn{1}{c}{\textit{\textbf{Recall}}} & \multicolumn{1}{c}{\textit{\textbf{FPR}}} & \multicolumn{1}{c}{\textit{\textbf{FNR}}} & \multicolumn{1}{c}{\textit{\textbf{PWC}}} & \multicolumn{1}{c}{\textit{\textbf{Precision}}} \\ \midrule \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Unsupervised}}} & GMM -- S \& G & 0.6846 & 0.0250 & 0.3154 & 3.7667 & 0.6025 \\ & GMM -- Zivkovic & 0.6604 & 0.0275 & 0.3396 & 3.9953 & 0.5973 \\ & SuBSENSE & 0.8124 & 0.0096 & \textcolor{red}{$0.1876_{(1)}$} & 1.6780 & 0.7509 \\ & PAWCS & \textcolor{blue}{$0.7718_{(3)}$} & \textcolor{red}{$0.0051_{(1)}$} & 0.2282 & \textcolor{red}{$1.1992_{(1)}$} & \textcolor{green}{$0.7857_{(2)}$} \\ & TensorMoG & \textcolor{green}{$0.7772_{(2)}$} & 0.0107 & \textcolor{blue}{$0.2228_{(3)}$} & 2.3315 & \textcolor{red}{$0.8215_{(1)}$} \\ & BMOG & 0.7265 & 0.0187 & 0.2735 & 2.9757 & 0.6981 \\ & FTSG & 0.7657 & \textcolor{blue}{$0.0078_{(3)}$} & 0.2343 & \textcolor{blue}{$1.3763_{(3)}$} & \textcolor{blue}{$0.7696_{(3)}$} \\ & SWCD & \textcolor{red}{$0.7839_{(1)}$} & \textcolor{green}{$0.0070_{(2)}$} & \textcolor{green}{$0.2161_{(2)}$} & \textcolor{green}{$1.3414_{(2)}$} & 0.7527 \\ \midrule \parbox[t]{2mm}{\multirow{1}{*}{\rotatebox[origin=c]{90}{$\ast$}}} & \textbf{CDN-MEDAL-net} & \textbf{0.9232} & \textbf{0.0039} & \textbf{0.0768} & \textbf{0.5965} & \textbf{0.8724} \\ \midrule \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Supervised}}} & FgSegNet\_S & \textcolor{red}{$0.9896_{(1)}$} & \textcolor{green}{$0.0003_{(2)}$} & \textcolor{red}{$0.0104_{(1)}$} & \textcolor{green}{$0.0461_{(2)}$} & 0.9751 \\ & FgSegNet & \textcolor{blue}{$0.9836_{(3)}$} & \textcolor{red}{$0.0002_{(1)}$} & \textcolor{blue}{$0.0164_{(3)}$} & \textcolor{blue}{$0.0559_{(3)}$} & 0.9758 \\ & FgSegNet\_v2 & \textcolor{green}{$0.9891_{(2)}$} & \textcolor{red}{$0.0002_{(1)}$} & \textcolor{green}{$0.0109_{(2)}$} & \textcolor{red}{$0.0402_{(1)}$} & \textcolor{green}{$0.9823_{(2)}$} \\ & Cascade CNN & 0.9506 & 0.0032 & 0.0494 & 0.4052 & 0.8997 \\ & DeepBS & 0.7545 & 0.0095 & 0.2455 & 1.9920 & 0.8332 \\ & STAM & 0.9458 & \textcolor{blue}{$0.0005_{(3)}$} & 0.0542 & 0.2293 & \textcolor{red}{$0.9851_{(1)}$} \\ \bottomrule \end{tabular}}{} \resizebox{0.48\textwidth}{!}{\begin{tabular}{cccccc} \multicolumn{6}{p{10cm}}{\footnotesize ${}^{\ast}$Semi-Unsupervised; In each column, \textcolor{red}{$Red_{(1)}$} is for the best, \textcolor{green}{$Green_{(2)}$} is for the second best, and~\textcolor{blue}{$Blue_{(3)}$} is for the third best.} \end{tabular}}{} \vspace{-5mm} \end{table} \section{Experiments and Discussion} \label{section:experiments-and-discussion} \subsection{Experimental Setup} In this section, we experimentally verify the capabilities of the proposed method in capturing motion attributes via comparative evaluations, in order to assess the effectiveness of CDN-MEDAL-net in foreground detection.
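All quantitative criteria used in this section (including the columns of Table \ref{tab:quantitative-evaluation-cdnet-2014}) are derived from per-pixel confusion matrices; the metrics are listed formally below. As a reference for the reader, the following minimal Python sketch (our own illustration, not the benchmark's official evaluation code; edge cases such as empty masks are ignored) shows how these statistics follow from the TP/FP/FN/TN counts:
\begin{verbatim}
import numpy as np

def segmentation_metrics(pred, gt):
    """Confusion-matrix statistics for a binary foreground mask.
    pred, gt: boolean arrays of equal shape (True = foreground)."""
    tp = np.sum(pred & gt)    # foreground correctly detected
    fp = np.sum(pred & ~gt)   # background labeled as foreground
    fn = np.sum(~pred & gt)   # missed foreground
    tn = np.sum(~pred & ~gt)  # background correctly rejected
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return {
        "Recall": recall,
        "Precision": precision,
        "FNR": fn / (tp + fn),  # false-negative rate
        "FPR": fp / (fp + tn),  # false-positive rate
        # percentage of wrong classification
        "PWC": 100.0 * (fn + fp) / (tp + fp + fn + tn),
        "F-measure": 2 * precision * recall / (precision + recall),
    }
\end{verbatim}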
Our proposed scheme is designed to explicitly incorporate the probabilistic density properties into the architecture to achieve accurate adaptiveness, while taking advantage of the parallel computing technologies often used with DNNs so that, given its light structure, it competes with state-of-the-art works in speed. Therefore, we compare the accuracy of the proposed framework not only with unsupervised approaches that are light-weight and generalizable without pretraining: GMM -- Stauffer \& Grimson \cite{Stauffer1999}, GMM -- Zivkovic \cite{Zivkovic2004}, SuBSENSE \cite{Charles2015}, PAWCS \cite{Charles2015a}, TensorMoG \cite{Ha2020}, BMOG \cite{Martins2018}, FTSG \cite{Wang2014a}, SWCD \cite{Sahin2018}, but also with the data-driven, supervised models which trade computational expense for high accuracy: FgSegNet\_S \cite{Lim2018}, FgSegNet \cite{Lim_2018}, FgSegNet\_v2 \cite{Lim2019}, Cascade CNN \cite{Wang2017}, DeepBS \cite{Babaee2018}, STAM \cite{Liang2019}. In terms of metrics for measuring motion features, we employ quantitative analysis on values that can be appraised from confusion matrices, i.e., Precision, Recall, F-Measure, False-Negative Rate (FNR), False-Positive Rate (FPR) and Percentage of Wrong Classification (PWC). With the overall results being drawn from the combination of all confusion matrices across the given scenarios, the benchmarks on CDnet-2014 \cite{Wang2014} were performed by comparing foreground predictions against the provided ground-truths. Then, we evaluate the proposed framework trained with CDnet-2014 on Wallflower \cite{Toyama1999}, without any tuning or retraining of latent parameters, to examine the capability of our proposed approach in unseen scenarios having similar dynamics. Finally, we also analyze all methods in terms of processing speed at an image resolution of $320 \times 240$ and draw final conclusions. In our experiment, the number of Gaussians $K$ is chosen empirically and heuristically to balance CDN-GM's capability of modeling constantly evolving contexts (e.g., a moving body of water) under the effects of potentially corruptive noises. With $K$ too big, many GMM components may be unused, or they simply capture the various noises within contextual dynamics. As the Gaussian component corresponding to the background intensity revolves around the most frequently occurring color subspaces to draw predictions, the extra components serve only as placeholders for abrupt background changes, remain empty, or capture intermittent noises of various degrees. In practice, noise Gaussian components in GMM are pulse-like, as they appear only for short durations, and low-weighted, because they are not matched as often as background components. Nevertheless, they still present corruptive effects to our model. Our proposed CDN-GM model was set up with the number of Gaussian components $K = 3$ for all experimented sequences, and was trained on the CDnet-2014 dataset with the Adam optimizer using a learning rate of $\alpha = 10^{-4}$. In addition, the constants $\bar\sigma_{min}$ and $\bar\sigma_{max}$ were chosen such that no Gaussian component spans the whole color space while also not contracting to a single point that represents noise. If the $\left[\bar\sigma_{min}, \bar\sigma_{max}\right]$ interval is too small, all of the Gaussian components will likely focus on one single color cluster.
Otherwise, if the interval is too large, some of the components might still cover all intensity values, making it hard to find the true background intensity. Based on this assumption and experimental observations, we find that the difference between color clusters usually does not exceed approximately $16$ at minimum and $32$ at maximum. Regarding MEDAL-net, the value of $\epsilon$ was empirically chosen to be 0.3 in order to extract the foreground effectively even under high color similarity between objects and background. \subsection{Results on CDnet 2014 Benchmarks} Using the large-scale CDnet-2014 dataset, we demonstrate empirically the effectiveness of our proposed approach across a plethora of scenarios and effects. For each multi-thousand-frame sequence of a scenario, we sample only 200 foreground images for training our foreground estimator. This sampling strategy for supervised learning is the same as that of FgSegNet and Cascade CNN. The experimental results are summarized in Table \ref{tab:cdnet-fmeasure}, which highlights the F-measure quantitative results of our approach compared against several existing state-of-the-art approaches, along with Fig. \ref{fig:cdnet2014}, which provides qualitative illustrations. Despite its compact architecture, the proposed approach is shown to be capable of significantly outperforming unsupervised methods, and of competing with complex deep-learning-based, supervised approaches in terms of accuracy on all but the \textit{PTZ} scenario. In this experimental dataset, we pass over the \textit{PTZ} subdivision, where CDN-GM cannot reliably model the underlying description of the most likely background because the actually observed data sequences fluctuate while the recording camera rotates continuously. Accordingly, our MEDAL-net scheme of foreground segmentation encounters difficulty in estimating the difference between input frames and the corresponding background scenes. In comparison with unsupervised models built on the GMM background modeling framework, like GMM -- Stauffer \& Grimson, GMM -- Zivkovic, BMOG and TensorMoG, the proposed approach is better augmented by the context-driven motion estimation plugin, without being constrained by simple thresholding schemes. Thus, it is able to provide remarkably superior F-measure results across the scenarios, especially on those with high degrees of noise or background dynamics like \textit{LFR}, \textit{NVD}, \textit{IOM}, \textit{CJT}, \textit{DBG} and \textit{TBL}. However, it is apparently a little worse than TensorMoG on \textit{BDW}, \textit{SHD}, \textit{IOM} and \textit{CJT}, which may be attributed to TensorMoG's carefully tuned hyperparameters for foreground segmentation, thereby suggesting that the proposed method is possibly still limited by its architectural size and training data. Comparison with other unsupervised methods is also conducted, using mathematically rigorous approaches such as SuBSENSE, PAWCS, FTSG and SWCD that are designed to tackle scenarios commonly seen in real life (i.e., \textit{BSL}, \textit{DBG}, \textit{SHD}, and \textit{BDW}). Nevertheless, the F-measure results of the proposed approach, around 0.90, suggest that it is still able to outperform these complex unsupervised approaches, possibly owing to its use of hand-labeled data for explicitly enabling context capturing. In comparison with supervised approaches, the proposed approach is apparently very competitive against the more computationally expensive state-of-the-arts.
For instance, our approach considerably surpasses the generalistic methods of STAM and DeepBS on \textit{LFR} and \textit{NVD}, but it loses against both of these methods on \textit{SHD} and \textit{CJT}, and is especially outperformed by STAM on many scenarios. While STAM and DeepBS are constructed using only 5\% of CDnet-2014, they demonstrate good generalization capability across multiple scenarios by capturing the holistic features of their training dataset. However, despite being trained on all scenarios, their behaviors showcase higher degrees of instability (e.g., with \textit{LFR}, \textit{NVD}) than our proposed approach on scenarios that deviate from the common features of the dataset. Finally, when our proposed method is compared against similarly scene-specific approaches like the FgSegNet variants and Cascade CNN, the results were within expectations: ours is not significantly outperformed on almost any scenario, as the compared models can accommodate the various features of each sequence in their large architectures. Surprisingly, however, our method surpasses even these computationally expensive models to rank at the top on the \textit{LFR} scenario. This suggests that, with a background for facilitating motion segmentation from an input, our trained model can tackle scenarios where objects are constantly changing and moving better than even existing state-of-the-arts. Overall, these comparisons serve to illustrate the superiority of the proposed approach in terms of accuracy over unsupervised approaches using only small training datasets, while cementing its practical use in its ability to compete with supervised ones despite its light-weight structure. Table \ref{tab:quantitative-evaluation-cdnet-2014} presents the confusion-matrix-based evaluation metrics. \begin{figure*}[!t] \vspace{-2mm} \centering \subfloat{\includegraphics[width=1.0\textwidth]{Fig-04_new.png}} \caption{Computational speed and average F-measure comparison with state-of-the-art methods.} \label{figure:05} \vspace{-2mm} \end{figure*} \subsection{Results on Wallflower Benchmarks without Tuning} Using the Wallflower dataset, we aim to empirically determine our proposed approach's effectiveness on unseen sequences, using only trained weights from scenarios of similar dynamics in CDnet-2014. The results suggest a good degree of generalization from trained scenarios to unseen ones. Experimental evaluations are presented in Table \ref{tab:wallflower-fmeasure}, highlighting the F-measure quantitative results of our approach compared against some state-of-the-art methods in supervised and unsupervised learning. Specifically, on the \textit{Camouflage} scenario, our approach presents a very high F-measure score of 0.97 using the \textit{copyMachine} sequence of the \textit{SHD} scenario in CDnet-2014. As the model learns to distinguish between object motions and the shadow effects of \textit{copyMachine}, it even extends to recognizing object motions of similar colors. Under \textit{Bootstrap}, where motions are present throughout the sequence, we employ the straightforward background subtraction function learned via the clear static-view-versus-motion features of \textit{highway} in \textit{BSL}, giving an F-score of 0.768.
Likewise, the model's capture of scene dynamics with \textit{office} of \textit{BSL}, \textit{backdoor} of \textit{SHD} and \textit{fountain02} of \textit{DBG} is extended towards respective views of similar features: \textit{ForegroundAperture} with clear motions against the background, \textit{TimeOfDay} with gradual illumination changes, and \textit{WavingTrees} with dynamic background motions, providing decently accurate results. On the other hand, the \textit{LightSwitch} scenario presents a big challenge, where lighting is abruptly changed. As there is no scenario with this effect in the CDnet-2014 dataset, we chose \textit{SHD} simply for its ability to distinguish objects clearly, but the F-measure result is quite poor. In comparison with existing methods whose aim is generalization, like the unsupervised approaches GMM -- Stauffer \& Grimson and SuBSENSE, and the CDnet-pretrained supervised approaches STAM and DeepBS, our proposed method yields very good results on \textit{Camouflage} and \textit{WavingTrees}, with even relatively better results on \textit{Bootstrap}, \textit{ForegroundAperture} and \textit{TimeOfDay}. While this obviously does not prove that our approach generalizes from training better than others in all respects, it does suggest that the proposed framework is able to generalize excellently to scenarios with dynamics similar to those learned, as supported by its relatively poor accuracy on \textit{LightSwitch}. \subsection{Computational Speed Comparison} The proposed framework was implemented on a CUDA-capable machine with an NVIDIA GTX 1070 Ti GPU or similar, along with the methods that require the CUDA runtime, i.e., TensorMoG, DeepBS, STAM, FgSegNet, and Cascade CNN. For unsupervised approaches, we conducted our speed tests on the configuration of an Intel Core i7 with 16 GB RAM. Our results are recorded quantitatively with execution performance in frames per second (FPS), and time (milliseconds) versus accuracy in Fig. \mbox{\ref{figure:05}}. At a speed of 129.4510 fps, it is apparent that CDN-MEDAL-net is much faster than the other supervised deep learning approaches, of which the fastest, FgSegNet\_S, runs at 23.1275 fps. By concatenating estimations of background scenes with raw signals for foreground extraction, our approach makes efficient use of hardware resources due to its completely lightweight architecture and the latent-space-limitation approach. In contrast, other DNN architectures are burdened with a large number of trainable parameters to achieve an accurate input-target mapping. Furthermore, the proposed scheme dominates mathematically rigorous unsupervised frameworks such as SuBSENSE, SWCD, and PAWCS in terms of both speed and accuracy, as their sequential processing paradigms incur significant execution penalties. Notably, the average speeds of the top three methods are dramatically disparate. With the objective of parallelizing the traditional imperative outline of rough statistical learning on GMM, TensorMoG reformulates a tensor-based framework that surpasses our duo architecture at 302.5261 fps. On the other hand, GMM -- Zivkovic's design focuses on optimizing its mixture components, thereby significantly trading off accuracy to attain the highest performance. Notwithstanding, our proposed framework gives the most balanced trade-off (top-left-most) in addressing the speed-and-accuracy dilemma.
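For transparency regarding how throughput numbers of this kind are typically produced, the sketch below shows a minimal timing harness (a hypothetical helper of our own, not the exact benchmark script) that averages the per-frame processing rate at the stated $320 \times 240$ resolution after a short warm-up:
\begin{verbatim}
import time
import numpy as np

def measure_fps(model_fn, n_frames=1000, shape=(240, 320, 3), warmup=50):
    """Average frames-per-second of `model_fn` on synthetic frames.
    The warm-up pass keeps one-off costs (e.g., GPU context
    creation) out of the timed loop."""
    frames = [np.random.randint(0, 256, shape, dtype=np.uint8)
              for _ in range(n_frames)]
    for frame in frames[:warmup]:   # warm-up, not timed
        model_fn(frame)
    start = time.perf_counter()
    for frame in frames:
        model_fn(frame)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed       # 1000 / fps gives ms per frame
\end{verbatim}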
Our model processes frames at exceptionally high speed while obtaining good accuracy scores, at over 90\% on more than half of CDnet's categories and at least 84\% on the remainder, thereby outperforming the other top-accuracy approaches in this trade-off. \section{Conclusion} \label{section:conclusion} This paper has proposed a novel, two-stage framework with a GMM-based CNN for background modeling, and a convolutional auto-encoder, MEDAL-net, to simulate input-background subtraction for foreground detection, which can be regarded as a search-space-limitation approach to compressing a DNN model while keeping its accuracy high. Our first and second contributions in this paper are a pixel-wise, light-weight, feed-forward CNN representing a multi-modal conditional probability density function of the temporal history of data, and a corresponding loss function that lets the CNN learn from virtually inexhaustible datasets to approximate the mixture-of-Gaussians density function. In this way, the proposed CDN-GM not only gains a better capability of adapting to contextual dynamics, with humanly interpretable statistical learning open to extension, but is also designed in tensor form to exploit modern parallel hardware. Secondly, we showed that incorporating such statistical features into MEDAL-net's motion-region extraction phase promises more efficient use of powerful hardware, with prominent speed performance and high accuracy, along with a decent generalization ability using a small-scale set of training labels, in a deep non-linear scheme of only a few thousand latent parameters. \bibliographystyle{IEEEtran}
\section{Introduction}\label{Sec:intro} Fractional differential equations (FDEs) are a generalization of the classical ordinary and partial differential equations, in which the order of differentiation is permitted to be any real (or even complex) number, not only a natural number. FDEs containing not just one but several fractional derivatives are intensively studied in the modelling of many physical processes \cite{Kilbas,Podlubny,Samko}. Many authors demonstrate two essential mathematical ways to use this idea: multi-term equations \cite{Ahmadova-Mahmudov,Bazhlekova E.,Hilfer-Luchko-Tomovski,Luchko and Gorenflo, Luchko} and multi-order systems \cite{A-H-F-M,ismail-arzu}. Multi-term FDEs have been studied due to their applications in modelling, and solved using various mathematical methods. Finding the solution to these equations is an interesting and challenging subject that has attracted many scientists over the last decades. Up to now, various analytical and computational techniques have been investigated to find the solution of multi-term FDEs, of which we mention a few as follows. Luchko and several collaborators \cite{Hilfer-Luchko-Tomovski,Luchko and Gorenflo,Luchko} used the method of operational calculus to solve multi-order FDEs with different types of fractional derivatives. In the realm of ordinary differential equations, Mahmudov and other collaborators \cite{Ahmadova-Mahmudov,Mahmudov-Huseynov-Aliev-Aliev} have derived analytical representations of solutions for special cases of fractional differential equations with multi-orders, namely the Langevin and Bagley-Torvik equations involving scalar coefficients and permutable matrices, by using the Laplace transform method and a fractional analogue of the variation of constants formula, respectively, while other authors \cite{Pak-Choi-Sin-Ri} have solved multi-term differential equations in the Riemann-Liouville sense with variable coefficients by applying a new method to construct analytical solutions. Several results have been obtained on solving multi-dimensional time-delay deterministic and stochastic systems with permutable matrices \cite{arzu-mahmudov,Huseynov and Mahmudov} in the classical and fractional senses. In \cite{diblik}, Diblik et al. have considered an inhomogeneous system of second-order linear differential equations with multiple different delays and pairwise permutable matrices, and represented a solution of the corresponding initial value problem by using matrix polynomials. Khusainov and other collaborators in \cite{Khusainov-permutable,Khusainov-Shuklin} have proposed exact analytical representations of linear autonomous time-delay systems with commutative matrices. Pospisil \cite{Pospisil} has introduced a representation of the solution of delayed differential equations under the condition that the linear parts are given by pairwise permutable matrices. In \cite{Medved-Pospisil}, Medved and Pospisil have tackled this strong condition (commutativity of the matrices) and derived a representation of solutions of functional differential equations with nonconstant coefficients and variable delays. Recently, Mahmudov \cite{mahmudov3} has introduced a fractional analogue of the delayed matrix cosine and sine in the commutative case, i.e., $AB=BA$, to solve sequential Riemann-Liouville type linear time-delay systems, whilst Liang et al. \cite{liang} have obtained an explicit solution of the differential equation with pure delay and a sequential Caputo type fractional derivative.
However, there are only a few papers involving non-permutable matrices, in which Mahmudov has recently studied fractional time-continuous \cite{mahmudov1} and discrete \cite{mahmudov2} systems with a constant delay using recursively defined matrix equations, as well as delayed linear difference equations \cite{Nmahmudov} by applying the $\mathscr{Z}$-transform technique. Meanwhile, Sobolev type evolution equations and their fractional-order analogues have attracted a great deal of attention from the applications point of view and have been studied by several authors \cite{Balachandran-1, Balachandran-2,Feckan-1, Wang-1,Wang-2} in recent decades. In \cite{Balachandran-1}, Balachandran and Dauer have derived sufficient conditions for controllability of partial functional differential systems of Sobolev type in a Banach space by using compact semigroups and Schauder's fixed point theorem. Moreover, Balachandran et al. \cite{Balachandran-2} have considered existence results for solutions of nonlinear impulsive integrodifferential equations of Sobolev type with nonlocal conditions via Krasnoselskii's fixed point technique. In terms of fractional differential equations, Wang et al. \cite{Wang-1} have investigated controllability results for Sobolev type fractional evolution equations in a separable Banach space by using the theory of propagation families and the contraction mapping principle. In addition, Feckan et al. \cite{Feckan-1} have presented controllability of fractional functional evolution systems of Sobolev type with the help of new characteristic solution operators and the well-known Schauder fixed point approach. Furthermore, Mahmudov \cite{Mahmudov-Sobolev} has considered approximate controllability results for a class of fractional evolution equations of Sobolev type by using a fixed point approach. In \cite{Wang-2}, Wang and Li have discussed the stability analysis of fractional evolution equations of Sobolev type in the Ulam-Hyers sense. In \cite{Chang-1}, Chang et al. have studied the asymptotic behaviour of resolvent operators of Sobolev type and their applications to the existence and uniqueness of mild solutions to fractional functional evolution equations in Banach spaces. Vijayakumar et al. \cite{Vijakumar-1} have presented approximate controllability results for Sobolev type time-delay differential systems of fractional order in Hilbert spaces. To the best of our knowledge, fractional evolution equations of Sobolev type with non-permutable operators and two independent fractional orders of differentiation $\alpha$ and $\beta$, assumed to lie in the intervals $(1,2]$ and $(0,1]$, respectively, are an untreated topic in the present literature.
Thus, motivated by the above research works, we consider the following Cauchy problem for a fractional evolution equation of Sobolev type with orders $1<\alpha\leq 2$ and $0<\beta \leq 1$ on $\mathbb{J}\coloneqq[0,T]$: \begin{align}\label{mtde} \begin{cases} \left( \prescript{C}{}{D^{\alpha}_{0+}}Ey\right) (t) -A_{0} \left( \prescript{C}{}{D^{\beta}_{0+}}y\right) (t)=B_{0}y(t)+g(t), \quad t>0,\\ Ey(0)=\eta , \quad Ey^{\prime}(0)=\tilde{\eta}, \end{cases} \end{align} where $\prescript{C}{}{D^{\alpha}_{0+}}$ and $\prescript{C}{}{D^{\beta}_{0+}}$ are Caputo fractional differential operators of orders $1<\alpha\leq2$ and $0<\beta\leq1$, respectively, with the lower limit zero, the operators $E: D(E)\subset X \to Y$, $A_{0}:D(A_{0})\subset X \to Y$ and $B_{0}:D(B_{0})\subset X \to Y$ are linear, where $X$ and $Y$ are Banach spaces, $y(\cdot)$ is an $X$-valued function on $\mathbb{J}$, i.e., $y(\cdot):\mathbb{J}\to X$, and $\eta, \tilde{\eta}\in Y$. In addition, $g(\cdot): \mathbb{J}\to Y$ is a continuous function. The domain $D(E)$ of $E$ becomes a Banach space with respect to the norm $\|y\|_{D(E)}=\|Ey\|_{Y}$, $y \in D(E)$. The main idea is that, under the hypotheses $(H_{1})$-$(H_{4})$, we transform the Sobolev type fractional multi-term evolution equation with linear operators \eqref{mtde} into the fractional-order evolution equation with multi-orders and linear bounded operators \eqref{mtde-1}. Secondly, we solve the fractional evolution equation with nonpermutable linear bounded operators by using the Laplace transform technique, which is used as a necessary tool for solving and analyzing fractional-order differential equations and systems in \cite{Ahmadova-Mahmudov},\cite{Huseynov and Mahmudov},\cite{kexue},\cite{Sabatier-Moze-Farges}. Then we propose exact analytical representations of a mild solution of \eqref{mtde-1} and \eqref{mtde}, respectively, with the help of a newly defined Mittag-Leffler function expressed via linear bounded operators, removing the exponential boundedness assumptions on the forcing term $g(\cdot)$ and on $\left( \prescript{C}{}{D}_{0+}^{\beta}x\right)(\cdot)$ for $\beta\in(0,1]$ (or $\left( \prescript{C}{}{D}_{0+}^{\alpha}x\right)(\cdot)$ for $\alpha\in(1,2]$) in both cases: with nonpermutable and with permutable linear operators $A,B\in\mathscr{B}(Y)$. This paper contains important improvements in the theory of Sobolev type fractional multi-term evolution equations and is outlined as follows. Section \ref{sec:2} is a preparatory section where we recall the main definitions and results from fractional calculus, special functions and fractional differential equations. In Section \ref{sec:3}, we establish a new Mittag-Leffler type function generated by linear bounded operators via a double infinite series and investigate some necessary properties of this function, which are an accurate tool for testing candidate solutions of fractional-order dynamical equations. We also show that $Q_{k,m}^{A,B}$ with nonpermutable linear operators $A,B\in \mathscr{B}(Y)$ is a generalization of the well-known Pascal rule for binomial coefficients. Moreover, we introduce sufficient conditions for the exponential boundedness of \eqref{mtde-1} to guarantee the existence of the Laplace integral transform of equation \eqref{mtde-1}. Then we solve the multi-order fractional evolution equations \eqref{mtde} and \eqref{mtde-1} with the help of the Laplace integral transform. Meanwhile, we tackle this strong condition and verify that these sufficient conditions can easily be omitted.
Section \ref{sec:4} deals with an analytical representation of a mild solution to Sobolev type evolution equations with commutative linear bounded operators. In addition, we propose exact solutions for multi-dimensional multi-term fractional dynamical systems with commutative and noncommutative matrices. In Section \ref{sec:concl} we discuss the main contributions of this paper and future research work. \section{Preliminary concept}\label{sec:2} We begin this section by briefly presenting some notations and definitions from fractional calculus and fractional differential equations \cite{Kilbas,Samko,Podlubny} which are used throughout the paper. Let $\mathbb{C}^{2}\left(\mathbb{J},X\right)\coloneqq\left\lbrace y\in \mathbb{C}\left(\mathbb{J},X\right) : y^{'},y^{''}\in\mathbb{C}\left(\mathbb{J},X\right) \right\rbrace $ denote the Banach space of functions $y(t)\in X$ for $t\in \mathbb{J}$, equipped with the norm $\|y\|_{\mathbb{C}^{2}(\mathbb{J},X)}=\sum\limits_{i=0}^{2}\sup\limits_{t\in \mathbb{J}}\|y^{(i)}(t)\|$. The space of all bounded linear operators from $X$ to $Y$ is denoted by $\mathscr{B}(X,Y)$, and $\mathscr{B}(Y,Y)$ is written as $\mathscr{B}(Y)$. \begin{definition}\label{RLI}\cite{Kilbas,Samko,Podlubny} The fractional integral of order $\alpha > 0$ for a function $g\in C\left( [0,\infty), \mathbb{R}\right)$ is defined by \begin{equation} (I^{\alpha}_{0^{+}}g)(t)=\frac{1}{\Gamma(\alpha)}\int\limits_0^t(t-s)^{\alpha-1}g(s)\,\mathrm{d}s, \quad t>0, \end{equation} where $\Gamma(\cdot)$ is the well-known Euler gamma function. \end{definition} \begin{definition}\label{RLD} \cite{Kilbas,Samko,Podlubny} The Riemann-Liouville fractional derivative of order $n-1<\alpha\leq n$, $n \in \mathbb{N}$, for a function $g\in C\left( [0,\infty), \mathbb{R}\right)$ is defined by \begin{equation} (\prescript{RL}{}D^{\alpha}_{0^{+}}g)(t)=\frac{1}{\Gamma(n-\alpha)}\left( \frac{d}{dt}\right) ^{n}\int\limits_0^t(t-s)^{n-\alpha-1}g(s)\,\mathrm{d}s, \quad t>0, \end{equation} where the function $g(\cdot)$ has absolutely continuous derivatives up to order $n$. \end{definition} The following theorem and its corollary, established by Huseynov et al. \cite{Leibniz}, concern a fractional analogue of the eminent Leibniz integral rule for general order $\alpha \in (n-1,n]$, $n \in \mathbb{N}$, in the Riemann-Liouville sense, which is a productive tool for testing particular solutions of inhomogeneous linear multi-order fractional differential equations with variable and constant coefficients. \begin{theorem}\label{thm-class} Let the function $K:J\times J\to\mathbb{R}$ be such that the following assumptions are fulfilled: (a) For every fixed $t\in J$, the function $\hat{K}(t,s)=\prescript{RL,t}{}{D^{\alpha-1}_{s+}}K(t,s)$ is measurable on $J$ and integrable on $J$ with respect to some $t^{*}\in J$; (b) The partial derivative $\prescript{RL,t}{}{D^{\alpha}_{s+}}K(t,s)$ exists for every interior point $(t, s) \in \hat{J} \times \hat{J}$; (c) There exists a non-negative integrable function $g$ such that $\left|\prescript{RL,t}{}{D^{\alpha}_{s+}}K(t,s)\right| \leq g(s)$ for every interior point $(t, s) \in \hat{J}\times \hat{J}$; (d) The derivative $\frac{d^{l-1}}{dt^{l-1}}\lim\limits_{s\to t-0}\prescript{RL,t}{}{D^{\alpha-l}_{s+}}K(t,s)$, $l=1,2,\ldots,n$, exists for every interior point $(t, s) \in \hat{J} \times \hat{J}$.
Then, the following relation holds true for the fractional derivative in the Riemann-Liouville sense under Lebesgue integration for any $t \in \hat{J}$: \begin{equation}\label{thm-RL} \prescript{RL}{}{D^{\alpha}_{t_{0}+}}\int\limits_{t_{0}}^{t}K(t,s)\mathrm{d}s=\sum_{l=1}^{n}\frac{d^{l-1}}{dt^{l-1}}\lim\limits_{s\to t-0}\prescript{RL,t}{}{D^{\alpha-l}_{s+}}K(t,s)+\int\limits_{t_{0}}^{t}\prescript{RL,t}{}{D^{\alpha}_{s+}}K(t,s)\mathrm{d}s. \end{equation} If $K(t,s)=f(t-s)g(s)$, $t_{0}=0$ and the assumptions of Theorem \ref{thm-class} are fulfilled, then the following equality holds true for the convolution operator in the Riemann-Liouville sense for any $n \in\mathbb{N}$: \begin{align}\label{Leibniz} \prescript{RL}{}{D^{\alpha}_{0+}}\int\limits_{0}^{t}f(t-s)g(s)\mathrm{d}s&=\sum_{l=1}^{n}\lim\limits_{s\to t-0} \prescript{RL,t}{}{D^{\alpha-l}_{s+}}f(t-s)\frac{d^{l-1}}{dt^{l-1}}\lim\limits_{s\to t-0}g(s)\nonumber\\&+\int\limits_{0}^{t}\prescript{RL,t}{}{D^{\alpha}_{s+}}f(t-s)g(s)\mathrm{d}s, \quad t >0, \end{align} where $\prescript{RL,t}{}{D^{\gamma}_{t_{0}+}}K(t,s)$ is the partial Riemann-Liouville fractional differentiation operator of order $\gamma > 0$ \cite{Kilbas} with respect to $t$ of a function $K(t,s)$ of two variables with lower terminal $t_{0}$, and $J=[t_{0},T]$, $\hat{J}=(t_{0},T)$. \end{theorem} In special cases, the Riemann-Liouville type differentiation under the integral sign holds for the convolution operator as follows \cite{Leibniz}: \begin{itemize} \item If $\alpha \in (0,1]$, then \begin{align*} \prescript{RL}{}{D^{\alpha}_{0+}}\int\limits_{0}^{t}f(t-s)g(s)\mathrm{d}s&=\lim\limits_{s\to t-0} \prescript{RL,t}{}{D^{\alpha-1}_{s+}}f(t-s)\lim\limits_{s\to t-0}g(s)\nonumber\\&+\int\limits_{0}^{t}\prescript{RL,t}{}{D^{\alpha}_{s+}}f(t-s)g(s)\mathrm{d}s, \quad t >0 ; \end{align*} \item If $\alpha \in (1,2]$, then \begin{align*} \prescript{RL}{}{D^{\alpha}_{0+}}\int\limits_{0}^{t}f(t-s)g(s)\mathrm{d}s&=\lim\limits_{s\to t-0} \prescript{RL,t}{}{D^{\alpha-1}_{s+}}f(t-s)\lim\limits_{s\to t-0}g(s)\nonumber\\&+\lim\limits_{s\to t-0} \prescript{RL,t}{}{D^{\alpha-2}_{s+}}f(t-s)\frac{d}{dt}\lim\limits_{s\to t-0}g(s)+\int\limits_{0}^{t}\prescript{RL,t}{}{D^{\alpha}_{s+}}f(t-s)g(s)\mathrm{d}s, \quad t >0. \end{align*} \end{itemize} \begin{definition}\label{CD}\cite{Kilbas,Podlubny} The Caputo fractional derivative of order $n-1<\alpha\leq n$, $n \in \mathbb{N}$, for a function $g\in C\left( [0,\infty), \mathbb{R}\right)$ is defined by \begin{equation} (\prescript{C}{}D^{\alpha}_{0^{+}}g)(t)=\frac{1}{\Gamma(n-\alpha)}\int\limits_0^t(t-s)^{n-\alpha-1}\left(\frac{d}{ds}\right)^{n}g(s)\,\mathrm{d}s, \quad t>0, \end{equation} where the function $g(\cdot)$ has absolutely continuous derivatives up to order $n$. \end{definition} \begin{definition}\label{RLC}\cite{Kilbas,Podlubny} The relationship between the Caputo and Riemann-Liouville fractional differential operators of order $n-1<\alpha\leq n$, $n \in \mathbb{N}$, for a function $g\in C\left( [0,\infty), \mathbb{R}\right)$ is given by \begin{equation}\label{relation} (\prescript{C}{}D^{\alpha}_{0^{+}}g)(t)= \prescript{RL}{}D^{\alpha}_{0^{+}}\left( g(t)-\sum_{k=0}^{n-1}\frac{t^{k}}{k!}g^{(k)}(0)\right), \quad t>0, \end{equation} where the function $g(\cdot)$ has absolutely continuous derivatives up to order $n$. \end{definition} \begin{remark} If $g(\cdot)$ is an abstract function with values in $X$, then the integrals which appear in Definitions \ref{RLI}, \ref{RLD}, \ref{CD} and \ref{RLC} are taken in Bochner's sense. \end{remark}
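As a quick illustration of Definition \ref{CD}, we recall a standard worked example (added here for the reader's convenience; it is not taken from the cited sources): for $g(t)=t$ and $\alpha=1/2$, so that $n=1$ and $g^{\prime}(s)=1$, \begin{equation*} (\prescript{C}{}D^{1/2}_{0^{+}}g)(t)=\frac{1}{\Gamma(1/2)}\int\limits_{0}^{t}(t-s)^{-1/2}\,\mathrm{d}s=\frac{2\sqrt{t}}{\sqrt{\pi}}=\frac{t^{1/2}}{\Gamma(3/2)}, \end{equation*} which agrees with the well-known formula $\prescript{C}{}D^{\alpha}_{0^{+}}t^{k}=\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\,t^{k-\alpha}$ for $k\in\mathbb{N}$ and $0<\alpha\leq 1$.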
The Laplace transform of the Caputo fractional differentiation operator \cite{Kilbas} is given by \begin{equation} \label{lapCaputo} \mathscr{L} \left\lbrace (\prescript{C}{}D^{\alpha}_{0^{+}}g)(t)\right\rbrace(s) =s^{\alpha}G(s)-\sum_{k=1}^{n}s^{\alpha-k}g^{(k-1)}(0), \quad n-1<\alpha\leq n, \quad n\in\mathbb{N}, \end{equation} where $G(s)=\mathscr{L} \left\lbrace g(t) \right\rbrace(s)$. In particular cases, the Laplace integral transform of the Caputo fractional derivative is: \begin{itemize}\label{remLap} \item If $\alpha \in (0,1]$, then \begin{equation*} \mathscr{L}\left\lbrace \left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right) (t)\right\rbrace (s)=s^{\alpha}X(s)-s^{\alpha-1}x(0) ; \end{equation*} \item If $\alpha \in (1,2]$, then \begin{equation*} \mathscr{L}\left\lbrace \left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right) (t)\right\rbrace (s)=s^{\alpha}X(s)-s^{\alpha-1}x(0)-s^{\alpha-2}x^{\prime}(0) , \end{equation*} \end{itemize} where $X(s)=\mathscr{L} \left\lbrace x(t) \right\rbrace(s)$. \begin{lemma} [\label{sumA}\cite{Yosida}] Suppose that $A$ is a linear bounded operator defined on the Banach space $X$ and assume that $\|A\| < 1$. Then $(I-A)^{-1}$ is a linear bounded operator on $X$ and \begin{equation}\label{operator} (I-A)^{-1}=\sum_{k=0}^{\infty}A^{k}. \end{equation} \end{lemma} The following well-known generalized Gronwall inequality, which plays an important role in the qualitative analysis of the solutions to fractional differential equations, is stated and proved in \cite{henry,ye} for $\beta>0$. In the particular case $\beta=1$, the following relations hold true: \begin{theorem}\label{thm-1} Suppose $a(t)$ is a nonnegative function locally integrable on $0 \leq t < T$ (some $T\leq +\infty$), $b(t)$ is a nonnegative, nondecreasing continuous function defined on $0\leq t<T$, $|b(t)|\leq M$ ($M$ is a positive constant), and suppose $u(t)$ is nonnegative and locally integrable on $0 \leq t < T$ with \begin{equation*} u(t)\leq a(t)+ b(t)\int\limits_{0}^{t}u(s)ds \end{equation*} on this interval; then \begin{equation*} u(t)\leq a(t)+ b(t)\int\limits_{0}^{t}\exp\left( b(t)(t-s)\right) a(s)ds, \quad 0 \leq t < T. \end{equation*} \end{theorem} \begin{corollary} Under the hypothesis of Theorem \ref{thm-1}, let $a(t)$ be a nondecreasing function on $[0,T)$. Then \begin{equation}\label{coroll} u(t)\leq a(t) \exp\left( b(t)t\right), \quad 0 \leq t < T. \end{equation} \end{corollary} The Mittag-Leffler function is a natural generalization of the exponential function, first proposed as a single-parameter function of one variable by using an infinite series \cite{ML}. Extensions to two or three parameters are well known and thoroughly studied in textbooks such as \cite{Gorenflo}. Extensions to two or several variables have been studied more recently \cite{ ismail-arzu,A-H-F-M,fernandez-kurt-ozarslan,saxena-kalla-saxena}. \begin{definition}[\cite{ML}] \label{Def:ML} The classical Mittag-Leffler function is defined by \begin{equation}\label{ML1} E_{\alpha}(t)= \sum_{k=0}^{\infty}\frac{t^{k}}{\Gamma(k \alpha +1)}, \quad \alpha>0, \quad t\in\mathbb{R}. \end{equation} The two-parameter Mittag-Leffler function \cite{Wiman} is given by \begin{equation}\label{ML-2} E_{\alpha,\beta}(t)= \sum_{k=0}^{\infty}\frac{t^{k}}{\Gamma(k \alpha +\beta)}, \quad \alpha>0, \quad \beta\in\mathbb{R}, \quad t\in\mathbb{R}.
\end{equation} The three-parameter Mittag-Leffler function \cite{Prabhakar} is determined by \begin{equation} E_{\alpha,\beta}^{\gamma}(t)= \sum_{k=0}^{\infty}\frac{(\gamma)_k}{\Gamma(k \alpha +\beta)}\frac{t^{k}}{k!}, \quad \alpha >0, \quad \beta,\gamma\in\mathbb{R}, \quad t\in\mathbb{R}, \end{equation} where $(\gamma)_k$ is the Pochhammer symbol denoting $\frac{\Gamma(\gamma+k)}{\Gamma(\gamma)}$. These series are convergent, locally uniformly in $t$, provided the condition $\alpha>0$ is satisfied. It is important to note that \[ E_{\alpha,\beta}^1(t)=E_{\alpha,\beta}(t),\quad E_{\alpha,1}(t)=E_{\alpha}(t),\quad E_1(t)=\exp(t). \] \end{definition} \begin{lemma}[\cite{Prabhakar}] \label{Lem:PrabhLap} The Laplace transform of the three-parameter Mittag-Leffler function is given by \begin{equation}\label{ad} \mathscr{L}\left\{t^{\beta-1}E_{\alpha,\beta}^{\gamma}(\lambda t^{\alpha})\right\}(s)=s^{-\beta}\left(1-\lambda s^{-\alpha}\right)^{-\gamma}, \end{equation} where $\alpha>0$, $\beta,\gamma,\lambda \in \mathbb{R}$ and $Re(s)>0$. \end{lemma} \begin{definition}\cite{fernandez-kurt-ozarslan} A bivariate Mittag-Leffler type function, which is a particular case of the multivariate Mittag-Leffler function \cite{Luchko and Gorenflo}, is defined by \begin{equation}\label{bivtype} E_{\alpha,\beta,\gamma}(\lambda_{1}t^{\alpha},\lambda_{2}t^{\beta})=\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}\frac{\lambda_{1}^k\lambda_{2}^mt^{k\alpha+m\beta}}{\Gamma(k\alpha+m\beta+\gamma)}, \quad \alpha,\beta>0, \quad \gamma\in \mathbb{R}. \end{equation} \end{definition} \section{A representation of a mild solution to \eqref{mtde} with non-permutable linear operators}\label{sec:3} In this section, we consider the Cauchy problem for the fractional evolution equation of Sobolev type in Banach spaces. Firstly, we introduce the following hypotheses on the linear operators $A_{0}$, $B_{0}$ and $E$: $(H_{1})$: $A_{0}$ is a closed operator; $(H_{2})$: $B_{0}$ is a bounded operator; $(H_{3})$: $D(E)\subset D(A_{0})$ and $E$ is bijective; $(H_{4})$: the linear operator $E^{-1}: Y \to D(E)\subset X$ is compact. It is important to stress that $(H_{4})$ implies that $E^{-1}$ is bounded. Furthermore, $(H_{4})$ also implies that $E$ is closed, since $E^{-1}$ is closed and injective, and hence its inverse is also closed. From the closed graph theorem, we acquire the boundedness of the linear operator $A\coloneqq A_{0}E^{-1}: Y \to Y$. Furthermore, $B\coloneqq B_{0}E^{-1}: Y \to Y$ is a linear bounded operator since $E^{-1}$ and $B_{0}$ are bounded. Obviously, the substitution $Ey(t)=x(t)$ is equivalent to $y(t)=E^{-1}x(t)$. The central idea is that, applying the substitution $y(t)=E^{-1}x(t)$ under the hypotheses $(H_{1})$-$(H_{4})$, we transform the Sobolev type fractional-order evolution system \eqref{mtde} into the following multi-term evolution system with linear bounded operators $A,B\in\mathscr{B}(Y)$: \begin{equation}\label{mtde-1} \begin{cases} \left(\prescript{C}{}{D^{\alpha}_{0+}}x\right) (t) -A\left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (t)=Bx(t)+g(t), \quad t>0,\\ x(0)=\eta , \quad x^{\prime}(0)=\tilde{\eta}, \end{cases} \end{equation} where $x(\cdot):\mathbb{J}\to Y$ and $\eta,\tilde{\eta}\in Y$.
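For orientation, we note a scalar special case (our own illustration, consistent with the Bagley-Torvik equation recalled in Section \ref{Sec:intro}): taking $Y=\mathbb{R}$ and treating $A$, $B$ as scalar coefficients, the choice $\alpha=2$, $\beta=3/2$ turns \eqref{mtde-1} into the inhomogeneous Bagley-Torvik equation \begin{equation*} x^{\prime\prime}(t)-A\left(\prescript{C}{}{D^{3/2}_{0+}}x\right)(t)=Bx(t)+g(t), \quad x(0)=\eta, \quad x^{\prime}(0)=\tilde{\eta}, \end{equation*} so the results below cover such classical multi-term equations as genuinely infinite-dimensional generalizations.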
This signifies that a mild solution of the Cauchy problem for the Sobolev type multi-term fractional evolution equation \eqref{mtde} is obtained by applying $E^{-1}$ to the solution of the initial value problem for the fractional evolution equation with multi-orders and linear bounded operators \eqref{mtde-1}. \begin{remark} Alternatively, we can modify the assumptions given above in a similar way: $(H'_1)$: $A_{0}$ is a bounded operator; $(H'_2)$: $B_{0}$ is a closed operator; $(H'_3)$: $D(E)\subset D(B_{0})$ and $E$ is bijective; $(H'_4)$: $E^{-1}: Y\to D(E)\subset X$ is compact. It follows from the closed graph theorem that $B\coloneqq B_{0}E^{-1}:Y \to Y$ is a linear bounded operator. Furthermore, $A\coloneqq A_{0}E^{-1}:Y\to Y$ is also a linear bounded operator since $A_{0}$ and $E^{-1}$ are bounded. In conclusion, under the assumptions $(H'_1)$-$(H'_4)$, the Sobolev type fractional multi-term evolution equation with initial conditions \eqref{mtde} is converted to the fractional evolution system with linear bounded operators \eqref{mtde-1} by using the same transformation $y(t)=E^{-1}x(t)$. \end{remark} To get an analytical representation of the mild solution of \eqref{mtde-1}, we first need to show the exponential boundedness of $x(\cdot)$ and of its Caputo derivatives $\left( \prescript{C}{}{D^{\alpha}_{0+}}x\right)(\cdot)$ and $\left( \prescript{C}{}{D^{\beta}_{0+}}x\right)(\cdot)$ for $1<\alpha\leq2$ and $0<\beta\leq 1$, respectively. To do this, we assume exponential boundedness of one of the given fractional derivatives and of the forcing term, with the aid of the following Theorem \ref{thm2}. \begin{theorem} \label{thm2} Assume that \eqref{mtde-1} has a unique continuous solution $x(t)$. If $g(t)$ is continuous and exponentially bounded and $\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (t)$ for $0<\beta\leq1$ is exponentially bounded on $\left[ 0, \infty\right)$, then $x(t)$ and its Caputo derivative $\left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right ) (t)$ for $1<\alpha\leq2$ are exponentially bounded on $\left[ 0, \infty\right)$ and, thus, their Laplace transforms exist. \end{theorem} \begin{proof} Since $g(t)$ and $\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (t)$ for $0<\beta\leq1$ are exponentially bounded, there exist positive constants $L,P,\delta$ and a sufficiently large $T$ such that $\|g(t)\| \leq L \exp(\delta t)$ and $ \|\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (t)\|\leq P \exp(\delta t)$ for any $t \geq T$. It is clear that the system \eqref{mtde-1} is equivalent to the following Volterra fractional integral equation of the second kind: \begin{align} \label{Volterra} x(t)&=\left( I-\frac{At^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\right)\eta+t\tilde{\eta}+\frac{A}{\Gamma(\alpha-\beta)}\int\limits_{0}^{t}(t-r)^{\alpha-\beta-1}x(r)dr\nonumber\\&+ \frac{1}{\Gamma(\alpha)}\int\limits_{0}^{t}(t-r)^{\alpha-1}[Bx(r)+g(r)]dr. \end{align} This means that every solution of \eqref{Volterra} is also a solution of \eqref{mtde-1} and vice versa. For $t \geq T$, \eqref{Volterra} can be expressed as \begin{align*} x(t)&=\left( I-\frac{At^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\right)\eta+t\tilde{\eta}+\frac{A}{\Gamma(\alpha-\beta)}\int\limits_{0}^{T}(t-r)^{\alpha-\beta-1}x(r)dr\nonumber\\&+ \frac{1}{\Gamma(\alpha)}\int\limits_{0}^{T}(t-r)^{\alpha-1}[Bx(r)+g(r)]dr+\frac{A}{\Gamma(\alpha-\beta)}\int\limits_{T}^{t}(t-r)^{\alpha-\beta-1}x(r)dr\nonumber\\&+ \frac{1}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}[Bx(r)+g(r)]dr.
\end{align*} In view of the hypotheses of Theorem \ref{thm2}, the solution $x(t)$ (with $x(0)=\eta$, $x^{\prime}(0)=\tilde{\eta}$) is unique and continuous on $\left[ 0, \infty\right)$; then $Ax(t)$ and $Bx(t)+g(t)$ are bounded on $[0, T]$, namely: \begin{equation*} \exists M >0 \quad \text{s.t.} \quad \|Ax(t)\| \leq M, \quad \forall t \in [0, T], \end{equation*} and \begin{equation*} \exists N >0 \quad \text{s.t.} \quad \|Bx(t)+g(t)\| \leq N, \quad \forall t \in [0, T]. \end{equation*} We have \begin{align*} \|x(t)\|&\leq\left( 1+\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\right) \|\eta\|+t\|\tilde{\eta}\|+\frac{M}{\Gamma(\alpha-\beta)}\int\limits_{0}^{T}(t-r)^{\alpha-\beta-1}dr\nonumber\\&+ \frac{N}{\Gamma(\alpha)}\int\limits_{0}^{T}(t-r)^{\alpha-1}dr+\frac{\|A\|}{\Gamma(\alpha-\beta)}\int\limits_{T}^{t}(t-r)^{\alpha-\beta-1}\|x(r)\|dr\nonumber\\&+ \frac{\|B\|}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\|x(r)\|dr+\frac{1}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\|g(r)\|dr. \end{align*} We multiply the last inequality by $\exp(-\delta t)$ and note that \begin{equation*} \exp(-\delta t) \leq \exp(-\delta r), \quad r \in [T,t] \quad \text{and} \quad \exp(-\delta t) \leq \exp(-\delta T),\quad \|g(t)\| \leq L\exp(\delta t), \quad t \geq T. \end{equation*} \allowdisplaybreaks Using the aforementioned inequalities, we attain \begin{align*} &\|x(t)\|\exp(-\delta t)\leq \|\eta\| \exp(-\delta t)+\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\|\eta\|\exp(-\delta t)\\&+t\|\tilde{\eta}\|\exp(-\delta t)+ \frac{M\exp(-\delta t)}{\Gamma(\alpha-\beta)}\int\limits_{0}^{T}(t-r)^{\alpha-\beta-1}dr\\ &+\frac{N\exp(-\delta t)}{\Gamma(\alpha)}\int\limits_{0}^{T}(t-r)^{\alpha-1}dr+ \frac{\|A\|\exp(-\delta t)}{\Gamma(\alpha-\beta)}\int\limits_{T}^{t}(t-r)^{\alpha-\beta-1}\|x(r)\|dr\\ &+\frac{\|B\|\exp(-\delta t)}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\|x(r)\|dr+\frac{\exp(-\delta t)}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\|g(r)\|dr\\ &\leq\|\eta\|\exp(-\delta T)+\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\|\eta\|\exp(-\delta T)+t\|\tilde{\eta}\|\exp(-\delta T)\\& +\frac{M\exp(-\delta T)}{\Gamma(\alpha-\beta+1)}(t^{\alpha-\beta}-(t-T)^{\alpha-\beta})+\frac{N\exp(-\delta T)}{\Gamma(\alpha+1)}(t^{\alpha}-(t-T)^{\alpha})\\ &+\frac{\|A\|}{\Gamma(\alpha-\beta)}\int\limits_{T}^{t}(t-r)^{\alpha-\beta-1}\|x(r)\|\exp(-\delta r)dr\\ &+\frac{\|B\|}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\|x(r)\|\exp(-\delta r)dr\\&+\frac{L}{\Gamma(\alpha)}\int\limits_{T}^{t}(t-r)^{\alpha-1}\exp(\delta(r-t))dr\\ &\leq\|\eta\| \exp(-\delta T)+\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\|\eta\|\exp(-\delta T)\\&+t\|\tilde{\eta}\|\exp(-\delta T) +\frac{M\exp(-\delta T)}{\Gamma(\alpha-\beta+1)}T^{\alpha-\beta}+\frac{N\exp(-\delta T)}{\Gamma(\alpha+1)}T^{\alpha}\\&+\int\limits_{0}^{t}\left( \frac{\|A\|(t-r)^{\alpha-\beta-1}}{\Gamma(\alpha-\beta)}+\frac{\|B\|(t-r)^{\alpha-1}}{\Gamma(\alpha)}\right) \|x(r)\|\exp(-\delta r)dr\\ &+\frac{L}{\Gamma(\alpha)}\int\limits_{0}^{t}(t-r)^{\alpha-1}\exp(-\delta(t-r))dr\\ &\leq\|\eta\| \exp(-\delta T)+\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\|\eta\|\exp(-\delta T)\\&+t\|\tilde{\eta}\|\exp(-\delta T) +\frac{M\exp(-\delta T)}{\Gamma(\alpha-\beta+1)}T^{\alpha-\beta}+\frac{N\exp(-\delta T)}{\Gamma(\alpha+1)}T^{\alpha}\\&+\left( \frac{\|A\|t^{\alpha-\beta-1}}{\Gamma(\alpha-\beta)}+\frac{\|B\|t^{\alpha-1}}{\Gamma(\alpha)}\right)\int\limits_{0}^{t} \|x(r)\|\exp(-\delta r)dr+\frac{L}{\delta^{\alpha}}, \quad t \geq T.
\end{align*} Denote \begin{equation*} \begin{cases*} a(t)=\frac{\|A\|t^{\alpha-\beta}}{\Gamma(\alpha-\beta+1)}\|\eta\|\exp(-\delta T)+t\|\tilde{\eta}\|\exp(-\delta T)+\|\eta\| \exp(-\delta T)\\ \hspace{+0.65cm}+\frac{M\exp(-\delta T)}{\Gamma(\alpha-\beta+1)}T^{\alpha-\beta}+\frac{N\exp(-\delta T)}{\Gamma(\alpha+1)}T^{\alpha}+\frac{L}{\delta^{\alpha}} ,\\ b(t)= \frac{\|A\|t^{\alpha-\beta-1}}{\Gamma(\alpha-\beta)}+\frac{\|B\|t^{\alpha-1}}{\Gamma(\alpha)}, \\ v(t)= \|x(t)\| \exp(-\delta t). \end{cases*} \end{equation*} Thus, we attain \begin{equation} v(t) \leq a(t)+b(t) \int\limits_{0}^{t}v(s)ds, \quad t \geq T. \end{equation} According to Gronwall's inequality \eqref{coroll}, we have \begin{equation} \label{rt} v(t)\leq a(t) \exp(tb(t))\leq\exp(a(t)+tb(t)) . \end{equation} Then, it follows from \eqref{rt} that \begin{align*} \|x(t)\|\leq\exp(a(t)+tb(t)+\delta t), \quad t \geq T. \end{align*} Since $g(t)$ and $\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (t)$ for $\beta\in(0,1]$ are exponentially bounded on $[0, \infty)$, from equation \eqref{mtde-1} we acquire \begin{align*} \| \left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right) (t)\| &\leq \|A\|\|\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (t)\|+\|B\| \|x(t)\|+ \|g(t)\| \\ &\leq\|A\| P \exp(\delta t) + \|B\|\exp(a(t)+tb(t)+\delta t) + L \exp(\delta t) \\ &\leq \left(\|A\| P+ \|B\|+L\right) \exp(a(t)+tb(t)+\delta t), \quad t \geq T. \end{align*} In other words, $\left(\prescript{C}{}D^{\alpha}_{0^{+}}x \right)(t)$ is also exponentially bounded, so the Laplace integral transforms of $x(t)$ and of its Caputo derivatives $\left(\prescript{C}{}D^{\alpha}_{0^{+}}x \right)(t)$, $\left(\prescript{C}{}D^{\beta}_{0^{+}}x \right)(t)$ exist for $\alpha \in (1,2]$ and $\beta \in (0,1]$, respectively. The proof is complete. \end{proof} Alternatively, we can also use the following version of Theorem \ref{thm2} for the exponential boundedness of $x(\cdot)$ and of its Caputo derivatives $\left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right) (\cdot)$, $\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right) (\cdot)$ of orders $1<\alpha\leq2$ and $0<\beta\leq1$, respectively, on $\left[ 0, \infty\right)$. \begin{theorem} \label{thm3} Assume that \eqref{mtde-1} has a unique continuous solution $x(t)$. If $g(t)$ is continuous and exponentially bounded and $\left( \prescript{C}{}D^{\alpha}_{0^{+}}x\right) (t)$ for $1<\alpha\leq2$ is exponentially bounded on $\left[ 0, \infty\right)$, then $x(t)$ and its Caputo derivative $\left( \prescript{C}{}D^{\beta}_{0^{+}}x\right ) (t)$ for $0<\beta\leq1$ are exponentially bounded on $\left[ 0, \infty\right)$ and, thus, their Laplace transforms exist. \end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{thm2}, so we omit it here. \end{proof} \begin{definition} We define a new Mittag-Leffler type function $ \mathscr{E}_{\alpha,\beta,\gamma}^{A,B}(\cdot) :\mathbb{R}\to \mathscr{B}(Y)$ generated by nonpermutable linear bounded operators $A,B\in\mathscr{B}(Y)$ as follows: \begin{equation} \mathscr{E}_{\alpha,\beta,\gamma}^{A,B}(t)\coloneqq\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{t^{k\alpha+m\beta}}{\Gamma(k\alpha+m\beta+\gamma)}, \quad \alpha,\beta>0, \quad \gamma \in \mathbb{R}, \end{equation} where $Q_{k,m}^{A,B}\in\mathscr{B}(Y)$, $k,m\in\mathbb{N}_{0}$, is given by \begin{equation}\label{important} Q_{k,m}^{A,B}\coloneqq\sum_{l=0}^{k} A^{k-l}BQ_{l,m-1}^{A,B}, \quad k,m\in\mathbb{N}, \qquad Q_{k,0}^{A,B}\coloneqq A^{k}, \quad k\in\mathbb{N}_{0}, \qquad Q_{0,m}^{A,B}\coloneqq B^{m}, \quad m\in\mathbb{N}_{0}.
The linear bounded operators $Q_{k,m}^{A,B}$ can be represented explicitly as in Table~\ref{tab:1}. \begin{table}[h!] \caption{Explicit representation of $Q_{k,m}^{A,B}$ for $r,s \in \mathbb{N}_{0}$ } \label{tab:1} \centering \begin{tabular}{ c c c c c c } \hline $Q_{k,m}^{A,B}$ & $k=0$ & $k=1$ & $k=2$ & \ldots & $k=r$ \\[2ex] \hline $m=0$ & $I$ & $A$ & $A^{2}$ &\ldots& $A^{r}$ \\[2ex]\hline $m=1$ & $B$ & $AB+BA$ & $A^{2}B+ABA+BA^{2}$& \ldots&$A^{r}B+\ldots+BA^{r}$\\[2ex]\hline $m=2$ & $B^{2}$ & $AB^{2}+BAB+B^{2}A$ & $\!\begin{aligned} &A^{2}B^{2}+ABAB+AB^{2}A\\ &+BA^{2}B+BABA+B^{2}A^{2} \end{aligned}$ &\ldots& $A^{r}B^{2}+\ldots+B^{2}A^{r}$\\[2ex]\hline \ldots & \ldots & \ldots & \ldots &\ldots &\ldots\\[2ex]\hline $m=s$ &$B^{s}$ &$AB^{s}+\ldots+B^{s}A$ & $A^{2}B^{s}+\ldots+B^{s}A^{2}$ &\ldots& $A^{r}B^{s}+\ldots+B^{s}A^{r}$\\[2ex]\hline \end{tabular} \end{table} From the above table it can easily be seen that, in the case of commutativity $AB = BA$, we have $Q_{k,m}^{A,B}=\binom{k+m}{m}A^{k}B^{m}$, $k,m\in\mathbb{N}_{0}$. \begin{theorem} A linear operator $Q_{k,m}^{A,B}\in\mathscr{B}(Y)$ for $k,m\in\mathbb{N}_{0}$ has the following properties: $(i)$ $Q_{k,m}^{A,B}$, $k,m\in\mathbb{N}$, generalizes the classical Pascal rule to linear operators $A,B\in\mathscr{B}(Y)$ as follows: \begin{equation}\label{gen-Pascal} Q_{k,m}^{A,B}=AQ_{k-1,m}^{A,B}+BQ_{k,m-1}^{A,B}, \quad k,m \in \mathbb{N}; \end{equation} $(ii)$ If $AB=BA$, then we have \begin{equation}\label{commutative} Q_{k,m}^{A,B}=\binom{k+m}{m}A^{k}B^{m}, \quad k,m\in\mathbb{N}_{0}. \end{equation} \end{theorem} \begin{proof} $(i)$ We prove \eqref{gen-Pascal} by the principle of mathematical induction on $k \in \mathbb{N}$. The relation \eqref{gen-Pascal} is true for $k=1$: with the help of \eqref{important} we obtain \begin{align*} Q_{1,m}^{A,B}=\sum_{l=0}^{1}A^{1-l}BQ_{l,m-1}^{A,B}=ABQ_{0,m-1}^{A,B}+BQ_{1,m-1}^{A,B}=AQ_{0,m}^{A,B}+BQ_{1,m-1}^{A,B}, \end{align*} since $Q_{0,m}^{A,B}=BQ_{0,m-1}^{A,B}$. Suppose that the formula \eqref{gen-Pascal} is true for $(k-1)\in\mathbb{N}$. Then, by applying the definition \eqref{important}, we prove the statement \eqref{gen-Pascal} for $k\in\mathbb{N}$ as below: \begin{align*} &Q_{k,m}^{A,B}=\sum_{l=0}^{k}A^{k-l}BQ_{l,m-1}^{A,B}=\sum_{l=0}^{k-1}A^{k-l}BQ_{l,m-1}^{A,B}+BQ_{k,m-1}^{A,B}\\&=A\sum_{l=0}^{k-1}A^{k-l-1}BQ_{l,m-1}^{A,B}+BQ_{k,m-1}^{A,B}=AQ_{k-1,m}^{A,B}+BQ_{k,m-1}^{A,B}. \end{align*} To show $(ii)$ we use induction with respect to $m\in\mathbb{N}_{0}$ via the definition \eqref{important} of $Q_{k,m}^{A,B}$. Obviously, for $m=0,1$, we have \begin{equation*} Q_{k,0}^{A,B}\coloneqq A^{k}, \quad Q_{k,1}^{A,B}=\sum_{l=0}^{k}A^{k-l}BQ_{l,0}^{A,B}=\sum_{l=0}^{k}A^{k-l}BA^{l}=(k+1)A^{k}B=\binom{k+1}{1}A^{k}B. \end{equation*} Suppose that it is true for $m=n\in\mathbb{N}$: \begin{equation*} Q_{k,n}^{A,B}=\binom{k+n}{n}A^{k}B^{n}. \end{equation*} Let us prove it for $m=n+1$: \begin{align*} Q_{k,n+1}^{A,B}&=\sum_{l=0}^{k}A^{k-l}BQ_{l,n}^{A,B}=\sum_{l=0}^{k}A^{k-l}B\binom{l+n}{n}A^{l}B^{n}\\&=A^{k}B^{n+1}\sum_{l=0}^{k}\binom{l+n}{n}=\binom{k+n+1}{n+1}A^{k}B^{n+1}, \end{align*} where the last equality uses the hockey-stick identity $\sum_{l=0}^{k}\binom{l+n}{n}=\binom{k+n+1}{n+1}$. The proof is complete. \end{proof}
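As a quick numerical sanity check of properties $(i)$ and $(ii)$, the sketch below reuses \texttt{make\_Q} from the previous listing; the random matrices are illustrative stand-ins for the bounded operators, and the commuting pair is manufactured by taking $B$ to be a polynomial in $A$.
\begin{verbatim}
import numpy as np
from math import comb

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))      # generic pair: AB != BA
Q = make_Q(A, B, K=6, M=6)

# (i) generalized Pascal rule Q_{k,m} = A Q_{k-1,m} + B Q_{k,m-1}
for k in range(1, 7):
    for m in range(1, 7):
        assert np.allclose(Q[k][m], A @ Q[k - 1][m] + B @ Q[k][m - 1])

# (ii) commutative case: B a polynomial in A, so AB = BA and
#      Q_{k,m} = C(k + m, m) A^k B^m
Bc = A @ A + 2.0 * np.eye(3)
Qc = make_Q(A, Bc, K=6, M=6)
for k in range(7):
    for m in range(7):
        ref = comb(k + m, m) \
            * np.linalg.matrix_power(A, k) @ np.linalg.matrix_power(Bc, m)
        assert np.allclose(Qc[k][m], ref)
\end{verbatim}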
According to the above theorem, for permutable linear operators $A,B\in\mathscr{B}(Y)$ the operator $Q_{k,m}^{A,B}$, $k,m\in\mathbb{N}$, satisfies the following Pascal rule: \begin{align} \binom{k+m}{m}A^{k}B^{m}=A\binom{k+m-1}{m}A^{k-1}B^{m}+B\binom{k+m-1}{m-1}A^{k}B^{m-1}, \quad k,m\in\mathbb{N}. \end{align} By using the property \eqref{commutative} of $Q_{k,m}^{A,B}$, we define the following bivariate Mittag-Leffler function via permutable linear bounded operators, which is similar to \eqref{bivtype}. \begin{definition} We define a Mittag-Leffler function $ E_{\alpha,\beta,\gamma}(A(\cdot)^{\alpha},B(\cdot)^{\beta}) :\mathbb{R}\to Y$ generated by permutable linear bounded operators $A,B\in\mathscr{B}(Y)$ as follows: \begin{equation}\label{per} E_{\alpha,\beta,\gamma}(At^{\alpha},Bt^{\beta})\coloneqq\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{t^{k\alpha+m\beta}}{\Gamma(k\alpha+m\beta+\gamma)}, \quad \alpha,\beta>0, \quad \gamma \in \mathbb{R}. \end{equation} \end{definition} In a special case, the bivariate Mittag-Leffler function \eqref{per} via commutative linear bounded operators reduces to the product of classical exponential functions, as follows. \begin{lemma} If $\alpha=\beta=\gamma=1$, then we get the double exponential function: \begin{equation*} E_{1,1,1}(At,Bt)=\exp(At)\exp(Bt)=\exp((A+B)t), \quad t \in \mathbb{R}. \end{equation*} \end{lemma} \begin{proof} Applying the formula \eqref{per}, we obtain \begin{align*} E_{1,1,1}(At,Bt)&=\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{t^{k+m}}{(k+m)!}\\ &=\sum_{k=0}^{\infty}A^{k}\frac{t^{k}}{k!}\sum_{m=0}^{\infty}B^{m}\frac{t^{m}}{m!}=\exp(At)\exp(Bt)=\exp((A+B)t),\quad t \in \mathbb{R}. \end{align*} \end{proof} The following lemma plays a crucial role in solving the given Cauchy problem \eqref{mtde-1} with linear bounded operators. In the general case, it holds true whenever $\alpha>0$, $\alpha>\beta$, $\gamma \in \mathbb{R}$. \begin{lemma}\label{Q^A,B} For $A,B\in\mathscr{B}(Y)$ satisfying $AB\neq BA$, we have: \begin{align}\label{eq1} &\mathscr{L}^{-1}\left\lbrace \frac{s^{\gamma}}{s^{(m+1)\beta}} \left[ (s^{\alpha-\beta}I-A)^{-1}B\right]^{m} (s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\&=\sum_{k=0}^{\infty}\frac{Q_{k,m}^{A,B}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\gamma)}t^{k(\alpha-\beta)+m\alpha+\alpha-\gamma-1}, \quad m\in\mathbb{N}_{0}\coloneqq\mathbb{N}\cup \left\lbrace 0\right\rbrace. \end{align} \end{lemma} \begin{proof} We use mathematical induction with respect to $m\in \mathbb{N}_{0}$. According to the relation \eqref{ad}, \eqref{eq1} is true for $m=0$, which establishes the basis for induction: \begin{align}\label{1-relation} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-\beta}(s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)=t^{\alpha-\gamma-1}E_{\alpha-\beta,\alpha-\gamma}^{1}(At^{\alpha-\beta})\nonumber\\&=t^{\alpha-\gamma-1}E_{\alpha-\beta,\alpha-\gamma}(At^{\alpha-\beta})=\sum_{k=0}^{\infty}A^{k}\frac{t^{k(\alpha-\beta)+\alpha-\gamma-1}}{\Gamma(k(\alpha-\beta)+\alpha-\gamma)}\nonumber\\&=\sum_{k=0}^{\infty}Q_{k,0}^{A,B}\frac{t^{k(\alpha-\beta)+\alpha-\gamma-1}}{\Gamma(k(\alpha-\beta)+\alpha-\gamma)}, \quad \text{where} \quad Q_{k,0}^{A,B}\coloneqq A^{k}, \quad k\in \mathbb{N}_{0}.
\end{align} For $m=1$, we use the convolution property of the Laplace integral transform and the formula \eqref{1-relation}: \allowdisplaybreaks \begin{align}\label{relation-2} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-2\beta}(s^{\alpha-\beta}I-A)^{-1}B(s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\=&\mathscr{L}^{-1}\left\lbrace s^{-\beta}(s^{\alpha-\beta}I-A)^{-1}B\right\rbrace (t)\ast\mathscr{L}^{-1}\left\lbrace s^{\gamma-\beta}(s^{\alpha-\beta}I-A)^{-1}\right\rbrace(t)\nonumber\\=&t^{\alpha-1}E_{\alpha-\beta,\alpha}(At^{\alpha-\beta})B\ast t^{\alpha-\gamma-1}E_{\alpha-\beta,\alpha-\gamma}(At^{\alpha-\beta})\nonumber\\=& \int\limits_{0}^{t}(t-s)^{\alpha-1}E_{\alpha-\beta,\alpha}(A(t-s)^{\alpha-\beta})B s^{\alpha-\gamma-1}E_{\alpha-\beta,\alpha-\gamma}(As^{\alpha-\beta})\mathrm{d}s. \end{align} Then, interchanging the order of integration and summation in \eqref{relation-2}, which is permissible due to the uniform convergence of the series \eqref{ML-2}, we obtain: \allowdisplaybreaks \begin{align}\label{relation-3} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-2\beta}(s^{\alpha-\beta}I-A)^{-1}B(s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BA^{l}}{\Gamma(k(\alpha-\beta)+\alpha)\Gamma(l(\alpha-\beta)+\alpha-\gamma)}\int\limits_{0}^{t}(t-s)^{k(\alpha-\beta)+\alpha-1}s^{l(\alpha-\beta)+\alpha-\gamma-1}\mathrm{d}s\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BA^{l}}{\Gamma(k(\alpha-\beta)+\alpha)\Gamma(l(\alpha-\beta)+\alpha-\gamma)}t^{(k+l)(\alpha-\beta)+2\alpha-\gamma-1}\mathcal{B}(k(\alpha-\beta)+\alpha,l(\alpha-\beta)+\alpha-\gamma)\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BA^{l}}{\Gamma((k+l)(\alpha-\beta)+2\alpha-\gamma)}t^{(k+l)(\alpha-\beta)+2\alpha-\gamma-1}, \end{align} where $\mathcal{B}(\cdot,\cdot)$ is the well-known beta function. Applying the Cauchy product formula to the double infinite series in \eqref{relation-3}, we get: \allowdisplaybreaks \begin{align}\label{relation-4} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-2\beta}(s^{\alpha-\beta}I-A)^{-1}B(s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{k}\frac{A^{k-l}BA^{l}}{\Gamma(k(\alpha-\beta)+2\alpha-\gamma)}t^{k(\alpha-\beta)+2\alpha-\gamma-1}\nonumber\\ =&\sum_{k=0}^{\infty}\sum_{l=0}^{k}\frac{A^{k-l}BQ_{l,0}^{A,B}}{\Gamma(k(\alpha-\beta)+2\alpha-\gamma)}t^{k(\alpha-\beta)+2\alpha-\gamma-1}\nonumber\\=&\sum_{k=0}^{\infty}\frac{Q_{k,1}^{A,B}}{\Gamma(k(\alpha-\beta)+\alpha+\alpha-\gamma)}t^{k(\alpha-\beta)+\alpha+\alpha-\gamma-1}, \quad\text{where} \quad Q_{k,1}^{A,B}\coloneqq\sum_{l=0}^{k} A^{k-l}BQ_{l,0}^{A,B},\quad k\in \mathbb{N}_{0}. \end{align} \allowdisplaybreaks To verify the induction step, we assume that \eqref{eq1} holds true for $m=n$, where $n\in\mathbb{N}_{0}$: \begin{align}\label{relation-5} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-(n+1)\beta}\left[ (s^{\alpha-\beta}I-A)^{-1}B\right]^{n} (s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{k}\frac{A^{k-l}BQ_{l,n-1}^{A,B}}{\Gamma(k(\alpha-\beta)+(n+1)\alpha-\gamma)}t^{k(\alpha-\beta)+(n+1)\alpha-\gamma-1}\nonumber\\=&\sum_{k=0}^{\infty}\frac{Q_{k,n}^{A,B}}{\Gamma(k(\alpha-\beta)+n\alpha+\alpha-\gamma)}t^{k(\alpha-\beta)+n\alpha+\alpha-\gamma-1}, \quad\text{where} \quad Q_{k,n}^{A,B}\coloneqq\sum_{l=0}^{k} A^{k-l}BQ_{l,n-1}^{A,B},\quad k\in \mathbb{N}_{0}.
\end{align} Then, for $m=n+1$, it follows that: \allowdisplaybreaks \begin{align}\label{relation-6} &\mathscr{L}^{-1}\left\lbrace s^{\gamma-(n+2)\beta}\left[ (s^{\alpha-\beta}I-A)^{-1}B\right]^{n+1} (s^{\alpha-\beta}I-A)^{-1}\right\rbrace (t)\nonumber\\ =&\mathscr{L}^{-1}\left\lbrace s^{-\beta}(s^{\alpha-\beta}I-A)^{-1}B\right\rbrace (t)\ast\mathscr{L}^{-1}\left\lbrace s^{\gamma-(n+1)\beta}\left[ (s^{\alpha-\beta}I-A)^{-1}B\right] ^{n}(s^{\alpha-\beta}I-A)^{-1}\right\rbrace(t)\nonumber\\=&t^{\alpha-1}E_{\alpha-\beta,\alpha}(At^{\alpha-\beta})B\ast \sum_{l=0}^{\infty}\frac{Q_{l,n}^{A,B}}{\Gamma(l(\alpha-\beta)+(n+1)\alpha-\gamma)}t^{l(\alpha-\beta)+(n+1)\alpha-\gamma-1}\nonumber\\=& \int\limits_{0}^{t}(t-s)^{\alpha-1}E_{\alpha-\beta,\alpha}(A(t-s)^{\alpha-\beta})B \sum_{l=0}^{\infty}\frac{Q_{l,n}^{A,B}}{\Gamma(l(\alpha-\beta)+(n+1)\alpha-\gamma)}s^{l(\alpha-\beta)+(n+1)\alpha-\gamma-1}\mathrm{d}s\nonumber\\ =&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BQ_{l,n}^{A,B}}{\Gamma(k(\alpha-\beta)+\alpha)\Gamma(l(\alpha-\beta)+(n+1)\alpha-\gamma)}\int\limits_{0}^{t}(t-s)^{k(\alpha-\beta)+\alpha-1}s^{l(\alpha-\beta)+(n+1)\alpha-\gamma-1}\mathrm{d}s\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BQ_{l,n}^{A,B}}{\Gamma(k(\alpha-\beta)+\alpha)\Gamma(l(\alpha-\beta)+(n+1)\alpha-\gamma)}t^{(k+l)(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma-1}\nonumber\\\times&\mathcal{B}(k(\alpha-\beta)+\alpha,l(\alpha-\beta)+(n+1)\alpha-\gamma)\nonumber\\=&\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{A^{k}BQ_{l,n}^{A,B}}{\Gamma((k+l)(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma)}t^{(k+l)(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma-1}\nonumber\\ =&\sum_{k=0}^{\infty}\sum_{l=0}^{k}\frac{A^{k-l}BQ_{l,n}^{A,B}}{\Gamma(k(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma)}t^{k(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma-1}\nonumber\\=&\sum_{k=0}^{\infty}\frac{Q_{k,n+1}^{A,B}}{\Gamma(k(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma)}t^{k(\alpha-\beta)+(n+1)\alpha+\alpha-\gamma-1}, \quad\text{where} \quad Q_{k,n+1}^{A,B}\coloneqq\sum_{l=0}^{k} A^{k-l}BQ_{l,n}^{A,B},\quad k\in \mathbb{N}_{0}. \end{align} Thus, \eqref{relation-6} holds true whenever \eqref{relation-5} is true, and by the principle of mathematical induction we conclude that the formula \eqref{eq1} holds true for all $m \in \mathbb{N}_{0}$. \end{proof} \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with non-zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA \neq 0$. Assume that $g(\cdot): \mathbb{J} \to Y$ and $\left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (t)$, where $0<\beta\leq 1$, are exponentially bounded. A mild solution $x(\cdot)\in \mathbb{C}^{2}(\mathbb{J},Y)$ of the Cauchy problem \eqref{mtde-1} can be represented as \allowdisplaybreaks \begin{align} x(t)&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\&=\left( I+t^{\alpha}\mathscr{E}_{\alpha-\beta,\alpha,\alpha+1}^{A,B}(t)B\right) \eta+t\mathscr{E}_{\alpha-\beta,\alpha,2}^{A,B}(t)\hat{\eta}+\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha}^{A,B}(t-s)g(s)\mathrm{d}s, \quad t>0, \end{align} where $I \in \mathscr{B}(Y)$ is the identity operator.
\end{theorem} \begin{proof} We recall that the existence of the Laplace transforms of $x(\cdot)$ and of its Caputo derivatives $\prescript{C}{}{D^{\alpha}_{0^{+}}x(\cdot)}$ and $\prescript{C}{}{D^{\beta}_{0^{+}}x(\cdot)}$ for $ 1<\alpha\leq 2$ and $0<\beta\leq 1$, respectively, is guaranteed by Theorem \ref{thm2}. Thus, to find the mild solution $x(t)$ of \eqref{mtde-1} satisfying the initial conditions $x(0)=\eta$, $x'(0)=\hat{\eta}$, we can use the Laplace integral transform. By assuming $T=\infty$, taking the Laplace transform on both sides of equation \eqref{mtde-1} and using the facts that \begin{align*} &\mathscr{L}\left\lbrace \prescript{C}{}{D^{\alpha}_{0^{+}}x(t)} \right\rbrace(s)=s^{\alpha}X(s)-s^{\alpha-1}\eta-s^{\alpha-2}\hat{\eta}, \\ &\mathscr{L}\left\lbrace \prescript{C}{}{D^{\beta}_{0^{+}}x(t)} \right\rbrace(s)=s^{\beta}X(s)-s^{\beta-1}\eta, \end{align*} we obtain \begin{align*} \left( s^{\alpha} I-As^{\beta}-B\right) X(s)=s^{\alpha-1}\eta +s^{\alpha-2}\hat{\eta}-s^{\beta-1}A \eta +G(s), \end{align*} where $X(s)$ and $G(s)$ represent the Laplace integral transforms of $x(t)$ and $g(t)$, respectively. Thus, after solving the above equation with respect to $X(s)$, we get \begin{align*} X(s)&=s^{\alpha-1}\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}\eta+s^{\alpha-2} \left( s^{\alpha}I-As^{\beta}-B\right)^{-1}\hat{\eta}\\ &-s^{\beta-1}\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}A \eta +\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}G(s)\\ &=s^{-1}\eta+s^{-1}\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}B\eta+s^{\alpha-2} \left( s^{\alpha}I-As^{\beta}-B\right)^{-1}\hat{\eta}\\&+\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}G(s). \end{align*} On the other hand, in accordance with the relation \eqref{operator}, for sufficiently large $s$ we have \begin{equation*} \|(s^{\alpha-\beta}I-A)^{-1}Bs^{-\beta}\|< 1.
\end{equation*} Thus, for nonpermutable linear operators $A,B\in\mathscr{B}(Y)$ and sufficiently large $s$, we have \begin{align*} \left( s^{\alpha}I-As^{\beta}-B\right)^{-1}&=\left(s^{\beta}\left[ s^{\alpha-\beta}I-A-Bs^{-\beta}\right] \right)^{-1}\\ &=\left(s^{\beta}(s^{\alpha-\beta}I-A)\left[ I-(s^{\alpha-\beta}I-A)^{-1}Bs^{-\beta}\right] \right) ^{-1}\\ &=\left(s^{\beta}\left[ I-\left( s^{\alpha-\beta}I-A\right) ^{-1}Bs^{-\beta}\right]\right)^{-1}\left(s^{\alpha-\beta}I-A \right)^{-1} \\ &=\left[I-(s^{\alpha-\beta}I-A)^{-1}Bs^{-\beta} \right]^{-1}s^{-\beta}\left(s^{\alpha-\beta}I-A \right)^{-1} \\ &=\sum_{m=0}^{\infty}\frac{1}{s^{\beta m}}\left[ \left(s^{\alpha-\beta}I-A \right)^{-1}B \right]^{m}s^{-\beta}\left(s^{\alpha-\beta}I-A \right)^{-1}\\ &=\sum_{m=0}^{\infty}\frac{1}{s^{(m+1)\beta}}\left[ \left(s^{\alpha-\beta}I-A \right)^{-1}B \right]^{m}\left(s^{\alpha-\beta}I-A \right)^{-1}. \end{align*} Then, by taking the inverse Laplace transform, we have \allowdisplaybreaks \begin{align} \label{x(t)} x(t)&=\mathscr{L}^{-1}\left\lbrace s^{-1}\right\rbrace(t) \eta+ \mathscr{L}^{-1}\left\lbrace\sum_{m=0}^{\infty}\frac{s^{-1}}{s^{ (m+1)\beta}}\left[ \left(s^{\alpha-\beta}I-A \right)^{-1}B \right]^{m}\left(s^{\alpha-\beta}I-A \right)^{-1}\right\rbrace (t)B\eta \nonumber\\ &+\mathscr{L}^{-1}\left\lbrace\sum_{m=0}^{\infty} \frac{s^{\alpha-2}}{s^{(m+1)\beta}}\left[ \left(s^{\alpha-\beta}I-A \right)^{-1}B \right]^{m}\left(s^{\alpha-\beta}I-A \right)^{-1}\right\rbrace (t)\hat{\eta}\nonumber\\ &+\mathscr{L}^{-1}\left\lbrace \sum_{m=0}^{\infty}\frac{1}{s^{(m+1)\beta}}\left[ \left(s^{\alpha-\beta}I-A \right)^{-1}B \right]^{m}\left(s^{\alpha-\beta}I-A \right)^{-1} G(s)\right\rbrace (t). \end{align} Therefore, in accordance with Lemma \ref{Q^A,B}, we acquire \allowdisplaybreaks \begin{align} x(t)&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\ &\coloneqq\left( I+t^{\alpha}\mathscr{E}_{\alpha-\beta,\alpha,\alpha+1}^{A,B}(t)B\right) \eta+t\mathscr{E}_{\alpha-\beta,\alpha,2}^{A,B}(t)\hat{\eta}+\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha}^{A,B}(t-s)g(s)\mathrm{d}s,\quad t>0. \end{align} \end{proof}
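The series representation just derived can be made concrete numerically. The Python sketch below, building on \texttt{ml\_operator} from the earlier listing, assembles the truncated mild solution for a toy nonpermutable matrix pair; the orders $\alpha,\beta$, the matrices, the data $\eta,\hat{\eta}$, the forcing term, and the midpoint quadrature for the convolution are all illustrative assumptions rather than part of the theorem.
\begin{verbatim}
import numpy as np

# Toy nonpermutable pair (AB != BA) and illustrative data
alpha, beta = 1.5, 0.5
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
eta     = np.array([1.0, 0.0])
eta_hat = np.array([0.0, 1.0])
g = lambda s: np.array([np.sin(s), 0.0])

def x_mild(t, nodes=200, K=25, M=25):
    I = np.eye(2)
    E1 = ml_operator(A, B, t, alpha - beta, alpha, alpha + 1.0, K, M)
    E2 = ml_operator(A, B, t, alpha - beta, alpha, 2.0, K, M)
    x = (I + t**alpha * E1 @ B) @ eta + t * E2 @ eta_hat
    # midpoint rule for the convolution with the forcing term;
    # for clarity Q is rebuilt at every call -- cache it in practice
    h = t / nodes
    for s in (np.arange(nodes) + 0.5) * h:
        Eg = ml_operator(A, B, t - s, alpha - beta, alpha, alpha, K, M)
        x = x + (t - s)**(alpha - 1) * (Eg @ g(s)) * h
    return x

print(x_mild(1.0))   # truncated mild solution at t = 1
\end{verbatim}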
It should be stressed that the assumption on the exponential boundedness of the function $g(\cdot)$ and of $\left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (\cdot)$, where $0<\beta\leq 1$ (alternatively, of $\left( \prescript{C}{}{D^{\alpha}_{0+}}x\right) (\cdot)$ for $1<\alpha\leq2$), can be omitted. As is shown below, the statement of the above theorem holds for a more general function $g(\cdot)\in \mathbb{C}(\mathbb{J},Y)$. \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with non-zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA \neq 0$. A mild solution $x(\cdot)\in \mathbb{C}^{2}(\mathbb{J},Y)$ of the Cauchy problem \eqref{mtde-1} can be represented as \allowdisplaybreaks \begin{align} x(t)&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\ &\coloneqq\left( I+t^{\alpha}\mathscr{E}_{\alpha-\beta,\alpha,\alpha+1}^{A,B}(t)B\right) \eta+t\mathscr{E}_{\alpha-\beta,\alpha,2}^{A,B}(t)\hat{\eta}+\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha}^{A,B}(t-s)g(s)\mathrm{d}s,\quad t>0. \end{align} \end{theorem} \begin{proof} To verify the solution by substitution, we apply the superposition principle to the initial value problem for the linear inhomogeneous multi-order fractional evolution equation \eqref{mtde-1}. First, we consider the following homogeneous system with inhomogeneous initial conditions: \begin{equation} \label{eq:f5hom} \begin{cases} \left( \prescript{C}{}{D^{\alpha}_{0+}}x\right) (t)-A \left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (t)-Bx(t)=0,\quad t>0\\ x(0)=\eta , \quad x^{\prime}(0)=\hat{\eta}, \end{cases} \end{equation} which has a mild solution \begin{align}\label{x(t)-hom} x(t)=&\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A,B}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\=&\left( I+t^{\alpha}\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,\alpha+1}(t)B\right) \eta+t\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,2}(t)\hat{\eta}. \end{align} With the help of verification by substitution and the property \eqref{gen-Pascal} of $Q^{A,B}_{k,m}$, we confirm that \eqref{x(t)-hom} is a mild solution of the linear homogeneous fractional evolution equation \eqref{eq:f5hom}: \allowdisplaybreaks \begin{align*} \left( \prescript{C}{}{D^{\alpha}_{0+}}x\right) (t)&= \prescript{C}{}{D^{\alpha}_{0+}}\left( I+ t^{\alpha}\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,\alpha+1}(t)B\right) \eta+ \prescript{C}{}{D^{\alpha}_{0+}} \left( t\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,2}(t)\right) \hat{\eta}\\ &=\prescript{C}{}{D^{\alpha}_{0+}}\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right) \eta\\ &+\prescript{C}{}{D^{\alpha}_{0+}}\left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\right) \hat{\eta}. \end{align*} Here we first apply the property \eqref{gen-Pascal} of $Q^{A,B}_{k,m}$ before applying the Caputo differentiation to the first and second terms above, in accordance with the following formula \cite{Podlubny}: \begin{equation}\label{formula} \prescript{C}{}D^{\nu}_{0+}\left(\frac{t^{\mu}}{\Gamma(\mu+1)}\right)= \begin{cases} \frac{t^{\mu-\nu}}{\Gamma(\mu-\nu+1)},\quad\mu>\lfloor\nu\rfloor,\\ 0, \qquad \qquad \mu=0,1,2,\ldots, \lfloor\nu\rfloor,\\ \text{undefined}, \qquad \text{otherwise}.
\end{cases} \end{equation} Then, we have \begin{align*} \left(\prescript{C}{}{D^{\alpha}_{0+}}x\right)(t)&=\prescript{C}{}{D^{\alpha}_{0+}}\Big[B\frac{t^{\alpha}}{\Gamma(\alpha+1)}+ \sum_{k=1}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k-1,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\\&+\sum_{k=0}^{\infty}\sum_{m=1}^{\infty}BQ^{A,B}_{k,m-1} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)} \Big] \eta\\ &+\prescript{C}{}{D^{\alpha}_{0+}}\Big[tI+\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k-1,m} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\\ &+\sum_{k=0}^{\infty}\sum_{m=1}^{\infty}BQ^{A,B}_{k,m-1} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\Big] \hat{\eta}\\ &=B\eta+\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k-1,m} B\frac{t^{k(\alpha-\beta)+m\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+1)}\eta\\ &+\sum_{k=0}^{\infty}\sum_{m=1}^{\infty}BQ^{A,B}_{k,m-1} B\frac{t^{k(\alpha-\beta)+m\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+1)}\eta\\ &+\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k-1,m} \frac{t^{k(\alpha-\beta)+m\alpha+1-\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+2-\alpha)}\hat{\eta}\\ &+\sum_{k=0}^{\infty}\sum_{m=1}^{\infty}BQ^{A,B}_{k,m-1} \frac{t^{k(\alpha-\beta)+m\alpha+1-\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+2-\alpha)} \hat{\eta}. \end{align*} Next, reindexing the sums, we obtain \begin{align*} \left(\prescript{C}{}{D^{\alpha}_{0+}}x\right)(t) &=B\eta+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta+1)}\eta\\ &+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\eta\\ &+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+2-\beta)}\hat{\eta}\\ &+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)} \hat{\eta}. \end{align*} Then the Caputo fractional differentiation of $x(t)$ \eqref{x(t)-hom} of order $0<\beta\leq 1$ is as follows: \begin{align*} \left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (t)&= \prescript{C}{}{D^{\beta}_{0+}}\left( I+t^{\alpha} \mathscr{E}^{A,B}_{\alpha-\beta, \alpha,\alpha+1}(t)B\right) \eta+ \prescript{C}{}{D^{\beta}_{0+}}\left( t \mathscr{E}^{A,B}_{\alpha-\beta, \alpha,2}(t)\right) \hat{\eta}\\ &= \prescript{C}{}{D^{\beta}_{0+}}\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right) \eta\\ &+\prescript{C}{}{D^{\beta}_{0+}}\left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\right) \hat{\eta}\\ &=\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta+1)}\eta\\ &+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+2-\beta)} \hat{\eta}.
\end{align*} Finally, taking a linear combination of the above results, we acquire the desired identity: \begin{align*} \left(\prescript{C}{}{D^{\alpha}_{0+}}x\right)(t)-A\left( \prescript{C}{}{D^{\beta}_{0+}}x\right)(t)&=B\eta+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m} B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\eta\\&+\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m} \frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}= Bx(t). \end{align*} Next, we consider the following linear inhomogeneous fractional evolution equation: \begin{equation} \label{eq:f5inhom} \left( \prescript{C}{}{D^{\alpha}_{0+}}x\right) (t)-A \left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (t)-Bx(t)=g(t), \end{equation} with zero initial conditions \begin{equation*} x(0)=x'(0)=0, \end{equation*} which has a mild solution, a particular solution of \eqref{mtde-1}, given by the integral representation \begin{align*} \bar{x}(t)=\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,\alpha}(t-s)g(s)\mathrm{d}s=\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s,\quad t>0. \end{align*} In accordance with the fractional analogue of the variation of constants formula, any particular mild solution of the inhomogeneous fractional-order differential equation \eqref{eq:f5inhom} should be sought in the form \begin{equation} \bar{x}(t)=\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}^{A,B}_{\alpha-\beta, \alpha,\alpha}(t-s)f(s)\mathrm{d}s=\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s, \quad t>0, \end{equation} where $f(s)$ is an unknown function for $s\in [0,t]$ with $\bar{x}(0)=\bar{x}'(0)=0$. Because of these homogeneous initial values $\bar{x}(0)=\bar{x}'(0)=0$, for any given order in $(1, 2]$ or $(0, 1]$ the Riemann–Liouville and Caputo fractional differentiation operators coincide, in accordance with \eqref{relation}. Therefore, in what follows we apply the Riemann–Liouville derivative instead of the Caputo one to verify the mild solution of the evolution equation with two independent fractional orders.
Applying the property \eqref{gen-Pascal} of the linear operator $Q_{k,m}^{A,B}$ and taking the Caputo derivative of order $1<\alpha\leq 2$ of $\bar{x}(t)$, we obtain: \allowdisplaybreaks \begin{align*} &\left( \prescript{C}{}{D^{\alpha}_{0+}}\bar{x}\right) (t)=\left( \prescript{RL}{}{D^{\alpha}_{0+}}\bar{x}\right) (t)\\&= \prescript{RL}{}{D^{\alpha}_{0+}}\Big[\int\limits_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}f(s)\mathrm{d}s+\int\limits_{0}^{t}\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k-1,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=1}^{\infty}BQ^{A,B}_{k,m-1}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\Big]\\ &=\left( \prescript{RL}{}{D^{\alpha}_{0+}}(\prescript{}{}{I^{\alpha}_{0+}}f)\right)(t)+\prescript{RL}{}{D^{\alpha}_{0+}}\Big[\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha-\beta)}f(s)\mathrm{d}s\Big]\\ &+\prescript{RL}{}{D^{\alpha}_{0+}}\Big[\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha)}f(s)\mathrm{d}s\Big]. \end{align*} By making use of the fractional Leibniz integral rules \eqref{Leibniz} in the Riemann–Liouville sense for the second and third terms of the above expression, we get \allowdisplaybreaks \begin{align*} &\left( \prescript{C}{}{D^{\alpha}_{0+}}\bar{x}\right) (t)=\left( \prescript{RL}{}{D^{\alpha}_{0+}}\bar{x}\right) (t)\\ &=f(t)+\lim\limits_{s\to t-0}\prescript{RL,t}{}{D}^{\alpha-1}_{0+} \left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha-\beta)}\right) \lim\limits_{s\to t-0}f(s)\\ &+\lim\limits_{s\to t-0}\prescript{RL,t}{}{D}^{\alpha-2}_{0+}\left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha-\beta)}\right)\frac{d}{dt}\lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\prescript{RL,t}{}{D}^{\alpha}_{0+}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha-\beta)}f(s)\mathrm{d}s\\ &+\lim\limits_{s\to t-0}\prescript{RL,t}{}{D}^{\alpha-1}_{0+} \left(\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha)}\right) \lim\limits_{s\to t-0}f(s)\\ &+\lim\limits_{s\to t-0}\prescript{RL,t}{}{D}^{\alpha-2}_{0+}\left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha)}\right)\frac{d}{dt}\lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\prescript{RL,t}{}{D}^{\alpha}_{0+}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+2\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+2\alpha)}f(s)\mathrm{d}s\\ &=f(t)+\lim\limits_{s\to t-0} \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta+1)}\lim\limits_{s\to t-0}f(s)\\ &+\lim\limits_{s\to t-0} \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta+1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta+2)}\frac{d}{dt}\lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta)}f(s)\mathrm{d}s\\ &+\lim\limits_{s\to t-0} \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)} \lim\limits_{s\to t-0}f(s)\\ &+\lim\limits_{s\to t-0} \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\lim\limits_{s\to t-0}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+2)}\frac{d}{dt}\lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\\ &=f(t)+\int_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}AQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta)}f(s)\mathrm{d}s\\ &+\int_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s, \end{align*} since all boundary terms vanish in the limit $s\to t-0$ because the exponents of $(t-s)$ are positive. Then the Caputo fractional derivative of $\bar{x}(t)$ of order $0<\beta\leq 1$ is \allowdisplaybreaks \begin{align*} &\left( \prescript{C}{}{D^{\beta}_{0+}}\bar{x}\right)(t)=\left( \prescript{RL}{}{D^{\beta}_{0+}}\bar{x}\right) (t)\\&=\prescript{RL}{}{D^{\beta}_{0+}}\Big[\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\Big]\\ &=\lim\limits_{s\to t-0}\prescript{RL,t}{}{D}^{\beta-1}_{0+}\left( \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}\right) \lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\prescript{RL}{}{D^{\beta}_{0+}}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\\ &=\lim\limits_{s\to t-0} \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta+1)} \lim\limits_{s\to t-0}f(s)\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta)}f(s)\mathrm{d}s\\ &=\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-\beta-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\beta)}f(s)\mathrm{d}s. \end{align*} Thus, a linear combination of the above results yields \begin{align*} &\left( \prescript{C}{}{D^{\alpha}_{0+}}\bar{x}\right) (t)-A \left( \prescript{C}{}{D^{\beta}_{0+}}\bar{x}\right) (t)\\&=f(t)+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}BQ^{A,B}_{k,m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}f(s)\mathrm{d}s\\&=f(t)+B\bar{x}(t). \end{align*} On the other hand, by \eqref{eq:f5inhom} this expression equals $g(t)+B\bar{x}(t)$. Therefore, $f(t)=g(t)$, $t>0$, which confirms the desired verification. The proof is complete. \end{proof}
Then, by using the substitution $y(t)=E^{-1}x(t)$, we can acquire a mild solution of \eqref{mtde} as below. \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with non-zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA \neq 0$. A mild solution $y(\cdot)\in \mathbb{C}^{2}(\mathbb{J},X)$ of the Cauchy problem \eqref{mtde} can be represented as \allowdisplaybreaks \begin{align} y(t)&=\left(E^{-1}+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}E^{-1}Q_{k,m}^{A,B}B\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta\nonumber\\ &+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}E^{-1}Q_{k,m}^{A,B}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}E^{-1}Q_{k,m}^{A,B}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\ &\coloneqq\left( E^{-1}+t^{\alpha}E^{-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha+1}^{A,B}(t)B\right) \eta+tE^{-1}\mathscr{E}_{\alpha-\beta,\alpha,2}^{A,B}(t)\hat{\eta}\nonumber\\&+\int\limits_{0}^{t}(t-s)^{\alpha-1}E^{-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha}^{A,B}(t-s)g(s)\mathrm{d}s, \quad t>0. \end{align} \end{theorem} \begin{remark} Let $A_{0}=\Theta$ be the zero operator. Then a mild solution $y(\cdot)\in \mathbb{C}^{2}(\mathbb{J},X)$ of the Cauchy problem \begin{equation}\label{mtde-3} \begin{cases} \left( \prescript{C}{}{D^{\alpha}_{0+}}Ey\right)(t)-B_{0}y(t)=g(t), \quad t>0, \quad 1<\alpha\leq2,\\ y(0)=\eta , \quad y^{\prime}(0)=\hat{\eta}, \end{cases} \end{equation} can be determined by means of two-parameter Mittag-Leffler functions as follows: \begin{align} y(t)&=\sum_{m=0}^{\infty}E^{-1}B^{m}\frac{t^{m\alpha}}{\Gamma(m\alpha+1)}\eta+ \sum_{m=0}^{\infty}E^{-1}B^{m}\frac{t^{m\alpha+1}}{\Gamma(m\alpha+2)}\hat{\eta} +\int\limits_{0}^{t}\sum_{m=0}^{\infty}E^{-1}B^{m}\frac{(t-s)^{m\alpha+\alpha-1}}{\Gamma(m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\ &\coloneqq E^{-1}E_{\alpha,1}(Bt^{\alpha})\eta+tE^{-1}E_{\alpha,2}(Bt^{\alpha})\hat{\eta}+\int\limits_{0}^{t}(t-s)^{\alpha-1}E^{-1}E_{\alpha,\alpha}(B(t-s)^{\alpha})g(s)\mathrm{d}s, \quad t>0. \end{align} \end{remark} A problem similar to \eqref{mtde-3} has been considered in \cite{Feckan-1} for Sobolev type functional evolution equations of fractional order: \begin{equation}\label{mtde-4} \begin{cases} \prescript{C}{0}{D}^{q}_{t}(Ex(t))+Ax(t)=f(t,x_{t}), \quad t\in J\coloneqq[0,a],\\ x(t)=\phi(t) , \quad -r\leq t\leq 0, \end{cases} \end{equation} where $\prescript{C}{0}{D}^{q}_{t}$ is the Caputo fractional derivative of order $0<q<1$ with lower limit zero. The operators are $A: D(A)\subset X \to Y$ and $E:D(E)\subset X \to Y$, where $X, Y$ are Banach spaces. Moreover, $f(\cdot,\cdot):J\times C\to Y$ with $C\coloneqq C\left([-r,0],X\right)$, $x(\cdot):J^{*}\coloneqq[-r,a]\to X$ is continuous, and $x_{t}$ is the element of $C$ defined by $x_{t}(s)\coloneqq x(t+s)$, $-r\leq s\leq 0$. The domain $D(E)$ of $E$ becomes a Banach space with norm $\|x\|_{D(E)}\coloneqq\|Ex\|_{Y}$, $x\in D(E)$, and $\phi\in C(E)\coloneqq C([-r,0],D(E))$. Feckan et al. \cite{Feckan-1} have introduced the following assumptions on the operators $A$ and $E$: $(\hat{H}_{1})$: $A$ and $E$ are linear operators and $A$ is closed; $(\hat{H}_{2})$: $D(E)\subset D(A)$ and $E$ is bijective; $(\hat{H}_{3})$: the linear operator $E^{-1}: Y \to D(E)\subset X$ is compact.
By making use of the substitution $x(t)=E^{-1}y(t)$, under the hypotheses $(\hat{H}_{1})$--$(\hat{H}_{3})$, we transform the Sobolev type fractional-order functional evolution system \eqref{mtde-4} into the following evolution system with a linear bounded operator $\hat{A}\coloneqq-AE^{-1}:Y\to Y$: \begin{equation}\label{mtde-5} \begin{cases} \prescript{C}{0}{D^{q}_{t}}y (t) -\hat{A}y(t)=f(t,E^{-1}y_{t}), \quad t\in J,\\ y(t)=\varphi(t) , \quad -r\leq t\leq 0, \end{cases} \end{equation} where $y(\cdot):J^{*}\to Y$, $\varphi(t)=E\phi(t)$ for $t \in [-r,0]$, and $y_{t}(s)=y(t+s)$, $s\in[-r,0]$. A mild solution of the initial value problem for the functional evolution equation \eqref{mtde-5} can be expressed by means of classical Mittag-Leffler type functions as \begin{align} y(t)=E_{q,1}(\hat{A}t^{q})\varphi(0)+\int\limits_{0}^{t}(t-s)^{q-1}E_{q,q}(\hat{A}(t-s)^{q})f(s,E^{-1}y_{s})\mathrm{d}s, \quad t>0. \end{align} Thus, the mild solution of the Sobolev type functional evolution equation of fractional order should be represented by \begin{align}\label{correct} x(t)=E^{-1}E_{q,1}(\hat{A}t^{q})E\phi(0)+\int\limits_{0}^{t}(t-s)^{q-1}E^{-1}E_{q,q}(\hat{A}(t-s)^{q})f(s,x_{s})\mathrm{d}s,\quad t>0. \end{align} However, the mild solution of \eqref{mtde-4} was represented via characteristic solution operators (see Lemma 3.1 in \cite{Feckan-1}) instead of Mittag-Leffler functions generated by the linear operator $\hat{A}\coloneqq-AE^{-1}\in \mathscr{B}(Y)$. It should be stressed that if $E=I$, then a mild solution of \eqref{mtde-4} can be expressed with the help of characteristic solution operators (see Remark 3.1 in \cite{Feckan-1}); otherwise, under the hypotheses $(\hat{H}_{1})$--$(\hat{H}_{3})$, it should be determined by classical Mittag-Leffler functions with two parameters, which are compact linear operators in $Y$, as in \eqref{correct}. \begin{remark} As a particular case, we consider the following initial value problem for a multi-dimensional multi-term fractional differential equation with noncommutative matrices: \begin{equation}\label{multi-1} \begin{cases} \left( \prescript{C}{}{D^{\alpha}_{0+}}y\right) (t) -A_{0} \left( \prescript{C}{}{D^{\beta}_{0+}}y\right) (t)-B_{0}y(t)=g(t), \quad t>0,\\ y(0)=\eta , \quad y^{\prime}(0)=\hat{\eta}, \end{cases} \end{equation} where $\prescript{C}{}{D^{\alpha}_{0+}}$ and $ \prescript{C}{}{D^{\beta}_{0+}}$ are Caputo fractional derivatives of orders $1<\alpha\leq2$ and $0<\beta\leq1$, respectively, with the lower limit zero, $E=I\in\mathbb{R}^{n\times n}$ is the identity matrix, the matrices $A_{0},B_{0}\in \mathbb{R}^{n\times n}$ are nonpermutable, i.e., $A_{0}B_{0}\neq B_{0}A_{0}$, $y(t)\in \mathbb{R}^{n}$ is a vector-valued function on $\mathbb{J}$, i.e., $y(\cdot):\mathbb{J}\to \mathbb{R}^{n}$, and $\eta, \hat{\eta}\in \mathbb{R}^{n}$. In addition, the forcing term $g(\cdot): \mathbb{J}\to \mathbb{R}^{n}$ is a continuous function.
The exact analytical representation of the solution $y(\cdot)\in \mathbb{C}^{2}(\mathbb{J},\mathbb{R}^{n})$ of \eqref{multi-1} can be expressed as \begin{align} y(t)&=\left( I+t^{\alpha}\mathscr{E}_{\alpha-\beta,\alpha,\alpha+1}^{A_{0},B_{0}}(t)B_{0}\right) \eta+t\mathscr{E}_{\alpha-\beta,\alpha,2}^{A_{0},B_{0}}(t)\hat{\eta}+\int\limits_{0}^{t}(t-s)^{\alpha-1}\mathscr{E}_{\alpha-\beta,\alpha,\alpha}^{A_{0},B_{0}}(t-s)g(s)\mathrm{d}s\nonumber\\&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A_{0},B_{0}}B_{0}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A_{0},B_{0}}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}Q_{k,m}^{A_{0},B_{0}}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s, \quad t>0. \end{align} \end{remark} \section{A representation of solutions of \eqref{mtde} with permutable linear operators}\label{sec:4} To get an analytical representation of a mild solution of \eqref{mtde-1} with permutable linear operators, i.e., $AB=BA$, we first need to prove an auxiliary result, making use of the Laplace integral transform in accordance with Theorem \ref{thm2}. Moreover, the scalar analogue of the following theorem has been considered by Ahmadova and Mahmudov for fractional Langevin equations with constant coefficients in \cite{Ahmadova-Mahmudov}. In general, the following theorem is true for $\alpha>0$, $\alpha>\beta$ and $\gamma\in \mathbb{R}$. \begin{theorem} \label{Q^A,B-per} Let $m \in \mathbb{N}_{0}$ and $\mathrm{Re}(s)>0$. For $A,B\in\mathscr{B}(Y)$ with $[A,B]=AB-BA=0$, we have: \begin{align*} \mathscr{L}^{-1}\Bigl\{ \frac{s^{\gamma}B^{m}}{(s^{\alpha}I-A s^{\beta})^{m+1}}\Bigr\}(t)&= t^{m\alpha+\alpha-\gamma-1}\sum_{k=0}^{\infty}\binom{k+m}{m}\frac{A^{k}B^{m}t^{k(\alpha-\beta)}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\gamma)}\\ &=t^{m\alpha+\alpha-\gamma-1}E^{m+1}_{\alpha-\beta, m\alpha+\alpha-\gamma}(A t^{\alpha-\beta})B^{m}. \end{align*} \end{theorem} \begin{proof} By using the Taylor series representation of $\frac{1}{(1-t)^{m+1}}$, $m \in \mathbb{N}_{0}$, of the form \begin{equation*} \frac{1}{(1-t)^{m+1}}= \sum_{k=0}^{\infty}\binom{k+m}{m}t^{k}, \quad |t|<1, \end{equation*} we obtain \begin{align*} \frac{s^{\gamma}B^{m}}{(s^{\alpha}I-A s^{\beta})^{m+1}}=\frac{s^{\gamma}B^{m}}{(s^{\alpha }I)^{m+1}}\frac{1}{(1-\frac{A}{s^{\alpha-\beta}} )^{m+1}}&=\frac{s^{\gamma}B^{m}}{s^{(m+1)\alpha}}\sum_{k=0}^{\infty}\binom{k+m}{m}\Big(\frac{A}{s^{\alpha-\beta}}\Big)^{k}\\ &=\sum_{k=0}^{\infty}\binom{k+m}{m}\frac{A^{k}B^{m}}{s^{(m+1)\alpha+k(\alpha-\beta)-\gamma}}. \end{align*} By using the inverse Laplace integral formula for the above function, we get the desired result: \begin{align*} \mathscr{L}^{-1}\Bigl\{ \frac{s^{\gamma}B^{m}}{(s^{\alpha}I-A s^{\beta})^{m+1}}\Bigr\}(t)&=\sum_{k=0}^{\infty}A^{k}B^{m}\binom{k+m}{m}\mathscr{L}^{-1}\Bigr\{\frac{1}{s^{k(\alpha-\beta)+(m+1)\alpha-\gamma}}\Bigr\}(t)\\ &=\sum_{k=0}^{\infty}A^{k}B^{m}\binom{k+m}{m}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha-\gamma-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha-\gamma)}\\ &=t^{m\alpha+\alpha-\gamma-1}E^{m+1}_{\alpha-\beta, m\alpha+\alpha-\gamma}(A t^{\alpha-\beta})B^{m}. \end{align*} We have required an extra condition on $s$, namely \begin{equation*} |s|^{\alpha-\beta}>\|A\|, \end{equation*} for proper convergence of the series.
However, this condition can be removed at the end of the calculation by analytic continuation of both sides, which gives the desired result for all $ s \in \mathbb{C}$ satisfying $\mathrm{Re}(s)>0$. \end{proof} Then we acquire the analytical representation of a mild solution for the multi-term fractional evolution equation with permutable linear bounded operators via the following theorem. \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA = 0$. Assume that $g(\cdot): \mathbb{J} \to Y$ and $\left(\prescript{C}{}{D^{\beta}_{0^{+}}x}\right) (\cdot)$ for $0<\beta\leq1$ are exponentially bounded. A mild solution $x(\cdot)\in \mathbb{C}^{2}(\mathbb{J},Y)$ of the Cauchy problem \eqref{mtde-1} can be represented by means of bivariate Mittag-Leffler type functions \eqref{bivtype} as follows: \begin{align} x(t)&=\left( I+t^{\alpha}BE_{\alpha-\beta,\alpha,\alpha+1}(At^{\alpha-\beta},Bt^{\alpha})\right) \eta+tE_{\alpha-\beta,\alpha,2}(At^{\alpha-\beta},Bt^{\alpha})\hat{\eta}+t^{\alpha-1}E_{\alpha-\beta,\alpha,\alpha}(At^{\alpha-\beta},Bt^{\alpha})\ast g(t)\nonumber\\&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m+1}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta\nonumber\\&+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s, \quad t>0. \end{align} \end{theorem} \begin{proof} We recall that the existence of the Laplace transforms of $x(\cdot)$ and of its Caputo derivatives $\left(\prescript{C}{}{D^{\alpha}_{0^{+}}x}\right) (\cdot)$ and $\left(\prescript{C}{}{D^{\beta}_{0^{+}}x}\right) (\cdot)$ for $ 1<\alpha\leq 2$ and $0<\beta\leq 1$, respectively, is guaranteed by Theorem \ref{thm2}. Thus, to find the mild solution $x(t)$ of \eqref{mtde-1} with permutable linear operators, i.e., $AB=BA$, we can use the Laplace transform technique. By assuming $T=\infty$, applying the Laplace transform technique on both sides of equation \eqref{mtde-1} and solving the resulting equation with respect to $X(s)$, we get \begin{align*} X(s) &=s^{-1}\eta+s^{-1}\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}B\eta+s^{\alpha-2} \left( s^{\alpha}I-As^{\beta}-B\right)^{-1}\hat{\eta}\\&+\left( s^{\alpha}I-As^{\beta}-B\right)^{-1}G(s). \end{align*} On the other hand, in accordance with \eqref{operator}, for sufficiently large $s$ we have \begin{equation*} \|\left( s^{\alpha}I-As^{\beta}\right) ^{-1}B\|<1. \end{equation*} Then, for permutable linear operators $A,B\in \mathscr{B}(Y)$ and sufficiently large $s$, one can obtain \begin{align*} \left( s^{\alpha}I-As^{\beta}-B\right)^{-1} &=\left(s^{\alpha}I-A s^{\beta}\right)^{-1}\left( I-\left( s^{\alpha}I-As^{\beta}\right) ^{-1}B\right)^{-1} \\ &=\left(s^{\alpha}I-A s^{\beta}\right)^{-1}\sum_{m=0}^{\infty} \left(s^{\alpha}I-As^{\beta} \right)^{-m}B^{m}\\ &=\sum_{m=0}^{\infty}\frac{B^{m}}{\left(s^{\alpha}I-As^{\beta} \right)^{m+1}}.
\end{align*} Then, taking the inverse Laplace transform, we have \allowdisplaybreaks \begin{align} \label{y(t)} x(t)=\mathscr{L}^{-1}\left\lbrace s^{-1}\right\rbrace(t) \eta&+ \mathscr{L}^{-1}\left\lbrace\sum_{m=0}^{\infty}\frac{s^{-1}B^{m}}{\left(s^{\alpha}I-As^{\beta} \right)^{m+1}}\right\rbrace (t)B\eta \nonumber\\ &+\mathscr{L}^{-1}\left\lbrace\sum_{m=0}^{\infty}\frac{s^{\alpha-2}B^{m}}{\left(s^{\alpha}I-As^{\beta} \right)^{m+1}}\right\rbrace (t)\hat{\eta}\nonumber\\ &+\mathscr{L}^{-1}\left\lbrace \sum_{m=0}^{\infty}\frac{B^{m}}{\left(s^{\alpha}I-As^{\beta} \right)^{m+1}} G(s)\right\rbrace (t). \end{align} Therefore, in accordance with Theorem \ref{Q^A,B-per}, we acquire \begin{align} x(t)&=\left\lbrace I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}\frac{A^{k}B^{m+1}t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right\rbrace \eta\nonumber\\&+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}\frac{A^{k}B^{m}t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}\frac{A^{k}B^{m}(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s\nonumber\\ &\coloneqq\left( I+t^{\alpha}BE_{\alpha-\beta,\alpha,\alpha+1}(At^{\alpha-\beta},Bt^{\alpha})\right) \eta+tE_{\alpha-\beta,\alpha,2}(At^{\alpha-\beta},Bt^{\alpha})\hat{\eta}\nonumber\\&+\int\limits_{0}^{t}(t-s)^{\alpha-1}E_{\alpha-\beta,\alpha,\alpha}(A(t-s)^{\alpha-\beta},B(t-s)^{\alpha})g(s)\mathrm{d}s, \quad t>0. \end{align} \end{proof} \begin{remark} The analytical mild solution of the initial value problem for \eqref{mtde-1} can be attained from the property \eqref{commutative} of $Q_{k,m}^{A,B}$ for linear bounded operators $A,B\in\mathscr{B}(Y)$ satisfying $AB=BA$, where \begin{equation*} Q_{k,m}^{A,B}=\binom{k+m}{m}A^{k}B^{m}, \quad k,m \in \mathbb{N}_{0}. \end{equation*} \end{remark} It should be emphasized that the assumption on the exponential boundedness of the function $g(\cdot)$ and of $\left( \prescript{C}{}{D^{\beta}_{0+}}x\right) (\cdot)$ for $0<\beta\leq 1$ (alternatively, of $\left( \prescript{C}{}{D^{\alpha}_{0+}}x\right) (\cdot)$ for $1<\alpha\leq 2$) can be omitted in the case of permutable linear bounded operators, too. \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA= 0$. A mild solution $x(\cdot)\in \mathbb{C}^{2}(\mathbb{J},Y)$ of the Cauchy problem \eqref{mtde-1} can be expressed as \begin{align}\label{form-per} x(t)&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m+1}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta\nonumber\\&+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A^{k}B^{m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s, \quad t>0.
\end{align} \end{theorem} \begin{proof} For the linear homogeneous and inhomogeneous cases, by using the following Pascal identity for binomial coefficients: \begin{equation*} \binom{k+m}{m}=\binom{k+m-1}{m}+\binom{k+m-1}{m-1}, \quad k, m\in \mathbb{N}, \end{equation*} together with the formula \eqref{formula} and the fractional Leibniz integral rules \eqref{Leibniz}, it can easily be shown that \eqref{form-per} is a mild solution of the Cauchy problem for \eqref{mtde-1} with permutable linear bounded operators. Moreover, this case has been considered by Mahmudov et al. for multi-dimensional Bagley--Torvik equations with permutable matrices in \cite{Mahmudov-Huseynov-Aliev-Aliev}. \end{proof} \begin{theorem} Let $A,B\in\mathscr{B}(Y)$ with zero commutator, i.e., $\left[ A, B\right] \coloneqq AB- BA = 0$. A mild solution $y(\cdot)\in \mathbb{C}^{2}(\mathbb{J},X)$ of the Cauchy problem \eqref{mtde} can be determined as below: \begin{align} y(t)&=\left( E^{-1}+t^{\alpha}E^{-1}BE_{\alpha-\beta,\alpha,\alpha+1}(At^{\alpha-\beta},Bt^{\alpha})\right) \eta+tE^{-1}E_{\alpha-\beta,\alpha,2}(At^{\alpha-\beta},Bt^{\alpha})\hat{\eta}\nonumber\\&+t^{\alpha-1}E^{-1}E_{\alpha-\beta,\alpha,\alpha}(At^{\alpha-\beta},Bt^{\alpha})\ast g(t)\nonumber\\&=\left(E^{-1}+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}E^{-1}A^{k}B^{m+1}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta\nonumber\\&+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}E^{-1}A^{k}B^{m}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}E^{-1}A^{k}B^{m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s, \quad t>0. \end{align} \end{theorem} \begin{remark} In a special case, the exact analytical representation of the solution $y(\cdot)\in \mathbb{C}^{2}(\mathbb{J},\mathbb{R}^{n})$ of the Cauchy problem \eqref{multi-1} for the multi-dimensional fractional differential equation with multi-orders and permutable matrices $A_{0},B_{0} \in \mathbb{R}^{n\times n}$, i.e., $A_{0}B_{0}= B_{0}A_{0}$, can be represented by \begin{align} y(t)&=\left(I+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A_{0}^{k}B_{0}^{m+1}\frac{t^{k(\alpha-\beta)+m\alpha+\alpha}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha+1)}\right)\eta\nonumber\\&+ \sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A_{0}^{k}B_{0}^{m}\frac{t^{k(\alpha-\beta)+m\alpha+1}}{\Gamma(k(\alpha-\beta)+m\alpha+2)}\hat{\eta}\nonumber\\ &+\int\limits_{0}^{t}\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\binom{k+m}{m}A_{0}^{k}B_{0}^{m}\frac{(t-s)^{k(\alpha-\beta)+m\alpha+\alpha-1}}{\Gamma(k(\alpha-\beta)+m\alpha+\alpha)}g(s)\mathrm{d}s, \quad t>0. \end{align} \end{remark}
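As a cross-check of the permutable case, for a commuting pair the nonpermutable operators $Q_{k,m}^{A,B}$ of the previous section reduce to the binomial form \eqref{commutative}, and the lemma $E_{1,1,1}(At,Bt)=\exp((A+B)t)$ can be confirmed by direct summation. The short Python sketch below reuses \texttt{ml\_operator} from the earlier listing; the matrices, the evaluation point, and the truncation orders are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Commuting pair: B is a polynomial in A, hence AB = BA.
A = np.array([[0.1, 0.2], [0.0, 0.3]])
B = 0.5 * A + 0.2 * np.eye(2)

# E_{1,1,1}(At, Bt) summed via the Q-recursion equals exp((A+B)t).
t = 0.8
lhs = ml_operator(A, B, t, 1.0, 1.0, 1.0, K=40, M=40)
print(np.max(np.abs(lhs - expm((A + B) * t))))   # tiny residual
\end{verbatim}
For the same commuting pair, a comparison between \texttt{ml\_operator} and the binomial series \eqref{per} also gives agreement to machine precision.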
\section{Discussion and future work}\label{sec:concl} In this research work, we first convert the Sobolev type fractional evolution equation with multi-orders \eqref{mtde} into the multi-term fractional evolution equation with linear bounded operators \eqref{mtde-1}. Secondly, we give sufficient conditions that justify solving multi-term fractional differential equations with linear bounded operators by the Laplace transform method. Then we solve the linear inhomogeneous fractional evolution equation with nonpermutable and permutable linear bounded operators $A,B\in\mathscr{B}(Y)$ by making use of the Laplace integral transform. Next, we propose exact analytical representations of mild solutions of \eqref{mtde-1} and \eqref{mtde} with the help of a newly defined Mittag-Leffler type function generated by linear bounded operators, and we remove the strong condition of exponential boundedness of the forcing term and of one of the fractional derivatives with the help of analytical methods, namely verification by substitution and a fractional analogue of the variation of constants formula. The main contributions of this paper are as follows: \begin{itemize} \item we introduce a new Mittag-Leffler type function generated by linear bounded operators $A,B\in\mathscr{B}(Y)$ via a double infinite series; \item we derive new properties of this Mittag-Leffler type function, which are useful tools for checking candidate solutions of multi-term fractional differential equations; \item we propose the property of $Q_{k,m}^{A,B}$ with nonpermutable linear operators $A,B\in \mathscr{B}(Y)$, which is a generalization of the well-known Pascal rule for binomial coefficients; \item we acquire the analytical representation of a mild solution for linear Sobolev type fractional multi-term evolution equations with nonpermutable and permutable linear operators; \item we derive the exact analytical representation of solutions of multi-dimensional fractional differential equations with two independent orders and nonpermutable and permutable matrices. \end{itemize} A possible direction for future work is to extend the results of this paper to Sobolev type fractional functional evolution equations with multi-orders \eqref{f-w}. One can expect the results of this paper to hold for a class of problems such as the Sobolev type functional evolution system governed by \begin{equation}\label{f-w} \begin{cases} \left( \prescript{C}{}{D^{\alpha}_{0+}}Ey\right) (t) -A_{0} \left( \prescript{C}{}{D^{\beta}_{0+}}y\right) (t)=B_{0}y(t-\tau)+g(t), \quad 1\geq \alpha >\beta >0, \\ Ey(t)=E\phi(t),\quad -\tau \leq t \leq 0. \end{cases} \end{equation} It would be interesting to see how the theorems proved above can be extended to these cases. Another direction is to investigate stability and approximate controllability results for Sobolev type multi-term fractional differential equations \eqref{mtde} and \eqref{f-w}.
\section{Introduction} \label{sec:intro} Solar flares are the largest explosive events that occur around the surface of the Sun. This phenomenon is caused by the reconnection of magnetic field lines above sunspots and produces electromagnetic radiation from radio to $\gamma$-rays~\citep{1974IAUS...57..105K}. Solar flares sometimes occur in association with coronal mass ejections~(CMEs), which are eruptions of the atmospheric plasma into interplanetary space~\citep{1974JGR....79.1799C, 1975SoPh...40..439G}. The frequency of these explosive events is strongly correlated with the activity of the Sun. For both types of energetic events, the typical energy released by the explosion is estimated to be in the range of $10^{27}$--$10^{32}$~erg~\citep{1963QJRAS...4...62E,1994ESASP.373..409H, 2010ApJ...722.1522V}. When a solar flare occurs, high energy particles that do not normally exist in the solar atmosphere are generated~\citep{1973Natur.241..333C, 1974SoPh...35..193D}. Solar imaging methods can partially identify the location at which these particles are generated, such as the magnetic loop-top~\citep{1994Natur.371..495M, 2010ApJ...714.1108K}, the loop-foot~\citep{2008ApJ...675.1645F}, and the reconnection point~\citep{2014ApJ...787..125N}. This indicates that these particles are accelerated by solar flares. Although the acceleration mechanisms remain poorly understood, several theoretical models of solar flares have been proposed~\citep{1998ApJ...495L..67T, 2004A&A...419.1159K, 2008ApJ...676..704L}. Neutral particles associated with solar flares, such as $\gamma$-rays and neutrinos, are important for testing theoretical aspects of particle acceleration in magnetic reconnection because they can escape from the acceleration site\footnote{Some of the line $\gamma$-rays cannot escape from the photosphere due to Compton scattering.}. Their observation reveals both the spatial and time profiles of primary particle acceleration, while primary and secondary charged particles are trapped by the magnetic field. Neutrinos are produced only by accelerated protons above $300$~MeV~\citep{1975SSRv...18..341R, 1995ARA&A..33..239H}, which can generate pions~($\pi^{\pm}$ and $\pi^{0}$) by interacting with the dense plasma in the lower solar atmosphere during solar flares. The generated $\pi^{\pm}$ produce neutrinos in their decay chain. \begin{figure*}[] \centering\includegraphics[width=0.8\textwidth]{./fig1.pdf} \caption{Typical solar-flare neutrino fluxes from a powerful solar flare~(\cite{2003ChJAS...3...75F}, thick red and black lines) together with other neutrino fluxes, such as atmospheric neutrinos~(\cite{2005APh....23..526B}, blue lines), solar neutrinos~(\cite{2001ApJ...555..990B}, light blue lines), and relic neutrinos~(\cite{2009PhRvD..79h3013H}, light gray line).\label{fig:flare-flux}} \end{figure*} Figure~\ref{fig:flare-flux} shows the typical neutrino fluxes from a powerful solar flare together with other neutrino fluxes, such as atmospheric neutrinos, solar neutrinos, and supernova relic neutrinos. In this article, we refer to such neutrinos from solar flares as solar-flare neutrinos. Solar-flare neutrinos have been searched for by neutrino detectors since the 1980s. However, no clear signal has been found, despite the use of timing gates coincident with soft X-rays from visible solar flares.
To set a narrow search window that covers only the period of neutrino production, \cite{2016arXiv160600681D} and \cite{2020SoPh..295..133O} proposed opening search windows coincident with $\gamma$-rays originating from $\pi^{0}$ decays and nuclear interactions. These methods help minimize the background rate in the neutrino searches. During solar cycles 23~($1996$--$2008$) and 24~($2008$--$2019$), the Super-Kamiokande detector~(hereafter SK) operated from April 1st, 1996~\citep{2003NIMPA.501..418F} through five distinct periods, SK-I to SK-V, all filled with ultra-pure water. Although several other neutrino telescopes were running during those solar cycles, SK is unique in its search for solar-flare neutrinos because its data set covers almost two full cycles of solar activity, including the largest solar flare, of class X$28.0$, which occurred on November 4th 2003~\citep{2005AA...433.1133K}. This paper is organized as follows. In Section~\ref{sec:review} we provide a brief overview of neutrinos associated with solar flares and the determination of the search windows used to find solar-flare neutrinos. In Section~\ref{sec:analysis} we describe the performance of the SK detector and the analysis methods used to search for solar-flare neutrinos within the selected search windows. In Section~\ref{sec:result} and Section~\ref{sec:discuss} we present the analysis results and make comparisons to results from other neutrino experiments. In the final section we conclude this study and give future prospects. \section{Solar-flare neutrinos} \label{sec:review} \subsection{Particle acceleration and neutrino production in solar flares} \label{sec:time-intro} In many solar flares, the hard X-ray, (line)~$\gamma$-ray, and microwave emissions are observed almost simultaneously~\citep{1981ApJ...244L.171C, 1983Natur.305..292N, 1984JPSJ...53.4499Y}. Those observations suggest that electrons, protons, and ions are accelerated over a short period of time. The $\gamma$-ray emissions from solar flares provide information on the processes of proton acceleration and the subsequent reactions of the protons in the chromosphere. For example, line $\gamma$-rays from neutron capture on hydrogen~\citep{1973Natur.241..333C} and de-excitation $\gamma$-rays from $\mathrm{^{12}C}$ and $\mathrm{^{16}O}$~\citep{2003ApJ...595L..81S} imply the acceleration of protons and their nuclear reactions. The time profile of $\gamma$-rays from $\pi^{0}$ decays~($\pi^{0}\to2\gamma$) also provides information on the time scale of neutrino production, since charged pions are generated at the same time as neutral pions. Observations of $\gamma$-rays from $\pi^{0}$ decay have been performed by several instruments on board satellites in geostationary and polar orbits~\citep{1986AdSpR...6f.115F, 1987ApJ...318..913C,1993A&AS...97..345L,1997ApJ...479..997D,1993A&AS...97..349K,2010CosRe..48...70K,2017ApJ...835..219A}. Those $\gamma$-ray observations indirectly demonstrate that protons are accelerated up to relativistic energies, and hence that neutrinos should be produced during solar flares. On the theoretical side, several simulations of neutrino emission from solar flares have been developed in order to estimate the expected event rate in neutrino detectors. Table~\ref{tb:neutrino-model} summarizes the features of three theoretical models for solar-flare neutrinos by~\cite{1991NCimC..14..417K}, \cite{2003ChJAS...3...75F}, and \cite{2013ICRC...33.3656T}.
\begin{table*}[] \begin{center} \caption{The summary of theoretical models for solar-flare neutrinos. In each theoretical model, the number of expected interactions in the SK detector is calculated. For the expected number of events in the SK detector from~\cite{2003ChJAS...3...75F}, the conversion factor~($\eta$ defined in~\cite{2003ChJAS...3...75F}) is assumed to be ${\sim}0.10$ for the visible side and ${\sim}1.0$ for the invisible side.} \label{tb:neutrino-model} \begin{tabular}{ccccc} \hline Theoretical model & Side of the Sun & Power index & Directional feature of & Number of expected \\ (Reference) & & of proton spectrum & generated neutrinos & events in SK~[$\mathrm{flare^{-1}}$] \\ \hline \cite{1991NCimC..14..417K} & Visible side & $3$--$4$ & Isotropic &$1.36 \times 10^{-4}$ \\ \cite{1991NCimC..14..417K} & Invisible side & $1$ & Beam like & $0.85$ \\ \cite{2004JHEP...06..045F} & Visible side & -- & Isotropic & $0.75$ \\ \cite{2004JHEP...06..045F} & Invisible side & -- & Beam like & $7.5$ \\ \cite{2013ICRC...33.3656T} & Invisible side & $3$ & Isotropic & $9.0 \times 10^{-5}$ \\ \cite{2013ICRC...33.3656T} & Invisible side & $1$ & Beam like &$3.8 \times 10^{-6}$ \\ \hline \end{tabular} \end{center} \end{table*} These theoretical models describe neutrino emission from the most powerful solar flares, whose energy is larger than $10^{31}$~erg. The predicted neutrino fluxes change depending on the assumptions in the models, such as the proton spectral index, the interaction cross sections between accelerated protons and nuclei in the chromosphere, the angular distribution of neutrinos with respect to the proton direction, and the location of the solar flare, as summarized in Table~\ref{tb:neutrino-model}. Although the absolute fluxes are quite different, the predicted energy spectrum of solar-flare neutrinos is almost identical to the atmospheric neutrino energy spectrum, because in both cases neutrinos are created through the same production process. \cite{2004JHEP...06..045F} estimated the number of interactions between neutrinos and the free protons in the water of the SK detector as \begin{equation} n_{\mathrm{int}} = 7.5 \, \eta \left(\frac{E_{\mathrm{FL}}}{10^{31}~\mathrm{erg}} \right), \label{eq_eta} \end{equation} \noindent where $n_{\mathrm{int}}$ is the number of interactions in the SK detector~(fiducial volume $22.5$~kton), $E_{\mathrm{FL}}$ is the total energy of the solar flare, and $\eta$ is an energy conversion factor from the solar flare energy~$E_{\mathrm{FL}}$ to neutrino energy, as defined in~\cite{2003ChJAS...3...75F}. According to~\cite{2004JHEP...06..045F}, several neutrino interactions are expected in the SK detector when a solar flare classified as the largest explosion~($\geq 10^{32}$~erg) occurs on the visible~(invisible) side of the Sun and $\eta {\sim} 0.10~(1.0)$. On the other hand, \cite{2013ICRC...33.3656T} argues that the assumed value of $\eta$ is questionable and that it is typically of order $10^{-6}$. Therefore, experimental searches for neutrinos from powerful solar flares can test theoretical aspects of neutrino production during the impulsive phase of solar flares. \subsection{Searches for solar-flare neutrinos using neutrino detectors} \label{sec:neutrino} The possibility of detecting solar-flare neutrinos with neutrino experiments has been discussed since the 1980s~\citep{1982JETPL..35..341B, 1983ICRC....7..104E}. In 1988, the Homestake experiment reported an excess of neutrino events when energetic solar flares occurred~\citep{1994PrPNP..32...13D}.
This observation suggested a possible correlation between solar flares and the neutrino capture rate on $\mathrm{^{37}Cl}$~\citep{1988PhRvL..61.2650B,1987ApJ...320L..69B}. Soon after, the Mont Blanc Neutrino Detector searched for solar-flare neutrinos, but no significant signal was found in time coincidence with any of the solar flares that occurred between 1988 and 1991, including the largest solar flare in 1989~\citep{1991ApJ...382..344A}. Since then, various neutrino detectors have searched for solar-flare neutrinos by analyzing different solar flare samples~\citep{1988PhRvL..61.2653H, 1990ApJ...359..574H, 2012ApJ...745..193G, 2014APh....55....1A, Agostini:2019yuq, 2021PhRvD.103j2001A, 2022ApJ...924..103A}. However, no significant signal for solar-flare neutrinos has been found by any of these experiments. \subsection{Search window for solar flares on the visible side of the Sun} \label{sec:window} Atmospheric neutrinos are continuously produced by collisions between primary cosmic rays and nuclei in the Earth's atmosphere~\citep{2016PhRvD..94e2001R}. In neutrino experiments the separation between atmospheric neutrinos and solar-flare neutrinos is technically difficult, since their energy ranges overlap with each other due to their identical production process. While atmospheric neutrinos are generated constantly, solar-flare neutrinos are released only during the period of particle acceleration in the flare. Therefore, a search window that is appropriately narrow in time allows neutrino detectors to substantially reduce the atmospheric neutrino background rate. The first proposal to use search windows when searching for solar-flare neutrinos was published by~\cite{2016arXiv160600681D}, which analyzed the detection time of $\gamma$-rays from $\pi^{0}$ decays using the Fermi Large Area Telescope~(LAT) satellite~\citep{2009ApJ...697.1071A}. Following this proposal, the IceCube collaboration searched for neutrinos from solar flares in the energy range from $500$~MeV to $5$~GeV~\citep{2021PhRvD.103j2001A} and constrained the integrated neutrino flux emitted during the considered time windows according to the catalog of $\gamma$-ray flares recorded by Fermi-LAT~\citep{2021ApJS..252...13A}. However, this catalog covers only solar cycle~$24$~($2008$--$2019$), after the launch of Fermi-LAT in 2008. Hence, a different method must be used to identify search windows for solar flares that occurred before 2008. \cite{2020SoPh..295..133O} proposed determining the search window by analyzing $2.2$~MeV line $\gamma$-rays and the derivative of soft X-rays to improve the signal-to-noise ratio for finding solar-flare neutrinos. The former channel selected three solar flares across solar cycles 23 and 24 with line $\gamma$-rays recorded by the RHESSI satellite\footnote{The Reuven Ramaty High-Energy Solar Spectroscopic Imager~(RHESSI)}~\citep{2002SoPh..210....3L}, on July 23rd 2002~(X5.1), November 2nd 2003~(X9.2), and January 20th 2005~(X7.1). The latter channel selected twenty-three solar flares~(above X5.0) recorded by the GOES satellite\footnote{Geostationary Operational Environmental Satellite~(GOES)}~\citep{2004SPIE.5171...65L} across solar cycles 23 and 24; note that this selection set the search window for the largest flare~(X28.0) on November 4th, 2003. The derivative-based selection is illustrated schematically below.
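To make the derivative-based channel concrete, the following sketch (our own schematic construction in Python; the toy light curve, the $10\%$ threshold, and the window definition are simplifying assumptions, not the exact procedure of \cite{2020SoPh..295..133O}) selects the interval of steep soft X-ray brightening from a GOES-like light curve:

\begin{verbatim}
import numpy as np

def search_window(t, flux, threshold_fraction=0.1):
    """t in seconds, flux in W/m^2; returns (t_start, t_end)."""
    dfdt = np.gradient(flux, t)          # time derivative of the light curve
    above = np.where(dfdt > threshold_fraction * dfdt.max())[0]
    return t[above[0]], t[above[-1]]     # interval of steep brightening

# Toy light curve: fast rise toward t = 600 s, slow exponential decay after.
t = np.arange(0.0, 3600.0, 2.0)
rise = np.exp(-((t - 600.0) / 180.0) ** 2) * (t < 600.0)
decay = np.exp(-(t - 600.0) / 900.0) * (t >= 600.0)
flux = 1e-4 * (rise + decay)

print(search_window(t, flux))            # covers the impulsive rise only
\end{verbatim}

The actual windows of \cite{2020SoPh..295..133O} are defined on measured GOES data and differ in detail, but the idea, keeping only the impulsive brightening, is the same.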
Although the derivative of the soft X-ray light curve generally traces the time scale of non-thermal electron acceleration, this channel is still appropriate for improving the signal-to-noise ratio in searches for solar-flare neutrinos: a recent Fermi-LAT study concluded that ions and electrons are accelerated, transported, and interact with the ambient medium at the same time~\citep{2021ApJS..252...13A}. In the following sections, we separately search the SK data for neutrinos from the selected solar flares that occurred on the visible side of the Sun within these two different search windows. \subsection{Search windows for solar flares on the invisible side of the Sun} \label{sec:invisible} An energetic proton flux directed back toward the Sun generates a nuclear cascade in the solar atmosphere. Solar flares on the invisible side of the Sun are expected to produce such a flux in the form of a narrow beam of relativistic protons with a rather hard spectrum. Hence, searches for neutrinos associated with solar flares occurring on the invisible side provide information about the acceleration mechanism of the downward-going proton flux. \cite{2003ChJAS...3...75F} argue that the probability of solar-flare neutrino detection increases when solar flares occur on the invisible side of the Sun due to efficient collisions between the accelerated protons and the dense plasma at the surface of the Sun. Searching for neutrinos from solar flares on the invisible side of the Sun therefore allows us to test these proposed neutrino production models. However, a selection of solar flares that occur on the invisible side of the Sun had never been performed for this purpose. To select solar flares occurring on the invisible side of the Sun, the time of CME emission allows one to infer the occurrence time of the flare, because large energetic solar flares are usually accompanied by CMEs~\citep{2003SoPh..218..261A}. Observations of CMEs occurring on the invisible side of the Sun have been performed by the LASCO coronagraph\footnote{The Large Angle Spectroscopic Coronagraph~(LASCO) on the Solar and Heliospheric Observatory~(SOHO)}~\citep{1995SoPh..162....1D, 1995SoPh..162..357B}, and the CME emission times are listed in a catalog maintained by NASA~\citep{2004JGRA..109.7105Y}. From the catalog we selected energetic CMEs whose emission speed is more than $2000~\mathrm{km{\,}s^{-1}}$, which roughly corresponds to class X2.0 solar flares. These criteria allowed us to select ten~CMEs from April 1996 to May 2018. The search window for solar flares occurring on the invisible side is set to $7238$~s, as explained in Appendix~\ref{app:cme}. The dates of the selected CMEs are summarized in Table~\ref{tb:time-invisible} in Appendix~\ref{app:cme}. \section{Detector and analysis} \label{sec:analysis} \subsection{The Super-Kamiokande detector} Super-Kamiokande is a water Cherenkov detector located in a cavern beneath the Ikeno-yama mountain in Japan~\citep{2003NIMPA.501..418F}. It is a cylindrical stainless-steel tank containing $50$~kilotons~(kton) of ultra-pure water. The detector is divided into two regions, separated optically by Tyvek sheets: the inner detector~(ID) and the outer detector~(OD). The ID serves as the target volume for neutrino interactions, and the OD is used to veto external cosmic-ray muons as well as $\gamma$-rays from the surrounding rock. In the ID, the diameter~(height) of the cylindrical tank is $33.8$~m~($36.2$~m).
It contains $32$~kton of water and holds $11,129$ inward-facing $20$-inch photomultipliers~(PMTs)\footnote{The SK-I detector used $11,149$~PMTs, while the other phases use $11,129$~PMTs, except for SK-II, which used $5,182$.} to observe the Cherenkov light produced by charged particles. The diameter~(height) of the OD tank is $39.3$~m~($41.4$~m). The detector simulation has been developed using the \textsc{Geant3} toolkit~\citep{Brun:1994aa} and tuned to calibration data. The details of the detector configuration, the calibration, and the performance can be found elsewhere~\citep{2014NIMPA.737..253A}. In this article, we analyzed the data taken in SK-I through SK-IV~(from April 1996 to May 2018) to cover solar cycles 23 and 24\footnote{During SK-V~(from January 2019 to July 2020), no solar flare above X5.0 occurred due to low solar activity.}. In order to determine the initial neutrino interaction vertex and the trajectories and momenta of any subsequent charged particles, event reconstruction is performed by analyzing the timing and the ring pattern of the observed Cherenkov light in the SK detector. Using this water Cherenkov technique, the SK detector has sensitivity to a wide range of neutrino energies, from a few MeV to tens of GeV. The neutrino events are categorized into two samples depending on the energy of the charged particles reconstructed after the initial neutrino interaction. A neutrino event reconstructed with less than $100$~MeV of visible energy is categorized as part of the ``low energy sample'' and is mainly used for studies of solar neutrinos~\citep{2016PhRvD..94e2010A} and supernova neutrinos~\citep{2021PhRvD.104l2002A}. In this energy region the reconstruction tool searches only for an interaction point, because the track length of the charged particle is at most $30$~cm, which is small compared to the reconstructed vertex resolution~(typically more than $50$~cm). On the other hand, a neutrino event reconstructed with more than $100$~MeV of visible energy is categorized as part of the ``high energy sample'' and is mainly used for the study of atmospheric neutrinos~\citep{2016PhRvD..94e2001R} and for searches for proton decay~\citep{2020PhRvD.102k2011T}. In this energy region the majority of neutrino interactions occur on nuclei and can produce a number of charged particles. The event reconstruction algorithm then determines the number of Cherenkov rings in the event, identifies the particle type that created each ring, locates the interaction vertex, and estimates the energy of each charged particle. \subsection{Low energy sample} The SK detector can reconstruct the energies of charged particles down to a few MeV. In the energy range of the low energy sample, the dominant reaction is the inverse beta decay~(IBD) of electron anti-neutrinos because of its relatively large cross section. Other sub-dominant reactions are elastic scattering between electrons and electron neutrinos, and the charged current and neutral current interactions with oxygen~\citep{2002PhRvD..66a3007K}. Even though we set an appropriate search window for solar-flare neutrinos, the signal-to-noise ratio is still poor below $16$~MeV due to solar neutrinos and background events originating from radioactive isotopes dissolved in the SK water~\citep{2020NIMPA.97764297N} or produced by penetrating muons~\citep{Super-Kamiokande:2015xra}. Hence, we set the energy threshold to $16$~MeV in this analysis, where the threshold refers to the total energy of the positron produced by the IBD reaction.
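At the precision relevant for setting this threshold, the IBD kinematics can be summarized by a zeroth-order relation (a simplification on our part; the analysis itself uses the full cross section of \cite{2003PhLB..564...42S}):

\begin{verbatim}
# Zeroth-order IBD kinematics, anti-nu_e + p -> e+ + n, neglecting the
# small nucleon recoil: E_e ~ E_nu - (m_n - m_p).
DELTA = 939.565 - 938.272   # m_n - m_p in MeV, ~1.293 MeV

def positron_total_energy(e_nu):
    """Approximate total positron energy (MeV) for an IBD event."""
    return e_nu - DELTA

# The 16 MeV threshold on the positron total energy thus corresponds to
# anti-neutrino energies above roughly 17.3 MeV.
print(positron_total_energy(17.3))   # ~16.0
\end{verbatim}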
For the energy range above $16$~MeV, the possible backgrounds are atmospheric neutrino interactions, decay electrons originating from invisible muons, and low energy pions. For selecting positrons from IBD reactions, we applied the selection cuts used for supernova relic neutrino searches, since these cut criteria are optimized to maximize the selection efficiency for positrons. The detailed analysis method is described in~\cite{2012PhRvD..85e2007B} and \cite{2021PhRvD.104l2002A}. For evaluating the event selection efficiency in the low energy sample, we first simulated positrons from IBD reactions. In this simulation, we used the neutrino energy spectrum from~\cite{2003ChJAS...3...75F}. The positron energy was then calculated by considering the cross section of the IBD reaction from~\cite{2003PhLB..564...42S}. Note that only the IBD reaction is considered in this simulation because of its large cross section in this energy range. Figure~\ref{fig:e-dist-input}~(left-top) shows the input energy distribution of positrons from the neutrino interactions and the reconstructed total positron energy after selection cuts. \begin{figure*}[] \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig2_left.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig2_right.pdf} \end{minipage} \caption{Energy spectra and selection efficiencies for the low~(left) and high~(right) energy samples. The distributions of the reconstructed positron energy and the visible energy are calculated using the MC simulation based on the energy spectrum of solar-flare neutrinos from~\cite{2003ChJAS...3...75F}. The horizontal axis is the reconstructed positron kinetic energy in the left panel~(the reconstructed visible energy in the right panel), and the vertical axis shows the number of events. The light-green histograms represent the energy spectra of the generated events, and the red histograms represent the energy spectra after the selection cuts are applied. The blue histograms represent the selection efficiencies. \label{fig:e-dist-input}} \end{figure*} Assuming the electron anti-neutrino energy spectrum of~\cite{2003ChJAS...3...75F}, the selection efficiency, defined as $\varepsilon_\mathrm{low}^{\mathrm{Fargion}}$, in the low energy sample is about $27\%$, since the energy range covered by the low energy sample is relatively narrow. We also evaluated the selection efficiency for neutrinos between $16$ and $100$~MeV by generating a flat neutrino energy distribution; this provides the basis for the model-independent analysis detailed in Section~\ref{sec:ind}. That selection efficiency, defined as $\varepsilon^{\mathrm{Ind}}_{\mathrm{low}}$, is about $75\%$. Table~\ref{tb:bg_rate} summarizes the livetime, the selection efficiencies, and the background rate after all reduction cuts in the low energy sample using all of the SK data sets. \begin{table*}[] \begin{center} \caption{The summary of the dates, the livetimes, the selection efficiencies~($\varepsilon^{\mathrm{Fargion}}_{\mathrm{low}}$ and $\varepsilon^{\mathrm{Fargion}}_{\mathrm{high}}$) for the energy spectrum from~\cite{2003ChJAS...3...75F}, the model-independent selection efficiencies~($\varepsilon^{\mathrm{Ind}}_{\mathrm{low}}$), and the background rates of the low energy sample and the high energy sample.
The difference in their livetimes comes from differences in the SK detector data quality between the low and high energy analyses.} \label{tb:bg_rate} \begin{tabular}{cccccc} \hline Category& SK phase & SK-I & SK-II & SK-III & SK-IV \\ \hline Date & Start & Apr. 1996 & Oct. 2002 & Jul. 2006 & Sep. 2008 \\ & End & Jul. 2001 & Oct. 2005 & Aug. 2008 & May 2018 \\ \hline & Livetime~[day] & 1497 & 794 & 562 & 2970 \\ Low energy & Selection efficiency~($\varepsilon_{\mathrm{low}}^{\mathrm{Fargion}}$)~[$\%$] & $26.2$ & $27.1$ & $27.8$ & $28.3$ \\ & Selection efficiency~($\varepsilon^{\mathrm{Ind}}_{\mathrm{low}}$)~[$\%$] & $72.3$ & $74.8$ & $76.6$ & $78.1$ \\ & Background rate~[$\mathrm{event \, day^{-1}}$] & $0.20\pm0.01$ & $0.19\pm0.02$ & $0.20\pm0.01$ & $0.19\pm0.01$ \\ \hline & Livetime~[day] & 1489 & 825 & 522 & 3235 \\ High energy & Selection efficiency~($\varepsilon_{\mathrm{high}}^{\mathrm{Fargion}}$)~[$\%$] & $61.9$ & $62.1$ & $61.8$ & $61.6$ \\ & Background rate~[$\mathrm{event \, day^{-1}}$] & $7.45\pm0.07$ & $7.33\pm0.09$ & $7.53\pm0.12$ & $7.48\pm0.05$ \\ \hline \end{tabular} \end{center} \end{table*} After the installation of new front-end electronics in SK-IV~\citep{Super-Kamiokande:2010kjr}, the SK trigger system allows the detector to tag neutron signals after IBD reactions using the delayed coincidence technique. However, we did not require a neutron signal to identify electron anti-neutrinos in this analysis, since the trigger systems of SK-I, -II, and -III did not allow us to record neutron signals. \subsection{High energy sample} This sample is further divided into three sub-samples based on the event topology: a fully contained~(FC) sample, a partially contained~(PC) sample, and an upward going muon~(UPMU) sample. In this study, we only analyzed the FC sample, because the energy of solar-flare neutrinos is less than $10$~GeV, which results in the tracks of all charged particles being essentially contained in the inner tank. In the energy region of the high energy sample, different interactions occur depending on the neutrino energy. For simulating neutrino interactions with hydrogen and oxygen in the detector, we used the NEUT generator~\citep{2021EPJST.230.4469H}. Figure~\ref{fig:e-dist-input}~(right-top) shows the input energy distribution of charged particles from the neutrino interactions and the reconstructed energies after selection cuts. The event selection criteria for the high energy sample are detailed in~\cite{2005PhRvD..71k2005A}. The selection efficiency for the neutrino spectrum from~\cite{2003ChJAS...3...75F} is typically $62\%$ after all reduction cuts. Table~\ref{tb:bg_rate} also summarizes the livetime, the selection efficiency, and the background rate of the high energy sample for each SK phase. Directional information can be used to test whether neutrino signals come from a specific astrophysical source. Figure~\ref{fig:angular-resolution} shows the typical distribution of angles between the incoming neutrino and the direction of the final state charged particles based on the MC simulation. In the energy region above $1$~GeV, the direction of the final state charged particles is highly correlated with the direction of the incoming neutrino, and this correlation clearly depends on the neutrino energy.
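For reference, the angle $\theta_{\mathrm{Sun}}$ used later in this paper can be computed from an event time and a reconstructed direction as sketched below (our own illustration using \texttt{astropy}; the site coordinates and the reconstructed direction are placeholder values, and we adopt the convention of the angle between the event direction and the Sun-to-detector direction):

\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import get_sun, EarthLocation, AltAz
from astropy.time import Time

# Approximate coordinates of the Kamioka site (placeholder values).
site = EarthLocation(lat=36.43 * u.deg, lon=137.31 * u.deg,
                     height=358.0 * u.m)
when = Time("2003-11-04 19:42:26")   # candidate event time (UTC)

# Direction from the detector toward the Sun, east-north-up frame.
sun = get_sun(when).transform_to(AltAz(obstime=when, location=site))
az, alt = sun.az.rad, sun.alt.rad
to_sun = np.array([np.sin(az) * np.cos(alt),
                   np.cos(az) * np.cos(alt),
                   np.sin(alt)])

# Hypothetical reconstructed particle direction in the same frame.
event_dir = np.array([0.3, -0.5, 0.81])
event_dir /= np.linalg.norm(event_dir)

# Angle between the event direction and the Sun-to-detector direction.
cos_theta = np.dot(event_dir, -to_sun)
print(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))
\end{verbatim}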
\begin{figure*}[] \centering\includegraphics[width=1.0\textwidth]{./fig3.pdf} \caption{Angular distribution between the direction of the incident neutrino and the reconstructed directions of the produced charged particles~($e^{\pm}$, $\mu^{\pm}$, and $\pi^{\pm}$) from the MC simulation of the high energy sample. For multi-ring events, the direction of the neutrino is reconstructed as the momentum-weighted sum of the directions of all the identified rings. \label{fig:angular-resolution}} \end{figure*} \section{Results} \label{sec:result} \subsection{Results for solar-flare neutrinos coincident with line $\gamma$-ray observations} \label{result_line} As explained in Section~\ref{sec:time-intro}, $2.2$~MeV line $\gamma$-rays originate from neutron capture on hydrogen, following the acceleration of hadrons and their interactions with nuclei in the chromosphere, the same chain of processes that should produce neutrinos. Hence, the signal-to-noise ratio for solar-flare neutrinos is high in coincidence with line $\gamma$-rays. For this reason, we searched for neutrino candidate events within the search windows determined from the light curves of line $\gamma$-rays by \cite{2020SoPh..295..133O}. As explained in Section~\ref{sec:window}, three solar flares on the visible side of the Sun were selected by~\cite{2020SoPh..295..133O}. The SK data do not cover the period of the solar flare that occurred on July 23rd 2002, due to the re-instrumentation of the SK detector following the implosion accident in 2001. Within the remaining two search windows, no signal was found in either the low or high energy samples. \subsection{Results for solar flares on the visible side of the Sun} \label{result_soft} We searched for solar-flare neutrinos from the visible side of the Sun within the search windows determined by the time derivative of the soft X-ray light curves recorded by the GOES satellite, as described in Section~\ref{sec:window}. \cite{2020SoPh..295..133O} selected twenty-three solar flares using this channel, with SK missing three of these (on August 25th 2001, December 13th 2001, and July 23rd 2002) because of the re-instrumentation work discussed previously. Hence, we searched for neutrinos from twenty solar flares across solar cycles 23 and 24. No signal was found in the low energy sample, while two events were found in the high energy sample: the first on November 4th 2003 and the second on September 6th 2017, as summarized in Table~\ref{tb:summary-event}. \begin{table*}[] \begin{center} \caption{Summary of the two events observed within the search windows for neutrinos associated with solar flares on the visible side of the Sun. The estimated background rate is normalized by the duration of the corresponding search window determined by~\cite{2020SoPh..295..133O}.} \label{tb:summary-event} \begin{tabular}{ccc} \hline Date~(UTC) & November 4th, 2003 & September 6th, 2017 \\ Solar flare class & X$28.0$ & X$9.4$ \\ \hline SK phase & SK-II & SK-IV \\ Observed time~(UTC) & 19:42:26 & 12:03:05 \\ Duration of window~[s] & 1144 & 521 \\ Event topology & $2$-ring $e$-like & $1$-ring $\mu$-like \\ Reconstructed energy & $178.3$~MeV & $1.2$~GeV \\ $\theta_{\mathrm{Sun}}$ & $67.1^{\circ}$ & $39.6^{\circ}$ \\ Estimated background rate~[$\mathrm{event{\,}flare^{-1}}$] & $0.20$ & $0.12$ \\ $p$-value of the null hypothesis & $18.1\%$ & $11.3\%$ \\ \hline \end{tabular} \end{center} \end{table*} Figure~\ref{fig:light-curve-visible} shows the times of the observed neutrino events together with the light curves recorded by the GOES satellite.
The event on November 4th 2003 was observed during the impulsive phase of the solar flare, when particle acceleration is expected to be active. Furthermore, \cite{Watanabe_2006} reported that relativistic neutrons associated with this solar flare were observed by neutron monitors on the ground at 19:45~(UTC), about $3$~minutes after the detection of the neutrino candidate in SK. This simultaneous observation also indicates that hadrons~(ions) were accelerated to more than $1$~GeV during this solar flare. On the other hand, the event on September 6th 2017 was observed during the dimming phase after the peak of the soft X-ray light curve, when all processes of particle acceleration are likely to have been completed. The event displays and sky-maps for the two observed candidates are shown in Figure~\ref{fig:skymap-visible1} and Figure~\ref{fig:skymap-visible2} in Appendix~\ref{app:visible}. \begin{figure*}[] \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\textwidth]{./fig4_left.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\textwidth]{./fig4_right.pdf} \end{minipage} \caption{The times of the observed neutrino events for the solar flares on November 4th, 2003~(left) and September 6th, 2017~(right). The black vertical line shows the time of the neutrino event in the SK detector. The red~(green) plot shows the derivative of the light curve~(the original light curve) recorded by the GOES satellite. The shaded region shows the search window determined using the derivative of the soft X-rays according to the method developed by \cite{2020SoPh..295..133O}. In the case of the solar flare on November 4th, 2003~(left), the instrument on the GOES satellite saturated due to the high intensity of soft X-rays, so the satellite did not record data for more than $15$~minutes, from 19:45 to 20:00. \label{fig:light-curve-visible}} \end{figure*} Figure~\ref{fig:rate} shows the energies of the two observed events compared to the expected background energy spectrum. Here, the background spectrum is accumulated from events outside the search windows; its main component is atmospheric neutrino interactions. The expected number of background events in the high energy sample in the search window is $0.20$~events~($0.12$~events) for the solar flare on November 4th 2003~(September 6th 2017), and the $p$-value of the null hypothesis is $18.1\%$~($11.3\%$). \begin{figure*}[] \begin{center} \includegraphics[width=0.8\linewidth]{./fig5.pdf} \end{center} \caption{The reconstructed energies of the two observed neutrino events from solar flares occurring on the visible side of the Sun, compared to the background sample. The first event~(green square) was observed as a $2$-ring $e$-like event on November 4th, 2003, while the other~(magenta circle) was observed as a $1$-ring $\mu$-like event on September 6th, 2017. The background spectrum~(red histogram) is normalized such that it corresponds to the expectation for a search window with an average duration of $700$~s, determined from the derivative of the soft X-ray light curve by~\cite{2020SoPh..295..133O}. \label{fig:rate}} \end{figure*} In order to investigate whether the neutrino candidate events come from the direction of the Sun, we examined the angular distribution of $\theta_{\mathrm{Sun}}$, which is defined as the angle between the reconstructed direction of the charged particles and the direction pointing to the Sun.
In the case of a multi-ring event, the direction of the neutrino is reconstructed as the momentum-weighted sum of the directions of all the identified rings. The value of $\theta_{\mathrm{Sun}}$ for the candidate event on November 4th, 2003 (September 6th, 2017) is $67.1^{\circ}$~($39.6^{\circ}$). Figure~\ref{fig:angle_front} shows the $\theta_{\mathrm{Sun}}$ of the two observed events together with the angular distributions derived from the MC simulation. \begin{figure*}[] \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig6_left.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig6_right.pdf} \end{minipage} \caption{Reconstructed angles of the observed neutrino events in coincidence with the solar flares occurring on November 4th, 2003~(left panel) and September 6th, 2017~(right panel), together with the angular distributions from the MC samples for signal and background. The light green dashed line shows the angle between the reconstructed event direction and the direction of the Sun, $\theta_{\rm Sun}$, at the time when the neutrino event was observed. The red~(black) histograms show the angular distribution of the MC~(background) sample in the given energy range. \label{fig:angle_front}} \end{figure*} \subsection{Results for solar flares on the invisible side of the Sun} \label{result_soft_invisible} As explained in Section~\ref{sec:invisible}, we selected ten large CMEs that occurred on the invisible side of the Sun by setting criteria on their emission speed. However, SK did not take data for the two CMEs that occurred on July 18th and 19th, 2002, due to the detector re-instrumentation work. Hence, we searched for solar-flare neutrinos from the remaining eight CMEs that occurred on the invisible side of the Sun. There was no signal in the low energy sample, while six events were found in the high energy sample, as summarized in Table~\ref{tb:summary-event-invisible}. Two neutrino events were identified for each of the solar flares on November 7th, 2003 and July 24th, 2005, while one event was observed for each of those on June 4th, 2011 and July 23rd, 2012. The expected number of background events in the high energy sample is $0.62~\mathrm{event \, flare^{-1}}$. The $p$-value for the null hypothesis of finding two~(one) events for these solar flares is $10.2\%$~($33.5\%$). Figure~\ref{fig:e-dist-invisible} shows the energies of the observed events together with the background energy spectrum. \begin{figure*}[] \begin{center} \includegraphics[width=0.8\linewidth]{./fig7.pdf} \end{center} \caption{The reconstructed energies of the neutrino events from solar flares that occurred on the invisible side of the Sun and the typical energy distribution of events in the background sample. The green circles, magenta squares, yellow upward triangles, and blue downward triangles show the energies of the events observed on November 7th, 2003, July 24th, 2005, June 4th, 2011, and July 23rd, 2012, respectively. The background spectrum is normalized by the time duration of $7238$~s according to the method described in Appendix~\ref{app:cme}. \label{fig:e-dist-invisible}} \end{figure*} The event displays and sky-maps for the observed candidates are shown from Figure~\ref{fig:skymap-invisible1} to Figure~\ref{fig:skymap-invisible4} in Appendix~\ref{app:invisible}.
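The null-hypothesis $p$-values quoted here and in Section~\ref{result_soft} follow from Poisson counting statistics. The sketch below (our own cross-check) reproduces the visible-side values; the invisible-side values additionally reflect the ensemble of searched flares and are not reproduced by this single-window formula:

\begin{verbatim}
from scipy.stats import poisson

# Probability of observing at least n_obs background events in a
# search window with expectation b.
def p_value(n_obs, b):
    return poisson.sf(n_obs - 1, b)   # P(N >= n_obs)

print(p_value(1, 0.20))   # 0.181 -> 18.1%, November 4th 2003 window
print(p_value(1, 0.12))   # 0.113 -> 11.3%, September 6th 2017 window
\end{verbatim}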
\begin{longrotatetable} \begin{deluxetable}{ccccccc} \tablecaption{The summary of events observed within the search windows for neutrinos associated with solar flares that occurred on the invisible side of the Sun. The duration of each search window is $7238$~s, as detailed in Appendix~\ref{app:cme}. The number of expected background events in a search window is $0.62 \pm 0.01$.} \label{tb:summary-event-invisible} \tablehead{ \colhead{Date~(UTC)} & \twocolhead{November 7th, 2003} & \twocolhead{July 24th, 2005} & \colhead{June 4th, 2011} & \colhead{July 23rd, 2012} } \startdata SK phase & \multicolumn{2}{c}{SK-II} & \multicolumn{2}{c}{SK-II} & SK-IV & SK-IV \\ Observed time~(UTC) & 15:18:34 & 15:40:45 & 13:07:36 & 14:15:21 & 21:05:07 & 03:03:47 \\ Time difference between two events & \multicolumn{2}{c}{$1131$~s} & \multicolumn{2}{c}{$4065$~s} & -- & -- \\ Event topology & $1$-ring $e$-like & $2$-ring $e$-like & $1$-ring $\mu$-like & $1$-ring $\mu$-like & $4$-ring $e$-like & $3$-ring two $\mu$-like, $e$-like \\ Reconstructed energy & $3.58$~GeV & $493$~MeV & $126$~MeV & $1.35$~GeV & $2.14$~GeV & $834$~MeV \\ $\theta_{\mathrm{Sun}}$ & $20.0^{\circ}$ & $71.4^{\circ}$ & $100.4^{\circ}$ & $94.0^{\circ}$ & $101.0^{\circ}$ & $76.7^{\circ}$ \\ $p$-value of the null hypothesis & \multicolumn{2}{c}{$10.2\%$} & \multicolumn{2}{c}{$10.2\%$} & $33.5\%$ & $33.5\%$ \\ Probability of background event & \multicolumn{2}{c}{$10.2\%$} & \multicolumn{2}{c}{$34.5\%$} & -- & --\\ from timing distribution & & & & & & \\ \enddata \end{deluxetable} \end{longrotatetable} Figure~\ref{fig:time-dist-invisible} shows the observed neutrino events around the time of each solar flare. Note that the duration of the search window for solar flares on the invisible side of the Sun is uniform~($7238$~s), as detailed in Appendix~\ref{app:cme}. \begin{figure*}[] \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig8_tl.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig8_tr.pdf} \end{minipage} \\ \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig8_bl.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig8_br.pdf} \end{minipage} \caption{The time distributions of neutrino events around the solar flares that occurred on the invisible side of the Sun on November 7th, 2003~(top-left), July 24th, 2005~(top-right), June 4th, 2011~(bottom-left), and July 23rd, 2012~(bottom-right). The red points show the times of the neutrino events, which are summarized in Table~\ref{tb:summary-event-invisible}. The dashed vertical lines show the estimated start time of each CME emission and the shaded regions show the search windows~($7238$~s) according to the method described in Section~\ref{sec:invisible}. \label{fig:time-dist-invisible}} \end{figure*} In the case of the two solar flares on November 7th, 2003 and July 24th, 2005, we found two consecutive neutrino events within their search windows, with time differences of $1131$~s and $4065$~s, respectively. We analyzed the distribution of time differences between consecutive events in the background sample in order to quantify how likely such time differences are under the background-only hypothesis. Figure~\ref{fig:tdiff-dist-invisible} shows the time differences of the two consecutive event pairs observed within their search windows together with the time difference distribution of the background sample.
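As a simple analytic cross-check (ours, not the paper's empirical procedure), for a Poisson background the waiting time between consecutive events is exponentially distributed, so the probability of a gap no larger than the observed one is $1-e^{-r\,\Delta t}$ with $r$ the background rate:

\begin{verbatim}
import numpy as np

# Background rate of the high energy sample (Table tb:bg_rate),
# roughly 7.4 events per day.
r = 7.4 / 86400.0                       # events per second
for dt in (1131.0, 4065.0):             # observed gaps [s]
    print(dt, 1.0 - np.exp(-r * dt))    # ~0.09 and ~0.29
\end{verbatim}

These approximate values are close to the $10.2\%$ and $34.5\%$ obtained from the empirical background distribution.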
\begin{figure}[] \begin{center} \includegraphics[width=0.8\linewidth]{./fig9.pdf} \end{center} \caption{The time difference between the two events observed within the search windows for the solar flares on November 7th, 2003~($1131$~s, dashed green line) and July 24th, 2005~($4065$~s, dotted pink line). The red histogram shows the distribution of the time difference between consecutive events in the background sample using the combined data from SK-I to SK-IV. \label{fig:tdiff-dist-invisible}} \end{figure} Comparing with the time difference distribution of the background sample, we estimated the probability that each pair is a background coincidence, obtaining $10.2\%$ and $34.5\%$, respectively. Figure~\ref{fig:angle_rear} shows the reconstructed angle $\theta_{\mathrm{Sun}}$ together with the typical distributions derived from the MC simulation. The reconstructed values of $\theta_{\mathrm{Sun}}$ are also summarized in Table~\ref{tb:summary-event-invisible}. \begin{figure*}[] \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_tl.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_tr.pdf} \end{minipage} \\ \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_ml.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_mr.pdf} \end{minipage} \\ \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_bl.pdf} \end{minipage} \begin{minipage}{0.5\hsize} \centering\includegraphics[width=1.0\linewidth]{./fig10_br.pdf} \end{minipage} \caption{The reconstructed angles of the neutrino events in coincidence with solar flares that occurred on the invisible side of the Sun, together with the typical angular distributions from the MC simulation for the signal and background samples. The light green dashed lines show the angle between the reconstructed event direction and the direction from the Sun to SK, $\theta_{\rm Sun}$, at the time when the neutrino candidate was observed. The red~(black) histograms show the angular distributions of the MC~(background) sample in the given energy range.\label{fig:angle_rear}} \end{figure*} \section{Discussion} \label{sec:discuss} \subsection{Solar-flare neutrino fluence derived from the theoretical predictions} We estimated the fluence of solar-flare neutrinos produced by powerful solar flares based on the number of observed events within their corresponding search windows. Since no significant excess of observed events above the expected background rate was found, we calculate upper limits on the neutrino fluence using a Bayesian method~\citep{2000PhRvD..63a3009R}. We calculate the upper limits on the neutrino fluence separately for the low and high energy samples, depending on the neutrino energies. The neutrino fluence at the Earth~$\mathit{\Phi}$ is calculated from the neutrino flux~$F(E_{\nu})$ at the Earth in the search window, \begin{equation} \mathit{\Phi} = t_{\mathrm{emit}}\int F(E_{\nu}){\rm d}E_{\nu}, \end{equation} where $t_{\mathrm{emit}}$ is the duration of neutrino emission in a solar flare, taken to be $100$~s following the assumption in~\cite{2003ChJAS...3...75F}, $E_{\nu}$ is the neutrino energy, and $F(E_{\nu})$ is the predicted neutrino flux without neutrino oscillations in units of $\mathrm{cm^{-2}\,s^{-1}\,MeV^{-1}}$. We note that the duration of the search windows is sufficient to cover the duration of neutrino emission $t_{\mathrm{emit}}$.
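Before specializing to each sample, the core of the limit-setting procedure can be sketched numerically (our own minimal illustration: a flat prior in the signal expectation $S$ and no nuisance priors, whereas the full analysis marginalizes over the cross section, efficiency, and background uncertainties as described below):

\begin{verbatim}
import numpy as np

def upper_limit(n_obs, b, cl=0.90):
    """Bayesian upper limit on the signal expectation S at C.L. cl."""
    s = np.linspace(0.0, 50.0, 200001)
    posterior = np.exp(-(s + b)) * (s + b) ** n_obs  # Poisson likelihood
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

print(upper_limit(0, 0.0))    # ~2.30 events: the no-observation limit
print(upper_limit(1, 0.20))   # ~3.7 events
\end{verbatim}

Dividing such a limit on $S$ by the effective exposure, e.g. $N_{p}\,t_{\mathrm{emit}}\!\int\!\sigma\,\varepsilon\,\mathrm{d}E_{\nu}$, then yields the corresponding fluence limit.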
For the low energy sample, the expected number of neutrino interactions within the search window~$S$ is calculated using the following equation: \begin{equation} S \equiv N_{p}t_{\mathrm{emit}}\int \left[ F(E_{\bar{\nu}_{\mathrm{e}}} )P_{\mathrm{ee}} +F(E_{\bar{\nu}_{\mu}} )P_{\mu\mathrm{e}} \right] \sigma_{\mathrm{IBD}}(E_{\bar{\nu}_{\mathrm{e}}}) \varepsilon_{\mathrm{low}}^{\mathrm{Fargion}}(E_{\bar{\nu}_{\mathrm{e}}}) {\mathrm{d}}E_{\bar{\nu}_{\mathrm{e}}}, \label{def_s} \end{equation} \noindent where $N_{p}$ is the number of target protons in the SK fiducial volume relevant to the neutrino interactions, $P_{\alpha \beta}$ is the probability for a neutrino produced as flavor $\alpha$ to oscillate to flavor $\beta$ when travelling from the Sun to the Earth, $\sigma_{\mathrm{IBD}}$ is the IBD cross section as a function of the electron anti-neutrino energy derived from the theoretical model of~\cite{2003PhLB..564...42S}, and $\varepsilon_{\mathrm{low}}^{\mathrm{Fargion}}$ is the event selection efficiency of the low energy sample defined in Section~\ref{sec:analysis}. Using this expected number of neutrino interactions within the search window, the probability density function for the number of observed events is defined as follows: \begin{equation} P_{\mathrm{low}}(S+B | n_{\mathrm{obs}})= \frac{1}{A}\int\!\!\!\int\!\!\!\int \frac{\mathrm{e}^{-(S+B)}(S+B)^{n_{\rm obs}}}{n_{\mathrm{obs}}!} P(\sigma_{\mathrm{IBD}})P \left(\varepsilon_{\mathrm{low}}^{\mathrm{Fargion}} \right)P(B)d\sigma_{\mathrm{IBD}}d\varepsilon_{\mathrm{low}}^{\mathrm{Fargion}}dB \label{eq-fluence_low}, \end{equation} \noindent where $B$ is the number of expected background events in the search window, $A$ is a normalization factor representing the total integral of $P(S+B|n_{\mathrm{obs}})$, and $n_{\mathrm{obs}}$ is the number of observed events in the search window. To include the effect of systematic uncertainties, $P(\sigma_{\mathrm{IBD}})$, $P \left(\varepsilon_{\mathrm{low}}^{\mathrm{Fargion}} \right)$, and $P(B)$ are introduced as the prior probabilities for fluctuations of the IBD cross section, the event selection efficiency, and the number of expected background events in the search window, respectively. The priors are assumed to follow a Gaussian distribution, \begin{eqnarray} G(x) = \frac{1}{\sqrt{2\pi\delta^{2}_{x}}}\exp\left[-\frac{(x-x_{0})^{2}}{2\delta^{2}_{x}}\right], \label{eq-low} \end{eqnarray} \noindent where $x$ stands for $\sigma_{\mathrm{IBD}}$, $\varepsilon_{\mathrm{low}}$, or $B$; $x_{0}$ is the corresponding best estimate; and $\delta_{x}$ is the corresponding systematic uncertainty, denoted $\delta_{\sigma_{\mathrm{IBD}}}$, $\delta_{\varepsilon_{\mathrm{low}}}$, and $\delta_{B}$, respectively. For the systematic uncertainty of the IBD cross section~($\sigma_{\mathrm{IBD}}$), we assigned the uncertainty estimated in~\cite{2003PhLB..564...42S}. For the systematic uncertainty of the selection efficiency~($\delta_{\varepsilon_{\mathrm{low}}}$), the variation in the number of events after all reduction cuts was evaluated by artificially changing the reduction parameters, as performed in~\cite{2021PhRvD.104l2002A}. For the systematic uncertainty of the background rate~($\delta_{B}$), the deviation of the actual background rate is conservatively assigned. Table~\ref{tb:sys_error} summarizes the values of these systematic uncertainties. With these definitions, the neutrino fluence at a given confidence level~(C.L.)
is calculated as \begin{equation} {\rm C.L.} = \frac{\int^{\mathit{\Phi}_{\rm limit}}_{0}P_{\mathrm{low}}(S+B|n_{\rm obs}){\rm d} \mathit{\Phi}}{\int^{\infty}_{0}P_{\mathrm{low}}(S+B|n_{\rm obs}){\rm d} \mathit{\Phi}}, \end{equation} where $\mathit{\Phi}_{\mathrm{limit}}$ is the upper limit of the neutrino fluence to be obtained. \begin{table*}[] \begin{center} \caption{A summary of systematic uncertainties in this analysis. The systematic uncertainties for the event selection efficiency of the low and high energy samples are estimated from~\cite{2021PhRvD.104l2002A} and~\cite{2018PhRvD..97g2001A}, respectively. The systematic uncertainty of the neutrino cross section for the low energy sample simulations has been taken from~\cite{2003PhLB..564...42S}. For the high energy sample, the difference between the cross sections from~\cite{Smith:1972xh} and from~\cite{2004PhRvC..70e5503N} is assigned as the systematic uncertainty of the cross section. The deviation of the background rate listed in Table~\ref{tb:bg_rate} has been used as the systematic uncertainty of the background rate.} \label{tb:sys_error} \begin{tabular}{ccccccccc} \hline Variable & \multicolumn{4}{c}{Low energy sample} & \multicolumn{4}{c}{High energy sample} \\ &SK-I& SK-II & SK-III & SK-IV & SK-I& SK-II & SK-III & SK-IV \\\hline Selection efficiency ($\delta_{\varepsilon}$) & $5.0\%$ & $5.3\%$ & $3.5\%$ & $4.1\%$ & $1.5\%$ & $0.4\%$ & $1.5\%$ & $0.1\%$ \\ Cross section ($\delta_{\sigma}$) & \multicolumn{4}{c}{$\phantom{0}1.0\%$} & \multicolumn{4}{c}{$20.0\%$}\\ Background rate ($\delta_{B}$) & $5.0\%$ & $10.5\%$ & $5.0\%$ & $5.3\%$ & $0.9\%$ & $1.2\%$ & $1.6\%$ & $0.6\%$ \\ \hline \end{tabular} \end{center} \end{table*} Table~\ref{tb:lowe-flence} summarizes the upper limits on the fluence of electron anti-neutrinos for the low energy sample. Since no events were observed within the search windows, the upper limits on the neutrino fluence are $4.0\times10^{7}~\mathrm{cm^{-2}}$~($4.1\times10^{7}~\mathrm{cm^{-2}}$) for the selected solar flares occurring during SK-I~(SK-II and SK-IV). \begin{table*}[] \begin{center} \caption{A summary of the upper limits on the fluence of electron anti-neutrinos for the low energy sample. We assumed the neutrino energy spectrum from \cite{2003ChJAS...3...75F} and that IBD is the dominant reaction for reconstructed positron energies below $100$~MeV. As described in \cite{2020SoPh..295..133O}, for the solar flare on September 9th, 2005, the brightening of its soft X-ray light curve was relatively slow and the time derivative of the curve was not large enough. Due to this behavior, the time window for this solar flare was determined using the soft X-ray light curve directly instead of its derivative. As a result, the duration of the search window is much longer than the others, which leads to a higher background rate.} \label{tb:lowe-flence} \begin{tabular}{ccccc} \hline Side & Date of flare & Observed event & Expected background & Neutrino fluence \\ (Channel) & [UTC] & within window & within window & [$\mathrm{cm^{-2}{\,}flare^{-1}}$] \\ \hline Visible & 2002 Jul. 23 & No SK data & -- & -- \\ (Line $\gamma$-rays) & 2003 Nov. \phantom{0}2 & $0$ & $0.0029$ & $<4.0\times10^{7}$ \\ & 2005 Jan. 20 & $0$ & $0.0035$ & $<4.0\times10^{7}$ \\ \hline & 1997 Nov. \phantom{0}6 & $0$ & $0.0004$ & $<4.0\times10^{7}$ \\ & 2000 Jul. 14 & $0$ & $0.0027$ & $<4.0\times10^{7}$ \\ & 2001 Apr. \phantom{0}2 & $0$ & $0.0025$ & $<4.0\times10^{7}$ \\ & 2001 Apr.
\phantom{0}6 & $0$ & $0.0012$ & $<4.0\times10^{7}$ \\ & 2001 Apr. 15 & $0$ & $0.0010$ & $<4.0\times10^{7}$ \\ & 2001 Aug. 25 & No SK data & -- & -- \\ & 2001 Dec. 13 & No SK data & -- & -- \\ & 2002 Jul. 23 & No SK data & -- & -- \\ & 2003 Oct. 23 & $0$ & $0.0022$ & $<4.1\times10^{7}$ \\ & 2003 Oct. 28 & $0$ & $0.0016$ & $<4.1\times10^{7}$ \\ Visible& 2003 Oct. 29 & $0$ & $0.0016$ & $<4.1\times10^{7}$ \\ (Soft X-ray & 2003 Nov. \phantom{0}2 & $0$ & $0.0019$ & $<4.1\times10^{7}$ \\ derivative) & 2003 Nov. \phantom{0}4 & $0$ & $0.0026$ & $<4.1\times10^{7}$ \\ & 2005 Jan. 20 & $0$ & $0.0025$ & $<4.1\times10^{7}$ \\ & 2005 Sep. \phantom{0}7 & $0$ & $0.0023$ & $<4.1\times10^{7}$ \\ & 2005 Sep. \phantom{0}8 & $0$ & $0.0011$ & $<4.1\times10^{7}$ \\ & 2005 Sep. \phantom{0}9 & $0$ & $0.018$\phantom{0} & $<4.1\times10^{7}$ \\ & 2006 Dec. \phantom{0}5 & $0$ & $0.0016$ & $<4.1\times10^{7}$ \\ & 2006 Dec. \phantom{0}6 & $0$ & $0.0008$ & $<4.1\times10^{7}$ \\ & 2011 Aug. \phantom{0}9 & $0$ & $0.0007$ & $<4.1\times10^{7}$ \\ & 2012 Mar. \phantom{0}7 & $0$ & $0.0029$ & $<4.1\times10^{7}$ \\ & 2017 Sep. \phantom{0}6 & $0$ & $0.0012$ & $<4.1\times10^{7}$ \\ & 2017 Sep. 10 & $0$ & $0.0024$ & $<4.1\times10^{7}$ \\ \hline & 2001 Apr. 18 & $0$ & $0.016$ & $<4.0\times10^{7}$ \\ & 2002 Jul. 18 & No SK data & -- & -- \\ & 2002 Jul. 19 & No SK data & -- & -- \\ & 2003 Nov. \phantom{0}2 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ Invisible & 2003 Nov. \phantom{0}7 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ & 2003 Nov. \phantom{0}9 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ & 2005 Jul. 24 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ & 2011 Jun. \phantom{0}4 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ & 2012 Jul. 23 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ & 2014 Dec. 13 & $0$ & $0.016$ & $<4.1\times10^{7}$ \\ \hline \end{tabular} \end{center} \end{table*} For the high energy sample, we considered interactions of all neutrino flavors, because the distance between the Sun and the Earth is sufficiently long compared with the oscillation length of neutrinos whose energy is less than $100$~GeV~\citep{2006PhRvD..74i3004F}. The flavor ratio of solar-flare neutrinos at the production point is $\nu_{e}:\nu_{\mu}:\nu_{\tau} = 1:2:0$ due to their origin in $\pi^{\pm}$ and $\mu^{\pm}$ decay, while the flavor ratio at the detector is approximately $\nu_{e}:\nu_{\mu}:\nu_{\tau} = 1:1:1$~\citep{2009PhRvD..80k3006C}; we also assume equal numbers of neutrinos and anti-neutrinos, $\nu:\bar{\nu}=1:1$. The fluence upper limit for solar-flare neutrinos using the high energy sample can be obtained by a procedure similar to that for the low energy sample. The difference in the calculation procedure between them is the definition of the probability density function for the number of observed events.
In the high energy sample it is defined as follows: \begin{eqnarray} P_{\mathrm{high}}(S+B | n_{\mathrm{obs}}) & = & \displaystyle \frac{1}{A'}\int\!\!\!\int\!\!\!\int \frac{\mathrm{e}^{-(S+B)}(S+B)^{n_{\mathrm{obs}}}}{n_{\mathrm{obs}}!} P(\sigma(E_{\nu}))P \left(\varepsilon_{\mathrm{high}}^{\mathrm{Fargion}} \right)P(B)d\sigma(E_{\nu})d\varepsilon_{\mathrm{high}}^{\mathrm{Fargion}}dB, \\ \mathrm{and}~S & = & N_T \displaystyle \int dE_{\nu} \sum_{i=e,\mu,\tau,\bar{e},\bar{\mu},\bar{\tau}} \left( \frac{F(E_{\nu_{i}}) \sigma(E_{\nu_{i}})\varepsilon_{\mathrm{high}}^{\mathrm{Fargion}}(E_{\nu_{i}})}{6} \right) \label{eq-fluence_high}, \end{eqnarray} \noindent where $A'$ is a normalization factor representing the total integral of $P(S+B|n_{\mathrm{obs}})$, $N_{T}$ is the number of target nuclei in the detector's fiducial volume relevant to the neutrino interactions, $\sigma(E_{\nu})$ is the combined cross section for all interactions, and $\varepsilon_{\mathrm{high}}^{\mathrm{Fargion}}(E_{\nu})$ is the event selection efficiency of the high energy sample as defined in Section~\ref{sec:analysis}. The systematic uncertainty of the total cross section~($\delta_{\sigma(E_{\nu})}$) is estimated from the difference between the two theoretical models of~\cite{Smith:1972xh} and~\cite{2004PhRvC..70e5503N}. For the other systematic uncertainties~($\delta_{\varepsilon_{\mathrm{high}}}$ and $\delta_{B}$), the same procedure as for the low energy sample was performed. Table~\ref{tb:high-flence} summarizes the fluence limits for all detectable flavors in the high energy sample. The upper limit on the neutrino fluence at $90\%$~C.L. for solar flares occurring on the visible side of the Sun without neutrino candidates is $7.3\times10^{5}~\mathrm{cm^{-2}}$. The upper limit for the solar flares with one neutrino candidate, which occurred on November 4th, 2003 (September 6th, 2017), is $1.1\times10^{6}~\mathrm{cm^{-2}}$ ($1.2\times10^{6}~\mathrm{cm^{-2}}$). For the solar flares occurring on the invisible side of the Sun, the upper limits on the neutrino fluence at $90\%$~C.L. are $7.3\times10^{5}~\mathrm{cm^{-2}}$, $1.1\times10^{6}~\mathrm{cm^{-2}}$, and $1.6\times10^{6}~\mathrm{cm^{-2}}$ for solar flares with zero, one, and two neutrino candidates, respectively. \begin{table*}[] \begin{center} \caption{A summary of fluence limits for all detectable flavors in the high energy sample. We assumed the energy spectrum from~\cite{2003ChJAS...3...75F} and the neutrino interaction model from~\cite{2021EPJST.230.4469H}. The higher background rate for the solar flare on September 9th, 2005 is explained in the caption of Table~\ref{tb:lowe-flence}.} \label{tb:high-flence} \begin{tabular}{ccccc} \hline Side & Date of flare & Observed event & Expected background & Neutrino fluence \\ (Channel) & [UTC] & within window & within window & [$\mathrm{cm^{-2}{\,}flare^{-1}}$] \\ \hline Visible & 2002 Jul. 23 & No SK data & -- & -- \\ (Line $\gamma$-rays) & 2003 Nov. \phantom{0}2 & $0$ & $0.11$ & $<7.3\times10^{5}$ \\ & 2005 Jan. 20 & $0$ & $0.14$ & $<7.3\times10^{5}$ \\ \hline & 1997 Nov. \phantom{0}6 & $0$ & $0.04$ & $<7.3\times10^{5}$ \\ & 2000 Jul. 14 & $0$ & $0.20$ & $<7.3\times10^{5}$ \\ & 2001 Apr. \phantom{0}2 & $0$ & $0.20$ & $<7.3\times10^{5}$ \\ & 2001 Apr. \phantom{0}6 & $0$ & $0.10$ & $<7.3\times10^{5}$ \\ & 2001 Apr. 15 & $0$ & $0.08$ & $<7.3\times10^{5}$ \\ & 2001 Aug. 25 & No SK data & -- & -- \\ & 2001 Dec. 13 & No SK data & -- & -- \\ & 2002 Jul. 23 & No SK data & -- & -- \\ & 2003 Oct.
23 & $0$ & $0.16$ & $<7.3\times10^{5}$ \\ & 2003 Oct. 28 & $0$ & $0.12$ & $<7.3\times10^{5}$ \\ Visible& 2003 Oct. 29 & $0$ & $0.12$ & $<7.3\times10^{5}$ \\ (Soft X-ray & 2003 Nov. \phantom{0}2 & $0$ & $0.14$ & $<7.3\times10^{5}$ \\ derivative) & 2003 Nov. \phantom{0}4 & $1$ & $0.20$ & $<1.1\times10^{6}$ \\ & 2005 Jan. 20 & $0$ & $0.18$ & $<7.3\times10^{5}$ \\ & 2005 Sep. \phantom{0}7 & $0$ & $0.18$ & $<7.3\times10^{5}$ \\ & 2005 Sep. \phantom{0}8 & $0$ & $0.08$ & $<7.3\times10^{5}$ \\ & 2005 Sep. \phantom{0}9 & $0$ & $0.67$ & $<7.3\times10^{5}$ \\ & 2006 Dec. \phantom{0}5 & $0$ & $0.12$ & $<7.3\times10^{5}$ \\ & 2006 Dec. \phantom{0}6 & $0$ & $0.06$ & $<7.3\times10^{5}$ \\ & 2011 Aug. \phantom{0}9 & $0$ & $0.06$ & $<7.3\times10^{5}$ \\ & 2012 Mar. \phantom{0}7 & $0$ & $0.20$ & $<7.3\times10^{5}$ \\ & 2017 Sep. \phantom{0}6 & $1$ & $0.12$ & $<1.2\times10^{6}$ \\ & 2017 Sep. 10 & $0$ & $0.18$ & $<7.3\times10^{5}$ \\ \hline & 2001 Apr. 18 & $0$ & $0.62$ & $<7.3\times10^{5}$ \\ & 2002 Jul. 18 & No SK data & -- & -- \\ & 2002 Jul. 19 & No SK data & -- & -- \\ & 2003 Nov. \phantom{0}2 & $0$ & $0.62$ & $<7.3\times10^{5}$\\ Invisible & 2003 Nov. \phantom{0}7 & $2$ & $0.62$ & $<1.6\times10^{6}$\\ & 2003 Nov. \phantom{0}9 & $0$ & $0.62$ & $<7.3\times10^{5}$\\ & 2005 Jul. 24 & $2$ & $0.62$ & $<1.6\times10^{6}$ \\ & 2011 Jun. \phantom{0}4 & $1$ & $0.62$ & $<1.1\times10^{6}$ \\ & 2012 Jul. 23 & $1$ & $0.62$ & $<1.1\times10^{6}$ \\ & 2014 Dec. 13 & $0$ & $0.62$ & $<7.3\times10^{5}$ \\ \hline \end{tabular} \end{center} \end{table*} In order to calculate the upper limit for each theoretical model, $F(E_{\nu})$ in Eq.~(\ref{eq-fluence_high}) is replaced by the flux predictions from~\cite{1991NCimC..14..417K} and~\cite{2013ICRC...33.3656T}, and the selection efficiencies are evaluated with the replaced predictions. Figure~\ref{fig:other_flux} shows the comparison between the upper limits on the neutrino fluence and the predicted fluences from the three theoretical models. These results experimentally exclude the model of~\cite{2003ChJAS...3...75F}, which predicts several interactions in the SK detector when an energetic solar flare occurs on the invisible side of the Sun, as listed in Table~\ref{tb:neutrino-model}. However, the upper limits assuming the neutrino spectra from~\cite{1991NCimC..14..417K} and \cite{2013ICRC...33.3656T} are still higher than their predictions. As a future prospect, the model of~\cite{1991NCimC..14..417K} can be tested by the next generation of neutrino detectors with significantly larger target volumes. The model of~\cite{2013ICRC...33.3656T}, however, may be difficult to test even with the next generation of neutrino detectors. \begin{figure}[] \begin{center} \includegraphics[width=0.8\linewidth]{./fig11.pdf} \end{center} \caption{The comparison between the upper limits on the neutrino fluence, calculated assuming the neutrino spectra of the theoretical models of~\cite{1991NCimC..14..417K}, \cite{2004JHEP...06..045F}, and~\cite{2013ICRC...33.3656T}, and their predicted fluences. The black dashed lines with downward arrows~(red solid lines) show the upper limits on the neutrino fluence~(the expected neutrino fluence based on each theoretical model). The upper limits are conservatively calculated by considering two neutrino candidates detected within the search windows.
\label{fig:other_flux} } \end{figure} \subsection{Model-independent solar-flare neutrino fluences} \label{sec:ind} As explained in Section~\ref{sec:neutrino}, the excess of events reported by the Homestake experiment originally suggested the existence of solar-flare neutrinos. Following this result, experimental searches for solar-flare neutrinos have mainly been performed by neutrino detectors in the energy region below $100$~MeV. It should be noted that past studies by the SNO~\citep{2014APh....55....1A}, Borexino~\citep{Agostini:2019yuq}, and KamLAND~\citep{2022ApJ...924..103A} experiments searched for neutrinos from solar flares in coincidence with the soft X-ray light curves recorded by the GOES satellite, including solar flares with smaller intensity, such as M-class flares. Due to the different assumptions and samples of selected solar flares, we cannot directly compare previous experimental results with the results presented in this article. To compare these results with those from other experiments, the upper limit of neutrino fluence without considering a specific model was also calculated using the low energy sample. In this case, the probability density function at a neutrino energy $E$ is defined as follows: \begin{eqnarray} P_{\mathrm{low}}(S+B | n_{\mathrm{obs}})(E) &=& \frac{1}{A}\int\!\!\!\int\!\!\!\int \frac{\mathrm{e}^{-(S(E)+B)}(S(E)+B)^{n_{\mathrm{obs}}}}{n_{\mathrm{obs}}!} P(\sigma_{\mathrm{IBD}})P \left(\varepsilon^{\mathrm{Ind}}_{\mathrm{low}} \right)P(B)d\sigma_{\mathrm{IBD}}d\varepsilon_{\mathrm{low}}^{\mathrm{Ind}}dB, \\ \mathrm{and}~S(E) &=& N_{p}t_{\mathrm{emit}}\int F(E_{\nu})\theta(E,E_\nu) \sigma_{\mathrm{IBD}}(E_{\nu}) \varepsilon_{\mathrm{low}}^{\mathrm{Ind}}{\mathrm{d}}E_{\nu} \end{eqnarray} \noindent where $\varepsilon^{\mathrm{Ind}}_{\mathrm{low}}$ is the selection efficiency listed in Table~\ref{tb:bg_rate}, $\theta(E,E_{\nu})$ is a step function defined as \begin{equation} \theta(E,E_\nu) = \left\{ \begin{array}{ll} 1 & (E-5~{\rm MeV}< E_{\nu} \le E+5~{\rm MeV}), \\ 0 & ({\rm otherwise}), \end{array} \right. \end{equation} and the other variables and functions are the same as those used in Eq.~(\ref{eq-fluence_low}). To convert from the reconstructed positron energy to the incoming electron anti-neutrino energy, the theoretical model of~\cite{2003PhLB..564...42S} is used. To address the effect of energy resolution and the energy of the simultaneously produced neutron, the data in the neutrino energy range from $20$ to $110$~MeV were analyzed and the upper limit of neutrino fluence was calculated every $10$~MeV. Figure~\ref{fig:fluence} shows the SK result for the upper limit of neutrino fluence without considering a specific theoretical model, together with other experimental results~\citep{1994PrPNP..32...13D, 1988PhRvL..61.2653H, 2014APh....55....1A, Agostini:2019yuq, 2022ApJ...924..103A}. Compared with other experimental limits of neutrino fluence, the SK limit improves on previous results by at least an order of magnitude in the energy region from $20$ to $110$~MeV. The SK limit fully excludes the allowed parameter region favored by the Homestake experiment~\citep{1994PrPNP..32...13D} and places a strong constraint on the neutrino fluence from powerful solar flares. \begin{figure}[] \begin{center} \includegraphics[width=0.8\linewidth]{./fig12.pdf} \end{center} \caption{The upper limit of neutrino fluence from the data taken by SK-I, II, III, and IV~(red thick-solid) together with the other experimental results.
The orange contour shows the allowed parameter region from the Homestake experiment~\citep{1994PrPNP..32...13D}. Black long-dashed-dotted, blue dotted, green thin-solid, and pink dashed lines show the upper limits from the Kamiokande~\citep{1988PhRvL..61.2653H}, SNO~\citep{2014APh....55....1A}, Borexino~\citep{Agostini:2019yuq}, and KamLAND~\citep{2022ApJ...924..103A} experiments, respectively. \label{fig:fluence}} \end{figure} \subsection{Energy conversion factor} \label{sec:e_conversion} As explained in Section~\ref{sec:time-intro}, \cite{2003ChJAS...3...75F} estimated the number of interactions in the SK detector by introducing the conversion factor $\eta$ in Eq.~(\ref{eq_eta}). The experimental search for solar-flare neutrinos by the SK detector gives a constraint on this parameter. However, estimating the total energy of a solar flare is difficult because the magnetic energy released in a solar flare is converted into a variety of different forms. Accordingly, the total energy has been estimated only for a limited number of solar flares. By considering the selection efficiencies of each sample and the energies of powerful solar flares, the conversion factor is calculated based on the number of observed events in each sample. For the high energy sample, we used Eq.~(\ref{eq_eta}) as the number of interactions. For the low energy sample, we used the following equation, since~\cite{2004JHEP...06..045F} estimated the number of interactions using only the IBD reaction, \begin{equation} n_{\mathrm{int}}^{\mathrm{IBD}} = \left[0.63 \left(\frac{\overline{E}_{\bar{\nu}_{e}}}{35~\mathrm{MeV}} \right) + 1.58 \right] \, \eta \left(\frac{E_{\mathrm{FL}}}{10^{31}~\mathrm{erg}} \right), \end{equation} \noindent where $n_{\mathrm{int}}^{\mathrm{IBD}}$ is the number of IBD interactions in the SK detector, $\overline{E}_{\bar{\nu}_{e}}$ is the average energy of the electron anti-neutrino spectrum derived from~\cite{2004JHEP...06..045F}, and the first and second terms in the square brackets are the numbers of interactions in the energy ranges of $10$--$100$~MeV and $100$~MeV--$1$~GeV, respectively\footnote{The expected fraction of interactions above $1$~GeV is estimated to be $4.9\%$ by calculating the electron anti-neutrino spectrum from~\cite{2003ChJAS...3...75F} and the IBD cross section from~\cite{2003PhLB..564...42S}. We ignored this contribution because this analysis is subject to other uncertainties, such as the total energy of the solar flare and the fluctuation of the background event rate in the low energy sample.}. Table~\ref{tb:conversion} summarizes the $90\%$~C.L. upper limits of the conversion factors, $\eta_{\mathrm{low}}$ for the low energy sample and $\eta_{\mathrm{high}}$ for the high energy sample, for the most powerful solar flares during solar cycles 23 and 24. In the case of the solar flare that occurred on November 4th, 2003, \cite{2005AA...433.1133K} estimated the total energy released as $1.3\times10^{34}$~ergs by analyzing electrons above $20$~keV. By analyzing the high~(low) energy sample, the conversion factor~$\eta$ was found to be $<0.0006$~($<0.0025$) at $90\%$~C.L. These conversion factors are at least $10^{3}$ times smaller than the assumption of~\cite{2003ChJAS...3...75F}. Hence, this result suggests that the conversion factor introduced in~\cite{2003ChJAS...3...75F} is too optimistic an assumption for the energy transfer that produces neutrinos during the solar flare. \begin{table*}[] \begin{center} \caption{A summary of the upper limits of the conversion factor~$\eta$.
The estimated energies of selected solar flares are taken from~\cite{2004JGRA..10910104E}, \cite{2005AA...433.1133K}, \cite{2014ApJ...797...50A}, \cite{2015ApJ...802...53A}, and \cite{2020GeAe..60..929M}. Note that their flare energies are estimated using different forms of energy, i.e., magnetic energy in \cite{2004JGRA..10910104E} and \cite{2020GeAe..60..929M}, total energy released by electrons above $20$~keV in \cite{2005AA...433.1133K}, thermal energy in \cite{2015ApJ...802...53A}, and magnetic potential energy in \cite{2014ApJ...797...50A}.} \label{tb:conversion} \begin{tabular}{ccccc} \hline Date of flare & Estimated energy & Reference & $\eta_{\mathrm{low}}$ & $\eta_\mathrm{high}$\\ & of solar flare~[erg] & for estimated energy & & \\ \hline 2002 Jul. 23 & $10^{32.3}$ & \cite{2004JGRA..10910104E} & \multicolumn{2}{c}{No SK data} \\ 2003 Oct. 28 & $10^{32.3}$ & \cite{2004JGRA..10910104E} & $<0.16\phantom{00}$ & $<0.025\phantom{0}$ \\ 2003 Nov. \phantom{0}4 & $1.3 \times 10^{34}$ & \cite{2005AA...433.1133K} & $<0.0025$ & $<0.0006$ \\ 2011 Aug. \phantom{0}9 & $1.29 \times 10^{32}$ & \cite{2015ApJ...802...53A} & $<0.095\phantom{0}$ & $<0.038\phantom{0}$ \\ 2012 Mar. \phantom{0}7 & $1.74\times10^{33}$ & \cite{2014ApJ...797...50A} & $<0.018\phantom{0}$ & $<0.0028$ \\ 2017 Sep. \phantom{0}6 & $5.6\times10^{32}$ & \cite{2020GeAe..60..929M} & $<0.022\phantom{0}$ & $<0.014\phantom{0}$ \\ \hline \end{tabular} \end{center} \end{table*} \section{Summary and future prospects} Neutrinos from solar flares are essential for understanding the mechanisms of proton acceleration at the astrophysical site. For solar flares that occurred on the visible and invisible sides of the Sun, we first estimated the time of neutrino emission using optical light curves and CME observations by solar satellites. We then searched for neutrino events in the Super-Kamiokande detector coincident with these solar flares. Two neutrino events were observed coincident with solar flares that occurred on the visible side of the Sun, while six neutrino events were observed coincident with solar flares that occurred on the invisible side of the Sun. All of them are consistent with the background rate under the usual operation of the SK detector. Based on the observed events within the search window, we obtained upper limits on the neutrino fluence depending on the assumed theoretical neutrino production model. For example, the fluence limit for the largest solar flare of class X$28.0$, which occurred on the visible side of the Sun on November 4th, 2003, is $1.1\times10^{6}~\mathrm{cm^{-2}}$. In addition, the fluence limit for the solar flare that occurred on the invisible side of the Sun on November 7th, 2003, which followed the largest solar flare, is $1.6\times10^{6}~\mathrm{cm^{-2}}$. From the obtained fluences, the upper limits on the energy conversion factor were estimated based on \cite{2003ChJAS...3...75F}. In the case of the largest solar flare on November 4th, 2003, $\eta<0.0006$ at $90\%$~C.L., which is two orders of magnitude smaller than the estimate of~\cite{2003ChJAS...3...75F}. Therefore, this experimental result suggests that the theoretical assumption of energy conversion during solar flares should be reconsidered. In order to compare these results with other experimental searches, the fluence limit below $100$~MeV was also obtained without considering a specific theoretical model. The SK result is the most stringent constraint on the neutrino fluence from solar flares in the MeV region to date.
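For reference, the statistical core of all the limits quoted above is a Poisson counting problem. The following minimal sketch (in Python; the variable names are ours, and the convolutions over the systematic-uncertainty priors $P(\sigma)$, $P(\varepsilon)$, and $P(B)$ of $P_{\mathrm{high}}$ and $P_{\mathrm{low}}$ are deliberately omitted) computes the flat-prior Bayesian $90\%$~C.L. upper limit on the number of signal events for a fixed expected background; such an event limit is converted into a fluence limit through the expressions for $S$ in Eqs.~(\ref{eq-fluence_low}) and~(\ref{eq-fluence_high}).
\begin{verbatim}
import numpy as np
from math import factorial

def poisson_upper_limit(n_obs, b, cl=0.90, s_max=30.0, n_grid=30001):
    """Flat-prior Bayesian upper limit on the signal S at confidence cl,
    for n_obs observed events and a fixed expected background b."""
    s = np.linspace(0.0, s_max, n_grid)
    posterior = np.exp(-(s + b)) * (s + b) ** n_obs / factorial(n_obs)
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]                       # normalize the cumulative posterior
    return s[np.searchsorted(cdf, cl)]

print(poisson_upper_limit(0, 0.12))      # ~2.3 events for zero candidates
print(poisson_upper_limit(1, 0.20))      # ~3.7 events for one candidate
\end{verbatim}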
In July 2020, $13$~tonnes of $\mathrm{Gd_{2}(SO_{4})_{3}\cdot8H_{2}O}$~(gadolinium sulfate octahydrate) were dissolved into the SK water tank in order to improve its neutron detection efficiency~\citep{2022NIMPA102766248A}, followed by an additional $26$~tonnes in June 2022. The main motivation for the gadolinium loading is to increase the detector's sensitivity to diffuse supernova electron anti-neutrinos. This technique also enhances the sensitivity to solar-flare neutrinos. In addition to the SK phases with Gd, further searches to understand the production of solar-flare neutrinos should be performed by large scale neutrino detectors such as Hyper-Kamiokande~\citep{2018arXiv180504163H}, IceCube-Gen2~\citep{2020arXiv200804323T}, and JUNO~\citep{2015arXiv150807166A} during solar cycle~$25$, which started in late 2019. \begin{acknowledgments} We thank D.~Fargion from the Sapienza University of Rome for providing the expected fluence of neutrinos from solar flares. We also thank S.~Masuda from the Institute for Space-Earth Environmental Research, Nagoya University, T.~Terasawa from the Institute for Cosmic Ray Research, the University of Tokyo, and S.~Yashiro from the Catholic University of America, for valuable discussions related to the determination of the search windows on both the visible and invisible sides of the Sun. We gratefully acknowledge the cooperation of the Kamioka Mining and Smelting Company. The Super-Kamiokande experiment has been built and operated with funding from the Japanese Ministry of Education, Culture, Sports, Science and Technology, the U.S. Department of Energy, and the U.S. National Science Foundation. Some of us have been supported by funds from the National Research Foundation of Korea NRF-2009-0083526~(KNRC) funded by the Ministry of Science, ICT, and Future Planning and the Ministry of Education~(2018R1D1A1B07049158, 2021R1I1A1A01059559), the Japan Society for the Promotion of Science, the National Natural Science Foundation of China under Grant No.~11620101004, the Spanish Ministry of Science, Universities and Innovation~(grant PGC2018-099388-B-I00), the Natural Sciences and Engineering Research Council~(NSERC) of Canada, the SciNet and WestGrid consortia of Compute Canada, the National Science Centre~(UMO-2018/30/E/ST2/00441) and the Ministry of Education and Science~(DIR/WK/2017/05), Poland, the Science and Technology Facilities Council~(STFC) and GridPP, UK, the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement no.~754496, H2020-MSCA-RISE-2018 JENNIFER2 grant agreement no.~822070, and H2020-MSCA-RISE-2019 SK2HK grant agreement no.~872549. This work was carried out under the joint research program of the Institute for Space-Earth Environmental Research~(ISEE), Nagoya University. A part of this study was carried out using the computational resources of the Center for Integrated Data Science, Institute for Space-Earth Environmental Research, Nagoya University, through the joint research program. \end{acknowledgments}
\section{Introduction}\label{SecIntroduction} Neutral particles can emulate the dynamics of electrons in the presence of magnetic fields through the engineering of artificial gauge fields \cite{Dalibard2011,Goldman2014}. In the well-known Aharonov-Bohm effect \cite{Aharonov1959,Wu1975}, a charged particle performing a closed loop in a region with a non-zero electromagnetic potential acquires not only a dynamical phase but also an additional phase known as the Aharonov-Bohm phase. For particular periodic lattice geometries, single-particle wavefunctions undergo a sharp localization due to destructive interference known as Aharonov-Bohm caging \cite{Vidal1998,Vidal2000a}. This effect arises in systems such as the $\mathcal{T}_3$ model \cite{Vidal1998,Bercioux2009,Bercioux2011} or the diamond chain \cite{Vidal2000a}, and it has been observed in several experimental platforms, such as networks of conducting wires \cite{Abilio1999,Naud2001}, ultracold atoms \cite{Shinohara2002}, and photonic lattices \cite{Mukherjee2018,Kremer2020,Jorg2020}. Of particular interest is the role that interactions play in a system with single-particle Aharonov-Bohm caging, which has been explored in different regimes \cite{DiLiberto2019,Gligoric2019,Vidal2000a,Creffield2010,Pelegri2020}. The addition of interactions lifts the degeneracy of the single-particle flat bands, providing a mechanism for particles to avoid caging \cite{DiLiberto2019,Vidal2000a,Creffield2010}. However, in the regime of strong interactions, Aharonov-Bohm caging of two particles can be recovered for appropriately tuned magnetic fluxes through the formation of bound states \cite{Creffield2010}. Here, we study a one-dimensional lattice of ring potentials populated by orbital angular momentum (OAM) modes with $l=1$ and winding numbers $\nu=\pm l$. Such states give rise to complex couplings that can be engineered by modifying the geometry of the lattice \cite{Polo2016a,Pelegri2019,Pelegri2019a,Pelegri2019b,Pelegri2019c,Pelegri2020}. Thus, it is a system where synthetic fluxes arise naturally. Ring trapping potentials can be created experimentally using a variety of techniques (see \cite{Amico2021} and references therein), and OAM can be transferred by rotating a weak link \cite{Ramanathan2011,Wright2013}, by coherent transfer of angular momentum from photons to the atoms \cite{Andersen2006,Franke-Arnold2017}, or by performing a temperature quench \cite{Corman2014a}. Alternatively, such a model can be realized by exciting atoms to the $p$ band in a conventional optical lattice \cite{Wirth2011,Li2016,Kiely2016,Kock2016}. The local eigenstates with winding number $\nu=\pm l$ provide the system with a synthetic dimension, such that it can be mapped to a Creutz ladder model with a flux threading each plaquette. For this family of models, interaction-induced effects have been studied for repulsive \cite{Takayoshi2013,Tovmasyan2013,Zurita2020} and attractive \cite{Tovmasyan2016,Tovmasyan2018} on-site interactions, and for nearest-neighbor interactions \cite{Sticlet2014,Junemann2017,Kuno2020b}. In particular, two-body Aharonov-Bohm caging was explored in \cite{Zurita2020}, where a photonic lattice implementation was proposed. Here, we explore the $N$-boson case and further generalize the study to the case of non-uniform fluxes, which are known to enrich the Aharonov-Bohm caging phenomenology in single-particle diamond lattices \cite{Mukherjee2020}. The article is organized as follows.
We introduce the system in Section~\ref{SecPhysicalSystem} and analyze the single-particle case in Section~\ref{SecSingleParticle}. For the case in which a $\pi$-flux threads each plaquette, we analyze both the topology of the system and the Aharonov-Bohm caging effect in terms of the compact localized states (CLSs) that compose the flat-band spectrum. In Section~\ref{SecNParticle}, we generalize this study to the case of $N$ particles by introducing on-site repulsive interactions and studying the regime of strong interactions using perturbation theory. In Section~\ref{SecStaggered}, we generalize the study to the case of non-uniform fluxes, and we summarize our conclusions in Section~\ref{SecConclusions}. \section{Physical system}\label{SecPhysicalSystem} We consider a few bosons loaded into a one-dimensional lattice where the adjacent sites are equally separated by a distance $d$. Each unit cell $k$ is composed of two sites $A_k$ and $B_k$, and we make the lattice staggered by introducing an angle $\phi$ as depicted in Fig. \ref{FigPhysicalSystem}. Given the local polar coordinates of each site, $(\rho_{j_k},\varphi_{j_k})$ with $j=A,B$, the local trapping potential is a ring potential of the form $V(\rho_{j_k})=\frac{1}{2} M \omega^{2}(\rho_{j_k}-\rho_0)^{2}$, where $\omega$ is the frequency of the radial potential, $M$ is the mass of the particles, and $\rho_0$ is the radius. For $\rho_0=0$, the ring trap reduces to a harmonic potential. We consider identical local potentials at each site. \begin{figure}[t] \includegraphics{figure0} \caption{Diagram of the one-dimensional staggered chain where the adjacent sites $A$ and $B$ are separated by a distance $d$. The unit cell is marked by a rectangle and the grey line indicates the origin of the phase $\varphi_0$. The black arrows denote real tunneling amplitudes while the blue ones indicate complex tunneling amplitudes between states of different winding number.}\label{FigPhysicalSystem} \end{figure} The eigenstates of each isolated ring have a well-defined orbital angular momentum (OAM) $l$ with winding numbers $\nu=\pm l$. We will denote the local eigenstates as $|j_k^\nu\rangle$, where $k$ is the unit cell index, $j=A,B$ is the site, and $\nu$ is the winding number. These sets of local eigenstates with different OAM $l$ are well-separated in energy, which makes them effectively decoupled in a lattice structure \cite{Polo2016a,Pelegri2019}. Then, the total field operator for the states with OAM $l$ in the lattice reads \begin{equation}\label{EqWavefunctionL} \begin{aligned} \hat{\Psi}_l=& \sum_{k=1}^{N_{c}} \sum_{\nu=\pm l} \phi^{\nu}_{A_{k}}\left(\rho_{A_{k}}, \varphi_{A_{k}}\right) \hat{a}^{\nu}_{k}+\phi^{\nu}_{B_{k}}\left(\rho_{B_{k}}, \varphi_{B_{k}}\right) \hat{b}^{\nu}_{k}, \end{aligned} \end{equation} where $N_c$ is the number of unit cells, and $\hat{a}^{\nu}_{k}$ and $\hat{b}^{\nu}_{k}$ are the annihilation operators of the local eigenstates $|A_k^{\nu}\rangle$ and $|B_k^{\nu}\rangle$, respectively. The wavefunctions of each state $|j_k^\nu\rangle$ are given by \begin{equation} \phi^{\nu}_{j_{k}}\left(\rho_{j_{k}}, \varphi_{j_{k}}\right)=\left\langle\mathbf{r} \mid j_{k}^ \nu\right\rangle=\psi\left(\rho_{j_{k}}\right) e^{i\nu\left(\varphi_{j_{k}}-\varphi_{0}\right)}, \end{equation} where $\psi\left(\rho_{j_{k}}\right)$ is the radial part of the wavefunction and $e^{i\nu\left(\varphi_{j_{k}}-\varphi_{0}\right)}$ is the complex phase due to the non-zero OAM, with $\varphi_0$ indicating the origin of the phase.
Consider now a single unit cell, \textit{i.e.}, two rings side by side ($j=A,B$). The single-particle Hamiltonian restricted to a fixed value of OAM reads \begin{equation}\label{EqTotalHamiltonian} \hat{\mathcal{H}}_{l}^0=\int d^2r\, \hat{\Psi}_{l}^{\dagger}\left[-\frac{\hbar^{2} \nabla^{2}}{2 M}+V(\mathbf{r})\right] \hat{\Psi}_{l}, \end{equation} where the total potential $V(\mathbf{r})$ is the sum of the truncated potentials of each site. The tunneling amplitudes between the states $|j^\nu_k\rangle$ with OAM $l$ are given by the overlap integrals of the corresponding wavefunctions $\phi^\nu_{j}(\rho_j,\varphi_j)$ \cite{Polo2016a}, \begin{equation}\label{EqCouplings} J^{\nu,\nu'}_{j,j'}=e^{i(\nu-\nu') \varphi_{0}} \int\left(\phi^{\nu}_{j}\left(\varphi_{0}=0\right)\right)^{*} \hat{\mathcal{H}}_{l}^0\, \phi^{\nu'}_{j'}\left(\varphi_{0}=0\right) d^{2} r, \end{equation} where $j,j'=A,B$ identify the sites, and $\nu,\nu'=\pm l$, the winding numbers. Also, we have factorized and rewritten the wavefunctions as $\phi_{j}^{\nu}=e^{-i\nu\varphi_{0}} \phi_{j}^{\nu}\left(\varphi_{0}=0\right)$. These couplings were thoroughly analyzed in \cite{Polo2016a} by studying the mirror symmetries of the system. The authors found that there are only three distinct couplings: $J_1\equiv J_{j, j}^{\nu,-\nu}$ couples the opposite winding number OAM modes within a single ring, $J_{2} \equiv J_{A,B}^{\nu,\nu}$ couples same winding number modes in adjacent rings, and $J_{3} \equiv J_{A,B}^{\nu,-\nu}$ couples opposite winding number modes in adjacent rings. The complex factor in each coupling (\ref{EqCouplings}) is determined by the origin of the phase, $\varphi_0$, through the factor $e^{i(\nu-\nu')\varphi_0}$. For two inline rings, $\varphi_0$ can always be chosen so that the complex factor reduces to unity and the couplings are real. We choose the origin of the phase along the $A_k$ and $B_k$ sites of the same unit cell (see Fig.~\ref{FigPhysicalSystem}), such that the corresponding couplings are real. The inter-cell couplings between the sites $B_k$ and $A_{k+1}$ form an angle $\phi$ with respect to the origin of the phase, such that the corresponding couplings $J_3$ and $J_1$ acquire a complex phase $e^{\pm i2l\phi}$. Therefore, one can tune the complex phase of these couplings by modifying the geometry of the staggered chain, \textit{i.e.}, the angle $\phi$ (see Fig.~\ref{FigPhysicalSystem}). The couplings in a two-ring system for $l=1$ were studied in \cite{Pelegri2019}: the authors found that the magnitudes of the couplings decay with the separation distance $d$ between the two rings, while the difference between $|J_3|$ and $|J_2|$ also decreases with $d$. Additionally, $|J_1|$ is one order of magnitude smaller than $|J_2|$ and $|J_3|$ for all distances. In this work, we focus on the regime of large distances, defining $|J_2|=|J_3|\equiv J$, and we neglect the $J_1$ coupling. Also, we study the states with OAM $l=1$ and winding numbers $\nu=\pm 1$ and consider an integer number of unit cells. Henceforth, we will replace the winding number with the label of the circulation $\alpha=\pm$.
Given the above assumptions and using harmonic oscillator units, the single-particle Hamiltonian of this system reads \begin{equation}\label{EqSingleParticleBoseHubbardHamiltonian} \begin{aligned} \hat{\mathcal{H}}_{l=1}^0=\,&J\sum_{\alpha=\pm}\Bigg[ \sum_{k=1}^{N_c}\Big(\hat{a}^{\alpha \dagger}_{k} \hat{b}^{\alpha}_{k}+\hat{a}^{\alpha \dagger}_{k} \hat{b}^{-\alpha}_{k}\Big)+\\ &\sum_{k=1}^{N_c-1}\Big(\hat{b}^{\alpha \dagger}_{k} \hat{a}^{\alpha}_{k+1}+e^{-2 \alpha i \phi} \hat{b}^{\alpha\dagger}_{k } \hat{a}^{-\alpha}_{k+1}\Big)+\mathrm{H.c.}\Bigg]. \end{aligned} \end{equation} By representing the two circulations $+$ and $-$ as separate sites, one can depict this system as the Creutz ladder with vanishing vertical couplings shown in Fig. \ref{FigCreutz}. The two circulations $\alpha=\pm$ act as a synthetic dimension that constitutes the two legs of the ladder. Henceforward, we use the notation $|j_k^\alpha,n\rangle$ to denote the number of particles $n$ in the local state $|j_k^\alpha\rangle$. In the following Section, where we discuss the single-particle case, $n$ will always be $n=1$. In this case, the states in each site are $|A_k^\alpha,1\rangle$ and $|B_k^\alpha,1\rangle$, and the ladder parameters are $\mathcal{J}=J$ and $\theta=2\phi$. \begin{figure}[t] \includegraphics{figure1} \caption{Schematic representation of the sites and couplings of the lattice formed by a real dimension and the synthetic dimension spanned by the two circulations $\pm $ in each site $A_k$ and $B_k$. The unit cell is indicated as a dotted rectangle and the complex couplings are $e^{i\theta}\mathcal{J}$ from circulation $+$ to $-$ and the complex conjugate in the opposite direction. }\label{FigCreutz} \end{figure} \section{Single particle}\label{SecSingleParticle} In this Section, we will analyze in detail the single-particle case, which will be the basis for understanding the generalization to $N$ particles that we explore in Section \ref{SecNParticle}. As we have seen, the complex factor $e^{\pm 2i\phi}$ that appears in the $J_3$ couplings can be tuned by modifying the real space angle $\phi$ of the staggered chain (see Fig.~\ref{FigPhysicalSystem}). We are interested in the case $\phi=\pi/2$, for which the $J_3$ inter-cell couplings become $J_3=-J_2=-J$, thus generating a synthetic $\pi$-flux in each plaquette. Note that the couplings in the staggered chain can form either rhombus or triangle plaquettes with two configurations each, such that every one of them contains a $\pi$-flux (see Fig.~\ref{FigPlaquettes}). As a result, a particle cannot tunnel two sites to the right or to the left due to destructive interference. This destructive interference that leads to localization due to the presence of a flux is known as Aharonov-Bohm caging \cite{Vidal1998,Vidal2000a}. For $\phi=\pi/2$, the Hamiltonian in Eq.~(\ref{EqSingleParticleBoseHubbardHamiltonian}) reduces to \begin{equation}\label{EqPiFluxHamiltonian} \begin{aligned} \hat{\mathcal{H}}^0_{l=1}=\,&J\sum_{\alpha=\pm}\Bigg[ \sum_{k=1}^{N_c}\big( \hat{a}^{\alpha \dagger}_{k} \hat{b}^{\alpha}_{k}+\hat{a}^{\alpha\dagger}_{k } \hat{b}^{-\alpha}_{k}\big)+\\ & \sum_{k=1}^{N_c-1}\big(\hat{b}^{\alpha\dagger}_{k } \hat{a}^{\alpha}_{k+1}- \hat{b}^{\alpha\dagger}_{k } \hat{a}^{-\alpha}_{k+1}\big)+\mathrm{H.c.}\Bigg]. \end{aligned} \end{equation} A topological characterization of this system can be obtained by analyzing the block-diagonalized Hamiltonian.
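Before block-diagonalizing, the spectrum implied by Eq.~(\ref{EqPiFluxHamiltonian}) can be verified numerically. The following minimal sketch (in Python; the variable names are ours, and we set $J=1$ in harmonic oscillator units) builds the single-particle Hamiltonian of the ladder in Fig.~\ref{FigCreutz} for an arbitrary angle $\phi$ and diagonalizes it; for $\phi=\pi/2$ it reproduces the two flat bands at $\pm 2J$ and the two zero-energy edge states discussed below.
\begin{verbatim}
import numpy as np

J, Nc, phi = 1.0, 12, np.pi / 2           # pi-flux case
def idx(k, site, alpha):                  # site: 0=A, 1=B; alpha: 0=+, 1=-
    return 4 * k + 2 * site + alpha

H = np.zeros((4 * Nc, 4 * Nc), dtype=complex)
for k in range(Nc):
    for a in (0, 1):
        sign = 1 - 2 * a                  # +1 for circulation +, -1 for -
        H[idx(k, 0, a), idx(k, 1, a)] = J          # intra-cell, same circulation
        H[idx(k, 0, a), idx(k, 1, 1 - a)] = J      # intra-cell, cross circulation
        if k < Nc - 1:
            H[idx(k, 1, a), idx(k + 1, 0, a)] = J  # inter-cell, same circulation
            H[idx(k, 1, a), idx(k + 1, 0, 1 - a)] = J * np.exp(-2j * sign * phi)
H += H.conj().T                           # add the Hermitian conjugate terms

E = np.linalg.eigvalsh(H)
print(np.round(E, 8))   # two flat bands at -2J and +2J, plus two zero modes
\end{verbatim}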
We introduce the following basis change (with $n=1$), \begin{equation}\label{EqBasisChange} \begin{aligned}&\left|A_{k}^{s(a)},n\right\rangle=\frac{1}{\sqrt{2}}\left(\left|A_{k}^+,n\right\rangle \varpm\left|A_{k}^-,n\right\rangle\right),\\ &\left|B_{k}^{s(a)},n\right\rangle=\frac{1}{\sqrt{2}}\left(\left|B_{k}^+,n\right\rangle \varpm\left|B_{k}^-,n\right\rangle\right),\end{aligned} \end{equation} that decouples the system into the following two Hamiltonians, \begin{equation}\label{EqSSHHamiltonians} \begin{aligned} \hat{\mathcal{H}}_s=&2J \sum_{k=1}^{N_c}\hat{a}^{s\dagger}_{k } \hat{b}^{s}_{k}+\mathrm{H.c.},\\ \hat{\mathcal{H}}_a=&2J \sum_{k=1}^{N_c-1}\hat{a}^{a\dagger}_{k+1 } \hat{b}^{a}_{k}+\mathrm{H.c.}, \end{aligned} \end{equation} where $\hat{a}^{s(a)}_{k }$ and $\hat{b}^{s(a)}_{k}$ are the annihilation operators of the states in Eq. (\ref{EqBasisChange}). The Hamiltonians $\hat{\mathcal{H}}_a$ and $\hat{\mathcal{H}}_s$ correspond to two Su-Schrieffer-Heeger (SSH) chains in the dimerized limit, \textit{i.e.}, linear chains with alternating couplings where either the inter or the intra-cell coupling is zero (see Fig. \ref{FigSSH} with $n=1$ and $\mathcal{J}=J$). The two models have the same couplings, $2J$ and $0$, in opposite configurations, so that they are in opposite topological phases. \begin{figure}[t] \includegraphics{figure2}\vspace{4mm} \includegraphics{figure3} \caption{Schematic representation of the lattice with a $\pi$-flux in each plaquette, for which the cross-circulation couplings reduce to $-\mathcal{J}$ (blue dashed lines). The different diagrams highlight the plaquette configurations that enclose a $\pi$-flux: rhombi and triangles with two configurations each. }\label{FigPlaquettes} \end{figure} \begin{figure}[h] \includegraphics{figure4} \caption{Decoupled symmetric and antisymmetric SSH chains with alternating couplings $2\mathcal{J}$ and $0$. The unit cell of each chain is indicated by the dotted rectangles.} \label{FigSSH} \end{figure} We consider an integer number of unit cells and that the first site of the chain is a site $A$ (and thus, the last, a site $B$), such that the edge couplings are real. In that case, the symmetric SSH chain, $\hat{\mathcal{H}}_s$, is in the trivial phase, characterized by a quantized Zak phase $\gamma=0$, and the antisymmetric chain, $\hat{\mathcal{H}}_a$, is in the topological phase with a quantized Zak phase, $\gamma=\pi$. If we instead consider a lattice starting with a $B$ site, the symmetric chain would be the one in the topological phase. Thus, for an integer number of unit cells, there are always two edge states present regardless of the configuration of the chain. In Fig. \ref{FigCLS}(a), we represent the energy spectrum of a chain with $N_c=12$ unit cells and $\phi=\pi/2$ obtained through exact diagonalization. We obtain two flat bands and two zero-energy edge states that correspond to the superposition of the energy spectra of $\hat{\mathcal{H}}_s$ and $\hat{\mathcal{H}}_a$, in Eq.~(\ref{EqSSHHamiltonians}).
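The quantized Zak phases of the two chains in Eq.~(\ref{EqSSHHamiltonians}) can also be checked numerically with a discrete Wilson loop. The sketch below (in Python; it assumes a generic two-band SSH Bloch Hamiltonian with intra- and inter-cell couplings $v$ and $w$) returns $\gamma=0$ for the symmetric chain ($v=2J$, $w=0$) and $|\gamma|=\pi$ for the antisymmetric chain ($v=0$, $w=2J$).
\begin{verbatim}
import numpy as np

def zak_phase(v, w, nk=400):
    """Discrete Wilson-loop Zak phase of the lower band of an SSH chain."""
    us = []
    for k in np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False):
        hk = np.array([[0.0, v + w * np.exp(-1j * k)],
                       [v + w * np.exp(1j * k), 0.0]])
        us.append(np.linalg.eigh(hk)[1][:, 0])    # lower-band eigenvector
    loop = 1.0 + 0.0j
    for i in range(nk):
        loop *= np.vdot(us[i], us[(i + 1) % nk])  # gauge-invariant product
    return -np.angle(loop)

print(zak_phase(2.0, 0.0))   # symmetric chain:     gamma ~ 0   (trivial)
print(zak_phase(0.0, 2.0))   # antisymmetric chain: |gamma| ~ pi (topological)
\end{verbatim}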
The edge states are eigenstates of the antisymmetric chain and are completely localized at the edge sites (with $n=1$), \begin{equation}\label{EqEdge} \begin{aligned}\left|A_{1}^a,n\right\rangle_{edge}&=\frac{1}{\sqrt{2}}\left(\left|A_{1}^+,n\right\rangle -\left|A_{1}^-,n\right\rangle\right),\\ \left|B_{N_c}^a,n\right\rangle_{edge}&=\frac{1}{\sqrt{2}}\left(\left|B_{N_c}^+,n\right\rangle -\left|B_{N_c}^-,n\right\rangle\right).\end{aligned} \end{equation} \subsection{Single-particle Aharonov-Bohm caging} In this Section, we explore single-particle Aharonov-Bohm caging. The flat bands that appear in the spectrum when a $\pi$-flux threads each plaquette [see Fig.~\ref{FigCLS}(a)] are characterized by the presence of compact localized states (CLSs). These eigenstates have high real space localization: their amplitude is non-zero on a few nearby sites while being exactly zero everywhere else. The smallest possible basis for the CLSs in this model spans the states of one unit cell and an extra site (where $n=1$), \begin{equation}\label{EqCLSbasis} \left\{|A_k^+,n\rangle,|A_k^-,n\rangle,|B_k^+,n\rangle,|B_k^-,n\rangle,|A_{k+1}^{+},n\rangle,|A_{k+1}^{-},n\rangle\right\}. \end{equation} The CLSs are found to be [see Fig.~\ref{FigCLS}(b)] \begin{equation}\label{EqCLS} \begin{aligned} |\Upsilon_k^1,n\rangle&=\dfrac{1}{2}\left(|B_k^+,n\rangle+|B_k^-,n\rangle-|A_k^+,n\rangle-|A_k^-,n\rangle\right),\\ |\Upsilon_k^2,n\rangle&=\dfrac{1}{2}\left(|B_k^+,n\rangle-|B_k^-,n\rangle-|A_{k+1}^{+},n\rangle+|A_{k+1}^-,n\rangle\right),\\ |\Upsilon_k^3,n\rangle&=\dfrac{1}{2}\left(|B_k^+,n\rangle+|B_k^-,n\rangle+|A_k^+,n\rangle+|A_k^-,n\rangle\right),\\ |\Upsilon_k^4,n\rangle&=\dfrac{1}{2}\left(|B_k^+,n\rangle-|B_k^-,n\rangle+|A_{k+1}^+,n\rangle-|A_{k+1}^-,n\rangle\right), \end{aligned} \end{equation} and their corresponding energies are $E_1=E_2=-2\mathcal{J}$ and $E_3=E_4=2\mathcal{J}$ (where $\mathcal{J}=J$ in the single-particle case). Any initial state that can be written as a superposition of these states will remain localized in the caging cell defined in (\ref{EqCLSbasis}). \begin{figure} \includegraphics{figure5} \caption{(a) Single-particle energy spectrum for $N_c=12$ unit cells and $\phi=\pi/2$. (b) Representation of the CLSs defined in Eq.~(\ref{EqCLS}) that are eigenstates of the Creutz ladder, see Fig.~\ref{FigCreutz}, when a $\pi$-flux threads each plaquette. The radius represents the amplitude and the color represents the phase, with red being a $\pi$ phase, and green being a phase zero.}\label{FigCLS} \end{figure} We consider an initial state where only a single site $A_k$ in the bulk of the chain is populated. Fig. \ref{FigCaging1}(a) shows the time evolution of the population of each local eigenstate, $P_{|j_k^{\alpha},1\rangle}$ (with $j=A,B$), for the initial state $\left(\left|A_{k}^+,1\right\rangle +\left|A_{k}^-,1\right\rangle\right)/\sqrt{2}$, which corresponds to the superposition $(|\Upsilon_k^3,1\rangle-|\Upsilon_k^1,1\rangle)/\sqrt{2}$. The population coherently oscillates between the sites $A_{k}$ and $B_{k}$ without populating any other sites due to destructive interference at $B_{k-1}$ and $A_{k+1}$. Thus, the total caged population, $P_{cag}=P_{|A_k^{+},1\rangle}+P_{|A_k^{-},1\rangle}+P_{|B_k^{+},1\rangle}+P_{|B_k^{-},1\rangle}$, stays at $P_{cag}=1$ throughout the time evolution. Additionally, the two circulations within each site maintain the same population at all times: $P_{|A_k^+,1\rangle}=P_{|A_k^-,1\rangle}$ and $P_{|B_{k}^+,1\rangle}=P_{|B_k^-,1\rangle}$.
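These caging dynamics can be reproduced with a minimal time-evolution sketch (in Python; it reuses the matrix \texttt{H}, the function \texttt{idx}, and $N_c=12$ from the snippet above, and assumes SciPy is available):
\begin{verbatim}
from scipy.linalg import expm
import numpy as np

k0 = 3                                    # unit cell k = 4 (0-indexed)
psi0 = np.zeros(4 * Nc, dtype=complex)
psi0[idx(k0, 0, 0)] = 1 / np.sqrt(2)      # |A_4^+, 1>
psi0[idx(k0, 0, 1)] = 1 / np.sqrt(2)      # |A_4^-, 1>
cage = [idx(k0, s, a) for s in (0, 1) for a in (0, 1)]  # A_4 and B_4

for t in np.linspace(0.0, 2.0, 9):        # time in units of 1/J
    psi = expm(-1j * H * t) @ psi0
    print(t, sum(abs(psi[c]) ** 2 for c in cage))  # stays ~1: caging
\end{verbatim}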
For the initial state $\left(\left|A_{k}^+,1\right\rangle -\left|A_{k}^-,1\right\rangle\right)/\sqrt{2}=(|\Upsilon_k^4,1\rangle-|\Upsilon_k^2,1\rangle)/\sqrt{2}$, one obtains identical dynamics but the exchange in population takes place between the sites $A_{k}$ and $B_{k-1}$, as the sign of the superposition shifts the destructive interference to the sites $B_k$ and $A_{k-1}$. Fig. \ref{FigCaging1}(b) shows the time evolution for the initial state $\left|A_{k}^+,1\right\rangle=(-|\Upsilon_k^1,1\rangle+|\Upsilon_k^3,1\rangle-|\Upsilon_{k-1}^2,1\rangle+|\Upsilon_{k-1}^4,1\rangle)/2$. As this initial state cannot be written as a superposition of CLSs of a single caging cell, the population reaches both the sites $B_k$ and $B_{k-1}$. The total caged population, which in this case also stays constant, is $P_{cag}=P_{|A_k^{+},1\rangle}+P_{|A_k^{-},1\rangle}+P_{|B_k^{+},1\rangle}+P_{|B_k^{-},1\rangle}+P_{|B_{k-1}^{+},1\rangle}+P_{|B_{k-1}^{-},1\rangle}$. Also, we simulate a chain with $N_c=12$ unit cells and choose the unit cell $k=4$ for the initial state. The caging dynamics in Fig.~\ref{FigCaging1} can also be understood in terms of the decoupled dimers of the SSH chains. For the symmetric and antisymmetric initial states, in Eq.~(\ref{EqBasisChange}), the population remains trapped in the corresponding dimer of the symmetric, $\hat{\mathcal{H}}_s$, or the antisymmetric, $\hat{\mathcal{H}}_a$, chain (see Fig.~\ref{FigSSH}). In contrast, the initial state $\left|A_{k}^+,1\right\rangle$ populates both the symmetric and antisymmetric SSH chains, such that the population reaches both dimers and, as a consequence, spreads over a broader spatial extent. \begin{figure}[t] \includegraphics[width=1\columnwidth]{caging1} \caption{Time evolution of the population of the states $|j_k^\alpha,1\rangle$ with $j=A,B$ and total caged population, obtained through exact diagonalization for $J=1$, $N_c=12$ unit cells and $\phi=\pi/2$. The continuous red line is the total caged population $P_{cag}$; the dashed black line is the population in the states $|A_4^\alpha,1\rangle$, with $\alpha=\pm$; and the dotted blue line is the population in the states (a) $|B_{4}^\alpha,1\rangle$, (b) $|B_{3}^\alpha,1\rangle$ and $|B_{4}^\alpha,1\rangle$. The initial states are (a) $\left(\left|A_{4}^+,1\right\rangle +\left|A_{4}^-,1\right\rangle\right)/\sqrt{2}$ and (b) $\left|A_{4}^+,1\right\rangle$.}\label{FigCaging1} \end{figure} \section{$N$ particles}\label{SecNParticle} In this Section, we explore the many-body dynamics of the system for $N$ bosons with repulsive interactions. For an ultracold and dilute gas of atoms, two-body collisions dominate, and the interaction Hamiltonian for a lattice of rings restricted to a single OAM manifold can be written as \begin{equation}\label{EqInteractionHamiltonian} \hat{\mathcal{H}}^{int}_{l}=\frac{g}{2} \int d^2r\, \hat{\Psi}_{l}^{\dagger} \hat{\Psi}_{l}^{\dagger} \hat{\Psi}_{l} \hat{\Psi}_{l}, \end{equation} where $g$ is proportional to the $s$-wave scattering length and satisfies $g>0$. Introducing the expression of the bosonic field operator, Eq.~(\ref{EqWavefunctionL}), and considering only on-site interactions, the interaction Hamiltonian for $l=1$ becomes \begin{equation}\label{EqInteractionHamiltonianHubbard} \hat{\mathcal{H}}^{int}_{l=1}\hspace{-0.5mm}=\hspace{-0.5mm}\dfrac{U}{2}\hspace{-1mm}\sum_{j=A,B}\sum_{k=1}^{N_c}\!\left[ \hat{n}_{j_k}^+(\hat{n}^+_{j_k}\!-\!1)\!+\!\hat{n}^-_{j_k}(\hat{n}^-_{j_k}\!-\!
1)\!+\!4\hat{n}^+_{j_k}\hat{n}^-_{j_k}\right]\!, \end{equation} where $\hat{n}^{\alpha}_{j_k}=\hat{j}^{\alpha\dagger}_{k}\hat{j}^{\alpha}_{k}$ is the number operator and the interaction strength is defined as $U \equiv g \int d^2r\left|\psi\left(\rho_{j_{k}}\right)\right|^{4}$ \cite{Pelegri2019}. Besides the common Bose-Hubbard interaction terms for each of the circulations, $\alpha=\pm$, a cross-circulation term appears. Thus, this realization of a Creutz ladder yields a nearest-neighbor interaction term along the rungs of the ladder that is not usually present in other realizations of this model. Henceforward, we will analyze the regime of strong interactions, in which the interaction term dominates over the tunneling term, $U\gg J$. We are interested in the bound-states where the $N$ bosons occupy a single site of the lattice, $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$, where there are $n$ particles in one circulation and $m$ particles in the other circulation (with $n+m=N$). In the regime of strong interactions, the kinetic Hamiltonian, $\hat{\mathcal{H}}_{l=1}^0$ [Eq.~(\ref{EqSingleParticleBoseHubbardHamiltonian})], is introduced as a perturbation that couples the bound states $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ in adjacent sites. This effect creates subspaces that are well-separated in energy, and thus, effectively uncoupled. We will analyze in detail the two and three-particle cases as an example in the next subsections. The matrix elements of the effective Hamiltonian of each subspace up to third order are given by \cite{Bir1974,Tannoudji1992} \begin{equation}\label{EqEffectiveHamiltonian} \begin{aligned}\langle d|&\hat{\mathcal{H}}_{\mathrm{eff}}| d^{\prime}\rangle= E_{d}^{0} \delta_{d d^{\prime}}+\frac{1}{2} \sum_{w}\langle d|\hat{\mathcal{H}}_{l=1}^0| w\rangle\langle w|\hat{\mathcal{H}}_{l=1}^0| d^{\prime}\rangle\cdot\\ &\cdot\!\!\left[\frac{1}{E_{d}^{0}-E_{w}^{0}}+\frac{1}{E_{d^{\prime}}^{0}-E_{w}^{0}}\right]+ \\ &+\frac{1}{2} \sum_{w w^{\prime}}\langle d|\hat{\mathcal{H}}_{l=1}^0| w\rangle\langle w|\hat{\mathcal{H}}_{l=1}^0| w^{\prime}\rangle\langle w^{\prime}|\hat{\mathcal{H}}_{l=1}^0| d^{\prime}\rangle\cdot\\ &\cdot\!\!\left[\frac{1}{\left(E_{d}^{0}-E_{w}^{0}\right)\left(E_{d}^{0}-E_{w^{\prime}}^{0}\right)}+\frac{1}{\left(E_{d^{\prime}}^{0}-E_{w}^{0}\right)\left(E_{d^{\prime}}^{0}-E_{w^{\prime}}^{0}\right)}\right]\!, \end{aligned} \end{equation} where $|d\rangle,|d'\rangle$ are the bound-states, $|w\rangle,|w'\rangle$ are the mediating states in each hopping process, and $E^0$ are the unperturbed energies. Note that the first-order corrections are always zero. For $|d\rangle\neq|d'\rangle$, one obtains an effective tunneling term, while for $|d\rangle=|d'\rangle$, one obtains an effective on-site potential. While Eq.~(\ref{EqEffectiveHamiltonian}) provides a good description up to $N=3$, for $N>3$, one would need to compute the higher-order terms of the perturbative expansion. \subsection{Two and three particles}\label{SecTwoParticle} For the two and three-particle cases, there are only two subspaces available that arise from the following bound-state classes: \begin{enumerate} \item $\mathcal{A}$: $N$ particles occupy the same site and the same circulation, $|j_k^{\alpha},N\rangle$. These are the bound-states that minimize the interaction energy, which is $E_{\mathcal{A}}=N(N-1)U/2$. 
\item $\mathcal{B}$: these bound-states maximize the interaction energy and take the following two forms: \begin{enumerate} \item For $N$ even, $N/2$ particles in each circulation, $$\left\{|j_k^{+},N/2\rangle\otimes|j_k^-,N/2\rangle\right\},$$ with energy $E_{\mathcal{B},\rm{even}}=(3N^2/2-N)U/2$. \item For $N$ odd, $(N-1)/2$ particles in one circulation and $(N-1)/2+1$ in the other, $$\hspace{12mm}\left\{ \begin{array}{c} |j_k^{+},(N-1)/2\rangle\otimes|j_k^-,(N-1)/2+1\rangle,\\ |j_k^{+},(N-1)/2+1\rangle\otimes|j_k^-,(N-1)/2\rangle \end{array}\right\},$$ with a slightly lower energy, $E_{\mathcal{B},\rm{odd}}=(3N^2/2-N-1/2)U/2$. \end{enumerate} \end{enumerate} \subsubsection{$\mathcal{A}$ subspace} We introduce the coupling $J$ as a perturbation, \textit{i.e.}, $U\gg J$, such that the states of the $\mathcal{A}$ subspace in adjacent sites become coupled. The states for the two-particle case, \textit{e.g.} $|A_k^{\alpha},2\rangle$ and $|{B}_k^{\alpha'},2\rangle$, become coupled through second-order hopping processes, while the states in the three-particle case, \textit{e.g.} $|A_k^{\alpha},3\rangle$ and $|{B}_k^{\alpha'},3\rangle$, become coupled through third-order hopping processes. Additionally, each state is also coupled to itself through second-order hoppings, such that an effective on-site potential arises. Note that for both cases, the third-order contribution to the effective on-site potential is zero. Also, the on-site potential has different magnitudes for the bulk, $V_B$, and the edge, $V_E$, since the number of available mediating states for the bulk states is twice that for the states localized at the edge sites \cite{Bello2016,DiLiberto2016,Marques2017}. Using Eq.~(\ref{EqEffectiveHamiltonian}) up to second order for the two-particle case and up to third order for the three-particle case, the resulting effective chains become a Creutz ladder, depicted in Fig.~\ref{FigCreutz} with $n=2$ or $3$. The parameters that characterize the two- and three-particle effective models as well as those of the single-particle case are given in Table~\ref{Table}. \begin{table}[b] \begin{center} \begin{tabular}{c|c|c|c|c} & \,Single-particle\, & $\mathcal{A}_2$ & $\mathcal{A}_3$ & $\mathcal{B}_3$ \\ \hline\hline \rule{0pt}{12pt} $\mathcal{J}$ & $J$ & $2J^2/U$ & $3J^3/(2U^2)$ & $121J^3/(72U^2)$ \\[2pt] \hline $\theta$ & $2\phi$ & $4\phi$ & $6\phi$ & $2\phi$ \\[2pt] \hline $\phi$ & $\pi/2$ & $\pi/4$ & $\pi/2,\pi/6$ & $\pi/2$\\[2pt] \hline $V_E$ & --- & $4J^2/U$ & $3J^2/U$ & $11J^2/(6U)$ \\[2pt] \hline $V_B$ & --- & $8J^2/U$ & $6J^2/U$ & $11J^2/(3U)$ \\[2pt] \hline $V$ & --- & $2J^2/U$ & $J^2/U$ & --- \end{tabular} \end{center} \caption{Summary of parameters that characterize the single-particle case and the two- and three-particle effective subspaces that exhibit Aharonov-Bohm caging. Parameters of the Creutz ladder defined in Fig. \ref{FigCreutz}: couplings $\mathcal{J}$, phase $\theta$, and real space angle $\phi$ that induces a $\pi$-flux. Effective on-site potential up to second-order corrections at the edge sites, $V_E$, and the bulk sites, $V_B$, and edge correction potential $V$.} \label{Table} \end{table} The inter-cell cross couplings between the $\mathcal{A}$ subspace states with opposite circulations contain a complex factor $e^{\pm i\theta}$ (see Table~\ref{Table}).
Then, for two (three) particles and the real space angle $\phi=\pi/4$ ($\phi=\pi/2$ or $\pi/6$) (see Fig.~\ref{FigPhysicalSystem}), the complex factor becomes a $\pi$ phase and the effective chain acquires a $\pi$-flux in each plaquette of the Creutz ladder, see Fig.~\ref{FigPlaquettes}. Due to the similarities between the single-particle model and the effective $\mathcal{A}$ subspace, we can apply the basis change employed for the single-particle case, taking $n=2$ or $3$ in Eq.~(\ref{EqBasisChange}). As expected, one obtains two dimerized SSH-like decoupled systems with renormalized couplings [Fig.~\ref{FigSSH} with $n=2$ or $3$ and $\mathcal{J}=2J^2/U$ or $3J^3/(2U^2)$], with additional on-site potentials inherited from the Creutz ladder, $V_B$ and $V_E$. Fig.~\ref{FigSpectrum2} shows the energy spectrum of the $\mathcal{A}$ subspace for (a1) two particles and (b1) three particles for $U/J=50$ and $N_c=12$ unit cells. We choose the angle $\phi$ that induces a $\pi$-flux in each effective Hamiltonian, $\phi=\pi/4$ and $\phi=\pi/2$, respectively. In contrast with a regular SSH model, the effective chains are not chirally symmetric due to the presence of the bulk-edge on-site potential mismatch. Therefore, the four eigenstates that fall outside the bulk bands (blue rhombi) are non-topological Tamm-Shockley edge states, \textit{i.e.}, states induced by interactions that are localized at the edge sites due to the bulk-edge on-site potential mismatch \cite{DiLiberto2016,Bello2016,Gorlach2017,Salerno2018}. One can recover chiral symmetry in the effective model by introducing an on-site potential $V$ at the edge sites of the real space chain that exactly compensates the potential mismatch \cite{Bello2016}. Figures~\ref{FigSpectrum2}(a2) and (b2) show the two- and three-particle spectra of the $\mathcal{A}$ subspace when we introduce the on-site potential correction at the edge sites, $V=2J^2/U$ and $V=J^2/U$, respectively. In this case, we recover the spectrum of an SSH model with two symmetry-protected edge states (red triangles). \begin{figure}[t] \includegraphics[width=1\linewidth]{spectrumsA} \caption{Energy spectrum of the $\mathcal{A}$ subspace for (a) two ($\phi=\pi/4$) and (b) three ($\phi=\pi/2$) particles, $U/J=50$ and $N_c=12$ unit cells with or without an on-site potential correction $V$ at the edge sites: (a1), (b1) $V=0$, (a2) $V=2J^2/U$, and (b2) $V=J^2/U$. We depict bulk states with black circles, Tamm-Shockley states with blue rhombi, topologically protected edge states with red triangles, and the green crosses indicate states slightly below the bulk bands.}\label{FigSpectrum2} \end{figure} There are some differences between the two- and three-particle cases. For three particles, the processes that induce the bulk-edge on-site potential mismatch are one order of magnitude larger than the ones that generate the bulk bands. Thus, the bulk-edge mismatch effectively uncouples the edge sites from the rest of the lattice, which retains chiral symmetry. Given that the symmetric and antisymmetric SSH chains are in opposite topological phases, removing the edge sites from the lattice exchanges the topological phase between the two chains. Therefore, the spectrum in Fig.~\ref{FigSpectrum2}(b1) presents not only the four Tamm-Shockley edge states (blue rhombi), well-separated energetically from the bulk bands, but also two topologically protected edge states (red triangles).
When we introduce the potential correction $V=J^2/U$ in Fig.~\ref{FigSpectrum2}(b2), we exchange the topological phases of the symmetric and antisymmetric chains. The Tamm-Shockley states are absorbed by the bulk and two topologically protected edge states remain. We can also observe two states in each band (green crosses) with slightly lower energies than the others due to fourth-order corrections to the on-site potential. These corrections are not observable in the two-particle case, see Fig.~\ref{FigSpectrum2}(a2), as the fourth-order corrections are two orders of magnitude smaller than the couplings that generate the bulk bands. \begin{figure}[t] \includegraphics[width=1\columnwidth]{caging2} \includegraphics[width=1\columnwidth]{RobustnessU} \caption{(a) and (b) Time evolution of the population of the states $|j_k^\alpha,2\rangle$ with $j=A,B$ and total caged population, obtained through exact diagonalization for $U/J=50$, $N_c=12$ unit cells, and $\phi=\pi/4$. The continuous red line is the total caged population $P_{cag}$; the dashed black line is the population in the states $|A_4^\alpha,2\rangle$, with $\alpha=\pm$; and the dotted blue line is the population in the states (a) $|B_{4}^\alpha,2\rangle$, (b) $|B_{3}^\alpha,2\rangle$ and $|B_{4}^\alpha,2\rangle$. The initial states are (a) $\left(\left|A_{4}^+,2\right\rangle +\left|A_{4}^-,2\right\rangle\right)/\sqrt{2}$ and (b) $\left|A_{4}^+,2\right\rangle$. (c) Caged population, $P_{cag}$, after a time $3JT_N$ for the $\mathcal{A}$ subspace with $N=2$ and $N=3$ as a function of the ratio $U/J$. $JT_N$ is the period of the oscillations for $U/J=100$, for the two and three-particle cases and taking $\phi$ from Table \ref{Table}. The number of unit cells is $N_{c}=10$ for $N=2$ and $N_{c}=6$ for $N=3$. }\label{FigCaging2} \end{figure} Following the analogy with the single-particle case, the eigenstates of the flat-band spectra obtained for two and three particles are the CLSs in Eq.~(\ref{EqCLS}) taking $n=2$ or $3$, with energies $\pm 2\mathcal{J}$. Fig. \ref{FigCaging2} shows the time evolution of the population of the two-particle bound-states of the $\mathcal{A}$ subspace for different initial states. In particular, we consider the initial states analogous to the ones used in the single-particle case: in Fig. \ref{FigCaging2}(a), $\left(\left|A_{k}^+,2\right\rangle +\left|A_{k}^-,2\right\rangle\right)/\sqrt{2}$, and in Fig. \ref{FigCaging2}(b), $\left|A_{k}^+,2\right\rangle$. One can see that the dynamical evolution is identical to the one observed for a single particle (see Fig. \ref{FigCaging1}). In this case, the dynamics correspond to two-particle Aharonov-Bohm caging and they take place over a much longer timescale. This is because the couplings of the effective Creutz ladder are a second-order effect and, thus, much smaller in magnitude than the ones in the single-particle case (see Table~\ref{Table}). We define the total caged population as the sum of the population in a series of states: (a) $P_{cag}=P_{|A_k^{+},2\rangle}+P_{|A_k^{-},2\rangle}+P_{|B_k^{+},2\rangle}+P_{|B_k^{-},2\rangle}$; (b) $P_{cag}=P_{|A_k^{+},2\rangle}+P_{|A_k^{-},2\rangle}+P_{|B_k^{+},2\rangle}+P_{|B_k^{-},2\rangle}+P_{|B_{k-1}^{+},2\rangle}+P_{|B_{k-1}^{-},2\rangle}$. The total caged population reveals slight population losses that are due to higher-order corrections to the effective model that make the flat bands in Fig.~\ref{FigSpectrum2} slightly dispersive. 
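As an illustrative cross-check of the Table~\ref{Table} entries (our back-of-the-envelope estimate, following Eq.~(\ref{EqEffectiveHamiltonian})): for $N=2$, the bound states $|A_k^{\alpha},2\rangle$ and $|B_k^{\alpha},2\rangle$, with unperturbed energy $U$, are connected by a single mediating state $|A_k^{\alpha},1\rangle\otimes|B_k^{\alpha},1\rangle$ of zero interaction energy, reached with a bosonically enhanced amplitude $\sqrt{2}J$ on each hop, so that $$\mathcal{J}_{\mathcal{A}_2}=\frac{1}{2}\left(\sqrt{2}J\right)\left(\sqrt{2}J\right)\left[\frac{1}{U}+\frac{1}{U}\right]=\frac{2J^{2}}{U},$$ in agreement with Table~\ref{Table}. Since the caging period scales as $T\propto 1/\mathcal{J}$, one expects $T_{N=2}/T_{N=1}=U/(2J)=25$ and $T_{N=3}/T_{N=2}=4U/(3J)\approx 67$ for $U/J=50$, consistent with the periods quoted below.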
For three particles and the analogous initial states, $\left(\left|A_{k}^+,3\right\rangle +\left|A_{k}^-,3\right\rangle\right)/\sqrt{2}$ and $\left|A_{k}^+,3\right\rangle$, we obtain identical (albeit slower) dynamics that correspond to three-particle Aharonov-Bohm caging. The periods of the oscillations for the different numbers of particles and $U/J=50$ are $JT_{N=1}=1.55$, $JT_{N=2}=39.5$, $JT_{N=3}=2600$. To further compare the two- and three-particle Aharonov-Bohm caging, we consider an initial state in the $\mathcal{A}$ subspace, $(|B_{k}^{+},n\rangle+|B_k^{-},n\rangle)/\sqrt{2}$ (with $n=2$ or $n=3$), located at the middle of the lattice, and we let it evolve in time. The caged population for this initial state is $P_{cag}=P_{|A_k^{+},n\rangle}+P_{|A_k^{-},n\rangle}+P_{|B_k^{+},n\rangle}+P_{|B_k^{-},n\rangle}$. Fig.~\ref{FigCaging2}(c) shows the caged population after a time $3JT_N$, where $JT_N$ is the period of the oscillations for $U/J=100$, as a function of the ratio $U/J$ for the two- and three-particle cases. The caged population rapidly increases for $U>J$, reaching a value close to $1$ as the system enters the regime of strong interactions. The growth of the caged population is faster for the three-particle subspace compared to the two-particle case, and it saturates at a smaller value of $U/J$. This can be understood by inspecting the higher-order terms of the perturbative expansion. As the ratio $U/J$ decreases, higher-order terms of the perturbative expansion have to be taken into account. For two particles (and also for any subspace with an even number of particles), the odd-order perturbative corrections are always zero. Then, the next perturbative correction is fourth order, and it leads to effective on-site potentials, nearest-neighbor hoppings, and also next-nearest-neighbor hoppings that destroy the CLSs. In contrast, the fourth-order correction to the three-particle case only induces an effective on-site potential, and the fifth-order correction induces nearest-neighbor hopping terms that maintain the Creutz ladder structure that exhibits flat bands. It is not until the sixth-order correction that next-nearest-neighbor hoppings appear, making the CLSs disappear. Thus, the three-particle subspaces are more resilient to deviations from the regime of strong interactions than the two-particle $\mathcal{A}$ subspace. \subsubsection{$\mathcal{B}$ subspace} The bound-states of the $\mathcal{B}$ subspace for the two-particle case have one particle in each circulation, $|j_k^{+},1\rangle\otimes|j_k^-,1\rangle$. When we consider the couplings between states in adjacent sites, \textit{e.g.} between $|A_k^{+},1\rangle\otimes|A_k^-,1\rangle$ and $|B_k^{+},1\rangle\otimes|B_k^-,1\rangle$, there is no complex factor, as any hopping process between opposite circulations will necessarily be followed by a hopping process with the opposite phase factor. This results in an effective linear chain with uniform couplings $2J^2/U$ and on-site potentials $V_B=4J^2/U$ in the bulk and $V_E=2J^2/U$ at the edges. Therefore, the two-particle $\mathcal{B}$ subspace has a dispersive spectrum for any $\phi$ [see Fig.~\ref{FigSpectrumB}(a)] and cannot exhibit Aharonov-Bohm caging. The three-particle $\mathcal{B}$ subspace arises from bound states of the form $|j_k^{\alpha},2\rangle\otimes|j_k^{-\alpha},1\rangle$.
In analogy with the $\mathcal{A}$ subspace cases, the $\mathcal{B}$ effective subspace is a Creutz ladder with a bulk-edge on-site potential mismatch that can be mapped to two decoupled SSH-like chains with the same on-site potential mismatch (see Table~\ref{Table}). Fig. \ref{FigSpectrumB}(b) shows the energy spectrum for the three-particle $\mathcal{B}$ subspace for $U/J=50$, $N_c=12$ unit cells, and $\phi=\pi/2$. However, in this case there is an extra ingredient: the two bound-states in the same site, $|j_k^{\alpha},2\rangle\otimes|j_k^{-\alpha},1\rangle$ and $|j_k^{\alpha},1\rangle\otimes|j_k^{-\alpha},2\rangle$, are also coupled through second-order processes that generate a complex vertical coupling in the effective Creutz model. For the angle $\phi$ that induces a $\pi$-flux, $\phi=\pi/2$, the complex phases of each mediating process cancel against those of the mirror-symmetric process (\textit{i.e.}, the process with the direction of the hoppings inverted from right to left). This compensation does not occur at the edge sites, which results in an energy mismatch between the Tamm-Shockley states (blue rhombi) of the two edges. In analogy with the three-particle $\mathcal{A}$ subspace [see Fig.~\ref{FigSpectrum2}(b1)], there are two topologically protected edge states (red triangles) besides the Tamm-Shockley states. \begin{figure}[t] \includegraphics[width=1\linewidth]{spectrumsB} \caption{Energy spectrum of the $\mathcal{B}$ subspace for (a) two ($\phi=\pi/4$) and (b) three ($\phi=\pi/2$) particles, $U/J=50$ and $N_c=12$ unit cells. We depict bulk states with black circles, Tamm-Shockley states with blue rhombi, and topologically protected edge states with red triangles.}\label{FigSpectrumB} \end{figure} \subsection{$N$-particle generalization} From the above cases, one can deduce a recipe to obtain Aharonov-Bohm caging in any $N$-particle subspace by looking at the $N$-particle tunneling processes involving complex tunnelings, \textit{i.e.}, the cross-circulation couplings $J_3$. We define an arbitrary bound state $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ with $n$ particles in one circulation and $m$ particles in the other circulation such that $n+m=N$. In the regime of strong interactions, Aharonov-Bohm caging can exist in the subspace generated by these bound-states if all the $N$-particle hopping processes involving a complex phase acquire the same total phase factor, such that by appropriately choosing the angle $\phi$, one can induce a $\pi$-flux. The bound-states in the sites $B_k$ are coupled to those in the adjacent sites $A_{k+1}$ (see Fig.~\ref{FigNhoppings}) through the integer number of real hoppings from each circulation, $R_\alpha$ and $R_{-\alpha}$, and the integer number of complex hoppings from each circulation, $C_\alpha$ and $C_{-\alpha}$, such that \begin{equation}\label{EqNparticleOut} n=R_\alpha+C_\alpha \qquad \text{and} \qquad m=R_{-\alpha}+C_{-\alpha}. \end{equation} Then, the total complex factor will be given by $e^{\pm 2i\phi (C_\alpha-C_{-\alpha})}$.
\subsection{$N$-particle generalization} From the above cases, one can deduce a recipe to obtain Aharonov-Bohm caging in any $N$-particle subspace by examining the $N$-particle hopping processes that involve complex tunnelings, \textit{i.e.}, the cross-circulation couplings $J_3$. We consider an arbitrary bound state $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ with $n$ particles in one circulation and $m$ particles in the other, such that $n+m=N$. In the regime of strong interactions, Aharonov-Bohm caging can exist in the subspace generated by these bound-states if all the $N$-particle hopping processes involving a complex phase acquire the same total phase factor, such that, by appropriately choosing the angle $\phi$, one can induce a $\pi$-flux. The bound-states in the sites $B_k$ couple to those in the adjacent sites $A_{k+1}$ (see Fig.~\ref{FigNhoppings}) through an integer number of real hoppings from each circulation, $R_\alpha$ and $R_{-\alpha}$, and an integer number of complex hoppings from each circulation, $C_\alpha$ and $C_{-\alpha}$, such that \begin{equation}\label{EqNparticleOut} n=R_\alpha+C_\alpha \qquad \text{and} \qquad m=R_{-\alpha}+C_{-\alpha}. \end{equation} The total complex factor is then given by $e^{\pm 2i\phi (C_\alpha-C_{-\alpha})}$. These states are coupled to both the bound-states $\left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}$ [Fig.~\ref{FigNhoppings}(a)] and $\left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}$ [Fig.~\ref{FigNhoppings}(b)] in the adjacent site, thus fulfilling the following conditions for each case, \begin{equation}\label{EqNparticleIn} \begin{aligned} \left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}:&\left\lbrace \begin{aligned} n=C_{-\alpha}+R_{\alpha} \\ m=C_{\alpha}+R_{-\alpha} \end{aligned}\right\rbrace, \\ \left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}:& \left\lbrace \begin{aligned} n=R_{-\alpha}+C_{\alpha}\\ m=R_{\alpha}+C_{-\alpha} \end{aligned}\right\rbrace. \end{aligned} \end{equation} \begin{figure}[t] \includegraphics{figure6} \caption{Hopping processes of an arbitrary $N$-particle bound state $\left\{|B_k^{\alpha},n\rangle\otimes|B_k^{-\alpha},m\rangle\right\}$ that couples to the bound-states in the adjacent site (a) $\left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}$ and (b) $\left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}$, together with the corresponding phase factors. $R_\alpha$ and $C_\alpha$ are the numbers of real and complex hopping processes, respectively, coming from each circulation, and the labels $n$ and $m$ denote the number of particles in each site.}\label{FigNhoppings} \end{figure} Combining Eqs.~(\ref{EqNparticleOut}) and (\ref{EqNparticleIn}), we obtain the following relations between the numbers of complex couplings $C_\alpha$ and the corresponding phase factors (see Fig.~\ref{FigNhoppings}), \begin{equation}\label{EqFactorsArbitrarySubspace} \begin{aligned} \left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}: C_{\alpha}&=C_{-\alpha} \quad\Longrightarrow\quad 1, \\ \left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}: C_{\alpha}&-C_{-\alpha}=n-m \\ &\quad\Longrightarrow\quad e^{\pm 2i\phi (n-m)}. \end{aligned} \end{equation} Therefore, one can obtain an effective Creutz ladder model at $N$-th order in perturbation theory for any subspace with $n\neq m$. In this case, the states in the same site, $\left\{|j_{k}^{\alpha},n\rangle\otimes|j_{k}^{-\alpha},m\rangle\right\}$ and $\left\{|j_{k}^{\alpha},m\rangle\otimes|j_{k}^{-\alpha},n\rangle\right\}$, are also coupled, which produces an effective vertical coupling in the Creutz ladder. These couplings are of order $2|n-m|$ and are in general complex. Their effect can be neglected if $2|n-m|\gg n+m=N$, as $N$ is the order of the other couplings that compose the Creutz ladder. Alternatively, the vertical couplings vanish in the bulk for $\phi=\pi/2$, as each $N$-particle hopping process cancels with its left-right symmetric counterpart. Then, taking the vertical coupling into account and using Eq.~(\ref{EqFactorsArbitrarySubspace}), one can obtain a $\pi$-flux through the plaquettes by choosing \begin{equation}\label{EqPhiNparticle} \left\lbrace \begin{aligned} \phi&=\frac{\pi}{2(n-m)}, &\quad\text{if}\quad 2|n-m|\gg n+m=N,\\ \phi&=\frac{\pi}{2}, &\quad\text{if}\quad n-m \text{ is odd.} \end{aligned}\right. \end{equation} For $n=m$, there is only one type of bound state, $\left\{|j_{k}^{\alpha},n\rangle\otimes|j_{k}^{-\alpha},n\rangle\right\}$, such that the effective model is a linear chain with real couplings, and the system cannot exhibit Aharonov-Bohm caging.
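The counting in Eqs.~(\ref{EqNparticleOut}) and (\ref{EqNparticleIn}) can be checked by brute force. The short sketch below (with illustrative function and variable names) enumerates all hopping configurations, assuming that real hops preserve the circulation while complex hops invert it, and confirms that $C_\alpha-C_{-\alpha}$ is fixed to $0$ or $n-m$; here for the three-particle case $n=2$, $m=1$:
\begin{verbatim}
from itertools import product

def phase_diffs(n, m, swapped):
    # Real hops (R) preserve the circulation;
    # complex hops (C) invert it.  Returns the
    # possible C_a - C_b, which fix the factor
    # exp(+-2i*phi*(C_a - C_b)).
    n_out, m_out = (m, n) if swapped else (n, m)
    diffs = set()
    for R_a, C_a in product(range(n + 1), repeat=2):
        for R_b, C_b in product(range(m + 1), repeat=2):
            if (R_a + C_a == n and R_b + C_b == m
                    and R_a + C_b == n_out
                    and R_b + C_a == m_out):
                diffs.add(C_a - C_b)
    return diffs

print(phase_diffs(2, 1, swapped=False))  # {0} -> 1
print(phase_diffs(2, 1, swapped=True))   # {1} ->
#                          e^{+-2i*phi*(n-m)}
\end{verbatim}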
For the $N$-particle subspaces that exhibit flat bands with $\phi\neq\pi/2$, the single-particle spectrum is dispersive, which makes these instances of Aharonov-Bohm caging a many-body effect. Let us consider some examples. For the $\mathcal{A}$ subspaces, $N$ particles accumulate a complex phase $e^{\pm 2iN\phi}$ when the states $|B_k^{\alpha},N\rangle$ and $|A_{k+1}^{-\alpha},N\rangle$ are coupled. For $N$ even, flat bands arise for $\phi=\pi/(2N)$, while for $N$ odd both $\phi=\pi/(2N)$ and $\phi=\pi/2$ yield a $\pi$-flux. Additionally, the vertical couplings are $2N$-th-order processes and are thus always negligible. For the $\mathcal{B}$ subspaces with an even number of particles ($N/2$ in each circulation), Aharonov-Bohm caging cannot occur: the complex phases accumulated by the particles cancel out, such that all the couplings of the effective chain are real and the resulting energy bands are dispersive. However, for $N$ odd, the tunneling process of one of the particles is not compensated, leading to a complex factor $e^{\pm 2i\phi}$. Then, a phase $\phi=\pi/2$ leads to a flat-band spectrum while at the same time canceling the vertical couplings. For a real space angle $\phi=\pi/2$, the single-particle spectrum exhibits flat bands, and both the $N$-odd $\mathcal{A}$ and $\mathcal{B}$ subspaces also present a flat-band spectrum. However, for an angle $\phi=\pi/(2N)$, the $\mathcal{A}$ subspace presents flat bands in the absence of a single-particle flat-band spectrum, making this instance of Aharonov-Bohm caging a purely many-body effect. As one increases the number of particles in the system, the number of bound-state configurations increases and, in particular, semi bound-states appear in which not all particles are located in a single site, \textit{i.e.}, $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ with $n+m<N$ and $N-(n+m)$ particles not bound to the site $j$. The picture described above holds as long as the subspaces induced by the bound-states do not become degenerate with the subspaces induced by these semi bound-states. For the $\mathcal{B}$ subspaces, as their bound-states have the maximum possible interaction energy, they cannot become degenerate with any other subspace. The other subspaces can become degenerate with a subspace that has some particles in a bound state in the same site and some in other sites of the lattice. However, these instances are rare: up to ten particles, only $8$ out of $34$ bound-states are degenerate; for example, $\left\{|j_i^{\alpha},5\rangle\right\}$ is degenerate with $\left\{|j_i^{\alpha},2\rangle\otimes|j_i^{-\alpha},2\rangle\right\}$ plus one unbound particle. We have numerically checked the recipe to obtain $\pi$-fluxes in arbitrary subspaces, Eq.~(\ref{EqPhiNparticle}), for up to six particles.
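These degeneracies can be anticipated from the on-site interaction energy of a bound state, $E(n,m)=\frac{U}{2}\left[n(n-1)+m(m-1)+4nm\right]$, which follows from the on-site interaction Hamiltonian. A minimal Python sketch checking the example above (the helper name is illustrative):
\begin{verbatim}
def E_int(n, m, U=1.0):
    # on-site interaction energy of the
    # bound state {|j^+,n> (x) |j^-,m>}
    return U / 2 * (n * (n - 1)
                    + m * (m - 1) + 4 * n * m)

# {|j,5>} has the same interaction energy as a
# (2,2) bound state plus one unbound particle
# (the free particle carries no interaction):
print(E_int(5, 0), E_int(2, 2))   # 10.0 10.0
\end{verbatim}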
\vspace{-3mm} \section{Generalization to non-uniform fluxes}\label{SecStaggered} In this Section, we generalize the study to the family of models where the angle $\phi$ of the staggered chain is introduced with an arbitrary lattice periodicity $\Gamma$, thus increasing the number of sites per unit cell [see Fig.~\ref{FigSystemTau}(a)]. The complex couplings between adjacent sites only occur between the last site of each unit cell and the first site of the next one. Thus, the flux induced by the angle $\phi$ does not thread every plaquette, and the exact flux pattern depends on the number of sites in the unit cell. Non-uniform fluxes have been studied in diamond lattices \cite{Mukherjee2020,Li2020}, where they have been shown to lead to an enriched Aharonov-Bohm caging phenomenology. \begin{figure}[t] \includegraphics{figure7} \caption{(a) Diagram of the one-dimensional staggered chain for an arbitrary periodicity $\Gamma$. The unit cell $k$ contains $\Gamma$ sites $\{j_k^{(1)},j_k^{(2)},...,j_k^{(\Gamma-1)},j_k^{(\Gamma)}\}$ and is enclosed by a dotted rectangle. The grey line indicates the origin of the phase $\varphi_0$, such that an angle $\phi$ is introduced in the inter-cell couplings. The black arrows denote real tunneling amplitudes, while the blue ones indicate complex tunneling amplitudes between states of different winding number. (b) Schematic representation of the sites and couplings of the lattice for $\Gamma=3$ and an angle $\phi$ such that a non-uniform $\pi$-flux arises.}\label{FigSystemTau} \end{figure} The analysis of Section \ref{SecNParticle} for the dynamics of $N$ particles in the regime of strong interactions also applies to this family of models. In particular, the angles given in Eq.~(\ref{EqPhiNparticle}) for each $N$-particle subspace also yield $\pi$-fluxes that, in this case, are non-uniform [see an example for $\Gamma=3$ in Fig.~\ref{FigSystemTau}(b)]. The non-uniform pattern is composed of $\Gamma-2$ rhombi (or triangles) without a flux followed by two rhombi (or triangles) with a $\pi$-flux. For the case of $\Gamma=2$, discussed in Sections \ref{SecSingleParticle} and \ref{SecNParticle}, the number of rhombi plaquettes without flux is zero. As a result of the non-uniform flux pattern, a particle cannot tunnel $\Gamma$ sites to the right or to the left due to destructive interference, and, as a consequence, the spectrum is composed of a series of flat bands. Fig.~\ref{FigSpectrumTau} shows the energy spectrum of the single-particle case and of the two- and three-particle $\mathcal{A}$ subspaces for the periodicities $\Gamma=2$, $3$, and $4$. The angles $\phi$, as given by Eq.~(\ref{EqPhiNparticle}), yield a $\pi$-flux, and we take $U/J=50$ and simulate $24$ sites in each case. Notably, increasing the periodicity $\Gamma$ increases the number of flat bands, as the caging cell is enlarged and supports a larger number of CLSs. The zero-energy edge states that are present for $\Gamma=2$ are buried in the central band of the spectrum for $\Gamma>2$. As an example, we discuss the case of $\Gamma=3$ in the next subsection. \begin{figure}[h] \includegraphics[width=1\columnwidth]{EnergiesComparison} \caption{Energy spectrum for different numbers of particles, (a) $N=1$, (b) $N=2$, and (c) $N=3$, and periodicities (1) $\Gamma=2$, (2) $\Gamma=3$, and (3) $\Gamma=4$, for $24$ sites. For the two- and three-particle cases, only the $\mathcal{A}$ subspace is shown, and we fix $U/J=50$ and introduce the on-site potential correction $V$ at the edge sites. The angle $\phi$ is taken from Eq.~(\ref{EqPhiNparticle}) such that a $\pi$-flux is obtained in each subspace: (a) $\phi=\pi/2$, (b) $\phi=\pi/4$, and (c) $\phi=\pi/2$.}\label{FigSpectrumTau} \end{figure}
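These single-particle flat bands can be reproduced by direct diagonalization. The following minimal sketch (with an illustrative function name; open boundaries; basis index $2j+s$ with $s=0,1$ for the circulations $\pm$) places the phase $e^{\mp2i\phi}$ on the cross-circulation inter-cell hops, as in Eq.~(\ref{EqSingleParticleBoseHubbardHamiltonian}):
\begin{verbatim}
import numpy as np

def staggered_chain(n_cells, Gamma, phi, J=1.0):
    L = n_cells * Gamma
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for site in range(L - 1):
        # bond leaving the unit cell?
        inter = (site % Gamma == Gamma - 1)
        for s in (0, 1):
            alpha = 1 - 2 * s   # +1 or -1
            # same-circulation hop
            H[2*site + s, 2*(site+1) + s] = J
            # cross-circulation hop, with phase
            # e^{-2i*alpha*phi} on inter-cell bonds
            ph = (np.exp(-2j * alpha * phi)
                  if inter else 1.0)
            H[2*site + s, 2*(site+1) + (1 - s)] = J * ph
    return H + H.conj().T

for Gamma in (2, 3):
    E = np.linalg.eigvalsh(
        staggered_chain(8, Gamma, np.pi / 2))
    print(Gamma, sorted(set(np.round(E, 6).tolist())))
# Gamma=2 -> [-2, 0, 2]
# Gamma=3 -> [-2.828427, -2, 0, 2, 2.828427]
# only a handful of distinct energies: flat bands
\end{verbatim}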
\subsection{Example: $\Gamma=3$} For a periodicity $\Gamma=3$, the unit cell has three sites, which we call $A$, $B$, and $C$. From Figs.~\ref{FigSpectrumTau}(a2), (b2), and (c2), one can see that the $N$-particle subspaces (with the appropriate $\pi$-flux-inducing angle $\phi$) present six flat bands, two of which are degenerate zero-energy bands. The eigenstates in these flat bands consist of a series of CLSs that one can find through the diagonalization of a small lattice. Analogously to the $\Gamma=2$ case, the basis states that compose the smallest caging cell are those within a unit cell and the first site of the next one, \begin{equation}\label{EqCLSbasisTau} \left\{\begin{aligned} |A_k^+,n\rangle,|A_k^-,n\rangle,|B_k^+,n\rangle,|B_k^-,n\rangle,\\ |C_k^+,n\rangle,|C_k^-,n\rangle,|A_{k+1}^{+},n\rangle,|A_{k+1}^{-},n\rangle \end{aligned}\right\}. \end{equation} We give below the analytical expressions of the CLSs (dropping the label $n$ for conciseness) and a visual representation in Fig.~\ref{FigCLSTau}, \begin{equation}\label{EqCLStau3} \begin{aligned}|\Upsilon_k^1\rangle&=\dfrac{|A_k^+\rangle+|A_k^-\rangle+\sqrt{2}|B_k^+\rangle+\sqrt{2}|B_k^-\rangle+|C_k^+\rangle+|C_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^2\rangle&=\dfrac{|A_k^+\rangle+|A_k^-\rangle-\sqrt{2}|B_k^+\rangle-\sqrt{2}|B_k^-\rangle+|C_k^+\rangle+|C_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^3\rangle&=\dfrac{|C_k^+\rangle-|C_k^-\rangle-|A_{k+1}^+\rangle+|A_{k+1}^-\rangle}{2},\\ |\Upsilon_k^4\rangle&=\dfrac{|C_k^+\rangle-|C_k^-\rangle+|A_{k+1}^+\rangle-|A_{k+1}^-\rangle}{2},\\ |\Upsilon_k^5\rangle&=\dfrac{|C_k^+\rangle+|C_k^-\rangle-|A_k^+\rangle-|A_k^-\rangle-\sqrt{2}|B_k^+\rangle+\sqrt{2}|B_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^6\rangle&=\dfrac{|C_k^+\rangle+|C_k^-\rangle-|A_k^+\rangle-|A_k^-\rangle+\sqrt{2}|B_k^+\rangle-\sqrt{2}|B_k^-\rangle}{2\sqrt{2}}.\end{aligned} \end{equation} The energies of the CLSs are given by \begin{equation} \begin{aligned} E_1&=2\sqrt{2}\mathcal{J}, & \quad E_2&=-2\sqrt{2}\mathcal{J},\quad & E_3&=-2\mathcal{J}, \\ E_4&=2\mathcal{J}, & E_5&=0, & E_6&=0. \end{aligned} \end{equation} Let us compare these CLSs with those obtained for $\Gamma=2$ in Eq.~(\ref{EqCLS}). For $\Gamma=3$, the unit cell is enlarged, and we obtain more CLSs (six for $\Gamma=3$ vs. four for $\Gamma=2$) that also span a larger number of sites. As a direct consequence, the caging dynamics resulting from these flat bands have larger support over the lattice. As an example, we consider the two-particle $\mathcal{A}$ subspace with $\phi=\pi/4$, $U/J=50$, and $N_c=12$ unit cells for $\Gamma=3$. In Fig.~\ref{FigCagingTau}, we show the time evolution of the populations $P_{|j_k^\alpha,2\rangle}$ for the initial state $\left(\left|A_{4}^+,2\right\rangle +\left|A_{4}^-,2\right\rangle\right)/\sqrt{2}$. The red line indicates the caged population $P_{cag}=P_{|A_k^{+},2\rangle}+P_{|A_k^{-},2\rangle}+P_{|B_k^{+},2\rangle}+P_{|B_k^{-},2\rangle}+P_{|C_{k}^{+},2\rangle}+P_{|C_{k}^{-},2\rangle}$. The population oscillates between the sites $A_k$, $B_k$, and $C_k$ of a single unit cell, as destructive interference occurs at the sites $C_{k-1}$ and $A_{k+1}$.
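As a quick consistency check, one can verify numerically that the states in Eq.~(\ref{EqCLStau3}) are exact eigenstates, reusing the staggered_chain sketch above (single-particle case, so $\mathcal{J}=J$); for instance, for $|\Upsilon_k^1\rangle$ placed in a bulk cell:
\begin{verbatim}
import numpy as np
# requires staggered_chain from the sketch above
H = staggered_chain(n_cells=4, Gamma=3,
                    phi=np.pi / 2)
v = np.zeros(H.shape[0], dtype=complex)
amp = np.array([1, 1, np.sqrt(2), np.sqrt(2),
                1, 1]) / (2 * np.sqrt(2))
# (A+, A-, B+, B-, C+, C-) of the bulk cell k=1
v[6:12] = amp
print(np.allclose(H @ v, 2 * np.sqrt(2) * v))
# True: eigenstate with energy 2*sqrt(2)*J
\end{verbatim}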
\begin{figure}[t] \includegraphics{figure8} \caption{Representation of the CLSs for $\Gamma=3$ defined in Eq.~(\ref{EqCLStau3}) that are eigenstates of the Creutz ladder with a non-uniform $\pi$-flux, see Fig.~\ref{FigSystemTau}(b). The radius represents the amplitude and the color represents the phase, with red being a $\pi$ phase and green a zero phase.}\label{FigCLSTau} \end{figure} \begin{figure}[h] \includegraphics[width=1\columnwidth]{Caging2ASymTau3} \caption{Time evolution of the population of the states $|j_k^\alpha,2\rangle$ with $j=A,B,C$ and of the total caged population $P_{cag}$ (continuous red line), obtained through exact diagonalization for $U/J=50$, $N_c=12$ unit cells, and $\phi=\pi/4$. The dashed black line is the population in the states $|A_4^\alpha,2\rangle$, with $\alpha=\pm$; the dotted blue line is the population in the states $|B_{4}^\alpha,2\rangle$; and the dashed-dotted green line is the population in $|C_{4}^\alpha,2\rangle$. The initial state is $\left(\left|A_{4}^+,2\right\rangle +\left|A_{4}^-,2\right\rangle\right)/\sqrt{2}$.}\label{FigCagingTau} \end{figure} \section{Conclusions}\label{SecConclusions} We have studied a system of bosons in a staggered lattice with a ring trap at each site, considering the local eigenstates with orbital angular momentum $l=1$. The system can be mapped to a Creutz ladder with a real and a synthetic dimension, in which the flux enclosed in each plaquette is determined by the angle $\phi$ that makes the lattice staggered. In the single-particle case, one can tune the angle $\phi$ to obtain a uniform $\pi$-flux threading each plaquette. This leads to a flat-band spectrum characterized by the presence of CLSs, and the system exhibits Aharonov-Bohm caging. For $N$ particles in the regime of strong on-site interactions, bound-states arise in which the $N$ particles populate a single site. Using perturbation theory, most of the $N$-particle subspaces can be mapped to an effective Creutz ladder with a flux that depends on the angle $\phi$. We have identified the conditions under which these subspaces present a $\pi$-flux that leads to flat bands and Aharonov-Bohm caging. Remarkably, some of these subspaces can exhibit Aharonov-Bohm caging even when the single-particle spectrum is dispersive, making these instances a purely many-body effect. Finally, we have generalized this study to the case of non-uniform fluxes by introducing the angle $\phi$ with an arbitrary lattice periodicity $\Gamma$. In this case, one can engineer flat-band spectra for different $N$-particle subspaces and an arbitrary $\Gamma$. As the unit cell increases in size, the number of flat bands increases, resulting in a larger number of CLSs that also have a greater spatial extent. As a result, the caged particles can explore a broader region of the lattice before encountering destructive interference, making the periodicity $\Gamma$ a tunable parameter that controls the spatial extent of the Aharonov-Bohm caging. \section{Acknowledgments} EN, VA, and JM acknowledge support from the Spanish Ministry of Science and Innovation (MINECO) (PID2020-118153GB-I00), the Catalan Government (Contract No. SGR2017-1646), and the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (project QUASICAT/QuantumCat). EN acknowledges financial support from MINECO through the grant PRE2018-085815 and from COST through Action CA16221. AMM and RGD acknowledge financial support from the Portuguese Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N) through Projects No. UIDB/50025/2020, No. UIDP/50025/2020, and No. LA/P/0037/2020, and funding from FCT--Portuguese Foundation for Science and Technology through Project No. PTDC/FISMAC/29291/2017. AMM acknowledges financial support from FCT through the work Contract No. CDL-CTTRI147-ARH/2018 and from i3N through the work Contract No. CDL-CTTRI-46-SGRH/2022.
For three particles and the analogous initial states, $\left(\left|A_{k}^+,3\right\rangle +\left|A_{k}^-,3\right\rangle\right)/\sqrt{2}$ and $\left|A_{k}^+,3\right\rangle$, we obtain identical (albeit slower) dynamics that correspond to three-particle Aharonov-Bohm caging. The periods of the oscillations for the different numbers of particles and $U/J=50$ are $JT_{N=1}=1.55$, $JT_{N=2}=39.5$, $JT_{N=3}=2600$. To further compare the two and three-particle Aharonov-Bohm caging, we consider an initial state in the $\mathcal{A}$ subspace, $(|B_{k}^{+},n\rangle+|B_k^{-},n\rangle)/\sqrt{2}$ (with $n=2$ or $n=3$), located at the middle of the lattice, and we let it evolve through time. The caged population for this initial state is $P_{cag}=P_{|A_k^{+},n\rangle}+P_{|A_k^{-},n\rangle}+P_{|B_k^{+},n\rangle}+P_{|B_k^{-},n\rangle}$. Fig.~\ref{FigCaging2}(c) shows the caged population after a time $3JT_N$, where $JT_N$ is the period of the oscillations for $U/J=100$, as a function of the ratio $U/J$ for the two and three-particle cases. The caged population rapidly increases for $U>J$, reaching a value close to $1$ as the system enters the regime of strong interactions. The growth of the caged population is faster for the three-particle subspace compared to the two-particle case, and it saturates at a smaller value of $U/J$. This can be understood by inspecting the higher-order terms of the perturbative expansion. As the ratio $U/J$ decreases, higher-order terms of the perturbative expansion have to be taken into account. For two particles (and also for any subspace with an even number of particles), the odd-order perturbative corrections are always zero. Then, the next perturbative correction is fourth order, and it leads to effective on-site potentials, nearest-neighbor hoppings, and also next-nearest neighbor hoppings that destroy the CLSs. In contrast, the fourth-order correction to the three-particle case only induces an effective on-site potential, and the fifth order induces nearest-neighbor hopping terms that maintain the Creutz ladder structure that exhibits flat bands. It is not until the sixth-order correction, that the next-nearest neighbor hoppings appear, making the CLSs disappear. Thus, the three-particle subspaces are more resilient to deviations from the regime of strong interactions than the two-particle $\mathcal{A}$ subspace. \subsubsection{$\mathcal{B}$ subspace} The bound-states of the $\mathcal{B}$ subspace for the two-particles case have one particle in each circulation, $|j_k^{+},1\rangle\otimes|j_k^-,1\rangle$. When we consider the couplings between states in adjacent sites, \textit{e.g.} between $|A_k^{+},1\rangle\otimes|A_k^-,1\rangle$ and $|B_k^{+},1\rangle\otimes|B_k^-,1\rangle$, there is no complex factor, as any hopping process between opposite circulations will necessarily be followed by a hopping process with the opposite phase factor. This results in an effective linear chain with uniform couplings $2J^2/U$ and on-site potentials $V_B=4J^2/U$ at the bulk and $V_E=2J^2/U$ at the edges. Therefore, the two-particle $\mathcal{B}$ subspace has a dispersive spectrum for any $\phi$ [see Fig.~\ref{FigSpectrumB}(a)] and therefore cannot exhibit Aharonov-Bohm caging. The three-particle $\mathcal{B}$ subspace arises from bound states of the form $|j_k^{\alpha},2\rangle\otimes|j_k^{-\alpha},1\rangle$. 
In analogy with the $\mathcal{A}$ subspace cases, the effective model for the $\mathcal{B}$ subspace is a Creutz ladder with a bulk-edge on-site potential mismatch that can be mapped to two decoupled SSH-like chains with the same on-site potential mismatch (see Table~\ref{Table}). Fig.~\ref{FigSpectrumB}(b) shows the energy spectrum for the three-particle $\mathcal{B}$ subspace for $U/J=50$, $N_c=12$ unit cells, and $\phi=\pi/2$. However, in this case there is an extra ingredient: the two bound-states in the same site, $|j_k^{\alpha},2\rangle\otimes|j_k^{-\alpha},1\rangle$ and $|j_k^{\alpha},1\rangle\otimes|j_k^{-\alpha},2\rangle$, are also coupled through second-order processes that generate a complex vertical coupling in the effective Creutz model. For the angle $\phi$ that induces a $\pi$-flux, $\phi=\pi/2$, the complex coupling of each mediating process cancels against that of its mirror-symmetric counterpart (\textit{i.e.}, the process with the direction of the hoppings inverted from right to left). This compensation does not occur at the edge sites, which results in an energy mismatch between the Tamm-Shockley states (blue rhombi) of the two edges. In analogy with the three-particle $\mathcal{A}$ subspace [see Fig.~\ref{FigSpectrum2}(b1)], there are two topologically protected edge states (red triangles) besides the Tamm-Shockley states.
\begin{figure}[t] \includegraphics[width=1\linewidth]{spectrumsB} \caption{Energy spectrum of the $\mathcal{B}$ subspace for (a) two ($\phi=\pi/4$) and (b) three ($\phi=\pi/2$) particles, $U/J=50$ and $N_c=12$ unit cells. We depict bulk states with black circles, Tamm-Shockley states with blue rhombi, and topologically protected edge states with red triangles.}\label{FigSpectrumB} \end{figure}
\subsection{$N$-particle generalization} From the above cases, one can deduce a recipe to obtain Aharonov-Bohm caging in any $N$-particle subspace by looking at the $N$-particle tunneling processes involving complex tunnelings, \textit{i.e.}, the cross-circulation couplings $J_3$. We define an arbitrary bound state $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ with $n$ particles in one circulation and $m$ particles in the other circulation such that $n+m=N$. In the regime of strong interactions, Aharonov-Bohm caging can exist in the subspace generated by these bound-states if all the $N$-particle hopping processes involving a complex phase acquire the same total phase factor, such that by appropriately choosing the angle $\phi$, one can induce a $\pi$-flux. The bound-states at the sites $B_k$ couple to those at the adjacent sites $A_{k+1}$ (see Fig.~\ref{FigNhoppings}) through an integer number of real hoppings from each circulation, $R_\alpha$ and $R_{-\alpha}$, and an integer number of complex hoppings from each circulation, $C_\alpha$ and $C_{-\alpha}$, such that \begin{equation}\label{EqNparticleOut} n=R_\alpha+C_\alpha \qquad \text{and} \qquad m=R_{-\alpha}+C_{-\alpha}. \end{equation} Then, the total complex factor will be given by $e^{\pm 2i\phi (C_\alpha-C_{-\alpha})}$.
These states are coupled to both the bound-states $\left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}$ [Fig.~\ref{FigNhoppings}(a)] and $\left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}$ [Fig.~\ref{FigNhoppings}(b)] in the adjacent site, thus fulfilling the following conditions for each case,
\begin{equation}\label{EqNparticleIn} \begin{aligned} \left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}:&\left\lbrace \begin{aligned} n=C_{-\alpha}+R_{\alpha} \\ m=C_{\alpha}+R_{-\alpha} \end{aligned}\right\rbrace, \\ \left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}:& \left\lbrace \begin{aligned} n=R_{-\alpha}+C_{\alpha}\\ m=R_{\alpha}+C_{-\alpha} \end{aligned}\right\rbrace. \end{aligned} \end{equation}
\begin{figure}[t] \includegraphics{figure6} \caption{Hopping processes of an arbitrary $N$-particle bound state $\left\{|B_k^{\alpha},n\rangle\otimes|B_k^{-\alpha},m\rangle\right\}$ that couples to the bound-states in the adjacent site (a) $\left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}$ and (b) $\left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}$ and corresponding phase factors. $R_\alpha$ and $C_\alpha$ are the numbers of real and complex hopping processes, respectively, coming from each circulation and the labels $n$ and $m$ denote the number of particles in each site. }\label{FigNhoppings} \end{figure}
Combining Eqs.~(\ref{EqNparticleOut}) and (\ref{EqNparticleIn}), we obtain the following relations between the number of complex couplings $C_\alpha$ and the corresponding phase factors (see Fig.~\ref{FigNhoppings}),
\begin{equation}\label{EqFactorsArbitrarySubspace} \begin{aligned} \left\{|A_{k+1}^{\alpha},n\rangle\otimes|A_{k+1}^{-\alpha},m\rangle\right\}:\ & C_{\alpha}=C_{-\alpha} \quad\Longrightarrow\quad 1, \\ \left\{|A_{k+1}^{\alpha},m\rangle\otimes|A_{k+1}^{-\alpha},n\rangle\right\}:\ & C_{\alpha}-C_{-\alpha}=n-m \\ &\quad\Longrightarrow\quad e^{\pm 2i\phi (n-m)}. \end{aligned} \end{equation}
Therefore, one can obtain an effective Creutz ladder model up to $N$-th order perturbation theory for any subspace with $n\neq m$. In this case, the states in the same site $\left\{|j_{k}^{\alpha},n\rangle\otimes|j_{k}^{-\alpha},m\rangle\right\}$ and $\left\{|j_{k}^{\alpha},m\rangle\otimes|j_{k}^{-\alpha},n\rangle\right\}$ are also coupled, which produces an effective vertical coupling in the Creutz ladder. The order of these couplings is $2|n-m|$ and they are in general complex. The effect of these couplings can be neglected if $2|n-m|\gg n+m=N$, as $N$ is the order of the other couplings that compose the Creutz ladder. Alternatively, the vertical couplings vanish in the bulk for $\phi=\pi/2$, as each $N$-particle hopping process cancels with its left-right symmetric counterpart. Then, considering the vertical coupling and using Eq.~(\ref{EqFactorsArbitrarySubspace}), one can obtain a $\pi$-flux through the plaquettes by choosing
\begin{equation}\label{EqPhiNparticle} \left\lbrace \begin{aligned} \phi&=\frac{\pi}{2(n-m)}, &\quad\text{if}\quad 2|n-m|\gg n+m=N\\ \phi&=\frac{\pi}{2}, &\quad\text{if}\quad n-m \text{ is odd.} \end{aligned}\right. \end{equation}
For $n=m$, there is only one type of bound state, $\left\{|j_{k+1}^{\alpha},n\rangle\otimes|j_{k+1}^{-\alpha},n\rangle\right\}$, such that the effective model is a linear chain with real couplings, and the system cannot exhibit Aharonov-Bohm caging.
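Equation (\ref{EqPhiNparticle}) amounts to a simple selection rule that is easy to spell out in code. The sketch below (Python; the helper name and the branch logic deciding when the vertical couplings can be ignored are our illustrative choices, not part of the derivation) returns an angle $\phi$ that threads a $\pi$-flux through the effective plaquettes of an $(n,m)$ bound-state subspace and checks that the corresponding cross-coupling factor $e^{2i\phi(n-m)}$ equals $-1$.
\begin{verbatim}
import numpy as np

def pi_flux_angle(n, m):
    # Angle phi that produces a pi-flux for the (n, m) bound-state
    # subspace, following Eq. (EqPhiNparticle). Returns None for
    # n == m, where the effective model is a real linear chain.
    if n == m:
        return None
    if (n - m) % 2 == 1:       # odd imbalance: phi = pi/2 also
        return np.pi / 2       # cancels the bulk vertical couplings
    # Even imbalance: rely on the vertical couplings (order 2|n-m|)
    # being negligible next to the order-N = n+m ladder couplings.
    return np.pi / (2 * (n - m))

for (n, m) in [(2, 0), (3, 0), (2, 1)]:
    phi = pi_flux_angle(n, m)
    # Phase accumulated by the cross coupling; expect exp(i*pi) = -1.
    print(n, m, phi, np.exp(2j * phi * (n - m)))
\end{verbatim}
For the $\mathcal{A}$ subspaces ($m=0$) this reproduces the angles $\phi=\pi/(2N)$ for $N$ even and $\phi=\pi/2$ for $N$ odd quoted below.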
For the $N$-particle subspaces that exhibit flat bands with $\phi\neq\pi/2$, the single-particle spectrum is dispersive, which makes these Aharonov-Bohm caging phenomena a many-body effect. Let us see some examples. For the $\mathcal{A}$ subspaces, $N$ particles will accumulate a complex phase $e^{\pm 2iN\phi}$ when coupling the states $|B_k^{\alpha},N\rangle$ and $|A_{k+1}^{-\alpha},N\rangle$. For $N$ even, flat bands arise for $\phi=\pi/(2N)$, while for $N$ odd both $\phi=\pi/(2N)$ and $\phi=\pi/2$ yield a $\pi$-flux. Additionally, the vertical couplings are $2N$-th-order connections and, thus, always negligible. For the $\mathcal{B}$ subspaces with an even number of particles, $N/2$, in each circulation, Aharonov-Bohm caging cannot occur. The complex phases accumulated by the particles cancel out such that all the couplings of the effective chain are real and the resulting energy bands are dispersive. However, for $N$ odd, the tunneling process of one of the particles is not compensated, leading to a complex factor $e^{\pm 2i\phi}$. Then, a phase $\phi=\pi/2$ leads to a flat-band spectrum while at the same time canceling the vertical couplings. For an angle $\phi=\pi/2$, the single-particle spectrum exhibits flat bands, and both the $N$ odd $\mathcal{A}$ and $\mathcal{B}$ subspaces also present a flat-band spectrum. However, for an angle $\phi=\pi/(2N)$ the $\mathcal{A}$ subspace presents flat bands in the absence of a single-particle flat-band spectrum, making this instance of Aharonov-Bohm caging a purely many-body effect. As one increases the number of particles in the system, the number of bound-state configurations increases and, in particular, other semi bound-states appear where not all particles are located in a single site, \textit{i.e.} $\left\{|j_k^{\alpha},n\rangle\otimes|j_k^{-\alpha},m\rangle\right\}$ with $n+m<N$ and $N-(n+m)$ particles not bound to the site $j$. The picture described above will hold as long as the subspaces induced by bound-states do not become degenerate with the subspaces induced by these semi bound-states. For the $\mathcal{B}$ subspaces, as their bound-states have the maximum possible energy, they will not become degenerate with any other subspace. The other subspaces can become degenerate with a subspace with some particles in a bound state in the same site, and some in other sites of the lattice. However, these instances are rare: for up to ten particles, only $8$ out of $34$ bound-states are degenerate, for example, $\left\{|j_i^{\alpha},5\rangle\right\}$ and $\left\{|j_i^{\alpha},2\rangle\otimes|j_i^{-\alpha},2\rangle\right\}$. We have numerically checked the recipe to obtain $\pi$-fluxes in arbitrary subspaces, given in Eq.~(\ref{EqPhiNparticle}), for up to six particles. \vspace{-3mm}
\section{Generalization to non-uniform fluxes}\label{SecStaggered} In this Section, we generalize the study to the family of models where the angle $\phi$ of the staggered chain is introduced with an arbitrary lattice periodicity $\Gamma$, thus increasing the number of sites per unit cell [see Fig.~\ref{FigSystemTau}(a)]. The complex couplings between adjacent sites only occur between the last site of the unit cell and the first site of the next unit cell. Thus, the flux induced by this angle $\phi$ will not be present in each plaquette, with the exact flux pattern being a function of the number of sites in the unit cell.
Non-uniform fluxes have been studied in diamond lattices \cite{Mukherjee2020,Li2020}, where they have been shown to lead to an enriched Aharonov-Bohm caging phenomenology.
\begin{figure}[t] \includegraphics{figure7} \caption{(a) Diagram of the one-dimensional staggered chain for an arbitrary periodicity $\Gamma$. The unit cell $k$ contains $\Gamma$ sites $\{j_k^{(1)},j_k^{(2)},...,j_k^{(\Gamma-1)},j_k^{(\Gamma)}\}$ and is enclosed by a dotted rectangle. The grey line indicates the origin of the phase $\varphi_0$ such that an angle $\phi$ is introduced in the inter-cell couplings. The black arrows denote real tunneling amplitudes while the blue ones indicate complex tunneling amplitudes between states of different winding number. (b) Schematic representation of the sites and couplings of the lattice for $\Gamma=3$ and an angle $\phi$ such that a non-uniform $\pi$-flux arises.}\label{FigSystemTau} \end{figure}
The analysis of Section \ref{SecNParticle} for the dynamics of $N$ particles in the regime of strong interactions applies also to this family of models. In particular, the angles given in Eq.~(\ref{EqPhiNparticle}) for each $N$-particle subspace also yield $\pi$-fluxes, that, in this case, are non-uniform [see an example for $\Gamma=3$ in Fig.~\ref{FigSystemTau}(b)]. The non-uniform pattern is composed of $\Gamma-2$ rhombi (or triangles) without a flux followed by two rhombi (or triangles) with a $\pi$-flux. For the case of $\Gamma=2$, discussed in Sections \ref{SecSingleParticle} and \ref{SecNParticle}, the number of rhombi plaquettes without flux is zero. As a result of the non-uniform flux pattern, a particle cannot tunnel $\Gamma$ sites to the right or the left due to destructive interference, and as a consequence, the spectrum is composed of a series of flat bands. Fig.~\ref{FigSpectrumTau} shows the energy spectrum for the single-particle case and the two and three-particle $\mathcal{A}$ subspaces for different periodicities, $\Gamma=2,3$, and $4$. The angles $\phi$, as given by Eq.~(\ref{EqPhiNparticle}), yield a $\pi$-flux, and we take $U/J=50$ and simulate $24$ sites for each case. Notably, by increasing the periodicity $\Gamma$, the number of flat bands increases, as the caging cell is enlarged and gives support to a larger number of CLSs. The zero-energy edge states that are present for $\Gamma=2$ are buried in the central band of the spectrum for $\Gamma>2$. As an example, we discuss the case of $\Gamma=3$ in the next subsection.
\begin{figure}[h] \includegraphics[width=1\columnwidth]{EnergiesComparison} \caption{Energy spectrum for different numbers of particles (a) $N=1$, (b) $N=2$, (c) $N=3$ and periodicities (1) $\Gamma=2$, (2) $\Gamma=3$, and (3) $\Gamma=4$, for $24$ sites. For the two and three-particle cases, only the $\mathcal{A}$ subspace is shown, and we fix $U/J=50$ and introduce the on-site potential correction $V$ at the edge sites. The angle $\phi$ is taken from Eq.~(\ref{EqPhiNparticle}) such that a $\pi$-flux is obtained in each subspace: (1) $\phi=\pi/2$, (2) $\phi=\pi/4$, and (3) $\phi=\pi/2$.}\label{FigSpectrumTau} \end{figure}
\subsection{Example: $\Gamma=3$} For a periodicity $\Gamma=3$, the unit cell has three sites that we will call $A$, $B$, and $C$. From Figs.~\ref{FigSpectrumTau}(a2), (b2), and (c2), one can see that the $N$-particle subspaces (with the appropriate $\pi$-flux inducing angle $\phi$) present six flat bands with two degenerate zero-energy bands.
The eigenstates in these flat bands consist of a series of CLSs that one can find through the diagonalization of a small lattice. Analogously to the $\Gamma=2$ case, the basis states that compose the smallest caging cell are those within a unit cell and the next site \begin{equation}\label{EqCLSbasisTau} \left\{\begin{aligned} |A_k^+,n\rangle,|A_k^-,n\rangle,|B_k^+,n\rangle,|B_k^-,n\rangle,\\ |C_k^+,n\rangle,|C_k^-,n\rangle,|A_{k+1}^{+},n\rangle,|A_{k+1}^{-},n\rangle \end{aligned}\right\}. \end{equation} We give below the analytical expressions of the CLSs (dropping the label $n$ for conciseness); a visual representation appears in Fig.~\ref{FigCLSTau},
\begin{equation}\label{EqCLStau3} \begin{aligned}|\Upsilon_k^1\rangle&=\dfrac{|A_k^+\rangle+|A_k^-\rangle+\sqrt{2}|B_k^+\rangle+\sqrt{2}|B_k^-\rangle+|C_k^+\rangle+|C_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^2\rangle&=\dfrac{|A_k^+\rangle+|A_k^-\rangle-\sqrt{2}|B_k^+\rangle-\sqrt{2}|B_k^-\rangle+|C_k^+\rangle+|C_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^3\rangle&=\dfrac{|C_k^+\rangle-|C_k^-\rangle-|A_{k+1}^+\rangle+|A_{k+1}^-\rangle}{2},\\ |\Upsilon_k^4\rangle&=\dfrac{|C_k^+\rangle-|C_k^-\rangle+|A_{k+1}^+\rangle-|A_{k+1}^-\rangle}{2},\\ |\Upsilon_k^5\rangle&=\dfrac{|C_k^+\rangle+|C_k^-\rangle-|A_k^+\rangle-|A_k^-\rangle-\sqrt{2}|B_k^+\rangle+\sqrt{2}|B_k^-\rangle}{2\sqrt{2}},\\ |\Upsilon_k^6\rangle&=\dfrac{|C_k^+\rangle+|C_k^-\rangle-|A_k^+\rangle-|A_k^-\rangle+\sqrt{2}|B_k^+\rangle-\sqrt{2}|B_k^-\rangle}{2\sqrt{2}}.\end{aligned} \end{equation}
The energies of the CLSs are given by \begin{equation} \begin{aligned} E_1&=2\sqrt{2}\mathcal{J}, & \quad E_2&=-2\sqrt{2}\mathcal{J},\quad & E_3&=-2\mathcal{J}, \\ E_4&=2\mathcal{J}, & E_5&=0, & E_6&=0. \end{aligned} \end{equation}
Let us compare these CLSs with those obtained for $\Gamma=2$, in Eq.~(\ref{EqCLS}). For $\Gamma=3$, the unit cell is enlarged, and we obtain more CLSs (six for $\Gamma=3$ vs. four for $\Gamma=2$) that also span a larger number of sites. As a direct consequence, the caging dynamics resulting from these flat bands have larger support over the lattice. To give an example, we consider the two-particle $\mathcal{A}$ subspace with $\phi=\pi/4$, $U/J=50$ and $N_c=12$ unit cells for $\Gamma=3$. In Fig.~\ref{FigCagingTau}, we show the time evolution of the population of the states, $P_{|j_k^\alpha,2\rangle}$, for the initial state $\left(\left|A_{4}^+,2\right\rangle +\left|A_{4}^-,2\right\rangle\right)/\sqrt{2}$. The red line indicates the caged population $P_{cag}=P_{|A_k^{+},2\rangle}+P_{|A_k^{-},2\rangle}+P_{|B_k^{+},2\rangle}+P_{|B_k^{-},2\rangle}+P_{|C_{k}^{+},2\rangle}+P_{|C_{k}^{-},2\rangle}$. The population oscillates between the sites $A_k$, $B_k$, $C_k$ of a single unit cell, as the destructive interference occurs at the sites $C_{k-1}$ and $A_{k+1}$.
\begin{figure}[t] \includegraphics{figure8} \caption{Representation of the CLSs for $\Gamma=3$ defined in Eq.~(\ref{EqCLStau3}) that are eigenstates of the Creutz ladder with a non-uniform $\pi$-flux, see Fig.~\ref{FigSystemTau}(b). The radius represents the amplitude and the color represents the phase, with red indicating a $\pi$ phase and green a zero phase.}\label{FigCLSTau} \end{figure}
\begin{figure}[h] \includegraphics[width=1\columnwidth]{Caging2ASymTau3} \caption{Time evolution of the population of the states $|j_k^\alpha,2\rangle$ with $j=A,B,C$ and total caged population $P_{cag}$ (continuous red line), obtained through exact diagonalization for $U/J=50$, $N_c=12$ unit cells and $\phi=\pi/4$.
The dashed black line is the population in the states $|A_4^\alpha,2\rangle$, with $\alpha=\pm$; the dotted blue line is the population in the states $|B_{4}^\alpha,2\rangle$; and the dashed-dotted green line is the population in $|C_{4}^\alpha,2\rangle$. The initial state is $\left(\left|A_{4}^+,2\right\rangle +\left|A_{4}^-,2\right\rangle\right)/\sqrt{2}$.}\label{FigCagingTau} \end{figure}
\section{Conclusions}\label{SecConclusions} We have studied a system of bosons in a staggered lattice with a ring trap at each site and considered the local eigenstates with orbital angular momentum $l=1$. The system can be mapped to a Creutz ladder with a real and a synthetic dimension, in which the flux enclosed in each plaquette is determined by the angle $\phi$ that makes the lattice staggered. In the single-particle case, one can tune the angle $\phi$ to obtain a uniform $\pi$-flux threading each plaquette. This leads to a flat-band spectrum characterized by the presence of CLSs and the system exhibits Aharonov-Bohm caging. For $N$ particles in the regime of strong on-site interactions, bound-states arise where the $N$ particles populate a single site. Using perturbation theory, most of the $N$-particle subspaces can be mapped to an effective Creutz ladder with a flux that depends on the angle $\phi$. We have identified the conditions under which these subspaces present a $\pi$-flux that leads to flat bands and Aharonov-Bohm caging. Remarkably, some of these subspaces can exhibit Aharonov-Bohm caging even in the presence of a single-particle dispersive spectrum, making these instances a purely many-body effect. Finally, we have generalized this study to the case of non-uniform fluxes by introducing the angle $\phi$ at an arbitrary lattice periodicity $\Gamma$. In this case, one can engineer flat-band spectra for different $N$-particle subspaces and an arbitrary $\Gamma$. As the unit cell increases in size, the number of flat bands increases, resulting in a larger number of CLSs that also have a greater spatial extent. As a result, the caged particles can explore a broader region of the lattice before encountering destructive interference, making the periodicity $\Gamma$ a tunable parameter that controls the spatial extent of the Aharonov-Bohm caging.
\section{Acknowledgments} EN, VA, and JM acknowledge support through the Spanish Ministry of Science and Innovation (MINECO) (PID2020-118153GB-I00), the Catalan Government (Contract No. SGR2017-1646), and the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (project QUASICAT/QuantumCat). EN acknowledges financial support from MINECO through the grant PRE2018-085815 and from COST through Action CA16221. AMM and RGD acknowledge financial support from the Portuguese Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N) through Projects No. UIDB/50025/2020, No. UIDP/50025/2020, and No. LA/P/0037/2020, and funding from FCT--Portuguese Foundation for Science and Technology through Project No. PTDC/FISMAC/29291/2017. AMM acknowledges financial support from the FCT through the work Contract No. CDL-CTTRI147-ARH/2018 and from i3N through the work Contract No. CDL-CTTRI-46-SGRH/2022.
\section{Introduction} Ever since the first exorcism of Maxwell's demon \cite{Szil29a}, determining how much energetic input a particular computation requires has been a broadly-appreciated theoretical question. In the current century, however, the question has taken on a markedly practical bent; a familiar example is the evolution of Moore's Law from initially provocative speculations decades ago to now addressing material, thermodynamic, and fabrication restrictions \cite{moore1964, moore1998, moore2006, hutcheson1993, hutcheson1996}. Transistor-based microprocessing presents fundamental scaling challenges that strictly limit potential directions for future optimization, and these challenges are no longer speculative. Clock speed, to take one example, has been essentially capped for two decades due to energy dissipation at high rates \cite{gelsinger1989,waldrop2016}. By some measures, Moore's law is already dead---as integrated circuit manufacturers go vertical, rather than face the expense of creating smaller transistors for 2D circuits that yield only marginal gains \cite{ball2022, vinet2011, courtland2021}. Given predicted explosive growth in societal demands for information processing and that digital microelectronics is now approaching the physical limits of available architectures \cite{ITRS20a}, exploring alternative computing paradigms is not only prudent but necessary. One alluring vision for the future involves hybrid devices, composed of a suite of computing modules---classical/quantum, digital/analog, deterministic/thermal---each with its own architecture and function that operate in concert. A hybrid architecture allows dynamically harnessing the processing node best suited for the task at hand. The underlying insight is that a computing device's physical substrate should match its desired processing function \cite{Feyn82a}. In keeping with this, momentum computing demonstrated that low-dissipation operations do not require quasi-static operation \cite{ray2021}. That is, energy-efficient computation can be fast in a low-dissipation device. Reference \cite{ray2021} introduced a design framework and theory for an arbitrarily low-cost, high-speed bit swap, a logically-reversible gate (the only known logical framework with no nontrivial lower bound on its dissipation \cite{ITRS20a, frank2005, bhattacharya2021}). It demonstrated that a universal reversible gate---a Fredkin gate \cite{toffoli1980, Fred82a}---can be built by coupling three such devices together. However, any particular physically-instantiated implementation will come with its own restrictions and considerations that are likely to disallow performing the swap exactly as theorized. And so, an implementation linked to a particular substrate must be built and analyzed in its own right. We present a physically-realizable device and control protocols that implement a bit-swap gate that operates in the sub-$k_\text{B} T$ energy regime using superconducting Josephson junctions (JJs)---a well-known and scalable microtechnology. We recently used this device to measure the thermodynamic performance of bit erasure \cite{saira2020,Wims19a}. That extensive experimental effort demonstrated in practical terms that the device proposed here is realizable with today's microfabrication technologies and allows for detailed studies of thermodynamic costs. And so, the device's design and control protocol open up the exploration of the energy scales of highly energy-efficient, high-speed, general-purpose computing.
\paragraph*{The Landauer} While there are many different quantities one might wish to optimize, the perspective here sets the goal as minimizing the net work invested $W$ when performing logical operations. It is well known that the most pressing physical limits on modern computation are power constraints \cite{frank2002}, thus the measure is well suited to diagnose the problems with current devices as well as potential strengths of new ones. For over half a century now, \emph{Landauer's Principle} has exerted a major impact on the contemporary approach to thermodynamic costs of information processing \cite{Land61a,Benn82}. Its lower bound of $k_\text{B} T \ln 2$ energy dissipated per bit erased has served as a standard candle for energy use in physical information processing. To aid comparison across computing paradigms and protocols, we refer to this temperature-dependent information-processing energy scale as a \emph{Landauer}: approximately a few zeptojoules at room temperature, and a few hundredths of a zeptojoule at liquid He temperatures. See Appendix \ref{sm:Landauer} for further comparisons. To appreciate the potential benefits of momentum computing operating at sub-Landauer energies, we ask where contemporary computing sits on this energy scale. Consider recent stochastic thermodynamic analyses of single-electron transistor logic gates \cite{gao2021,freitas2021}---analogs to conventional CMOS technology. The upshot is that these technologies currently operate between $10^3$ and $10^4$ Landauers. More to the point, devices using CMOS-based technology will only ever be able to operate accurately above $\approx 10^2$ Landauers \cite{frank2005,ITRS20a}. In short, momentum computing promises substantial improvements in efficiency with no compromise in speed.
\paragraph*{Outline} Here we provide a brief overview of each section and appendix in the text. Section \ref{sec:bitswap} explains the importance of bit-swap operations and summarizes the protocol presented in Ref. \cite{ray2021}. Section \ref{sec:physswap} introduces the physical substrate, highlights why it is a good candidate, and addresses design restrictions. Section \ref{sec:performance} reports quantitative results on device performance as measured through detailed simulations of the microscopic degrees of freedom. Section \ref{sec:litreview} compares them to related results, both contemporary and foundational. Section \ref{sec:conclusion} concludes, summarizing the results and briefly outlining future directions and challenges for scaling up to general-purpose computing. Appendices include details necessary to understand the process by which the parameter space of control protocols was restricted and local work minima were found in simulation. Additionally, they provide expository information that the interested reader might find relevant. In particular, Appendix \ref{sm:Landauer} discusses the temperature dependent energy scale, the ``Landauer''. Appendix \ref{sm:LimitStochThermo} outlines key physical differences between continuous-time Markov chains and hidden Markov chains. Appendix \ref{sm:DimensionlessEoM} presents the equations of motion of the bit-swap Josephson junction circuit in their dimensional form and their transformation to simulation-appropriate dimensionless equations. Appendix \ref{sm:PotentialSimplifications} details the process of algebraically eliminating large swaths of protocol parameter space. And, finally, Appendix \ref{sm:SearchMinWork} discusses the algorithmic details of the simulations.
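Before turning to the bit swap itself, the Landauer scale quoted above is easy to check numerically. The short sketch below (Python) evaluates $k_\text{B} T \ln 2$ at representative room and liquid-He temperatures; the specific temperatures are our illustrative choices, not measured operating points.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI)

def landauer(T):
    # One Landauer, k_B * T * ln(2), in zeptojoules (1 zJ = 1e-21 J).
    return k_B * T * np.log(2) / 1e-21

print(f"300 K : {landauer(300.0):.2f} zJ")  # ~2.87 zJ at room temperature
print(f"4.2 K : {landauer(4.2):.3f} zJ")    # ~0.040 zJ at liquid He
\end{verbatim}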
\section{Bit Swap} \label{sec:bitswap} The Landauer cost has stood as a reference for so long because bit erasure is the dominant source of unavoidable dissipation when implementing universal computing with transistor logic gates. It is the elementary binary computation that most changes the Shannon entropy of the distribution over memory states. In this way, one sees $k_\text{B} T\ln 2$ not just as the cost of erasure, but as the cost of the maximally dissipative elementary operation on which conventional computing relies. And so, the Landauer naturally sets the energy scale for conventional computing. Taking inspiration from Landauer's pioneering work, we investigate the cost of the most expensive operation necessary to physically implement universal momentum computing: a bit swap. The ideal bit swap has no error, but in the thermodynamic setting one is also interested in an implementation's fidelity. And so, we write a swap with error rates $\epsilon_0$ and $\epsilon_1$ as a stochastic mapping between memory states $m \in \{0,1\}$ from time $0$ to time $\tau$: \begin{align*} P_\epsilon(m_\tau | m_0) = \left[\begin{array}{cc} \epsilon_0 & 1-\epsilon_0 \\ 1-\epsilon_1 & \epsilon_1 \end{array}\right] ~. \end{align*} The bit swap's dominance in the cost of universal momentum computing can be appreciated by considering the input-output mapping of the Fredkin gate---a $3$-bit universal gate with memory states $m_x m_y m_z$, $m_i \in \{0,1\}$. All inputs are preserved except for the exchange $101 \leftrightarrow 110$. We can decompose the informational state space into two regions. If $m_x = 0$, the operation is simply an identity, which is trivially costless. If $m_x = 1$ and $m_y = m_z$, we once again have an identity. Thus, it is only in the subspace with $m_x=1$ and $m_y \neq m_z$ that a swap must take place. Reference \cite{ray2021} provides explicit potentials that impose effectively 1D swap potentials on a full 3-bit state space in order to implement the Fredkin gate, demonstrating that only 1D swap operations need contribute to the operation's thermodynamic cost.
\subsection{Momentum Computing Realization} When information is stored in a one-dimensional state space, it is not clear how to operate a thermodynamically-efficient bit swap with high accuracy. (In this, we recall the conventional interpretation of \emph{efficient} to mean quasistatic or constantly-thermalizing Markovian dynamics \cite{esposito2012, seif2019}.) At time $t$ in the operation, the distribution of initial conditions corresponding to $m(t = 0)=0$ must overlap with that corresponding to $m(t = 0)=1$. And, from that point forward it is impossible to selectively separate them based on their initial positions. Information, and so reversibility, is lost. Consider, instead, a computation that happens faster than the equilibration timescale of the physical substrate and its thermal environment. In this regime, a particle's instantaneous momentum can be commandeered to carry useful information about its future behavior. Our protocol operates on this timescale, using the full phase space of the underlying system's degrees of freedom to transiently store information in their momenta. Due to this, the instantaneous microstate distribution is necessarily far from equilibrium during the computation. Moreover, the coarse-grained memory-state dynamics during the swap are not Markovian, despite both the net transformation over the memory states and the microscopic phase space dynamics being Markovian.
Nonetheless, the system operates orders of magnitude more efficiently than current CMOS while, competing with CMOS, the dynamics evolve nonadiabatically in finite time---on nanosecond timescales for the physical implementation below. In this way, momentum computing offers up device designs and protocols that accomplish information processing that is at once fast, efficient, and low error. There is a trade-off---a loss of Markovianity in the memory-state dynamics. That noted, the dynamics of the memory states are faithfully described by continuous-time \emph{hidden} Markov chains (CTHMCs) \cite{bech2015, strasberg2016, ara2016}, rather than the continuous-time Markov chains (CTMCs) that are common in stochastic thermodynamics \cite{esposito2012, seif2019}. See Appendix \ref{sm:LimitStochThermo} for a brief review.
\subsection{Idealized Protocol} \label{sec:exact_protocol} Reference \cite{ray2021} describes a perfectly-efficient protocol for implementing a swap in finite time. The operation is straightforward. We begin with an ensemble of particles subject to a storage potential. The potential energy landscape $V^\text{\text{store}}(x)$ must contain at least two potential minima---positioned, say, at $x = \pm x_0$---with an associated energy barrier equal to $\max\{ V^{\text{store}}(x) \,|\, x\in(-x_0,x_0) \} - V^{\text{store}}(x_0)$. During storage, a particle's environment is a thermal bath at temperature $T$. As the height of the potential energy barrier rises relative to the bath energy scale $k_\text{B} T$, the probability that the particle transitions between left ($x<0$) and right ($x\geq0$) decreases exponentially. In this way, if we assign the left half of the position space to memory state $0$ and the right half to memory state $1$, the energy landscape is capable of metastably storing a bit $m \in \{0,1\}$. At the protocol's beginning, we instantaneously apply a new potential energy landscape $V^{\text{comp}} \equiv k x^2 / 2$. The system is then temporarily isolated from its thermal environment, resulting in the particles undergoing a simple harmonic oscillation. After waiting a time $\tau$, at which point the oscillation is half completed, the potential is returned to $V^{\text{store}}$. The initial conditions---for which $x_0<0$ $(x_0>0)$---have then been mapped to $x_\tau >0$ $(x_\tau<0)$, achieving the desired swap computation. If $V^{\text{store}}$ is an even function of $x$, the computation requires zero invested work as well. This follows since the harmonic motion created a mirror image of the original distribution and the energy imparted to the system at $t=0$ is completely offset by the energy extracted from the system when $V^{\text{comp}}$ is turned off at $t=\tau$.
\section{Physical Instantiation} \label{sec:physswap} Due to its conceptual simplicity, the protocol does not require any particular physical substrate. That said, the practical feasibility of performing such a computation must be addressed. One obvious point of practical concern is the assumption that the system can be isolated from its thermal environment during the computation. However, total isolation is not necessary. If $\tau \ll \tau_R$---the relaxation timescale associated with the energy flux rate between the system and its thermal bath---then the device performs close to the ideal case of zero coupling. As proof of concept, Ref.
\cite{ray2021}'s simulations showed that this class of protocol is robust: thermodynamic performance persists in the presence of imperfect isolation from the thermal environment, albeit at an energetic cost. Thus, a system that obeys significantly-underdamped Langevin dynamics is an ideal candidate as the physical substrate for bit swap. We analyze in detail one physical instantiation---a \emph{gradiometric flux logic cell} (Fig. \ref{fig:circuit}), a mature technology for information processing. With suitable scale definitions, the effective degrees of freedom---Josephson phase sum $\varphi$ and difference $\varphi_{\text{dc}}$---follow a dimensionless Langevin equation \cite{barone1982, han1992theory, han1992experiment, rouse1995, saira2020, Wims19a}: \begin{align} dv' = -\lambda v' dt' - \theta \partial_{x'} U' \, dt' + \eta r(t) \sqrt{2dt'} ~, \end{align} where $x' \equiv (\varphi,\varphi_{\text{dc}})$ and $v' \equiv (\dot{\varphi},\dot{\varphi_{\text{dc}}})$ are vector representations of the dynamical coordinates. Enacting a control protocol on this system involves changing the parameters of the potential over time: \begin{align} U'(t') & = U/U_0 \\ & = (\varphi-\varphi_x(t'))^2/2 + \gamma (\varphi_{\text{dc}}-\varphi_{\text{xdc}}(t'))^2/2 \nonumber \\ & \qquad + \beta \cos \varphi \cos (\varphi_{\text{dc}}/2) - \delta\beta \sin \varphi \sin (\varphi_{\text{dc}}/2) \nonumber ~. \end{align}
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_01.pdf} \caption{Gradiometric flux logic cell: The superconducting current has two important flow modes. One circulation around the inner loop---a DC SQUID. And, the other, a flow through the Josephson junctions in the inner loop and around the outer conductor pickup loops---an AC SQUID \cite{han1992theory}. This is the origin of the variable subscripts to distinguish $\varphi$ from $\varphi_{\text{dc}}$ and $\varphi_x$ from $\varphi_{\text{xdc}}$. } \label{fig:circuit} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_02.pdf} \caption{(Left) $V^{\text{store}}$, the bistable storage potential. (Right) $V^{\text{comp}}$, the ``banana-harmonic'' potential. These potential energy profiles serve as qualitative pictures to represent prototypical computational and storage potentials, and do not represent any particularly favorable parameter set. } \label{mainfig:potentials} \end{figure}
The relationships between the circuit parameters and the parameters in the effective potential $U'$ are as follows. $\varphi = (\varphi_1+\varphi_2)/2 -\pi$ and $\varphi_{\text{dc}} = (\varphi_2-\varphi_1)$, where $\varphi_1$ and $\varphi_2$ are the phases across the two Josephson elements; $\varphi_x = 2\pi \phi_x/\Phi_0 - \pi$ and $\varphi_{\text{xdc}} = 2\pi \phi_{xdc}/\Phi_0$, where $\Phi_0$ is the magnetic flux quantum and $(\phi_x, \phi_{xdc})$ are external magnetic fluxes applied to the circuit; $U_0 = \left(\Phi_0 / 2\pi\right)^2 / L$, $\gamma = L / 2\ell$, $\beta = I_+ 2\pi L / \Phi_0$, and $\delta\beta = I_- 2\pi L / \Phi_0$, where $L$ and $2\ell$ are geometric inductances; and $I_{\pm} \equiv I_{c1} \pm I_{c2}$ are the sum and difference of the critical currents of the two Josephson junctions. All parameters are real and it is assumed that $\gamma > \beta >1 \gg \delta\beta$.
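For concreteness, the dimensionless potential and a single integration step of these dynamics can be sketched in a few lines. The Python fragment below implements $U'$ exactly as written above, together with its analytic gradient and one Euler-Maruyama step of the Langevin equation. The parameter values, the scalar treatment of $\lambda$, $\eta$, and $\theta$, and the position update $dx' = v'\,dt'$ are illustrative placeholders chosen only to respect $\gamma > \beta > 1 \gg \delta\beta$; they are not the fabrication values used in our simulations.
\begin{verbatim}
import numpy as np

# Placeholder parameters obeying gamma > beta > 1 >> dbeta.
gamma, beta, dbeta = 12.0, 6.2, 0.02
lam, eta, theta = 1e-3, 1e-3, 1.0  # weak thermal coupling; theta as a
                                   # scalar (equal relative inertia)

def U(phi, phi_dc, phi_x, phi_xdc):
    # Dimensionless flux potential U' = U/U_0 of the logic cell.
    return (0.5 * (phi - phi_x) ** 2
            + 0.5 * gamma * (phi_dc - phi_xdc) ** 2
            + beta * np.cos(phi) * np.cos(phi_dc / 2)
            - dbeta * np.sin(phi) * np.sin(phi_dc / 2))

def grad_U(phi, phi_dc, phi_x, phi_xdc):
    # Analytic gradient (dU'/dphi, dU'/dphi_dc).
    d_phi = ((phi - phi_x)
             - beta * np.sin(phi) * np.cos(phi_dc / 2)
             - dbeta * np.cos(phi) * np.sin(phi_dc / 2))
    d_dc = (gamma * (phi_dc - phi_xdc)
            - 0.5 * beta * np.cos(phi) * np.sin(phi_dc / 2)
            - 0.5 * dbeta * np.sin(phi) * np.cos(phi_dc / 2))
    return np.array([d_phi, d_dc])

def step(x, v, phi_x, phi_xdc, dt, rng):
    # One Euler-Maruyama step of
    # dv' = -lam v' dt' - theta grad U' dt' + eta r sqrt(2 dt').
    r = rng.standard_normal(2)
    v = (v - lam * v * dt
         - theta * grad_U(x[0], x[1], phi_x, phi_xdc) * dt
         + eta * r * np.sqrt(2 * dt))
    return x + v * dt, v

# Minimal usage: relax near the (illustrative) harmonic point.
rng = np.random.default_rng(0)
x, v = np.array([0.5, -2 * np.pi]), np.zeros(2)
for _ in range(1000):
    x, v = step(x, v, phi_x=0.0, phi_xdc=-2 * np.pi, dt=0.005, rng=rng)
\end{verbatim}
A control protocol in this language is simply a schedule for the arguments $(\varphi_x, \varphi_{\text{xdc}})$: held at storage values, switched to computation values for a duration $\tau$, then switched back.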
Some particularly important parameters of $U'$ are $\varphi_x$ and $\varphi_{\text{xdc}}$, which control the potential's shape and, with it, where the dynamical variables $\varphi$ and $\varphi_{\text{dc}}$ localize in equilibrium, and $\gamma$, which controls how quickly $\varphi_{\text{dc}}$ localizes to the bottom of the quadratic well centered near $\varphi_{\text{dc}}=\varphi_{\text{xdc}}$. At certain control parameters ($\varphi_x, \varphi_{\text{xdc}}$), the effective potential contains only two minima: one located at $\varphi<0$ and one at $\varphi>0$. So, the device is capable of metastably storing a bit, as described above. In point of fact, the logic cell has often been used as a double well in $\varphi$ with a controllable tilt and barrier height \cite{han1992theory, rouse1995, saira2020}. The Langevin equation's coupling constants, $\lambda$ and $\eta$, determine the rate of energy flow between the system and its thermal environment. They depend on the parameters $L$, $R$, and $C$. In the regimes at which one typically finds $L$, $C$, and $R$, and with temperatures around $1$ K, the system is very underdamped; ring-down times are $\mathcal{O}(10^3)$ oscillations about the local minima. (Notably, the device thermalizes at a rate proportional to $R^{-1}$. A tunable $R$ allows the device to transition from the underdamped to overdamped regime, allowing for rapid thermalization, if desired.) Finally, $\theta$ is a dimensionless factor set by the relative inertia of the two degrees of freedom, which is determined by the circuit architecture. Appendix \ref{sm:DimensionlessEoM} gives the equations of motion and thorough definitions of all parameters and variables in terms of dimensional quantities.
\subsection{Realistic Protocol} With the device's physical substrate set, we now show how to design energy-efficient bit-swap control protocols. There are four parameters that depend primarily on device fabrication: $I_{c1}$, $I_{c2}$, $R$, and $C$. Two that depend on the circuit design: $L$ and $\ell$. And, four that allow external control: $\varphi_x$, $\varphi_{\text{xdc}}$, $T$ (the environmental temperature), and $\tau$ (the computation time). Without additional circuit complexities to allow tunable $L$, $R$, and $C$, we assume that once a device is made, any given protocol can only manipulate $\varphi_x$, $\varphi_{\text{xdc}}$, $T$, and $\tau$. A central assumption is that computation happens on a timescale over which the thermal environment has minimal effect on the dynamics, so the primary controls are $\varphi_x$, $\varphi_{\text{xdc}}$, and $\tau$. $\varphi_x$ is associated with asymmetry in the informational subspace, and will only take a nonzero value to help offset asymmetry from the $\delta\beta$ term in $U'$. Thus, $\varphi_{\text{xdc}}$ primarily controls the difference between $V^{\text{comp}}$ and $V^{\text{store}}$, while $\tau$ governs how long we subject the system to $V^{\text{comp}}$. $V^{\text{store}}$ must be chosen to operate the device in a parameter regime admitting two minima on either side of $\varphi=0$ as in Fig. \ref{mainfig:potentials}. These minima must also be sufficiently separated so that they are distinct memory states when immersed in an environment of temperature $T$. In the ideal case, $V^{\text{comp}}$ is a quadratic well, and the swap duration is half an oscillation period, $\tau = \pi \sqrt{m/k}$. However, $U$ will never give an exact quadratic well unless $\beta = \delta\beta = 0$. So, a suitable replacement is necessary.
The closest approximation occurs at the relatively obvious choice $\varphi_{\text{xdc}} = -2\pi$. In this case, the minima of both the quadratic and the periodic part of the potential lie on top of each other and the potential is well approximated by a quadratic function over most of the relevant position domain. However, due to restrictions on $V^{\text{store}}$, transitioning between $V^{\text{store}}$ and $V^{\text{comp}}$ may induce unnecessarily large dissipation since the oscillations in the $\varphi_{\text{dc}}$ dimension have a large amplitude. (See Appendix \ref{sm:PotentialSimplifications} for details.) Instead, to dissipate the minimum energy, the control parameters must balance placing the system as close as possible to the pitchfork bifurcation where the two wells merge, while still maintaining dynamics that induce the $\varphi<0$ and $\varphi>0$ informational states to swap places via an approximately harmonic oscillation. Near this parameter value, one typically finds a ``banana-harmonic'' potential energy landscape. (See Fig. \ref{mainfig:potentials} for a comparison of the distinct potential profiles for storage and computation.)
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_03.pdf} \caption{A dynamic computation: $1,500$ trajectories from $V^{\text{store}}$'s equilibrium distribution in the $\varphi$ (top) and $\varphi_{\text{dc}}$ (bottom) dimensions. $V^{\text{comp}}$ is applied at $t\in(1,1+\tau)$, denoted by heavy black lines. $\varphi_{\text{dc}}$ oscillations are several times faster than the others, as expected when $\gamma \gg 1$. The work done on the system by the control apparatus at $t=1$, $W_0 = V^{\text{comp}}(t=1)-V^{\text{store}}(t=1)$, is largely offset by the work absorbed into the apparatus at $t=1+\tau$, $W_\tau = V^{\text{store}}(t=1+\tau)-V^{\text{comp}}(t=1+\tau)$, when $V^{\text{store}}$ re-engages. Visually, we can track this energy flux by the nonequilibrium oscillations induced at $t=1$ and the return to a near-equilibrium distribution at $t=1+\tau$. Time is measured in units of $\sqrt{LC}$, which is $\approx2$ns for the JJ device. } \label{fig:dynamics} \end{figure}
\subsection{Computation Time} The final design task determines the computation timescale $\tau$. Under a perfect harmonic potential, the most energetically efficient $\tau$ is simply $\pi \sqrt{m/k}$. This ensures that $x(t=0) = -x(t=\tau)$. However, since the design has an additional degree of freedom beyond what is strictly necessary---the $\varphi_{\text{dc}}$ dimension---we must not only ensure that our information-bearing degree of freedom switches sign, but also that $\varphi_{\text{dc}}(t=0) \approx \varphi_{\text{dc}}(t=\tau)$. This means that during time $\tau$, the $\varphi$ coordinate must undergo a half-integer number of oscillations and the $\varphi_{\text{dc}}$ coordinate must undergo an integer number of complete oscillations. (See Fig. \ref{fig:dynamics}.) Hence, $\tau$ must satisfy matching conditions for the periods of the oscillations in both $\varphi$ and $\varphi_{\text{dc}}$ during the computation: \begin{align*} \omega \tau &\approx (2n-1)\pi \\ \omega_{dc} \tau &\approx 2n\pi ~. \end{align*}
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_04.pdf} \caption{Performing a successful and low-cost bit swap: (Top) Ensemble averages, conditioned on initial memory state, of the fluxes and their conjugate momenta. Line width tracks the distribution's variance.
The shaded region indicates timescales that are potentially successful swap operations. These are probed more closely in the bottom two plots. (Middle) Ensemble averaged work, kinetic energy, and conjugate momentum in the $\varphi_{\text{dc}}$ coordinate. Note that work minima occur only at whole-integer oscillations of the momentum. Each dataset is scaled to its maximum value, so that it saturates at $1$. This emphasizes the qualitative relationships rather than the quantitative. (Bottom) Computational fidelity $f$ of the swap, approaching a perfect swap. } \label{fig:tau_sweep} \end{figure}
Figure \ref{fig:tau_sweep} showcases this by displaying the behavior observed during simulations near the ideal timescale. The local work minima coincide with local minima in the average kinetic energy, but not every kinetic energy minimum coincides with a work minimum. While there are kinetic energy minima every half-integer oscillation in $\varphi_{\text{dc}}$, only integer numbers of $\varphi_{\text{dc}}$ oscillations yield minimum work. The equations of motion governing the system are stochastic, dissipative, and nonlinear, so the frequencies of the different oscillations $\omega, \omega_{dc}$ are nontrivial nonlinear stochastic mappings of device parameters, initial positions, and protocol parameters. They are not easily determined analytically. However, they change smoothly with small changes in the parameters they depend on. Thus, we were able to use an algorithmic approach to find the timescales that yield local minima and explore the regions surrounding them. \newcommand\xC{\SI{4.0}{\nano\farad}} \newcommand\xIp{\SI{2.0}{\micro\ampere}} \newcommand\xR{\SI{371}{\ohm} } \newcommand\xIm{\SI{7}{\nano\ampere}} \newcommand\xImb{\SI{35}{\nano\ampere}} \newcommand\xImbt{\SI{60}{\nano\ampere}} \newcommand\xL{\SI{1.0}{\nano\henry}} \newcommand\xT{$0.05 U_0$ } \newcommand\ntest{50,000} \newcommand\ntrial{40,000} \newcommand\dt{$0.005 \sqrt{LC}$}
\subsection{Physically-Calibrated Bit Swap} We are most interested in the effect of parameters that are least constrained by fabrication. And so, all simulations assume constant fabrication parameters with $I_+$, $R$, and $C$ set to $\xIp$, $\xR$, and $\xC$, respectively. To explore how the $I_-$ asymmetry affects work cost, we simulated protocols with both a nearly-symmetric device ($I_- = \xIm$) and a moderately-asymmetric device ($I_- = \xImb$). Given devices with the parameters above, what values of the other parameters yield protocols with minimum work cost? This involves a twofold procedure. First, create a circuit architecture by setting $L$ and $\gamma$, thus fully specifying the device; details in Appendix \ref{sm:SearchMinWork}. Second, determine the ideal protocols for that combination of device parameters.
\subsection{Computational Fidelity} To determine the best successful protocol, we must define what a successful bit swap is. First, we set a lower bound for the \emph{fidelity} $f$: $f \geq 0.99$. We define $f$ over an ensemble of $N$ independent trials as: $f = 1 - N_e / N$, with $N_e$ counting the number of failed trials, i.e., those for which $\text{sign} [\varphi(t=0)] = \text{sign} [\varphi(t=\tau)]$. Second, the distribution over both $\varphi(t=\tau)$ and $\varphi(t=0)$ must be bimodal with clear and separate informational states.
The criterion used for this second condition is: \begin{align} \langle \varphi \rangle_{\varphi<0} + 3 \sigma_{\varphi<0} < \langle \varphi \rangle_{\varphi>0} - 3 \sigma_{\varphi>0} ~, \end{align} where $\sigma_s$ and $\langle \varphi \rangle_s$ are the standard deviation and mean of $\varphi$ conditioned on statement $s$ being true. The final choice concerns the initial distribution from which to sample trial runs. For this, we used the equilibrium distribution associated with $V^{\text{store}}$ with the environmental temperature set to satisfy $k_\text{B} T = 0.05 U_0$. Here, we ensure fair comparisons between different parameter settings by fixing a relationship between the potential's energy scale and that of thermal fluctuations. This resulted in temperatures from $400-1400$ mK, though it is possible to create superconducting circuits at much higher temperatures \cite{Yurg00a,Long12a,cybart2015,Revi21a} using alternative materials. Sampling initial conditions from a thermal state assumes no special intervention created the system's initial distribution. We need only wait a suitably long time to reach it. Moreover, this choice is no more than an algorithmic way to select a starting distribution. It is not a limitation or restriction of the protocol. Indeed, if some intervention allowed sampling initial conditions from a lower-variance distribution, it could be leveraged into even higher performance.
\section{Performance} \label{sec:performance} Appendix \ref{sm:SearchMinWork} lays out the computational strategy used to find minimal $\langle W \rangle$ implementations among the protocols that satisfy the conditions above. Since the potential is held constant between $t=0$ and $t=\tau$, work is only done when turning $V^{\text{comp}}$ on at $t=0$ and turning it off at $t=\tau$. The ensemble average work done at $t=0$ is $W_0 \equiv \langle V^{\text{comp}}(\varphi(0),\varphi_{\text{dc}}(0)) - V^{\text{store}}(\varphi(0),\varphi_{\text{dc}}(0)) \rangle $ and returning to $V^{\text{store}}$ at time $\tau$ costs $W_\tau \equiv \big\langle V^{\text{store}}(\varphi(\tau),\varphi_{\text{dc}}(\tau)) - V^{\text{comp}}(\varphi(\tau),\varphi_{\text{dc}}(\tau))\big\rangle$. Thus, the mean net work cost is the sum $\langle W \rangle=W_0 + W_\tau$. As we detail shortly, this yielded large regions of parameter space that implement bit swaps at sub-Landauer work cost. This result and others demonstrate the notable and desirable aspects of momentum computing: accuracy, low thermodynamic cost, and high speed. Let us recount these one by one.
\subsection{Accuracy} Tradeoffs between a computation's fidelity and its thermodynamic cost are now familiar---an increase in accuracy comes at the cost of increased $W$ or computation time \cite{Boyd18a, lahiri2016, zulkowski2014, berut2012, riechers2020, gammaitoni2011}. These analyses conclude that accuracy generally raises computation costs. Momentum computing does not work this way. In fact, it works in the opposite way. The low cost of a momentum computing protocol comes from controlling the distribution over the computing system's final state. Due to this, fidelity and low operation cost are not in opposition, but go hand in hand, as Figs. \ref{fig:tau_sweep} and \ref{fig:fid_work} demonstrate.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_05.pdf} \caption{Performance of the minimum work protocol as $\gamma$, the ratio of device inductances, goes from a region where the computation fails ($f < 0.99$) to a region of perfect fidelity ($f = 1.0$).
Note that in the parameter space region in which the computation becomes successful, the work costs decrease as the fidelity approaches unity. Finally, $\tau$ decreases as the work cost minimizes to $\approx 1$ Landauer---showing that the work cost does not display $1/\tau$ adiabatic compute-time scaling. The parameter $\gamma$ controls the starting parameters for the suite of simulations represented by each data point and should not be read as the primary independent variable responsible for the behavior. Rather, the plots show $\tau$, $f$, and $\langle W \rangle_{min}$ evolving jointly to more preferable values. } \label{fig:fid_work} \end{figure} \subsection{Low Thermodynamic Cost} Conventional computing, based on transistor-network steady-state currents, operates nowhere near the theoretical limit of efficiency for logical gates. Even gates in Application Specific Integrated Circuits (ASICs) designed for maximal efficiency operate on the scale of $10^4-10^6$ Landauers \cite{chen2014, hamerly2019}. The physically-calibrated simulations described above achieved average costs well below a Landauer for a wide range of parameter values with an absolute minimum of $\langle W \rangle_\text{min} = 0.43$ Landauers, as shown in Figure \ref{fig:heatmaps} (left). For the less-ideal asymmetric critical-current device (right panel), the cost increases to only $\langle W \rangle_\text{min} = 0.60$ Landauers. And, the bulk of the protocols we explored operated at $< 10$ Landauers. Altogether, the momentum computing devices operated many orders of magnitude lower than the status quo. Moreover, the wide basins reveal robustness in the device's performance: an important feature for practical optimization and implementation. \subsection{High Speed} Paralleling accuracy, the now-conventional belief is that computational work generally scales inversely with the computation time: $W \sim 1/\tau$ \cite{Boyd18a, zulkowski2015, aurell2012, reeb2014}. Again, this is not the case for momentum computing, as Figs. \ref{fig:tau_sweep} and \ref{fig:fid_work} demonstrate. Instead, there are optimal times $\tau^*$ that give local work minima and around which the work cost increases. Optimal $\tau^*$s are upper bounded: the devices must operate \emph{faster} than particular timescales---timescales determined by the substrate physics. The bit swap's low work cost requires operating on a timescale faster than the rates at which the system exchanges energy and information with the environment. Thus, momentum computing protocols have a \emph{speed floor} rather than a speed limit. However, even assuming perfect thermal isolation there is a second bound on $\tau^*$. The computation must terminate before the initially localized ensemble---storing the memory---decoheres in position space due to dispersion. For our JJ device this is the more restrictive timescale. Due to local curvature differences in the potential, the initially compact state-space regions corresponding to peaks of the storage potential's equilibrium distribution begin to decohere after only one or two oscillations. Once they have spread to cover both memory states, the stored information is lost. This means it is most effective to limit the duration of the swap to just a half-oscillation of the $\varphi$ coordinate. For our devices, this typically corresponds to operating on timescales $< 15$ ns. 
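The success and cost measures used throughout this section reduce to a few lines of analysis code. The sketch below (Python/NumPy; function and array names are illustrative) computes the fidelity $f = 1 - N_e/N$, the bimodality criterion, and the mean net work $\langle W \rangle = W_0 + W_\tau$ from the endpoints of an ensemble of simulated trajectories.
\begin{verbatim}
import numpy as np

def fidelity(phi_0, phi_tau):
    # f = 1 - N_e / N, counting trials whose phi kept the same sign.
    failed = np.sign(phi_0) == np.sign(phi_tau)
    return 1.0 - np.mean(failed)

def bimodal(phi):
    # Check <phi>_{phi<0} + 3 sigma_{phi<0}
    #     < <phi>_{phi>0} - 3 sigma_{phi>0}.
    left, right = phi[phi < 0], phi[phi >= 0]
    return left.mean() + 3 * left.std() < right.mean() - 3 * right.std()

def mean_work(V_comp, V_store, x0, x_tau):
    # <W> = W_0 + W_tau for the two instantaneous potential switches.
    # V_comp, V_store: vectorized callables of (phi, phi_dc);
    # x0, x_tau: arrays of shape (N, 2) at t = 0 and t = tau.
    W0 = np.mean(V_comp(*x0.T) - V_store(*x0.T))       # switch on
    Wtau = np.mean(V_store(*x_tau.T) - V_comp(*x_tau.T))  # switch off
    return W0 + Wtau
\end{verbatim}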
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure_06.pdf} \caption{Thermodynamic energy cost $\langle W \rangle_\text{min}$ for momentum-computing bit-swap over $5,120$ parameter combinations of $L$ and $\gamma$. (Left) Slightly asymmetric device with $I_- = \xIm$ gives the overall minimum $\langle W \rangle_\text{min} = 0.43$ Landauers (large solid white circle). (Right) Substantially asymmetric device with $I_- = \xImb$ gives the overall minimum $\langle W \rangle_\text{min} = 0.60$ Landauers (large solid white circle). (Both) Small white circles indicate parameter values with protocols yielding $\langle W \rangle_\text{min} < 1$ Landauer. Black squares (lower right in each) represent parameter values where no successful swap was accomplished. Note that when the asymmetry is low, it can effectively be offset by the parameter $\varphi_x$, but for higher asymmetry, protocols that cost less than $1$ Landauer are less common. } \label{fig:heatmaps} \end{figure}
\section{Related Work} \label{sec:litreview} Reversible computing implementations of various operations have been proposed many times over many decades. Perhaps the most famous is Fredkin's billiard-ball implementation \cite{Fred82a}. While ingenious, it suffers from inherent dynamical instability (deterministic chaos) and cannot abide any interactions with the environment. At the other end of the spectrum is a family of superconducting adiabatic implementations \cite{likharev1982, likharev1996, Take13a,Take13b,Take14a, soloviev2017, soloviev2018, schegolev2016}. These are low cost in terms of dissipation and are stable, but they suffer from fundamental speed limits due to the adiabaticity requirement: $\langle W \rangle \propto 1/\tau$. Other recent implementations \cite{osborn2019, frank2017, frank2019} of reversible logic using JJs are more akin to the proposal at hand, in that they require nearly-ballistic dynamics and attempt to recapture the energy used in a swap at the final step. While these implementations are markedly different, their motivation follows similar principles. In particular, the framework for asynchronous reversible computing proposed in \cite{frank2017, frank2019} might serve as a testbed for momentum computing elements. Another distinguishing feature of the present design is that the phenomenon supporting the computing is inherently linked to microscopic degrees of freedom evolving in the device's phase space. This moves one closer to the ultimate goal of using reversible nanoscale phenomena as the primitives for reversible computing---a goal whose importance and difficulty were recognized by Ref. \cite{morita2009}. Working directly with the underlying phase space also allows incorporating the thermal environment. And, this facilitates characterizing the effect of (inevitable) imperfect isolation from the environment. It is worth noting the similarity between the optimal timescales $\tau^*$ and the principal result in Ref. \cite{pidaparthi2021}, in which a similar local minimum emerges when comparing thermodynamic dissipation to computation time. These minima also come from certain matching conditions between the rate of thermalization and the system's response time to its control device. Another qualitatively similar result \cite{pankratov2004} found that faster operation can lead to reduced errors in overdamped JJs under periodic driving. These similarities could point to a more general principle at play.
\section{Conclusion} \label{sec:conclusion} Our detailed, thermodynamically-calibrated simulation of microscopic trajectories demonstrated that momentum computing can reliably (i) implement a bit swap at sub-Landauer work costs at (ii) nanosecond timescales in (iii) a well-characterized superconducting circuit. These simulations served two main purposes. The first highlights momentum computing's advantages. The proposed framework uses the continuum of momentum states as the auxiliary system that allows a swap. In doing so, it eliminates the associated tradeoffs between energetic, temporal, and accuracy costs that are commonly emphasized in thermodynamic control analyses \cite{berut2012, Boyd18a, lahiri2016, zulkowski2014}. Momentum computing protocols are holistic in that low energy cost, high fidelity, and fast operation times all come from matching parallel constraints rather than competing ones. The second purpose points out key aspects of the proposed JJ circuit's physics. The simulations reveal several guiding principles---those that contribute most to decreasing work costs for the proposed protocols. The system is so underdamped that thermal agitation is not the primary cause of inefficiency. The two main contributors are (i) the appearance of dispersive behavior in the dynamics of an initially-coherent region of state space and (ii) asymmetries inherent to the device that arise from differing critical currents in the component superconducting JJ elements. Notably, if the elements are very close to each other in $I_c$, then symmetry can be effectively restored by setting the control parameter $\varphi_x$ to counteract the difference. However, the greater the asymmetry, the harder it is to find ultra-low-cost protocols; cf. Fig. \ref{fig:heatmaps} left and right panels. Note, too, that initial-state dispersion can be ameliorated by using a $V^{\text{comp}}$ that is as harmonic (quadratic) as possible. However, this typically requires lower inductance $L$, possibly complicating circuit fabrication. Additionally, the potential-well separation parameter $\beta$'s linear dependence on $L$ hinders the system's ability to create two distinct states during information storage. Though these tradeoffs are complicated, our simulations suggest that dispersion can be controlled, yielding swap protocols with even lower work costs. Since the protocol search space is quite high-dimensional and contains many local minima, we offer no proof that the protocols found give the global work minimum. Very likely, the thermodynamic costs and operation speed of our proposed JJ momentum computing device can be substantially improved using more sophisticated parameter optimization and alternative materials. Even with the work cost as it stands, though, sub-Landauer operation represents a radical change from transistor-based architectures. One benchmark is given by the recent stochastic thermodynamic analysis of a NOT gate composed of single-electron-state transistors \cite{gao2021}, which found work costs $10^4$ times larger. Note, too, that running at low temperatures incurs significant off-board cooling costs, as in superconducting quantum computing. Our current flux qubit implementation requires operating at liquid He temperatures \cite{saira2020, Wims19a}. However, there are also JJs that operate at $N_2$ temperatures, promising system cooling costs that would be $2$ to $3$ orders of magnitude lower \cite{Yurg00a,Long12a,cybart2015,Revi21a}.
Additionally, the physics necessary to build a momentum computing swap---underdamped behavior and controllable multiwell dynamics---is far from unique to superconducting circuits. As an example, nanoelectromechanical systems (NEMS) are another well-known technology that is scalable with modern microfabrication techniques. NEMS provide the nonlinearity needed for multiple-well potentials, are extremely energy efficient, and have high Q factors even while operating at room temperature \cite{Lifs08a,Math14a,Ryu21a}. Momentum computing implemented with NEMS rather than superconductors completely obviates the need for cooling infrastructure and so may be better suited for large-scale implementations. That said, the JJ implementation at low temperatures, augmented with appropriate calorimetry, will provide a key experimental platform for careful, controlled, and detailed study of the physical limits of the thermodynamic costs of information processing. Thus, these devices are necessary to fully understand the physics of thermodynamic efficiency. And so, beyond technology impacts, the proposed device and protocols provide a fascinating experimental opportunity to measure energy flows that fluctuate at GHz timescales and at energy scales below thermal fluctuations. Success in these efforts will open the way to theoretical investigations of the fundamental physics of information storage and manipulation, time symmetries, and fluctuation theorems \cite{riechers2020, boyd2021}. \section*{Acknowledgments} We thank Alec Boyd, Warren Fon, Scott Habermehl, Jukka Pekola, Paul Riechers, Michael Roukes, Olli-Pentti Saira, and Gregory Wimsatt for helpful discussions. The authors thank the Telluride Science Research Center for hospitality during visits and the participants of the Information Engines Workshops there. JPC acknowledges the kind hospitality of the Santa Fe Institute, Institute for Advanced Study at the University of Amsterdam, and California Institute of Technology. This material is based upon work supported by, or in part by, FQXi Grant number FQXi-RFP-IPW-1902 and U.S. Army Research Laboratory and the U.S. Army Research Office under grants W911NF-21-1-0048 and W911NF-18-1-0028.
\section{INTRODUCTION} The study of extragalactic radio jets is an important area in astrophysics. In radio loud sources, jets contribute a large fraction of the total radiated power, and sustain the formation of energetic kiloparsec scale lobes. While the observational properties of jets vary widely, jets are present in both high and low power sources and share some common features: on the parsec scale, they are relativistic in both types, and they are also intrinsically identical in beamed and misaligned sources. \citet{gio01} have shown that the Lorentz factors in the parsec scale jets of low power FRI radio galaxies, as well as of more powerful FR IIs, are both in the range $\Gamma = 3 - 10$. With these values, jets also appear intrinsically identical in beamed (BL Lac objects) and misaligned sources (FRI radio galaxies), if the former have their jet axes oriented at an average viewing angle of $\langle \theta \rangle = 18^\circ \pm 5^\circ$ \citep{gir04b}. In the present paper we focus on the jet structure of the BL Lac source Markarian 501. This object is highly active and well-studied at all frequencies. Its activity and variability at high energy \citep[as high as the TeV regime,][]{qui96} seem to require high Doppler factors and consequently a small angle to the line of sight. In the radio band, centimeter VLBI observations have revealed a clear limb-brightened structure, beginning in the very inner jet, suggestive of a dual velocity structure \citep[~hereinafter G04]{gir04a}. The complex limb-brightened structure makes component identification problematic, and multi-epoch attempts to measure the pattern speed conclude that it is not well defined \citepalias{gir04a} or, in any case, at most subluminal \citep{edw02}. These seem to be common features in TeV blazars \citep{pin04,gir06}, and theoretical models have been proposed to reconcile them with the very high energy emission \citep{ghi05,wan04}. However, the results obtained so far still leave some major questions unanswered. For example, it is not at all clear whether the jet velocity structure is intrinsic or produced by the interaction with the surrounding medium. We want to understand why the properties of the radio jet on parsec scales are different from those needed to explain the $\gamma-$ray emission in Mrk~501, as well as in other TeV blazars. Moreover, a change in regime must occur on much larger scales, since the large scale structure of the source is known to be symmetric rather than one-sided \citep[e.g.,][]{ulv83}. We want to investigate whether this transition is smooth and what the configuration of the magnetic field in the outer jet is, which requires sensitive images in total intensity and polarization. In order to search for an answer to such questions, we need to go beyond the capability of the instruments available for ordinary centimeter wavelength VLBI, which provides information only for the region between $\sim 1$ and $\sim 100$ milliarcseconds. Smaller and larger scale regions remain inaccessible because of inadequate resolution and sensitivity, respectively. Technical and organizational improvements are now offering astronomers VLBI arrays of unprecedented resolution and sensitivity, such as the High Sensitivity Array (HSA\footnote{{\tt http://www.nrao.edu/hsa/}}), and the Global mm-VLBI Array (GMVA\footnote{{\tt http://www.mpifr-bonn.mpg.de/div/vlbi/globalmm/}}).
Thanks to its proximity and brightness, Mrk 501 is an ideal laboratory for experiments using these advanced VLBI techniques: it is at $z = 0.034$ (1 mas = 0.67 pc, using $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$); the total flux density at 5\,GHz is $S_5 = 1.4$\,Jy; and the Schwarzschild radius of its central black hole is estimated to be $R_S \approx 10^{-4}$\,pc ($1.4 \times 10^{-4}$\,mas), if we adopt $M_\mathrm{BH} = 10^9\,M_\odot$ \citep{rie03}. Using these new facilities, we can therefore access regions never studied previously: the jet base with the GMVA, and the faint, resolved jet region at $>100$\,mas with the HSA. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f1.eps}} \caption{Visibility amplitude vs.\ $(u,v)-$radius for the GMVA observations.\label{f.uv.gmva}} \end{figure} In \S \ref{sec:observations} we describe the instruments used for our new observations, along with the data reduction methods required at high frequencies. Results are presented in \S 3 and discussed in \S \ref{sec:discussion}. We present our conclusions in \S \ref{sec:conclusions}. \section{OBSERVATIONS} \label{sec:observations} \subsection{High Sensitivity Array observations} We observed Mrk 501 with the HSA at 1.4\,GHz on 26 Nov 2004. The HSA is obtained by combining in the same array the 10 VLBA antennas and other sensitive elements, i.e., the Green Bank Telescope (GBT, 100 m), the phased VLA ($27 \times 25$ m), Arecibo (300 m), and Effelsberg (100 m). Even without Arecibo, whose declination limits do not allow it to observe Mrk~501, the collecting area is increased by a factor of 7 over the VLBA alone. The sensitivity was also improved thanks to a high recording rate (256\,Mbps) and a long integration time (8\,hrs). The Effelsberg telescope participated for the first 5 hrs; some failures affected SC, FD, MK, and the VLA during part of the observation. We reduced the data in the standard way in AIPS, using 3C345 as a fringe finder, OQ208 as a leakage calibrator, and 3C286 for the EVPA calibration. Final images were produced both in AIPS and Difmap, with different weighting schemes. The source structure is complex, with a strong peak ($\sim 0.7$\,Jy) and significant diffuse emission ($\sim 0.8$\,Jy). Although this prevents us from reaching the thermal noise, we still achieve in our best image a dynamic range as good as 30,000:1, with a noise level of $\sim 25\,\mu$Jy beam$^{-1}$ ($1\sigma$). During the HSA observations, the VLA (used as a phased array) was in the A configuration. We obtained the internal VLA data and calibrated and reduced them in the standard way to also obtain a VLA image of Mrk~501. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f2.eps}} \caption{VLA image of Mrk 501. The peak is 1.7 Jy beam$^{-1}$, and the contours are traced at $(1, 5, 10, \dots) \times 1.0$ mJy beam$^{-1}$. The restoring beam is circular with a FWHM of $0.7\arcsec$. Sticks represent polarization vectors, with a scale of 1\arcsec = 8.3 mJy beam$^{-1}$.\label{f.vla}} \end{figure} \subsection{Global mm-VLBI observations} \begin{figure*} \sidecaption \includegraphics[width=12cm]{9784f3.eps} \caption{High Sensitivity Array image of Mrk 501. The peak is 762 mJy beam$^{-1}$, and the contours are traced at $(-1, 1, 2, 4, \dots) \times 0.075$ mJy beam$^{-1}$ ($3\sigma$ noise level). As a result of natural weighting with a Gaussian taper at 18 M$\lambda$, the restoring beam is $15.1 \times 11.4$ (mas $\times$ mas) in PA $-11.3^\circ$.
We mark with letters the regions of sharp brightness decrease $(a)$ and of limb brightening $(b)$ in the extended jet. See Fig.~\ref{f.lb} for the brightness profile across the jet.\label{f.largest}} \end{figure*} Millimeter VLBI permits a much higher angular resolution than ground- or space-based VLBI at centimeter wavelengths. Moreover, it offers the possibility to study emission regions which appear self-absorbed at longer wavelengths, with important consequences for our understanding of the physical processes in AGNs in the vicinity of supermassive black holes. After years of continuous development and technical improvement, the GMVA is now able to provide good quality images in the 3\,mm band, with an angular resolution of a few tens of micro-arcseconds \citep{kri06a}. We observed Mrk 501 on 14 Oct 2005 with the Global mm-VLBI Array. This experiment tested the sensitivity limits of the array, since on the basis of the observed centimeter wavelength flux density and spectral index \citepalias{gir04a}, Mrk 501 was expected to be only a few hundred mJy at this frequency. The standard frequency was 86.198 GHz, with 16 IFs of 8 MHz bandwidth each and 2 bit sampling in left circular polarization (LCP). The participating telescopes were Effelsberg, Pico Veleta, the Plateau de Bure interferometer, Onsala, Mets\"ahovi, and 8 VLBA stations (i.e.\ all except Saint Croix and Hancock). The European telescopes observed for $\sim 9$ hours and the American ones joined in for the last $\sim 6$ hours (Mauna Kea only for the last $\sim 4$ hours); the telescopes at Mets\"ahovi and North Liberty failed. The calibrator 3C345 was readily detected with good signal-to-noise ratio. From the fringe fitting of 3C345 we determined rates and single-band delays, and applied them to the whole data set. We obtained an image of 3C345 and found it to be in agreement with published images of comparable or slightly lower resolution \citep{lob00,lis05}. At this stage, it was possible to fringe fit Mrk 501 itself, averaging over the IFs, using a solution interval as long as the scan, and setting a SNR threshold of 3.0. Mrk~501 was well detected not only on baselines between the large European apertures but also on baselines to the smaller VLBA antennas. Solutions that were obviously bad were edited out using SNEDT, and the data were subsequently frequency averaged. Final self-calibration and imaging were done in Obit \citep{cot08}. The final amplitude vs.\ $(u,v)-$distance plot is shown in Fig.~\ref{f.uv.gmva}. The coverage is good in the short baseline range and much sparser in the outer part of the $(u,v)-$plane. Due to the failure of the easternmost VLBA antenna, there is also a large gap between the short and long baseline domains. Considerable noise is visible on the long baselines; however, significant emission on the short baselines is clearly present. An image of the calibrator 3C345 and a spectral plot of the resulting phases vs.\ spectral channels for visibilities of Mrk 501 are shown in \citet{gag06}. \section{RESULTS} \label{sec:results} \subsection{The kpc scale structure} On kiloparsec scales, Mrk 501 is core-dominated, with a two-sided extended structure also visible, extending in PA $\sim 45^\circ$ for more than 30\arcsec\ on both sides of the core \citep{ulv83,kol92,cas99}. It is straightforward to identify this structure with the symmetric extended emission characteristic of a radio galaxy and to infer an orientation close to the line of sight, in agreement with what is expected for a BL Lac source.
However, the symmetric emission implies that at this distance from the core no relativistic jet remains. Thanks to the VLA data available as a byproduct of the HSA observations, we obtained a higher resolution VLA image of Mrk~501 (see Fig.~\ref{f.vla}). The phased array image is dynamic-range limited ($\sim 10000:1$), and it shows one-sided emission with a short jet-like structure in the same PA as the extended symmetric structure. From this one-sided emission we can derive constraints on the jet velocity ($\beta c$) and orientation ($\theta$) with respect to the line of sight. At $2\arcsec$ we have $\beta \cos \theta > 0.36$ and at $1\arcsec$, $\beta \cos \theta> 0.63$. This result implies that at 0.67 kpc (projected) from the core the jet is still at least mildly relativistic ($\beta > 0.63$). \subsection{The extended jet} \label{s.jet} We obtain a detailed look at the jet of Mrk~501 from the deep VLBI observations with the HSA. We show in Fig.~\ref{f.largest} a tapered image, where baselines longer than 18 M$\lambda$ have been significantly down-weighted to increase the signal-to-noise ratio of the low-surface-brightness emission. We achieve a $1\sigma$ r.m.s.\ of $\sim 25\, \mu$Jy beam$^{-1}$, and emission is revealed on the main jet side up to a distance of $\sim 700$ mas from the core, i.e., five times further than detected in any previous VLBI observation. No emission is detected on the counter-jet side at the $3\sigma$ noise level. In Table \ref{t.jcj}, we give at some selected distances (Col.\ 1) the jet brightness (Col.\ 2) and the corresponding lower limits to the jet/counter-jet brightness ratio (Col.\ 3). The minimum required velocities ($\beta_{\rm min}$) and largest allowed viewing angles ($\theta_{\rm max}$) are then reported in Cols.\ 4 and 5, respectively (illustrated in the sketch below). The jet opening angle remains constant ($\phi_\mathrm{j} \sim 40^\circ$) after the well-known bend at 30 mas. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f4.eps}} \caption{Jet brightness profile across the slice shown in Fig.~\ref{f.largest} (center is at RA 16$^h$ 53$^m$ 52$^s$.2435035, Dec.\ +39$^\circ$ 45\arcmin 36\arcsec.906318, PA $= -53.4^\circ$).\label{f.lb}} \end{figure} \begin{table} \caption{Jet/counter-jet brightness ratio} \label{t.jcj} \centering \begin{tabular}{rcccrl} \hline\hline $r$ & $B_J$ & & & $\theta_{\rm max}$ & \\ (mas) & (mJy beam$^{-1}$) & $R_{\rm min}$ & $\beta_{\rm min}$ & ($^\circ$) & Notes \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline 12 & 180 & 7200 & 0.94 & 19 & Inner jet \\ 21 & 105 & 4200 & 0.93 & 21 & Jet bend \\ 62 & 21.4 & 856 & 0.87 & 29 & \\ 128 & 3.5 & 140 & 0.76 & 41 & \\ 284 & 0.26 & 10.4 & 0.44 & 64 & \\ 464 & 0.20 & 8.0 & 0.39 & 67 & \\ 706 & 0.12 & 5.0 & 0.31 & 72 & confused with noise \\ \hline \end{tabular} \end{table} The jet brightness decreases with increasing distance from the core; the spatial distribution is quite uniform, i.e., no prominent knot is present in the extended jet. The most noteworthy features in Fig.~\ref{f.largest} are the relatively sharp brightness decrease at $\sim 100$\,mas (marked with $a$), which could correspond to a shock region, and the jet limb brightening across the slice at $\sim 60$\,mas from the core (marked by $b$). The jet brightness profile across this slice is shown in Fig.~\ref{f.lb} and, although less conspicuous, is similar to the structure visible in the inner jet in higher resolution images \citepalias{gir04a}. Besides these features, the jet presents a uniformly distributed flux density.
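As referenced above, the $\beta_{\rm min}$ and $\theta_{\rm max}$ limits in Table~\ref{t.jcj} follow from the standard Doppler-boosting relation for a continuous two-sided jet, $R = [(1+\beta\cos\theta)/(1-\beta\cos\theta)]^{2+\alpha}$. The minimal sketch below (in Python) assumes the core spectral index $\alpha = 0.5$ quoted in Sect.~3.3; with this choice it reproduces the tabulated values.

\begin{verbatim}
import math

alpha = 0.5        # core spectral index (S ~ nu^-alpha), from Sect. 3.3;
p = 2.0 + alpha    # exponent for a continuous jet

def limits_from_ratio(R):
    """beta_min (at theta = 0) and theta_max (at beta = 1) from a lower
    limit R on the jet/counter-jet brightness ratio of a continuous jet."""
    x = (R**(1.0 / p) - 1.0) / (R**(1.0 / p) + 1.0)   # x = beta * cos(theta)
    return x, math.degrees(math.acos(x))

for R in (7200, 4200, 856, 140, 10.4, 8.0, 5.0):
    beta_min, theta_max = limits_from_ratio(R)
    print(f"R > {R:6.1f}: beta_min = {beta_min:.2f}, "
          f"theta_max = {theta_max:3.0f} deg")
\end{verbatim}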
If we consider only the region above the $3\sigma$ noise level, 73.2\% of the pixels in the image have a brightness between 75 and 300 $\mu$Jy beam$^{-1}$. Significant peaks are not present, and local maxima can be related to small increases in the jet emissivity but also to artifacts introduced by the image reconstruction process. This seems to be a characteristic of the jet of Mrk 501 on all scales; even in the inner jet, images with high resolution \citepalias{gir04a} tend to show a uniform brightness distribution rather than the compact knots observed in other AGN jets. In \citetalias{gir04a}, we modeled the jet intensity of Mrk~501 as a function of jet velocity and radius, using the formulas for an adiabatically expanding jet derived by \citet{bau97} in the case of relativistic motion. The most sensitive observations available in \citetalias{gir04a} allowed us to study the jet only out to a distance of $\sim 100$ mas, leaving a degeneracy between magnetic field orientation and jet velocity. Thanks to the HSA data, we can now extend this argument to a distance of almost 500\,mas. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f5.eps}} \caption{Jet peak brightness and FWHM vs.\ distance from the core obtained from Gaussian fits. A boxcar filter (50 mas) has been applied to smooth the data at $r>50$ mas.\label{f.gaussians}} \end{figure} We derived brightness profiles across the jet using the AIPS task \texttt{SLICE} on the tapered HSA image for the extended jet, obtaining one slice every 5 mas in PA$=-56^\circ$. Using the AIPS task {\tt SLFIT}, we fitted single Gaussian components to each profile. We show the resulting data as functions of the distance from the core in Fig.~\ref{f.gaussians}. The fit could be done unambiguously in most cases, although some slices showed deviations from a pure single Gaussian profile. However, the difference between the area subtended by the profile and that of the fit is generally smaller than 5\%, and only a couple of fits (at $\sim 200$ mas from the core, marked by the larger error bars) had to be rejected. At distances larger than $\sim 450$\,mas from the core, the best fit FWHM starts decreasing; we ascribe this behavior to the insufficient brightness at the jet edges, and do not consider any slice at $r>480$ mas in our analysis. Moreover, we note that for the same reason -- and because of the limb brightening of the jet -- even at smaller distances the actual jet FWHM is in some cases larger than the best fit one. The implications of this effect are discussed below. In the inner jet ($r<30$ mas), we keep the fit from \citetalias{gir04a}, whose better resolution is important in this region where the jet is not transversely resolved in the present image. The best-fit peak brightness and jet FWHM have then been smoothed with a boxcar filter (50 mas wide, for points at $r>50$ mas) to suppress local noise. \subsection{The core and inner jet structure} \begin{figure} \center \resizebox{\hsize}{!}{\includegraphics{9784f6.eps}} \caption{Mrk 501 at 86 GHz; the restoring beam is $110\, \mu$as $\times 40\, \mu$as in PA $-8^\circ$. The peak is 45 mJy beam$^{-1}$, and the contours are traced at $(-1, 1, 2, 4, 8) \times 4.0$ mJy beam$^{-1}$. The $1\sigma$ noise level is $\sim 1.5$ mJy beam$^{-1}$. The grey scale flux range is $-3.0$ to 40 mJy beam$^{-1}$.\label{f.gmva}} \end{figure} In Fig.~\ref{f.gmva}, we show our Global mm-VLBI Array image of Mrk 501 at a resolution of 110 $\mu$as $\times \, 40 \, \mu$as (beam FWHM, PA $-8^\circ$).
Mrk~501 is clearly detected at 3 mm and is dominated by a compact, prominent component with $\sim 45$ mJy beam$^{-1}$ peak brightness. The visibility data suggest that there is a fair amount of extended emission, although the $(u,v)$ coverage is not ideal and imaging this emission is extremely difficult. In our clean image and with model fitting, we recover $\sim 110$\,mJy in the core region, including a jet-like feature in PA $144^\circ$ and some more diffuse emission in PA $\sim -135^\circ$. A tentative jet knot ($\sim 7\sigma$) is also visible 0.73\,mas south of the core (PA $172^\circ$). The features in the image plane can be described by model-fitting with four Gaussian components. These components are shown overlaid on the lowest ($3\sigma$) total intensity contour in Fig.~\ref{f.mf}; each component is represented by a cross with major and minor axes equal to its FWHM, with the major axis aligned along the component's position angle. Quantitative results from model fitting are reported in Table~\ref{t.model}, where $r$ and $\theta$ are the polar coordinates of the component (re-referenced to the core position), $b_\mathrm{maj}$, $b_\mathrm{min}$, and $b_\phi$ are the deconvolved major and minor axes of the component and its position angle, and $P$ and $I$ are the peak brightness and the total flux density. Visibility model-fitting in Difmap provides a reduced $\chi^2 =1.14$ with this model. The only quantity that has a significant uncertainty in the best fit (around 10\%) is the total flux density $I$, while nominal errors on the component positions are typically much less than 10 $\mu$as. Since such values are unrealistically small, we have estimated independent uncertainties on $r$ taking into account two basic parameters for each component: (1) the peak flux density $P$ and (2) its compactness. Simply put, the uncertainty introduced by noise in the visibility data (i.e.\ scatter of the $(u,v)-$points) will affect faint diffuse components much more than bright compact ones. Therefore, we estimate the uncertainty on the position of each component using the following formula: $$\Delta r = \frac{1}{2} \frac{\sqrt{b_\mathrm{maj} \times b_\mathrm{min}}}{P/3\sigma}$$ where $\sigma$ is the local image noise; the formula is thus tied to the SNR of the component in such a way that the position of a $3\sigma$ feature is not known to better than its mean angular radius. The uncertainties reported in Col.\ (1) are calculated in this way and have been added in quadrature to that on the core position (0.004 mas), which is taken as a reference. \begin{figure} \center \resizebox{\hsize}{!}{\includegraphics{9784f7.eps}} \caption{Results from the model-fit with Gaussian components in the image plane, overlaid on contours traced at $(-4.5, 4.5)$ mJy beam$^{-1}$.
The crosses mark the position, major and minor axes, and position angle of each component.\label{f.mf}} \end{figure} \begin{table*} \caption{Deconvolution of Gaussian component fit to the 86 GHz image.\label{t.model}} \centering \begin{tabular}{rrrrrrr} \hline\hline $r$ & $\theta$ & $b_\mathrm{maj}$ & $b_\mathrm{min}$ & $b_\phi$ & $P$ & $I$ \\ (mas) & ($^\circ$) & (mas) & (mas) & ($^\circ$) & (mJy & (mJy) \\ & & & & & beam$^{-1}$) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 0.00 & 0.0 & 0.032 & $<0.048$ & 169 & 40.2 & $48.7 \pm 1.5$ \\ $ 0.08 \pm 0.03$ & $-$135.6 & 0.181 & 0.078 & 27 & 9.7 & $51.0 \pm 4.3$ \\ $ 0.11 \pm 0.01$ & 144.4 & $<0.090$ & $<0.036$ & 175 & 11.4 & $ 9.0 \pm 1.1$ \\ $ 0.73 \pm 0.01$ & 172.0 & $<0.102$ & $<0.030$ & 170 & 12.9 & $ 9.5 \pm 1.1$ \\ \hline \end{tabular} \end{table*} The brightest component, which we identify with the core visible at centimeter wavelengths, is still unresolved at 86 GHz. We then use the deconvolved size of this component to set an upper limit on the dimension of the jet base, and a lower limit on its brightness temperature. At $z=0.034$, 1\,mas\,=\,0.67\,pc; therefore, the deconvolved angular size of the GMVA core corresponds to 0.021 $\times$ 0.032\,pc. The black hole mass for Mrk~501 is estimated to be around $M_\mathrm{BH} = 10^9\,M_\odot$ \citep{rie03}, which implies a Schwarzschild radius $R_S = 1.0 \times 10^{-4}$ pc. This means that the radio emission originates in a region smaller than 210 $\times$ 320 R$_S$. We derive the brightness temperature of this region from the following formula: $$T_B = \frac{B}{2k} \lambda^2$$ In our observations $\lambda=3.5$\,mm; moreover, to derive $B$ in MKS units, we calculate that 1 beam $= 7.52\times10^{-17} ab$ ster, where $a$ and $b$ are the major and minor semi-axes of the deconvolved component, in mas. Therefore: $$T_B = \frac{1.32 \times 10^{-13} (a b)^{-1} B_\mathrm{mJy}}{2 \times 1.38 \times 10^{-23}} \times 12.1 \times 10^{-6}\,\mathrm{K}$$ i.e., $$T_B = 5.8 \times 10^4 \times B_\mathrm{mJy} (a b)^{-1}\,\mathrm{K}$$ With the values from Table~\ref{t.model}, we find a brightness temperature for the core component $T_B \ge 6.8 \times 10^9$\,K. If we make the reasonable assumption that the size of the emitting region is actually smaller (e.g.\ 1/3 of the deconvolved size), we get $T_B \ga 6 \times 10^{10}$\,K. However, even under this assumption, the result requires a high but not extreme value of the Doppler factor at the base of the radio jet. In fact, this is not surprising, since the brightness temperature depends only on the physical length of the maximum baseline and on the observed brightness. Therefore, observations with a similar array at a frequency near the spectral peak can yield a higher $T_B$. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f8.eps}} \caption{Spectra of Mrk~501. Filled squares represent average single dish measurements \citep{ven01,all99,owe78,joy76,ste88}; empty squares show the VLBI core flux: data between 1.6 and 22 GHz are mean values taken from \citetalias{gir04a}; the datum at 86 GHz is from the present work. Solid lines connect the points simply to guide the eye; the dashed line is the difference between the two.
Error bars show the standard deviation of the averages, except for the 86 GHz VLBI datum (instrumental calibration uncertainty).\label{f.spettro}} \end{figure} We show in Fig.~\ref{f.spettro} a spectral plot for both the (average) total flux density on kiloparsec scales as measured by single-dish telescopes, and for the VLBI core, including our new data point at 86\,GHz. The spectrum of the VLBI core between 1.6 and 22\,GHz has been presented in \citetalias{gir04a}: the core has a turnover at about 8 GHz, and then the flux density falls as a power law of index $\alpha = 0.5$. Our new data point follows the optically thin part of the core spectrum. Note that for the 86\,GHz flux density we do not simply adopt the flux measured for the unresolved core in our image; we also include the jet-like structure and the diffuse component discussed above, because observations at lower frequency do not have the angular resolution to distinguish the different components. This implies that the turnover frequency at $\sim 8$\,GHz is related to the whole structure and not to the 86\,GHz core, whose self-absorption peak is probably located at higher frequency. We also plot in Fig.~\ref{f.spettro} (dashed line) the difference between the total single-dish flux density and that of the VLBI core. Apart from some fluctuations (due to variability and non-simultaneous data), the extended emission from the kiloparsec-scale jet and lobe region must substantially contribute to the flux density even at high frequency. Such extended emission has a rather flat spectrum, with index $\alpha = 0.3$ between 1.4 and 86\,GHz, and shows fluctuations; this odd behavior is likely a consequence of the variability of the core. The detection of a possible jet knot at $(r,\, \theta) = (0.72\, \mathrm{mas}, \, 172^\circ)$ is in agreement with images at lower frequency. Comparing the GMVA data (Table~\ref{t.model}) with model fits at 15 and 22\,GHz by \citet{edw02} and \citetalias{gir04a}, the tentative jet knot can be identified with the region labelled C4, which is found at the same distance and position angle. \citet{edw02} found this region to be apparently stationary between 1995 and 1999, and the positional coincidence lends support to this interpretation. At this resolution, the jet therefore shows yet another orientation with respect to that seen at lower resolutions (see \citetalias{gir04a} and Sect.\ \ref{s.jet}): the jet PA is $\sim 100^\circ$ at $2<r<20$ mas and $\sim45^\circ$ at $r>20$ mas. \subsection{Polarization} \begin{figure*} \resizebox{0.5\hsize}{!}{\includegraphics{9784f9a.eps}} \resizebox{0.5\hsize}{!}{\includegraphics{9784f9b.eps}} \caption{Polarization images of the inner jet in Mrk 501, overlaid on total intensity contours at 1.4 GHz, traced at (0.2, 0.5, 0.8, 1, 3, 5, 30, 50, 300) mJy beam$^{-1}$. Left: EVPA, where 1 mas = 1 mJy beam$^{-1}$; right: polarized intensity in grey tones between 0 and 5 mJy beam$^{-1}$.\label{f.polin}} \end{figure*} In polarized intensity, previous VLBI observations of Mrk~501 have revealed flux densities of a few milliJansky, i.e., a few percent of the total intensity \citep{pus05}. Our new HSA observations confirm the presence of a significant fraction of polarized flux and reveal interesting details (see Figs.~\ref{f.polin} and \ref{f.polout}). The total flux density in the polarization images is almost 100\,mJy (98.7\,mJy), with a peak of 18 mJy beam$^{-1}$. We plot electric vector polarization angle sticks, assuming a Faraday rotation measure of 0 rad m$^{-2}$.
In the inner 10 mas (Fig.~\ref{f.polin}), we have a large cone of polarized flux, with polarization vectors aligned with the jet direction. Further downstream, the polarized flux lies predominantly toward the southern edge of the jet, in a structure similar to the `spine--sheath' detected in polarization by \citet{pus05} and in total intensity by \citetalias{gir04a}. At the large bend at 20 mas, we then find a knot of polarized emission and a rotation of the EVPA. This suggests a strong interaction between the radio plasma and an external structure. This interaction could be the reason for the change in the jet direction, which is further amplified by geometrical effects. After the large bend (see Fig.~\ref{f.polout}), the polarization angle becomes well aligned with the jet again at $\sim 100$ mas from the core. At larger distances, the polarized signal becomes weak, with a modest preference for a distribution on the south-east side and an orientation orthogonal to the jet direction. Finally, the VLA-only data (Fig.~\ref{f.vla}) confirm an orientation of the electric vector parallel to the jet direction in the extended jet, turning slightly clockwise at $r \ga 1\arcsec$. \section{DISCUSSION} \label{sec:discussion} In \S\ref{sec:results}, we have presented our main new results about the core and jet of Mrk~501. We now discuss their relevance for our understanding of the physics of this source and of AGNs and jets in general. \subsection{The inner core: radio core spectrum and GMVA structure} The nuclear region of Mrk~501 consists of (1) an unresolved component: the radio `core', point-like at our resolution (deconvolved size smaller than $\sim 30 \times 20 \, \mu$as or $0.020 \times 0.014$ pc or $200 \times 140 R_S$), and (2) a faint resolved jet-like structure with a large opening angle, similar (taking into account the significant difference in flux density and linear resolution) to the inner structure of M87 \citep{ly07}. The unresolved component, with a total flux density of about 45 mJy, is characterized by a relatively low $T_B$, in contrast with the higher $T_B$ of M87. However, we note that because of the different distances of these two sources from us, our `core' includes the whole M87 structure visible in images at 86 GHz \citep{kri06b}. Therefore, the $T_B$ of Mrk~501 is the average $T_B$ of a resolved structure where the jet velocity could be only mildly relativistic, as suggested by the detection of a counter-jet in the more misaligned M87 \citep{ly07,kov07}. The diffuse low-brightness emission, with a total flux density of about 65 mJy, is interpreted as the continuation of the inner jet with a spine--shear layer structure. Higher sensitivity images are necessary to properly map this low-brightness feature. The lack of a dominant unresolved component is in agreement with the spectrum shown in Fig.~\ref{f.spettro}, where we can assume that the observational data refer to both regions (1) and (2) discussed above. Of course, uncertainties related to variability are always present, but because of the regularity of the spectrum we can use it to estimate the extension of the radio emitting region, by simply inverting the following formula \citep{mar87}: $$B = 3.2 \times 10^{-5} \, \theta^4 \, \nu^5_m \, S^{-2}_m \, \delta \, (1+z)^{-1} $$ We assume a local average magnetic field $B = 0.02$ G and a Doppler factor $\delta = 10$ \citep[as discussed by][]{tav01}.
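A minimal numerical sketch of this inversion follows (in Python; $\theta$ in mas, $\nu_m$ in GHz, $S_m$ in Jy, $B$ in Gauss, following the convention of \citet{mar87}). The turnover flux density $S_m \approx 0.5$ Jy is an assumed value read off Fig.~\ref{f.spettro}; the field and Doppler factor are those adopted above.

\begin{verbatim}
# Invert B = 3.2e-5 * theta^4 * nu_m^5 * S_m^-2 * delta * (1+z)^-1
# for the angular size theta of the emitting region.
B, delta, z = 0.02, 10.0, 0.034    # field and Doppler factor assumed above
nu_m = 8.0                         # GHz; self-absorption turnover
S_m = 0.5                          # Jy; core flux near the turnover -- an
                                   # assumed value, read off the spectrum
theta = (B * S_m**2 * (1.0 + z) / (3.2e-5 * nu_m**5 * delta)) ** 0.25
print(f"theta ~ {theta:.2f} mas")  # ~0.15 mas, i.e. of order 0.1 mas
\end{verbatim}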
The low ($\sim 8$ GHz) self-absorption frequency requires that the emitting region has a size of the order of 0.1 mas, in agreement with Fig.~\ref{f.gmva}, and that a point-like source, if present, is not dominant. Note that the angular size is proportional to the magnetic field to the 1/4 power; therefore, a relatively small (even a factor of 10) increase in the value of the magnetic field does not affect these conclusions. In our GMVA image, we also find a remarkable feature at $(r,\, \theta) = (0.72\, \mathrm{mas}, \, 172^\circ)$. In fact, lower resolution VLBI observations have so far shown that the jet of Mrk~501 does not have compact jet knots on the few-parsec scale \citepalias{gir04a}. If the new component is indeed a jet knot, it will be important to re-observe the source and test whether it has a proper motion that can be followed. Alternatively, as suggested by the positional coincidence with a feature in previous images, it could be a standing shock at the position of a change in jet direction whose cause is unknown. Since this is not far from the first bend in the jet, it may result from the disturbance causing the bend. Moreover, besides this compact feature, a significant amount of flux density in the GMVA data remains difficult to image and/or modelfit. This is mainly due to the sparse coverage of the $(u,v)-$plane arising from the telescope failures. \subsection{Jet structure and polarization} \label{s.structure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{9784f10.eps}} \caption{Polarization vectors overlaid on total intensity contours in the extended jet. Contours are traced at $(-1, 1, 2, 4, 8) \times 4.0$ mJy beam$^{-1}$, polarization vectors are plotted with a scale of 1.5 mas mJy$^{-1}$, where unpolarized flux $> 2$ mJy and polarized flux $> 0.3$ mJy. \label{f.polout}} \end{figure} Limb brightening in the jet of Mrk~501 seems to be present on scales as small as 0.1 mas, but also after the two main bends at $\sim 2$ and $\sim 20$ mas, where the jet has significantly expanded transversely. At a given viewing angle, different Doppler factors can arise from different velocities (illustrated numerically below); therefore, a common explanation for limb brightening in jets lies in the existence of a velocity structure transverse to the jet, with an inner spine and an outer shear. The issue of the presence of transverse velocity structures is widely debated at the moment, in light of both analytical models and numerical simulations \citep{har07,miz07,per07,gop07,ghi05}. \citet{chi00} also suggested the presence of transverse velocity structures to reconcile observational results with the usually adopted AGN unification model. Direct evidence of limb brightening has recently been confirmed, e.g.\ on sub-parsec scales in M87 at 43 GHz \citep{ly07} and on the few-parsec scale in 1144+35 \citep{gio07}. Mrk~501 is unique in that the limb-brightened structure is visible on scales spanning three orders of magnitude, and in sections of the jet that are differently oriented on the plane of the sky. In particular, the limb brightening in the mm-VLBI image indicates that a spine/shear structure could be present from the very inner part of the jet, close to the region where the higher frequency optical emission is produced. Our image provides support for the transverse dual velocity structure hypothesized by \citet{chi00} on the basis of the correlation between the radio and the optical core luminosity in radio galaxies and BL Lacs.
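As referenced above, a short sketch (in Python) of the relativistic Doppler factor $\delta = [\Gamma(1-\beta\cos\theta)]^{-1}$ shows how a slower outer layer can outshine a faster spine at the viewing angles relevant here; the Lorentz factors used ($\Gamma = 15$ for the spine, $\Gamma = 2$ for the sheath) are purely illustrative assumptions.

\begin{verbatim}
import math

def doppler(gamma, theta_deg):
    """Relativistic Doppler factor: delta = 1 / (Gamma (1 - beta cos theta))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

theta = 15.0                                  # deg; within the allowed range
for label, gamma in (("spine ", 15.0), ("sheath", 2.0)):  # assumed values
    print(f"{label}: Gamma = {gamma:4.1f} -> "
          f"delta = {doppler(gamma, theta):.2f}")
# spine : delta ~ 1.8;  sheath: delta ~ 3.1.  At this viewing angle the
# slower layer is the more strongly boosted, producing limb brightening.
\end{verbatim}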
We note that despite the large changes in jet PA, the jet direction appears to be constant in time; i.e., we do not have evidence of a change in the direction of jet launch, of precession, or of variable activity. Only the sharp brightness decrease at $\sim100$ mas from the core could be due to a discontinuity in the jet activity, but the evidence is marginal at most. \citet{con95} tried to interpret this structure in terms of a geometrical saturating helix model. However, the sensitive HSA image reveals that the emission comes from a conical surface out to more than 700 mas, much further out than the transition to a cylindrical surface required in the saturating helix model; moreover, in our image as well as in other sensitive low frequency images \citep{gir04a,pus05}, the presence of the counter-jet feature described by \citet{con95} and predicted by their model is excluded at high confidence levels. Clearly, helicity remains an interesting explanation for the large position angle changes in the jet ridge line \citep[see, e.g., the binary black hole system described by][]{vil99}. However, the necessary parameters cannot be constrained by the available images, and the physical reality must be more complex than the geometrical model. \citet{lai06} have also ruled out a helical magnetic field configuration in the nearby, low-luminosity radio galaxy 3C 296. The role played by interactions with the surrounding medium is surely non-negligible. The polarization structure (Fig.~\ref{f.polin}) is also suggestive of spine-sheath structures. The most prominent feature in our images is the polarized cone departing from the inner region in the E-W direction. We note that because of the lower resolution, this cone extends over the full region resolved in the images at higher frequency \citep{pus05}. In our images, polarization vectors are predominantly aligned with the jet axis, in contrast with the dominant polarization in the external shear perpendicular to the jet found by \citet{pus05}. Because of the different frequencies and resolutions, a comparison of the datasets is not straightforward; one can assume that the difference in the polarization vector orientation is mainly due to Faraday rotation, or that at 1.4 GHz the dominant polarized flux is from the inner jet spine, with vectors oriented along the jet direction. In our image, vectors are aligned with the jet direction also in the extended jet structure in the N-S direction (after the large bend at about 50 mas from the core). In this region, the central spine is again polarized with vectors aligned with the jet axis, and in the external shear there is marginal evidence of polarized flux oriented perpendicular to the jet. Only in the bending region do the vectors show a peculiar circular trend, but there projection effects can be dominant; moreover, a strong interaction with the ISM is present in this region, which is the most likely site of the change in jet PA. We note that the vectors in the VLA map also show a similar orientation, considering that the peak of the VLA polarization map is not at the core position but in the main parsec-scale region, at about 100 mas or more from the core. We conclude that at relatively low resolution and at 1.4 GHz the dominant magnetic field structure is perpendicular to the jet axis (E vectors aligned with the jet direction).
\subsection{Jet velocity and orientation} \begin{figure*} \resizebox{0.5\hsize}{!}{\includegraphics{9784f11a.eps}} \resizebox{0.5\hsize}{!}{\includegraphics{9784f11b.eps}} \caption{Estimated jet velocity in the case of parallel (left) and perpendicular (right) magnetic field. The initial velocity is $\beta=0.998$. Viewing angles of $\theta=5^\circ,10^\circ,15^\circ,20^\circ,25^\circ$ are shown, with smaller angles at the bottom. Note the different scale on the $y-$axis, due to the faster decrease of the jet speed in the case of parallel magnetic field.\label{f.adiabatic}} \end{figure*} Our results show that the jet in Mrk~501 is characterized by different properties on the various scales, from a few hundred to several million Schwarzschild radii. The jet orientation and velocity, and the ratio between spine and shear contributions, must change significantly over these scales. It is therefore impossible to describe the jet with constant parameters. Since a counter-jet is not detected, however, the jet has to be in a relativistic regime even in its faintest and most extended region seen in the A-configuration VLA observation. The arcsecond-scale structure is symmetric, so we argue that the transition to non-relativistic velocity has to occur somewhere between projected distances of 1 and 10 kpc. The jet opening angle remains constant ($\phi_\mathrm{j} \sim 40^\circ$) for several hundred parsecs, and any possible re-collimation can take place only in the region where the jet becomes confused with the noise. Under our estimated viewing angle for this part of the jet ($\theta \sim 15-20 ^\circ$), we derive an intrinsic opening angle of $\phi^\prime_\mathrm{j} \sim 10-15^\circ$ (see the de-projection sketch below). Jets with larger opening angles are found to have lower apparent speeds and Doppler factors in analytical models \citep{gop07}. This may explain why Mrk~501 and other TeV blazars do not tend to show strong superluminal motions \citep{pin04}. However, we note that the HSA jet of Mrk 501 is several beams wide and does not show evidence of any features such as the knots considered in the analytical modeling of \citet{gop07}. The results from the fits of an adiabatic model to the observed jet radius and peak brightness presented in Sect.~\ref{s.jet} can be used to constrain the bulk velocity, the orientation, and the magnetic field orientation in the various parts of the jet. As we have discussed in the section about the polarization properties (\ref{s.structure}), we have evidence that the magnetic field is predominantly orthogonal to the jet axis. First, we recall that \citet{gir04a} showed that low initial Lorentz factors ($\Gamma < 5$) are ruled out, regardless of the magnetic field orientation, since they disagree with both the observed limb-brightened structure and the jet/counter-jet ratio; moreover, they would require too strong a jet deceleration between the $\gamma-$ray region and the radio jet region. In Fig.~\ref{f.adiabatic} we use the new HSA data to show the estimated jet velocity in the case of parallel and perpendicular magnetic field, assuming an initial $\beta=0.998$ and an injection spectral index $\delta = 2$ (in accordance with \citetalias{gir04a}). In each plot, we draw five lines, corresponding to angles to the line of sight of 5$^{\circ}$, 10$^{\circ}$, ..., 25$^{\circ}$ (i.e., in the range of values allowed by the jet sidedness and core dominance).
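The de-projection referenced above can be sketched as follows (in Python). The relation $\tan(\phi^\prime_\mathrm{j}/2) = \tan(\phi_\mathrm{j}/2)\sin\theta$ is one common geometric correction, applied here under the assumption of a simple conical jet; it reproduces the quoted $10-15^\circ$ range.

\begin{verbatim}
import math

def intrinsic_opening(phi_obs_deg, theta_deg):
    """De-project an apparent full opening angle phi_obs seen at viewing
    angle theta, via tan(phi_int/2) = tan(phi_obs/2) * sin(theta)."""
    half = math.atan(math.tan(math.radians(phi_obs_deg / 2.0))
                     * math.sin(math.radians(theta_deg)))
    return 2.0 * math.degrees(half)

phi_obs = 40.0                       # deg; measured apparent opening angle
for theta in (15.0, 20.0):           # deg; estimated viewing angles
    print(f"theta = {theta:2.0f} deg -> phi_int ~ "
          f"{intrinsic_opening(phi_obs, theta):.0f} deg")
# ~11 and ~14 deg, consistent with the quoted 10-15 deg.
\end{verbatim}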
With these data, we now also rule out models starting with $\Gamma = 15$ and a magnetic field parallel to the jet axis, since our lower limits on the jet/counter-jet brightness ratio near the core ($R>4000$) and at large distance ($R>140$ at 120 mas) are inconsistent with the velocity decrease predicted by a parallel magnetic field adiabatic model at $\sim 100$ mas. This conclusion is also in agreement with the results derived from the polarization properties, i.e., a magnetic field in the jet spine orthogonal to the jet direction (EVPA parallel to the jet axis). The fit with perpendicular magnetic field and an initial Lorentz factor $\Gamma = 15$ is in general consistent with the other observational constraints. Only in the case of the smallest viewing angle (i.e.\ $\theta = 5^\circ$) does the jet velocity fall off rapidly after the main jet bend; in the extended part of the jet, narrow viewing angles are therefore not acceptable. However, it is possible that the jet is more closely aligned in its inner part and then becomes oriented at a larger $\theta$ after the turn. For all the other viewing angles, the fit velocities behave in rather similar ways. After an initial decrease ($\beta = 0.985 - 0.991$, $\Gamma = 5.8 - 7.5$), the jet velocity remains relativistic, with small oscillations. This is also in agreement with the fact that the jet is still one-sided even on scales of a few kiloparsecs. Finally, we note that the jet FWHM is probably underestimated at large $r$. For this reason, we have also tried a fit with an input FWHM twice the measured one in the outer jet. The results are qualitatively similar to the previous ones: a parallel magnetic field is not acceptable, and the perpendicular field implies an initial deceleration and a more or less constant velocity further out. Although the jet velocity is slightly smaller ($\Gamma = 4 - 6$), it still remains in the relativistic regime. \section{CONCLUSIONS} \label{sec:conclusions} We have successfully explored new regions in the remarkable jet of Mrk~501. Thanks to the great sensitivity of the HSA, we reveal that the VLBI jet is one-sided (and therefore in the relativistic regime) out to at least 500 parsecs from the core. The polarization vectors are clearly aligned with the jet spine, suggesting that the magnetic field is orthogonal to the jet main axis. This is also in agreement with the results of the adiabatic fit to the jet brightness and width as a function of distance from the core. Limb brightening -- already detected on intermediate scales by VSOP observations \citepalias{gir04a} -- is now visible in HSA transverse profiles at $\sim 60$ mas from the core, and is likely present even on the sub-parsec scales imaged by the GMVA. Despite its presumed weakness, Mrk~501 has in fact been clearly detected by the GMVA on sub-milliarcsecond scales. This result is encouraging given the performance of the existing mm-VLBI array, and suggests that it is not only the brightest AGN that can be studied on the smallest scales. Present and future upgrades of the array (e.g., the installation of new receivers at Plateau de Bure, inclusion of new or existing telescopes such as the 40\,m at Yebes, Spain) are expected to make the instrument even more sensitive and reliable. The brightness temperature of the most compact component is about $6 \times 10^{10}$\,K. This region has a linear size of $0.020 \times 0.014$\,pc or, in terms of gravitational radii, $200 \times 140 R_S$.
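As a numerical cross-check of these figures, the coefficient derived in Sect.~3.3 can be applied directly to the core component of Table~\ref{t.model}; the short sketch below (in Python) reproduces both the $T_B \ge 6.8 \times 10^9$\,K lower limit and the $\sim 6 \times 10^{10}$\,K value obtained under the smaller-emitting-region assumption.

\begin{verbatim}
# Cross-check of T_B = 5.8e4 * B_mJy / (a*b) K (Sect. 3.3), with a and b
# the deconvolved semi-axes in mas.  Inputs taken from Table 2 / Fig. 6.
a, b = 0.032 / 2.0, 0.048 / 2.0    # mas; semi-axes of the core component
B_mJy = 45.0                       # mJy/beam; core peak brightness
T_B = 5.8e4 * B_mJy / (a * b)
print(f"T_B  ~ {T_B:.1e} K")       # ~6.8e9 K, the quoted lower limit
# If the true emitting region is ~1/3 of the deconvolved size in each axis:
print(f"T_B' ~ {T_B * 9.0:.0e} K") # ~6e10 K
\end{verbatim}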
Significant emission is also revealed on the short baselines between the most sensitive telescopes, and awaits proper imaging with increased fidelity. \begin{acknowledgements} We thank Dr.\ Luigina Feretti for useful discussions. We also thank the personnel of the observatories participating in the Global mm-VLBI array, and particularly T.\ Krichbaum for his advice during the data reduction. MAPT's research is funded through a Ramon y Cajal Fellowship from the Spanish Ministry of Education. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation. This research has made use of NASA's Astrophysics Data System Bibliographic Services and of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \end{acknowledgements}
\section{Introduction} \label{sec:intro} The Sun is the most well-studied star and serves as a benchmark for our understanding of stellar physics. High resolution spectra of the Sun have been essential references for a variety of stellar and planetary studies that strive to understand atomic physics processes in the solar atmosphere \citep{intro_model,sun_aluminum}, determine the chemical abundances of other stars \citep{intro_asp09,intro_bru12}, and measure the radial velocities of solar system objects reflecting solar light \citep{intro_bon14}. More recently, the use of solar spectra has been helpful in understanding the effects of magnetic activity and gravitational blueshift on radial velocity (RV) measurements \citep{harps_rv,Reiners16,harps15, harps_fit_challenge}. These studies strive to develop ways to reduce the effects of activity on stellar spectra that can limit the achieved RV measurement precision. An additional application of solar spectral observations is the study of telluric lines themselves, since the Sun serves as a bright back-light to the atmosphere that provides enough light for high signal-to-noise, high resolution telluric measurements. This enables the study of weak telluric lines, or micro-tellurics, that are of additional pertinence to exoplanet radial velocity studies due to their ability to impact precision RV measurements \citep{cunha14,sam_rv_err_budget,micro_telluric_peter,artigau14,telluric_budget}. Solar spectra have also proven useful to measure abundances of atmospheric gases \citep{toon_balloon,solar_occult,solar_o3} and to validate line parameter databases \citep{toon16}. High resolution, disk integrated spectra of the Sun are difficult to obtain from space, but are crucial for studies that need to view the Sun as a star. Ground-based solar atlases generated with spectrographs having resolution too low to resolve the solar lines are useful, but suffer from the effects of an instrument-specific line spread profile (LSP). The convolution of the full observed spectrum with the LSP makes telluric removal challenging, since correcting by dividing out a telluric model is no longer mathematically exact. Furthermore, the convolution with the LSP complicates comparisons between data and models at different resolutions. Several ground-based instruments have been successful at observing disk integrated spectra of the Sun at a resolution that fully resolves the solar lines. These include \cite{Kurucz84} and \cite{Reiners16}, which both utilized Fourier Transform Spectrographs (FTS). In a comparison of the wavelength solutions of these two atlases, \cite{Reiners16} showed that errors in the wavelength calibration of the Kitt Peak solar atlas were \textgreater 50~m s$^{-1}$ in regions blueward of 473~nm and 20~m~s$^{-1}$ redward of 850~nm, whereas the Institut f\"{u}r Astrophysik, G\"{o}ttingen (IAG) solar flux atlas shows good agreement, to within 10~m s$^{-1}$, with a HARPS laser frequency comb calibrated atlas \citep{Molaro13}, which was taken at slightly lower resolution covering a 100~nm range around 530~nm. Comparisons of the IAG atlas with a second Kitt Peak atlas derived slightly differently from the first (see \citealt{Wallace11}) show even larger offsets. These high resolution, disk integrated solar atlases are commonly used as a comparison to solar models, which are important for studying non-local thermal equilibrium effects that change the shapes of various spectral lines, such as those of calcium \citep{calcium_line_tests}.
Additionally, observing the Sun as a star and measuring the line bisectors provides important information for exoplanet radial velocity studies, which must understand the limiting effects of granulation and star spots on their radial velocity measurements. For these applications, the Sun serves as a useful test case since high resolution and high signal-to-noise measurements can be achieved, which are necessary to compare measurements to models and study the effects of degrading the spectral resolution, as will commonly be the case for measurements of other stars \citep{cegla19, bisector_resolution}. Additionally, being able to image the stellar surface provides extra information for studying the effect of star spots. For all these applications, telluric lines can skew the measurements \citep{lars18_bisector} and limit the spectral regions that are useful for these studies \citep{sun_aluminum,sun_balloon}. Few efforts have been made to correct solar spectra for telluric lines, possibly because this is more difficult for the Sun due to the lack of telluric reference stars and the low Doppler shifts of the solar lines, meaning telluric lines do not shift significantly with respect to the solar features. In work by \cite{Kurucz06}, the Kitt Peak Solar Atlas (KPSA) was telluric-corrected using a full radiative transfer atmospheric model. Residuals were replaced by hand with lines connecting the boundaries of contaminated regions. In 2011 another disk-integrated solar atlas was observed at Kitt Peak by \cite{Wallace11}, who corrected the spectrum for atmospheric absorption by using telluric data derived from disk-centered solar spectra. The improved wavelength calibration of the IAG atlas, in addition to the lack of uncertainties on the Kitt Peak atlases, motivates the derivation of a new telluric-corrected solar atlas from IAG solar spectral data. Here, we generate this telluric-corrected IAG solar flux atlas with estimated uncertainties that capture the success of the telluric removal process, making it useful for studies that wish to mask or properly weight telluric-contaminated spectral regions. To achieve this, we develop a unique semi-empirical telluric fitting method that works well despite the small Doppler shifts of the solar lines, which make it challenging to dissociate them from overlapping telluric features. In \S \ref{sec:data} we describe the data set used to generate this atlas and the pre-processing steps performed to determine the wavelength calibration and solar radial velocity for each spectrum. In \S \ref{sec:model} we describe the model framework and in \S \ref{sec:fit} we describe the fitting sequence and how we use the best-fit models to generate the output data products that include the final atlas and an archive of telluric spectra. We provide an analysis in \S \ref{sec:analyze} in which we compare our solar atlas to the KPSA and discuss our telluric model and findings related to the telluric line shape parameters. Finally, we conclude in \S \ref{sec:conclude}. \section{Data}\label{sec:data} The data used here for generating a telluric-free, disk-integrated solar spectrum were taken with the Vacuum Vertical Telescope\footnote{https://www.uni-goettingen.de/en/217813.html} (VVT) at the Institut f\"{u}r Astrophysik in G\"{o}ttingen, Germany. A siderostat mounted on the telescope directs light into an optical fiber that passes through an iodine cell before feeding the Fourier transform spectrograph \citep{lemke16}.
The iodine cell serves to provide a wavelength calibration that is more accurate than the internal calibration of the FTS. We utilize a subset of spectra from a 20-day data set taken over the span of a year, with each day having disk-integrated solar spectra recorded over a multi-hour span. Additionally, 450 spectra were taken using a halogen light source with the iodine cell in order to generate an iodine template spectrum. Although the original spectra cover a slightly wider wavelength range, we only process the region from 500-1000~nm. The resolution of the spectra is $\lambda/\Delta\lambda \approx 10^6$ and the signal to noise in the continuum spans 100-300 over the full wavelength range. For more information on the data and instrument setup, we refer the reader to \cite{lemke16} and \cite{Reiners16}. For generating the atlas, we choose a subset of spectra with a wide range in both solar radial velocity and airmass in order to achieve the best separation of each spectral component by leveraging the fact that the telluric component varies with airmass while the solar component varies with radial velocity. We opt to combine different days of data not only to maximize the range in these variables but also to leverage the different atmospheric conditions that will produce different telluric residuals and therefore reduce the possibility of confusing a telluric residual with a solar feature. We create 11 groups of data on which we perform our fits, each containing 12-15 spectra. Before running these fits, we apply several pre-processing steps to the full sample of data that we describe below. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{veltauiod_example_20160227.pdf} \caption{Observer-Sun velocity (top), iodine velocity (middle), and water vapor optical depth times airmass (bottom) versus time for observations taken on Feb. 27th, 2016. The dashed line in the top panel shows the actual solar velocity values that were calculated using the JPL web server and are offset from our measured velocities (gray points) due to the different zero-point velocities of the data and the template spectrum used to determine these values. The sequence of iodine velocities, measured by the iodine lines' positions relative to a template iodine spectrum, shows the drift in the Fourier Transform Spectrograph over this multi-hour observing run.} \label{fig:dnuvel} \end{figure} \paragraph{Flux Normalization} We first prepare the data for fitting by dividing out the continuum to produce spectra with normalized flux levels. We do this individually for each spectrum using an automated process that steps through 150~cm$^{-1}$ subregions and records the maximum flux value and corresponding wavenumber value. A cubic spline fit to these $\sim$70 points describes the continuum. The raw flux data are divided by this spline description of the continuum in order to produce the final normalized flux data. Any errors in this process are accounted for in our ultimate fitting sequence by including a floating linear continuum correction.
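As an illustration of this normalization step, a minimal sketch in Python could look like the following; the function and variable names are hypothetical, while the window width and cubic spline choice follow the description above.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def normalize_continuum(nu, flux, window=150.0):
    """Divide out a continuum estimated from per-window flux maxima.

    nu     : wavenumber array (cm^-1), ascending
    flux   : raw flux array
    window : subregion width in cm^-1 (150 cm^-1 in the text)
    """
    anchor_nu, anchor_flux = [], []
    for lo in np.arange(nu[0], nu[-1], window):
        sel = (nu >= lo) & (nu < lo + window)
        if sel.any():
            i = np.argmax(flux[sel])   # highest point approximates the continuum
            anchor_nu.append(nu[sel][i])
            anchor_flux.append(flux[sel][i])
    continuum = CubicSpline(anchor_nu, anchor_flux)(nu)
    return flux / continuum            # normalized flux
\end{verbatim}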
\paragraph{Pre-Fits: Solar and Iodine Velocities} Using these flux-flattened spectra, we perform a fitting routine that determines (1) the solar velocity offset, v, relative to a template solar spectrum and (2) the wavelength calibration from the iodine lines using $\nu_c = \nu(1+\kappa)$, where $\nu_c$ is the corrected frequency array, $\nu$ is the original frequency array, and $\kappa=-\mathrm{v}_\mathrm{iod}/c$, where $\mathrm{v}_\mathrm{iod}$ is the iodine velocity and $c$ is the speed of light. We determine these values a priori instead of optimizing them simultaneously with our telluric fitting sequence in order to enforce that these values are the same across the narrow (10~cm$^{-1}$) wavelength regions over which we perform each telluric fit. This ensures we are fully leveraging our knowledge of the locations of the stellar lines, which is particularly important in regions that do not contain strong stellar features. The fitting routine we use to determine v and $\kappa$ for each spectrum follows the method described in Section 3 of \cite{lemke16}, except that instead of using an iodine-free IAG solar template, we use the telluric-corrected KPSA, since we later adopt it as the starting guess for the solar spectrum in our full (solar+telluric) fits to the data. Although it is not necessary to begin with a solar model, this speeds up the iterative fitting process. In Figure \ref{fig:dnuvel} we show several parameters determined for a sequence of observations taken on Feb. 27th, 2016. In the top panel of Figure \ref{fig:dnuvel} we plot the measured velocities of the solar lines. The scatter in the measured solar velocity values is structured due to tracking errors, sunspots, and physical sources of line shifting that can occur (see \citealt{Reiners16} for more description). We ultimately shift our solar spectra by the Doppler velocity between the Sun and G\"{o}ttingen calculated using the JPL ephemeris generator\footnote{https://ssd.jpl.nasa.gov/horizons.cgi} (shown in Figure \ref{fig:dnuvel} as the black dashed line); therefore, these effects do not affect the alignment of our final solar atlas, and the random differences in the line shapes caused by, for example, tracking errors and sunspot position will be reduced by averaging. An example sequence of $\kappa$ values is shown in the middle panel of Figure \ref{fig:dnuvel}, where the error bars depict the standard deviation of the values measured for each of the ten 350~cm$^{-1}$ wide spectral regions that were fit independently and then averaged to determine the final iodine velocity. The average uncertainty in the measurements for this day is 0.9~m~s$^{-1}$, which is typical of other days of observations. The oscillatory behaviour in $\kappa$ is also seen in the other observing runs and shows the intrinsic drift of the instrument. \paragraph{Pre-Fits: H$_2$O Optical Depth} In the bottom panel of Figure \ref{fig:dnuvel} we show the product of airmass, $\alpha$, and water vapor optical depth, $\tau$, which accounts for the different column densities of water vapor between the various observations. The value of $\tau$ is first estimated by measuring the line depth of an isolated water vapor line located at 15411.73~cm$^{-1}$ and is then further optimized in Step 1 of our full fitting sequence, which is described in \S \ref{sec:fit}.
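This initial estimate of $\tau$ can be sketched as follows; the reference absorbance \texttt{a\_ref} of the 15411.73~cm$^{-1}$ line at unit airmass and unit optical depth is a hypothetical value used purely for illustration.
\begin{verbatim}
import numpy as np

def estimate_tau(nu, flux_norm, airmass, a_ref=0.05,
                 center=15411.73, half_width=0.5):
    """Estimate the water vapor optical depth scaling tau from the
    depth of one isolated H2O line, assuming the absorbance model
    A = tau * airmass * a_ref.

    a_ref : assumed absorbance of the line for tau = airmass = 1
            (hypothetical value, for illustration only)
    """
    sel = np.abs(nu - center) < half_width
    absorbance = -np.log(flux_norm[sel].min())   # depth of the line core
    return absorbance / (airmass * a_ref)
\end{verbatim}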
We only perform these optical depth fits for water vapor since oxygen is well-mixed in the atmosphere; therefore, using the airmass value of each observation to scale the oxygen telluric spectrum is sufficient. This is described more in \S \ref{sec:telluric}. \paragraph{Noise Determination} Quantization noise, photonic noise, and instrumental noise all contribute to the final noise in our FTS spectra. Of these, photonic noise dominates at high resolution, where the informative component of the interferogram is small and can easily be swamped by the photonic noise from a constant term related to the half power of the source \citep{ftsNoise10}. While in theory the noise of our measurements can be calculated, it is simple and accurate to deduce the final measurement noise by measuring the flux RMS of a portion of featureless spectrum. We do this for each observation by taking the noise for each spectrum to be the RMS of the flux normalized spectra over a 1.5~nm wavelength range starting at 1048.8~nm, which gives the noise in the continuum to be about 1\%. This region was chosen for its lack of solar and telluric features, but it must be noted that the actual noise levels vary slightly across the full wavelength span of the data due to the presence of telluric lines and variations in the sensitivity of the FTS. The RMS in the continuum drops to 0.3\% around 680~nm, and increases again at bluer wavelengths. Not accounting for the varying sensitivity of the FTS does not significantly affect the performance of our fits; however, it is important to modify our noise array over regions where the transmission drops due to saturated absorption lines. For this, we increase the noise determined for the continuum, $\sigma_\mathrm{cont}$, by a range of factors from 1.25$\sigma_\mathrm{cont}$ to 10$\sigma_\mathrm{cont}$ where the transmission drops to 4.0\% and 0.3\%, respectively. For example, in regions where the transmission is less than 1\%, we multiply the noise array by a factor of 2.5. The factors were chosen to match the observed noise in the data. We propagate the final noise array, $\sigma$, determined using the linear normalized flux, $\mathcal{F}$, to the noise of the logarithmic flux by $\sigma/\mathcal{F}$ and use these values in evaluating the optimization function for our fits. \begin{table}[ht] \centering \caption{Summary of spectral parameters for each of the eleven fitting groups. For the observations included in each group we report the average value of the iodine velocity ($c \cdot \kappa$) in addition to the minimum and maximum values for the range of solar velocities, water vapor optical depths, and airmass values.} \begin{tabular}{ccccc} Group & $c \cdot \overline{\kappa}$ & v$_{min}$ -- v$_{max}$ & $\tau_{min}$ -- $\tau_{max}$ & $\alpha_{min}$ -- $\alpha_{max}$\\ No.
& (m~s$^{-1}$) & (km~s$^{-1}$) & & \\ \hline 0 & 4.1 & -0.1-- 0.3 & 1.0-- 5.3 & 1.1-- 3.5 \\ 1 & 6.0 & -0.5-- 0.4 & 0.3-- 1.3 & 1.1-- 3.0 \\ 2 & 1.2 & -0.1-- 0.4 & 0.4-- 2.5 & 1.1-- 3.0 \\ 3 & 9.4 & -0.7-- 0.4 & 0.2-- 1.0 & 1.1-- 3.1 \\ 4 & 3.7 & -0.1-- 0.4 & 0.5-- 2.7 & 1.1-- 3.1 \\ 5 & 8.8 & -0.7-- 0.4 & 0.3-- 1.7 & 1.1-- 3.2 \\ 6 & 2.7 & -0.6-- 0.5 & 0.6-- 3.4 & 1.1-- 3.2 \\ 7 & 9.4 & -0.5-- 0.4 & 0.4-- 2.2 & 1.3-- 3.4 \\ 8 & 8.5 & -0.7-- 0.5 & 0.3-- 1.9 & 1.1-- 3.4 \\ 9 & 6.9 & -0.6-- 0.3 & 0.4-- 1.6 & 1.2-- 3.4 \\ 10 & 5.8 & -0.7-- 0.4 & 0.3-- 1.5 & 1.1-- 3.5 \\ \end{tabular} \label{tab:params_summary} \end{table} \section{Modeling Methods}\label{sec:model} Here we describe our model and justify our choices for how we represent each spectral component. \subsection{Model Representation} To fit the IAG solar spectra, we construct a model that is composed of solar, telluric, and iodine spectral components in addition to a linear continuum model. We represent each component in units of absorbance for the fitting process. For computational reasons, we split each spectrum into 10~cm$^{-1}$ chunks that are fit separately. For each of the 11 groups of data, we simultaneously fit the 12-15 spectra, which range widely in the airmass of the observation and the Sun-observer velocity, in order to best separate the telluric and solar components of the data. These groupings and their respective parameters are listed and described in Table \ref{tab:params_summary}. We therefore generate a model that is an N$_{\mathrm{spec}}$ by N$_\mathrm{points}$ array, where N$_\mathrm{spec}$ is the number of spectra being fitted and N$_\mathrm{points}$ is the number of data points in the fit region. Each model along the N$_\mathrm{spec}$ axis is generated using the same underlying solar and telluric spectral models, but is shifted and scaled according to the solar radial velocity and the species' column densities (including the airmass factor and $\tau$ for water vapor), respectively. Our final calculated model, $\mathcal{C}$, for each spectrum, indexed by $i$, can be represented as a sum of each component in units of absorbance: \begin{equation}\label{eq:mod} \mathcal{C}(\nu)_i = \mathcal{A}_{\mathrm{T},i}(\nu) + \mathcal{A}_{\mathrm{S},i}(\nu) + \mathcal{A}_{\mathrm{I},i}(\nu) + \mathcal{A}_\mathrm{C}(\nu) \:, \end{equation} \noindent where we have used the subscripts T for telluric, S for solar, I for iodine, and C for the continuum; the absorbance, $\mathcal{A}$, is just the negative logarithm of the flux, $\mathcal{A} = -\log \mathcal{F}$. We demonstrate our model decomposed into all of these components in Figure \ref{fig:components}, in which we have plotted the data, $\mathcal{O}$, as red points and our model, $\mathcal{C}$, as the dashed black line, where both have been converted to linear flux units before plotting. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{components_high_airmass_range.pdf} \caption{Example fit showing the various spectral components of our model. The continuum is not shown but is near unity for this region. The data are shown as red points with the model as the dashed black line that is equal to the product of the telluric (navy), solar (yellow), and iodine (green) spectra. The residuals (data minus model) are shown in the bottom panel.} \label{fig:components} \end{figure} \subsubsection{Iodine Component} For the iodine spectrum, we use the flux normalized template spectrum made from averaging together 450 iodine spectra taken with the FTS using a halogen lamp light source.
The iodine template spectrum, $\mathcal{A}_\mathrm{I,temp}$, is shifted by the predetermined wavelength calibration, $\kappa$, that was found for each spectrum (see \S \ref{sec:data}) before being added to the final model using a cubic interpolation. There are therefore no optimization parameters related to the iodine model. We point out that we chose not to simultaneously solve for $\kappa$ in our full fits since the estimates from the wider wavelength regions in the pre-fits are more reliable than fitting for the iodine line shifts in each 10~cm$^{-1}$ chunk simultaneously with the other parameters. This is particularly true in regions where few or no iodine lines exist, which is around half of our spectral range. \begin{equation} \mathcal{A}_{\mathrm{I},i} = \mathcal{A}_\mathrm{I,temp}(\nu \cdot (1+\kappa_i)) \end{equation} \subsubsection{Continuum Model} The continuum component of the model is important in order to capture errors in the flux normalization process, which is a challenge in regions where the continuum is not well defined due to saturated telluric lines or a deep stellar feature. Because we are working in 10~cm$^{-1}$ chunks that are small compared to the curvature due to continuum modulation artifacts from the instrument, a linear trend sufficiently captures these errors. The continuum is specified in absorbance by two end point parameters, $\xi_l$ and $\xi_r$, and an extra vertical shift specified for each date, $\xi_\mathrm{date}$, so that \begin{equation} \mathcal{A}_\mathrm{C}(\nu)= \frac{\xi_l - \xi_r}{\nu_l - \nu_r} (\nu - \nu_l) + \xi_l + \xi_\mathrm{date}\: , \end{equation} where $\nu_l$ and $\nu_r$ are the corresponding leftmost and rightmost wavenumber values. $\xi_\mathrm{date}$ is added to account for differences in the flux normalization process due to changing telluric absorption, as well as differences in the illumination of the instrument, which both result in slight continuum offsets between the spectra. These continuum differences are similar for observations taken on the same day, so we only add a single extra vertical shift per unique observation date, which is applied to the spectra taken on the respective day. \subsubsection{Telluric Model}\label{sec:telluric} In the spectral range of our data, O$_2$ and H$_2$O are the only species whose atmospheric abundances and absorption strengths result in signals that exceed the noise of our data. We therefore only include these two species and choose to model each individual line with a Lorentz profile. We use the High-resolution Transmission Molecular Absorption Database (HITRAN), version 2016 \citep{Gordon17}, for a starting guess to the strength, $S_{0}$, width\footnote{We use $\gamma_{\mathrm{air}}$ from HITRAN for both molecules}, $\gamma$, and center position, $\nu_c$, and then optimize these parameters individually. We discuss our reasoning for pursuing this semi-empirical telluric model in Appendix \ref{sec:justify_tel}. We optimize the Lorentz parameters for all individual line transitions with $S_0$ greater than $10^{-26}$ and $10^{-28}$ for water vapor and molecular oxygen, respectively, in HITRAN units of line intensity\footnote{https://hitran.org/docs/definitions-and-units/} (cm$^{-1}$/molecule/cm$^{-2}$). These parameters are queried directly from HITRAN using the HITRAN Application Programming Interface (HAPI; \citealt{hapi}).
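As a hedged sketch of this step, the starting line parameters can be pulled with HAPI and assembled into the Lorentz absorbance sum used below; the local table and variable names are illustrative only, only the main isotopologue is queried here, and the scalings by $\Psi_\mathrm{mol}$, $\tau_i$, and $\alpha_i$ are left to the caller.
\begin{verbatim}
import numpy as np
from hapi import db_begin, fetch, getColumn

db_begin('hitran_data')                 # local cache directory
fetch('H2O', 1, 1, 11010.0, 11020.0)    # molecule 1 = H2O, isotopologue 1,
                                        # one 10 cm^-1 chunk (needs internet)

nu_c = np.array(getColumn('H2O', 'nu'))         # line centers (cm^-1)
S0   = np.array(getColumn('H2O', 'sw'))         # line intensities
gam  = np.array(getColumn('H2O', 'gamma_air'))  # air-broadened widths

strong = S0 > 1e-26   # H2O lines whose parameters are individually optimized

def lorentz_absorbance(nu, centers, gammas, strengths):
    """Sum of area-normalized Lorentz profiles, L(nu; gamma, nu_c, S)."""
    A = np.zeros_like(nu)
    for c, g, s in zip(centers, gammas, strengths):
        A += s * (g / np.pi) / ((nu - c) ** 2 + g ** 2)
    return A
\end{verbatim}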
We also include in our model H$_2$O absorption features down to $S_{0}$=10$^{-28}$ cm$^{-1}$/molecule/cm$^{-2}$ that are not individually optimized, but do adopt a common modification of their line parameter values from an initial fitting step where all lines are shifted and scaled in unison. This line strength range corresponds to line depths of approximately 0.01-1\% in linear flux with respect to the continuum for the highest airmass observations in this dataset. Different threshold values for each molecule are required due to their differing abundances in our atmosphere, $\Psi_\mathrm{mol}$, which are multiplied by each individually optimized $S$ value to determine the final line absorption strength. We summarize the full telluric model below, where we have indexed the H$_2$O lines by $l$ and the O$_2$ lines by $m$; there are N$_{\mathrm{H_2O}}$ water vapor lines and N$_{\mathrm{O_2}}$ molecular oxygen lines in total. As before, each spectrum in the fit is indexed by $i$ and there are N$_\mathrm{spec}$ in total. \begin{equation}\label{eq:telluric} \begin{split} \mathcal{A}_{\mathrm{T},i}(\nu) = \tau_i \cdot \alpha_i \cdot \Psi_{\mathrm{H_2O}} \cdot \sum_l^\mathrm{N_{H_2O}} \mathcal{L}(\nu;\gamma_l,\nu_{c,l},S_l) \\ + \alpha_i \cdot \Psi_{\mathrm{O_2}} \cdot \sum_m^\mathrm{N_{O_2}} \mathcal{L}(\nu;\gamma_m,\nu_{c,m},S_m) \end{split} \end{equation} \noindent For each species, we have one principal spectrum per day that is scaled to each solar observation by the airmass, $\alpha_i$, and for water vapor we additionally scale by the pre-fitted water vapor optical depth, $\tau_i$, previously determined for each observation. Although the telluric spectrum should be shifted by $\kappa_i$, we do not include this since any bulk shift due to differences in instrument drift across spectra (\textless 10~m~s$^{-1}$ maximum, see column 2 of Table \ref{tab:params_summary}) will be small compared to a modification on $\nu_c$ (15 to 500~m~s$^{-1}$) and will also ultimately be absorbed into the corrections on $\nu_c$. As will be discussed more in \S \ref{sec:fit}, the line parameters are first all modified simultaneously, with all $\gamma$ values being multiplied by $f_{\gamma,\mathrm{mol}}$ for each molecule, the line strengths being multiplied by $\Psi_\mathrm{mol}$, and the line centers being shifted by $\delta_{air}\cdot P$, where $P$ is optimized and is physically motivated as a one-dimensional pressure term, and $\delta_{air}$ is the pressure-induced shift (in units of cm$^{-1}$~atm$^{-1}$) provided in the HITRAN database for each line transition. \subsubsection{Solar Model}\label{sec:spline} For our solar model, we use a cubic spline that we initialize to a flux normalized version of the telluric-corrected KPSA. The flux normalization for the KPSA is performed by simply dividing each 10~cm$^{-1}$ chunk by the maximum value in that spectral range. This performs well outside regions containing very wide stellar features; however, the continuum component of our model accounts for these offsets and is later used to correct for them. We describe this in Step 3 of \S \ref{sec:fit}. Our spline is implemented using \texttt{BSpline} in the Python \texttt{scipy.interpolate} package and can be described as: \begin{equation}\label{eq:spline} \mathcal{A}_\mathrm{S}(\nu) = \sum_{j=0}^{n-1} c_j B_{j,q;t}(\nu) \: . \end{equation} \noindent Here, we define the spline for a chunk of spectrum over which we define knot points, $t_j$.
The final spline function can be written as the sum of coefficients, $c_j$, multiplied by the basis splines, $B_{j,q;t}$, which are defined in Appendix \ref{sec:bsplines}. For our application we use a cubic spline ($q$=3) and position knots at intervals of 0.1~cm$^{-1}$ in regions with low stellar absorption (\textless 10\% absorption), as determined by the Kitt Peak telluric-corrected solar atlas, and use a knot spacing of 0.05~cm$^{-1}$ for stellar spectral regions with greater than 10\% absorption. We found that this knot sampling was able to capture the curvature of the spectral features, while a coarser spacing of knot points would introduce oscillatory numerical artifacts above the noise in regions with high curvature. Because each knot point has multiplicity one (no overlapping points), our resultant stellar spectrum will be smooth, as desired. In our fits, we optimize the coefficients, $c_j$, of the spline, which are initialized by performing a least squares minimization between the spline and the telluric-corrected KPSA. The final stellar model array, $\mathcal{A}_{\mathrm{S},i}$, contains N$_\mathrm{spec}$ stellar models, each shifted by the solar and iodine velocities already measured for each spectrum included in the fit. We generate $\mathcal{A}_{\mathrm{S},i}$ by simply evaluating our solar spline at the array of wavenumbers modified by $\kappa_i$ and v$_i$: \begin{equation}\label{eq:solar} \mathcal{A}_{\mathrm{S},i}(\nu) = \mathcal{A}_\mathrm{S}(\nu\cdot(1+\kappa_i)(1-\mathrm{v}_i/\mathrm{c})) \end{equation} \begin{table} \centering \caption{A summary of the optimization parameters for our model described in the text.} \label{tab:parameters} \begin{tabular}{ccc} Model & Optimized & Prefit \\ Component & Parameters & Parameters \\ \hline $\mathcal{A}_\mathrm{S}$ & $c_j$ & v$_i, \kappa_i$ \\ $\mathcal{A}_\mathrm{T}$ & See Table \ref{tab:tel_pars} & $\tau_i$ \\ $\mathcal{A}_\mathrm{C}$ & $\xi_l$, $\xi_r$, $\xi_{\mathrm{date}}$ & - \\ $\mathcal{A}_\mathrm{I}$ & - & $\kappa_i$ \\ \end{tabular} \end{table} \begin{table} \centering \caption{List of telluric model parameters relevant to select absorption features depending on their transition line strengths. We additionally denote the line strength cutoff for weak telluric features that are omitted from the model and define the minimum line strength marking the boundary of the `strong' group of features that are fit together with the solar spline model in Step 2 of the fitting sequence (see text for more description).} \begin{tabular}{ccc} Species & Line Strengths & Optimized Parameters \\ \hline H$_2$O & S \textless 10$^{-28}$ & (omitted) \\ & S \textgreater 10$^{-28}$ & $f_{\gamma,H_2O}$, $\Psi_{H_2O}$, $P$ \\ & S \textgreater 10$^{-26}$ & $\gamma_l$, $S_l$, $\nu_{c,l}$ \\ & S \textgreater 10$^{-25}$ & (`strong' lines) \\ & & \\ O$_2$ & S \textless 10$^{-28}$ & (omitted) \\ & S \textgreater 10$^{-28}$ & $f_{\gamma,O_2}$, $\Psi_{O_2}$, $P$, $\gamma_m$, $S_m$, $\nu_{c,m}$ \\ & S \textgreater 10$^{-27}$ & (`strong' lines) \\ \end{tabular} \label{tab:tel_pars} \end{table} \section{Fitting \& Processing Steps} \label{sec:fit} Here we discuss the fitting sequence in detail and describe how we generate the final solar atlas in addition to the telluric spectra extracted from the data set.
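To make the spline model of \S \ref{sec:spline} concrete before detailing the fitting steps, the sketch below shows one way such a spline could be initialized against the KPSA with \texttt{scipy}; it assumes absorbance arrays for one 10~cm$^{-1}$ chunk, and the function name is hypothetical.
\begin{verbatim}
import numpy as np
from scipy.interpolate import make_lsq_spline

def init_solar_spline(nu, absorbance_kpsa, spacing=0.1, k=3):
    """Least squares initialization of the cubic solar spline above.

    spacing : interior knot spacing in cm^-1 (0.1, or 0.05 over
              stronger stellar features, as described in the text)
    """
    interior = np.arange(nu[0] + spacing, nu[-1] - spacing, spacing)
    # boundary knots repeated k+1 times, as a BSpline basis requires
    t = np.r_[[nu[0]] * (k + 1), interior, [nu[-1]] * (k + 1)]
    return make_lsq_spline(nu, absorbance_kpsa, t, k=k)

# The returned BSpline exposes the coefficients c_j as spline.c;
# the shifted solar model for spectrum i is then evaluated as
# spline(nu * (1 + kappa_i) * (1 - v_i / c_light)).
\end{verbatim}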
\subsection{General Fitting Routine} \paragraph{Step 1: Telluric Optimization} The optimization sequence begins by stepping through the spectra in 10~cm$^{-1}$ chunks (each containing 664 data points) and, for each chunk, fitting $N_\mathrm{spec}$ spectra with the solar spectral model set to the KPSA for that region while we iteratively optimize\footnote{We found best behaviour using the sequential least squares programming (SLSQP) algorithm in the \texttt{scipy.optimize.minimize} function} the continuum and telluric parameters. For this we minimize a $\chi^2$ term with a penalty term, $p_1$, added in order to encourage the model to go to zero in saturated regions, which otherwise do not contribute significantly to the $\chi^2$ value due to the large flux uncertainties. If $\chi_1^2$ is the objective function for Step 1, it can be summarized as $\chi_1^2 = \chi^2 + p_1$ with $\chi^2$ and $p_1$ defined as follows. \begin{equation}\label{eq:chi2} \chi^2 = \frac{1}{N_\mathrm{spec} N_\mathrm{point}}\sum_\nu \sum_i \frac{[\mathcal{O}_i - \mathcal{C}_i]^2}{\sigma_i^2/\mathcal{F}^2_i}, \end{equation} \begin{equation} p_1 = \frac{1}{N_\mathrm{sat}}\sum_{i}\sum_{\nu_{\mathrm{sat}}} \frac{[e^{-\mathcal{O}_i} - e^{-\mathcal{C}_i}]^2}{\sigma_\mathrm{med}^2} \end{equation} \noindent Here we have denoted the median uncertainty over each spectrum as $\sigma_\mathrm{med}$, which provides a scaling according to the noise in each spectrum without diminishing the term, as would happen if we used the $\sigma$ values, which are large over these saturated regions. For the iteration series, we begin by optimizing the continuum parameters, $f_\gamma$ and $\Psi$ for each species, and $P$, which scales the linecenter shifts. Lines with strength greater than 10$^{-28}$ for both species are included in the model and are modified based on the best fit values of $f_\gamma$, $\Psi$, and $P$, which effectively modifies all features in unison. This unified shift and scaling serves as a first approach to the best fit solution and is additionally useful for the weakest features, which have line depths near the noise, making them challenging to fit individually. In the next stage, we optimize the continuum and telluric models, where all three parameters for each line of the telluric model are allowed to vary individually. As recorded in Table \ref{tab:tel_pars}, we optimize the line parameters for individual lines with strengths greater than 10$^{-26}$ and 10$^{-28}$ for water vapor and oxygen, respectively. We split the water vapor lines into two groups based on line strength that may be fit separately to improve computation times: a `weak' group and a `strong' group. The `strong' group\footnote{The strong lines have depths greater than around 5\% in linear normalized flux on average.} includes absorption features with $S$ greater than ten times the lower threshold just defined for individually fitted H$_2$O lines, with the `weak' group containing the remaining features ($10^{-26} < S < 10^{-25}$). We iterate between fitting these two telluric groups and the continuum parameters until convergence (i.e. no significant changes in the $\chi_1^2$ value). At the end we allow both groups of telluric lines and the continuum to vary simultaneously, having kept $f_\gamma$, $\Psi$, and $P$ fixed in this iteration process. We note that we bound the amount that the line centers can shift to 0.01~cm$^{-1}$ in each optimization stage to avoid the fits swapping features in the data.
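For concreteness, the Step 1 objective $\chi_1^2 = \chi^2 + p_1$ can be sketched as follows; the array names are hypothetical, and all spectra are assumed to be stacked in absorbance arrays of shape (N$_\mathrm{spec}$, N$_\mathrm{point}$).
\begin{verbatim}
import numpy as np

def chi1_squared(obs, model, sigma_log, sat, sigma_med):
    """Step 1 objective: chi^2 plus the saturation penalty p_1.

    obs, model : data and model absorbance, shape (N_spec, N_point)
    sigma_log  : log-flux uncertainties, i.e. sigma / F
    sat        : boolean mask of saturated wavenumber channels
    sigma_med  : per-spectrum median uncertainty, shape (N_spec, 1)
    """
    n_spec, n_point = obs.shape
    chi2 = np.sum(((obs - model) / sigma_log) ** 2) / (n_spec * n_point)
    # penalty evaluated in linear flux so saturated cores (flux ~ 0) count
    resid = np.exp(-obs[:, sat]) - np.exp(-model[:, sat])
    p1 = np.sum((resid / sigma_med) ** 2) / max(int(sat.sum()), 1)
    return chi2 + p1
\end{verbatim}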
We find that our fits perform well, but that the optical depths of the water vapor lines are poorly estimated by our initial effort that determined $\tau$ by fitting one isolated water vapor line. We improve this by iterating between fitting for $\tau_i$ over all spectra and performing the fitting sequence described here until convergence. We only do this for one 10~cm$^{-1}$ wide range of spectra starting at 11010.0~cm$^{-1}$ (907.44 - 908.26~nm), a region dominated by deep water vapor lines. We find the wings of the lines are very sensitive to the value of $\tau$, and fitting multiple lines at once significantly improves our estimates of the optical depth for each spectrum. \paragraph{Step 2: Solar and Telluric Fitting} Using the best fit estimates for the continuum and telluric line parameters, we now optimize both the spline coefficients and the group of strong telluric transitions. We choose not to fit the weak group simultaneously with the spline optimization since these lines are fit reliably well in Step 1. To perform this fit we use the optimization function from Step 1 and add another penalty term that serves to penalize the fit for adding features to the solar spectrum over saturated regions. This is to alleviate the potential problem of the spline filling in saturated regions, where no photon information exists. The two penalty terms together therefore ensure that the core of a saturated line is fit properly (i.e. the model goes to zero) while avoiding the scenario where a narrow, deep feature in the spline fills in the core, since this is unlikely and in any case cannot be constrained. We choose not to completely prevent solar features in these regions since a solar line will often overlap the edges of saturated regions, and we do not wish to compromise the shape of these features by abruptly forcing the spline, which must be smooth, to zero. We therefore define the penalty, $p_2$, as \begin{equation} p_2 = \beta \sum_{\nu_{sat}} \mathcal{A}_{\mathrm{S}} \: , \end{equation} \noindent where the coefficient $\beta$ is a scaling factor that adjusts the penalty term to an effective range that does not force the solar spectrum to zero, but still prevents large, unnecessary additions to the spectrum. The objective function for Step 2 can be summarized as ${\chi_2}^2 = \chi^2 + p_1 + p_2$. Because we have a good first guess to the solar spectrum and our $\chi^2$ value, tuning $\beta$ uniquely to each portion of spectrum can be done based on this prior information. Since our $\chi^2$ value is in theory at best unity, which occurs when the residuals are purely noise at the level of the data uncertainties, we can define $\beta$ to be the value such that the initial penalty term is 0.5, or around 50\% of $\chi^2$ for a well optimized fit. We therefore set $\beta = \frac{0.5}{\sum_{\nu_{sat}} \mathcal{A}_{\mathrm{S},0}}$, where $\mathcal{A}_{\mathrm{S},0}$ is the initial solar spectrum that has been set to the KPSA. This method of tuning a penalty term is typical in cases where overfitting can potentially be an issue (e.g. \citealt{intro_bed19}). \begin{figure} \centering \includegraphics[width=0.99\linewidth]{example_fit_high_airmass_range.pdf} \caption{Example fit shown for a small wavelength region for one group of spectra. The flux-normalized spectra are shown with colors corresponding to the water vapor optical depth, and the best fit models for each are plotted as a dashed black line (top). The residuals for each fit are shown (bottom) with the same colors.
A solar feature on the right is apparent due to its lack of change with airmass and demonstrates the small Doppler velocities, which cause line shifts of only a fraction of a line width.} \label{fig:eg_fit} \end{figure} \paragraph{Step 3: Correcting the Continuum} As noted before, the fitting process is performed for each 10~cm$^{-1}$ subregion separately. Before removing the continuum, telluric, and iodine solutions to extract the final solar spectrum, we must address the fact that the continuum solution contains the corrections to both the data and solar spectral continua. We find that in saturated telluric regions, the original continuum corrections are due to errors in the flux normalization done to the data. The solar spectrum normalization process only fails in regions where a wide solar feature spans the entire 10~cm$^{-1}$ subregion, since otherwise the solar model is at maximum (unity) between stellar lines. Over our wavelength range, there are five occurrences of a wide solar feature, and these are all present in regions that do not overlap saturated telluric features. We utilize this in separating the continuum offsets originating from the stellar model from offsets due to the continuum correction performed to the data. To remove the offsets in the continuum solution due to errors in the initial normalization of the solar spectrum, we take the final best-fit continuum array and, if the spectral subregion under consideration overlaps a dense telluric band, we leave the continuum alone. If the subregion does not contain dense\footnote{We use an arbitrary definition for what constitutes `dense' that involves summing the telluric model and comparing it to a predetermined threshold value.} telluric features, we modify the best-fit continuum model by subtracting from it the median value of the continuum solutions determined for each spectrum in the group. Because we will later use the solar spline model to replace regions containing telluric residuals, we add these subtracted values to our stellar spline model. This transfers any continuum offsets originating from the stellar normalization process back to the stellar spectrum, while keeping information about the relative offsets between the spectra grouped by date. Typically these offsets are small over telluric-free regions. \subsection{Generating the Final Solar Spectrum} \paragraph{Removing Best-Fit Model Components} Using the corrected continuum array along with the telluric model and iodine template, we can subtract these from the FTS data to leave just the solar component: \begin{equation} \mathcal{F}_{\mathrm{S},i} = e^{ - (\mathcal{A}_{\mathrm{data},i} - \mathcal{A}_{\mathrm{I},i} - \mathcal{A}_{\mathrm{T},i} - \mathcal{A'}_{\mathrm{C},i})} . \end{equation} \noindent Here we have denoted $\mathcal{A'}_{\mathrm{C},i}$ as our corrected continuum array and $\mathcal{F}_{\mathrm{S},i}$ as our final solar spectra for each observation defined over our full wavelength span, and we have converted from absorbance to transmission. We note that in regions where the data are saturated there will be spurious values due to dividing data dominated by noise by a model that is approximately zero in those regions. Residuals from the telluric subtraction may also remain in the final solar spectrum and are typically visible over deeper lines (\textgreater 10\% absorption). Instead of excising these regions entirely, we replace them with our spline model.
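In code, this removal and replacement step might look like the following sketch; the array names are hypothetical and all arrays are defined over one observation's wavenumber grid.
\begin{verbatim}
import numpy as np

def extract_solar(A_data, A_iod, A_tel, A_cont_corr, A_solar_spline):
    """Remove the best-fit iodine, telluric, and corrected continuum
    components (in absorbance), convert to transmission, and replace
    telluric-contaminated channels with the spline solar model."""
    solar_flux = np.exp(-(A_data - A_iod - A_tel - A_cont_corr))
    contaminated = (1.0 - np.exp(-A_tel)) > 0.10  # telluric depth > 10%
    solar_flux[contaminated] = np.exp(-A_solar_spline[contaminated])
    return solar_flux
\end{verbatim}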
These regions are flagged in the final spectrum. \paragraph{Velocity Zero Point Determination} To achieve an absolute wavelength calibration, we use the iodine catalog from \cite{iodine_ascii}, who recorded a Doppler-limited iodine spectrum using an FTS and corrected the wavelength scale to match other wavelength-calibrated iodine atlases, including that of \cite{GERSTENKORN1981322}. From their comparison to these other atlases, they estimate that their spectrum is reliable to $\pm$0.003~cm$^{-1}$ across their frequency range of 14250-20000~cm$^{-1}$. We use their recorded spectrum to find the offset of our template iodine spectrum and find that ours is shifted redward of their catalog spectrum by about 70~m~s$^{-1}$, or 0.004~cm$^{-1}$. We shift our final solar spectrum by 70~m~s$^{-1}$, such that $\kappa$ = 3.2$\times 10^{-7}$, and adopt the uncertainty in the template iodine spectrum of $\pm$0.003~cm$^{-1}$, which translates to $\pm$90~m~s$^{-1}$ at frequencies of 10000~cm$^{-1}$ and $\pm$45~m~s$^{-1}$ at 20000~cm$^{-1}$. We note that any shifts in our iodine spectrum due to a temperature difference between our setup and the temperatures used by \cite{iodine_ascii} should be over 10 times smaller than our adopted uncertainties for the absolute shift of our final stellar spectrum \citep{iodine_stability}. We additionally attempted to derive a zero point offset solution using the central positions of the telluric lines as compared to their catalog linecenters modified by the pressure-induced line shift. This produced a consistent result but was less precise. This is described more in Appendix \ref{sec:zpo}. \paragraph{Combining Spectra} We combine our spectra after shifting $\mathcal{F}_{\mathrm{S},i}$ according to the velocity v$_{\mathrm{eph},i}$ between G{\"o}ttingen and the Sun at the time of the observation. We also perform a second shift for the calibration velocity measured from the iodine lines and a final shift for the absolute zero point velocity. We combine the spectra by stepping through the same 10~cm$^{-1}$ chunks and, before averaging, remove spectra with extreme residuals due either to a poor fit or to a large airmass value; the latter, although useful for constraining the spline fit, have higher uncertainties and the least information over telluric regions. We record the final average of the remaining spectra as our telluric-free IAG solar flux atlas and record the standard deviation of these remaining spectra as the final uncertainty. We plot the final atlas in Figure \ref{fig:full_spec}. For ease of use, and since the uncertainty array will fail over telluric-contaminated regions replaced by the spline model, we create a flag array for the final solar spectrum to identify regions with varying levels of telluric absorption: 0 indicates a robust spectral region, 1 a region with telluric absorption exceeding 10\%, 2 telluric absorption exceeding 25\%, and 3 a saturated region. Of these, flags greater than or equal to 1 correspond to regions that have been replaced by the spline model. We inspect the final solar atlas and notice that the oxygen B bandhead and the area around the HeNe laser used for internal wavelength calibration of the FTS both contain spurious features. We excise these two regions (14522.0-14523.6~cm$^{-1}$ for the O$_2$ residuals and 15795.1-15799.0~cm$^{-1}$ for the HeNe residuals) by replacing them with unity and assigning zeros to the uncertainty array and a flag value of 3 for the full extent of both regions.
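For users of the atlas, a short usage sketch shows how the flag array can be applied to mask or down-weight contaminated regions; the file name and column layout here are hypothetical.
\begin{verbatim}
import numpy as np

# hypothetical column layout: wavenumber, flux, uncertainty, flag
nu, flux, err, flag = np.loadtxt('iag_solar_atlas.txt', unpack=True)

robust   = flag == 0   # telluric-free channels
replaced = flag >= 1   # channels replaced by the spline model

# example: inverse-variance weights that ignore spline-replaced regions
weights = np.where(robust & (err > 0), 1.0 / err**2, 0.0)
\end{verbatim}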
We also point out that several deep solar lines in the 500-555~nm (18000-20000~cm$^{-1}$) range were very close to saturated, such that, if an iodine feature overlapped the deepest portion, the final division of the iodine spectrum would leave a large residual in the solar spectrum. Since the iodine features are stable in time, the solar spline model occasionally fit the residuals such that the model could not be reliably used to replace the erroneous spectral shape. A few near-saturated solar lines therefore contain poorly constrained line core shapes; however, the final uncertainties capture the magnitude of these deviations. \begin{figure*} \centering \includegraphics[width=0.98\linewidth]{full_spectrum_tel.pdf} \caption{Transmission as a function of wavelength for the full telluric-corrected IAG solar atlas. In black is the final solar spectrum and in blue is an extracted telluric spectrum. The telluric model shown is typical of conditions at G{\"o}ttingen (precipitable water vapor of $\sim$10~mm).} \label{fig:full_spec} \end{figure*} \subsection{Extracting Telluric Spectra} The high resolution and high signal to noise of the IAG solar spectra make them a good data set for various telluric line studies. For example, this may include studying commonly used telluric modeling codes, the stability of oxygen lines, and the impact of micro-telluric lines on radial velocity measurements. We therefore create solar-corrected telluric spectra from the data set and also make these publicly available for future studies. To do this, we divide the linear flux normalized data by the shifted iodine spectrum and by the final stellar model shifted by the solar velocity determined from the pre-fits. We then shift each spectrum by its iodine velocity, $\kappa$, and the zero point velocity, and save each spectrum along with the airmass and our measured water vapor optical depth values for the observation. We recommend the solar spectrum be downloaded and referred to as well, depending on the use of the telluric spectra, since overlapping solar lines could potentially skew the shape of an extracted telluric spectrum. \hfill \break \noindent We make available the solar and telluric data products online\footnote{http://web.sas.upenn.edu/ashbaker/solar-atlas/ or Zenodo, DOI:10.5281/zenodo.3598136}. In Figure \ref{fig:full_spec} we show the final solar atlas covering 500-1000~nm in black with an example telluric spectrum extracted from the data in blue. \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{KPSA_comparison2.pdf} \caption{Comparison of Kitt Peak and IAG telluric-corrected solar atlases. The top panel contains the IAG best fit solar spline model (black) and the residuals between \cite{Kurucz06} and the spline (red). In the bottom panel we show a telluric model in blue.} \label{fig:compare_KPSAfull} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{kurucz_compare2.pdf} \caption{Comparison of Kitt Peak and IAG telluric-corrected solar atlases over select wavelength ranges. Uncertainties are generated by taking the standard deviation of all telluric-subtracted spectra used to generate the IAG solar flux atlas.} \label{fig:compare_KPSA} \end{figure*} \section{Analysis \& Discussion}\label{sec:analyze} Here we compare our final solar atlas to the KPSA and discuss our telluric fits. We additionally compare our best fit telluric model parameters to the starting HITRAN values and comment on several observations.
\subsection{Comparison to the Kitt Peak Atlas} We compare our final spectrum to the KPSA, which we used as a starting guess for our solar spline model, and note that the bulk of the differences occur over dense telluric bands, as expected. This can be seen in Figure \ref{fig:compare_KPSAfull}, in which we plot the residuals between the two solar spectra. In regions overlapping telluric features \textless50\% in depth, we observe that the differences are \textless3\%. Spectral features overlapping saturated or near-saturated lines occasionally differ by as much as 25\% or more. Inspecting the differences between the two spectra over strong telluric lines shows (a) similarly identified features differing in line strength and/or shape and (b) solar features present in one atlas but not in the other. We show examples of these cases in Figure \ref{fig:compare_KPSA}. Most of the differences in line shape occur when the solar line borders a saturated feature, over which we have no information and over which our algorithm encourages the line to be narrower, sometimes splitting the line in two (e.g. bottom middle of Figure \ref{fig:compare_KPSA}). This was expected, and the saturated flag we provide can be used to ignore the erroneous section of the line. We do see that our solution sometimes finds a narrower line over non-saturated lines (top middle and right of Figure \ref{fig:compare_KPSA}) than in the KPSA, though we note the reverse case happens as well. In Figure \ref{fig:compare_KPSAfull}, where we plot the residuals between the two spectra, we notice that more residuals fall below zero, corresponding to the KPSA typically having lower transmission than our spectrum in discrepant regions. A partial explanation is that our algorithm will return to the continuum in saturated regions where there is no information otherwise. Also, \citet{Kurucz06} would occasionally replace regions that had remaining large residual features with a linear interpolation connecting the adjacent regions. Our spline, in contrast, smoothly returns to the continuum level (e.g. top left of Figure \ref{fig:compare_KPSA}). In the blue region of our spectrum (wavenumbers higher than 17500~cm$^{-1}$) it can be seen that there is more scatter in our solution compared to the KPSA. This is due to higher instrument noise in this region as well as iodine features that were poorly removed. Because the iodine cell was not temperature stabilized at the time these data were observed, the line strengths of the iodine lines in the template spectrum differed slightly from the iodine strengths in the data. Nevertheless, the uncertainties in the final solution account for this (e.g. bottom right of Figure \ref{fig:compare_KPSA}). \subsection{Missing Water Vapor Lines} Three prominent spectral features were found that were unaccounted for in our model. For each feature, we correlated the integrated line absorption with airmass and determined that all are telluric in origin. Furthermore, we find that each has a one-to-one correlation to the integrated strength of a water vapor feature of similar depth, which confirms that all are water vapor lines. We add these lines to our local HITRAN water vapor database before performing the fitting sequence in these regions. We initialize the fitting parameters for each line to those of lines similar in strength and summarize these values in Table \ref{tab:missing_lines}.
We queried the HITRAN 2016 database around the line centers but did not find any other possible candidate species having large enough strength to explain the features. We note that the HITRAN line lists are highly complete, especially for water vapor over optical wavelengths, so it is possible these lines were accidentally omitted between versions, as we see no reason that these lines would be missed in the detailed laboratory experiments that source the HITRAN line lists. These three lines were the only ones found missing, although we estimate that we would be limited in our ability to detect missing lines weaker than about 0.5\% in the continuum and about 2\% over other features, since this is on the order of the residuals in the continuum and over some telluric lines, respectively. Additionally, we sometimes find that some dense saturated regions are fit very well, while other regions leave larger, structured residuals that could be due to a missing telluric feature or an overlapping solar line, but we do not have the ability to determine the true cause. \begin{table}[ht] \centering \caption{Telluric lines found unaccounted for from our HITRAN 2016 input parameters.} \begin{tabular}{ccc} Line Center & Initialized Strength & Initialized Line Width \\ cm$^{-1}$ & cm$^{-1}$/(molecule cm$^{-2}$) & cm$^{-1}$/atm \\ \hline 10519.8 & 4$\cdot 10^{-24}$ & 6.3$\cdot 10^{-2}$ \\ 13941.54 & 1.76$\cdot 10^{-24}$ & 8.8$\cdot 10^{-2}$ \\ 13943.0 & 1.76$\cdot 10^{-24}$ & 8.8$\cdot 10^{-2}$ \end{tabular} \label{tab:missing_lines} \end{table} \begin{figure*} \centering \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.99\linewidth]{pressure_shift.pdf} \caption{Best fit line centers minus the HITRAN value for H$_2$O as a function of $\delta_{air}$, the pressure induced line shift. Data points are colored by select references. The data are averaged together from separate fits and points with errors higher than 0.01~cm$^{-1}$ are removed for clarity. Black stars indicate independently selected points that have outlier line strengths and widths in comparison to the HITRAN values.} \label{fig:tel_shift} \end{minipage} \hfill \begin{minipage}{.47\textwidth} \centering \includegraphics[width=0.99\linewidth]{gamma_nu.pdf} \caption{Best fit Lorentz widths over the original HITRAN database air broadened $\gamma$ value plotted versus wavenumber for H$_2$O and colored by select references. Data points are the average from several fits and points with statistical uncertainties greater than 2\% are removed. The black stars are independently selected lines that have discrepant best-fit linecenters.} \label{fig:telbad_nu} \end{minipage} \end{figure*} \subsection{Comparison to HITRAN} The HITRAN line lists are a vital resource for many scientific studies, from modeling Earth's atmosphere to remote detection of a molecular species. The water vapor line lists are of particular importance due to the role water plays in Earth's atmosphere and its large absorption features across the optical to NIR spectrum. A large amount of theoretical and laboratory work has gone into improving these line parameters, particularly for water vapor \citep{Gordon17,ptashnik16}. Additionally, comparisons between atmospheric absorption data and HITRAN databases have been performed, demonstrating overall excellent agreement but identifying some regions with small differences between some HITRAN releases and observed line shapes, strengths, and locations (e.g. \citealt{toon16, example_hitran_mod}).
Several atmospheric modeling codes for astronomical applications rely on HITRAN (e.g. \citealt{tapas,molecfit,telfit,terraspec}). Discrepancies at the 1-5\% level between observations and theoretical telluric models may be found when using older versions of the HITRAN database, although such discrepancies can also stem from the specific implementation of the radiative transfer calculation. In certain spectral regions, particularly between the optical and near infrared, the on-sky and laboratory data and calculations underlying the HITRAN database parameters may not be as robust as they are in the optical. It is therefore interesting to compare the results of our simplified telluric fits to the HITRAN database values. We do this only for water vapor due to the complexities and smaller number of lines in the case of molecular oxygen. While our parameters are not accurate measures of the true underlying line parameters, we still expect to see trends between our line parameters and the physical quantities that describe how these lines vary with pressure and temperature. For example, in Figure \ref{fig:tel_shift} we show the difference between our best fit linecenters and the HITRAN catalog starting values plotted against the $\delta_{air}$ parameter, which describes the magnitude of the pressure induced shift for a given line. Line transitions with a larger $\delta_{air}$ value will shift more at a specific pressure, which we observe. Since we simultaneously fit multiple spectra that were taken on different days, and therefore under different atmospheric conditions, extra scatter is induced from averaging over different pressures and temperatures. However, since this trend depends largely on pressure, and the pressure higher in the atmosphere is consistently lower than the HITRAN reference pressure, this scatter does not wash out the overall trend. We note that we also see a correlation between the lower state energy level (\texttt{elower}) and the ratio of our optimized line strengths to the HITRAN line strength values. However, this trend is slightly weaker due to other factors that determine how line strength changes. The same is true for the correlation between the line width ratio and $n_{air}$, the coefficient of the temperature dependence of line broadening. In making these comparisons, we observe a handful of outliers that are apparent in Figures \ref{fig:tel_shift} and \ref{fig:telbad_nu}. For the linecenters, we see one group shifted 0.07~cm$^{-1}$ downward and another shifted 0.05~cm$^{-1}$ upward in frequency. Some of these correspond to lines that are also outliers when comparing $\gamma$ from our Lorentz fit to $\gamma_{air}$ from the HITRAN database (shown in Figure \ref{fig:telbad_nu}), as well as being discrepant in the line strength parameter, $S$. These lines are mostly located between 0.9-1.0~$\mu$m, a region that contains a strong water vapor band and has many saturated lines that often overlap, making them difficult to fit and introducing degeneracies in the best fit solution. Weak, unsaturated absorption features overlapping saturated regions would also have poorly constrained linecenters. An inspection of a subset of outlier points confirms that some of the outliers result from saturation issues. A subset can also be attributed to lines that border the edge of a fitting subregion and are therefore also poorly constrained. These outliers resulting from fitting-related causes show higher variance in their mean value determined from the 11 fits, as would be expected.
However, another set of discrepant points exists that exhibits small variation in the line parameters between fits and whose lines, under inspection, are isolated or minimally blended with a neighboring line, such that the most likely explanation for the discrepancy is the HITRAN catalog value itself. These lines are found across the entire spectral range analyzed here. We color the points in Figures \ref{fig:tel_shift} and \ref{fig:telbad_nu} by the most common references for the parameters $\delta_{air}$ and $\gamma_{air}$, respectively, but do not find that one source was the cause of the offsets, although the more recent works colored in orange in both plots (\citealt{JACQUEMART05} and \citealt{gamache04}) show less scatter. A more detailed study of these parameters could elucidate the observed discrepancies. For example, fitting the telluric output spectra from this work with a full atmospheric modeling code such as MOLECFIT \citep{molecfit} or TERRASPEC \citep{terraspec} would be a good framework for validating the results from this analysis. \subsection{Discussion of the Telluric Model} The routine used for fitting the telluric spectrum demonstrates the benefits of using a simple semi-empirical model for telluric fitting. Because both the spline and telluric models were analytic, this significantly sped up the fitting process and reduced the number of parameters defining our fit. A downside, however, is that the Lorentz profile is an approximation to the true underlying line shape, which can also differ between observations due to the solar light passing through different lines of sight through the atmosphere that have different pressure and molecular abundance profiles. Each absorption feature will change shape differently due to the nonuniform pressure and temperature dependencies of the transitions. Despite this simplification of our model, it still performs very well, as can be seen in Figures \ref{fig:examp_resids} and \ref{fig:h2oresid}. Here, we show the residuals of our model against telluric line depth for a section of unsaturated water vapor features between 783.9-813.9~nm. In Figure \ref{fig:examp_resids} we show a subset of this region, where we plot the median telluric spectrum on top and the residuals for the group 2 data below. We also show the magnitude of the residuals averaged over the 12 spectra in group 2 (black), which are plotted as the gray points in Figure \ref{fig:h2oresid}. These demonstrate the typical residual value in a single spectrum after dividing out the telluric lines. A second case is also shown where we allow the residuals to average down before taking the absolute value of the final array (red in both figures). This is characteristic of what happens when the solar atlas is generated (before replacing affected regions by the spline model), and we can see that the final remaining feature averages down better for some telluric lines depending on the residual structure. We can see that for both cases the magnitude of the residuals remains below 0.5\% for lines weaker than 10\% in depth with respect to the normalized continuum. Most of the residuals in Figures \ref{fig:examp_resids} and \ref{fig:h2oresid} are due to not accounting for differences in the line shapes under changing atmospheric conditions. A possible improvement could be to address this by parameterizing the atmospheric changes in time and modifying the telluric lines by utilizing the HITRAN parameters that describe these pressure and temperature line shape dependencies, as in the sketch below.
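A minimal sketch of this possible extension, using the standard HITRAN pressure and temperature scalings of the Lorentz parameters (reference conditions $T_\mathrm{ref}$ = 296~K and $P_\mathrm{ref}$ = 1~atm), is given below; applying it per observation would be the new, as-yet-unvalidated step.
\begin{verbatim}
def shift_and_broaden(nu_c, gamma_air, delta_air, n_air, P, T,
                      T_ref=296.0, P_ref=1.0):
    """Standard HITRAN scalings: the line center is shifted by
    delta_air * P and the Lorentz width is scaled by pressure and a
    temperature power law with exponent n_air."""
    nu_shifted = nu_c + delta_air * P                       # cm^-1
    gamma = gamma_air * (P / P_ref) * (T_ref / T) ** n_air  # cm^-1
    return nu_shifted, gamma
\end{verbatim}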
Alternatively, a more empirical approach could be adopted, such as what was done in \cite{empirical_telfit_Leet19} or in the Wobble code developed by \cite{intro_bed19}. Wobble defines the telluric model by three principal components that are linearly combined; the flux in each spectral pixel of each principal component spectrum is solved for directly. While Wobble would not work with solar data due to the small velocity shifts between the telluric and solar spectra, a physically motivated set of principal components could be used to fit the residuals from our model, as was suggested by \cite{artigau14}, who also developed a principal component-based empirical telluric fitting algorithm. More investigation would need to be done to validate the usefulness of combining these two methods. \begin{figure*} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=0.99\linewidth]{residuals_example.pdf} \caption{(Top) Stellar model in orange and median of extracted telluric spectra in blue from group 2 and (bottom) residuals from the fit in gray with the average shown in red and the average of their magnitudes shown in black. The residuals shown are from taking the telluric-corrected solar spectra and subtracting off the best fit solar spline model so they are centered at zero.} \label{fig:examp_resids} \end{minipage} \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=0.99\linewidth]{residuals_telluric_h2o.pdf} \caption{Water vapor average residual versus median telluric line depth for spectra in group 2. Here, a telluric depth of one corresponds to a saturated line. The residual array before averaging is defined to be the difference of each telluric-removed solar spectrum and the spline model. The gray points indicate values for which the absolute value was taken of the residual array before averaging, while for the red points the absolute value was taken after averaging. The corresponding triangles show the averages of the data in adjacent bins.} \label{fig:h2oresid} \end{minipage} \end{figure*} Nevertheless, the telluric modeling code used in this work produces excellent results, and the model may work well further into the NIR, where many Doppler precision spectrographs targeting K and M dwarfs are being operated in order to capture higher stellar line densities and fluxes. In particular, it avoids propagating any potential errors from line list databases that have previously been shown to affect atmospheric fits in the NIR \citep{bean_hitran_modify,hitran_errors}, and the ability to adapt the model to fit the radial velocity of stellar lines simultaneously could ultimately increase the fraction of a spectrum that can be used in the RV extraction process, which in the J band can be as much as 55\% of the region \citep{telluric_hband_loss}. With a larger barycentric velocity, this could be done for stellar targets without needing as extreme a range in airmass measurements as was required for the work presented here. The quick evaluation of this analytical model would also make up for the slow convolution step that would need to be added for fitting lower resolution data. \subsection{Micro-telluric Lines} The impact of micro-telluric water vapor lines (lines having lower than $\sim$1\% depth relative to the continuum) is a growing concern to the field of high precision RV measurements, which is pushing for the detection of terrestrial-sized exoplanets.
Several studies have shown that micro-telluric lines, which are not visible after being convolved with an RV spectrograph's instrument profile, can skew RV measurements and be a large component of a survey's final error budget \citep{cunha14,sam_rv_err_budget,micro_telluric_peter,artigau14}. We point out that this telluric data set would be ideal for studying the temporal variations of micro-telluric line shapes, since we are able to detect lines of 0.5\%--1\% depth relative to the continuum, and binning multiple spectra in time or by similar airmass would help reduce the noise in the data enough to study even weaker lines. We show a demonstration of two adjacent micro-telluric lines in Figure \ref{fig:microtelluric}, one due to molecular oxygen absorption and another due to water vapor absorption, and show that these lines are clearly resolved in the average of 13 final telluric spectra. The water vapor line is weaker than our threshold for lines to be included in the individual fitting; however, the uniform shift applied to these lines, which was largely determined by the stronger features in the region, did a good job aligning our model with the weaker telluric feature. Including these micro-telluric lines in the telluric model, as we do here, while solving for the radial velocity of the star may alleviate the impact they have on the radial velocity estimates. This should be confirmed in future work. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{microtell.pdf} \caption{Demonstration of micro-telluric lines in the raw data. Here 13 telluric spectra are shown in gray with their average plotted in magenta and the telluric model in black. The two observed lines are an oxygen feature (left) and a water vapor feature (right) located in a NIR telluric window.} \label{fig:microtelluric} \end{figure} \section{Conclusions}\label{sec:conclude} High resolution spectra of the Sun are important for many astrophysical studies, including the impact of stellar activity on Doppler spectroscopy, deriving the abundances of other stars, and understanding solar physics processes. High resolution spectra in which the individual solar lines are fully resolved are difficult to obtain from space, and ground-based observations are plagued by telluric absorption features that move relative to the solar lines by a maximum of about a kilometer per second, which is not large enough for the stellar and telluric features to separate. Therefore, many of the stellar features overlapping telluric lines remain unreliable for analyses in high resolution solar spectra. Furthermore, the high signal-to-noise and high resolution of the telluric lines in FTS solar spectra also make them a useful dataset for studying micro-telluric lines, which are a poorly studied component in the error budget of next generation precision spectroscopy instruments.
We find that our simplified telluric model works well: lines weaker than 10\% depth with respect to the continuum have residuals consistently below 1\%, with their average being around 0.1\%. The addition of more molecular species in future work would make it possible to extend this data reduction to the NIR portion of the IAG solar spectral data. \section{Acknowledgements} The authors would like to thank the anonymous referee for his or her comments that improved this manuscript. The authors also thank Dr. Iouli Gordon for his constructive comments on this work and the organizers of the 2019 Telluric Hack Week for hosting a nice week of talks and discussion that led to some of the methods incorporated into our final telluric model. This material is based upon work by ADB supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1321851.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction}
\subsection{Motivation and the problem description}
Relations between geometry and spectral properties have a rather long history. They are a trademark topic of mathematical physics at least since the celebrated Faber and Krahn proof \cite{F23, K24} of Lord Rayleigh's conjecture \cite{R77} about the shape of the drum that produces the lowest tone. While the original interest focused on problems with Dirichlet or Neumann boundary conditions, more recently attention has shifted to Robin boundary conditions. Spectral optimization for the Robin Laplacian was the topic of a number of studies in the last few years and it still offers many challenging open problems, see the reviews~\cite{BFK17, L19} and the references therein. It has to be said that spectral optimization can concern both upper and lower bounds. While in the mentioned Faber and Krahn result the circle minimizes the principal eigenvalue, the Dirichlet Laplacian on non-simply connected domains exhibits the opposite effect when the full symmetry makes this eigenvalue maximal \cite{EHL99, HKK01}. Moreover, this effect is robust: a similar result holds for a family of singular Schr\"odinger operators with an attractive interaction supported by a closed planar curve \cite{EHL06}. Recall that for a sufficiently regular planar domain $\Omega\subset\dR^2$ and the coupling constant $\alpha\in\dR$ the Robin eigenvalue problem can be written in PDE terms as
\[
\begin{cases}
-\Delta u = \lm u,&\quad\text{in}~\Omega,\\
\frac{\partial u}{\partial \nu} + \alpha u = 0,&\quad\text{on}~\p\Omega,
\end{cases}
\]
where $\frac{\partial u}{\partial \nu}$ is the normal derivative of $u$ with the normal $\nu$ pointing outwards of $\Omega$. In the present paper we are going to deal with the optimization of the first and the second Robin eigenvalues on two particular classes of non-simply connected planar domains admitting the so-called parallel coordinates, specifically of loop-shaped curved strips and of exteriors of convex sets. These two classes of domains are closely related in the sense that the exterior of a convex set can be viewed as a loop-shaped curved strip of infinite thickness built over its boundary. In the first case, we are going to prove that the lowest Robin eigenvalue on such a curved strip of a fixed length of the inner boundary and a fixed width is maximized by that of the annulus. This result can be regarded as an extension of the indicated property of the Dirichlet Laplacian~\cite[Thm~1a]{EHL99}. We stress that no restrictions are imposed here on the Robin coefficient: it can be negative as well as positive; in other words, the strip boundary can be either repulsive or attractive. The proof of the claim relies on the min-max principle, an appropriate test function being constructed via transplantation of the ground state for the annular strip using the parallel coordinates.
If the curve $\Sigma$ over which we build the domain is the boundary of a non-convex set, the existence of globally well defined parallel coordinates imposes a restriction on the strip width, which can be expressed in terms of the \textit{absence of cut loci}, that is, points having the same distance from different parts of the domain boundary. Of course, any smooth curved strip of a fixed width has a cut locus represented by its axis; we have in mind nontrivial ones, referring to the distance from the curve $\Sigma$ only. Our second result concerns an optimization of the second Robin eigenvalue in the exterior of a convex set under the assumption that the curvature $\kp$ of the boundary is non-negative and bounded from above by a fixed constant $\kp_\circ > 0$. The exterior of a disk with the boundary curvature $\kp_\circ$, in other words, with the radius $\kp_\circ^{-1}$, turns out to be the unique maximizer. This new result complements the optimization of the lowest Robin eigenvalue in the exterior of a bounded set considered recently in~\cite{KL17a, KL17b}. It is not yet clear whether the stated condition on the curvature can be replaced by a more standard perimeter-type constraint. In contrast to the previous case, the Robin coefficient is now assumed to be negative, as otherwise the spectrum of the respective Laplacian is purely essential and coincides with $[0,\infty)$, thus making the spectral optimization question void. In the proof, we again take advantage of the fact that the parallel coordinates are globally well defined. We apply the min-max principle to the span of the two transplanted eigenfunctions of the Robin Laplacian on the exterior of the disk corresponding to its first and second eigenvalues, respectively. Since the eigenfunction corresponding to the second eigenvalue is not radial, its transplantation is more involved and requires an additional geometric insight. In the setting of bounded domains, it has been proved that the disk is a maximizer of the second Robin eigenvalue having a fixed area \cite{FL18a} or a fixed perimeter \cite{FL18b}, provided that the negative boundary parameter lies in a specific interval. An analogous result has recently been proved in~\cite{GL19} for the third Robin eigenvalue with the maximizer being the union of two disks and with the negative boundary parameter again lying in a specific interval. In this context, we would like to emphasize that the optimization result for the second Robin eigenvalue in the present paper holds for \emph{all negative values} of the boundary parameter. \subsection{Geometric setting} Since the domain geometry is crucial in our results, let us first recall the necessary notions and state the assumptions we are going to use. \begin{hypothesis}\label{hyp} Let a $C^{\infty}$-smooth curve $\Sigma\subset\dR^2$ be the boundary of a bounded, simply connected domain $\Omega\subset\dR^2$. Let a circle $\cC\subset\dR^2$ be the boundary of a disk $\cB\subset\dR^2$. We denote by $L := |\Sigma|$ and $L_\circ := |\cC|$ the lengths of $\Sigma$ and $\cC$, respectively. \end{hypothesis} The mapping $\sigma\colon[0,L]\arr \dR^2$ provides the natural (counter-clockwise) parametrization of $\Sigma$ with the tangential vector $\tau(s) := \sigma'(s)$ satisfying $|\tau(s)| = 1$. We denote by $\kp \colon [0,L]\arr\dR$ the signed curvature of $\Sigma$; the convention we adopt is that $\kp\ge 0$ holds for convex $\Omega$.
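As a quick illustration of this sign convention, consider the counter-clockwise circle of radius $R$ centered at the origin,
\[
\sigma(s) = R\left(\cos\tfrac{s}{R},\,\sin\tfrac{s}{R}\right), \qquad \tau(s) = \left(-\sin\tfrac{s}{R},\,\cos\tfrac{s}{R}\right), \qquad \nu(s) = \left(\cos\tfrac{s}{R},\,\sin\tfrac{s}{R}\right),
\]
for which $\tau'(s) = -\tfrac{1}{R}\,\nu(s)$; in view of the Frenet formula recalled below, this gives the constant curvature $\kp = \tfrac{1}{R} > 0$, as expected for the convex disk.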
Recall the \emph{Frenet formula}
\[
\tau'(s) = -\kp\nu(s),
\]
where $\nu$ is the outer unit normal vector to the domain $\Omega$. The object of our interest will be the curved strip built over $\Sigma$ with the thickness $d \in (0,\infty]$, that is, the set
\begin{equation}\label{eq:strip}
\Omega^{\rm c}_d := \big\{x\in\dR^2\setminus\overline{\Omega}\colon {\rm dist}\,(x,\Sigma) < d\big\}.
\end{equation}
The definition includes the unbounded domain $\Omega^{\rm c}_\infty$ identified with the exterior $\dR^2\sm\overline{\Omega}$ of $\Omega$, for which we will use the shorthand notation $\Omega^{\rm c} := \Omega^{\rm c}_\infty$. The boundary of $\Omega^{\rm c}_d$ is therefore
\[
\p\Omega^{\rm c}_d = \begin{cases} \Sigma & \text{if}\;\: d = \infty,\\ \Sigma\cup \big\{x\in\dR^2\setminus\overline{\Omega}\colon {\rm dist}\,(x,\Sigma) = d\big\} & \text{if}\;\: d < \infty. \end{cases}
\]
In particular, $\p\Omega_d^{\rm c}$ has two components for $d < \infty$ and, respectively, one component for $d = \infty$. Consider the mapping
\begin{equation}\label{eq:mapping}
[0,L)\tm(0,d)\ni (s,t)\mapsto \s(s)+t\nu(s).
\end{equation}
If the curvature is non-negative, $\kp \ge 0$, then the mapping~\eqref{eq:mapping} is injective for all $d \in (0,\infty]$. If the curvature $\kp$ is sign-changing, there exists by \cite[Prop. B.1]{BEHL17} a critical width $d_\star > 0$ such that~\eqref{eq:mapping} is injective for all $d < d_\star$, and in this case we assume that
\begin{equation}\label{eq:d}
d \in (0, d_\star).
\end{equation}
The parallel coordinates $(s,t) \in [0,L)\times(0,d)$ on $\Omega^{\rm c}_d\,$ \cite{hart}, alternatively dubbed Fermi or natural curvilinear coordinates, are, under the above assumption on $d$, everywhere well defined by the formula $\Omega^{\rm c}_d\ni x = \sigma(s) + t\nu(s)$.
\begin{remark}
As indicated in the introduction, the set of all $x\in\Omega^{\rm c}_d$ for which the closest point in $\p\Omega^{\rm c}_d$ is not uniquely defined is nonempty and coincides with $\{\s(s) + \frac{d}{2}\nu(s)\colon s\in[0,L)\}$. If the curvature $\kp$ is sign-changing, the existence of the parallel coordinates may be spoiled by a nontrivial cut locus, referring to the distance from $\Sigma$ only, which may come from different sources, local and global. From \cite[Prop. B.1]{BEHL17} we know that a necessary condition for the absence of such a cut locus is
$$
d\|\kappa_-\|_\infty < 1,
$$
where $\kappa_- = \min\{\kappa,0\}$. The condition \eqref{eq:d} is also sufficient, under the \textit{additional requirement} that $\int_s^{s'} \kp(s'')\,\dd s''>-\pi$ holds for all $0\le s<s'<L$. Indeed, the existence of a $\Sigma$-related cut locus would mean that the outer strip boundary must intersect itself; in other words, there must exist $s$ and $s'>s$ such that $\s(s)+d\nu(s) = \s(s')+d\nu(s')$. The curve $s\mapsto \s(\cdot)+d\nu(\cdot)$ is smooth, hence the angle between the tangents at the points with the parallel coordinates $(s,d)$ and $(s',d)$ cannot be larger than $-\pi$ (note that with our curvature convention the angles between tangents at nonconvex parts of the boundary are negative). However, the said tangents are parallel to the tangents $\tau(s)$ and $\tau(s')$ of the curve $\Sigma$, and since the angle between those is nothing else than $\int_s^{s'} \kp(s'')\,\dd s''$, the existence of such a cut locus leads to a contradiction, thus proving our claim.
\end{remark}
Since our setting is two-dimensional, it is useful to work with the complexified tangential and normal vectors
\begin{equation}\label{eq:t}
{\bf t}(s) = \tau_1(s) + \ii\tau_2(s) \quad\;\text{and}\quad\; {\bf n}(s) = \nu_1(s) + \ii\nu_2(s).
\end{equation}
In this notation, the Frenet formula can be written in the complex form as
\begin{equation}\label{eq:Frenet}
{\bf t}'(s) = -\kp{\bf n}(s)= \ii\kp{\bf t}(s).
\end{equation}
\subsection{The Robin Laplacian on $\Omega^{\rm c}_d$}
For an arbitrary value of the coefficient $\alpha\in\dR$, which characterizes the strength of the coupling to the boundary, we introduce the self-adjoint operator $\Op$ in the Hilbert space $L^2(\Omega^{\rm c}_d)$ through its quadratic form
\[
\frm[u] := \|\nabla u\|^2_{L^2(\Omega^{\rm c}_d;\dC^2)} + \alpha\|u|_{\p\Omega^{\rm c}_d}\|^2_{L^2(\p\Omega^{\rm c}_d)}, \qquad \mathrm{dom}\,\frm = H^1(\Omega^{\rm c}_d),
\]
where $H^1(\Omega^{\rm c}_d)$ is the first-order $L^2$-based Sobolev space on $\Omega^{\rm c}_d$. If $d < \infty$, the spectrum of $\Op$ is discrete and we denote by $\{\lm_k^\alpha(\Omega^{\rm c}_d)\}_{k\ge 1}$ its eigenvalues arranged in the non-decreasing order and repeated with multiplicities taken into account. The spectral properties of $\Opu$ corresponding to $d = \infty$ are different \cite[Prop. 2.1, Prop. 2.2]{KL17a}, namely
\begin{myenum}
\item $\sigma_{\rm ess}(\Opu) = [0,\infty)$.
\item $\#\sigma_{\rm d}(\Opu) \ge 1$ for all $\aa < 0$.
\item $\sigma_{\rm d}(\Opu) = \varnothing$ for all $\aa \ge 0$.
\end{myenum}
In analogy with the bounded domain case we denote by $\{\lm_k^\alpha(\Omega^{\rm c})\}_{k\ge 1}$ the negative eigenvalues of $\Opu$ arranged in the ascending order and repeated with the multiplicities taken into account. In the min-max spirit, this sequence is conventionally extended to an infinite one by repeating the bottom of the essential spectrum $\inf\sigma_{\rm ess}(\Opu) = 0$ infinitely many times.
\subsection{Main results}
Let us now state our main results. The first one concerns optimization of $\lm_1^\aa(\Omega_d^{\rm c})$ on curved strips of a fixed width.
\begin{thm}\label{thm1}
Assume that Hypothesis~\ref{hyp} holds and that $L = L_\circ$. Let $\kp$ be the curvature of $\Sigma$. The strip-width $d\in (0,\infty]$ may be arbitrary if $\kp \ge 0$, while for a sign-changing $\kp$ we assume that~\eqref{eq:d} is satisfied. Let the domains $\Omega_d^{\rm c}$ and $\cB^{\rm c}_d$ be as in~\eqref{eq:strip}. Then for the lowest Robin eigenvalues on these domains the inequality
\[
\lm_1^\alpha(\Omega_d^{\rm c}) \le \lm_1^\alpha(\cB_d^{\rm c})
\]
holds for any $\alpha\in\dR$.
\end{thm}
Let us add a few comments. The above inequality holds trivially in the Neumann case, $\aa = 0$, since we have $\lm_1^0(\Omega_d^{\rm c}) = \lm_1^0(\cB_d^{\rm c}) = 0$. In the limit $\aa\arr+\infty$ it implies the respective inequality for the Dirichlet Laplacians, thus providing an alternative proof of Theorem~1a in~\cite{EHL99}. Furthermore, if $\alpha < 0$, $\:\kp \ge 0$, and $d = \infty$, Theorem~\ref{thm1} reduces to the first claim of~\cite[Thm. 1.3]{KL17a}.
Note that the geometric character of $\Omega$ manifested in the constraint on the `thickness' plays a role again: the annulus is always a maximizer here, even for $\alpha > 0$, while in the case of general bounded domains under fixed area constraint the disk is conjectured to be a maximizer in the subclass of simply-connected domains for $\aa < 0$ and is known to be a minimizer for $\aa > 0$, {\it cf.}\,~\cite{FK15, AFK17} in the former case and \cite{Bossel_1986, Daners_2006} in the latter. Moreover, for general bounded domains the disk is a maximizer for $\aa < 0$ under fixed perimeter constraint~\cite{AFK17}. Multi-dimensional analogues of the latter result are obtained in \cite{BFNT18, V19}. As already mentioned in the introduction, the proof of Theorem~\ref{thm1} will rely on the min-max principle with a suitable test function constructed through the transplantation of the radial ground-state eigenfunction for the annulus using the method of parallel coordinates. Our second result concerns optimization of $\lm_2^\aa(\Omega^{\rm c})$ on unbounded exterior domains described above.
\begin{thm}\label{thm2}
Assume that Hypothesis~\ref{hyp} holds. Let $\kp\colon[0,L]\arr\dR$ and $\kp_\circ \in\dR_+$ be the curvatures of $\Sigma$ and $\cC$, respectively. Assume further that $\Omega$ is convex, that is, $\kp \ge 0$, and that $\max\kp \le \kp_\circ$ holds. Let the domains $\Omega^{\rm c}$ and $\cB^{\rm c}$ be as in~\eqref{eq:strip} with $d = \infty$. Then for the second Robin eigenvalues on these domains the inequality
\begin{equation}\label{eq:main2}
\lm_2^\alpha(\Omega^{\rm c}) \le \lm_2^\alpha(\cB^{\rm c})
\end{equation}
is valid for any $\alpha < 0$. If $\lm_2^\alpha(\cB^{\rm c}) < 0$ and the equality in~\eqref{eq:main2} holds, the two domains are congruent, $\Omega\cong\cB$.
\end{thm}
The above theorem and the monotonicity of $\lm_2^\alpha(\cB^{\rm c})$ with respect to $L_\circ$, shown in Proposition~\ref{prop:annulus2} below, yield the following.
\begin{cor}\label{cor}
Assume that Hypothesis~\ref{hyp} holds and let $\kp_\circ > 0$ be fixed. Then, for all $\aa < 0$,
\begin{equation}\label{eq:main3}
\max_{\stackrel{\Omega~{\rm convex}}{\kp \le \kp_\circ}}\lm_2^\alpha(\Omega^{\rm c}) = \lm_2^\alpha(\cB^{\rm c}),
\end{equation}
where the maximum is taken over all convex smooth domains $\Omega\subset\dR^2$ whose curvature satisfies $\max\kp \le \kp_\circ$ and where $\cB\subset\dR^2$ is a disk of the curvature $\kp_\circ$.
\end{cor}
We remark that the inequality~\eqref{eq:main2} is nontrivial only if $\sfH_{\aa,\cB^{\rm c}}$ has more than one negative eigenvalue. We also emphasize that, in contrast to Theorem~\ref{thm1}, we have $L \ne L_\circ$ in general; in fact, it is easy to show that $L > L_\circ$ holds unless $\Omega \cong\cB$. In order to prove Theorem~\ref{thm2}, we apply the min-max principle transplanting to $\Omega^{\rm c}$ the span of the two eigenfunctions of $\sfH_{\aa,\cB^{\rm c}}$ corresponding to the eigenvalues $\lm_1^\aa(\cB^{\rm c})$ and $\lm_2^\aa(\cB^{\rm c})$, respectively. The ground-state is transplanted in a conventional way; however, the transplantation of the first excited state is a little more involved. We note that an eigenfunction corresponding to the second Robin eigenvalue on the exterior of a disk can be written in parallel coordinates on $\cB^{\rm c}$ as
\[
v_\circ(s,t) = \phi(t)\exp\left(\frac{2\pi\ii}{L_\circ}s\right).
\]
Since $\exp\left(\frac{2\pi\ii}{L_\circ}s\right)$ can be interpreted as the complexified tangent vector for $\cB$, a natural way of transplantation of $v_\circ$ onto $\Omega^{\rm c}$ would be
\[
v_\star(s,t) = \phi(t) {\bf t}(s),
\]
where ${\bf t}$ is the complexified tangent vector for $\Omega$ defined in~\eqref{eq:t}.
\section{Preliminaries}
\subsection{The quadratic form $\frm$ in parallel coordinates}
Our first main tool is the representation of the quadratic form $\frm$ in the parallel coordinates on $\Omega^{\rm c}_d$. Using them, the inner product in the Hilbert space $L^2(\Omega_d^{\rm c})$ can be written as follows,
\[
(u,v)_{L^2(\Omega_d^{\rm c})} =\int_0^d\int_0^L u(s,t)\overline{v(s,t)} \big(1+\kp(s) t\big)\,\dd s\,\dd t.
\]
It is well known that the gradient in these coordinates is expressed as
\[
\nabla u = \frac{\tau(s)}{1+\kp(s) t}\,\p_s u + \nu(s)\p_t u.
\]
Consequently, the quadratic form $\frm$ can be written in the parallel coordinates as
\begin{equation}\label{eq:form_parallel}
\begin{aligned}
\frm[u] &\! = \!\int_0^d\int_0^L\left( \frac{|\p_s u(s,t)|^2}{1+\kp(s)t}+ |\p_t u(s,t)|^2(1+\kp(s)t) \right)\,\dd s\,\dd t + \aa\int_0^L |u(s,0)|^2\,\dd s,\\
\mathrm{dom}\,\frm &\! =\! \left\{u\colon \Sigma\tm (0,d)\arr\dC\colon\! \int_0^d\!\int_0^L\left[ \frac{|\p_s u|^2}{1+\kp t}+ (|u|^2\!+\! |\p_t u|^2)(1+\kp t) \right]\,\dd s\,\dd t < \infty\right\}.
\end{aligned}
\end{equation}
The above representation remains valid for $d = \infty$, provided that $\Omega$ is convex.
\subsection{Eigenfunctions in the radially symmetric case}
We also need properties of the eigenfunctions corresponding to the first and the second eigenvalue in the radially symmetric case. They are elementary but we describe them in the next two propositions, the proofs of which are postponed to the appendices, in order to make the paper self-contained. Let us begin with the ground-state eigenfunction of the Robin annulus.
\begin{prop}\label{prop:annulus1}
Assume that Hypothesis~\ref{hyp} holds. For any fixed $d > 0$ and any $\alpha\in\dR$, or for $d = \infty$ and any $\aa <0$, the lowest eigenvalue $\lm_1^\aa(\cB^{\rm c}_d)$ of $\OpD$ is simple and the corresponding eigenfunction can be written in the parallel coordinates on $\cB_d^{\rm c}$ as
\[
u_\circ(s,t) = \psi(t),
\]
with a given real-valued $\psi \in C^\infty([0,d])$ if $d < \infty$ and with $\psi \in C^\infty([0,\infty))$ satisfying
\begin{equation}\label{eq:integrability1}
\int_0^\infty\big[\psi(t)^2 + \psi'(t)^2\big](1+t)\,\dd t < \infty,
\end{equation}
if $d = \infty$.
\end{prop}
Consider next the first excited state of the Robin Laplacian in the exterior of a disk.
\begin{prop}\label{prop:annulus2}
Assume that Hypothesis~\ref{hyp} holds. Then for any fixed $\alpha < 0$ such that $\#\s_{\rm d}(\OpDu) > 1$, the second eigenvalue $\lm_2^\aa(\cB^{\rm c}) <0$ of $\OpDu$ has multiplicity two and the respective eigenfunctions of $\OpDu$ can be written in parallel coordinates on $\cB^{\rm c}$ as
\[
v_\circ^\pm(s,t) = \exp\left( \pm \frac{2\pi\ii }{L_\circ}s\right) \phi(t),\quad\; s\in [0,L_\circ),\; t\in [0,\infty),
\]
with a given real-valued $\phi\in C^\infty([0,\infty))$ satisfying the integrability condition
\begin{equation}\label{eq:integrability2}
\int_0^\infty\big[\phi(t)^2 + \phi'(t)^2\big](1+t)\,\dd t < \infty.
\end{equation}
Moreover, $\lm_2^\alpha(\cB^{\rm c})$ is a non-increasing function of $L_\circ$.
\end{prop}
We remark that the functions $\psi$ and $\phi$ in Propositions~\ref{prop:annulus1} and~\ref{prop:annulus2} can be explicitly expressed in terms of Bessel functions; however, this is not essential for our analysis.
\section{Proofs of the main results}
Now we are going to provide proofs of Theorems~\ref{thm1} and~\ref{thm2}. Recall that the $C^\infty$-smooth curve $\Sigma\subset\dR^2$ is the boundary of a bounded, simply connected domain $\Omega\subset\dR^2$, and the circle $\cC\subset\dR^2$ is the boundary of the disk $\cB\subset\dR^2$. The lengths of $\Sigma$ and $\cC$ are denoted by $L$ and $L_\circ$, respectively. The curvature of $\Sigma$ is denoted by $\kp$ and the curvature of $\cC$ is a constant $\kp_\circ > 0$.
\subsection{Proof of Theorem~\ref{thm1}}
By assumption we have $L = L_\circ$ and we fix $d > 0$ satisfying the additional condition \eqref{eq:d} in the case that $\kp$ is sign-changing. Furthermore, $\alpha\in\dR$ is an arbitrary fixed number. The case $d = \infty$ is dealt with in~\cite[Thm~1.3]{KL17a} and thus we may omit it here. By Proposition~\ref{prop:annulus1}, there exists a function $\psi \in C^\infty([0,d])$ such that the ground-state $u_\circ \in C^\infty(\cB^{\rm c}_d)$ of $\OpD$ can be written as $u_\circ(s,t) = \psi(t)$ in the parallel coordinates on $\cB^{\rm c}_d$. Using it, we define the test function $u_\star\in H^1(\Omega_d^{\rm c})$ in the parallel coordinates on the curved strip $\Omega^{\rm c}_d$ as follows,
\[
u_\star(s,t) := \psi(t),\qquad s\in [0,L],\, t\in [0,d].
\]
Using the representation of $\frm$ in~\eqref{eq:form_parallel}, applying the min-max principle and the total curvature identity $\int_0^L \kp(s)\dd s = 2\pi$ we obtain
\[
\begin{aligned}
\lm_1^\alpha(\Omega^{\rm c}_d) & \le \frac{\frm[u_\star]}{\|u_\star\|^2_{L^2(\Omega^{\rm c}_d)}}\\
& = \frac{\displaystyle \int_0^d\int_0^L\psi'(t)^2(1+ \kp(s) t)\,\dd s\,\dd t + \alpha \int_0^L\Big[|\psi(0)|^2 + |\psi(d)|^2(1+d\kp(s))\Big] \,\dd s}{\displaystyle \int_0^d\int_0^L\psi(t)^2(1+ \kp(s) t)\,\dd s\,\dd t} \\
& = \frac{\displaystyle \int_0^d\psi'(t)^2(L+ 2\pi t)\,\dd t + \alpha L |\psi(0)|^2 + \alpha (L+2\pi d)|\psi(d)|^2}{\displaystyle \int_0^d\psi(t)^2(L+ 2\pi t)\,\dd t}\\
& = \frac{\frmD[u_\circ]}{\|u_\circ\|^2_{L^2(\cB^{\rm c}_d)}} = \lm_1^\alpha(\cB^{\rm c}_d),
\end{aligned}
\]
which yields the sought claim.
\subsection{Proof of Theorem~\ref{thm2}}
In view of the convexity of $\Omega$, the curvature of $\Sigma$ satisfies $\kp\ge 0$ and by assumption $\max\kp \le \kp_\circ$ holds. Let us exclude the trivial case by supposing that $\Omega\ncong\cB$. Then we have $\min\kp < \kp_\circ$, which implies
\begin{equation}\label{eq:L}
L = \frac{L\kp_\circ}{\kp_\circ} > \frac{\displaystyle\int_0^L\kp(s)\,\dd s}{\kp_\circ} = \frac{2\pi}{\kp_\circ} = L_\circ.
\end{equation}
We fix the `width' $d =\infty$ and the coupling constant $\alpha < 0$. Without loss of generality, we may assume that $|\aa|$ is large enough so that $\lm_2^\aa(\cB^{\rm c}) < 0$, as otherwise the inequality~\eqref{eq:main2} would trivially hold.
\medskip
\noindent {\bf Step 1.} \emph{Test functions}.
In view of Propositions~\ref{prop:annulus1} and~\ref{prop:annulus2}, we can represent the eigenfunctions of $\OpDu$ corresponding to its simple first eigenvalue $\lm_1^\alpha(\cB^{\rm c})$ and the second eigenvalue $\lm_2^\alpha(\cB^{\rm c})$ of multiplicity two in parallel coordinates $(s,t)$ on $\cB^{\rm c}$ as
\begin{equation}\label{eq:EFs}
u_\circ(s,t) = \psi(t) \qquad\text{and}\qquad v_\circ^\pm(s,t) = \exp\left(\pm\frac{2\pi\ii s}{L_\circ} \right)\phi(t),
\end{equation}
where $\psi,\phi\in C^\infty([0,\infty))$ are real-valued and satisfy the integrability conditions~\eqref{eq:integrability1} and~\eqref{eq:integrability2}, respectively. We introduce test functions $u_\star,v_\star \in H^1(\Omega^{\rm c})$ on $\Omega^{\rm c}$ defining them in terms of the parallel coordinates as
\[
u_\star(s,t) := \psi(t)\qquad\text{and}\qquad v_\star(s,t) := {\bf t}(s)\phi(t), \qquad s\in[0,L],\, t\in [0,\infty),
\]
where ${\bf t}(s)$ is the complexified tangent vector~\eqref{eq:t}.
\medskip
\noindent {\bf Step 2.} \emph{Orthogonality.} Next, we are going to show that $u_\star$ and $v_\star$ are orthogonal in $L^2(\Omega^{\rm c})$. To this aim, we observe that
\[
\int_0^L {\bf t}(s)\,\dd s = \int_0^L (\sigma_1'(s) + \ii \sigma_2'(s))\,\dd s = \sigma_1(L) + \ii\sigma_2(L) - \sigma_1(0) - \ii\sigma_2(0) =0,
\]
where the fact that $\Sigma$ is a closed curve was employed. Furthermore, using the Frenet formula we get
\[
\int_0^L {\bf t}(s)\kp(s)\,\dd s = -\ii\int_0^L {\bf t}'(s)\,\dd s = -\ii({\bf t}(L) - {\bf t}(0)) = 0,
\]
where the closedness and smoothness of $\Sigma$ were taken into account. Combining these two relations we infer that
\begin{equation}\label{eq:ortho1}
\begin{aligned}
(v_\star,u_\star)_{L^2(\Omega^{\rm c})} & \!= \! \int_0^\infty\int_0^L\psi(t)\phi(t) {\bf t}(s)(1+ t\kp(s))\,\dd s \,\dd t\\
& \! = \! \int_0^\infty\int_0^L\psi(t)\phi(t){\bf t}(s)\,\dd s \,\dd t\! +\! \int_0^\infty\int_0^Lt\psi(t)\phi(t) {\bf t}(s)\kp(s)\,\dd s \,\dd t \!=\! 0.
\end{aligned}
\end{equation}
At the same time, we have
\begin{equation}\label{eq:ortho2}
\begin{aligned}
\frmu[v_\star,\!u_\star] \! = \! \int_0^\infty\!\int_0^L\psi'(t)\phi'(t) {\bf t}(s)(1\!+\! t\kp(s))\,\dd s \,\dd t \!+\! \alpha\psi(0)\phi(0)\! \int_0^L {\bf t}(s)\,\dd s\! =\! 0.
\end{aligned}
\end{equation}
\noindent {\bf Step 3.} \emph{Bounds on the Rayleigh quotients.} For a non-trivial function $u \in H^1(\Omega^{\rm c})$ we define
\[
\sfR_{\aa,\Omega^{\rm c}}[u] := \frac{\frmu[u]}{\|u \|^2_{L^2(\Omega^{\rm c})}}.
\]
Using~\eqref{eq:form_parallel} and the total curvature identity $\int_0^L \kp(s)\,\dd s = 2\pi$, the Rayleigh quotient of the test function $u_\star$ defined in this way can be expressed as
\[
\begin{aligned}
\sfR_{\aa,\Omega^{\rm c}}[u_\star] & = \frac{\displaystyle\int_0^\infty\int_0^L\psi'(t)^2(1+t\kp(s)) \,\dd s \,\dd t + \aa L|\psi(0)|^2}{\displaystyle\int_0^\infty\int_0^L\psi(t)^2(1+t\kp(s)) \,\dd s\,\dd t}\\
& = \frac{\displaystyle\int_0^\infty \psi'(t)^2(L+2\pi t) \,\dd t + \aa L|\psi(0)|^2}{\displaystyle\int_0^\infty\psi(t)^2(L+2\pi t)\,\dd t}\\
& = \frac{\displaystyle\int_0^\infty\psi'(t)^2\left(1+\frac{2\pi t}{L}\right) \,\dd t + \aa|\psi(0)|^2}{\displaystyle\int_0^\infty\psi(t)^2\left(1+\frac{2\pi t}{L}\right)\,\dd t}.
\end{aligned} \] Furthermore, using the inequalities $\lm_1^\aa(\cB^{\rm c}) < 0$ and $L > L_\circ$, we get the following estimate \begin{equation}\label{eq:bound1} \sfR_{\aa,\Omega^{\rm c}}[u_\star] \le \frac{\displaystyle\int_0^\infty\psi'(t)^2\left(1+\frac{2\pi t}{L_\circ}\right) \,\dd t + \aa|\psi(0)|^2}{\displaystyle\int_0^\infty\psi(t)^2\left(1+\frac{2\pi t}{L_\circ}\right)\,\dd t} =\mathsf{R}_{\alpha,\cB^{\rm c}}[u_\circ] =\lm_1^\aa(\cB^{\rm c}). \end{equation} Making use of~\eqref{eq:form_parallel}, the total curvature identity and the Frenet formula~\eqref{eq:Frenet}, the Rayleigh quotient corresponding to the test function $v_\star$ is given by \[ \begin{aligned} \mathsf{R}_{\aa,\Omega^{\rm c}}[v_\star] &\! =\! \frac{\displaystyle\int_0^\infty\!\int_0^L\phi'(t)^2(1\!+\!t\kp(s)) \,\dd s \,\dd t \!+\! \int_0^\infty\!\int_0^L \frac{\kp^2(s)\phi(t)^2}{1+t\kp(s)} \,\dd s \,\dd t \!+\! \aa L|\phi(0)|^2}{\displaystyle\int_0^\infty\int_0^L\phi(t)^2(1+t\kp(s)) \,\dd s\,\dd t}\\ &\! =\! \frac{\displaystyle\int_0^\infty\phi'(t)^2(L+2\pi t)\,\dd t \!+\! \int_0^\infty\!\int_0^L \frac{\kp^2(s)\phi(t)^2}{1+t\kp(s)} \,\dd s \,\dd t \!+\! \aa L|\phi(0)|^2}{\displaystyle\int_0^\infty \phi(t)^2(L+2\pi t)\,\dd t}. \end{aligned} \] Using further the strict monotonicity of the function \[ \dR_+\ni x\mapsto \frac{x^2}{1+ tx},\quad\; t \ge0, \] in combination with the inequalities $L > L_\circ$, $\,\max\kp\le \kp_\circ$, and $\min\kp < \kp_\circ$, we get for the Rayleigh quotient corresponding to $v_\star$ the following estimate, \begin{equation}\label{eq:bound2} \begin{aligned} \mathsf{R}_{\aa,\Omega^{\rm c}}[v_\star] & < \frac{\displaystyle\int_0^\infty\phi'(t)^2(L+2\pi t) \,\dd t + L\int_0^\infty \frac{\kp^2_\circ \phi(t)^2}{1+t\kp_\circ} \,\dd t + \aa L |\phi(0)|^2}{\displaystyle\int_0^\infty\phi(t)^2(L+ 2\pi t)\,\dd t}\\ & = \frac{\displaystyle\int_0^\infty\phi'(t)^2\left(1+\frac{2\pi t}{L}\right) \,\dd t + \int_0^\infty \frac{\kp^2_\circ \phi(t)^2}{1+t\kp_\circ} \,\dd t + \aa |\phi(0)|^2}{\displaystyle\int_0^\infty\phi(t)^2\left(1+ \frac{2\pi t}{L}\right)\,\dd t}\\ & \le \frac{\displaystyle\int_0^\infty\phi'(t)^2\left(1+\frac{2\pi t}{L_\circ}\right) \,\dd t + \int_0^\infty \frac{\kp^2_\circ\phi(t)^2}{1+t\kp_\circ} \,\dd t + \aa |\phi(0)|^2}{\displaystyle\int_0^\infty\phi(t)^2\left(1+ \frac{2\pi t}{L_\circ}\right)\,\dd t}\\ & = \sfR_{\alpha,\cB^{\rm c}}[v_\circ] = \lm_2^\aa(\cB^{\rm c}). \end{aligned} \end{equation} \noindent {\bf Step 4.} \emph{The min-max principle.} Any $w_\star\in{\rm span}\,\{u_\star,v_\star\}\sm\{0\}$ can be represented as a linear combination $w_\star = p u_\star + qv_\star$ with $(p,q)\in\dC^2_\tm:=\dC^2\sm\{(0,0)\}$. The following simple inequality, \begin{equation}\label{eq:trivial_ineq} \frac{a+b}{c+d} \le \max\left\{\frac{a}{c},\frac{b}{d}\right\}, \end{equation} holds obviously for any $a,b \in\dR$ and $c,d > 0$. 
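For completeness, here is the short verification of~\eqref{eq:trivial_ineq}: assuming without loss of generality that $\frac{a}{c} \le \frac{b}{d}$, that is, $ad \le bc$, one has
\[
\frac{a+b}{c+d} \le \frac{b}{d}
\quad\Longleftrightarrow\quad
d(a+b) \le b(c+d)
\quad\Longleftrightarrow\quad
ad \le bc,
\]
where both equivalences use only the positivity of $c$ and $d$.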
Applying the min-max principle, using the orthogonality relations~\eqref{eq:ortho1},~\eqref{eq:ortho2}, the bounds~\eqref{eq:bound1},~\eqref{eq:bound2}, and the inequality~\eqref{eq:trivial_ineq}, we get
\[
\begin{aligned}
\lm_2^\alpha(\Omega^{\rm c}) &\le \max_{(p,q)\in\dC^2_\tm} \frac{\frmu[p u_\star + qv_\star]}{\|p u_\star + qv_\star\|^2_{L^2(\Omega^{\rm c})}}\\
& \hspace{-2em} = \max_{(p,q)\in\dC^2_\tm} \frac{|p|^2\frmu[ u_\star] +|q|^2\frmu[v_\star]}{ |p|^2\|u_\star\|^2_{L^2(\Omega^{\rm c})} + |q|^2\|v_\star\|^2_{L^2(\Omega^{\rm c})}} \le \max\left\{ \sfR_{\aa,\Omega^{\rm c}}[u_\star], \sfR_{\aa,\Omega^{\rm c}}[v_\star] \right\} < \lm_2^\alpha(\cB^{\rm c}),
\end{aligned}
\]
which completes the proof.
\subsection*{Acknowledgment}
The research was supported by the Czech Science Foundation within the project 17-01706S. P.E. also acknowledges support of the EU project \\ CZ.02.1.01/0.0/0.0/16\textunderscore 019/0000778. The authors are indebted to Magda Khalile for fruitful discussions and to the referees for useful comments.
\begin{appendix}
\section{Proof of Proposition~\ref{prop:annulus1}}
The case $d = \infty$ was dealt with in~\cite[Sec. 3]{KL17a}. Assume that $d < \infty$ and let $\aa\in\dR$ be arbitrary. For the sake of simplicity and without loss of generality, we also assume that $L_\circ = 2\pi$. In this case the curvilinear coordinates $(s,t)$ essentially coincide with the polar coordinates. Using the complete family of orthogonal projections on $L^2(\cB_d^{\rm c})$,
\[
(\Pi_n u)(s,t) = \frac{1}{\sqrt{2\pi}} \mathsf{e}^{\ii n s}\int_0^{2\pi} u(s',t)\,\frac{\mathsf{e}^{-\ii n s'}}{\sqrt{2\pi}}\,\dd s',\quad\; n\in\dZ,
\]
one can decompose $\OpD$ into an orthogonal sum
\[
\OpD = \bigoplus_{n\in\dZ}\OpD^{[n]},
\]
where the self-adjoint fiber operator $\OpD^{[n]}$ acts in the Hilbert space $L^2((0,d);(1+ t)\,\dd t)$ and corresponds, for $n\in\dZ$, to the quadratic form
\[
\begin{aligned}
\frmD^{[n]}[\psi] & =\int_0^d \left(|\psi'(t)|^2(1+t) +\frac{n^2|\psi(t)|^2}{1+t}\right)\,\dd t + \aa|\psi(0)|^2 + \aa(1+d)|\psi(d)|^2,\\
\mathrm{dom}\,\frmD^{[n]} &= H^1((0,d)).
\end{aligned}
\]
Clearly the lowest eigenvalue of $\OpD^{[0]}$ is simple and strictly smaller than the lowest eigenvalues of $\OpD^{[n]}$ with $n\ne0$. Thus, the ground-state of $\OpD$ is simple and depends on the variable $t$ only. The smoothness of the corresponding eigenfunction follows from standard elliptic regularity theory.
\section{Proof of Proposition~\ref{prop:annulus2}}
Using the complete family of orthogonal projections on $L^2(\cB^{\rm c})$
\[
(\Pi_n u)(s,t) = \frac{1}{\sqrt{L_\circ}} \mathsf{e}^{\frac{2\pi\ii n s}{L_\circ}}\int_0^{L_\circ} u(s',t)\,\frac{\mathsf{e}^{-\frac{2\pi\ii n s'}{L_\circ}}}{\sqrt{L_\circ}}\,\dd s',\quad\; n\in\dZ,
\]
one can again decompose $\OpDu$ into an orthogonal sum
\[
\OpDu = \bigoplus_{n\in\dZ} \OpDu^{[n]},
\]
where the fiber operators $\OpDu^{[n]}$, $\:n\in\dZ$, in the Hilbert space $L^2(\dR_+;(1 + \frac{2\pi t}{L_\circ})\,\dd t)$ correspond to the quadratic forms
\[
\begin{aligned}
\frmDu^{[n]}[\psi] & =\int_0^\infty \left(|\psi'(t)|^2\left(1 + \frac{2\pi t}{L_\circ}\right) +\frac{1}{L_\circ}\frac{4\pi^2n^2|\psi(t)|^2}{L_\circ + 2\pi t}\right)\,\dd t + \aa |\psi(0)|^2,\\
\mathrm{dom}\,\frmDu^{[n]} &= \left\{\psi\colon\dR_+\arr\dC\colon \psi,\psi'\in L^2(\dR_+;\left(1 + 2\pi L_\circ^{-1} t\right)\,\dd t)\right\}.
\end{aligned}
\]
It is easy to see that $\OpDu^{[0]}$ has exactly one negative simple eigenvalue, which corresponds to the ground-state eigenvalue $\lm_1^\aa(\cB^{\rm c})$ of $\OpDu$.
The first excited state eigenvalue $\lm_2^\aa(\cB^{\rm c})$ corresponds to the lowest eigenvalues of the identical operators $\OpDu^{[1]}$ and $\OpDu^{[-1]}$. Moreover, the smoothness of $\phi$ follows from standard elliptic regularity theory. Let $\cB_1,\cB_2$ be two disks with perimeters $L_1$ and $L_2$, respectively. Assume that $L_1< L_2$. Then we obtain that
\[
\begin{aligned}
\lm_2^\aa(\cB^{\rm c}_1) & = \inf_{\psi\in C^\infty_0([0,\infty))} \frac{\displaystyle \int_0^\infty \left(|\psi'(t)|^2\left(1 + \frac{2\pi}{L_1} t\right) +\frac{1}{L_1} \frac{4\pi^2|\psi(t)|^2}{L_1 + 2\pi t}\right)\,\dd t + \aa |\psi(0)|^2}{ \displaystyle \int_0^\infty |\psi(t)|^2\left(1 + \frac{2\pi}{L_1} t\right)\dd t} \\
& \ge \inf_{\psi\in C^\infty_0([0,\infty))} \frac{\displaystyle \int_0^\infty \left(|\psi'(t)|^2\left(1 + \frac{2\pi}{L_2} t\right) +\frac{1}{L_2} \frac{4\pi^2|\psi(t)|^2}{L_2 + 2\pi t}\right)\,\dd t + \aa |\psi(0)|^2}{ \displaystyle \int_0^\infty |\psi(t)|^2\left(1 + \frac{2\pi}{L_2} t\right)\dd t} \\
& = \lm_2^\aa(\cB^{\rm c}_2).
\end{aligned}
\]
Hence, it follows that $\lm_2^\alpha(\cB^{\rm c})$ is a non-increasing function of the perimeter $L_\circ$.
\end{appendix}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction}
Millimeter-wave (mmWave) technologies have played an important role in 5G communication systems. Compared to the microwave band, mmWave can achieve a larger system capacity and better security performance \cite{b1}. However, the blockage issue needs to be tackled before the commercial application of mmWave technologies. Specifically, mmWave frequencies are susceptible to blockage, which means mmWave communications are difficult to deploy in urban areas with dense buildings \cite{b2}. To handle this problem, the reconfigurable intelligent surface (RIS) has been proposed as a promising technology \cite{bc1,bc2,bc3,bc4,b5}. In particular, by adjusting the reflection matrix of the RIS, the propagation direction of the transmitted signal can be changed, mitigating mmWave communication systems' blockage problem. {Therefore, RIS-aided mmWave systems are attracting growing interest from both academia and industry \cite{b8,b9}.}
{Physical layer security is a key technology for solving privacy protection problems in the physical layer and has been a hot topic in the past decade. To improve security performance, the basic idea is to improve the legitimate users' achievable rate or to degrade that of the eavesdropper (Eve). In a RIS-aided wireless system, the received signal can be suppressed at Eve while being boosted at the legitimate user. Thus, deploying the RIS brings a new degree of freedom (DoF) in the space domain, and the secrecy rate can be further improved.}
Recently, the security problem of RIS-aided mmWave systems has been investigated in \cite{b20,b21,b22}. Specifically, in \cite{b20}, the authors maximized RIS-aided communication systems' secrecy rate by optimizing the RIS phase shifts and the transmit beamforming. In \cite{b21}, a low-complexity iterative algorithm was proposed to solve the sum secrecy rate maximization problem for a RIS-aided multi-user mmWave system. In \cite{b22}, the authors optimized the hybrid precoding at the APs and the phase shifting at the RIS to maximize RIS-aided mmWave systems' secrecy rate. However, RIS-aided mmWave security systems with hardware limitations have not been investigated yet. In general, since much more power consumption is needed for high-resolution digital-to-analog converters (HDACs) \cite{b10}, to cut the hardware cost and power budget, low-resolution digital-to-analog converters (LDACs) have been widely used in mmWave systems. However, the hardware imperfections of LDACs, together with the RIS phase noise caused by the finite discrete phase shifts of the RIS, are crucial hardware impairments in RIS-aided systems \cite{b6,b13}. Hence, we focus on a RIS-aided mmWave security system with LDACs and phase noise in this paper. Specifically, we maximize the secrecy rate under these hardware constraints by jointly optimizing the transmit beamforming and the RIS phase shifts. Since the objective function and the feasible set are non-convex, the formulated problem is intractable. To cope with these difficulties, we propose an alternating optimization (AO)-based algorithm based on the successive convex approximation (SCA) method and the element-wise block coordinate descent (BCD) method.
\section{System Model}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.45]{CL_0.eps}
\caption{A RIS-aided mmWave secure communication system with hardware limitations.\vspace{-20pt}}
\end{figure}
We consider a RIS-aided massive MIMO mmWave downlink, as shown in Fig.~1, where the AP with $N_{t}$ antennas and $N_{RF}$ RF chains sends data to the user in the presence of Eve.
The user and Eve are each equipped with a single antenna. Since $N_{t}$ is typically large, we assume hybrid beamforming with 1-bit DACs at the AP to reduce the complexity, and infinite-resolution analog-to-digital converters (ADCs) at the user and Eve, since each of them has only a single antenna. Moreover, the RIS is equipped with $N_{r}$ reflection elements. In this paper, we adopt a geometric model for mmWave channels \cite{b8}. $\boldsymbol{G}\in\mathbb{C}^{N_{r}\times N_{t}}$ denotes the AP-to-RIS mmWave channel. $\boldsymbol{F}_{RF}\in\mathbb{C}^{N_{t}\times N_{RF}}$ and $\boldsymbol{w}\in\mathbb{C}^{N_{RF}\times 1}$ are the analog beamforming codebook and the digital beamforming vector, respectively. $\boldsymbol{F}_{RF}$ adopts the semi-unitary codebook in \cite{b81}. $\mathcal{Q}(\cdot)$ denotes the 1-bit quantizer. We define the RIS reflection matrix as $\boldsymbol{\Theta}=\mathrm{diag}(\boldsymbol{\theta})\in\mathbb{C}^{N_{r}\times N_{r}}$, where $\boldsymbol{\theta}=[\beta e^{j\phi_{1}},\ldots, \beta e^{j\phi_{ N_{r}}}]\in\mathbb{C}^{1\times N_{r}}$, and $\phi_{i}\in [0,2\pi],~\forall~i=1,\ldots,N_{r}$, and $\beta=1$ denote the phase shifts and the reflection coefficient of the passive elements at the RIS, respectively. According to \cite{b13}, due to hardware limitations, each RIS phase shift $\phi_{i}$ can only take values from a finite set of discrete values $\mathcal{G}=\{0,\Delta\theta,\ldots,(L-1)\Delta\theta\}$, where $L$ is the number of discrete values and $\Delta\theta=\frac{2\pi}{L}$. Due to unfavorable propagation conditions (obstacles, buildings), the direct link from the AP to the user is ignored. Then, the RIS reflects the signal to the user and Eve, and the received signals at the user and Eve can be expressed as
\begin{align}
\textstyle{y=\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathcal{Q}(\boldsymbol{w}s)+n},
\end{align}
and
\begin{align}
\textstyle{y_{e}=\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathcal{Q}(\boldsymbol{w}s)+n_{e}},
\end{align}
in which $n\sim\mathcal{CN}(0,\sigma^{2})$ and $n_{e}\sim\mathcal{CN}(0,\sigma_{e}^{2})$ are the additive white Gaussian noise, and $\boldsymbol{h}\in\mathbb{C}^{N_{r}\times 1}$ and $\boldsymbol{h}_{e}\in\mathbb{C}^{N_{r}\times 1}$ are the channel vectors between the RIS and the user and Eve, respectively.
\section{Linear Quantization Models}
We consider the linear additive quantization noise model (AQNM) \cite{b91} for the non-linear quantization operator $\mathcal{Q}(\cdot)$. {The linearization approximation is expressed as
\begin{align}
\textstyle{\mathcal{Q}(\boldsymbol{w}s)\approx b_{Q}\boldsymbol{w}s+\boldsymbol{q}_{Q}},
\end{align}
where $b_{Q}$ is the weight, expressed as
\begin{align}
\textstyle{b_{Q}=1-\eta_{b}},
\end{align}
where $\eta_{b}$ is the distortion factor, which is generally approximated by $\eta_{b}=\frac{\pi\sqrt{3}}{2}2^{-2b}$ for $b$-bit quantization.} A more accurate value for 1-bit quantization is $\eta_{b}\approx 0.3634$ \cite{b91}. In (3), $\boldsymbol{q}_{Q}$ stands for the quantization distortion with the following covariance
\begin{align}
\textstyle{\boldsymbol{A}_{Q}=b_{Q}(1-b_{Q})\mathrm{diag}(\boldsymbol{w}\boldsymbol{w}^{H})}.
\end{align}
Then, the achievable rate at the user can be expressed as
{\begin{align}
&\textstyle{R=\log_{2}\left(1+\frac{|b_{Q}\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}|^{2}}{b_{Q}(1-b_{Q})\|\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma^{2}}\right)}.
\end{align}
The achievable rate at Eve can be expressed as
\begin{align}
&\textstyle{R_{e}=\log_{2}\left(1+\frac{|b_{Q}\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}|^{2}}{b_{Q}(1-b_{Q})\|\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma_{e}^{2}}\right)}.
\end{align}}
The system secrecy rate can be written as
\begin{align}
\textstyle{R_{s}=[R-R_{e}]^{+}},
\end{align}
where $[x]^{+}=\max(0,x)$. Then, the problem is formulated as
{\begin{subequations}
\begin{align}
\textstyle{\max\limits_{\boldsymbol{\theta},\boldsymbol{w}}}~&\textstyle{R_{s}}\\
\mbox{s.t.}~ &\textstyle{\textstyle{|\theta_{i}|=1}}&\\
&\textstyle{\|\boldsymbol{F}_{RF}\mathcal{Q}(\boldsymbol{w}s)\|^{2}\leq P},&
\end{align}
\end{subequations}}%
where (9b) denotes the unit-modulus constraint, (9c) is the power constraint, and $P$ is the maximum transmit power. Incorporating (9c) and (5), the transmit power constraint can be rewritten as
{\begin{align}
\textstyle{\|\boldsymbol{F}_{RF}\mathcal{Q}(\boldsymbol{w}s)\|^{2}}&\textstyle{=\|b_{Q}\boldsymbol{F}_{RF}\boldsymbol{w}\|^{2}+\mathrm{tr}(\boldsymbol{F}_{RF}\boldsymbol{A}_{Q}\boldsymbol{F}_{RF}^{H})},\nonumber\\
&\textstyle{=\|b_{Q}\boldsymbol{w}\|^{2}+\mathrm{tr}(\boldsymbol{A}_{Q})},
\end{align}}%
which follows from $\boldsymbol{F}_{RF}$ being semi-unitary. Therefore, problem (9) is rewritten as
{\begin{subequations}
\begin{align}
\textstyle{\max\limits_{\boldsymbol{\Theta},\boldsymbol{w}}}~&\textstyle{R_{s}}\\
\textstyle{\mbox{s.t.}}~ &\textstyle{\text{(9b)},}&\\
&\textstyle{\|b_{Q}\boldsymbol{w}\|^{2}+\mathrm{tr}(\boldsymbol{A}_{Q})\leq P}.&
\end{align}
\end{subequations}}
\section{Alternating Optimization Algorithm}
Since the problem in (11) is non-convex, we adopt an AO-based algorithm to optimize $\boldsymbol{w}$ and $\boldsymbol{\Theta}$ alternately. Specifically, we first optimize $\boldsymbol{w}$ with fixed $\boldsymbol{\Theta}$; then we fix $\boldsymbol{w}$ and optimize $\boldsymbol{\Theta}$.
\subsection{Digital Beamforming Optimization}
Under given $\boldsymbol{\Theta}$, problem (11) is rewritten as
\begin{subequations}
\begin{align}
\textstyle{\max\limits_{\boldsymbol{w}}}~&\textstyle{\log_{2}\left(1+\frac{|\boldsymbol{D}\boldsymbol{w}|^{2}}{\omega}\right)-\log_{2}\left(1+\frac{|\boldsymbol{D}_{e}\boldsymbol{w}|^{2}}{\omega_{e}}\right)}\\
\textstyle{\mbox{s.t.}}~ &\textstyle{\text{(11c)},}&
\end{align}
\end{subequations}
where
\begin{align}
&\textstyle{\boldsymbol{D}=b_{Q}\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF},\boldsymbol{D}_{e}=b_{Q}\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}},\nonumber\\
&\textstyle{\omega=b_{Q}(1-b_{Q})\|\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma^{2}},\nonumber\\
&\textstyle{\omega_{e}=b_{Q}(1-b_{Q})\|\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma_{e}^{2}}.
\end{align}
Then, to deal with the non-convex objective function in (12a), we introduce an auxiliary variable $t$ to rewrite (12) as
\begin{subequations}
\begin{align}
\textstyle{\max\limits_{\boldsymbol{w}}}~&\textstyle{\log_{2}\left(1+\frac{|\boldsymbol{D}\boldsymbol{w}|^{2}}{\omega}\right)-t}\\
\mbox{s.t.}~ &\textstyle{\text{(11c)}},&\\
&\textstyle{t=\log_{2}\left(1+\frac{|\boldsymbol{D}_{e}\boldsymbol{w}|^{2}}{\omega_{e}}\right)}.&
\end{align}
\end{subequations}
It is not difficult to find that the objective function is still non-convex. To handle it, we use $\rho$ to further transform (14a) as
\begin{align}
\textstyle{\max\limits_{\boldsymbol{w},\rho}\log_{2}\left(1+\rho\right)-t}
\end{align}
where
\begin{align}
\textstyle{\rho\leq \frac{|\boldsymbol{D}\boldsymbol{w}|^{2}}{\omega}}.
\end{align}
According to the Schur complement in \cite{b15}, we have
{\begin{align}
\textstyle{\left[ \begin{matrix} 1 & z \\ z & |\boldsymbol{D}\boldsymbol{w}|^{2} \end{matrix} \right]\succeq\boldsymbol{0}}, \textstyle{~~\rho\leq \frac{z^{2}}{\omega}},
\end{align}
where $z$ is an auxiliary variable.} Since $\rho\leq \frac{z^{2}}{\omega}$ is non-convex with respect to $\rho$ and $\omega$, we use the SCA method based on the first-order Taylor expansion \cite{b3} to tackle it. In particular, for any fixed points $(\bar{z},\bar{\omega})$, we have
\begin{align}
\textstyle{\frac{z^{2}}{\omega}\geq \frac{2\bar{z}}{\bar{\omega}}z-\frac{\bar{z}^{2}}{\bar{\omega}^{2}}\omega\geq \rho}.
\end{align}
By applying the concept of the SCA \cite{b3},\cite{b4}, we iteratively update $\bar{z}$ and $\bar{\omega}$ in the $n$-th iteration as
\begin{align}
\textstyle{\bar{\omega}^{(n)}=\omega^{(n-1)}, \bar{z}^{(n)}=z^{(n-1)}}.
\end{align}
Now we handle the non-convex constraint in (14c). Using the Schur complement again, (14c) can be transformed into the following equivalent forms:
\begin{align}
\textstyle{\left[ \begin{matrix} 2^{t}-1 & r \\ r & \omega_{e} \end{matrix} \right]\succeq\boldsymbol{0}},
\end{align}
\begin{align}
\textstyle{r^{2}\geq (2^{t}-1)\omega_{e}},
\end{align}
and
\begin{align}
\textstyle{r^{2}-|\boldsymbol{D}_{e}\boldsymbol{w}|^{2}=0}.
\end{align}
In order to deal with the bilinear function on the right-hand side of (21), the SCA method based on the arithmetic-geometric mean (AGM) inequality is adopted, so (21) can be rewritten as
{\begin{align}
\textstyle{\frac{1}{2}\left((\omega_{e}\eta)^{2}+(\frac{2^{t}-1}{\eta})^{2}\right)-\bar{r}(2r-\bar{r})\leq 0},
\end{align}}
where $\eta$ is a feasible point. To tighten the upper bound, $\eta$ is iteratively updated. The update in the $n$-th iteration is expressed as
\begin{align}
\textstyle{\eta^{(n)}=\sqrt{\frac{2^{t^{(n-1)}}-1}{\omega_{e}^{(n-1)}}}}.
\end{align}
Then, we deal with the non-convex constraint (22) and use the SCA method to transform (22) into the following convex constraints:
\begin{align}
\textstyle{|\boldsymbol{D}_{e}\boldsymbol{w}|^{2}-\bar{r}(2r-\bar{r})\leq 0},~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\
\textstyle{r^{2}-|\boldsymbol{D}_{e}\bar{\boldsymbol{w}}|^{2}-2\mathrm{Re}(\bar{\boldsymbol{w}}^{H}\boldsymbol{D}_{e}^{H}\boldsymbol{D}_{e}(\boldsymbol{w}-\bar{\boldsymbol{w}}))\leq 0.}
\end{align}
The constraint in (11c) can be rewritten in the following form in a similar manner to (25).
\begin{align}
(|b_{Q}\bar{\boldsymbol{w}}|^{2}-2\mathrm{Re}(b_{Q}\bar{\boldsymbol{w}}^{H}(\boldsymbol{w}-\bar{\boldsymbol{w}})))+\mathrm{tr}(\boldsymbol{A_{Q}})\leq P.
\end{align}
Now, problem (12) is transformed into the following convex approximation problem:
\begin{subequations}
\begin{align}
\textstyle{\max\limits_{t,r,\omega,\rho,z,\boldsymbol{w}}}~&\textstyle{\log_{2}\left(1+\rho\right)-t}\\
\mbox{s.t.}~ &\left[ \begin{matrix} 1 & z \\ z & |\boldsymbol{D}\boldsymbol{w}|^{2} \end{matrix} \right]\succeq\boldsymbol{0},\\
&\textstyle{\text{(18)}, \text{(20)}, \text{(23)}, \text{(25)},\text{(26)}.}&
\end{align}
\end{subequations}
The SCA-based algorithm for solving (12) is summarized in \textbf{Algorithm~1}.
\begin{algorithm}[htbp]
\caption{SCA-based algorithm for problem (12).}
\hspace*{0.02in}{\bf Initialization:} $\bar{r}^{(0)}$, $\bar{\omega}^{(0)}$, $\bar{z}^{(0)}$, $\bar{\boldsymbol{w}}^{(0)}$.\\
\hspace*{0.02in}{\bf Repeat}\\
Update $\{\boldsymbol{w}^{(n)}, r^{(n)}, \rho^{(n)}, \omega^{(n)}, z^{(n)},t^{(n)}\}$ with fixed $\bar{r}$, $\bar{\omega}$, $\bar{z}$, $\bar{\boldsymbol{w}}$ by solving (27).\\
Update $\eta^{(n+1)}$, $\bar{\omega}^{(n+1)}$, $\bar{z}^{(n+1)}$ based on (19) and (24).\\
Update $n=n+1$.\\
\hspace*{0.02in}{\bf Until} Convergence.\\
\hspace*{0.02in}{\bf Output:} $\boldsymbol{w}^{*}$.\\
\end{algorithm}
\subsection{Discrete RIS Phase Shifts Optimization}
Substituting the transmit beamforming vector $\boldsymbol{w}$ obtained in the previous subsection into problem (11), the sub-problem for the phase-shift matrix can be rewritten as (28), shown at the top of the next page. Since each variable $\phi_{i}$ takes values in a finite set, problem (28) could be solved by the exhaustive search method. However, the feasible set is large and the complexity of the exhaustive search method is very high. To reduce the complexity, we propose an element-wise BCD algorithm \cite{b3}: treating each $\phi_{i}$ as one block, we iteratively derive the continuous solution of $\phi_{i}$ in closed form by using \textbf{Theorem~1}.
\setcounter{equation}{28}
\begin{theorem}
There exists a solution $\phi_{i}^{*}$ maximizing the value of (28), obtained by solving the following equation
\begin{align}
\textstyle{\frac{(\mu_{i}-\bar{\mu}_{i})t-\tilde{\mu}_{i}}{\mu_{i}(1+t^{2})+\bar{\mu}_{i}(1-t^{2})-\tilde{\mu}_{i}2t}- \frac{(\eta_{i}-\bar{\eta}_{i})t-\tilde{\eta}_{i}}{\eta_{i}(1+t^{2})+\bar{\eta}_{i}(1-t^{2})-\tilde{\eta}_{i}2t}}\nonumber\\
\textstyle{+\frac{(\lambda_{i}-\bar{\lambda}_{i})t-\tilde{\lambda}_{i}}{\lambda_{i}(1+t^{2})+\bar{\lambda}_{i}(1-t^{2})-\tilde{\lambda}_{i}2t}- \frac{(\rho_{i}-\bar{\rho}_{i})t-\tilde{\rho}_{i}}{\rho_{i}(1+t^{2})+\bar{\rho}_{i}(1-t^{2})-\tilde{\rho}_{i}2t}=0}.
\end{align}
We use the one-dimensional search method in \cite{b101} to solve the equation in (29). Then $\phi_{i}^{*}$ can be expressed as
\begin{eqnarray}
\phi_{i}^{*}=2\arctan(t).
\end{eqnarray}
The proof is given in \textbf{Appendix~A}.
\end{theorem}
\setcounter{equation}{30}
According to \textbf{Theorem~1}, the discrete solution $\bar{\phi}_{i}^{*}$ can be calculated as
\begin{align}
\bar{\phi}_{i}^{*}=\arg\min_{\phi_{i}\in\mathcal{G}}|\phi_{i}^{*}-\phi_{i}|.
\end{align}
The algorithm based on the element-wise BCD is summarized in \textbf{Algorithm~2}.
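As an illustration of the projection step in (31), the following minimal Python sketch (the function name and the explicit wrap of the continuous phase into $[0,2\pi)$ are our own choices, not part of the algorithm statement) maps a continuous solution $\phi_{i}^{*}$ onto the discrete set $\mathcal{G}$:
\begin{verbatim}
import numpy as np

def project_phase(phi_star, L):
    # Discrete grid G = {0, dtheta, ..., (L-1)*dtheta}, dtheta = 2*pi/L.
    dtheta = 2.0 * np.pi / L
    grid = dtheta * np.arange(L)
    # phi_star = 2*arctan(t) lies in (-pi, pi); wrap it into [0, 2*pi)
    # before taking the nearest grid point, as in eq. (31).
    phi_star = np.mod(phi_star, 2.0 * np.pi)
    return grid[np.argmin(np.abs(grid - phi_star))]

# Example: with L = 4 levels, a continuous optimum of 2.0 rad is
# projected onto pi/2.
print(project_phase(2.0, L=4))
\end{verbatim}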
\newcounter{mytempeqncnt}
\begin{figure*}[!t]
\normalsize
\setcounter{mytempeqncnt}{\value{equation}}
\setcounter{equation}{27}
{\begin{subequations}
\begin{align}
\textstyle{\max\limits_{\boldsymbol{\Theta}}}~&\textstyle{\frac{\left(\frac{b_{Q}(1-b_{Q})\|\boldsymbol{\theta}^{T}\mathrm{diag}(\boldsymbol{h}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma^{2}+|b_{Q}\boldsymbol{h}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}|^{2}}{b_{Q}(1-b_{Q})\|\boldsymbol{\theta}^{T}\mathrm{diag}(\boldsymbol{h}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma^{2}}\right)}{\left(\frac{b_{Q}(1-b_{Q})\|\boldsymbol{\theta}^{T}\mathrm{diag}(\boldsymbol{h}_{e}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma_{e}^{2}+|b_{Q}\boldsymbol{h}_{e}^{H}\boldsymbol{\Theta}\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}|^{2}}{b_{Q}(1-b_{Q})\|\boldsymbol{\theta}^{T}\mathrm{diag}(\boldsymbol{h}_{e}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})\|^{2}+\sigma_{e}^{2}}\right)}}&\\
\mbox{s.t.}~ &\text{(9b)}&
\end{align}
\end{subequations}}
\setcounter{equation}{\value{mytempeqncnt}}
\hrulefill
\vspace*{4pt}
\end{figure*}
\begin{algorithm}[htbp]
\caption{Element-wise BCD-based algorithm for problem (28).}
\hspace*{0.02in}{\bf Initialization:} $t=0$, $\boldsymbol{\Theta}^{0}$.\\
\hspace*{0.02in}{\bf Repeat:}\\
\hspace*{0.02in}{\bf for:}~$i=1,\ldots,N_{r}$\\
Calculate $\phi_{i}^{t}$ based on \textbf{Theorem~1}. \\
\hspace*{0.02in}{\bf End:}\\
Set $t=t+1$\\
\hspace*{0.02in}{\bf Until:} Convergence.\\
\hspace*{0.02in}{\bf Output:} $\boldsymbol{\Theta}^{*}$
\end{algorithm}
Based on the above analysis, we obtain the AO-based algorithm for solving problem (9). Following the results in \cite{b3}, since each sub-algorithm converges to a locally optimal solution, the AO-based algorithm is guaranteed to converge to a locally optimal solution.
\begin{algorithm}[htbp]
\caption{AO-based Algorithm for problem (9).}
\hspace*{0.02in}{\bf Initialization:} $\boldsymbol{w}^{(0)}$, $\boldsymbol{\Theta}^{(0)}$.\\
\hspace*{0.02in}{\bf Repeat}\\
Update $\boldsymbol{w}^{(j)}$ by using \textbf{Algorithm~1}.\\
Update $\boldsymbol{\Theta}^{(j)}$ by using \textbf{Algorithm~2}.\\
Update $j=j+1$.\\
\hspace*{0.02in}{\bf Until} Convergence.\\
\hspace*{0.02in}{\bf Output:} $\boldsymbol{w}^{*}$, $\boldsymbol{\Theta}^{*}$.\\
\end{algorithm}
\subsection{Complexity Analysis}
The complexity of the proposed method is about $\mathcal{O}(N_{r}(N_{r}N_{t}+L_{P})+N_{t}^{2})$, which depends on the computational complexity of obtaining $\phi_{i}$ and $\boldsymbol{w}$. We compare the complexity of these algorithms in \textbf{TABLE~1}; it is not difficult to find that the AO-based algorithm has the lowest complexity.
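As a schematic illustration of the alternating structure of \textbf{Algorithm~3} (a sketch only: the two sub-solver callables stand in for \textbf{Algorithm~1} and \textbf{Algorithm~2}, and the tolerance and iteration cap are our own choices), the loop can be written as:
\begin{verbatim}
def alternating_optimization(w, theta, update_w, update_theta,
                             secrecy_rate, tol=1e-4, max_iter=50):
    # update_w(theta): SCA-based solver for problem (12), Algorithm 1
    # update_theta(w): element-wise BCD for problem (28), Algorithm 2
    # secrecy_rate(w, theta): evaluates the objective R_s in (8)
    prev = float("-inf")
    for _ in range(max_iter):
        w = update_w(theta)
        theta = update_theta(w)
        cur = secrecy_rate(w, theta)
        if cur - prev < tol:  # objective is non-decreasing over iterations
            break
        prev = cur
    return w, theta
\end{verbatim}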
\begin{table}[htbp] \centering \scriptsize \caption{Complexity Comparison of Algorithms } \label{tab:notations} \begin{tabular}{ll} \\[-2mm] \hline \hline\\[-2mm] {\bf \small Algorithm}&\qquad {\bf\small Total Complexity}\\ \hline \vspace{0.7mm}\\[-2mm] Proposed AO-based algorithm & $\mathcal{O}(N_{r}(N_{r}N_{t}+L_{P})+N_{t}^{2})$\\ \vspace{0.7mm} Exhaustive search method & $\mathcal{O}(N_{r}^{L_{P}+1} +N_{r}^{2}+N_{r}N_{t})$\\ \vspace{0.7mm} SDP-based algorithm~\cite{b100} & $\mathcal{O}(\zeta N_{r}^{8}+N_{t}^{2})$\\ \hline \hline \end{tabular} \end{table} \section{Numerical Results}\label{IV} As shown in Fig.1, we consider a RIS-aided mmWave system with hardware limitations, where the AP is located at $(0,0,0)$~m and the RIS is located at $(0, 60, 20)$~m, while the user and the Eve are located at $(5,60,0)$~m and $(5,80,0)$~m, respectively. {We set $N_{t}=64$, $N_{r}=16$, $N_{RF}=8$, $b=1$, $\sigma^{2}=-110$~dBm.} The mmWave channels from the AP to the RIS and from the RIS to the $k$th receiver are respectively expressed as \begin{eqnarray} \textstyle{\boldsymbol{G}=\sqrt{\frac{1}{\beta L_{1}}}\sum_{l=0}^{L_{1}-1}\alpha_{l}\boldsymbol{a}_{T}(N_{t},\theta_{l})\boldsymbol{a}_{R}^{T}(N_{r},\varphi_{l},\phi_{l}),} \end{eqnarray} \begin{eqnarray} \textstyle{\boldsymbol{h}_{k}=\sqrt{\frac{1}{\hat{\beta}_{k}L_{k}}}\sum_{l=0}^{L_{k}-1}\hat{\alpha}_{k,l}\boldsymbol{a}_{R}(N_{r},\vartheta_{k,l}),~k\in\{user, Eve\},} \end{eqnarray} where $\beta$ and $\hat{\beta}_{k}$ denote the large-scale fading coefficients. They are generated (in dB) by \begin{align} 72+29.2\log_{10}d+\zeta, \end{align} where $d$ denotes the signal propagation distance, $\zeta\sim\mathcal{CN}(0,1)$ denotes the log-normal shadowing, and $\alpha_{l}$ and $\hat{\alpha}_{k,l}$ denote the small-scale fading coefficients, which follow $\mathcal{CN}(0,1)$ [15]. $\boldsymbol{a}_{T}(\cdot)$ and $\boldsymbol{a}_{R}(\cdot)$ represent the array steering vectors at the AP and the RIS, respectively (a schematic generation procedure is sketched below). Three schemes with fixed quantization bits for each DAC are considered for comparison: \begin{itemize} \item MRT-BCD: In this scheme, the transmit beamforming is designed based on maximum ratio transmission (MRT). Then, the RIS phase shifts are obtained by using the BCD algorithm. \item NO-RIS: In this scheme, the mmWave system does not use an RIS to assist the communication. Moreover, the transmit beamforming is optimized by using \textbf{Algorithm~1}. \item Upper Bound: In this scheme, we consider the RIS-aided mmWave system without hardware limitations as an upper bound. Moreover, the transmit beamforming vector and the RIS phase shifts are optimized by using \textbf{Algorithm~1} and \textbf{Algorithm~2}, respectively. \end{itemize} \begin{figure*}[t] \centering \subfigure{ \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[scale=0.3]{CL1.eps} \caption{Convergence of \textbf{Algorithm~3}} \end{minipage}% }% \subfigure{ \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[scale=0.3]{CL2.eps} \caption{Secrecy rate versus $N_{r}$} \end{minipage}% }% \subfigure{ \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[scale=0.30]{CL3.eps} \caption{Secrecy rate versus $b$} \end{minipage} }% \centering \end{figure*} {The convergence behavior of the proposed AO-based algorithm is given in Fig.2. We observe that the secrecy rate increases monotonically with the number of iterations. In addition, the algorithm converges quickly, achieving a high secrecy rate by the 5th iteration.}
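As a side note on the simulation setup above, here is one plausible way to generate the channels in (32)--(33). It is a sketch under simplifying assumptions (half-wavelength ULA steering at both ends and real Gaussian shadowing), not the exact configuration used in our simulations.
\begin{verbatim}
import numpy as np

def ula_steering(n, angle):
    """Half-wavelength ULA steering vector (assumed array geometry)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

def pathloss_db(d):
    """Large-scale fading in dB: 72 + 29.2*log10(d) + shadowing, cf. (34)."""
    return 72.0 + 29.2 * np.log10(d) + np.random.randn()

def mmwave_channel(n_rx, n_tx, d, n_paths=3):
    """Sum of n_paths planar-wave components, cf. (32)."""
    gain = 10.0 ** (-pathloss_db(d) / 10.0)   # linear-scale 1/beta
    H = np.zeros((n_rx, n_tx), dtype=complex)
    for _ in range(n_paths):
        alpha = (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)  # CN(0,1)
        aoa = np.random.uniform(-np.pi / 2, np.pi / 2)
        aod = np.random.uniform(-np.pi / 2, np.pi / 2)
        H += alpha * np.outer(ula_steering(n_rx, aoa), ula_steering(n_tx, aod))
    return np.sqrt(gain / n_paths) * H
\end{verbatim}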
{Fig. 3 shows the secrecy rate under different numbers of reflecting elements of the RIS. As expected, compared with the MRT-BCD and NO-RIS schemes, the proposed algorithm achieves the best performance. Moreover, although the resolution of the DACs in the upper bound scheme is higher than that of the proposed scheme, we find that as the number of reflecting elements of the RIS increases, the secrecy rates of both the proposed scheme and the upper bound scheme increase simultaneously. This finding validates the feasibility of using the RIS to mitigate the influence of hardware limitations.} {Fig. 4 depicts the secrecy rate versus the number of quantization bits under different schemes. Compared with the MRT-BCD scheme and the NO-RIS scheme, the proposed AO-based algorithm achieves the best security performance. It is not difficult to find that the schemes with the RIS outperform the NO-RIS scheme. From the perspective of maximizing the secrecy rate, when the system is assisted by the RIS, there is no need to deploy high-resolution DACs. This result demonstrates that the RIS can suppress the hardware loss as well.} \section{Conclusion} The secrecy rate maximization problem for mmWave communications with hardware limitations at the AP and the RIS was investigated in this paper. The RIS phase shifts and the transmit beamforming vector were jointly optimized to maximize the secrecy rate under the hardware constraint and the unit-modulus constraints. To solve this problem, we proposed an AO-based algorithm. Numerical results have shown that the proposed AO-based algorithm outperforms conventional schemes in terms of secrecy rate. Moreover, the numerical results also show that when the mmWave system is equipped with an RIS, it does not need to be equipped with excessive RF chains and high-resolution DACs. {In this paper, the CSI is assumed to be perfect. In fact, the CSI is difficult to obtain due to the passive feature of the RIS. Therefore, we will design a channel estimation scheme\footnote{{We carefully studied the channel estimation schemes in \cite{b111,b222}. They are useful for our future work, and we will combine the channel estimation results to design the beamforming for the RIS-aided mmWave system.}} in future work, and then use the algorithm proposed in this paper to design the beamforming for maximizing the secrecy rate in a RIS-aided mmWave system.} \appendices \section{The proof of \textbf{Theorem~1}} Treating $\phi_{i}$ as one block of the BCD, (28a) can be rewritten as \begin{align} \textstyle{R_{s}=\left(\frac{|\boldsymbol{\theta}^{T}\boldsymbol{c}|^{2}+\|\boldsymbol{\theta}^{T}\boldsymbol{A}\|^{2}+\sigma^{2}}{|\boldsymbol{\theta}^{T}\boldsymbol{d}|^{2}+\|\boldsymbol{\theta}^{T}\boldsymbol{B}\|^{2}+\sigma_{e}^{2}}\right)\left(\frac{\|\boldsymbol{\theta}^{T}\boldsymbol{B}\|^{2}+\sigma_{e}^{2}}{\|\boldsymbol{\theta}^{T}\boldsymbol{A}\|^{2}+\sigma^{2}}\right)}, \end{align} where \begin{align} &\textstyle{\boldsymbol{c}=b_{Q}\mathrm{diag}(\boldsymbol{h}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}, \boldsymbol{d}=b_{Q}\mathrm{diag}(\boldsymbol{h}_{e}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\boldsymbol{w}},\nonumber\\ &\textstyle{\boldsymbol{A}=b_{Q}(1-b_{Q})\mathrm{diag}(\boldsymbol{h}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})},\nonumber\\ &\textstyle{\boldsymbol{B}=b_{Q}(1-b_{Q})\mathrm{diag}(\boldsymbol{h}_{e}^{H})\boldsymbol{G}\boldsymbol{F}_{RF}\mathrm{diag}(\boldsymbol{w})}.
\end{align} To simplify (35), we expand $|\boldsymbol{\theta}^{T}\boldsymbol{c}|^{2}$, $|\boldsymbol{\theta}^{T}\boldsymbol{d}|^{2}$, $\|\boldsymbol{\theta}^{T}\boldsymbol{A}\|^{2}$, and $\|\boldsymbol{\theta}^{T}\boldsymbol{B}\|^{2}$ as \begin{align} &\textstyle{|\boldsymbol{\theta}^{T}\boldsymbol{c}|^{2}=|e^{j\phi_{i}}c_{i}+p_{i}|^{2}=|c_{i}|^{2}+|p_{i}|^{2}}\nonumber\\ &\textstyle{+(\mathrm{Re}\{c_{i}\}\mathrm{Re}\{p_{i}\}+\mathrm{Im}\{c_{i}\}\mathrm{Im}\{p_{i}\})\cos(\phi_{i})}\nonumber\\ &\textstyle{-(\mathrm{Re}\{c_{i}\}\mathrm{Im}\{p_{i}\}+\mathrm{Im}\{c_{i}\}\mathrm{Re}\{p_{i}\})\sin(\phi_{i})},\nonumber\\ &\textstyle{|\boldsymbol{\theta}^{T}\boldsymbol{d}|^{2}=|e^{j\phi_{i}}\bar{c}_{i}+\bar{p}_{i}|^{2}=|\bar{c}_{i}|^{2}+|\bar{p}_{i}|^{2}}\nonumber\\ &\textstyle{+(\mathrm{Re}\{\bar{c}_{i}\}\mathrm{Re}\{\bar{p}_{i}\}+\mathrm{Im}\{\bar{c}_{i}\}\mathrm{Im}\{\bar{p}_{i}\})\cos(\phi_{i})}\nonumber\\ &\textstyle{-(\mathrm{Re}\{\bar{c}_{i}\}\mathrm{Im}\{\bar{p}_{i}\}+\mathrm{Im}\{\bar{c}_{i}\}\mathrm{Re}\{\bar{p}_{i}\})\sin(\phi_{i})},\nonumber\\ &\textstyle{\|\boldsymbol{\theta}^{T}\boldsymbol{A}\|^{2}=\sum_{k=1}^{N_{t}}|e^{j\phi_{i}}q_{i,k}+v_{k}|^{2}=\sum_{k=1}^{N_{t}}|q_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|v_{k}|^{2}}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Re}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Im}\{v_{k}\})\cos(\phi_{i})}\nonumber\\ &\textstyle{-\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Im}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Re}\{v_{k}\})\sin(\phi_{i})},\nonumber\\ &\textstyle{\|\boldsymbol{\theta}^{T}\boldsymbol{B}\|^{2}=\sum_{k=1}^{N_{t}}|e^{j\phi_{i}}\bar{q}_{i,k}+\bar{v}_{k}|^{2}=\sum_{k=1}^{N_{t}}|\bar{q}_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|\bar{v}_{k}|^{2}}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\})\cos(\phi_{i})}\nonumber\\ &\textstyle{-\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\})\sin(\phi_{i})}, \end{align} where \begin{align} &\textstyle{p_{i}=\sum_{j\neq i}^{N_{r}}e^{j\phi_{j}}c_{j}, \bar{p}_{i}=\sum_{j\neq i}^{N_{r}}e^{j\phi_{j}}d_{j},\bar{c}_{i}=d_{i}},\nonumber\\ &\textstyle{q_{i,k}=a_{i,k}, \bar{q}_{i,k}=b_{i,k}},\nonumber\\ &\textstyle{v_{k}=\sum_{j\neq i}^{N_{r}}e^{j\phi_{j}}a_{j,k}, \bar{v}_{k}=\sum_{j\neq i}^{N_{r}}e^{j\phi_{j}}b_{j,k}}. \end{align} We continue to simplify (35) by introducing the auxiliary variables $\mu_{i}$, $\bar{\mu}_{i}$, $\tilde{\mu}_{i}$, $\eta_{i}$, $\bar{\eta}_{i}$, $\tilde{\eta}_{i}$, $\lambda_{i}$, $\bar{\lambda}_{i}$, $\tilde{\lambda}_{i}$, $\rho_{i}$, $\bar{\rho}_{i}$, and $\tilde{\rho}_{i}$ defined in (40) below.
\begin{align} \textstyle{R_{s}=\left(\frac{\mu_{i}+\bar{\mu}_{i}\cos(\phi_{i})-\tilde{\mu}_{i}\sin(\phi_{i})}{\eta_{i}+\bar{\eta}_{i}\cos(\phi_{i})-\tilde{\eta}_{i}\sin(\phi_{i})}\right)\left(\frac{\lambda_{i}+\bar{\lambda}_{i}\cos(\phi_{i})-\tilde{\lambda}_{i}\sin(\phi_{i})}{\rho_{i}+\bar{\rho}_{i}\cos(\phi_{i})-\tilde{\rho}_{i}\sin(\phi_{i})}\right)}, \end{align} where \begin{align} &\textstyle{\mu_{i}=|c_{i}|^{2}+|p_{i}|^{2}+\sum_{k=1}^{N_{t}}|q_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|v_{k}|^{2}+\sigma^{2}},\nonumber\\ &\textstyle{\bar{\mu}_{i}=(\mathrm{Re}\{c_{i}\}\mathrm{Re}\{p_{i}\}+\mathrm{Im}\{c_{i}\}\mathrm{Im}\{p_{i}\})}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Re}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Im}\{v_{k}\})},\nonumber\\ &\textstyle{\tilde{\mu}_{i}=(\mathrm{Re}\{c_{i}\}\mathrm{Im}\{p_{i}\}+\mathrm{Im}\{c_{i}\}\mathrm{Re}\{p_{i}\})}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Im}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Re}\{v_{k}\})},\nonumber\\ &\textstyle{\eta_{i}=|\bar{c}_{i}|^{2}+|\bar{p}_{i}|^{2}+\sum_{k=1}^{N_{t}}|\bar{q}_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|\bar{v}_{k}|^{2}+\sigma_{e}^{2}},\nonumber\\ &\textstyle{\bar{\eta}_{i}=(\mathrm{Re}\{\bar{c}_{i}\}\mathrm{Re}\{\bar{p}_{i}\}+\mathrm{Im}\{\bar{c}_{i}\}\mathrm{Im}\{\bar{p}_{i}\})}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\})},\nonumber\\ &\textstyle{\tilde{\eta}_{i}=(\mathrm{Re}\{\bar{c}_{i}\}\mathrm{Im}\{\bar{p}_{i}\}+\mathrm{Im}\{\bar{c}_{i}\}\mathrm{Re}\{\bar{p}_{i}\})}\nonumber\\ &\textstyle{+\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\})},\nonumber\\ &\textstyle{\rho_{i}=\sum_{k=1}^{N_{t}}|q_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|v_{k}|^{2}+\sigma^{2}},\nonumber\\ &\textstyle{\bar{\rho}_{i}=\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Re}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Im}\{v_{k}\})},\nonumber\\ &\textstyle{\tilde{\rho}_{i}=\sum_{k=1}^{N_{t}}(\mathrm{Re}\{q_{i,k}\}\mathrm{Im}\{v_{k}\}+\mathrm{Im}\{q_{i,k}\}\mathrm{Re}\{v_{k}\})},\nonumber\\ &\textstyle{\lambda_{i}=\sum_{k=1}^{N_{t}}|\bar{q}_{i,k}|^{2}+\sum_{k=1}^{N_{t}}|\bar{v}_{k}|^{2}+\sigma_{e}^{2}},\nonumber\\ &\textstyle{\bar{\lambda}_{i}=\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\})},\nonumber\\ &\textstyle{\tilde{\lambda}_{i}=\sum_{k=1}^{N_{t}}(\mathrm{Re}\{\bar{q}_{i,k}\}\mathrm{Im}\{\bar{v}_{k}\}+\mathrm{Im}\{\bar{q}_{i,k}\}\mathrm{Re}\{\bar{v}_{k}\})}. \end{align} Letting $\tan(\frac{\phi_{i}}{2})=t$, we have $\sin(\phi_{i})=\frac{2t}{1+t^{2}}$ and $\cos(\phi_{i})=\frac{1-t^{2}}{1+t^{2}}$, so that (39) can be rewritten as \begin{align} &\textstyle{R_{s}=\left(\frac{\mu_{i}(1+t^{2})+\bar{\mu}_{i}(1-t^{2})-\tilde{\mu}_{i}2t}{\eta_{i}(1+t^{2})+\bar{\eta}_{i}(1-t^{2})-\tilde{\eta}_{i}2t}\right)}\textstyle{\left(\frac{\lambda_{i}(1+t^{2})+\bar{\lambda}_{i}(1-t^{2})-\tilde{\lambda}_{i}2t}{\rho_{i}(1+t^{2})+\bar{\rho}_{i}(1-t^{2})-\tilde{\rho}_{i}2t}\right)}.
\end{align} Setting the derivative of (41) with respect to $t$ equal to $0$, the optimal solution $\phi_{i}^{*}$ can be determined from the following equation: \begin{align} &\textstyle{\frac{(\mu_{i}-\bar{\mu}_{i})t-\tilde{\mu}_{i}}{\mu_{i}(1+t^{2})+\bar{\mu}_{i}(1-t^{2})-\tilde{\mu}_{i}2t}- \frac{(\eta_{i}-\bar{\eta}_{i})t-\tilde{\eta}_{i}}{\eta_{i}(1+t^{2})+\bar{\eta}_{i}(1-t^{2})-\tilde{\eta}_{i}2t}}\nonumber\\ &\textstyle{+\frac{(\lambda_{i}-\bar{\lambda}_{i})t-\tilde{\lambda}_{i}}{\lambda_{i}(1+t^{2})+\bar{\lambda}_{i}(1-t^{2})-\tilde{\lambda}_{i}2t}- \frac{(\rho_{i}-\bar{\rho}_{i})t-\tilde{\rho}_{i}}{\rho_{i}(1+t^{2})+\bar{\rho}_{i}(1-t^{2})-\tilde{\rho}_{i}2t}=0}. \end{align}\par The equation in (42) can then be solved using the one-dimensional search method in \cite{b101}. This completes the proof of \textbf{Theorem~1}. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Let $X$ be a (finite dimensional) locally compact Hausdorff space and $B$ a $C\sp*$-algebra. We are concerned with the projection lifting problem from the corona algebra to the multiplier algebra of a $C\sp*$-algebra of the form $C(X)\otimes B$. An earlier result in this direction is that of J.~W. Calkin \cite{ca}, who showed that a projection in the quotient algebra $B(H)/K$ is liftable to a projection in $B(H)$, where $H$ is a separable infinite dimensional Hilbert space and $K$ is the $C\sp*$-algebra of compact operators on $H$; in our setting this corresponds to the case that $X$ is a one point set and $B$ is the $C\sp*$-algebra of compact operators. A generalization of this result was given by L. Brown and the author as follows. \begin{thm}\label{T:projectionlifting} Let $X$ be $[0,1]$, $(-\infty,\infty)$, $[0,\infty)$, or the unit circle. A projection $\mathbf{f}$ in the corona algebra of $C(X)\otimes K $, represented by a finite number of projection valued functions $(f_0,\cdots, f_n)$ under a suitable partition $\{ x_1,\dots, x_n \}$ of the interior of $X$, is liftable to a projection in the multiplier algebra if and only if there exist $l_0,\cdots,l_n$ satisfying the following conditions, where the $k_i$'s are the essential codimensions of $f_i(x_i)$ and $f_{i-1}(x_i)$ for $1 \leq i \leq n$: \begin{equation}\label{E:eq1} l_i-l_{i-1}=-k_i \quad \mbox{for} \quad i>0 \quad \mbox{and}\quad l_0-l_n=-k_0 \quad \mbox{in the circle case,} \end{equation} if for some $x$ in $X_i$, $f_i(x)$ has finite rank, then \begin{equation}\label{E:eq2} l_i \geq - \rank(f_i(x)), \end{equation} if for some $x$ in $X_i$, $1-f_i(x)$ has finite rank, then \begin{equation}\label{E:eq3} l_i \leq \rank(1-f_i(x)), \end{equation} if either end point of $X_i$ is infinite, then \begin{equation}\label{E:eq4} l_i=0. \end{equation} \end{thm} We note that our result was obtained from the following interesting proposition on continuous fields of Hilbert spaces. This result says, roughly speaking, that a continuous field of Hilbert spaces each of whose fibers has rank at least $n$ has a trivial subfield of rank $m$ for any $m \leq n$. (See also \cite[Proposition 3.2]{DNNP}.) \begin{prop}(\cite[Corollary A.5]{BL})\label{L:subprojection} If $X$ is a separable metric space whose covering dimension is at most 1 and $\mathcal{H}$ is a continuous field of Hilbert spaces over $X$ such that $\dim (H_x) \geq n $ for every $x \in X$, then $\mathcal{H}$ has a trivial subfield of rank $n$. Equivalently, if $p$ is a strongly continuous projection valued function on $X$ such that $\rank(p(x))\geq n$ for every $x \in X$, then there is a norm continuous projection valued function $q$ such that $q \leq p$ and $\rank(q(x))=n$ for every $x \in X$. \end{prop} We denote by $M(B)$ the multiplier algebra of a stable $C\sp*$-algebra $B$, and consider a projection valued map $\mathbf{p}:X \to M(B)$ which is continuous with respect to the strict topology. Note that the associated fiber $p(x)H_B$ is a Hilbert submodule, so that we have no proper notion of rank. But at least we can distinguish ``finiteness'' and ``infiniteness'' of a Hilbert (sub)module using finiteness and infiniteness of the corresponding fiberwise projection in $M(B)$.
We are going to show that: \begin{lem}\label{L:deform1} Let $\mathbf{p}$ be a continuous section from $X$ to $M(B)$ with respect to the strict topology on $M(B)$, where $B$ is a $\sigma_p$-unital, stable $C\sp*$-algebra of real rank zero such that $M(B)$ contains a halving full projection. In addition, when we denote its image on $x\in X$ by $p_x$, assume that $p_x$ is a full, properly infinite projection for each $x$. Then for any $\alpha \in K_0(B)$ there exists a norm continuous section $\mathbf{r}$ from $X$ to $B$ such that $r_x \leq p_x$ for every $x$ and $[r]_{K_0(B)}=\alpha$. \end{lem} Then, using the above lemma, we can generalize Theorem \ref{T:projectionlifting} to the case where $B$ is a $\sigma_p$-unital, purely infinite simple $C\sp*$-algebra, provided that $K_0(B)$ is an ordered group. \begin{thm}[Theorem \ref{T:liftingthm}]\label{T:liftingthm1} A projection $\mathbf{f}$ in $\mathcal{C}(C(X)\otimes B)$ represented by $(f_0,\cdots, f_n)$ under a suitable partition $\{ x_1,\dots, x_n \}$ of the interior of $X$, where the $f_i(x)$'s are full and properly infinite projections for all $i$ and $x\in X$, is liftable to a projection in $M(C(X)\otimes B)$ if and only if there exist $l_0,\cdots,l_n$ in $K_0(B)$ satisfying the following conditions, where the $k_i$'s are the (generalized) ``essential codimensions'' of $f_i(x_i)$ and $f_{i-1}(x_i)$ for $1 \leq i \leq n$: \begin{equation}\label{E:eq1} l_i-l_{i-1}=-k_i \quad \mbox{for} \quad i>0 \quad \mbox{and}\quad l_0-l_n=-k_0 \quad \mbox{in the circle case,} \end{equation} if for some $x$ in $X_i$, $f_i(x)$ belongs to $B$, then \begin{equation}\label{E:eq2} l_i \geq - [f_i(x)]_0, \end{equation} if for some $x$ in $X_i$, $1-f_i(x)$ belongs to $B$, then \begin{equation}\label{E:eq3} l_i \leq [1-f_i(x)]_0, \end{equation} if either end point of $X_i$ is infinite, then \begin{equation}\label{E:eq4} l_i=0. \end{equation} \end{thm} In addition, we are going to show that two projections in the corona algebra are equivalent under suitable conditions on the associated generalized essential codimensions. \section{The essential codimension}\label{S:codimension} As we observed in Theorem \ref{T:liftingthm1}, the technical tool other than Lemma \ref{L:deform1} is a K-theoretic notion called the generalized essential codimension. In fact, this generalizes the notion of the classical essential codimension of Brown, Douglas, and Fillmore. In this section, we give a careful treatment of this notion using rudiments of Kasparov's $KK$-theory, although it appeared in \cite{Lee} without the connection to the classical essential codimension being shown. Let $E$ be a (right) Hilbert $B$-module. We denote by $\mathcal{L}(E,F)$ the space of adjointable, bounded operators from $E$ to $F$. The ideal of `compact' operators from $E$ to $F$ is denoted by $\mathcal{K}(E,F)$. When $E=F$, we write $\mathcal{L}(E)$ and $\mathcal{K}(E)$ instead of $\mathcal{L}(E,E)$ and $\mathcal{K}(E,E)$. Throughout the paper, $A$ is a separable $C\sp{*}$-algebra, and all Hilbert modules are assumed to be countably generated over a separable $C\sp*$-algebra. We use the term representation for a $*$-homomorphism from $A$ to $\mathcal{L}(E)$. We let $H_B$ be the standard Hilbert module over $B$, which is $H\otimes B$, where $H$ is a separable infinite dimensional Hilbert space. We denote by $M(B)$ the multiplier algebra of $B$. It is well-known that $\mathcal{L}(H_B)=M(B\otimes K)$ and $\mathcal{K}(H_B)=B\otimes K$, where $K$ is the $C\sp{*}$-algebra of the compact operators on $H$ \cite{Kas80}.
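For the reader's convenience, we record the standard concrete description of $H_B$ and its $B$-valued inner product (a well-known fact, stated here only as orientation):
\[
H_B=\Big\{(b_n)_{n\in\mathbb{N}}\subset B \ \Big|\ \textstyle\sum_{n} b_n^*b_n \ \text{converges in norm in } B\Big\},\qquad \big\langle (a_n),(b_n)\big\rangle=\textstyle\sum_{n} a_n^* b_n,
\]
with the norm $\|\xi\|=\|\langle \xi,\xi\rangle\|^{1/2}$.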
Let us recall the definition of the Kasparov group $\KK(A,B)$. We refer the reader to \cite{Kas81} for a general introduction to the subject. A $\KK$-cycle is a triple $(\phi_0,\phi_1,u)$, where $\phi_i:A \to \mathcal{L}(E_i)$ are representations and $u \in \mathcal{L}(E_0,E_1)$ satisfies \begin{itemize} \item[(i)] $u\phi_0(a)-\phi_1(a)u \in \mathcal{K}(E_0,E_1)$, \item[(ii)]$\phi_0(a)(u^*u-1)\in \mathcal{K}(E_0)$, $\phi_1(a)(uu^*-1)\in \mathcal{K}(E_1)$. \end{itemize} The set of all $KK$-cycles will be denoted by $\mathbb{E}(A,B)$. A cycle is degenerate if \[ u\phi_0(a)-\phi_1(a)u=0, \quad\phi_0(a)(u^*u-1)=0,\quad \phi_1(a)(uu^*-1)=0.\] An operator homotopy through $KK$-cycles is a homotopy $(\phi_0,\phi_1,u_t)$, where the map $t \to u_t$ is norm continuous. The equivalence relation $\underset{\text{oh}}{\sim}$ is generated by operator homotopy and addition of degenerate cycles up to unitary equivalence. Then $\KK(A,B)$ is defined as the quotient of $\mathbb{E}(A,B)$ by $\underset{\text{oh}}{\sim}$. When we consider non-trivially graded $C\sp{*}$-algebras, we define a triple $(E,\phi,F)$, where $\phi:A \to \mathcal{L}(E)$ is a graded representation, and $F\in \mathcal{L}(E)$ is of odd degree such that $F\phi(a)-\phi(a)F$, $(F^2-1)\phi(a)$, and $(F-F^*)\phi(a)$ are all in $\mathcal{K}(E)$, and we call it a Kasparov $(A,B)$-module. Other notions, such as degenerate cycles and operator homotopy, are defined in similar ways. In the above, we introduced the Fredholm picture of the $\KK$-group. There is an alternative way to describe the elements of the $\KK$-group. The Cuntz picture is described by a pair of representations $\phi,\psi:A \to \mathcal{L}(H_B)=M(B\otimes K)$ such that $\phi(a)-\psi(a)\in \mathcal{K}(H_B)=B\otimes K$. Such a pair is called a Cuntz pair. They form a set denoted by $\mathbb{E}_h(A,B)$. A homotopy of Cuntz pairs consists of a Cuntz pair $(\Phi,\Psi):A \to M(C([0,1])\otimes (B\otimes K))$. The quotient of $\mathbb{E}_h(A,B)$ by homotopy equivalence is a group $\KK_h(A,B)$, which is isomorphic to $\KK(A,B)$ via the mapping sending $[\phi,\psi]$ to $[\phi,\psi,1]$. \begin{defn}[the generalized essential codimension]\label{D:BDF} Given two projections $p,q \in M(B\otimes K)$ such that $p-q \in B\otimes K$, we consider representations $\phi,\psi $ from $\mathbb{C}$ to $M(B\otimes K)$ such that $\phi(1)=p,\psi(1)=q$. Then $(\phi,\psi)$ is a Cuntz pair, so we define $[p:q]$ as the class $[\phi,\psi]\in \KK_h(\mathbb{C},B) \simeq K_0(B)$ and call it the (generalized) essential codimension of $p$ and $q$. \end{defn} \begin{rem} We recall that BDF's original definition of the essential codimension of $p$ and $q$ in $B(H)$ is given by the Fredholm index of $V^*W$, where $V$ and $W$ are isometries such that $VV^*=q$ and $WW^*=p$ \cite{BDF}. This looks different from the definition above. But in the case $B=K$ or $\mb{C}$, $[p:q]$ is mapped to $[\phi,\psi,1]$ in $\KK(\mb{C},\mb{C})$. Then the map from $\KK(\mb{C},\mb{C})$ to $\mb{Z}$ sends $[\phi,\psi,1]$ to the Fredholm index of $qp$, viewing $qp$ as an operator from $pH$ to $qH$. Thus it equals the Fredholm index of $V^*W$. \end{rem} The following demonstrates that the generalized essential codimension behaves like the original essential codimension (see \cite[Section 1]{B}). \begin{lem}\label{L:properties} $[\,:\,]$ has the following properties.
\begin{enumerate} \item $[p_1:p_2]=[p_1]_0-[p_2]_0$ if either $p_1$ or $p_2$ belongs to $B$, where $[p_i]_0$ is the $K_0$-class of a projection $p_i$, \item $[p_1:p_2]=-[p_2:p_1],$ \item $[p_1:p_3]=[p_1:p_2]+[p_2:p_3],\text{when sensible},$ \item $[p_1+q_1:p_2+q_2]=[p_1:p_2]+[q_1:q_2], \, \text{when sensible.}$ \end{enumerate} \end{lem} \begin{proof} Let $\phi_i$'s and $\psi_i$'s be elements in $\Hom(\mb{C},M(B))$ such that $\phi_i(1)=p_i$ and $\psi_i(1)=q_i$, respectively. Without loss of generality we let $B$ be a stable $C\sp{*}$-algebra and let $\Theta_B: M_2(B) \to B$ be an inner isomorphism. Then \begin{itemize} \item[(1)] In general, the isomorphism \[KK_h(A,B) \to KK(A,B)\] maps $[\phi_1,\phi_2]$ to a cycle $[\phi_1,\phi_2,1]$. Note that, when $A=\mb{C}$, \[ (\phi_1,\phi_2,1)=\left(H_B \oplus H_B, \left(\begin{array}{cc} \phi_1 & \\ & \phi_2 \\ \end{array} \right), \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right) \right)\] is a compact perturbation of \[\left(H_B \oplus H_B, \left(\begin{array}{cc} p_1 & \\ & p_2 \\ \end{array} \right), \left( \begin{array}{cc} 0 & p_1p_2 \\ p_2p_1 & 0 \\ \end{array} \right) \right). \] The latter is decomposed into $((1-p_1)(H_B) \oplus( 1-p_2)(H_B),0,0)\oplus \left(p_1(H_B) \oplus p_2(H_B), \left(\begin{array}{cc} p_1 & \\ & p_2 \\ \end{array} \right), \left( \begin{array}{cc} 0 & p_1p_2 \\ p_2p_1 & 0 \\ \end{array} \right) \right)$ so that $(\phi_1,\phi_2,1)$ is represented as \[\left(p_1(H_B) \oplus p_2(H_B), \left(\begin{array}{cc} p_1 & \\ & p_2 \\ \end{array} \right), \left( \begin{array}{cc} 0 & p_1p_2 \\ p_2p_1 & 0 \\ \end{array} \right) \right).\] Now we view $p_2p_1$ as an essentially unitary operator from $p_1(H_B)$ to $p_2(H_B)$; thus its generalized Fredholm index is given by $[p_1]_0-[p_2]_0$ (see \cite{Mi}). In fact, it is realized by the following diagram \[ \begin{CD} p_1(H_B)\oplus (1-p_1)(H_B)@>>> p_2(H_B)\oplus (1-p_2)(H_B)\\ @V I\oplus U^* VV @A I\oplus W AA \\ p_1(H_B)\oplus H_B @> p_2p_1\oplus I >> p_2(H_B)\oplus H_B \end{CD} \] where $U,W \in M(B)=\mc{L}(H_B)$ are isometries such that $UU^*=1-p_1, WW^*=1-p_2$. \item[(2)] $\left( \begin{array}{cc} \cos (\frac{\pi}{2}t) & \sin(\frac{\pi}{2}t) \\ -\sin(\frac{\pi}{2}t) & \cos (\frac{\pi}{2}t)\\ \end{array} \right)$ defines a homotopy from $\left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \phi_1 \\ \end{array} \right) $ to $\left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \phi_2 \\ \end{array} \right)$. Thus \[\begin{split} [\phi_1,\phi_2]+[\phi_2,\phi_1]&=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \phi_2 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \phi_1 \\ \end{array} \right) \right]\\ &=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \phi_2 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \phi_2 \\ \end{array} \right) \right]\\ &=0. \end{split}\] \item[(3)] Similarly, \[\begin{split} [\phi_1,\phi_2]+[\phi_2,\phi_3]&=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \phi_2 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \phi_3 \\ \end{array} \right) \right]\\ &=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \phi_1 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \phi_3 \\ \end{array} \right) \right]\\ &=[\phi_2,\phi_2]+[\phi_1,\phi_3]\\ &=[\phi_1,\phi_3]. \end{split}\] \item[(4)] Since $\phi_i$ and $\psi_i$ are orthogonal, i.e. $\phi_i \psi_i=0$, $\phi_i+\psi_i$ is a homomorphism.
Note that $\left( \begin{array}{cc} \phi_i & 0 \\ 0 & \psi_i \\ \end{array} \right)$ is homotopic to $\left( \begin{array}{cc} \phi_i+\psi_i & 0 \\ 0 & 0 \\ \end{array} \right)$ up to stability. \[ \begin{split} [\phi_1,\phi_2]+[\psi_1,\psi_2]&=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_1 & 0 \\ 0 & \psi_1 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_2 & 0 \\ 0 & \psi_2 \\ \end{array} \right) \right]\\ &=\left[ \Theta_B\circ \left( \begin{array}{cc} \phi_1+\psi_1 & 0 \\ 0 & 0 \\ \end{array} \right), \Theta_B\circ \left( \begin{array}{cc} \phi_2+\psi_2 & 0 \\ 0 & 0 \\ \end{array} \right) \right]\\ &=[\phi_1+\psi_1,\phi_2+\psi_2]. \end{split} \] \end{itemize} \end{proof} \begin{lem}\label{L:unitaryequi} Let $p$ and $q$ be projections in $M(B)$ such that $p-q \in B$. If there is a unitary $U \in 1+ B$ such that $UpU^*=q$, then $[p:q]=0$. In particular, if $\| p- q\|<1 $, then $[p:q]=0$. \end{lem} \begin{proof} Since $(\phi_1,\phi_2,1)$ is unitarily equivalent to $(\phi_1,\phi_1,1)$, which is a degenerate element, it follows that $[p:q]=0$ in $\KK(\mb{C},B)$. If $\|p-q\| <1$, we can take $a=(1-q)(1-p)+qp \in 1+ B$. Since $aa^*=a^*a=1-(p-q)^2 \in 1+ B$, \[\|a^*a -1\|=\|p-q\|^2 < 1, \quad \|aa^*-1\|=\|p-q\|^2 < 1. \] Moreover, it follows that \begin{align*} ap &= qp =qa. \end{align*} Hence, $a$ is an invertible element and $U=a(a^*a)^{-\frac{1}{2}} \in 1+B$ is a unitary such that $UpU^*=q$. \end{proof} \begin{prop}\label{P:restrictions} Let $p, q$ be projections in $M(B)$ such that $p-q \in B$. Suppose that $(K_0(B),K_0(B)^{+})$ is an ordered group, where the positive cone is $K_0(B)^{+}=\{[p]_0\mid p \in \mc{P}_{\infty}(B)\}$. \begin{itemize} \item [(1)] If $q \in B$, then $[p:q] \geq -[q]_0$, \item [(2)] If $1-q \in B$, then $[p:q] \leq [1-q]_0 $. \end{itemize} \end{prop} \begin{proof} \begin{itemize} \item[(1)] Note that $[p:q]=[p]_0-[q]_0$ by Lemma \ref{L:properties}-(1). Hence $[p:q]\geq -[q]_0$. \item[(2)] Notice that $[1-p]_0-[1-q]_0\in K_0(B)$. By Lemma \ref{L:properties}-(4), $[p:q]=-[1-p:1-q]=[1-q]_0-[1-p]_0\leq [1-q]_0$. \end{itemize} \end{proof} \begin{lem}\label{L:homotopy} Suppose projections $p_t, q_t \in M(B)$ are defined for each $t$ in a connected subset of $\mathbb{R}$. Then $[p_t:q_t]$ is constant if $t \mapsto p_t - q_t$ is norm continuous in $ B$. \end{lem} \begin{proof} Straightforward. \end{proof} \begin{rem} Originally, the proof of this lemma for $B=K$ and $M(B)=B(H)$ was nontrivial (see \cite[Corollary 2.6]{BL}). However, it is now built into the definition of $\KK_h$. \end{rem} The next theorem, for which we do not claim originality, was proved in \cite{Lee} and exhibits the most important property of the essential codimension (see \cite[Theorem 2.7]{BL}). \begin{thm}\label{T:BDF} Let $B$ be a non-unital ($\sigma$-unital) purely infinite simple $C\sp{*}$-algebra such that $M(B\otimes K)$ has real rank zero. Suppose that $p$ and $q$ are two projections in $M(B\otimes K)=\mathcal{L}(H_B)$ such that $p-q \in B\otimes K$ and neither of them is in $B\otimes K$. If $[p:q] \in K_{0}(B)$ vanishes, then there is a unitary $u$ in $ 1 + B\otimes K$ such that $upu^*=q $. \end{thm} \begin{rem}\label{R:properequivalence} \begin{itemize} \item[(i)] As was pointed out in \cite{DE} and \cite{Lee}, the crucial point of Theorem \ref{T:BDF} is that the implementing unitary $u$ has the form ``identity + compact''. This requirement, needed to obtain a reasonable generalization of BDF's original statement, has been very useful in $\KK$-theory (see \cite{DE} and \cite{Lee}).
\item[(ii)] Without restrictions on the $C\sp*$-algebra $B$, but under the assumption that $p \in M(B\otimes K)$ is a halving projection, any compact perturbation of $p$ is of the form $upu^*$, where $u$ is a unitary in $ 1 + B\otimes K$ \cite{Zh3}. \end{itemize} \end{rem} \section{Deformation of a projection and its applications}\label{S:deformation} Let $B$ be a simple stable $C\sp*$-algebra such that the multiplier algebra $M(B)$ has real rank zero. Let $X$ be $[0,1]$, $[0,\infty)$, $(-\infty,\infty)$, or $\mathbb{T}=[0,1]/\{0,1\}$. When $X$ is compact, let $I=C(X)\otimes B$, which is the $C\sp{*}$-algebra of (norm continuous) functions from $X$ to $B$. When $X$ is not compact, let $I=C_0(X)\otimes B$, which is the $C\sp*$-algebra of continuous functions from $X$ to $B$ vanishing at infinity. Then $M(I)$ is given by $C_b(X, M(B)_s)$, the space of bounded, strictly continuous functions from $X$ to $M(B)$, where $M(B)_s$ denotes $M(B)$ equipped with the strict topology. Let $\mathcal{C}(I)=M(I)/I$ be the corona algebra of $I$, and let $\pi:M(I) \to \mathcal{C}(I)$ be the natural quotient map. Then an element $\mathbf{f}$ of the corona algebra can be represented as follows: Consider a finite partition of $X$, or of $X \smallsetminus \{0,1\}$ when $X=\mathbb{T}$, given by partition points $x_1 < x_2 < \cdots < x_n$, all of which are in the interior of $X$, dividing $X$ into $n+1$ (closed) subintervals $X_0,X_1,\cdots,X_{n}$. We can take $f_i \in C_b(X_i, M(B)_s)$ such that $f_i(x_i) -f_{i-1}(x_i)\in B$ for $i=1,2,\cdots,n$ and $f_0(x_0)-f_n(x_0) \in B$, where $x_0=0=1$ if $X$ is $\mathbb{T}$. The following lemma and the statement after it were shown in \cite{Lee}. \begin{lem} The coset in $\mathcal{C}(I)$ represented by $(f_0,\cdots,f_n)$ consists of the functions $f$ in $M(I)$ such that $f- f_i \in C(X_i)\otimes B$ for every $i$ and $f-f_i $ vanishes (in norm) at any infinite end point of $X_i$. \end{lem} Similarly, if $X$ is compact, then $(f_0,\cdots,f_n)$ and $(g_0,\cdots,g_n)$ define the same element of $\mathcal{C}(I)$ if and only if $f_i - g_i \in C(X_i)\otimes B$ for $i=0,\cdots,n$. If $X$ is $[0,\infty)$, then $(f_0,\cdots,f_n)$ and $(g_0,\cdots,g_n)$ define the same element of $\mathcal{C}(I)$ if and only if $f_i - g_i \in C(X_i)\otimes B$ for $i=0,\cdots,n-1$ and $f_n -g_n \in C_0([x_n,\infty))\otimes B$. If $X=(-\infty,\infty)$, then $(f_0,\cdots,f_n)$ and $(g_0,\cdots,g_n)$ define the same element of $\mathcal{C}(I)$ if and only if $f_i - g_i \in C(X_i)\otimes B$ for $i=1,\cdots,n-1$, $f_n -g_n \in C_0([x_n,\infty))\otimes B$, and $f_0-g_0 \in C_0((-\infty,x_1])\otimes B$.\\ A virtue of the above unnecessarily complicated description of an element in $\mc{C}(I)$ is the following theorem, which says that a projection in $\mc{C}(I)$ is locally liftable. \begin{thm}\cite[Theorem 3.2]{Lee}\label{T:locallift} Let $I$ be $C(X)\otimes B$ or $C_0(X)\otimes B$, where $B$ is a stable $C\sp{*}$-algebra such that $M(B)$ has real rank zero. Then a projection $\mathbf{f}$ in $M(I)/I$ can be represented by $(f_0,f_1,\cdots, f_n)$ as above, where $f_i$ is a projection valued function in $C(X_i)\otimes M(B)_s$ for each $i$. \end{thm} \begin{rem} The theorem says that any projection $\mathbf{f}$ in the corona algebra of $C(X)\otimes B$, for suitable $C\sp{*}$-algebras $B$, can be viewed as a ``locally trivial fiber bundle'' with Hilbert modules as fibers, in the sense of Dixmier and Douady \cite{DixDua}. \end{rem} The following theorem will be used as one of our technical tools.
\begin{thm}\cite[Theorem 3.3]{Lee}\label{T:lifting} Let $I$ be $C(X)\otimes B$, where $B$ is a $\sigma$-unital, non-unital, purely infinite simple $C\sp{*}$-algebra such that $M(B)$ has real rank zero or $K_1(B)=0$ (see \cite{Zh}). Let a projection $\mathbf{f}$ in $M(I)/I$ be represented by $(f_0,f_1,\cdots, f_n)$, where $f_i$ is a projection valued function in $C(X_i)\otimes M(B)_s$ for each $i$, as in Theorem \ref{T:locallift}. If $k_i=[f_i(x_i):f_{i-1}(x_i)]=0$ for all $i$, then the projection $\mathbf{f}$ in $M(I)/I$ lifts. \end{thm} Now we want to derive necessary conditions for a projection in $\mc{C}(I)$ to lift. If $\mathbf{f}$ is liftable to a projection $g$ in $M(I)$, we can use the same partition of $X$ so that $(g_0,\cdots,g_n)$ and $(f_0, \cdots, f_n)$ define the same element $\mathbf{f}$, where $g_i$ is the restriction of $g$ to $X_i$. Then, for each $i$, $[g_i(x):f_i(x)]$ is defined for all $x$. By Lemma \ref{L:homotopy} this function must be constant on $X_i$, since $g_i -f_i$ is norm continuous. So we can let $l_i= [g_i(x):f_i(x)]$. \\ Since $g_i(x_i)=g_{i-1}(x_i)$, we have $[g_i(x_i):f_i(x_i)]+[f_i(x_i):f_{i-1}(x_i)]=[g_{i-1}(x_i):f_{i-1}(x_i)]$ by Lemma \ref{L:properties}-(3). In other words, \begin{equation}\label{E:eq1} l_i-l_{i-1}=-k_i \quad \mbox{for} \quad i>0 \quad \mbox{and}\quad l_0-l_n=-k_0 \quad \mbox{in the circle case}. \end{equation} Moreover, if $(K_0(B),K_0(B)^{+})$ is an ordered group with the positive cone $K_0(B)^{+}=\{[p]_0\mid p \in \mc{P}_{\infty}(B)\}$, then applying Proposition \ref{P:restrictions} and Lemma \ref{L:unitaryequi} to the projections $g_i(x)$ and $f_i(x)$ we obtain the following restrictions on $l_i$. \begin{itemize} \item[(i)] If for some $x$ in $X_i$, $f_i(x)$ belongs to $B$, then \begin{equation}\label{E:eq2} l_i \geq - [f_i(x)]_0, \end{equation} \item[(ii)] If for some $x$ in $X_i$, $1-f_i(x)$ belongs to $B$, then \begin{equation}\label{E:eq3} l_i \leq [1-f_i(x)]_0, \end{equation} \item[(iii)] If either end point of $X_i$ is infinite, then \begin{equation}\label{E:eq4} l_i=0. \end{equation} \end{itemize} Are these necessary conditions sufficient? Our strategy for showing the converse is to perturb the $f_i$'s so that $[f_i(x_i):f_{i-1}(x_i)]$ vanishes for all $i$. To do this we need to deform a field of orthogonal projections by embedding a field of projections with arbitrary ``rank'' information, given by a K-theoretical term. This requires that the field of Hilbert modules defined by a field of projections be large enough that a field of submodules of arbitrary rank can be embedded into it. Unlike for a Hilbert space, there is no proper algebraic notion of rank for a multiplier projection whose image can be regarded as a Hilbert submodule of the standard Hilbert module $H_B$. However, at least we want to distinguish submodules in terms of finiteness and infiniteness of the corresponding projections. Recall that a projection $p$ in a unital $C\sp*$-algebra $A$ is called a \emph{halving} projection if both $1-p$ and $p$ are Murray-von Neumann equivalent to the unit in $A$, and a projection $p$ in $A$ is called properly infinite if there are mutually orthogonal projections $e$, $f$ in $A$ such that $e \le p$, $f \le p$, and $e \sim f \sim p$. Then it is easy to check that a projection in the multiplier algebra of a stable $C\sp*$-algebra is Murray-von Neumann equivalent to $1$ if and only if it is full and properly infinite (an explicit example is given below). We denote by $\mf{P}$ the set of all full and properly infinite projections in $M(B)$.
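To fix ideas, here is the simplest example of a halving projection; this is a standard observation, using only the isomorphism $H_B\cong H_B\oplus H_B$ for the standard module over a stable algebra:
\[
M(B\otimes K)\cong \mathcal{L}(H_B),\qquad H_B\cong H_B\oplus H_B,\qquad H=\left(\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right)\in \mathcal{L}(H_B\oplus H_B),
\]
so that $H\sim 1-H\sim 1$; in particular, $H$ is a full and properly infinite projection.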
We assume $M(B)$ has a halving projection and fix such a halving projection $H$. For the moment we allow $X$ to be any finite dimensional topological space. Let $\mathbf{p}$ be a section, i.e., a continuous map from $X$ to $M(B)$ with respect to the strict topology on $M(B)$. We denote its image on $x\in X$ by $p_x$. By the observations so far, it is natural to impose the condition that $p_x$ be full and properly infinite for every $x \in X$ in order to have an analogue of an infinite dimensional Hilbert bundle, or a continuous field of separable infinite dimensional Hilbert spaces. We need the Michael selection theorem as an important step toward the main result, Lemma \ref{L:deform}. \begin{thm}(See \cite{Mich})\label{T:Selection} Let $X$ be a paracompact finite dimensional topological space. Let $Y$ be a complete metric space and $S$ a set-valued lower semicontinuous map from $X$ to closed subsets of $Y$, i.e. for each open subset $U$ of $Y$ the set $\{x \in X \mid S(x) \cap U \neq \emptyset \}$ is open. Let $\mathcal{R}$ be the range $\{S(x): x\in X\}$ of $S$. Then, if for some $m> \dim X$ we have \begin{itemize} \item[(i)] $S(x)$ is $m$-connected for every $x\in X$; \item[(ii)] each $R \in \mathcal{R}$ has the property that every point $x \in R$ has an arbitrarily small neighborhood $V(x)$ such that $\pi_m(R^{'}\cap V(x))=\{e\}$ for every $R^{'} \in \mathcal{R}$; \end{itemize} then there exists a continuous map $s$ from $X$ to $Y$ such that $s(x)$ is in $S(x)$ for all $x \in X$. \end{thm} Here, $\pi_m$ is the $m^{th}$ homotopy group, defined by $\pi_m(Z)=[S^m, Z]$. The following lemma was actually proved in \cite{KuNg} under the assumption that $p_x$ is halving for every $x$, but this assumption can be weakened while keeping the same proof \cite{Ng}. \begin{lem}\label{L:deform} Let $B$ be a $\sigma_p$-unital, stable $C\sp*$-algebra and $\mathbf{p}$ a section, i.e. a map from a compact, finite dimensional topological space $X$ to $M(B)$ which is continuous with respect to the strict topology on $M(B)$. Suppose $p_x$ is a full, properly infinite projection in $M(B)$ for each $x$. Then there exists a continuous map $u:X \to \mf{W}$ such that $u^*_x u_x=p_x$ and $u_xu^*_x=H \in M(B)$, where $\mf{W}$ is the set of all partial isometries with initial projection in $\mf{P}$ and range projection $H$. \end{lem} \begin{proof} We define a map $F_H: \mf{W} \to \mf{P}$ by $F_H(v)=v^*Hv$. Then it can be shown that $F_H^{-1}(p)$ is closed and contractible, and that $F_H$ is an open mapping, as in \cite{KuNg}. Let $Y$ be the norm closed ball in $M(B)$ of elements with norm less than or equal to 2. If we define a set-valued map $S:X \to 2^{Y}$ by $S(x)=F_H^{-1}(p_x)$, it can be shown that this map is lower semi-continuous, using the openness of the map $F_H$ (see also \cite{KuNg}). Then we apply the Michael selection theorem to the map $S$, so that there is a cross section $s:X \to Y$ such that $s(x)\in S(x)$. Then we define $u_x=Hs(x)$. Since $F_H(s(x))=p_x$, it follows that $u_xu^*_x=H$ and $u^*_xu_x=p_x$. \end{proof} Recall that a closed submodule $E$ of a Hilbert module $F$ over $B$ is complementable if there is a submodule $G$ orthogonal to $E$ such that $E\oplus G =F$. The Kasparov stabilization theorem says that a countably generated closed submodule of $H_B$ is complementable, whence it is the image of a projection in $\mathcal{L}(H_B)$. Let $\mathfrak{F}=((F_x)_{\{x\in X\}},\Gamma)$ be a continuous field of Hilbert modules.
A continuous field of Hilbert modules $((E_x)_{\{x\in X\}},\Gamma')$ is said to be complementable to $\mathfrak{F}$ when $E_x$ is a complementable submodule of $F_x$ for each $x \in X$. Then the following is a geometric interpretation of Lemma \ref{L:deform}. \begin{prop} A complementable subfield $((E_x)_{\{x\in X\}},\Gamma')$ of the constant module $((H_B)_{\{x\in X\}}, \Gamma)$, where $\Gamma$ consists of the (norm) continuous sections from $X$ to $H_B$, is in one to one correspondence with a continuous projection-valued map $p: X \to \mathcal{L}(H_B)$, where the latter is equipped with the $*$-strong topology (in general, we say that $\{T_i\}$ in $\mathcal{L}(E)$ converges to $T$ $*$-strongly if and only if both $T_i(\xi) \to T(\xi)$ and $T_i^*(\xi) \to T^*(\xi)$ in $E$ for all $\xi\in E$). \end{prop} \begin{proof} Since $E_x$ is a complementable submodule of $H_B$, it is the image of a projection $p_x \in \mathcal{L}(H_B)$. Thus it defines a map $p:X \to \mathcal{L}(H_B)$. Note that in general, when $\mathcal{H}=((H_x)_{x\in X}, \Gamma)$ is a continuous field of Hilbert modules and $((E_x)_{x\in X}, \Gamma^{'})$ is a complemented subfield of $\mathcal{H}$, the map $x \mapsto p_x(\gamma(x))$ is continuous if and only if $ x \mapsto \|p_x(\gamma(x))\|$ is continuous for $\gamma \in \Gamma$. In addition, $\Gamma^{'}=\{x \mapsto p_x(\gamma(x)) \,\, \text{ for $\gamma \in \Gamma$}\}$. So if $\mathcal{H}$ is a trivial field, the map $x \mapsto p_x(\xi)$ for $\xi \in H_B$ is in $\Gamma^{'}$, and thus continuous. This implies that the map $x \mapsto p_x$ is strongly continuous. Conversely, suppose that we are given a strongly continuous map $x \mapsto p_x \in \mathcal{L}(H_B)$. Let $E_x=p_x(H_B)$, and define a section $\gamma_{\xi}:x \mapsto p_x(\xi)$ for each $\xi \in H_B$. Let $\Lambda=\{\gamma_{\xi} \in \prod_{x\in X} E_x \mid \xi \in H_B\}$ and $\Gamma^{''}=\{\gamma \in \prod_{x}E_x \mid \gamma \, \, \text{satisfies ($*$)}\}$.\\ ($*$) For any $ x \in X$ and $\epsilon >0$, there exists $\gamma^{'} \in \bar{\Lambda}$ such that $\| \gamma(y)- \gamma^{'}(y)\| \le \epsilon$ for all $y$ in a neighborhood of $x$.\\ Then we can check that $((E_x)_{x\in X},\Gamma^{''})$ is a complemented subfield of a trivial field. \end{proof} Thus we have an analogue, in the Hilbert module setting, of Dixmier's well-known triviality theorem on continuous fields of Hilbert spaces. We note that the strict topology on $M(B\otimes K) \simeq \mathcal{L}(H_B)$ coincides with the $*$-strong topology on bounded sets (see \cite[Proposition C.7]{RaeWi}). \begin{cor}\label{C:triviality} Let $B$ be a stable $C\sp*$-algebra and $X$ a finite dimensional compact Hausdorff space. Then a complementable subfield of Hilbert modules associated with a projection-valued map $p:X \to M(B)_s$ is isomorphic to a trivial field, provided that each $p_x$ is a full, properly infinite projection in $M(B)$. \end{cor} \begin{proof} Lemma \ref{L:deform} says that $p \in M(C(X)\otimes B)$ is globally full and properly infinite if $p_x$ is full and properly infinite for each $x$ in $X$. Thus $p$ is Murray-von Neumann equivalent to $1_{M(C(X)\otimes B)}$. \end{proof} \begin{lem} Under the same hypotheses on $B$ and $X$ as in Corollary \ref{C:triviality}, let $\mathbf{p}$ be a section, i.e. a map from $X$ to $M(B)$ which is continuous with respect to the strict topology on $M(B)$. Suppose that there exists a continuous map $u:X \to \mf{W}$ such that $u^*_x u_x=p_x$, $u_xu^*_x=H \in M(B)$.
In addition, given an $\alpha \in K_0(B)$, suppose that there exists a projection $q \in HBH \subset B$ such that $[q]_0=\alpha \in K_0(B)$. Then there exists a norm continuous section $\mathbf{r}$ from $X$ to $B$ such that $r_x \leq p_x$ for every $x$ and $[r]_{K_0(B)}=\alpha$. \end{lem} \begin{proof} Since $q \in B$, the map $x \mapsto qu_x \in B $ is norm continuous, so that $x \mapsto r_x=(qu_x)^*qu_x=u_x^*qu_x$ is norm continuous. Note that $(qu_x)(qu_x)^*=qu_xu_x^*q=qHq=q$. Thus $[r]_{K_0(B)}=[r_x]_0=[q]_0=\alpha$. \end{proof} In summary, we state what we need in its final form. \begin{lem}\label{L:subprojection} Let $\mathbf{p}$ be a section from $X$ to $M(B)$ with respect to the strict topology on $M(B)$, where $B$ is a $\sigma_p$-unital, stable $C\sp*$-algebra of real rank zero such that $M(B)$ contains a halving full projection. In addition, assume that $p_x$ is a full, properly infinite projection for each $x$. Then for any $\alpha \in K_0(B)$ there exists a norm continuous section $\mathbf{r}$ from $X$ to $B$ such that $r_x \leq p_x$ for every $x$ and $[r]_{K_0(B)}=\alpha$. \end{lem} \begin{proof} If we denote a halving (strictly) full projection by $H$, then $HBH$ is a full hereditary subalgebra of $B$, so it is stably isomorphic to $B$ by \cite[Corollary 2.6]{Br}. Hence $K_0(HBH)=K_0(B)$. Since $B$ is a $C\sp*$-algebra of real rank zero, so is $HBH$ by \cite[Corollary 2.8]{BP}. Thus it satisfies strong $K_0$-surjectivity, i.e. there exists a projection $q$ in $HBH$ such that $[q]_0=\alpha$. Then the conclusion follows from Lemma \ref{L:deform} and the preceding lemma. \end{proof} Now we restrict ourselves to the case $I=C(X)\otimes B$, where $X$ is $[0,1]$, $[0,\infty)$, $(-\infty,\infty)$, or $[0,1]/\{0,1\}$, and $B$ is a $\sigma_p$-unital, purely infinite simple $C\sp*$-algebra such that $M(B)$ has real rank zero and has a full halving projection $H$. From now on we assume that $K_0(B)$ is an ordered abelian group, and we drop the subscript $0$ in the expression of an element of $K_0(B)$. Note that $B$ is a stable $C\sp*$-algebra by Zhang's dichotomy \cite{Zh}. Also, it satisfies strong $K_0$-surjectivity \cite{Lin96}. Thus we can apply Lemma \ref{L:subprojection} to the (one dimensional) closed intervals $X_i$, which come from a partition of $X$ associated with a local representation of a projection in the corona algebra of $I$. \begin{thm}\label{T:liftingthm} A projection $\mathbf{f}$ in $\mathcal{C}(I)$ represented by $(f_0,\cdots, f_n)$, where the $f_i(x)$'s are halving projections for all $i$ and $x\in X$, is liftable to a projection in $M(I)$ if and only if there exist $l_0,\cdots,l_n$ satisfying the above conditions (\ref{E:eq1}), (\ref{E:eq2}), (\ref{E:eq3}), (\ref{E:eq4}). \end{thm} \begin{proof} Given $l_i$'s satisfying (\ref{E:eq1}), (\ref{E:eq2}), (\ref{E:eq3}), (\ref{E:eq4}), we will show there exist $g_0, \cdots, g_n$ such that $[g_i(x_i):g_{i-1}(x_i)]=0$ for $i>0$, and $[g_0(x_0):g_n(x_0)]=0$ in the circle case. First observe that if we have $g_{i}$'s such that $l_i=[g_i(x_i):f_i(x_i)]$, then we have $[g_i(x_i):g_{i-1}(x_i)]=0 $ by (\ref{E:eq1}). Thus it is enough to show that there exist $g_0, \cdots, g_n$ such that $[g_i(x_i):f_{i}(x_i)]=l_i$. \begin{itemize} \item[$l_i=0$]: Take $g_{i}=f_{i}$. \item[$l_i>0$]: By Lemma \ref{L:subprojection}, the continuous field determined by $1-f_{i}$ has a trivial subfield which is given by a projection valued function $q \leq 1-f_i$ such that $[q(x)]_0=l_i$. So we take $g_i=f_i+q$.
\item[$l_i<0$]: Similarly, the continuous field determined by $f_i$ has a trivial subfield which is given by a projection valued function $q' \leq f_i$ such that $[q'(x)]_0=-l_i$. So we take $g_i=f_i-q'$. \end{itemize} Then the conclusion follows from Theorem \ref{T:lifting}. \end{proof} We now want to investigate some equivalence relations for projections using the above arguments. As before, let $\mathbf{p}$ and $\mathbf{q}$ be two projections in $\mathcal{C}(I)$. \begin{lem}\label{L:onetoone} Let $(p_0, \cdots, p_n)$ and $(q_0, \cdots, q_n)$ be local liftings of $\mathbf{p}$ and $\mathbf{q}$ such that $q_i(x)$ is a halving projection for each $x$ in $X_i$. If $ \sum_{i=1}^n[p_i(x_i):p_{i-1}(x_i)]=\sum_{i=1}^n [q_i(x_i):q_{i-1}(x_i)]$, or $\sum_{i=1}^n[p_i(x_i):p_{i-1}(x_i)]+ [p_0(x_0):p_n(x_0)]=\sum_{i=1}^n [q_i(x_i):q_{i-1}(x_i)]+[q_0(x_0):q_n(x_0)]$ in the circle case, then we can find a perturbation $(q_0', \cdots, q_n')$ of $\mathbf{q}$ such that $[p_i(x_i):p_{i-1}(x_i)]= [q_i'(x_i):q_{i-1}'(x_i)]$ for $i=1,\dots, n$, or $[p_i(x_i):p_{i-1}(x_i)]= [q_i'(x_i):q_{i-1}'(x_i)]$ for $i=1,\dots, n+1$ modulo $n+1$ in the circle case. \end{lem} \begin{proof} Let $[p_i(x_i):p_{i-1}(x_i)]=k_i$ and $[q_i(x_i):q_{i-1}(x_i)]=l_i$. If $d_i=k_i-l_i$, note that $$ \sum[p_i(x_i):p_{i-1}(x_i)]=\sum [q_i(x_i):q_{i-1}(x_i)] \quad \text{if and only if} \quad \sum d_i =0. $$ Let $q_0'=q_0$. Suppose that we have constructed $q_0', \cdots, q_{i}'$ such that $[p_j(x_j):p_{j-1}(x_j)]= [q_j'(x_j):q_{j-1}'(x_j)]$ for $j=1,\cdots,i$ and $[q_{i+1}(x_{i+1}):q_{i}'(x_{i+1})]=l_{i+1}-\sum_{k=1}^{i} d_k$. \\ Let $\mathbf{r}$ be a projection valued (norm continuous) function on $X_{i+1}$ such that $r \leq 1-q_{i+1}$ and $[r]_{K_0(B)}=d_{i+1}+\sum_{k=1}^{i} d_k$. Then take $q_{i+1}'=r+q_{i+1}$. Then \begin{align*} [q_{i+1}'(x_{i+1}):q_{i}'(x_{i+1})]&=[q_{i+1}(x_{i+1}):q_{i}'(x_{i+1})]+[r(x_{i+1}):0] \\ &=l_{i+1}-\sum_{k=1}^{i} d_k + d_{i+1}+\sum_{k=1}^{i} d_k\\ &=l_{i+1}+k_{i+1}-l_{i+1}\\ &=k_{i+1}, \end{align*} \begin{align*} [q_{i+2}(x_{i+2}):q_{i+1}'(x_{i+2})]&=[q_{i+2}(x_{i+2}):q_{i+1}(x_{i+2})]+[0:r(x_{i+2})] \\ &=l_{i+2} -( d_{i+1}+\sum_{k=1}^{i} d_k)\\ &=l_{i+2}-\sum_{k=1}^{i+1} d_k. \end{align*} By induction, we can get $q_{0}', \cdots, q_{n-1}'$ such that $[p_j(x_j):p_{j-1}(x_j)]= [q_j'(x_j):q_{j-1}'(x_j)]$ for $j=1,\cdots,n-1$, as we wanted. Finally, since we also have $[q_{n}(x_{n}):q_{n-1}'(x_{n})]=l_{n}-\sum_{k=1}^{n-1} d_k=l_{n}+d_{n}=k_{n}$ from $\sum_{k=1}^{n-1} d_k+d_{n}=0$, we take $q_{n}'=q_{n}$. In the circle case, we perturb $q_n$ to $q'_n$ such that $[q_n'(x_n):q'_{n-1}(x_n)]=k_n$ and $[q_0(x_0):q_n'(x_0)]=l_0- \sum_{k=1}^n d_k=l_0+d_0=k_0$. \end{proof} Next is an analogous result that is more symmetrical. \begin{lem}\label{L:samerank} Let $(p_0, \cdots, p_n)$ and $(q_0, \cdots, q_n)$ be local liftings of $\mathbf{p}$ and $\mathbf{q}$ such that $p_i(x)$ and $q_i(x)$ are full, properly infinite projections for each $x$ in $X_i$.\\ If $ \sum[p_i(x_i):p_{i-1}(x_i)]=\sum [q_i(x_i):q_{i-1}(x_i)]$, or the corresponding sums including the terms $[p_0(x_0):p_n(x_0)]$ and $[q_0(x_0):q_n(x_0)]$ coincide in the circle case, then we can find perturbations $(q_0', \cdots, q_n')$ of $\mathbf{q}$ and $(p_0', \cdots, p_n')$ of $\mathbf{p}$ such that $[p_i'(x_i):p_{i-1}'(x_i)]= [q_i'(x_i):q_{i-1}'(x_i)]$ for all $i$. \end{lem} \begin{proof} The proof proceeds as above with one exception: if $d_{i+1}+\sum_{k=1}^{i} d_k \geq 0$, we make $p_{i+1}' \leq p_{i+1}$ rather than making $q_{i+1}'\geq q_{i+1}$.
\end{proof} An operator on $H_B$ is called a Fredholm operator when it is invertible modulo $\mathcal{K}(H_B)$, the ideal of compact operators. In fact, a generalized Atkinson theorem says that an operator $F$ for which there exists a compact $K\in \mathcal{K}(H_B)$ such that $\Ker (F+K)$ and $\Ker ((F+K)\sp*)$ are finitely generated and $\Im (F+K)$ is closed is a Fredholm operator, and vice versa \cite{Mi}. Thus we can define an index of a Fredholm operator in $K_0(B)$ as the difference of two classes of finitely generated modules. Let us denote this index by $\Ind$. For more details, we refer the reader to \cite{We,Mi}. \begin{prop}\label{P:equivalence} Suppose $\mathbf{p}$ and $\mathbf{q}$ are given by projection valued functions $(p_0,p_1,\dots,p_n)$ and $(q_0,q_1,\dots,q_n)$, where both $p_i(x)$ and $q_i(x)$ are full and properly infinite projections for each $x$ in $ X_i$. If $\sum_i k_i=\sum_i l_i$, then $\mathbf{p} \sim \mathbf{q}$. \end{prop} \begin{proof} By Lemma \ref{L:samerank} and the assumption, we may arrange that $k_i=l_i$ for each $i$. Since $p_i(x)$ and $q_i(x)$ are full, properly infinite projections for each $x$ in $X_i$, there is a ($*$-strongly) continuous function $u_i$ on each $X_i$ such that ${u_i}^{\ast}u_i=p_i, u_i{u_i}^{\ast}=q_i$ by Corollary \ref{C:triviality}. Note that $u_{i-1}(x)$ is a unitary from $p_{i-1}(x)H_B$ onto $q_{i-1}(x)H_B$, so that $\Ind(u_{i-1}(x_i))=0$. Then $k_i=l_i$ implies that $$\Ind(q_i(x_i)u_{i-1}(x_i)p_i(x_i))= -l_i+\Ind(u_{i-1}(x_i))+k_i=0,$$ where the first index is for maps from $p_i(x_i)H_B$ to $q_i(x_i)H_B$, and, for example, the index of $p_{i-1}(x_i)p_i(x_i)$ as a map from $p_i(x_i)H_B$ to $p_{i-1}(x_i)H_B$ is $k_i$. Also $$q_i(x_i)u_{i-1}(x_i)p_i(x_i)-u_{i-1}(x_i) \in B.$$ There is a compact perturbation $v_i$ of $q_i(x_i)u_{i-1}(x_i)p_i(x_i)$ such that $v_i^*v_i=p_i(x_i)$, $v_iv_i^*=q_{i}(x_i)$, and $v_i-u_{i-1}(x_i)\in B$. By the triviality of the continuous field of Hilbert modules determined by $p_{i}$ and the path connectedness of the unitary group of $M(B)$ \cite{Mi}, there is a path $\{v(t) : t\in [x_i,x]\}$ such that $ v(t)^*v(t)=v(t)v(t)^*=p_i(t)$, $v(x_i)={u_i(x_i)}^{*}v_i$, and $v(x)=p_{i}(x)$ for some $x \in X_i$. Then we let $w_i=u_iv$ on $[x_i,x]$, so that \begin{align*} &w_i(x_i)-u_{i-1}(x_i) = v_i - u_{i-1}(x_i) \in B, \\ &w_{i}^*w_{i} =v^*{u_i}^*u_iv=v^*p_iv=p_i, \\ & w_{i}w_{i}^* =u_ivv^*{u_i}^*=u_ip_i{u_i}^*=q_i. \end{align*} Finally, we define \[ u_i'= \begin{cases} w_i, \quad \text{on} \quad[x_i,x], \\ u_i, \quad \text{on} \quad[x,x_{i+1}]. \end{cases} \] In the $(-\infty,\infty)$-case we do the above for $i=1,\dots,n$ and let $u'_0=u_0$. In the circle case we do it for $i=0,\dots,n$. \end{proof} \begin{cor}\label{C:uequivalence} Suppose $\mathbf{p}$ and $\mathbf{q}$ are given by projection valued functions $(p_0,p_1,\dots,p_n)$ and $(q_0,q_1,\dots,q_n)$, where both $p_i(x)$ and $q_i(x)$ are halving projections for each $x$ in $ X_i$. If $\sum_i k_i=\sum_i l_i$, then $\mathbf{p} \sim_{u} \mathbf{q}$. \end{cor} \begin{cor} Suppose $\mathbf{p}$ and $\mathbf{q}$ are given by projection valued functions $(p_0,p_1,\dots,p_n)$ and $(q_0,q_1,\dots,q_n)$. If $\sum_i k_i=\sum_i l_i$, then $[\mathbf{p}] = [\mathbf{q}]$ in $K_0$. \end{cor} \begin{proof}[Proof of Corollary \ref{C:uequivalence}] Since $p_i(x)$ and $q_i(x)$ are halving, we apply Proposition \ref{P:equivalence} to $\mathbf{p}$ and $\mathbf{q}$, and also to $\mathbf{1-p}$ and $\mathbf{1-q}$, obtaining $\mathbf{p}\sim\mathbf{q}$ and $\mathbf{1-p} \sim \mathbf{1-q}$. It follows that $\mathbf{p} \sim_u \mathbf{q}$.
\end{proof} \begin{proof}[Proof of the last corollary] We replace $\mathbf{p}$ with $\mathbf{p}\oplus \mathbf{1}\oplus \mathbf{0}$ and $\mathbf{q}$ with $\mathbf{q}\oplus \mathbf{1}\oplus \mathbf{0}$, which are still equal in $K_0$. Also, note that the $k_i$'s and $l_i$'s are not changed. Then by Kasparov's absorption theorem $p_x(H_B)\oplus H_B \simeq H_B$, and similarly for $q_x(H_B)\oplus H_B$. Thus $p_x \oplus 1$ and $q_x\oplus 1$ are Murray-von Neumann equivalent to $1$. This implies that $p_x\oplus 1$ and $q_x \oplus 1$ are full, properly infinite projections in $M(B)$. The conclusion follows from Proposition \ref{P:equivalence}. \end{proof} \section{Acknowledgements} The author wishes to thank S. Zhang for pointing out his previous result, Remark \ref{R:properequivalence}-(ii), during a conference. He also wishes to thank P.W. Ng for confirming Lemma \ref{L:deform}.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section*{Overview} Throughout this note we shall consider a Young function $\Phi$ with the following properties. Given $r\geq 1$ and $\delta\geq 0$, we say that a Young function $\Phi$ belongs to the family $\mathfrak{F}_{r,\delta}$ if it is submultiplicative, has lower type $r$ and satisfies the condition \[\frac{\Phi(t)}{t^r}\leq C_0 (\log t)^\delta,\quad \textrm{ for }t\geq t^*,\] for some constants $C_0>0$ and $t^*\geq 1$. In \cite{Berra-Carena-Pradolini(MN)} we obtained mixed estimates for the operator $M_\Phi$, where $\Phi$ belongs to $\mathfrak{F}_{r,\delta}$. Concretely, we stated the inequality \[uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Phi(fv)(x)}{v(x)}>t \right\}\right)\leq C\int_{\mathbb{R}^n}\Phi\left(\frac{|f|}{t}\right)uv^r,\] where $u$ and $v^r$ are weights belonging to the $A_1$-Muckenhoupt class. Later, in \cite{B22Pot}, the same kind of estimate was obtained when $v^r$ is only assumed to be an $A_\infty$ weight. In the proofs of both results we used Claim 3.4 in \cite{Berra-Carena-Pradolini(MN)} and Claims 1 and 3 in \cite{B22Pot} as auxiliary tools. These claims contain an error in a Hölder estimate, where a limiting argument was mistakenly used, and this argument cannot be adapted to obtain the inequality given above. The purpose of this note is to give a proof of Theorem~1 in \cite{B22Pot} that avoids this step in the claims and allows us to obtain a slightly different estimate, which will still be useful for our purposes. We shall only modify the results obtained in \cite{B22Pot}, since they are more general, and the corresponding versions of the results in \cite{Berra-Carena-Pradolini(MN)} will follow as an immediate consequence. The modified mixed estimate in \cite{B22Pot} is the following. \begin{teo}[Corrected version of Theorem 1 in \cite{B22Pot}]\label{teo: teorema principal} Let $r\geq 1$, $\delta\geq 0$ and $\Phi\in \mathfrak{F}_{r,\delta}$. If $u\in A_1$ and $v^r\in A_\infty$ then there exists a positive constant $\varepsilon_0$ such that the inequality \[uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Phi(fv)(x)}{v(x)}>t \right\}\right)\leq C\int_{\mathbb{R}^n}\left(\eta_\varepsilon\circ \Phi\right)\left(\frac{|f(x)|}{t}\right)u(x)v^r(x)\,dx\] holds for every positive $t$ and every $0<\varepsilon<\varepsilon_0$, where $\eta_\varepsilon(z)=z(1+\log^+z)^{\delta/\varepsilon}$ and $C$ depends on $\varepsilon$. \end{teo} It is not difficult to see that $M_\Phi v\gtrsim v$ when $\Phi$ belongs to $\mathfrak{F}_{r,\delta}$. So we have the following result as an immediate consequence of the theorem above. \begin{coro}[Corrected version of Corollary 2 in \cite{B22Pot}]\label{coro: corolario del teorema principal - 1} Under the assumptions in Theorem~\ref{teo: teorema principal}, there exists a positive constant $\varepsilon_0$ such that \[uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Phi(fv)(x)}{M_\Phi v(x)}>t \right\}\right)\leq C\int_{\mathbb{R}^n}\left(\eta_\varepsilon\circ \Phi\right)\left(\frac{|f(x)|}{t}\right)u(x)v^r(x)\,dx\] holds for every positive $t$ and every $0<\varepsilon<\varepsilon_0$, where $\eta_\varepsilon$ is as above and $C$ depends on $\varepsilon$. \end{coro} Throughout these notes, all references, lemmas and theorems follow the labels given in \cite{B22Pot}. \section{Proof of Theorem~\ref{teo: teorema principal}} We shall first give some preliminaries in order to proceed with the proof.
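Before turning to the preliminaries, it may help to keep in mind the model example $\Phi(t)=t^r(1+\log^+ t)^{\delta}$, which belongs to $\mathfrak{F}_{r,\delta}$ (this membership follows from standard computations, which we only sketch here). For large $t$ we have $\log \Phi(t)\approx r\log t$, so
\[
(\eta_\varepsilon\circ\Phi)(t)=\Phi(t)\left(1+\log^+\Phi(t)\right)^{\delta/\varepsilon}\lesssim t^r\left(1+\log^+ t\right)^{\delta\left(1+1/\varepsilon\right)};
\]
that is, compared with the original estimate, the corrected one only pays an extra logarithmic factor of order $\delta/\varepsilon$.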
Recall that $w\in A_\infty$ if there exists a positive constant $C$ such that \[\left(\frac{1}{|Q|}\int_Q w\right)\text{exp}\left(\frac{1}{|Q|}\int_Q \log w^{-1}\right)\leq C\] for every cube $Q$ in $\mathbb{R}^n$. The smallest constant for which the inequality above holds is denoted by $[w]_{A_\infty}$. The following lemma will be useful in the sequel. It can be found in \cite{HP13}. \begin{lema}\label{lema: reverse Holder con constante} Let $w\in A_\infty$ and let $r_w=1+\frac{1}{\tau_n[w]_{A_\infty}}$. Then for any cube $Q$ we have \[\left(\frac{1}{|Q|}\int_Q w^{r_w}\right)^{1/r_w}\leq \frac{2}{|Q|}\int_Q w.\] As a consequence, given any cube $Q$ and a measurable set $E\subseteq Q$ we have that \[\frac{w(E)}{w(Q)}\leq 2\left(\frac{|E|}{|Q|}\right)^{\varepsilon_w},\] where $\varepsilon_w=1/(1+\tau_n[w]_{A_\infty})$. The constant $\tau_n$ is purely dimensional and can be chosen as $2^{11+n}$. \end{lema} Recall that we are dealing with a function $\Phi\in \mathfrak{F}_{r,\delta}$, where $r\geq 1$ and $\delta\geq 0$ are given. Since we are assuming $v^r\in A_\infty$, there exists $\varepsilon_1>0$ such that $v^{r+\varepsilon}\in A_\infty$ for every $0<\varepsilon\leq \varepsilon_1$. Fix $0<\varepsilon<\min\{\varepsilon_1,\varepsilon_2\}$, where $\varepsilon_2>0$ will be chosen later. Then we have that $v^r\in \text{RH}_s$, where $s=1+\varepsilon/r$. We shall denote $\Psi_\varepsilon=\eta_\varepsilon \circ \Phi$. We shall follow the same outline and steps as in \cite{B22Pot}; the entire proof is included here for the sake of clarity. Recall that it is enough to prove that \[uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_{\Phi,\mathcal{D}}(fv)(x)}{v(x)}>t \right\}\right)\leq C_\varepsilon\int_{\mathbb{R}^n}\Psi_\varepsilon\left(\frac{|f(x)|}{t}\right)u(x)v^r(x)\,dx,\] where $\mathcal{D}$ is a given dyadic grid. We can also assume that $t=1$ and that $g=|f|v$ is a bounded function with compact support. Then, for a fixed number $a>2^n$, we can write \begin{align*} uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_{\Phi,\mathcal{D}}(fv)(x)}{v(x)}>1 \right\}\right)&=\sum_{k\in \mathbb{Z}} uv^r\left(\left\{x: \frac{M_{\Phi,\mathcal{D}}g(x)}{v(x)}>1, a^k<v\leq a^{k+1} \right\}\right)\\ &=:\sum_{k\in \mathbb{Z}}uv^r(E_k). \end{align*} For every $k\in \mathbb{Z}$ we consider the set \[\Omega_k=\left\{x\in \mathbb{R}^n: M_{\Phi,\mathcal{D}}g(x)>a^k\right\},\] and by virtue of the Calderón-Zygmund decomposition of the space (see \cite[Lemma~6]{B22Pot}) there exists a collection of disjoint dyadic cubes $\{Q_j^k\}_j$ that satisfies \[\Omega_k=\bigcup_j Q_j^k,\] and $\|g\|_{\Phi,Q_j^k}>a^k$ for each $j$. By maximality, we have \begin{equation}\label{eq: promedios Luxemburgo de g son como a^k} a^k<\|g\|_{\Phi,Q_j^k}\leq 2^n a^k, \quad \textrm{ for every }j. \end{equation} For every $k\in \mathbb{Z}$ we now proceed to split the obtained cubes into different classes, as in \cite{L-O-P}. Given a nonnegative integer $\ell$, we set \[\Lambda_{\ell,k}=\left\{Q_j^k: a^{(k+\ell)r}\leq \frac{1}{|Q_j^k|}\int_{Q_j^k} v^r< a^{(k+\ell+1)r}\right\},\] and also \[\Lambda_{-1,k}=\left\{Q_j^k: \frac{1}{|Q_j^k|}\int_{Q_j^k} v^r< a^{kr}\right\}.\] The next step is to split every cube in the family $\Lambda_{-1,k}$. Having fixed $Q_j^k\in \Lambda_{-1,k}$, we perform the Calder\'on-Zygmund decomposition of the function $v^r\mathcal{X}_{Q_j^k}$ at level $a^{kr}$.
Then we obtain, for each $k$, a collection $\left\{Q_{j,i}^k\right\}_i$ of maximal cubes, contained in $Q_j^k$ and which satisfy \begin{equation}\label{eq: promedios de v^r sobre Q_{j,i}^k son como a^{kr}} a^{kr}<\frac{1}{|Q_{j,i}^k|}\int_{Q_{j,i}^k}v^r\leq 2^na^{kr},\quad \textrm{ for every }i. \end{equation} We also define the sets \[\Gamma_{\ell,k}=\left\{Q_j^k\in \Lambda_{\ell,k}: \left|Q_j^k\cap \left\{x: a^k<v\leq a^{k+1}\right\}\right|>0\right\},\] and \[\Gamma_{-1,k}=\left\{Q_{j,i}^k: Q_j^k\in \Lambda_{-1,k} \textrm{ and } \left|Q_{j,i}^k\cap \left\{x: a^k<v\leq a^{k+1}\right\}\right|>0\right\}.\] Since $E_k\subseteq \Omega_k$, we can estimate \begin{align*} \sum_{k\in \mathbb{Z}} uv^r(E_k)&=\sum_{k\in \mathbb{Z}} uv^r(E_k\cap \Omega_k)\\ &=\sum_{k\in \mathbb{Z}} \sum_j uv^r(E_k\cap Q_j^k)\\ &\leq \sum_{k\in \mathbb{Z}}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)+\sum_{k\in \mathbb{Z}}\,\, \sum_{i:Q_{j,i}^k\in \Gamma_{-1,k}}a^{(k+1)r}u(Q_{j,i}^k). \end{align*} If we can prove that, given a negative integer $N$, there exists a positive constant $C_\varepsilon$, independent of $N$, for which the estimate \begin{equation}\label{eq: desigualdad con C independiente de N} \sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)+\sum_{k\geq N} \sum_{i:Q_{j,i}^k\in \Gamma_{-1,k}}a^{(k+1)r}u(Q_{j,i}^k)\leq C_\varepsilon\int_{\mathbb{R}^n}\Psi_\varepsilon\left(|f|\right)uv^r \end{equation} holds, then the proof will be completed by letting $N\to-\infty$. We shall also need the following lemma from \cite{L-O-P}. We include an adaptation of the proof involving our parameters for the sake of clarity. \begin{lema}\label{lema: condicion Ainf de u} Let $\ell\geq 0$ and $Q_j^k\in \Gamma_{\ell,k}$. If $u\in A_\infty$ and $v^r\in A_q$ for some $1<q<\infty$, then there exist positive constants $c_1$ and $c_2$ depending on $u$ and $v^r$ such that \[u(E_k\cap Q_j^k)\leq c_1\,e^{-c_2r\ell}u(Q_j^k).\] Furthermore, we can pick $c_1=2\left([v^r]_{A_q}a^r\right)^{1/((q-1)(1+\tau_n[u]_{A_\infty}))}$ and $c_2=\ln a/((q-1)(1+\tau_n[u]_{A_\infty}))$, where $\tau_n$ is the dimensional constant appearing in Lemma~\ref{lema: reverse Holder con constante}. \end{lema} \begin{proof} Let $q>1$ be such that $v^r\in A_q$. Since $Q_j^k\in \Gamma_{\ell,k}$, we have that \[\left(\frac{|E_k\cap Q_j^k|}{|Q_j^k|}\right)^{q-1}\leq \left(\frac{1}{|Q_j^k|}\int_{Q_j^k} v^{r(1-q')}\right)^{q-1}a^{r(k+1)}\leq \frac{[v^r]_{A_q}|Q_j^k|}{v^r(Q_j^k)}a^{r(k+1)}\leq [v^r]_{A_q}a^{(1-\ell)r}.\] Since $u\in A_1\subseteq A_\infty$, by Lemma~\ref{lema: reverse Holder con constante} and the estimate above we have that \[\frac{u(E_k\cap Q_j^k)}{u(Q_j^k)}\leq 2\left(\frac{|E_k\cap Q_j^k|}{|Q_j^k|}\right)^{1/(1+\tau_n[u]_{A_\infty})}\leq 2 \left([v^r]_{A_q}a^{(1-\ell)r}\right)^{1/((q-1)(1+\tau_n[u]_{A_\infty}))}.\] From this last inequality the desired conclusion follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{teo: teorema principal}] Since $u\in A_1$, we have $u\in A_\infty$. Moreover, the assumption $v^r\in A_\infty$ implies that there exists $1<q<\infty$ such that $v^r\in A_q$. We take $\varepsilon_2=r/((q-1)(1+\tau_n[u]_{A_\infty}))$ and $\varepsilon_0=\min\{\varepsilon_1,\varepsilon_2\}$ (this choice of $\varepsilon_2$ will guarantee the convergence of a geometric series appearing in the estimate of $A_N$; see the remark after Claim~\ref{af: claim 1} below).
Having fixed $0<\varepsilon<\varepsilon_0$, recall that we have to estimate the two quantities \[A_N:= \sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)\] and \[B_N:=\sum_{k\geq N} \sum_{i:Q_{j,i}^k\in \Gamma_{-1,k}}a^{(k+1)r}u(Q_{j,i}^k)\] by $C_\varepsilon\int_{\mathbb{R}^n}\Psi_\varepsilon\left(|f|\right)uv^r$, with $C_\varepsilon$ independent of $N$. We shall start with the estimate of $A_N$. Fix $\ell\geq 0$ and let $\Delta_\ell=\bigcup_{k\geq N} \Gamma_{\ell,k}$. We define recursively a sequence of sets as follows: \[P_0^\ell=\{Q: Q \textrm{ is maximal in } \Delta_\ell \textrm{ in the sense of inclusion}\}\] and for $m\geq 0$ given we say that $Q_j^k\in P_{m+1}^\ell$ if there exists a cube $Q_s^t$ in $P_m^\ell$ which verifies \begin{equation}\label{eq: desigualdad 1 conjunto P_m^l} \frac{1}{|Q_j^k|}\int_{Q_j^k} u>\frac{2}{|Q_s^t|}\int_{Q_s^t}u \end{equation} and it is maximal in this sense, that is, \begin{equation}\label{eq: desigualdad 2 conjunto P_m^l} \frac{1}{|Q_{j'}^{k'}|}\int_{Q_{j'}^{k'}} u\leq\frac{2}{|Q_s^t|}\int_{Q_s^t}u \end{equation} for every $Q_j^k\subsetneq Q_{j'}^{k'}\subsetneq Q_s^t$. Let $P^\ell=\bigcup_{m\geq 0} P_m^{\ell}$, the set of principal cubes in $\Delta_\ell$. By applying Lemma~\ref{lema: condicion Ainf de u} and the definition of $\Lambda_{\ell,k}$ we have that \begin{align*} \sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)&\leq\sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}c_1a^{(k+1)r}e^{-c_2\ell r}u(Q_j^k)\\ &\leq \sum_{\ell\geq 0}c_1e^{-c_2\ell r}a^{r(1-\ell)}\sum_{k\geq N}\sum_{Q_j^k\in \Gamma_{\ell,k}}\frac{v^r(Q_j^k)}{|Q_j^k|}u(Q_j^k). \end{align*} Let us reorganize the inner double sum in a more convenient way. We define \[\mathcal{A}_{(t,s)}^\ell=\left\{Q_j^k \in \bigcup_{k\geq N} \Gamma_{\ell,k}: Q_j^k\subseteq Q_s^t \textrm{ and } Q_s^t \textrm{ is the smallest cube in }P^\ell \textrm{ that contains it} \right\}.\] That is, every $Q_j^k\in \mathcal{A}_{(t,s)}^\ell$ is not a principal cube, unless $Q_j^k=Q_s^t$. Recall that $v^r\in A_\infty$ implies that there exist two positive constants $C$ and $\theta$ verifying \begin{equation}\label{eq: condicion Ainfty de v^r} \frac{v^r(E)}{v^r(Q)}\leq C\left(\frac{|E|}{|Q|}\right)^{\theta}, \end{equation} for every cube $Q$ and every measurable subset $E$ of $Q$. By using \eqref{eq: desigualdad 2 conjunto P_m^l} and Lemma~12 in \cite{B22Pot} we have that \begin{align*} \sum_{k\geq N}\sum_{Q_j^k\in \Gamma_{\ell,k}}\frac{v^r(Q_j^k)}{|Q_j^k|}u(Q_j^k)&=\sum_{Q_s^t \in P^\ell}\,\,\sum_{(k,j): Q_j^k\in \mathcal{A}_{(t,s)}^\ell}\frac{u(Q_j^k)}{|Q_j^k|}v^r(Q_j^k)\\ &\leq 2\sum_{Q_s^t \in P^\ell}\frac{u(Q_s^t)}{|Q_s^t|}\,\,\sum_{(k,j): Q_j^k\in \mathcal{A}_{(t,s)}^\ell}v^r(Q_j^k)\\ &\leq C\sum_{Q_s^t \in P^\ell}\frac{u(Q_s^t)}{|Q_s^t|}v^r(Q_s^t)\left(\frac{\left|\bigcup_{(k,j): Q_j^k\in \mathcal{A}_{(t,s)}^\ell}Q_j^k\right|}{|Q_s^t|}\right)^\theta\\ &\leq C\sum_{Q_s^t \in P^\ell} \frac{u(Q_s^t)}{|Q_s^t|}v^r(Q_s^t). \end{align*} Therefore, \begin{align*} \sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)&\leq C\sum_{\ell\geq 0}e^{-c_2\ell r}a^{-\ell r}\sum_{Q_s^t \in P^\ell} \frac{v^r(Q_s^t)}{|Q_s^t|}u(Q_s^t)\\ &\leq C\sum_{\ell\geq 0}e^{-c_2\ell r}\sum_{Q_s^t \in P^\ell} a^{tr}u(Q_s^t).
\end{align*} \begin{afirmacion}[Corrected version of Claim 1 in \cite{B22Pot}]\label{af: claim 1} Given $\ell\geq 0$ and $Q_j^k\in \bigcup_{k\geq N} \Gamma_{\ell,k}$, we have that \begin{equation}\label{eq: estimacion de afirmacion: control de akr por promedios de Phi(f). caso l no negativo} a^{kr}\leq C\frac{\ell^{\delta/\varepsilon}a^{\ell\varepsilon}}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f(x)|\right)v^r(x)\,dx, \end{equation} where $C$ depends on $\varepsilon$. \end{afirmacion} By applying the estimate above we obtain \begin{align*} \sum_{k\geq N}\sum_{\ell\geq 0} \sum_{Q_j^k\in \Gamma_{\ell,k}}a^{(k+1)r}u(E_k\cap Q_j^k)&\leq C_\varepsilon\sum_{\ell\geq 0} e^{-c_2\ell r}\ell^{\delta/\varepsilon}a^{\ell\varepsilon}\sum_{Q_s^t\in P^{\ell}}\frac{u(Q_s^t)}{|Q_s^t|}\int_{Q_s^t}\Psi_\varepsilon\left(|f|\right)v^r\\ &=C_\varepsilon\sum_{\ell\geq 0}e^{-c_2\ell r}\ell^{\delta/\varepsilon}a^{\ell\varepsilon}\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f|\right)v^r\left(\sum_{Q_s^t\in P^{\ell}}\frac{u(Q_s^t)}{|Q_s^t|}\mathcal{X}_{Q_s^t}\right)\\ &=C_\varepsilon\sum_{\ell\geq 0}e^{-c_2\ell r}\ell^{\delta/\varepsilon}a^{\ell\varepsilon}\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f(x)|\right)v^r(x)h_1(x)\,dx\\ &\leq C_\varepsilon\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f(x)|\right)v^r(x)u(x)\,dx, \end{align*} by virtue of Claim 2 in \cite{B22Pot}. Notice that the sum is finite since we are assuming $\varepsilon<\varepsilon_2$. Indeed, we have that \[e^{-c_2\ell r}a^{\ell \varepsilon}=e^{-c_2\ell r+\ell \varepsilon\ln a}=e^{\ell(-c_2r+\varepsilon \ln a)},\] and this exponent is negative by the choice of $\varepsilon$. This completes the estimate of $A_N$. \medskip Let us now focus on the estimate of $B_N$. Fix $0<\beta<\theta$, where $\theta$ is the number appearing in \eqref{eq: condicion Ainfty de v^r}. We shall build the set of principal cubes in $\Delta_{-1}=\bigcup_{k\geq N}\Gamma_{-1,k}$. Let \[P_0^{-1}=\{Q: Q \textrm{ is a maximal cube in }\Delta_{-1}\textrm{ in the sense of inclusion}\}\] and, recursively, we say that $Q_{j,i}^k\in P_{m+1}^{-1}$, $m\geq 0$, if there exists a cube $Q_{s,l}^t\in P_m^{-1}$ such that \begin{equation}\label{eq: desigualdad 1 conjunto P_m^{-1}} \frac{1}{|Q_{j,i}^k|}\int_{Q_{j,i}^k} u> \frac{a^{(k-t)\beta r}}{|Q_{s,l}^t|}\int_{Q_{s,l}^t}u \end{equation} and it is the largest subcube of $Q_{s,l}^t$ that verifies this condition, that is, \begin{equation}\label{eq: desigualdad 2 conjunto P_m^{-1}} \frac{1}{|Q_{j',i'}^{k'}|}\int_{Q_{j',i'}^{k'}} u\leq \frac{a^{(k-t)\beta r}}{|Q_{s,l}^t|}\int_{Q_{s,l}^t}u \end{equation} if $Q_{j,i}^k\subsetneq Q_{j',i'}^{k'}\subsetneq Q_{s,l}^t$. Let $P^{-1}=\bigcup_{m\geq 0} P_m^{-1}$, the set of principal cubes in $\Delta_{-1}$. As before, we define the set \[\mathcal{A}_{(t,s,l)}^{-1}=\left\{Q_{j,i}^k \in \bigcup_{k\geq N} \Gamma_{-1,k}: Q_{j,i}^k\subseteq Q_{s,l}^t \textrm{ and } Q_{s,l}^t \textrm{ is the smallest cube in }P^{-1} \textrm{ that contains it} \right\}.\] We can therefore estimate $B_N$ as follows \begin{align*} B_N&\leq a^r \sum_{k\geq N}\sum_{i:Q_{j,i}^k\in \Gamma_{-1,k}}\frac{v^r(Q_{j,i}^k)}{|Q_{j,i}^k|}u(Q_{j,i}^k)\\ &\leq a^r\sum_{Q_{s,l}^t \in P^{-1}}\sum_{k,j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}}\frac{u(Q_{j,i}^k)}{|Q_{j,i}^k|}v^r(Q_{j,i}^k)\\ &\leq a^r\sum_{Q_{s,l}^t \in P^{-1}}\frac{u(Q_{s,l}^t)}{|Q_{s,l}^t|}\sum_{k\geq t}a^{(k-t)\beta r}\,\sum_{j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}}v^r(Q_{j,i}^k).
\end{align*} For fixed $k\geq t$, observe that \[\sum_{j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}} |Q_{j,i}^k|<\sum_{j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}} a^{-kr}v^r(Q_{j,i}^k)\leq a^{-kr}v^r(Q_{s,l}^t)\leq 2^na^{(t-k)r}|Q_{s,l}^t|.\] Combining this inequality with the $A_\infty$ condition of $v^r$ we have, for every $k\geq t$, that \begin{align*} \sum_{j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}}a^{(k-t)\beta r}v^r(Q_{j,i}^k)&\leq Ca^{(k-t)\beta r}v^r(Q_{s,l}^t)\left(\frac{\sum_{j,i: Q_{j,i}^k\in \mathcal{A}_{(t,s,l)}^{-1}}|Q_{j,i}^k|}{|Q_{s,l}^t|}\right)^\theta\\ &\leq Cv^r(Q_{s,l}^t)a^{(t-k)r(\theta-\beta)}. \end{align*} Thus, \begin{align*} B_N&\leq C\sum_{Q_{s,l}^t \in P^{-1}}\frac{u(Q_{s,l}^t)}{|Q_{s,l}^t|}v^r(Q_{s,l}^t)\sum_{k\geq t} a^{(t-k)r(\theta-\beta)}\\ &=C\sum_{Q_{s,l}^t \in P^{-1}}\frac{v^r(Q_{s,l}^t)}{|Q_{s,l}^t|}u(Q_{s,l}^t)\\ &\leq C\sum_{Q_{s,l}^t \in P^{-1}}a^{tr}u(Q_{s,l}^t). \end{align*} \begin{afirmacion}[Corrected version of Claim 3 in \cite{B22Pot}]\label{af: Claim 3} If $Q_{j}^k\in \Lambda_{-1,k}$ then there exists a positive constant $C_\varepsilon$ such that \[a^{kr}\leq \frac{C_\varepsilon}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f(x)|\right)v^r(x)\,dx.\] \end{afirmacion} By using this estimate we can proceed as follows \begin{align*} \sum_{k\geq N}\sum_{i:Q_{j,i}^k\in \Gamma_{-1,k}} a^{(k+1)r}u(Q_{j,i}^k)&\leq C\sum_{Q_{s,l}^t \in P^{-1}}a^{tr}u(Q_{s,l}^t)\\ &\leq C_\varepsilon\sum_{Q_{s,l}^t \in P^{-1}}\frac{u(Q_{s,l}^t)}{|Q_s^t|}\int_{Q_s^t}\Psi_\varepsilon\left(|f(x)|\right)v^r(x)\,dx\\ &\leq C_\varepsilon\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f(x)|\right)v^r(x)\left[\sum_{Q_{s,l}^t \in P^{-1}}\frac{u(Q_{s,l}^t)}{|Q_s^t|}\mathcal{X}_{Q_s^t}(x)\right]\,dx\\ &=C_\varepsilon\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f(x)|\right)v^r(x)h_2(x)\,dx\\ &\leq C_\varepsilon\int_{\mathbb{R}^n} \Psi_\varepsilon\left(|f(x)|\right)u(x)v^r(x)\,dx, \end{align*} by virtue of Claim~4 in \cite{B22Pot}. This concludes the proof. \end{proof} We now proceed with the proofs of the claims, in order to complete the argument above. \begin{proof}[Proof of Claim~\ref{af: claim 1}] Fix $\ell\geq 0$ and a cube $Q_j^k\in \bigcup_{k\geq N} \Gamma_{\ell,k}$. We know that $\|g\|_{\Phi,Q_j^k}>a^k$ or, equivalently, $\left\|\frac{g}{a^k}\right\|_{\Phi, Q_j^k}>1$. Denote by $A=\{x\in Q_j^k: v(x)\leq t^*a^k\}$ and $B=Q_j^k\backslash A$, where $t^*$ is the number verifying that if $z\geq t^*$, then \[\frac{\Phi(z)}{z^r}\leq C_0 \left(\log z\right)^\delta.\] Then, \[1<\left\|\frac{g}{a^k}\right\|_{\Phi, Q_j^k}\leq \left\|\frac{g}{a^k}\mathcal{X}_A\right\|_{\Phi, Q_j^k}+\left\|\frac{g}{a^k}\mathcal{X}_B\right\|_{\Phi, Q_j^k}=I+II.\] This inequality implies that either $I>1/2$ or $II>1/2$. Since $\Phi\in\mathfrak{F}_{r,\delta}$ we can easily see that $I>1/2$ implies that \[a^{kr}<\frac{2^rC_0(\log (2t^*))^{\delta}}{|Q_j^k|}\int_{Q_j^k}\Phi\left(|f|\right)v^r\leq \frac{2^rC_0(\log (2t^*))^{\delta}}{|Q_j^k|}\int_{Q_j^k}(\eta_\varepsilon \circ \Phi)\left(|f|\right)v^r,\] because $\eta_\varepsilon(z)\geq z$. On the other hand, if $II>1/2$ then \begin{align*} 1&<\frac{1}{|Q_j^k|}\int_B\Phi\left(\frac{2|f|v}{a^k}\right)\\ &\leq \frac{\Phi(2)C_0}{|Q_j^k|}\int_B \Phi\left(|f|\right)\frac{v^r}{a^{kr}}\left(\log\left(\frac{v}{a^k}\right)\right)^{\delta}, \end{align*} since $\Phi\in \mathfrak{F}_{r,\delta}$. This implies that \[a^{kr}\leq \frac{\Phi(2)C_0}{|Q_j^k|}\int_{Q_j^k}\Phi\left(|f|\right)v^rw_k,\] where $w_k(x)=\left(\log\left(\frac{v(x)}{a^k}\right)\right)^{\delta}\mathcal{X}_{B}(x)$.
We shall now perform a generalized Hölder inequality with the Young functions \[\eta_\varepsilon(z)=z(1+\log^+z)^{\delta/\varepsilon}\quad \text{ and }\quad \tilde\eta_\varepsilon(z) \approx (e^{z^{\varepsilon/\delta}}-e)\mathcal{X}_{(1,\infty)}(z),\] with respect to the measure $d\mu(x)=v^r(x)\,dx$. Thus we have \begin{equation}\label{eq: af: Claim 1 - eq1} \frac{1}{|Q_j^k|}\int_{Q_j^k}\Phi\left(|f|\right)w_kv^r\leq \frac{v^r(Q_j^k)}{|Q_j^k|}\|\Phi(|f|)\|_{\eta_\varepsilon,Q_j^k,v^r}\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}. \end{equation} Let us first estimate the last factor. Since $e^{(\log z)^\varepsilon}\leq z^\varepsilon$ when $z\geq e^{\varepsilon^{1/(\varepsilon-1)}}$, we proceed as follows \begin{equation}\label{eq: af: Claim 1 - eq2} \frac{1}{v^r(Q_j^k)}\int_{Q_j^k}\tilde\eta_\varepsilon(w_k)v^r\leq \tilde\eta_\varepsilon\left(\varepsilon^{\delta/(\varepsilon-1)}\right)+\frac{1}{v^r(Q_j^k)}\int_{Q_j^k\cap\left\{v/a^k>e^{\varepsilon^{1/(\varepsilon-1)}}\right\}}\frac{v^{r+\varepsilon}}{a^{k\varepsilon}}. \end{equation} Since $\varepsilon^{-\varepsilon}\leq e$ we have that \[\tilde\eta_\varepsilon\left(\varepsilon^{\delta/(\varepsilon-1)}\right)\leq e^{e^{1/(1-\varepsilon)}}.\] On the other hand, our hypothesis on $v$ implies that $v^r\in \text{RH}_s$, where $s=1+\varepsilon/r$. Since $Q_j^k\in \Lambda_{\ell,k}$ we obtain \begin{align*} \frac{1}{v^r(Q_j^k)}\int_{Q_j^k\cap\left\{v/a^k>e^{\varepsilon^{1/(\varepsilon-1)}}\right\}}\frac{v^{r+\varepsilon}}{a^{k\varepsilon}}&\leq \frac{a^{-k\varepsilon}}{v^r(Q_j^k)}\int_{Q_j^k}v^{r+\varepsilon}\\ &\leq [v^r]_{\text{RH}_{s}}^{s}\frac{a^{-k\varepsilon}|Q_j^k|}{v^r(Q_j^k)}\left(\frac{1}{|Q_j^k|}\int_{Q_j^k}v^r\right)^{s}\\ &= [v^r]_{\text{RH}_{s}}^{s} a^{-k\varepsilon}\left(\frac{1}{|Q_j^k|}\int_{Q_j^k}v^r\right)^{s-1}\\ &\leq [v^r]_{\text{RH}_{s}}^{s} a^{-k\varepsilon}a^{(k+\ell+1)\varepsilon}\\ &=[v^r]_{\text{RH}_{s}}^{s}a^{(\ell+1)\varepsilon}. \end{align*} By using these two estimates in \eqref{eq: af: Claim 1 - eq2}, we get \[\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}\leq e^{e^{1/(1-\varepsilon)}}+ [v^r]_{\text{RH}_{s}}^{s}a^{(\ell+1)\varepsilon}\leq (e^{e^{1/(1-\varepsilon)}}+[v^r]_{\text{RH}_{s}}^{s})a^{(\ell+1)\varepsilon}.\] We also observe that \begin{equation}\label{eq: relacion norma con infimo} \|\Phi(|f|)\|_{\eta_\varepsilon,Q_j^k,v^r}\approx \inf_{\tau>0}\left\{\tau+\frac{\tau}{v^r(Q_j^k)}\int_{Q_j^k}\eta_\varepsilon\left(\frac{\Phi(|f|)}{\tau}\right)v^r\right\}. \end{equation} If we choose $\tau=(2a^{(\ell+1)(r+\varepsilon)}(e^{e^{1/(1-\varepsilon)}}+[v^r]_{\text{RH}_{s}}^{s}))^{-1}$ then we can estimate the right-hand side of \eqref{eq: af: Claim 1 - eq1} as follows \begin{align*} \frac{v^r(Q_j^k)}{|Q_j^k|}\|\Phi(|f|)\|_{\eta_\varepsilon,Q_j^k,v^r}\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}&\leq \frac{a^{kr}}{2}+(e^{e^{1/(1-\varepsilon)}}+[v^r]_{\text{RH}_{s}}^{s})a^{(\ell+1)\varepsilon}\tau\eta_\varepsilon\left(\frac{1}{\tau}\right)\frac{1}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon(|f|)v^r.
\end{align*} Notice that \[\tau\eta_\varepsilon\left(\frac{1}{\tau}\right)=\left(1+\log\left(\frac{1}{\tau}\right)\right)^{\delta/\varepsilon}\leq 2^{\delta/\varepsilon}\left(\log(2(e^{e^{1/(1-\varepsilon)}}+[v^r]_{\text{RH}_{s}}^{s}))+(\ell+1)(r+\varepsilon)\log a\right)^{\delta/\varepsilon}\leq C_\varepsilon \ell^{\delta/\varepsilon}.\] By plugging these two estimates in \eqref{eq: af: Claim 1 - eq1} we arrive at \begin{align*} a^{kr}&\leq \frac{C_\varepsilon \ell^{\delta/\varepsilon}a^{\ell\varepsilon}}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon(|f|)v^r, \end{align*} and we are done. \end{proof} \medskip \begin{proof}[Proof of Claim~\ref{af: Claim 3}] The proof follows arguments similar to those of the previous one. Adopting the same notation, we have that $\left\|\frac{g}{a^k}\right\|_{\Phi, Q_j^k}>1$, and this implies that either $I>1/2$ or $II>1/2$. If $I>1/2$, we obtain \[a^{kr}<\frac{C_0(\log (2t^*))^{\delta}}{|Q_j^k|}\int_{Q_j^k}\Phi\left(|f|\right)v^r\leq \frac{C_0(\log (2t^*))^{\delta}}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f|\right)v^r.\] We now assume that $II>1/2$. By performing the same Hölder inequality as in Claim~\ref{af: claim 1}, we get \begin{equation}\label{eq: af: Claim 3 - eq1} \frac{1}{|Q_j^k|}\int_{Q_j^k}\Phi\left(|f|\right)w_kv^r\leq \frac{v^r(Q_j^k)}{|Q_j^k|}\|\Phi(|f|)\|_{\eta_\varepsilon,Q_j^k,v^r}\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}. \end{equation} In order to estimate the factor $\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}$ we proceed as before. Since \begin{align*} \frac{1}{v^r(Q_j^k)}\int_{Q_j^k\cap\left\{v/a^k>e^{\varepsilon^{-1/(1-\varepsilon)}}\right\}}\frac{v^{r+\varepsilon}}{a^{k\varepsilon}}&\leq \frac{a^{-k\varepsilon}}{v^r(Q_j^k)}\int_{Q_j^k}v^{r+\varepsilon}\\ &\leq [v^r]_{\text{RH}_{s}}^{s}\frac{a^{-k\varepsilon}|Q_j^k|}{v^r(Q_j^k)}\left(\frac{1}{|Q_j^k|}\int_{Q_j^k}v^r\right)^{s}\\ &= [v^r]_{\text{RH}_{s}}^{s} a^{-k\varepsilon}\left(\frac{1}{|Q_j^k|}\int_{Q_j^k}v^r\right)^{s-1}\\ &\leq [v^r]_{\text{RH}_{s}}^{s} a^{-k\varepsilon}a^{k\varepsilon}\\ &=[v^r]_{\text{RH}_{s}}^{s}, \end{align*} we obtain that \[\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}\leq e^{e^{1/(1-\varepsilon)}}+[v^r]_{\text{RH}_{s}}^{s}=C_\varepsilon.\] By using \eqref{eq: relacion norma con infimo} and choosing $\tau=(2C_\varepsilon)^{-1}$, we can estimate the right-hand side in \eqref{eq: af: Claim 3 - eq1} as follows \begin{align*} \frac{v^r(Q_j^k)}{|Q_j^k|}\|\Phi(|f|)\|_{\eta_\varepsilon,Q_j^k,v^r}\|w_k\|_{\tilde\eta_\varepsilon,Q_j^k,v^r}&\leq \frac{a^{kr}}{2}+\tau\eta_\varepsilon\left(\frac{1}{\tau}\right)\frac{1}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f|\right)v^r\\ &\leq \frac{a^{kr}}{2} + (1+\log(2C_\varepsilon))^{\delta/\varepsilon}\frac{1}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f|\right)v^r. \end{align*} This yields \[a^{kr}\leq \frac{C_\varepsilon}{|Q_j^k|}\int_{Q_j^k}\Psi_\varepsilon\left(|f|\right)v^r.\] This concludes the proof. \end{proof} \section{Applications: Mixed estimates for the generalized fractional maximal operator} Mixed inequalities for the generalized fractional maximal operator $M_{\gamma,\Phi}$ were also given in \cite{B22Pot}. One of the key steps in establishing the following result was to define an auxiliary operator that is bounded on $L^\infty(uv^r)$ when $v^r\in A_\infty$.
This operator is given by \[\mathcal{T}_\Phi f(x)=\frac{M_\Phi(fv)(x)}{M_\Phi v(x)}.\] It is not difficult to see that $M_\Phi v\approx v$ when $\Phi\in\mathfrak{F}_{r,\delta}$ and $v^r$ is an $A_1$-weight, so this operator is an extension of the Sawyer operator $S_\Phi f = M_\Phi(fv)/v$ considered in the main theorem. \begin{coro}[Corrected version of Corollary 3 in \cite{B22Pot}]\label{coro: corolario del teorema principal - 2} Let $r\geq 1$, $\delta\geq 0$ and $\Phi\in \mathfrak{F}_{r,\delta}$. Let $u\in A_1$, $v^r\in A_\infty$ and $\Psi$ be a Young function that verifies $\Psi(t) \approx \Phi(t)$, for every $t\geq t_0\geq 0$. Then, there exists $\varepsilon_0>0$ such that the inequality \[uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Psi(fv)(x)}{M_\Psi v(x)}>t \right\}\right)\leq C_1\int_{\mathbb{R}^n}(\eta_\varepsilon\circ \Psi)\left(\frac{C_2|f(x)|}{t}\right)u(x)v^r(x)\,dx\] holds for every $t>0$ and every $0<\varepsilon<\varepsilon_0$, where $C_1$ depends on $\varepsilon$ and $\eta_\varepsilon(z)=z(1+\log^+z)^{\delta/\varepsilon}$. \end{coro} \begin{proof} By combining the equivalence between $\Phi$ and $\Psi$ with Proposition~8 in \cite{B22Pot}, we obtain that there exist positive constants $A$ and $B$ such that \[A\|\cdot\|_{\Phi,Q}\leq \|\cdot\|_{\Psi,Q}\leq B\|\cdot\|_{\Phi,Q},\] for every cube $Q$. By setting $c_1=B/A$, we have that \[\frac{M_{\Psi}(fv)(x)}{M_\Psi v(x)} \leq c_1\frac{M_{\Phi}(fv)(x)}{M_\Phi v(x)}\] for almost every $x$. By applying Corollary~\ref{coro: corolario del teorema principal - 1}, there exists $\varepsilon_0>0$ such that for every $0<\varepsilon<\varepsilon_0$ we have \begin{align*} uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Psi(fv)(x)}{M_\Psi v(x)}>t \right\}\right)&\leq uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Phi(fv)(x)}{M_\Phi v(x)}>\frac{t}{c_1} \right\}\right)\\ &\leq C \int_{\mathbb{R}^n}(\eta_\varepsilon\circ \Phi)\left(\frac{c_1|f|}{t}\right)uv^r. \end{align*} Observe that \[\|\mathcal{T}_\Psi f\|_{L^{\infty}}=\left\|\frac{M_\Psi (fv)}{M_\Psi v}\right\|_{L^{\infty}}\leq \|f\|_{L^{\infty}},\] which directly implies $\|\mathcal{T}_\Psi f\|_{L^{\infty}(uv^r)}\leq \|f\|_{L^{\infty}(uv^r)}$ since the measure given by $d\mu(x)=u(x)v^r(x)\,dx$ is absolutely continuous with respect to the Lebesgue measure. We now apply Lemma~13 in \cite{B22Pot} to obtain \begin{align*} uv^r\left(\left\{x\in \mathbb{R}^n: \frac{M_\Psi(fv)(x)}{M_\Psi v(x)}>t \right\}\right)&\leq C\int_{\{x: |f(x)|>t/2\}}(\eta_\varepsilon\circ \Phi)\left(\frac{2c_1|f(x)|}{t}\right)u(x)v^r(x)\,dx\\ &\leq C(\eta_\varepsilon\circ \Phi)\left(\frac{c_1}{t_0}\right)\int_{\{x: |f(x)|>t/2\}}(\eta_\varepsilon\circ \Phi)\left(\frac{2t_0|f(x)|}{t}\right)u(x)v^r(x)\,dx\\ &\leq C_1\int_{\{x: |f(x)|>t/2\}}(\eta_\varepsilon\circ \Psi)\left(\frac{2t_0|f(x)|}{t}\right)u(x)v^r(x)\,dx\\ &\leq C_1\int_{\mathbb{R}^n}(\eta_\varepsilon\circ \Psi)\left(\frac{C_2|f(x)|}{t}\right)u(x)v^r(x)\,dx.\qedhere \end{align*} \end{proof} The corollary above is key in order to obtain mixed inequalities for the generalized fractional maximal operator defined, for $0<\gamma<n$ and a Young function $\Phi$, by the expression \[M_{\gamma,\Phi}f(x)=\sup_{Q\ni x}|Q|^{\gamma/n}\|f\|_{\Phi,Q}.\] Mixed estimates for this operator are contained in the following theorems. \begin{teo}[Corrected version of Theorem 4 in \cite{B22Pot}] \label{teo: mixta para Mgamma,Phi, caso r<p<n/gamma} Let $\Phi(z)=z^r(1+\log^+ z)^\delta$, with $r\geq 1$ and $\delta \geq 0$. Let $0<\gamma<n/r$, $r<p<n/\gamma$ and $1/q=1/p-\gamma/n$.
If $u\in A_1$ and $v^{q(1/p+1/r')}\in A_\infty$, then the inequality \[uv^{q(1/p+1/r')}\left(\left\{x\in \mathbb{R}^n: \frac{M_{\gamma,\Phi}(fv)(x)}{M_{\varphi} v(x)}>t\right\}\right)^{1/q}\leq C\left[ \int_{\mathbb{R}^n}\left(\frac{|f(x)|}{t}\right)^pu^{p/q}(x)(v(x))^{1+p/r'}\,dx\right]^{1/p}\] holds for every positive $t$, where $\varphi(z)=z^{q/p+q/r'}(1+\log^+ z)^{n\delta/(n-r\gamma)}$. \end{teo} \begin{proof} We shall follow the scheme given in the proof of Theorem 4 in \cite{B22Pot}. The difference arises when we apply Corollary~\ref{coro: corolario del teorema principal - 2}; however, the function that now controls the right-hand side is only an auxiliary one, so the final estimate is not affected. We define \[\sigma=\frac{nr}{n-r\gamma}, \quad \nu=\frac{n\delta}{n-r\gamma}, \quad \beta=\frac{q}{\sigma}\left(\frac{1}{p}+\frac{1}{r'}\right),\] and let $\xi$ be the auxiliary function given by \[\xi(z)=\left\{\begin{array}{ccr} z^{q/\beta},&\textrm{ if } & 0\leq z\leq 1,\\ z^\sigma(1+\log^+z)^\nu, & \textrm{ if } &z> 1. \end{array}\right.\] Observe that \[\xi^{-1}(z)z^{\gamma/n}\approx \frac{z^{1/\sigma+\gamma/n}}{(1+\log^+z)^{\nu/\sigma}}=\frac{z^{1/r}}{(1+\log^+z)^{\delta/r}}\lesssim \Phi^{-1}(z),\] for every $z\geq 1$. Note also that $\beta>1$: indeed, since $p>r$ we have $q>\sigma$ and thus $q/(\sigma r')>1/r'$; on the other hand, $q/(p\sigma)>1/r$. By combining these two inequalities we get $\beta>1$. Applying Proposition 10 and Lemma 9 in \cite{B22Pot} with this $\beta$, we can conclude that \begin{equation}\label{eq: teo - mixta para M_{gamma,Phi}, caso r<p<n/gamma - eq1} M_{\gamma,\Phi}\left(\frac{f_0}{w}\right)(x)\leq C\left[M_\xi\left(\frac{f_0^{p\beta/q}}{w^{\beta}}\right)(x)\right]^{1/\beta}\left(\int_{\mathbb{R}^n}f_0^p(y)\,dy\right)^{\gamma/n}. \end{equation} Also observe that \begin{equation}\label{eq: teo - mixta para M_{gamma,Phi}, caso r<p<n/gamma - eq2} \left(M_\xi v^\beta(x)\right)^{1/\beta}\lesssim M_{\varphi} v(x), \quad \textrm{ a.e. }x. \end{equation} Notice that $\xi$ is equivalent to a Young function in $\mathfrak{F}_{\sigma,\nu}$ for $z\geq 1$. Since $q(1/p+1/r')=\beta\sigma$, if we set $f_0=|f|wv$, then we can use inequalities \eqref{eq: teo - mixta para M_{gamma,Phi}, caso r<p<n/gamma - eq1} and \eqref{eq: teo - mixta para M_{gamma,Phi}, caso r<p<n/gamma - eq2} to estimate \begin{align*} uv^{\tfrac{q}{p}+\tfrac{q}{r'}}\left(\left\{x: \frac{M_{\gamma,\Phi}(fv)(x)}{M_{\varphi} v(x)}>t\right\}\right)&\lesssim uv^{\beta\sigma}\left(\left\{x: \frac{M_{\gamma,\Phi}(fv)(x)}{\left(M_\xi v^\beta(x)\right)^{1/\beta}}>t\right\}\right)\\ &\leq uv^{\beta\sigma}\left(\left\{x: \frac{M_\xi\left(f_0^{p\beta/q}w^{-\beta}\right)(x)}{M_\xi v^\beta(x)}>\frac{t^\beta}{\left(\int|f_0|^p\right)^{\beta\gamma/n}}\right\}\right). \end{align*} Since $v^{\beta\sigma}\in A_\infty$, by Corollary~\ref{coro: corolario del teorema principal - 2} there exists $\varepsilon_0>0$ such that the inequality \[uv^{\beta\sigma}\left(\left\{x: \frac{M_\xi\left(f_0^{p\beta/q}w^{-\beta}\right)(x)}{M_\xi v^\beta(x)}>t_0\right\}\right)\leq C_\varepsilon\int_{\mathbb{R}^n}(\eta_{\varepsilon}\circ\xi)\left(c\frac{f_0^{p\beta/q}(wv)^{-\beta}}{t_0}\right)uv^{\beta\sigma}\] holds for every $0<\varepsilon<\varepsilon_0$, with $t_0=t^\beta\|f_0\|_{p}^{-p\beta\gamma/n}$. Notice that $\eta_\varepsilon(z)=z(1+\log^+z)^{\nu/\varepsilon}$ in this case.
Having fixed $\varepsilon$, we write \[\int_{\mathbb{R}^n}(\eta_\varepsilon\circ\xi)\left(c\frac{|f|^{p\beta/q}(wv)^{\beta(p/q-1)}}{t^\beta}\left[\int_{\mathbb{R}^n}|f|^p(wv)^p\right]^{\beta\gamma/n}\right)uv^{\sigma\beta} =\int_{\mathbb{R}^n}(\eta_\varepsilon\circ\xi)(\lambda)uv^{\sigma\beta},\] where \[\lambda=c\frac{|f|^{p\beta/q}(wv)^{\beta(p/q-1)}}{t^\beta}\left[\int_{\mathbb{R}^n}|f|^p(wv)^p\right]^{\beta\gamma/n}.\] We further split $\mathbb{R}^n$ into the sets $A=\{x\in \mathbb{R}^n: \lambda(x)\leq 1\}$ and $B=\mathbb{R}^n\backslash A$. Since $(\eta_{\varepsilon} \circ \xi)(z)=z^{q/\beta}$ for $0\leq z\leq 1$, we have that \[\int_{A}(\eta_\varepsilon\circ \xi)(\lambda(x))u(x)[v(x)]^{\sigma\beta}\,dx=\int_A [\lambda(x)]^{q/\beta}u(x)[v(x)]^{\sigma\beta}\,dx.\] If we set $w=u^{1/q}v^{1/p+1/r'-1}$, then \begin{align*} \lambda^{q/\beta}uv^{\sigma\beta}&=c^{q/\beta}\frac{|f|^p}{t^q}(wv)^{p-q}\left[\int_{\mathbb{R}^n}|f|^p(wv)^p\right]^{q\gamma/n}uv^{\sigma\beta}\\ &=c^{q/\beta}\frac{|f|^p}{t^q}\left[\int_{\mathbb{R}^n}|f|^p(wv)^p\right]^{q\gamma/n}u^{p/q}v^{\sigma\beta+(p-q)(1/p+1/r')}. \end{align*} Observe that \[\sigma\beta+(p-q)\left(\frac{1}{p}+\frac{1}{r'}\right)=q\left(\frac{1}{p}+\frac{1}{r'}\right)+(p-q)\left(\frac{1}{p}+\frac{1}{r'}\right)=1+\frac{p}{r'}.\] Also, notice that \[(wv)^p=u^{p/q}v^{1+p/r'-p+p}=u^{p/q}v^{1+p/r'}.\] Therefore, \begin{align*} \int_{A}(\eta_\varepsilon\circ\xi)(\lambda)uv^{\sigma\beta}&\leq \frac{c^{q/\beta}}{t^q}\left[\int_{\mathbb{R}^n}|f|^pu^{p/q}v^{1+p/r'}\right]^{q\gamma/n}\left[\int_{\mathbb{R}^n} |f|^pu^{p/q}v^{1+p/r'}\right]\\ &= \frac{c^{q/\beta}}{t^q}\left[\int_{\mathbb{R}^n}|f|^pu^{p/q}v^{1+p/r'}\right]^{1+q\gamma/n}\\ &=\frac{c^{q/\beta}}{t^q}\left[\int_{\mathbb{R}^n}|f|^pu^{p/q}v^{1+p/r'}\right]^{q/p}. \end{align*} On the other hand, $\lambda(x)>1$ over $B$, where \[(\eta_\varepsilon\circ \xi)(z)\lesssim z^\sigma(1+\log z)^{\nu(1+1/\varepsilon)},\] and this function has upper type $q/\beta$. Therefore we can estimate the integrand by $\lambda^{q/\beta}uv^{\sigma\beta}$ and proceed as we did with the set $A$. Thus, we obtain \[uv^{q(1/p+1/r')}\left(\left\{x\in \mathbb{R}^n: \frac{M_{\gamma,\Phi}(fv)(x)}{M_\varphi v(x)}>t\right\}\right)^{1/q}\leq C\left[ \int_{\mathbb{R}^n}\left(\frac{|f|}{t}\right)^pu^{p/q}v^{1+p/r'}\right]^{1/p}.\qedhere\] \end{proof} \begin{teo}[Corrected version of Theorem 5 in \cite{B22Pot}]\label{teo: mixta para Mgamma,Phi, caso p=r} Let $\Phi(z)=z^r(1+\log^+z)^\delta$, with $r\geq 1$ and $\delta\geq 0$. Let $0<\gamma<n/r$ and $1/q=1/r-\gamma/n$. If $u\in A_1$ and $v^q\in A_\infty$, then there exists a positive constant $\varepsilon_0$ such that the inequality \[uv^q\left(\left\{x\in \mathbb{R}^n: \frac{M_{\gamma,\Phi}(fv)(x)}{v(x)}>t\right\}\right)\leq C\, \varphi_\varepsilon\left(\int_{\mathbb{R}^n}\Phi_{\gamma,\varepsilon}\left(\frac{|f(x)|}{t}\right)\Psi_\varepsilon\left(u^{1/q}(x)v(x)\right)\,dx\right),\] holds for $0<\varepsilon<\varepsilon_0$, where $\varphi_\varepsilon(z)=[z(1+\log^+z)^{\delta(1+1/\varepsilon)}]^{q/r}$, $\Psi_\varepsilon(z)=z^r(1+\log^+(z^{1-q/r}))^{q\delta(1+1/\varepsilon)/r}$, $\Phi_{\gamma,\varepsilon}(z)=\Phi(z)(1+\log^+z)^{\delta(1+1/\varepsilon) q\gamma/n+\delta/\varepsilon}$ and $C$ depends on $\varepsilon$. \end{teo} \begin{proof} Set $\xi(z)=z^q(1+\log^+z)^\nu$, where $\nu=\delta q/r$. Thus $z^{\gamma/n}\xi^{-1}(z)\lesssim \Phi^{-1}(z)$.
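Let us briefly verify this relation (the computation mirrors the one displayed in the proof of the previous theorem): since $\xi^{-1}(z)\approx z^{1/q}(1+\log^+z)^{-\nu/q}$ and $\nu/q=\delta/r$, we get \[z^{\gamma/n}\xi^{-1}(z)\approx z^{\gamma/n+1/q}(1+\log^+z)^{-\delta/r}=\frac{z^{1/r}}{(1+\log^+z)^{\delta/r}}\approx \Phi^{-1}(z),\quad \textrm{ for }z\geq 1,\] where we used $1/q=1/r-\gamma/n$.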
By applying Proposition 10 in \cite{B22Pot} with $p=r$ we have that \[M_{\gamma,\Phi}\left(\frac{f_0}{w}\right)(x)\leq C\left[M_\xi\left(\frac{f_0^{r/q}}{w}\right)\right](x)\left(\int_{\mathbb{R}^n}f_0^r(y)\,dy\right)^{\gamma/n}.\] By setting $f_0=|f|wv$ we can write \begin{align*} uv^{q}\left(\left\{x: \frac{M_{\gamma,\Phi}(fv)(x)}{v(x)}>t\right\}\right)&=uv^{q}\left(\left\{x: \frac{M_{\gamma,\Phi}(f_0/w)(x)}{v(x)}>t\right\}\right)\\ &\leq uv^{q}\left(\left\{x: \frac{M_{\xi}(f_0^{r/q}/w)(x)}{M_\xi v(x)}>\frac{t}{\left(\int f_0^r \right)^{\gamma/n}}\right\}\right). \end{align*} Since $\xi\in \mathfrak{F}_{q,\nu}$, by Corollary~\ref{coro: corolario del teorema principal - 1} there exists $\varepsilon_0>0$ such that \begin{equation}\label{eq: eq1 - teo mixta para M_{gamma,Phi}, caso p=r} uv^{q}\left(\left\{x: \frac{M_{\gamma,\Phi}(fv)(x)}{v(x)}>t\right\}\right)\leq C_\varepsilon\int_{\mathbb{R}^n}(\eta_{\varepsilon}\circ\xi)\left(\frac{f_0^{r/q}\left(\int f_0^r\right)^{\gamma/n}}{wv t}\right)uv^q, \end{equation} for $0<\varepsilon<\varepsilon_0$, where $\eta_\varepsilon(z)=z(1+\log^+z)^{\nu/\varepsilon}$. Having fixed $\varepsilon$, the argument of $\eta_\varepsilon \circ \xi$ above can be written as \begin{align*} \frac{f_0^{r/q}\left(\int f_0^r\right)^{\gamma/n}}{wv t}&=\left(\frac{|f|}{t}\right)^{r/q}(wv)^{r/q-1}\left(\int_{\mathbb{R}^n} \left(\frac{|f|}{t}\right)^r(wv)^r\right)^{\gamma/n}\\ &=\left[\left(\frac{|f|}{t}\right)(wv)^{1-q/r}\left(\int_{\mathbb{R}^n} \left(\frac{|f|}{t}\right)^r(wv)^r\right)^{\gamma q/(nr)}\right]^{r/q}. \end{align*} Observe that for $0\leq z\leq 1$, $(\eta_\varepsilon \circ\xi)(z^{r/q})=z^r$, and for $z>1$ we have \[(\eta_\varepsilon \circ \xi)(z^{r/q})\lesssim z^r(1+\log z)^{\nu(1+1/\varepsilon)},\] which implies that $(\eta_\varepsilon \circ \xi)(z^{r/q})\lesssim \Phi_{\gamma,\varepsilon}(z)=z^r(1+\log^+z)^{\nu(1+1/\varepsilon)}$, for every $z\geq 0$. Since $\Phi_{\gamma,\varepsilon}$ is submultiplicative, we can estimate as follows \begin{align*} (\eta_\varepsilon \circ \xi )\left(\frac{f_0^{r/q}\left(\int_{\mathbb{R}^n}f_0^r\right)^{\gamma/n}}{wv t}\right)&\leq \Phi_{\gamma,\varepsilon}\left(\left(\frac{|f|}{t}\right)(wv)^{1-q/r}\left(\int_{\mathbb{R}^n} \left(\frac{|f|}{t}\right)^r(wv)^r\right)^{\gamma q/(nr)}\right)\\ &\leq \Phi_{\gamma,\varepsilon}\left(\left[\int_{\mathbb{R}^n}\Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}\right)(wv)^r\right]^{\gamma q/(nr)}\right)\Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}(wv)^{1-q/r}\right). \end{align*} Returning to \eqref{eq: eq1 - teo mixta para M_{gamma,Phi}, caso p=r} and setting $w=u^{1/q}$, the right hand side is bounded by \[ \Phi_{\gamma,\varepsilon}\left(\left[\int_{\mathbb{R}^n}\Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}\right)(wv)^r\right]^{\gamma q/(nr)}\right)\int_{\mathbb{R}^n} \Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}(wv)^{1-q/r}\right)(wv)^q.\] Notice that $\Phi_{\gamma,\varepsilon}(z^{1-q/r})z^q\leq \Psi_\varepsilon(z)$.
Therefore, the expression above is bounded by \[\Phi_{\gamma,\varepsilon}\left(\left[\int_{\mathbb{R}^n}\Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}\right)\Psi_\varepsilon(u^{1/q}v)\right]^{\gamma q/(nr)}\right)\int_{\mathbb{R}^n} \Phi_{\gamma,\varepsilon}\left(\frac{|f|}{t}\right)\Psi_\varepsilon(u^{1/q}v).\] To finish, observe that \[z\Phi_{\gamma,\varepsilon}(z^{\gamma q/(nr)})\lesssim z^{1+\gamma q/n}(1+\log^+ z)^{\nu(1+1/\varepsilon)}= z^{q/r}(1+\log^+ z)^{\delta q(1+1/\varepsilon)/r}=\varphi_\varepsilon(z).\qedhere\] \end{proof}
\section{Introduction} It has now been over a decade since the publication of the theoretical works of S. A. Mikhailov on the low-frequency (\textit{intraband}) nonlinear response of the monolayer of graphene to an external electric field \citep{mikhailov2007,mikhailov2008}, which marked the birth of the study of nonlinear optical (NLO) responses in two-dimensional materials. In the past ten years, this area has become increasingly active and diverse, as it has gathered the attention of both theoretical \citep{Al-Naib2014,Glazov2014b,Peres2014,Cheng2014,Cheng2015b,Cheng2015,Mikhailov2016,semnani2016,Mikhailov2017,Savostianova:17,ventura,passos,Savostianova2018,Hipolito2018} and experimental groups \citep{Hendry2010,Dean2010,Zhang2012,Kumar2013,Hong2013,Dremetsika2016,Vermeulen2016,Higuchi2017,Baudisch2018}. This has also been extended to many other, more recently isolated, layered materials \citep{Janisch2014,Pedersen2015,Hipolito2016,Hipolito2017,Youngblood2017}. In those materials, like in graphene, the nonlinear optical response has been shown to be very intense, much more so than in three dimensional materials. One key issue, which followed directly from those initial works, was to expand the understanding of the nonlinear \textit{intraband} response \textemdash{} frequencies in the microwave and the infrared \textemdash{} into the high frequency range \textemdash{} frequencies in the near infrared and above \citep{Cheng2014,Cheng2015b,Cheng2015,Mikhailov2016}. Doing so required a full quantum treatment of the electrons in a crystal, and meant recovering the formalism for the calculation of NLO coefficients in bulk semiconductors of the late eighties and early nineties, developed by J. E. Sipe and collaborators \citep{Moss1989,Sipe1993,Aversa1995,Sipe2000}. \textcolor{black}{Their work, mostly formulated in the so-called length gauge, provided expressions for the second and third order optical conductivities that are directly applicable to a system of non-interacting electrons in a solid, taking both intraband and interband transitions into account. Many other works have since used this framework. In practice, due to the complexity of the general expressions, calculations of nonlinear optical conductivities usually require performing the analytical calculation (i.e., an integration over the FBZ) for the particular system under study: in third order this is already rather cumbersome. Often, it is only really tractable for simple effective Hamiltonians (such as the Dirac Hamiltonian in graphene), that describe only a portion of the FBZ. This has limited the length gauge method to sufficiently small frequencies for such effective Hamiltonians to be applicable.} \textcolor{black}{Another approach, based on the velocity gauge, was developed concurrently but presented early difficulties. Spurious divergences and inaccurate results upon the truncation of the number of bands led to the velocity gauge being less widely adopted. The origin of these difficulties was understood early on as a violation of sum rules \citep{Aversa1995}. This was solved only recently \citep{passos}, with a reformulation of the velocity gauge that is able to reproduce the results from the length gauge and that is best suited for numerical calculations that involve the full FBZ.
Two diagrammatic methods based on this formulation of the velocity gauge have since been developed \citep{parker2019,Joao2018}, the former of which was used in the study of Weyl semimetals, while the latter was shown to be applicable even in disordered systems. In this velocity gauge approach there is no added difficulty in moving to higher frequencies and, in fact, its implementation requires the use of models defined in the entire FBZ. The authors will use the new velocity gauge approach of ref.\citep{passos} to probe the NLO response of graphene in a frequency range beyond the Dirac approximation.} We present numerical results for the second and third order responses of the plain graphene (PG) and gapped (GG) graphene monolayers to a monochromatic electric field of frequencies (energies) that range from the microwave ($\hbar\omega\sim0.005$ eV) to the ultraviolet ($\hbar\omega\sim6$ eV). These results differ from what has been previously reported in the literature \citep{Cheng2014,Cheng2015,Cheng2015b,Mikhailov2016,passos,Hipolito2018} for two reasons: \textcolor{black}{we go beyond the Dirac cone approximation (valid up to about 1 eV) and study the response of the PG and GG monolayers at high frequencies.} Our calculations address all different components of the conductivity tensors \textemdash{} on which intrinsic permutation symmetry is imposed \textemdash{} and not the \textit{effective} tensors of ref.\citep{Hipolito2018}, which also goes beyond the Dirac approximation, but where Kleinman's symmetry is additionally imposed. Since this second symmetry follows from the consideration that the nonlinear susceptibilities (or conductivities) can be deemed \textit{dispersionless} \citep{Boyd:2008}, it lacks justification in the study of the response in these frequency ranges. Although seemingly technical in nature, this difference is practically relevant, as the conductivities computed here can be directly related to measurements of the current response, $J^{\alpha}(t)$, in an experiment (regardless of the polarization of the electric field), whereas the effective tensors cannot. The paper is organized as follows. In the following section, we review the calculations of NLO conductivities in the length and the velocity gauges. Section \ref{sec:III} is dedicated to the use of tight-binding Hamiltonians in velocity gauge calculations and to two pertinent points: first, the computation of the $h$ coefficients, which are integral to the description of the response in the velocity gauge, becomes simple when working in a basis for which the Berry connections are all trivial; the second point concerns the relation between these Berry connections and the manner by which one defines the position operator in the lattice. It is shown that this choice has implications for the optical response, which we illustrate by studying the interband portion of the linear conductivity of plain graphene. In Section \ref{sec:IV}, we present the aforementioned results: the second harmonic generation and optical rectification conductivities of the GG monolayer, and the third order responses, i.e., the third harmonic generation and Kerr effect conductivities, of the PG and GG monolayers, in the frequency range indicated above. For the two second order effects, the results are complemented by analytical calculations of the real part of the conductivities. The final section is dedicated to a summary of our work.
\section{Calculation of nonlinear optical conductivities in crystals\label{sec:II}} A system's nonlinear current response to a monochromatic electric field, which is considered to be constant throughout the material, \begin{align} \mathbf{E}(t)= & \mathbf{E}_{0}\,e^{-i\omega t}+\left(\mathbf{E}_{0}\right)^{*}\,e^{i\omega t} \end{align} is described, in second order, by the second harmonic generation, $\sigma_{\beta\alpha_{1}\alpha_{2}}^{(2)}(\omega,\omega)$, and the optical rectification, $\sigma_{\beta\alpha_{1}\alpha_{2}}^{(2)}(\omega,-\omega)$, conductivities,\footnote{We assume two things in the following expressions: repeated Cartesian indices are implicitly summed over; conductivities satisfy the property of intrinsic permutation symmetry \citep{Boyd:2008}.} \begin{align} J_{\beta}^{(2)}(t)= & \ \sigma_{\beta\alpha_{1}\alpha_{2}}(\omega,\omega)\,E_{0}^{\alpha_{1}}\,E_{0}^{\alpha_{2}}\,e^{-i2\omega t}\nonumber \\ & +\sigma_{\beta\alpha_{1}\alpha_{2}}(\omega,-\omega)\,E_{0}^{\alpha_{1}}\,(E_{0}^{\alpha_{2}})^{*}\nonumber \\ & +\text{c.c.}, \end{align} while, in third order, it is described by the third harmonic generation, $\sigma_{\beta\alpha_{1}\alpha_{2}\alpha_{3}}^{(3)}(\omega,\omega,\omega)$, and the Kerr effect, $\sigma_{\beta\alpha_{1}\alpha_{2}\alpha_{3}}^{(3)}(\omega,\omega,-\omega)$, conductivities, \begin{align} J_{\beta}^{(3)}(t)= & \ \sigma_{\beta\alpha_{1}\alpha_{2}\alpha_{3}}(\omega,\omega,\omega)\,E_{0}^{\alpha_{1}}\,E_{0}^{\alpha_{2}}\,E_{0}^{\alpha_{3}}\,e^{-i3\omega t}\nonumber \\ & +\sigma_{\beta\alpha_{1}\alpha_{2}\alpha_{3}}(\omega,\omega,-\omega)\,E_{0}^{\alpha_{1}}\,E_{0}^{\alpha_{2}}\,(E_{0}^{\alpha_{3}})^{*}\,e^{-i\omega t}\nonumber \\ & +\text{c.c.} \end{align} The problem of studying $J_{\beta}^{(n)}(t)$ is thus a problem of knowing how to calculate the conductivities, $\sigma^{(n)}$, by means of a perturbative expansion. This topic has been the subject of intense research for crystalline systems \citep{Moss1989,Sipe1993,Aversa1995,Sipe2000,Cheng2014,Mikhailov2016,ventura,passos} and we will use, in particular, results of our previous work \citep{ventura,passos} in the following review of those calculations. The baseline considerations here are the same as before: the electric field is constant throughout the crystal and electron-electron interactions, integral to a description of an excitonic response, are not taken into account. \subsection{Crystal Hamiltonian and its perturbations} In a perfect infinite crystal, the eigenfunctions of the unperturbed (crystal) Hamiltonian, $H_{0}$, are, according to Bloch's theorem, written in terms of a plane wave and a function that is periodic in the real space unit cell, \begin{align} \psi_{\mathbf{k}s}(\mathbf{r})= & \ e^{i\mathbf{k}\cdot\mathbf{r}}\,u_{\mathbf{k}s}(\mathbf{r}),\label{eq:BLOCH_STATES} \end{align} where, for $\mathbf{R}$ any lattice vector, \begin{align} u_{\mathbf{k}s}(\mathbf{r})= & u_{\mathbf{k}s}(\mathbf{r}+\mathbf{R}).\label{eq:PERIODIC_STATES} \end{align} Each of these eigenfunctions and its corresponding eigenvalue, $\epsilon_{\mathbf{k}s}$, is labelled by a crystal momentum, $\mathbf{k}$, that runs continuously throughout the first Brillouin zone (FBZ) and by the $s$ index, indicating the band. For a $d$ dimensional crystal, their normalization reads, \begin{align} \left\langle \psi_{\mathbf{k}s}\right|\left.\psi_{\mathbf{k}'s'}\right\rangle = & (2\pi)^{d}\,\delta_{ss'}\,\delta(\mathbf{k}-\mathbf{k}').
\end{align} Furthermore, the periodic parts of the Bloch functions (for a fixed $\mathbf{k}$) also form an orthonormal basis, with an inner product that is defined over the real space unit cell, instead of the entire crystal, \begin{align} \left\langle u_{\mathbf{k}s}\right|\left.u_{\mathbf{k}s'}\right\rangle = & \frac{1}{v_{c}}\int_{v_{c}}d^{d}\mathbf{r}\,u_{\mathbf{k}s}^{*}(\mathbf{r})\,u_{\mathbf{k}s'}(\mathbf{r})=\delta_{ss'}.\label{eq:SCA_P} \end{align} One can then write the full Hamiltonian, composed of the crystal Hamiltonian and the coupling of the electrons to the external electric field, in the single particle basis of band states. The explicit form of the coupling depends on the representation one chooses for the electric field in terms of the scalar ($\phi(\mathbf{r},t)$) and vector potential ($\mathbf{A}(\mathbf{r},t)$), i.e., on the chosen gauge. For the length gauge, the vector potential is set to zero, \begin{equation} \mathbf{E}(t)=-\nabla\phi(\mathbf{r},t), \end{equation} and the coupling to the electrons is performed via the dipole interaction, $V_{\mathbf{k}ss'}^{E}(t)$, \begin{align} H^{E}= & \ \int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\left|\psi_{\mathbf{k}s}\right\rangle \bigl[\epsilon_{\mathbf{k}s}\,\delta_{ss'}+V_{\mathbf{k}ss'}^{E}(t)\bigr]\left\langle \psi_{\mathbf{k}s'}\right|,\label{eq:LENGTH_H} \end{align} where, \begin{align} V_{\mathbf{k}ss'}^{E}(t)= & \ ie\mathbf{E}(t)\cdot\mathbf{D}_{\mathbf{k}ss'}.\label{eq:PERT_E} \end{align} The covariant derivative, $\mathbf{D}_{\mathbf{k}ss'}$, is defined as \citep{ventura}, \begin{align} \mathbf{D}_{\mathbf{k}ss'}= & \ \nabla_{\mathbf{k}}\delta_{ss'}-i\boldsymbol{\xi}_{\mathbf{k}ss'},\label{eq:COV_DEV} \end{align} for $\boldsymbol{\xi}_{\mathbf{k}ss'}$, the Berry connection between band states \citep{Blount1962.}, \begin{align} \boldsymbol{\xi}_{\mathbf{k}ss'}= & \ i\left\langle u_{\mathbf{k}s}\right|\left.\nabla_{\mathbf{k}}u_{\mathbf{k}s'}\right\rangle . \end{align} As for the velocity gauge, it is the scalar potential that is set to zero, \begin{align} \mathbf{E}(t)= & -\partial_{t}\mathbf{A}(t), \end{align} and the full Hamiltonian in this gauge, $H^{A}$, is obtained from $H^{E}$ by means of a time-dependent unitary transformation \citep{passos,ventura}, \begin{align} H^{A}= & \int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\left|\psi_{\mathbf{k}s}\right\rangle \bigl[\epsilon_{\mathbf{k}s}\,\delta_{ss'}+V_{\mathbf{k}ss'}^{A}(t)\bigr]\left\langle \psi_{\mathbf{k}s'}\right|,\label{eq:VELOCITY_H} \end{align} for a perturbation, $V_{\mathbf{k}ss'}^{A}(t)$, that is written as an infinite series in the external field, \begin{align} V_{\mathbf{k}ss'}^{A}(t)= & \sum_{n=1}^{\infty}\frac{e^{n}}{n!}\,A_{\alpha_{1}}(t)(...)A_{\alpha_{n}}(t)\,h_{\mathbf{k}ss'}^{\alpha_{1}(...)\alpha_{n}}.\label{eq:PERT_A} \end{align} The coefficients in that expansion, $h_{\mathbf{k}ss'}^{\alpha_{1}(...)\alpha_{n}}$, are given by nested commutators of the covariant derivative, Eq.(\ref{eq:COV_DEV}), with the unperturbed Hamiltonian \citep{passos}, \begin{align} h_{\mathbf{k}ss'}^{\alpha_{1}(...)\alpha_{n}}= & \frac{1}{\hbar^{n}}\left\langle u_{\mathbf{k}s}\right|(\nabla_{\mathbf{k}}^{\alpha_{n}}...\nabla_{\mathbf{k}}^{\alpha_{1}}H_{0\mathbf{k}})\left|u_{\mathbf{k}s'}\right\rangle ,\label{eq:AITCH}\\ = & \frac{1}{\hbar^{n}}\bigl[D_{\mathbf{k}}^{\alpha_{n}},\bigl[(...),\bigl[D_{\mathbf{k}}^{\alpha_{1}},\,H_{0}\bigr]\bigr]\,(...)\bigr]_{ss'}, \end{align} with the first one being the velocity matrix element in the unperturbed system.
Finally, one can write the velocity operator in each of the gauges: $v^{\beta}=\hbar^{-1}\left[D^{\beta},H\right]$. In the single particle basis, they read as \begin{align} v^{E,\beta}= & \int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\left|\psi_{\mathbf{k}s}\right\rangle v_{\mathbf{k}ss'}^{(0),\beta}\left\langle \psi_{\mathbf{k}s'}\right|,\label{eq:VEL_NOT}\\ v^{A,\beta}(t)= & \int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\left|\psi_{\mathbf{k}s}\right\rangle \Bigl[v_{\mathbf{k}ss'}^{(0),\beta}+\sum_{n=1}^{\infty}\frac{e^{n}}{n!}\,\nonumber \\ & \times A_{\alpha_{1}}(t)(...)A_{\alpha_{n}}(t)\,h_{\mathbf{k}ss'}^{\beta\alpha_{1}(...)\alpha_{n}}\Bigr]\left\langle \psi_{\mathbf{k}s'}\right|.\label{eq:VEL_A} \end{align} \subsection{Density matrix and conductivities} The electric current density in the crystal is given by the ensemble average of the velocity operator times the charge of an electron, \begin{align} \langle J^{\beta}\rangle(t)= & \ (-e)\,\text{Tr}\left[v^{\beta}\rho(t)\right],\\ = & \ (-e)\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}v_{\mathbf{k}s's}^{\beta}\,\rho_{\mathbf{k}ss'}(t),\label{eq:CURRENT} \end{align} and it is written in terms of the matrix elements of the density matrix (DM), whose time evolution is described by the Liouville equation, \begin{align} (i\hbar\partial_{t}-\Delta\epsilon_{\mathbf{k}ss'})\rho_{\mathbf{k}ss'}(t)= & \left[V_{\mathbf{k}},\,\rho_{\mathbf{k}}(t)\right]_{ss'}.\label{eq:EQM} \end{align} Each gauge has its own set of equations of motion, following from the perturbations of Eqs.(\ref{eq:PERT_E}) and (\ref{eq:PERT_A}). The perturbative treatment of the current response requires an expansion of the $\rho_{\mathbf{k}ss'}(t)$ in powers of the electric field and solving \textemdash{} recursively \textemdash{} the equations of motion for the matrix elements of the DM, Eq.(\ref{eq:EQM}), in frequency space. For the velocity gauge, it also requires an expansion of the velocity matrix elements, Eq.(\ref{eq:VEL_A}), since these also depend on the electric field. At the end of that procedure \citep{passos,ventura}, one obtains the conductivities of arbitrary order $n$ in both the length and velocity gauges. 
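To make the recursion concrete, consider its first step (a brief sketch on our part, with conventions chosen so as to match the expressions below): for a monochromatic perturbation $V_{\mathbf{k}}(t)=V_{\mathbf{k}}(\omega)\,e^{-i\omega t}$ and the ansatz $\rho_{\mathbf{k}ss'}^{(1)}(t)=\rho_{\mathbf{k}ss'}^{(1)}(\omega)\,e^{-i\omega t}$, Eq.(\ref{eq:EQM}) gives, at first order in the field and with the broadening $\gamma$ of the scattering prescription of \citep{passos}, \begin{align*} \rho_{\mathbf{k}ss'}^{(1)}(\omega)= & \ \frac{\bigl[V_{\mathbf{k}}(\omega),\,\rho_{\mathbf{k}}^{(0)}\bigr]_{ss'}}{\hbar\omega+i\gamma-\Delta\epsilon_{\mathbf{k}ss'}}, \end{align*} which, once inserted in Eq.(\ref{eq:CURRENT}), already displays the resolvent structure of the denominators appearing in the second order conductivities presented next.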
Here we only present the expressions for the second order conductivities, following the scattering prescription described in \citep{passos}, \begin{align} \sigma_{\beta\alpha_{1}\alpha_{2}}^{(2),E}(\omega_{1},\omega_{2})= & e^{3}\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\frac{h_{\mathbf{k}s's}^{\beta}}{\hbar\omega_{12}+2i\gamma-\Delta\epsilon_{\mathbf{k}ss'}}\nonumber \\ & \ \times\bigl[D_{\mathbf{k}}^{\alpha_{1}},\frac{1}{\hbar\omega_{2}+i\gamma-\Delta\epsilon}\circ\bigl[D_{\mathbf{k}}^{\alpha_{2}},\rho_{\mathbf{k}}^{(0)}\bigr]\bigr]_{\mathbf{k}ss'}\nonumber \\ & +(\{\alpha_{1},\omega_{1}\}\leftrightarrow\{\alpha_{2},\omega_{2}\}),\label{eq:SIGMA_E} \end{align} \begin{flushleft} \begin{widetext} \begin{align} \sigma_{\beta\alpha_{1}\alpha_{2}}^{(2),A}(\omega_{1},\omega_{2})= & \frac{e^{3}}{\omega_{1}\omega_{2}}\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\sum_{s,s'}\bigl[\frac{h_{\mathbf{k}s's}^{\beta}}{\hbar\omega_{12}+2i\gamma-\Delta\epsilon_{\mathbf{k}ss'}}\bigl(\bigl[h_{\mathbf{k}}^{\alpha_{1}},\frac{1}{\hbar\omega_{2}+i\gamma-\Delta\epsilon}\circ\bigl[h_{\mathbf{k}}^{\alpha_{2}},\rho_{\mathbf{k}}^{(0)}\bigr]\bigr]_{\mathbf{k}ss'}+\frac{1}{2}\bigl[h_{\mathbf{k}}^{\alpha_{1}\alpha_{2}},\rho_{\mathbf{k}}^{(0)}\bigr]_{\mathbf{k}ss'}\bigr)\nonumber \\ & +h_{\mathbf{k}s's}^{\beta\alpha_{1}}\,\frac{1}{\hbar\omega_{2}+i\gamma-\Delta\epsilon_{\mathbf{k}ss'}}\bigl[h_{\mathbf{k}}^{\alpha_{2}},\rho_{\mathbf{k}}^{(0)}\bigr]_{\mathbf{k}ss'}+\frac{1}{2}h_{\mathbf{k}s's}^{\beta\alpha_{1}\alpha_{2}}\,\rho_{\mathbf{k}ss'}^{(0)}\ +(\{\alpha_{1},\omega_{1}\}\leftrightarrow\{\alpha_{2},\omega_{2}\})\bigr]. \end{align} \end{widetext}In the absence of an electric field, the zeroth order DM matrix element is given by the Fermi-Dirac distribution and the band space identity matrix, \begin{equation} \rho_{\mathbf{k}ss'}^{(0)}=f(\epsilon_{\mathbf{k}s})\,\delta_{ss'}. \end{equation} The equivalence between these two conductivities, as well as for conductivities at an arbitrary order $n$, is ensured by the existence of sum rules \citep{Sipe1993,passos}, that are valid as long as the integration over $\mathbf{k}$ is taken over the \textit{full} FBZ. Though it is still possible to perform calculations using only a \textit{portion} of the FBZ \textemdash{} e.g., graphene in the Dirac cone approximation \citep{Cheng2014,Cheng2015,Cheng2015b,Mikhailov2016,Mikhailov2017} \textemdash{} one must do so in the length gauge \citep{ventura}, making it the suitable choice for analytical calculations \citep{passos}. In this work we present two analytical results, for the effects of second harmonic generation and optical rectification, in the clean limit: $\gamma\rightarrow0$. \par\end{flushleft} \begin{flushleft} The velocity gauge, on the other hand, is suitable for numerical approaches that involve the entire FBZ \citep{passos}. It does not feature higher order poles, and it avoids having to take derivatives of the density matrix. Instead, for a response of order $n$, one has to compute all $h$ coefficients, Eq.(\ref{eq:AITCH}), up to order $n+1$, $h_{\mathbf{k}ss'}^{\alpha_{1}...\alpha_{n+1}}$. All numerical results in this paper were calculated in the velocity gauge. \par\end{flushleft} \section{velocity gauge for Tight-Binding Hamiltonians\label{sec:III}} A perturbative description of the response in the velocity gauge is correct only if the unperturbed Hamiltonian is defined in the full FBZ. For the purpose of this work, we consider it to be a tight-binding model. 
This section is therefore dedicated to two points that concern this type of Hamiltonian: we show that the calculation of the $h$ coefficients is made simple when one chooses a basis for which all Berry connections are trivial; we also trace the source of variations in the calculations of these coefficients, and in the Berry connections found in the literature, to subtle changes in the definition of the position operator. This difference is illustrated in the linear optical response of the plain graphene monolayer. \subsection{Covariant derivatives in the second Bloch basis} A tight-binding model is a simplified description of electrons in a lattice, where electronic motion is characterized by hoppings from one orbital to its neighbouring ones ($t_{ij}\left(\mathbf{R}_{n},\mathbf{R}_{m}\right)$), where $i,j$ index the different orbitals of the same unit cell, which may have distinct on-site energies ($\epsilon_{i}$). In real space, \begin{align} \mathcal{H}= & \underset{\mathbf{R}_{n},\,\mathbf{R}_{m},\,i,\,j}{\sum\sum}\left[\,t_{ij}(\mathbf{R}_{n},\mathbf{R}_{m})\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle\bigl\langle\phi_{\mathbf{R}_{m}\,j}\bigr|+\text{h.c.}\right]\nonumber \\ & +\sum_{\mathbf{R}_{n}}\sum_{i}\,\epsilon_{i}(\mathbf{R}_{n})\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle\bigl\langle\phi_{\mathbf{R}_{n}\,i}\bigr|. \end{align} A $\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle$ represents a Wannier orbital centered at the position $\mathbf{R}_{n}+\boldsymbol{\lambda}_{i}$, with $\boldsymbol{\lambda}_{i}$ being the vector from the unit cell origin to the $i$-orbital site. As seen in the previous section, the eigenvalues of this Hamiltonian are the bands, $\epsilon_{\mathbf{k}s}$, and the eigenfunctions are the Bloch eigenstates, $\bigl|\psi_{\mathbf{k}s}\bigr\rangle$.
There is, however, a second basis of functions that also satisfies Bloch's theorem, where each Bloch state is built out of a single type of Wannier orbital (same $i$), \begin{align} \bigl|\psi_{\mathbf{k}i}\bigr\rangle= & \sum_{\mathbf{R}_{n}}e^{i\mathbf{k}\cdot\left(\mathbf{R}_{n}+\boldsymbol{\lambda}_{i}\right)}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle.\label{eq:LOCAL BLOCH} \end{align} A very common approximation is to \emph{define} the position operator as diagonal in the Wannier basis: \begin{align} \mathbf{r}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle= & \left(\mathbf{R}_{n}+\boldsymbol{\lambda}_{i}\right)\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle.\label{eq:POSITION WANNIER} \end{align} Under this approximation, the periodic factor in the Bloch wavefunction \[ \bigl|u_{\mathbf{k}i}\bigr\rangle=e^{-i\mathbf{k}\cdot\mathbf{r}}\bigl|\psi_{\mathbf{k}i}\bigr\rangle=\sum_{\mathbf{R}_{n}}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle, \] is $\mathbf{k}$-independent and the Berry connections in this second basis are trivially zero, \begin{equation} \xi_{\mathbf{k}ij}^{\alpha}=i\left\langle u_{\mathbf{k}i}\right|\left.\nabla_{\mathbf{k}}u_{\mathbf{k}j}\right\rangle =0.\label{eq:LOCAL BERRY} \end{equation} This means that in the second Bloch basis, the covariant derivative ($\mathbf{D}_{\mathbf{k}}$) reduces to the regular derivative ($\nabla_{\mathbf{k}}$) and that the matrix element of the derivative of an operator is simply the derivative of the matrix element of that operator \citep{ventura}, \begin{align} \left\langle u_{\mathbf{k}i}\right|\left(\nabla_{\mathbf{k}}^{\alpha}\mathcal{O}_{\mathbf{k}}\right)\left|u_{\mathbf{k}j}\right\rangle = & \left[D_{\mathbf{k}}^{\alpha},\mathcal{O}_{\mathbf{k}}\right]_{ij},\nonumber \\ = & \nabla_{\mathbf{k}}^{\alpha}\left[\left\langle u_{\mathbf{k}i}\right|\mathcal{O}_{\mathbf{k}}\left|u_{\mathbf{k}j}\right\rangle \right]-i\left[\xi_{\mathbf{k}}^{\alpha},\,\mathcal{O}_{\mathbf{k}}\right]_{ij},\nonumber \\ = & \nabla_{\mathbf{k}}^{\alpha}\mathcal{O}_{\mathbf{k}ij}. \end{align} The calculation of the $h$ coefficients is then fairly simple. Following from Eq.(\ref{eq:AITCH}), and by use of the completeness relation for the states in the second basis, we can see that \begin{align} h_{\mathbf{k}ss'}^{\alpha_{1}...\alpha_{p}}= & \sum_{i,j}\left\langle u_{\mathbf{k}s}\right|\left.u_{\mathbf{k}i}\right\rangle \left\langle u_{\mathbf{k}i}\right|\left(\nabla_{\mathbf{k}}^{\alpha_{1}}...\nabla_{\mathbf{k}}^{\alpha_{p}}H_{\mathbf{k}}\right)\bigl|u_{\mathbf{k}j}\bigr\rangle\left\langle u_{\mathbf{k}j}\right|\left.u_{\mathbf{k}s'}\right\rangle ,\nonumber \\ = & \sum_{i,j}c_{\mathbf{k}s,i}\left(\nabla_{\mathbf{k}}^{\alpha_{1}}...\nabla_{\mathbf{k}}^{\alpha_{p}}H_{\mathbf{k}ij}\right)c_{\mathbf{k}s',j}^{*}\,,\label{eq:AITCH_TB} \end{align} where the $c_{\mathbf{k}s,i}$ are the solutions to the eigenvector problem for that particular value of $\mathbf{k}$, \begin{align} \left|\psi_{\mathbf{k}s}\right\rangle = & \sum_{i}c_{\mathbf{k}s,i}\left|\psi_{\mathbf{k}i}\right\rangle .
\end{align} The Berry connection, in particular, is \begin{align} \xi_{\mathbf{k}ss'}^{\alpha} & =i\left\langle u_{\mathbf{k}s}\right|\left.\nabla_{\mathbf{k}}u_{\mathbf{k}s'}\right\rangle ,\nonumber \\ & =i\sum_{j,i}\left\langle u_{\mathbf{k}s}\right|\left.u_{\mathbf{k}j}\right\rangle \left\langle u_{\mathbf{k}j}\right|\left.u_{\mathbf{k}i}\right\rangle \left(\nabla_{\mathbf{k}}^{\alpha}\left\langle u_{\mathbf{k}i}\right|\left.u_{\mathbf{k}s'}\right\rangle \right),\nonumber \\ & =i\sum_{j}c_{\mathbf{k}s,j}\nabla_{\mathbf{k}}^{\alpha}c_{\mathbf{k}s',j}^{*},\label{eq:Berry_change_baisis} \end{align} since $\nabla_{\mathbf{k}}^{\alpha}\left(\left|u_{\mathbf{k}i}\right\rangle \left\langle u_{\mathbf{k}i}\right|\left.u_{\mathbf{k}s'}\right\rangle \right)=\left|u_{\mathbf{k}i}\right\rangle \nabla_{\mathbf{k}}^{\alpha}\left\langle u_{\mathbf{k}i}\right|\left.u_{\mathbf{k}s'}\right\rangle .$ Note that this procedure for computing the $h$ coefficients has a profound impact on how the numerical calculations of the conductivity are performed: since the $H_{\mathbf{k}ij}$ are analytical, their derivatives can be computed exactly. All other operations, such as solving the eigenvalue/eigenvector problem and calculating matrix elements in the band basis, can be done numerically and much more efficiently. \begin{figure}[t] \centering{}\includegraphics[scale=0.45]{figures/honeycomb.png}\caption{The honeycomb lattice of graphene with lattice parameter $\left|\boldsymbol{\delta}_{2}\right|=a_{0}$ and the armchair direction along $\hat{y}$. We have represented both the lattice vectors, ($\boldsymbol{a}_{1}$, $\boldsymbol{a}_{2}$), and the vectors connecting an A atom to its nearest neighbours, ($\boldsymbol{\delta}_{1}$, $\boldsymbol{\delta}_{2}$, $\boldsymbol{\delta}_{3}$). In plain graphene (PG), the atoms A and B are equivalent; in gapped graphene (GG), they are not.\label{fig:honey}} \end{figure} \subsection{Choosing a representation for the tight-binding Hamiltonian} There is a second issue concerning tight-binding Hamiltonians that, though it does not pertain solely to the velocity gauge, is extremely relevant in the calculation of nonlinear optical conductivities. For simplicity, we present the following discussion in terms of the nearest neighbour tight-binding model for the PG monolayer, since it will be used in our description of the nonlinear optical responses. The hopping parameter is set to $t=3$ eV, both in this section and throughout the rest of the work \citep{passos,Hipolito2018}. This Hamiltonian is usually written in two different ways, \begin{align} H_{\mathbf{k},(a/\delta)}= & \left[\begin{array}{cc} 0 & (-t)\ \phi_{(\delta/a)}(\mathbf{k})\\ (-t)\ \phi_{(\delta/a)}^{*}(\mathbf{k}) & 0 \end{array}\right].\label{eq:HAM_PG} \end{align} The first comes directly from the definition of the second Bloch basis as that in Eq.(\ref{eq:LOCAL BLOCH}), \begin{align} \phi_{(\delta)}(\mathbf{k})= & \ e^{-i\mathbf{k}\cdot\boldsymbol{\delta}_{1}}+e^{-i\mathbf{k}\cdot\boldsymbol{\delta}_{2}}+e^{-i\mathbf{k}\cdot\boldsymbol{\delta}_{3}},\label{eq:F_DELTA} \end{align} and is expressed in terms of the vectors connecting an atom to its nearest neighbours, Figure \ref{fig:honey}.
The other way of writing this Hamiltonian is associated with a second Bloch basis whose states are phase shifted with respect to those of Eq.(\ref{eq:LOCAL BLOCH}), \begin{align} \bigl|\tilde{\psi}_{\mathbf{k}i}\bigr\rangle= & \sum_{\mathbf{R}_{n}}e^{i\mathbf{k}\cdot\mathbf{R}_{n}}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle,\label{eq:LOCAL BLOCH-1} \end{align} such that hoppings are written in terms of the lattice vectors $\mathbf{a}_{1}$ and $\mathbf{a}_{2}$ \citep{Cheng2014,Mikhailov2016,passos}, \begin{figure}[t] \centering{}\includegraphics[scale=0.33]{figures/nnr_lvr_6.png}\caption{The real and imaginary parts of the interband portion of the linear conductivity, $\sigma_{xx}(\omega)$, for tight-binding Hamiltonians written in the lattice vector ($a_{i}$) and nearest neighbour ($\delta_{i}$) representations.\label{fig:2} The relevant parameters here are $\mu=0.5$ eV, $\gamma=0.005$ eV and $T=1$ K. The conductivity is normalized with respect to $\sigma_{0}=e^{2}/4\hbar$.} \end{figure} \begin{align} \phi_{(a)}(\mathbf{k})= & \ 1+e^{i\mathbf{k}\cdot(\mathbf{a}_{2}-\mathbf{a}_{1})}+e^{i\mathbf{k}\cdot\mathbf{a}_{2}}.\label{eq:F_A} \end{align} Both representations have the same eigenvalues, since the $\phi$ functions are related by a phase factor, \begin{equation} \phi_{(a)}(\mathbf{k})=e^{i\mathbf{k}\cdot\boldsymbol{\delta}_{2}}\,\phi_{(\delta)}(\mathbf{k}). \end{equation} There is, however, a very important subtlety. If we use Eq.(\ref{eq:AITCH_TB}) to define the $h$ coefficients, or Eq.(\ref{eq:Berry_change_baisis}) to compute the Berry connection, we obtain different results in the two representations, Eqs.(\ref{eq:F_DELTA}) and (\ref{eq:F_A}), both of which are found in the literature. Indeed, the condition for a trivial Berry connection in the entire FBZ, Eq.(\ref{eq:LOCAL BERRY}), which follows from the definition of the position operator of Eq.(\ref{eq:POSITION WANNIER}), is satisfied in the $\psi_{\mathbf{k}i}$ basis but it is \textit{not} satisfied in the $\tilde{\psi}_{\mathbf{k}i}$ basis, since \begin{equation} \bigl|\tilde{u}_{\mathbf{k}i}\bigr\rangle=e^{-i\mathbf{k}\cdot\mathbf{r}}\bigl|\tilde{\psi}_{\mathbf{k}i}\bigr\rangle=\sum_{\mathbf{R}_{n}}e^{-i\mathbf{k}\cdot\boldsymbol{\lambda}_{i}}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle.\label{eq:u_tilde} \end{equation} One should note, however, that Eqs.(\ref{eq:AITCH_TB}) and (\ref{eq:Berry_change_baisis}) are still valid for the $\bigl|\tilde{\psi}_{\mathbf{k}i}\bigr\rangle$ basis of Eq.(\ref{eq:LOCAL BLOCH-1}), \emph{provided we define $\mathbf{r}$ differently}, effectively neglecting the positions inside the unit cell, $\boldsymbol{\lambda}_{i}$, \begin{equation} \mathbf{r}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle=\mathbf{R}_{n}\bigl|\phi_{\mathbf{R}_{n}\,i}\bigr\rangle.\label{eq:POSITIO-WANNIER-2} \end{equation} These two different representations of the position operator correspond naturally to different approximations to the perturbation term, and can lead to different results. To illustrate these distinctions, it is worthwhile to compare the responses that follow from either representation, assuming that the $h$ coefficients are computed according to Eq.(\ref{eq:AITCH_TB}), for both the zig-zag and armchair directions. First, we consider the linear response along the zig-zag direction, $\sigma_{xx}(\omega)$, represented in Figure \ref{fig:2}.
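This subtlety can be checked directly. The following minimal Python sketch (an illustration under the same assumed conventions as the previous sketch, not code from this work) verifies numerically that $\phi_{(a)}(\mathbf{k})=e^{i\mathbf{k}\cdot\boldsymbol{\delta}_{2}}\,\phi_{(\delta)}(\mathbf{k})$, so the spectra coincide, while the $\mathbf{k}$-derivatives entering Eq.(\ref{eq:AITCH_TB}) do \emph{not} differ by a mere phase; the particular choice of lattice vectors is an assumption made for the check.
\begin{verbatim}
# Sketch: the two representations, Eqs. (F_DELTA) and (F_A), share a spectrum
# but have inequivalent k-derivatives, hence different h coefficients.
import numpy as np

a0 = 1.0
d1 = a0 * np.array([np.sqrt(3)/2, 0.5])   # nearest neighbour vectors,
d2 = a0 * np.array([0.0, -1.0])           # with delta_2 along y (a choice)
d3 = a0 * np.array([-np.sqrt(3)/2, 0.5])
a1, a2 = d1 - d3, d2 - d3                 # a consistent pair of lattice vectors

phi_d = lambda k: sum(np.exp(-1j*np.dot(k, d)) for d in (d1, d2, d3))
phi_a = lambda k: 1 + np.exp(1j*np.dot(k, a2 - a1)) + np.exp(1j*np.dot(k, a2))

def grad(f, k, h=1e-6):                   # central finite differences
    ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return np.array([(f(k + h*ex) - f(k - h*ex)) / (2*h),
                     (f(k + h*ey) - f(k - h*ey)) / (2*h)])

k = np.array([0.4, -0.7])
phase = np.exp(1j*np.dot(k, d2))
print(np.isclose(phi_a(k), phase*phi_d(k)))                  # True: same bands
print(np.allclose(grad(phi_a, k), phase*grad(phi_d, k)))     # False overall
print(np.isclose(grad(phi_a, k)[0], phase*grad(phi_d, k)[0]))# True: x agrees
\end{verbatim}
The last line shows that the $x$-derivatives (zig-zag direction) do agree, since $\boldsymbol{\delta}_{2}$ has no $x$ component; this anticipates the behaviour of $\sigma_{xx}(\omega)$ discussed next.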
\begin{figure}[t] \centering{}\includegraphics[scale=0.32]{figures/lvr6.png}\smallskip{} \includegraphics[scale=0.32]{figures/nnr6.png}\caption{The real and imaginary parts of the interband portion of the linear conductivities, $\sigma_{xx}(\omega)$ and $\sigma_{yy}(\omega)$, following from Eq.(\ref{eq:F_A})/(\ref{eq:F_DELTA}), the lattice vector representation (a) and the nearest neighbour representation (b). The chemical potential is again fixed to $0.5$ eV.\label{fig:3}} \end{figure} In this case, the responses that follow from the two representations are exactly the same, as both $\xi_{\mathbf{k}ij}^{x}$ and $\tilde{\xi}_{\mathbf{k}ij}^{x}$ are zero. This is due to the fact that $\boldsymbol{\delta}_{2}$ points in the $\hat{y}$, or armchair, direction and, as such, does not influence the response along the zig-zag direction. For the armchair direction, however, it is clear that the results in the two representations are different, Figure \ref{fig:3}. More importantly, we can see that in the lattice vector representation, the responses along the zig-zag, $\sigma_{xx}(\omega)$, and armchair, $\sigma_{yy}(\omega)$, directions are different from one another, Figure \ref{fig:3}(a). The approximation described in Eq.(\ref{eq:POSITIO-WANNIER-2}) thus fails to properly translate the symmetry properties of the PG monolayer, particularly at high frequencies \citep{Boyd:2008}. In the nearest neighbour representation, that property is indeed fulfilled; it is this latter representation that we use for the remaining numerical results in this work. \section{Results\label{sec:IV}} In this section, we present the results for the second order response, i.e. the second harmonic generation and optical rectification conductivities, of the gapped graphene monolayer, and for the third order response, i.e. the third harmonic generation and Kerr effect conductivities, of the plain and gapped monolayers to a monochromatic electric field. We considered different values for the gap and chemical potential, $\Delta$ and $\mu$, as well as different values for the scattering rate, $\gamma$, but not different values for the temperature, $T$, as its effect is similar to that of $\gamma$: to broaden the features. $T$ is thus set, throughout this work, to $1$ K. Since all nonlinear optical conductivities are monotonically decreasing \textemdash{} the exception being the regions around processes at the gap (or twice the chemical potential) and around the van Hove singularities \textemdash{} they are represented in the two frequency regions separately, so as to make the features more visible. It must be said of these high frequency results that they should be taken only as an indication of what the response should look like \textemdash{} they were calculated in the independent particle approximation and, as such, do not consider the effect of excitons \citep{Chae2011,Mak2014}. Finally, we emphasize that the following conductivities satisfy the property of intrinsic permutation symmetry \citep{Boyd:2008}. \subsection{SECOND ORDER RESPONSE OF THE GAPPED GRAPHENE MONOLAYER} A gap is introduced in the plain graphene Hamiltonian, Eq.(\ref{eq:HAM_PG}), by adding to it a term that breaks the equivalence between the A and B atoms, $\text{diag}(\Delta/2,-\Delta/2)$, and thus the centrosymmetry of the PG monolayer.
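For concreteness, the following minimal sketch (same illustrative conventions as above; the conjugation convention of the eigenvector matrix may differ from that of Eq.(\ref{eq:AITCH_TB}) by a transpose) adds the mass term, builds a first order $h$ coefficient from the analytic derivative of $H_{\mathbf{k}ij}$ and numerical eigenvectors, and checks the dispersion relation used below, Eq.(\ref{eq:SQ_DIS_REL}).
\begin{verbatim}
# Sketch: gapped graphene, H = [[D/2, -t*phi], [-t*phi*, -D/2]], with an h
# coefficient built per Eq. (AITCH_TB); t = 3 eV, Delta = 0.3 eV, a_0 = 1
# are illustrative values.
import numpy as np

t, a0, Delta = 3.0, 1.0, 0.3
deltas = a0 * np.array([[np.sqrt(3)/2, .5], [0., -1.], [-np.sqrt(3)/2, .5]])

def phi(k):          # Eq. (F_DELTA)
    return np.sum(np.exp(-1j * deltas @ k))

def dphi(k, alpha):  # analytic derivative d(phi)/dk_alpha
    return np.sum(-1j * deltas[:, alpha] * np.exp(-1j * deltas @ k))

def H(k):
    f = -t * phi(k)
    return np.array([[Delta/2, f], [np.conj(f), -Delta/2]])

def dH(k, alpha):    # the gap term is k-independent
    f = -t * dphi(k, alpha)
    return np.array([[0.0, f], [np.conj(f), 0.0]])

k = np.array([1.1, 0.4])
eps, C = np.linalg.eigh(H(k))                  # bands and c_{ks,i}
h_x = C.conj().T @ dH(k, 0) @ C                # h^x in the band basis
# consistency check of the dispersion, Eq. (SQ_DIS_REL):
print(np.isclose((eps[1]-eps[0])**2, Delta**2 + 4*t**2*abs(phi(k))**2))  # True
\end{verbatim}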
The study of the remaining symmetries in the point group then tells us that there is only one relevant component for this conductivity tensor: $\sigma_{yyy}$, with $y$ being the armchair direction \citep{Hipolito2016}. In addition, the relation between this component and the remaining nontrivial components reads \begin{equation} \sigma_{xxy}=\sigma_{xyx}=\sigma_{yxx}=-\sigma_{yyy}.\label{eq:TENSOR} \end{equation} The following results have been normalized with respect to $\sigma_{2}=e^{3}a_{0}/4t\hbar=2.87\times10^{-15}\text{ S\ensuremath{\cdot}m/V}$ \citep{Hipolito2016}. \subsubsection{Second Harmonic Generation (SHG)} We begin with the study of the one photon ($\hbar\omega\sim\Delta$) and two photon ($2\hbar\omega\sim\Delta$) processes at the gap, for values of $\Delta=30,\,300$ meV \citep{Hipolito2018} and for two different values of the scattering rate, $\gamma=0.005,\,0.001$ eV, Figure \ref{fig:4}. \begin{figure}[t] \centering{}\includegraphics[scale=0.3]{figures/shg_003_low_fre_4.png}$\ \,$\smallskip{} \includegraphics[scale=0.3]{figures/shg_03_low_fre_3.png}\caption{The real and imaginary parts of the second harmonic generation, $\sigma_{yyy}(\omega,\omega)$, close to the one photon and two photon processes at the gap: $\Delta=30$ meV in the top plot (a) and $\Delta=300$ meV in the bottom plot (b), for different values of the scattering rate. The green curve represents the $\gamma=0$ analytical result. The parameters not listed in the plots are the chemical potential and the temperature, which are fixed to $\mu=0$ eV and $T=1$ K. These values are used in the remaining figures of this section. \label{fig:4}} \end{figure} It is clear from these results that the shape of the features is highly dependent on the interplay between the gap and scattering parameters, $\gamma$ and $\Delta$. For the larger scattering rate and smaller gap, we can see an overlap of the two and one photon peaks, Figure \ref{fig:4}(a), which is markedly different from what happens for the larger gap, Figure \ref{fig:4}(b), where the two peaks are clearly distinct. For smaller values of the scattering rate, represented by dashed curves, there is a sharpening of the features \textemdash{} now narrower and taller \textemdash{} and the results for the two gaps are similar. To study the zero scattering limit, $\gamma=0$, we turn to the analytical results \textemdash{} represented by the thicker green curve \textemdash{} that are obtained in the length gauge, Eq.(\ref{eq:SIGMA_E}). It can be shown that the real part of the two photon process in the second harmonic generation can be expressed in terms of the shift current coefficient that has been previously derived in \citep{Aversa1995,Sipe2000}, \begin{align} \frac{\text{Re}\left[\sigma_{yyy}(\omega,\omega)\right]}{\sigma_{2}}= & -\frac{8it}{\pi}\int d^{2}\mathbf{k}\,\xi_{vc}^{y}\bigl(\xi_{cv}^{y}\bigr)_{;y}\ \delta(2\hbar\omega-\Delta\epsilon_{cv}),\label{eq:SHG_ANALYTICAL} \end{align} where we use the standard notation for the generalized derivative, $\bigl(\xi_{ss'}^{\alpha_{1}}\bigr)_{;\alpha_{2}}=\nabla_{\mathbf{k}}^{\alpha_{2}}\xi_{ss'}^{\alpha_{1}}-i\bigl(\xi_{ss}^{\alpha_{2}}-\xi_{s's'}^{\alpha_{2}}\bigr)\xi_{ss'}^{\alpha_{1}}$ \citep{Aversa1995}.
Due to the delta function in the integrand, the relevant contributions to the study of the two photon processes at the gap come from the two regions of the FBZ around the band minima, $\mathbf{K},\,\mathbf{K}'=\pm4\pi/3\sqrt{3}a_{0}\,\hat{x}$, which motivates a momentum expansion of the bands around those points. Furthermore, since the delta function fixes $\Delta\epsilon_{cv}$ directly to twice the photon energy, it is the suitable variable of integration, \begin{align} \Delta\epsilon_{cv}^{2}= & \Delta^{2}+4t^{2}\left|\phi_{\delta}(\mathbf{k})\right|^{2}.\label{eq:SQ_DIS_REL} \end{align} Now, by expanding the hopping function, $\phi_{\delta}$, for small momenta around one of the band minima, \begin{align} \left|\phi_{\delta}(\mathbf{k}=\mathbf{K}+\mathbf{q})\right|= & \frac{3\left|\mathbf{q}\right|}{2}-\frac{3\left|\mathbf{q}\right|^{2}}{8}\text{cos}(3\theta)+\mathcal{O}(\left|\mathbf{q}\right|^{3}),\label{eq:EXPANSION} \end{align} where $\left|\mathbf{q}\right|$ and $\theta$ are the radial and polar coordinates associated with $\mathbf{q}$, and by rewriting Eq.(\ref{eq:SQ_DIS_REL}) with the help of Eq.(\ref{eq:EXPANSION}), we obtain, \begin{align} \frac{1}{t}\sqrt{\Delta\epsilon_{cv}^{2}-\Delta^{2}}=2\left|\phi_{\delta}(\mathbf{K}+\mathbf{q})\right|= & \ 3\left|\mathbf{q}\right|-\frac{3\left|\mathbf{q}\right|^{2}}{4}\text{cos}(3\theta)+\mathcal{O}(\left|\mathbf{q}\right|^{3}).\label{eq:EQUAL} \end{align} We have effectively related one of our integration variables, $\left|\mathbf{q}\right|$, to the small parameter $\delta(\Delta\epsilon_{cv})=\sqrt{\Delta\epsilon_{cv}^{2}-\Delta^{2}}/t$. It is now possible to invert this series, so as to obtain $\left|\mathbf{q}\right|$ in terms of $\delta$, \begin{figure}[t] \centering{}\smallskip{} \smallskip{} \includegraphics[scale=0.265]{figures/shg_tpa_vhs_8.png}\smallskip{} \includegraphics[scale=0.26]{figures/shg_opa_vhs_6.png}\caption{The real and imaginary parts of the SHG in GG, $\sigma_{yyy}(\omega,\omega)$, for frequencies around the two ($\hbar\omega\sim t$) (a) and one ($\hbar\omega\sim2t$) (b) photon processes at the van Hove singularity. The imaginary parts for the two photon processes are represented in the inset. Curves labelled as scaled have been divided by a factor of $\Delta/300$ meV. The scattering parameter for these plots is $\gamma=0.005$ eV. \label{fig:5}} \end{figure} \begin{align} \left|\mathbf{q}\right| & =\frac{\delta}{3}+\frac{\delta^{2}}{36}\text{cos}(3\theta)+\mathcal{O}(\delta^{3}). \end{align} Performing this change of variable in the integral, $\left|\mathbf{q}\right|\rightarrow\delta$, enables us to compute the integration in Eq.(\ref{eq:SHG_ANALYTICAL}) analytically. The result is an expansion in powers of $(2\hbar\omega)^{2}-\Delta^{2}$, which in the lowest orders reads, \begin{align} \frac{\text{Re}\left[\sigma_{yyy}(\omega,\omega)\right]}{\sigma_{2}}=\ & \Theta(2\hbar\omega-\Delta)\Bigl[\frac{2t}{\Delta}+\Bigl(\frac{t}{9\Delta}-\frac{2t^{3}}{\Delta^{3}}\Bigr)\nonumber \\ & \times\Bigl(\Bigl(\frac{2\hbar\omega}{t}\Bigr)^{2}-\Bigl(\frac{\Delta}{t}\Bigr)^{2}\Bigr)+(...)\Bigr].\label{eq:TPA_ANA} \end{align} This is represented in Figure \ref{fig:4}, alongside the numerical results of the velocity gauge.
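The series inversion above can be verified symbolically. The following short sketch (a check of the inversion only, not of the full integration; it uses an order-by-order ansatz rather than any dedicated reversion routine) recovers the coefficients $1/3$ and $\cos(3\theta)/36$ from Eq.(\ref{eq:EQUAL}).
\begin{verbatim}
# Sketch: invert delta = 3q - (3/4) q^2 cos(3 theta) + O(q^3), cf. Eq. (EQUAL),
# checking |q| = delta/3 + (delta^2/36) cos(3 theta) + O(delta^3).
import sympy as sp

q, delta, theta = sp.symbols('q delta theta', positive=True)
c3 = sp.cos(3*theta)

delta_of_q = 3*q - sp.Rational(3, 4)*c3*q**2   # delta(q), to second order

# order-by-order ansatz q = a1*delta + a2*delta**2
a1, a2 = sp.symbols('a1 a2')
residual = sp.expand(delta_of_q.subs(q, a1*delta + a2*delta**2) - delta)
sol = sp.solve([residual.coeff(delta, 1), residual.coeff(delta, 2)], [a1, a2])
print(sol)   # {a1: 1/3, a2: cos(3*theta)/36}
\end{verbatim}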
We must note that, had we kept only the linear terms in $\left|\mathbf{q}\right|$ in the expansion of the hopping function, $\text{Re}\left[\sigma_{yyy}(\omega,\omega)\right]$ would be exactly zero, for the same reason it vanishes in the plain graphene monolayer: in that case, the Berry connections, $\xi_{\mathbf{q}ss'}^{\alpha}$, are odd under $\mathbf{q}\rightarrow-\mathbf{q}$, and the integral necessarily vanishes. To obtain a nontrivial second order response in GG, one has to consider the trigonal warping terms in the expansion of Eq.(\ref{eq:EXPANSION}). The high frequency results, i.e. those for the two ($\hbar\omega\sim t$) and one ($\hbar\omega\sim2t$) photon processes at the van Hove singularities, are represented in Figure \ref{fig:5}. We can see that the features \textemdash{} for different values of $\Delta$ \textemdash{} are centered around slightly different energies, as \begin{align} \Delta\epsilon_{\textsc{vHs}}^{2}= & \Delta^{2}+4t^{2}\left|\phi_{\delta}(\mathbf{M})\right|^{2}.\label{eq:VHS} \end{align} Note also that the absolute value of these conductivities scales with $\Delta$ \textemdash{} the opposite behavior to what we found for the response at the gap. Another, quite surprising, point is that the features in the real and imaginary parts of these conductivities are switched with respect to those of the conductivities at the gap, Figure \ref{fig:4}. It is now the real part that has the shape of a logarithmic-like divergence, while the step-like behavior is present in the imaginary part. \subsubsection{Optical Rectification} The other second order process that can be observed in the response to an external monochromatic field is the generation of a DC current, described by the optical rectification conductivity, $\sigma_{yyy}(\omega,-\omega)$, Figures \ref{fig:6} and \ref{fig:7}. From the inspection of the response at photon energies close to the value of the gap, $\hbar\omega\sim\Delta$, Figure \ref{fig:6}, we can see that this tensor component is always finite (even in the zero scattering limit), meaning that the injection current is indeed absent, as prescribed by the symmetry properties of the GG monolayer, Eq.(\ref{eq:TENSOR}). The remaining portion of this response is associated with the shift current and has a feature which is similar to that of the second harmonic generation at the gap, Figure \ref{fig:4}(b). In the zero scattering limit, we have \citep{Sipe2000}, \begin{figure}[t] \centering{}\includegraphics[scale=1.27]{figures/pho_003_low_fre_3.png}\smallskip{} \includegraphics[scale=1.13]{figures/pho_03_low_fre_3.png}\caption{The optical rectification conductivity, $\sigma_{yyy}(\omega,-\omega)$, in GG for frequencies close to the gap: $\Delta=30$ meV in the top plot (a) and $\Delta=300$ meV in the bottom plot (b), for different values of the scattering rate. The green curve represents the $\gamma=0$ analytical result of Eq.(\ref{eq:PHOTO_ANA}). \label{fig:6}} \end{figure} \begin{align} \frac{\sigma_{yyy}(\omega,-\omega)}{\sigma_{2}}= & -\frac{4it}{\pi}\int d^{2}\mathbf{k}\,\xi_{vc}^{y}\bigl(\xi_{cv}^{y}\bigr)_{;y}\ \delta(\hbar\omega-\Delta\epsilon_{cv}).
\end{align} By comparison with the two photon resonance in the second harmonic generation, Eq.(\ref{eq:TPA_ANA}), we can see that the two effects are essentially described by the same function, with just different arguments \textemdash{} $2\omega$ in the case of the SHG \textemdash{} and an extra factor of two,\footnote{We have compared this with the analytical result of ref.\citep{Hipolito2016} and found a minus sign discrepancy, which \textemdash{} according to our calculations \textemdash{} follows from excluding the derivative portion of the generalized derivative.} \begin{align} \frac{\text{Re}\left[\sigma_{yyy}(\omega,-\omega)\right]}{\sigma_{2}}=\ & \Theta(\hbar\omega-\Delta)\Bigl[\frac{t}{\Delta}+\Bigl(\frac{t}{18\Delta}-\frac{t^{3}}{\Delta^{3}}\Bigr)\nonumber \\ & \times\Bigl(\Bigl(\frac{\hbar\omega}{t}\Bigr)^{2}-\Bigl(\frac{\Delta}{t}\Bigr)^{2}\Bigr)+(...)\Bigr].\label{eq:PHOTO_ANA} \end{align} \begin{figure}[t] \centering{}\includegraphics[scale=0.32]{figures/pho_opa_vhs_4.png}\caption{The optical rectification conductivity, $\sigma_{yyy}(\omega,-\omega)$, in GG for frequencies around the one photon process at the van Hove singularity. Curves labelled as scaled have been divided by a factor of $\Delta/300$ meV. The scattering parameter in this plot is $\gamma=0.005$ eV. \label{fig:7}} \end{figure} The similarity between the optical rectification conductivity and the second harmonic generation is also present at higher frequencies, $\hbar\omega\sim2t$, Figure \ref{fig:7}. Apart from a sign switch and the absence of the imaginary part \textemdash{} the symmetrized optical rectification conductivity is necessarily real \textemdash{} this result is very similar to that of Figure \ref{fig:5}(b). \subsection{THIRD ORDER RESPONSE OF THE GG AND PG MONOLAYERS} The third order response is finite even in the presence of inversion symmetry and, as such, we present results for both the gapped graphene and the plain graphene monolayers for the nonlinear processes of third harmonic generation and optical Kerr effect. Though associated with different point group symmetries, the components of their third order conductivities satisfy the same relations, \begin{equation} \begin{array}{c} \sigma_{yyyy}=\sigma_{xxxx}=\sigma_{xxyy}+\sigma_{xyxy}+\sigma_{xyyx},\\ \sigma_{xxyy}=\sigma_{yyxx},\ \sigma_{xyxy}=\sigma_{yxyx},\ \sigma_{xyyx}=\sigma_{yxxy}. \end{array} \end{equation} As we are also imposing intrinsic permutation symmetry, there is only one relevant component in third harmonic generation (THG), with all other components of the tensor trivially expressed in terms of it, \begin{equation} \begin{array}{c} \sigma_{xxyy}(\omega,\omega,\omega)=\sigma_{xyxy}(\omega,\omega,\omega)=\sigma_{xyyx}(\omega,\omega,\omega),\\ \sigma_{xxyy}(\omega,\omega,\omega)=\frac{1}{3}\sigma_{xxxx}(\omega,\omega,\omega)=\frac{1}{3}\sigma_{yyyy}(\omega,\omega,\omega). \end{array}\label{eq:THG_TENSOR_COM} \end{equation} We will thus present only $\sigma_{yyyy}$ in our study of the THG. For the optical Kerr effect, we consider both the $\sigma_{yyyy}$ and $\sigma_{yxxy}$ components. The following results have been normalized by $\sigma_{3}=e^{4}a_{0}^{2}/8\hbar t^{2}=6.84\times10^{-26}\text{ S\ensuremath{\cdot}m}^{2}\text{/V}^{2}$.
\subsubsection{Third Harmonic Generation (THG)} \begin{figure}[t] \centering{}\includegraphics[scale=0.29]{figures/thg_low_fre_5.png}\caption{The real and imaginary parts of the third harmonic generation, $\sigma_{yyyy}(\omega,\omega,\omega)$, in the GG, $\Delta=300$ meV, and PG, $2\mu=300$ meV, monolayers for frequencies that cover the different (one, two and three) photon processes at the gap / twice the value of the chemical potential. Note that the vertical scale is in units of $10^{6}$. The inset represents a zoom-in on the region of the one photon process, $\hbar\omega\sim\Delta,\,2\mu$. \label{fig:8}} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[scale=0.26]{figures/kerr_low_fre_7.png} \par\end{centering} \caption{The real and imaginary parts (the latter is represented in the inset) of the optical Kerr effect for the components $\sigma_{yyyy}(\omega,\omega,-\omega)$ and $\sigma_{yxxy}(\omega,\omega,-\omega)$, in the GG, $\Delta=300$ meV, and PG, $2\mu=300$ meV, monolayers for frequencies around the one photon process at the gap / twice the value of the chemical potential. Note that the vertical scale is in units of $10^{6}$.\label{fig:fig9}} \end{figure} One of the points covered in a previous subsection (SHG) concerned the interplay between $\Delta$ and $\gamma$, and how it affects the features one sees in the conductivities. If one were to do this analysis for the THG, one would again conclude that for larger scattering rates one sees a broadening, possibly even a merger, of the main features in the conductivity. The scattering rate is therefore fixed to $\gamma=0.005$ eV. We will, instead, focus on the THG of the gapped and plain graphene monolayers in the case where the gap of the GG is equal to the energy range of Pauli-blocked states of the PG, $2\mu$, with both equal to $300$ meV, Figures \ref{fig:8} and \ref{fig:9}. We begin by studying the response for the several different photon processes, $n\hbar\omega=\Delta,\,2\mu$ for $n=1,2,3$, Figure \ref{fig:8}. It is clear that the conductivities for gapped and plain graphene are very different: for the three photon resonance, there are prominent features in both sets of curves, but the sign appears to be switched with respect to one another; for the two photon resonance, there are no clear features in the GG monolayer, whereas in the PG one finds a shoulder and a local minimum in the real and imaginary parts, respectively. An exception to this, however, are the features for the one photon process, inset of Figure \ref{fig:8}. The differences between the low frequency limits of the gapped and plain graphene monolayers can be easily ascribed to the intraband terms of the response, dominant in this frequency range, which are completely absent from the response of the GG \textemdash{} a cold semiconductor \textemdash{} but present in the response of the doped PG monolayer. For higher frequencies, associated with the different processes around the van Hove singularities, Figure \ref{fig:9}, we can see that the conductivities of the PG and GG monolayers are rather similar. For those energies, the band structures are nearly identical (as $\Delta\ll t$) and the chemical potential that is set in the PG is completely irrelevant. The only difference in the two curves comes from the different energy values for the van Hove singularity, Eq.(\ref{eq:VHS}).
\begin{figure}[t] \centering{}\includegraphics[scale=0.275]{figures/thg_3pa_vhs_5.png}\smallskip{} \includegraphics[scale=0.259]{figures/thg_1pa_vhs.png}\caption{The real and imaginary parts of the third harmonic generation, $\sigma_{yyyy}(\omega,\omega,\omega)$, for frequencies around the three photon ($3\hbar\omega\sim2t$), (a), two photon ($\hbar\omega\sim t$), inset of (a), and one photon ($\hbar\omega\sim2t$), (b), processes at the van Hove singularity in the GG, $\Delta=300$ meV, and PG, $2\mu=300$ meV, monolayers.\label{fig:9}} \end{figure} \begin{figure*}[t] \centering{}\includegraphics[scale=0.265]{figures/kerr_low_fre_2_tpa.png}$\ \ \ $\includegraphics[scale=0.274]{figures/kerr_1pa_vhs_5.png}\caption{The real and imaginary (the latter represented in the insets) parts of the Kerr effect, $\sigma_{yyyy}(\omega,\omega,-\omega)$, in the GG, $\Delta=300$ meV, and PG, $2\mu=300$ meV, monolayers for frequencies around $\Delta=2\mu$, (a), and for frequencies around the van Hove singularity, (b). Note that the vertical scale in both figures is in units of $10^{6}$. \label{fig:10} } \end{figure*} \begin{figure*}[t] \centering{}\includegraphics[scale=0.28]{figures/kerr_2pa_lower_gamma.png}$\ \ \ $\includegraphics[scale=0.28]{figures/kerr_1pa_lower_gamma_2.png}\caption{The real and imaginary (the latter represented in the insets) parts of the Kerr effect, $\sigma_{yyyy}(\omega,\omega,-\omega)$, in the GG, $\Delta=300$ meV, and PG, $2\mu=300$ meV, monolayers for frequencies around $2\hbar\omega\sim\Delta=2\mu$ (a), and for frequencies around $\hbar\omega\sim\Delta=2\mu$ (b), for different values of the scattering parameter: $\gamma=0.005$ eV (black), $\gamma=0.0025$ eV (red) and $\gamma=0.001$ eV (orange). Note that in (b), the conductivities for different $\gamma$ have been scaled by different factors: $1.5$ for black, $0.5$ for red and $0.1$ for orange. As before, the vertical scale in both figures is in units of $10^{6}$. \label{fig:12} } \end{figure*} \subsubsection{Optical Kerr Effect (OKE)} Our final set of results concerns the optical Kerr effect (OKE), once again calculated for the GG and PG monolayers with parameters $\Delta=2\mu$. Now, unlike the THG \textemdash{} Eq.(\ref{eq:THG_TENSOR_COM}) \textemdash{} not all nonzero components of the conductivity tensor associated with the OKE can be directly related to the diagonal terms. To show their differences, we present the $\sigma_{yyyy}(\omega,\omega,-\omega)$ and $\sigma_{yxxy}(\omega,\omega,-\omega)$ components in the low frequency portion of the response, i.e. around the one and two photon processes at the gap (twice the chemical potential for the PG), Figures \ref{fig:fig9} and \ref{fig:10}(a), as well as the response around the one photon process at the van Hove singularity, Figure \ref{fig:10}(b). Figure \ref{fig:fig9} shows that the two conductivity components of the GG monolayer have opposite signs \textemdash{} in both the real and the imaginary part \textemdash{} and that they are similar, but not exactly equal, in modulus. In the PG, it is only the height of the features that is different, being less pronounced in the off-diagonal component.
For the two photon processes at the gap (twice the chemical potential), Figure \ref{fig:10}(a), both the GG and PG conductivities display sign differences between the two tensor components, and the property observed in the one photon process seems to appear in reverse: here it is in the response of the GG that we see the less pronounced features for the off-diagonal component; the PG conductivities have opposite signs and are rather similar, in modulus, across the frequency range considered. For the high frequency response, i.e. the one photon processes at the van Hove singularity, Figure \ref{fig:10}(b), we see that the features on both components of the OKE conductivity are essentially the same, differing only by an overall factor of three. As in the THG, the only distinction between the responses of the GG and PG monolayers comes from the different energy values of the van Hove singularity: slightly higher in the GG monolayer, Eq.(\ref{eq:VHS}). A second point of interest in the OKE concerns the existence of a divergence in the real part of its associated conductivity for frequencies above the one photon absorption at the gap (twice the chemical potential) in the scatteringless limit \citep{Aversa1995}, which is related to the acceleration of electron-hole pairs \textemdash{} produced in one photon absorption processes \textemdash{} by a static, nonlinear, electric field. This divergence should be present in both the GG and PG monolayers, and was indeed seen in an analytical calculation of the OKE in the monolayer of plain graphene, in the context of a linearized band \citep{Cheng2014}. Although we cannot probe this singularity directly \textemdash{} in the sense that the scattering parameter is necessarily finite in the numerical calculations \textemdash{} it is nonetheless clear that such a divergence does exist, in both the PG and the GG monolayers. Figure \ref{fig:12} represents the real and imaginary parts of the OKE conductivity for frequencies around the two photon (a) and one photon (b) processes at the gap (twice the chemical potential), for different values of the scattering parameter, $\gamma$. For frequencies $2\hbar\omega\sim\Delta=2\mu$, Figure \ref{fig:12}(a), we can see that a decrease in the value of $\gamma$ is associated with sharper features in a small region around the absorption threshold, which then tend to merge as one moves to frequencies away from the threshold. This is similar to what we have observed in Figures \ref{fig:4}(b) and \ref{fig:6}(b), and it is the expected behavior for features in any regular conductivity. When we move to frequencies above the one photon absorption, $\hbar\omega\ge\Delta=2\mu$, this no longer holds for the real part of the OKE conductivity. It increases in absolute value as $\gamma$ is reduced, with the curves for different $\gamma$ running parallel to one another. Instead of a well-localized feature, one can see the appearance of a divergence. \section{SUMMARY} In this work we have studied the second and third harmonic generation, optical rectification and optical Kerr effect responses of the gapped and plain graphene monolayers to a monochromatic field, by using the density matrix formalism in the velocity gauge as well as in the length gauge. Although the topic is not new, this is the first work to present all tensor components of the nonlinear conductivities of these materials, in a frequency range that extends beyond the Dirac approximation.
We emphasize that the tensor components considered here are not the effective tensors of ref.\citep{Hipolito2018}, the use of which, we think, has not been adequately justified. To calculate the conductivities in this work, we used the velocity gauge formalism developed in a previous work \citep{passos}, with an additional point that we presented here: the choice of an adequate basis \textemdash{} the second Bloch basis \textemdash{} can be used to reduce covariant derivatives to regular \textbf{k}-space derivatives, which in turn simplifies the computation of the $h$ coefficients that are required for the calculation of nonlinear optical responses in the velocity gauge. We have also shown how this treatment of the covariant derivative is related to the representation of the position operator, the choice of which influences the results. As for the nonlinear conductivities themselves: for the second harmonic generation and the optical rectification conductivity at the gap, the numerical results of the velocity gauge were complemented by analytical, zero scattering limit, results in the length gauge. From these numerical results, we saw how the interplay between the gap, $\Delta$, and the scattering rate, $\gamma$, affects the form of the features at low frequencies. For higher frequencies, that is, around the van Hove singularity, we saw the relation between the conductivities of GG monolayers with different values of the gap, as well as a blueshift of the features for increasing values of $\Delta$. For the third order response, we instead focused on a comparison between the responses of the gapped graphene and doped plain graphene monolayers, in the case where the excluded energy region for interband transitions is the same, i.e., $\Delta=2\mu$. We saw, in the case of the THG, that the low frequency limit in the two materials is very different, which can be traced back to the presence of intraband terms in the response of the doped PG monolayer. For higher frequencies, the two responses are very much alike, with the exception of the shift that follows from the different location of the van Hove singularity. For the OKE, we studied two different components of the conductivity tensor, for both low and high frequencies, as well as the existence of a divergence for frequencies above the one photon absorption at the gap (twice the chemical potential) in the response of both the PG and the GG monolayers. The authors acknowledge financing from Funda\c{c}\~{a}o da Ci\^{e}ncia e Tecnologia, and from the COMPETE 2020 program in its FEDER component (European Union), through projects POCI-01-0145-FEDER-028887 and UID/FIS/04650/2013. G. B. V. would like to thank Emilia Ridolfi and V\'{i}tor M. Pereira of the Centre for Advanced 2D Materials at the National University of Singapore for useful discussions, Nuno M. R. Peres for his help reviewing the manuscript and Sim\~{a}o Meneses Jo\~{a}o for providing Figure \ref{fig:honey}.
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=8cm]{global_framework.pdf} \caption{Illustration of the RL-LTV algorithm framework. An RL agent is employed to solve an item-level MDP and provides a policy optimizing the item long-term rewards (LTV). The policy score is combined with the CTR ranking score, with their combination weight adjusted by the critic expectation. The ultimate ranking score is applied in online ranking.} \label{global_framework} \end{figure} Recommender systems (RS) have become increasingly popular and have been utilized in a variety of domains (\emph{e.g.}, products, music and movies) \cite{Anidorif2015Recommender}. RS assist users in their information seeking tasks by suggesting a list of items (\emph{i.e.}, products) that best fits the target user's preference. For a practical system, a common pipeline is that a series of items are first retrieved from an enormous pool of candidates, and then sorted by the ranking strategy to optimize some expected metric such as CTR (\emph{i.e.}, Click-Through Rate) \cite{2011Unbiased}. However, due to the lack of user-item interactions, a common challenge is the cold-start recommendation problem \cite{2019Addressing}. Solutions to the cold-start problem may depend on the platform characteristics. The traditional way to solve the cold-start problem is to leverage auxiliary information in the recommendation system (\emph{e.g.}, content based \cite{2016Latent, 2016Collaborative}, heterogeneous information \cite{Shi2016A,2020Meta} and cross-domain \cite{2018Li, 2020CDLFM}). Although these methods have achieved good performance, they focus on the instant reward, while the long-term reward is ignored. Correspondingly, there has recently been increasing attention on long-term/delayed metrics, and solutions to optimize the long-term user engagement \cite{wu2017returning, 2019UserEngage} have been proposed. However, a long-term viewpoint from the item aspect is still missing. In the case of E-commerce, where the recommended items are typically products, there is a clear need to consider their long-term behaviors, which change throughout their life periods. The life dynamics of products share a similar development pattern, as stated in \textit{product life theory} \cite{Levitt1965PLC, Hui2012PLC}. \cite{he2018speeding} further proposes four distinct stages, including \textit{introduction}, \textit{growth}, \textit{maturity} and \textit{decline}, and uses a mathematical tool to model the product life stages and predict the transition probability between different stages. In this paper, we also consider the product life dynamics, in a continuous, numerical manner. Within the scope of cold-start recommendation, items in the earlier stages (\emph{i.e.}, \textit{introduction} and \textit{growth}) are paid more attention in this work. For recently introduced items, recommendation algorithms that focus on the instant metric may suffer from exposure bias. Typically, such fresh items are not preferred by the instant metric due to the lack of historical behavior, and are therefore subject to low ranking preference. As a result, there might be a severe Matthew Effect under conventional algorithms, in which the mature items keep receiving more impressions while the fresh items can hardly grow. However, some fresh items could become more and more popular with some investment of user impressions, and yield more returns in the future.
From this aspect, a smart cold-start recommender should be able to identify high potential items in advance and assign them higher ranking priority, while low potential products could conversely be penalized. A multi-period maximization task is then needed. Reinforcement Learning (RL) provides a natural, unified framework to maximize the instant and long-term rewards jointly and simultaneously. Nevertheless, considering the complexity of the actual online environment, building an interactive recommender agent between users and items while accounting for the long-term rewards is a challenging task. The trial-and-error behavior of RL might harm the system performance or affect the user satisfaction. The computational complexity could also be prohibitively expensive. Based on these considerations, and given that the time-evolution of products naturally happens on a much slower time scale than online recommendation (days versus milliseconds), in this study we instead define an off-policy learning method on the item level and on a daily basis, which makes the solution practical. In this paper, we propose a novel methodology, named \textit{reinforcement learning with lifetime value} (RL-LTV), to consider the long-term rewards of recommended items inside the cold-start recommendation problem. Such long-term rewards are called item \textit{lifetime values} (LTV) in this paper. An off-policy, actor-critic RL with a recurrent component is employed to learn the item-level dynamics and make proactive, long-term oriented decisions. Information on the aforementioned product life stages is encoded by the recurrent hidden memory states, which are learned by an LSTM~\cite{hochreiter1997long} component shared by the actor and the critic. To transfer information from historical items to cold-start items, we introduce item inherent features, a trending bias term, and memory states as extra inputs to both the actor and the critic. One of the most important actions output by the actor, the item-affinity score of LTV, is then incorporated with the conventional ranking score to form a dual-rank framework. Their combination weight is determined by the action-value suggested by the critic. Figure \ref{global_framework} illustrates the entire framework of our proposed algorithm. The major contributions of this paper are as follows: \begin{itemize} \item We define a concept of \textit{Partially Observable and Controllable Markov decision process} (POC-MDP) to formulate the product life dynamics. Unobservable states depict the intrinsic life stages, while uncontrollable states can affect the product growth speed but are independent of actions. \item We incorporate the item LTVs into the online ranking by RL-LTV. By prioritizing high potential cold-start items during the ranking, the exposure bias of cold-start items could be overcome. To the best of our knowledge, this is the first time that such a technique is applied to solve the cold-start recommendation problem. \item Knowledge of mature items could be generalized and transferred to cold-start items, even first-introduced ones. To achieve this, we build the MDP on the item level, with continuous observation and action spaces, as well as parameter-shared policy and critic networks. \item We design a learning framework called IE-RDPG to solve a large-scale RL problem in an itemwise, episodic way. The algorithm is deployed into production and improvements of $8.67\%$ and $18.03\%$ on IPV and GMV for cold-start items are observed.
\end{itemize} The rest of the paper is organized as follows. The connection with previous works is first discussed in Section \ref{sec:related_work}. Preliminaries are then introduced in Section \ref{sec:model_def}. The POC-MDP formulation and its learning algorithm are stated in Section \ref{sec:model_framework}. Experiment results are summarized in Section \ref{sec:experiment}. Finally, Section \ref{sec:conclusion} concludes this paper. \section{RELATED WORK} \label{sec:related_work} In this section, we briefly review representative works on cold-start recommendation and reinforcement learning. \textbf{Cold-start Recommendation:} Although collaborative filtering and deep learning based models have achieved considerable success in recommendation systems \cite{bokde2015role,2017Collaborative}, it is often difficult for them to deal with new users or items with few user-item interactions, which is called the cold-start recommendation problem. The traditional solution for cold-start recommendation is to introduce auxiliary information into the recommendation system, in a content-based, heterogeneous-information or cross-domain manner. Specifically, the content-based methods rely on data augmentation by merging the user or item side information \cite{2016Latent,2016Collaborative,Zhang_2019, 2019Addressing}. For example, \cite{2016Latent} presents an approach named visual-CLiMF to learn representative latent factors for cold-start videos, where emotional aspects of items are incorporated into the latent factor representations of video contents. \cite{2016Collaborative} proposes a hybrid model in which item features are learned from the descriptions of items via a stacked denoising autoencoder and further combined into a collaborative filtering model to address the item cold-start problem. In addition to these content-based features and user-item interactions, richer heterogeneous data can be utilized in the form of heterogeneous information networks \cite{2019Metagraph,2011PathSim,2020Meta}, which can capture the interactions between items and other objects. For heterogeneous information based methods, one of the main tasks is to explore the heterogeneous semantics in recommendation settings by using high-order graph structures, such as metapaths or metagraphs. Finally, cross-domain methods are based on transfer learning, which applies the characteristics of the source domain to the target domain \cite{2018Li,2020CDLFM}. The premise of this type of method is that the source domain is available and that users or items can be aligned in the two domains. For example, \cite{2018Li} presents an innovative model of cross-domain recommendation based on partial least squares regression (PLSR) analysis, which can be utilized for better prediction of cold-start user ratings. \cite{2020CDLFM} proposes a cross-domain latent feature mapping model, where a neighborhood-based cross-domain latent feature mapping method is applied to learn a feature mapping function for each cold-start user. Although these methods have achieved good performance, most of them only alleviate the cold-start problem from a single-period viewpoint while ignoring the long-term rewards. Recently, there have been studies of the long-term effect from the user side \cite{2019UserEngage}, but not from the item side. In this paper, we try to solve the item cold-start problem by not only using item content information, but also determining the ranking preference of items according to their long-term returns.
\textbf{Reinforcement Learning:} Our approach connects to the wide application of reinforcement learning to recommendation problems. These applications include different strategies: value-based \cite{Chen2018StabilizingRL,Taghipour2018,2018DRN}, policy-based \cite{Hau1997, 2013PEGASUS,Gong2019ExactKRV,Chen2019TopKOC}, and model-based \cite{Bai2019ModelBasedRL} methods. When the environment is identified as a partially observable MDP (POMDP), a recurrent neural network is a natural solution to deal with the hidden states \cite{Bakker2001ReinforcementLW,Zhao2018RecommendationsWN,2018Learning,2019UserEngage}. Reinforcement learning also helps to generate an end-to-end listwise solution \cite{zhao2019deep}, or even to jointly determine the page display arrangement \cite{Zhao_2018}. There are also RL-based studies for cold-start recommendation. \cite{wang2020offline} proposes an offline meta level model-based method. \cite{ding2017coldstart} combines policy-gradient methods and maximum-likelihood approaches and then applies this cold-start reinforcement learning method to training sequence generation models for structured output prediction problems. \cite{2019UserEngage} uses reinforcement learning to optimize the multi-period reward of user engagement. \cite{he2018speeding} proposes an RL-based framework for impression allocation, based on the consideration of item life period stages. In their work, the item stages are explicitly identified and predicted by a mathematical model, while RL is used inside the impression allocation problem. In contrast to \cite{he2018speeding}, in this paper the item stage information is implicitly learned by reinforcement learning in an end-to-end manner. In particular, we use a recurrent neural network to encode the hidden state as a continuous and dense representation of life stages, based on item histories. Our framework jointly learns this recurrent hidden state, the action-value as a prediction of the item long-term reward, as well as the ranking policy. \section{Preliminaries} \label{sec:model_def} In this section, we first introduce the background of product metabolism and the idea of item lifetime value in E-commerce, which basically motivates this work. Based on the understanding of this scenario, we define a special type of Markov Decision Process to model the metabolism of an item. The basic DDPG algorithm is finally presented as the underlying learning approach. \subsection{Item Metabolism and Lifetime Value} \label{sec:item_growth} \begin{figure} \centering \includegraphics[width=8cm]{item_growth.pdf} \caption{Item metabolism in e-commerce. Recommendation and search feed the page views of an item and determine the click-through rates. The pricing strategy further determines how many sales are converted. All of these behaviors result in more people being interested in the item and choosing to search for it in the future. The item then obtains more priority in search and recommendation algorithms because of its richer historical information.} \label{item_growth} \end{figure} A typical pattern of product metabolism on an e-commerce platform is shown in Figure \ref{item_growth}. Suppose a self-balancing electric scooter has just been introduced to the platform: it usually receives little user interest and its statistics remain at a low level. Although the item can be left to grow by itself, several channels could be utilized to change its situation. First, the item's user exposure (or more specifically, its page views (PV)) is significantly affected by both search and recommendation algorithms.
Second, the quality of search and recommendation results directly affects the click-through rate (CTR), \emph{i.e.}, how many item page views (IPV) are clicked out of the PV. Furthermore, as the third channel, the pricing strategy from the pricing system (PS) has a substantial impact on how many sales (SLS) are converted from IPV. During this process, PV, IPV and SLS are accumulated, the item's reputation is built, and the group of interested users continues to expand. As more users are attracted to the item, their behaviors help the item claim more importance in search and recommendation algorithms, and therefore a positive feedback closed-loop mechanism can be created. As time elapses, the fresh item may thus exhibit growing life-dynamics trajectories of key indicators, including PV, IPV and SLS. However, not all items can achieve such a positive closed-loop mechanism. The growth rate of an item depends on its inherent characteristics (what the item is), brand, market, external trends or noise, and the algorithms. Actually, most items finally fall into the group called "long-tail products", with few user views, clicks or purchases. Therefore, it would help if one could identify whether an item could return significant future PVs, IPVs and GMVs\footnote{The gross merchandise value from the item, which equals $SLS$ times the averaged paid price.}, such that the star products and the long-tail products can be classified even at their early life stages. In this paper, we call such long-term rewards the item's Lifetime Value (LTV). By allocating more resources to those high potential products, the platform would be repaid with more LTV in the future, making the entire ecosystem grow and prosper. As shown in Figure \ref{item_growth}, search, recommendation and pricing are important possible tools to allocate the resources and adjust the item dynamics. \subsection{MDP and its Extensions} A \textit{Markov Decision Process} (MDP) is typically employed to model sequential decision making problems. A nominal MDP usually consists of the 4-tuple $(\mathcal{S, A, R, P})$, where $\mathcal{S, A, R, P}$ are the state space, action space, set of rewards, and transition probability functions, respectively. At time step $t$, the state $s_t \in \mathcal{S}$ represents the current system stage, which is affected by an action $a_t \in \mathcal{A}$ from the agent, generating a reward $r_t$ by the reward function $\mathcal{S} \times \mathcal{A} \to \mathcal{R}$, as well as the next state $s_{t+1} \sim \mathcal{P}(s_t, a_t)$. For such a nominal MDP, it is assumed that all elements of $s$ can both be observed by the agent and be changed by the agent's actions. However, it is rare that this assumption holds in a real environment. For example, the Pong game in Atari has both the current image and the ball velocity as its state, while only the former is provided to the agent. A \textit{Partially Observable Markov Decision Process} (PO-MDP) captures the partial observability part of system complexity. It is instead described as a 5-tuple $(\mathcal{S, A, P, R, O})$, in which $\mathcal{O}$ denotes the set of observable states, \emph{i.e.}, $\mathcal{O}$ is a subset of $\mathcal{S}$. Another, relatively less studied form of system complexity is the MDP \textit{with uncontrollable states} \cite{Arruda2009StandardDP,Liang_2019}. In such situations, some elements of $s_t$ can never be affected by agent actions, but they help determine $s_{t+1}$ and therefore affect future rewards too.
Here we pay special attention to this form of uncontrollability. In our problem, it is believed that both unobservable and uncontrollable states exist; both are discussed in more detail in Section \ref{sec:model_framework}. We deal with the above concerns and define a \textit{Partially Observable and Controllable Markov Decision Process} (POC-MDP), which literally means that there are some unobservable states and some uncontrollable states in the MDP at the same time. Although the term "state" can denote all states regardless of whether they are observable or controllable, for clarity we use the notation $s$ to represent only the nominal (both observable and controllable) states. The unobservable (but controllable) states are denoted by $h \in \mathcal{H}$, and the uncontrollable (but observable) states are denoted by $x \in \mathcal{X}$. As a result, our POC-MDP now has the 6-tuple $(\mathcal{S, A, P, R, O, H})$. Note that the observation is now the concatenation of nominal states and uncontrollable states\footnote{In this paper, we use the square bracket to represent the concatenation of multiple variables, for simplicity.}, \emph{i.e.}, $o := [s, x]$. \subsection{The DDPG method} \label{sec:ddpg} In this section we give a brief introduction to a model-free, off-policy, actor-critic RL algorithm, the Deep Deterministic Policy Gradient (DDPG) method \cite{lillicrap2019continuous}, which is closely related to our proposed approach. In DDPG, the actor network $\pi$, approximated with net parameters $\theta$, generates the policy, and the critic network $Q$, approximated with net parameters $w$, generates the action-value function. The gradient of the deterministic policy is \begin{equation} \triangledown_{\theta} J = \mathbb{E}_{s \backsim d^{\pi}}[\triangledown_{\theta} \pi_{\theta}(s) \triangledown_{a}Q_w(s,a)|_{a=\pi(s)}] \notag \end{equation} where $d^{\pi}(s)$ is a discounted distribution of the state $s$ under the policy $\pi$. Then $\theta$ can be updated as \begin{equation} \theta \leftarrow \theta + \eta \mathbb{E}_{s \backsim d^{\pi}}[\triangledown_{\theta} \pi_{\theta}(s) \triangledown_{a}Q_w(s,a)|_{a=\pi(s)}] \notag \end{equation} with $\eta$ as the learning rate. For the critic, the action-value function can be obtained iteratively as \begin{equation} Q_w(s_t, a_t) = \mathbb{E} [r_t + \gamma \mathbb{E}_{a \backsim \pi_{\theta}} [Q_w(s_{t+1}, a_{t+1})]] \notag \end{equation} and $w$ is updated by minimizing the following objective function \begin{equation} \small \mathop{\text{min}}_{w} L = \mathbb{E}_{s \backsim d^{\pi}}[(R(s_t,a_t) + \gamma Q_{w^{'}}(s_{t+1},\pi_{\theta^{'}}(s_{t+1})) - Q_{w}(s_{t},\pi_{\theta}(s_{t})))^2] \notag \end{equation} where $\pi_{\theta^{'}}$ and $Q_{w^{'}}$ are the target networks of the actor and critic. The target network parameters are softly updated by \begin{align} \theta^{'} &\leftarrow \tau \theta^{'} + (1-\tau)\theta \notag \\ w^{'} &\leftarrow \tau w^{'} + (1-\tau)w \notag \end{align} \section{Method} \label{sec:model_framework} This section illustrates the key concepts of our approach, including how we implement \textit{RL-LTV} and how its action is applied in the online system. First, the definitions of terms in the POC-MDP are listed; then the architectures of the actor and critic networks are introduced, after which the learning algorithm follows.
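Before detailing those elements, the DDPG updates of Section \ref{sec:ddpg}, which underlie our learning algorithm, can be summarized in the following minimal Python sketch (network sizes, learning rates and the feed-forward architecture are illustrative assumptions; the production RL-LTV networks include the recurrent LSTM component described earlier and are more elaborate).
\begin{verbatim}
# A minimal, generic sketch of the DDPG updates (illustrative
# hyperparameters; not the production RL-LTV networks).
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 16, 2, 0.5, 0.99

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Sigmoid())  # a = pi_theta(o)
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))                     # Q_w(o, a)
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # targets
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(o, a, r, o2):
    """One gradient step on a batch of transitions (o, a, r, o')."""
    with torch.no_grad():                      # TD target from target nets
        y = r + gamma * critic_t(torch.cat([o2, actor_t(o2)], -1))
    critic_loss = ((critic(torch.cat([o, a], -1)) - y) ** 2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(torch.cat([o, actor(o)], -1)).mean()  # maximize Q
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):  # soft updates
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(tau).add_((1 - tau) * p.data)
\end{verbatim}
The Sigmoid output layer reflects that, in our setting, both actions ($y_{rl}$ and the discount degree $p$) are bounded scores; this is a design choice of the sketch.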
The actor outputs a preference score, which is linearly combined with the ranking score from a conventional CTR model in a dual-rank framework. The new ranking score is then applied to sort items within the response to each request. Table \ref{tab:note} summarizes important symbols that are frequently used in this and related sections.

\begin{table} \caption{Notations.} \label{tab:note} \scalebox{0.75}{ \begin{tabular}{cl} \toprule Notation & Description\\ \midrule $x_i$ & the item inherent, time-invariant features \\ $x_t$ & the time-variant, trending-bias factors \\ $\mathcal{X}$, $x$ & uncontrollable state space $\mathcal{X}$, $x := [x_i, x_t] \in \mathcal{X}$ \\ $\mathcal{S}$, $s$ & state space $\mathcal{S}$, $s \in \mathcal{S}$ \\ $\mathcal{O}$, $o$ & observation space $\mathcal{O}$, $o := [s, x] \in \mathcal{O}$ \\ $\mathcal{H}$, $h$ & unobservable, hidden state space $\mathcal{H}$, $h \in \mathcal{H}$ \\ $\mathcal{A}$, $a$ & action space $\mathcal{A}$, $a \in \mathcal{A}$ \\ $\mathcal{R}$, $r$ & reward space $\mathcal{R}$, $r \in \mathcal{R}$ \\ $\gamma$ & discount factor to balance immediate and future rewards \\ $J$ & discounted cumulative return\\ $\mathcal{T}$ & the transition $(o_t, a_t, r_t, o_{t+1})$\\ $\pi_{\theta}(a|o)$ & policy function with $\theta$ as actor parameters\\ $Q_{w}(o, a)$ & state-action value function with $w$ as critic parameters\\ $PV$ & page views of all channels (search plus recommendation)\\ $PV_{\text{rec}}$ & page views from solely recommendation\\ $IPV$ & item page views jumped from all channels \\ $SLS$ & number of sales \\ $p$ & the discount degree of the (average) paid price \\ $GMV$ & the revenue of the item, basically price times SLS \\ $y_{ctr}$& the pointwise user-item affinity score from the CTR model \\ $y_{rl}$& the item-affinity score from RL-LTV \\ $y$& the finally-applied weighted pointwise recommendation score \\ $\alpha$ & linear weight between $y_{ctr}$ and $y_{rl}$\\ \bottomrule \end{tabular} } \vspace{-2mm} \end{table}

\subsection{Definitions} \label{subsec:definitons} In this paper, we consider a two-level agent, comprising the RS, which affects the click rate, and the PS, which affects the conversion rate. The environment is the e-commerce ecosystem. Figure \ref{mdp} provides a snapshot of our POC-MDP, with its terms defined below:

\textbf{State}. The state space is defined at the item level, representing the observable part of the current item life stage, including the item's time on the market, PV (both current and accumulated), IPV (both current and accumulated), SLS (both current and accumulated), and properties (number, average activeness frequency, average purchasing power, \emph{etc.}) of the user crowd currently interested in the item.

\textbf{Action}. The action is denoted by $a_t = [y_{rl, t}, p_t] \in \mathcal{A}$. For the RS part, $y_{rl} \in (0,1)$ indicates the RS's preference ranking score for a specific item. The PS, at the downstream of the RS, takes the action $p$, modeled as the price discount degree, \emph{i.e.}, the average paid price divided by the original price. Both $y_{rl}$ and $p$ can be easily retrieved from the real data.

\textbf{Uncontrollable State}. We introduce such states based on the observation that an item's next state is not only a result of its current state, but is also determined by its inherent features and extrinsic trending-bias factors. The inherent features (\emph{e.g.}, title, cover image, category, brand, shop name) can always be assumed time-invariant.
Therefore, we denote them as $x_i$ (where the subscript $i$ stands for inherent and time-invariant). Notably, within $x_i$ a pre-trained, 128-dimensional item title \& image multi-modal embedding is employed as the core information that bridges different items \cite{InterBERT}. The other parts of $x_i$, including category, brand and shop, are input as ID-type features. The extrinsic bias factors can have different trend sources, including the entire market, the platform campaign, the item seller, the brand or the category. We denote them as $x_t$ because they are time-variant. For each trending source, we include its moving-average growth percentage (MAGP) of PV, IPV and SLS.

\begin{figure}[t] \centering \includegraphics[width=8cm]{mdp.pdf} \caption{Schematic representation of the POC-MDP. The agent is two-level: $y_{rl}$ is the action of the RS and $p$ is the action of the PS. The item's inherent, time-invariant feature is denoted by $x_i$, and the trending-bias, time-variant factor is denoted by $x_t$; both can affect the dynamics.} \label{mdp} \end{figure}

\textbf{Reward}. As mentioned before, the MDP is defined at the item level in order to share information between items. However, online recommendation involves listwise ranking, which means different items compete with each other for impression allocation. Considering this mismatch, we cannot simply define the reward as an absolute value (the LTV of PV, IPV, SLS, or their linear combinations), because in that case the RL agent would simply make $y_{rl}$ as large as possible for every item. Instead, we choose a reward similar in form to ROI (return on investment), defined as the return of next-step IPV on the investment of currently recommended PV:
\begin{equation} \small r_t = \frac{\text{IPV}_{t+1}}{\text{PV}_{rec,t}} \end{equation} \vspace{-1mm}

\textbf{Long-term Gain and Discount Factor.} The multi-period, cumulative gain is defined by
\begin{equation} \label{eq:v} J_t = \sum_{i=0}^{T} \gamma^{i} r_{t+i} \end{equation}
which can be viewed as our ROI version of LTV. $T$ is the window length and $\gamma$ is the discount factor, both of which control how far the model looks forward when optimizing $J$. In our study, $\gamma$ is set to a relatively small value, $0.5$, indicating that our RL balances two considerations: (1) maximizing the long-term total reward, and (2) speeding up the growth of cold-start items as early as possible.

\subsection{Actor} The actor is designed to generate the action $a_t = [y_{rl}, p]$ given the current observation $o_t = [s_t, x_t, x_i]$. This policy function is approximated by the actor network $\pi_{\theta}(a_t|o_t)$, with $\theta$ denoting the network parameters. As stated in Subsection \ref{subsec:definitons}, the item inherent feature $x_i$ consists of two parts: the first is the pretrained embedding vector; the second is composed of different IDs, which are fed into an encoder network $f_{e}(x_i, w_e)$. The embedding generated by this encoder is concatenated with the pretrained vector to form $x_{e,i}$, the embedded version of $x_i$. We then have the embedded observation $o_{e,t} = [s_t, x_t, x_{e,i}]$.
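A minimal PyTorch-style sketch of this observation embedding is given below; the module layout is our own illustrative assumption (we only reuse the facts that the multi-modal embedding is 128-dimensional and that the encoder output is low-dimensional), not the production implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class ItemEncoder(nn.Module):
    """Encodes ID-type inherent features and concatenates
    them with the pretrained 128-dim multi-modal item
    title & image embedding."""
    def __init__(self, n_cate, n_brand, n_shop, id_dim=8):
        super().__init__()
        self.cate = nn.Embedding(n_cate, id_dim)
        self.brand = nn.Embedding(n_brand, id_dim)
        self.shop = nn.Embedding(n_shop, id_dim)
        # plays the role of f_e in the text
        self.proj = nn.Linear(3 * id_dim, id_dim)

    def forward(self, cate_id, brand_id, shop_id, mm_emb):
        ids = torch.cat([self.cate(cate_id),
                         self.brand(brand_id),
                         self.shop(shop_id)], dim=-1)
        # x_{e,i}: encoder output concatenated with the
        # pretrained multi-modal vector
        return torch.cat([self.proj(ids), mm_emb], dim=-1)

# The embedded observation is then a simple concatenation:
# o_e_t = torch.cat([s_t, x_t, x_e_i], dim=-1)
\end{verbatim}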
Meanwhile, in order to allow the agent to capture the intrinsic item life stage, we introduce an LSTM cell to encode the sequential information of historical observations into continuous hidden states:
\begin{equation} \begin{split} & Z_{f,t} = \sigma (W_f \cdot [h_{t-1}, o_{e,t}]) \\ & Z_{u,t} = \sigma (W_u \cdot [h_{t-1}, o_{e,t}]) \\ & Z_{o,t} = \sigma (W_o \cdot [h_{t-1}, o_{e,t}]) \\ & \tilde{c}_t = \tanh (W_c \cdot [h_{t-1}, o_{e,t}]) \\ & c_t = Z_{f,t} \cdot c_{t-1} + Z_{u,t} \cdot \tilde{c}_t \\ & h_t = Z_{o,t} \cdot \tanh(c_t), \end{split} \end{equation}
where $Z_{f,t}, Z_{u,t}, Z_{o,t}$ are the forget, update and output gates, $W_f$, $W_u$, $W_o$, $W_c$ are the LSTM parameters, and $c_t$ and $h_t$ are the cell state and hidden state, respectively.

In general, the actor's inputs are $[o_{e,t},h_t]$. In order to enhance the actor's ability to memorize $o_{e,t}$ and to generalize from $h_t$, a wide-and-deep structure \cite{Cheng2016WideD} is applied:
\begin{equation} \begin{split} o_{wide} &= f_{wide}(W_w,o_{e,t}),\\ o_{deep} &= f_{deep}(W_d,[o_{e,t},h_t]) \end{split} \end{equation}
in which $f_{wide}$ and $f_{deep}$ are one and three fully connected layers, respectively. To obtain the first action (\emph{i.e.}, $y_{rl} \in (0,1)$), we use the following functions:
\begin{equation} \begin{split} & S = \text{softmax}(W_s^T \cdot [o_{deep}, o_{wide}]);\\ & y_{rl} = \sigma(x_{e,i} \cdot S) \end{split} \end{equation}
where $W_s$ is the weight matrix of the linear layer and $\sigma$ is the sigmoid function. Note that the second action, the pricing discount $p$, is output by a structure similar to that of $y_{rl}$; this part is omitted in the figure for simplicity. The left part of Figure \ref{fig:actor_critic} shows the actor structure introduced above.

\subsection{Critic} The right part of Figure \ref{fig:actor_critic} illustrates our critic. It shares the same ID encoder and LSTM components with the actor. A dueling structure \cite{Wang2016DuelingNA} is employed to model the action-value function, decomposing it into the state value $V(o)$ and the action advantage value $A(o, a)$. $V(o)$ is generated by a dense network given $o_{e,t}$ and $h_t$. For $A(o, a)$, we further decompose it as $A(o_t, a_t) = A(s_t, a_t) + Bias(s_t, x_t, a_t)$. $A(s_t, a_t)$ has the same meaning as in the vanilla dueling structure and is calculated by another dense network; the second term, $Bias(s_t, x_t, a_t)$, representing the bias caused by the trending-bias factor $x_t$, is calculated by a single linear layer. The final $Q$ value is obtained by simply adding all these terms:
\begin{align} Q(o_t, a_t, h_t) &= V(o_{e,t},h_{t}) + A(o_t, a_t) \notag \\ &= V(o_{e,t},h_{t}) + A(s_t, a_t) + Bias(s_t, x_t, a_t) \end{align}

\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{actor_critic.pdf} \caption{Our actor-critic network structure. The actor (left) and critic (right) share the same item encoder and LSTM component.} \label{fig:actor_critic} \end{figure}

\subsection{Learning} RL aims to solve the MDP and provide a policy $\pi_{\theta}(a|o)$ that maximizes the expected discounted return in Equation (\ref{eq:v}). Because our RL framework and the online recommendation operate at different time frequencies, an off-policy method such as the DDPG introduced in Section \ref{sec:ddpg} is more suitable for us.
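As a concrete reference, a minimal PyTorch-style sketch of one vanilla DDPG training step (cf.\ Section~\ref{sec:ddpg}) is shown below; the module and variable names are illustrative assumptions, and the replay-buffer plumbing is omitted:
\begin{verbatim}
import torch
import torch.nn.functional as F

def ddpg_step(actor, critic, actor_tgt, critic_tgt,
              opt_actor, opt_critic, batch,
              gamma=0.5, tau=0.001):
    o, a, r, o_next = batch  # tensors from the buffer

    # Critic update: regress Q_w(o, a) onto the TD target
    # built from the target networks.
    with torch.no_grad():
        y = r + gamma * critic_tgt(o_next, actor_tgt(o_next))
    critic_loss = F.mse_loss(critic(o, a), y)
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # Actor update: ascend the deterministic policy
    # gradient, i.e. descend -Q_w(o, pi_theta(o)).
    actor_loss = -critic(o, actor(o)).mean()
    opt_actor.zero_grad()
    actor_loss.backward()
    opt_actor.step()

    # Soft (Polyak) update of the target networks:
    # target <- tau * main + (1 - tau) * target.
    for tgt, src in ((actor_tgt, actor),
                     (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(),
                          src.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)
\end{verbatim}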
As in the original version, we also apply the double-Q network setting and a replay memory buffer. However, given the recurrent component, and also to reduce the computational burden of industrial-scale data, we adopt a different way of data sampling and network updating. In the vanilla DDPG method, one samples a batch of transitions (the tuples $(o_t, a_t, r_t, o_{t+1})$) and the gradient update is averaged over the transitions in the batch. In our approach, we instead randomly sample a batch of items (or, rather, their episodes) from the buffer; updates are executed at the beginning of each episode and then proceed forward through time until the episode ends. In this manner, the LSTM hidden state is always carried forward from its previous value, which speeds up convergence. One can refer to \cite{SDMIA15-Hausknecht} for a similar update method, there called ``Bootstrapped Sequential Updates''. Note that in our system, an episode is an item's transition sequence collected and sorted in the daily queue. We therefore name our training algorithm \textit{Itemwise Episodic Recurrent Deterministic Policy Gradient} (IE-RDPG), with the pseudo-code summarized in Algorithm \ref{alg:ieddpg}.

\begin{algorithm}[t!] \caption{IE-RDPG} \label{alg:ieddpg} \begin{algorithmic}[1] \STATE {\textbf{Initialize} the parameters $w$ and $\theta$ of the critic $Q_w(o,a)$ and the actor $\pi_{\theta}(a|o)$} \STATE {\textbf{Initialize} the target networks with $w^{\prime} \leftarrow w$ and $\theta^{\prime} \leftarrow \theta$} \STATE {\textbf{Initialize} the replay memory buffer $R = \{ \}$} \STATE \textbf{REPEAT:} \\ \qquad \textbf{Transition Generation Stage:} \STATE \qquad \textbf{for} item $m = 1, M$ \textbf{do} \STATE \qquad \qquad {Receive the initial observation $o_0$} \STATE \qquad \qquad \textbf{for} $t = 1, T$ \textbf{do} \STATE \qquad \qquad \qquad Perform $a_t$ using the current policy $\pi_{\theta}(a_t \vert o_t)$ \STATE \qquad \qquad \qquad Collect the reward $r_t$ and the next observation $o_{t+1}$ \STATE \qquad \qquad \qquad {Record the transition $\mathcal{T}_t = (o_t, a_t, r_t, o_{t+1})$} \STATE \qquad \qquad \textbf{end for} \STATE \qquad \qquad Store the episode $\{\mathcal{T}_0, \mathcal{T}_1, \cdots, \mathcal{T}_T\}$ in $R$ \STATE \qquad \textbf{end for} \qquad \textbf{Parameter Updating Stage:} \STATE \qquad \textbf{for} epoch $= 1, K$ \textbf{do} \STATE \qquad \qquad {Sample a mini-batch of $N$ episodes from $R$} \STATE \qquad \qquad \textbf{for} $t = 1, T$ \textbf{do} \STATE \qquad \qquad \qquad {For each episode $i$, set $y_{i,t} = r_{i,t} + \gamma Q_{w^{\prime}}(o_{i,t+1}, \pi_{\theta^{\prime}}(o_{i,t+1}))$} \STATE \qquad \qquad \qquad {Update the critic by minimizing the loss: \\ \qquad \qquad \qquad $L = \frac{1}{N} \sum_i (y_{i,t} - Q_w(o_{i,t}, a_{i,t}))^2$} \STATE \qquad \qquad \qquad Update the actor using the sampled policy gradient: \\ \qquad \qquad \qquad $\nabla_{\theta} J = \frac{1}{N} \sum_i \nabla_a Q_w(o_{i,t}, a)\vert_{a = \pi_{\theta}(o_{i,t})} \nabla_{\theta} \pi_{\theta}(o_{i,t})$ \STATE \qquad \qquad \qquad {Update the target networks:} \begin{align} &w^{\prime} \leftarrow \tau w + (1 - \tau) w^{\prime} \notag \\ &\theta^{\prime} \leftarrow \tau \theta + (1 - \tau) \theta^{\prime} \notag \end{align} \vspace{-5mm} \STATE \qquad \qquad \textbf{end for} \STATE \qquad \textbf{end for} \end{algorithmic} \end{algorithm}

There are two phases within a training session of IE-RDPG. First, we let the agent interact with the platform under the current policy and collect data for $T$ days.
For each item, transitions are stored in chronological order to form an episode. After that, the parameter-updating stage starts by randomly sampling a batch of $N$ episodes at a time. The parameter update is first conducted in parallel at the first timestep, and then proceeds forward through time until all episodes end. The parameter-updating stage can run for multiple epochs. This two-phase session repeats along the real-world timeline. IE-RDPG provides an asynchronous, distributed architecture that is able to handle the enormous number of items on one of the largest e-commerce platforms.

\subsection{Dual-rank module} \label{subsec: dual_rank} Pointwise Learning to Rank (LTR) is widely used in industrial recommendation ranking. Upon each user request, the item features are aligned with the user characteristics, and a user-affinity score is calculated by a supervised model such as a click-through rate (CTR) model. This CTR score, $y_{ctr}$, is employed as the ranking metric to yield a ranked item list. In our study, RL-LTV provides another item-affinity score, $y_{rl}$, indicating its preference based on the item's LTV. By forming a dual-rank module and considering these two scores simultaneously, the online ranking can account not only for the immediate click-through reward but also for the long-term rewards. As a result, some fresh items without enough historical interactions but with high LTV potential would have a small $y_{ctr}$ but a large $y_{rl}$, and are expected to receive higher ranking priority. We obtain the new ranking score by the following equation:
\begin{equation} \label{eq:dual_rank} y = (1-\alpha) \cdot y_{ctr}+ \alpha \cdot y_{rl} \end{equation}
where $\alpha \in (0, 1)$ is the mixing parameter. Since the $Q$ value output by the critic predicts the future LTV, it can be used to determine the magnitude of $\alpha$, and therefore how the CTR and LTV rewards trade off. In the experiments, $\alpha$ is calculated as
\begin{equation} \label{eq:alpha} \alpha = e^{\frac{Q-Q_{\text{min}}}{Q_{\text{max}} - Q_{\text{min}}} \ln(1+\alpha_{\text{max}}-\alpha_{\text{min}})} - 1 + \alpha_{\text{min}} \end{equation}
where $\alpha_{\text{min}}$ and $\alpha_{\text{max}}$ are hyper-parameters specifying the lower and upper bounds of $\alpha$; an item with a higher $Q$ always has a higher $\alpha$.

\section{EXPERIMENTS} \label{sec:experiment} To evaluate the proposed model, we apply our approach to Taobao, a world-leading e-commerce platform. A series of experiments is designed to address the following questions: (1) whether the proposed approach performs better than the baseline, or than other competitors that are not RL-based; (2) whether the inclusion of $x$ or of the LSTM component contributes to the performance; and (3) whether representative examples can be found that display, from the business point of view, how the framework improves LTV. It is important to emphasize that most public datasets in the recommendation domain do not exhibit the item dynamics depicted in Section \ref{sec:item_growth}; therefore, we cannot directly compare our performance with most state-of-the-art baselines. On the other hand, the live environment contains billions of users and items, so it is also a non-trivial task to launch an arbitrary baseline on the real system.
As a remedy, the performance of RL-LTV is compared with that of the live ranking algorithm (refer to \cite{10.1145/3219819.3219823} for part of the details), which is a reasonable baseline for such a complicated industrial system. We also conduct an offline analysis of the LTV recognition, including a comparison with several baselines, in Section \ref{subsec:offline_test}. Ablation and sensitivity analyses are also provided to further justify our framework design.

\subsection{Experimental Settings} \textbf{Training process}: The online ranking system operates in real time, while our approach has a daily update frequency. In our practice, user logs are continuously collected; the data is then aggregated by item and by day, and the MDP transitions are recorded. According to Algorithm \ref{alg:ieddpg}, for each item we wait for $T$ new transitions to appear; a new episode is then formed and put into the buffer. Similarly, each buffer sampling draws episodes rather than transitions, and gradient updates are performed accordingly.

\textbf{Parameters setting}: In the training of our RL-LTV, we use Adam with a learning rate of 0.0001 as the optimizer for both the actor and the critic. The hyper-parameters $\gamma$, $\tau$, $\alpha_{\text{min}}$ and $\alpha_{\text{max}}$ are 0.5, 0.001, 0 and 0.2, respectively. The dimension of the hidden states in the LSTM component is 4. The item inherent feature output by the item encoder has a dimension of 8. For practical applicability, the time step is chosen as one day. We use a relatively small buffer (200) and batch size (50), because our replay memory buffer operates in terms of episodes, not transitions.

\subsection{LTV Recognition} \label{subsec:offline_test} Because recommendation directly affects customer experience and platform revenue, the scale and depth of the online experiment are limited at the current stage. Nevertheless, some offline analyses can be conducted from the perspective of LTV recognition.

\textbf{Metrics:} Two aspects of performance can be evaluated offline: (1) since the critic's output $Q$ is a prediction of the actual cumulative gain, the regression accuracy between $Q$ and the actual return, $J_{\text{actual}}$, can be calculated and evaluated; (2) the actor generates $y_{rl}$, which indicates its ranking preference \textit{w.r.t.} LTV; a ranking metric can be evaluated by sorting items by $y_{rl}$ in descending order and comparing the resulting sequence with the ground truth (items sorted by $J_{\text{actual}}$). To calculate $J_{\text{actual}}$, data of 5 consecutive days are collected; data over longer horizons can be ignored since their weights in $J$ decay quickly with $\gamma = 0.5$. For (1), the prediction error with respect to $J_{\text{actual}}$ is studied in terms of the root mean square error (RMSE) and the mean absolute error (MAE). We employ the normalized discounted cumulative gain (NDCG) as the measure of the sequential similarity stated in (2). To mimic the live environment and reduce the computation cost, the online retrieval method is also employed for this offline evaluation, which generates a series of items upon each user query. The top-$K$ items from the retrieved list are used to calculate NDCG@K with $K = 10, 20, 50$, and $J_{\text{actual}}$ is used as the relevance score.
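For clarity, a minimal Python sketch of the NDCG@K computation used in this offline evaluation is given below (a standard log-discounted DCG formulation is assumed, with $J_{\text{actual}}$ as the relevance score):
\begin{verbatim}
import math

def dcg_at_k(scores, k):
    # log2-discounted cumulative gain over the top-k items
    return sum(s / math.log2(i + 2)
               for i, s in enumerate(scores[:k]))

def ndcg_at_k(ranked_j, k):
    """NDCG@K with J_actual as the relevance score.

    ranked_j: J_actual values of items, listed in the
              order produced by the ranking under test
              (e.g. sorted by y_rl, descending).
    """
    ideal = sorted(ranked_j, reverse=True)  # ground truth
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_j, k) / idcg if idcg > 0 else 0.0
\end{verbatim}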
\textbf{Baselines:} The following experimental versions are employed for comparison with our RL-LTV: \begin{itemize} \item \textit{Vanilla-CTR}: The online, pointwise, single-period, vanilla CTR model provides a natural baseline running every day on Taobao, with state-of-the-art CTR performance so far but without any LTV consideration. \item \textit{Empirical}: To mimic the human decision-making process, scores are manually assigned based on business experience; \emph{e.g.}, percentiles of $J_{\text{actual}}$ are collected by category, seller and brand respectively, their weighted averages are calculated, and the result is scaled to $(0,1)$. \item \textit{LSTM}: A supervised LSTM model is used to regress and predict $J_{\text{actual}}$; the score is then assigned based on the prediction. It shares the same structure as the LSTM component in RL-LTV. \end{itemize} Note that we only compare the regression accuracy of LSTM with that of RL-LTV, since Vanilla-CTR and Empirical do not produce an explicit LTV prediction. On the other hand, we can evaluate the LTV ranking NDCG for all these baselines. For Vanilla-CTR, the CTR score is used as the preference for LTV ranking, which is exactly the case on the live platform.

\begin{table} \caption{Offline metrics on item LTV. Note that Vanilla-CTR and Empirical do not have RMSE or MAE results because they do not produce $Q$ estimates.} \label{tab:offline_accuracy} \scalebox{0.85}{ \begin{tabular}{cccccc} \toprule Model & RMSE & MAE & NDCG@10 & NDCG@20 & NDCG@50\\ \midrule Vanilla-CTR & $--$ & $--$ & $0.746$ & $0.662$ & $0.582$ \\ \hline Empirical & $--$ & $--$ & $0.765$ & $0.678$ & $0.581$ \\ \hline LSTM & $7.900$ & $4.761$ & $0.772$ & $0.684$ & $0.600$ \\ \hline RL-LTV & $7.873$ & $4.067$ & $0.782$ & $0.705$ & $0.625$ \\ \bottomrule \end{tabular} } \vspace{-2mm} \end{table}

\textbf{Result:} We summarize these results in Table \ref{tab:offline_accuracy}. RL-LTV is expected to have superior LTV regression accuracy as a result of its interactive learning behavior and long time-horizon optimization, which is verified by the results. Compared with LSTM, RL-LTV reduces the prediction error of LTV in terms of both MAE and RMSE. For NDCG@K, one can find that Empirical, LSTM and RL-LTV all have better NDCGs than Vanilla-CTR, which only looks at the instant CTR metric. Still, RL-LTV is the best among these experiments.

\subsection{Live Experiments} \label{subsec:online_test} The live experiment is conducted on Taobao. The A/B test starts on January 26th, 2021 and lasts for a week. The policy of RL-LTV is first warmed up with the version trained in Section \ref{subsec:offline_test}, and then updated online on a daily basis. Because of real-world limitations, only $y_{rl}$ in the action takes effect; $p$ is determined and controlled by another group, and is therefore a read-only parameter for us.

\textbf{Cold-start performance:} We first investigate the performance on cold-start items. To define cold-start items, fresh products with time on the market of less than a month when the experiment starts are tagged, and then randomly divided into different groups. Each group of cold-start items is limited to being recommended by only one experiment, such that the item-based metrics (IPV and GMV) of different experiments can be reasonably calculated and fairly compared. LTVs in the form of PV, IPV and GMV are collected after the online test ends. For RL-LTV, the relative LTV differences to Vanilla-CTR are calculated and shown in Table \ref{tab:perf_cold}.
One can see that RL-LTV improves the LTVs of IPV and GMV significantly, with almost the same PV investment.

\begin{table} \caption{Percentage difference of LTV relative to Vanilla-CTR for cold-start items.} \label{tab:perf_cold} \vspace{-1mm} \scalebox{0.85}{ \begin{tabular}{cccc} \toprule Model &PV &IPV &GMV \\ \midrule \hline RL-LTV (w/o R) &$1.01\%$ &$6.25\%$ &$12.31\%$ \\ \hline RL-LTV &$-1.19\%$ &$8.67\%$ & $18.03\%$ \\ \bottomrule \end{tabular} } \vspace{-2mm} \end{table}

\begin{table}[t] \center \caption{Comparison of offline performance for the component ablation analysis} \label{tab:ablation} \vspace{-1mm} \scalebox{0.85}{ \renewcommand{\arraystretch}{1} \begin{tabular}{c|c|c|c|c|c} \toprule \multicolumn{1}{c|}{Model}& \multicolumn{1}{c|}{RMSE} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c|}{NDCG@10} &\multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c}{NDCG@50}\\ \hline RL-LTV (w/o $x_i$)& $7.947$& $4.135$ &$0.771$ & $0.686$ & $0.613$ \\ \hline RL-LTV (w/o $x_t$) & $8.146$ &$4.351$ &$0.763$ & $0.680$ & $0.611$ \\ \hline RL-LTV (w/o R) &$7.931$ &$4.133$ &$0.754$&$0.672$ & $0.592$ \\ \hline RL-LTV &$7.873$ &$4.067$ &$0.782$ & $0.705$ &$0.625$\\ \bottomrule \end{tabular} \label{tab:dod} } \vspace{-3.5mm} \end{table}

\textbf{Global performance:} The ultimate purpose of cold-start recommendation is to improve the performance of the entire RS. Therefore, we need to inspect the metrics of all items, not only those of cold-start items. Although the short-term performance of the RS might be harmed by the investment in cold-start items, its long-term performance is expected to be recouped through the growth of LTVs. By the end of the experiment, such a recoupment effect is observed. In more detail, RL-LTV shows $-0.55\%$ PV and $-0.60\%$ IPV compared with Vanilla-CTR, indicating no severe degradation of the global performance.

\textbf{Typical case:} To gain deeper insight into how RL recognizes a specific high-potential item and improves its LTV, we also look into specific cases. Figure \ref{fig:item_age_metric} shows such a case study with an item (a hook handle) first introduced on the platform on January 26th. As it cold-starts, few people come to view, click or buy it. However, RL-LTV recognizes its high LTV potential and therefore makes a substantial PV investment at the early stage (the $PV_\text{rec}$ curve on the upper right). This investment is successful since it triggers more users' interest and behaviors (the $PV_\text{other}$ curve, \emph{i.e.}, PV from the other channels, on the lower right), with PV, IPV and GMV all growing rapidly (the accumulated curves on the lower left). After this item turns into a star product, RL-LTV turns to investing in other cold-start items. As a result, $PV_\text{rec}$ falls after the third day, while the other metrics are already in a healthy closed feedback loop.

\begin{figure}[t!] \centering \includegraphics[width=7.5cm]{item_age_metric.pdf} \caption{Time trajectory of a typical cold-start recommended item (a hook handle, a fresh item on the platform). Upper right: the $PV_{\text{rec}}$ invested by RL-LTV. Lower right: $PV_{\text{other}}$ (mainly from the search channel) inspired after the LTV investment.
Lower left: the accumulated PV, IPV and GMV, indicating that this item rapidly becomes a star item.} \label{fig:item_age_metric} \end{figure}

\subsection{Ablation Studies} To illustrate the effectiveness of several important components of RL-LTV, we perform an ablation analysis on the following variants: \begin{itemize} \item RL-LTV (w/o $x_i$): our approach without the inclusion of the item inherent feature $x_i$. \item RL-LTV (w/o $x_t$): our approach without the inclusion of the trending-bias factor $x_t$. \item RL-LTV (w/o R): our approach without the recurrent LSTM component. \end{itemize} Similar to Section \ref{subsec:offline_test}, we perform the offline analysis for these variants, and their results, alongside those of RL-LTV, are given in Table \ref{tab:ablation}. Not surprisingly, RL-LTV still has the best performance, suggesting that $x_i$, $x_t$ and the recurrent cell are all crucial. Due to limited online resources, only RL-LTV (w/o R) is evaluated in a live experiment, with the results also shown in Table \ref{tab:perf_cold}. One can see that RL-LTV (w/o R) also has positive online impacts compared with Vanilla-CTR, though not as good as those of RL-LTV. All these results underline the validity of our POC-MDP formulation. Compared with RL-LTV, the degraded performance of Vanilla-CTR (in Subsections \ref{subsec:offline_test} and \ref{subsec:online_test}) verifies the overall MDP and RL framework; the degraded performance of RL-LTV (w/o $x_i$) and RL-LTV (w/o $x_t$) indicates the necessity of the uncontrollable states (\emph{i.e.}, $x_i$ and $x_t$); and the degraded performance of RL-LTV (w/o R) suggests that the unobservable state also helps the modeling.

\subsection{Hyperparameter Sensitivity Analysis} Here we also perform a sensitivity analysis for an important parameter, the incorporation weight $\alpha$ in Equation (\ref{eq:dual_rank}). Different choices of $\alpha$ generate different final scores $y$, which can likewise be evaluated with the NDCG introduced in Section \ref{subsec:offline_test}. Furthermore, as an indicator of CTR prediction against the true CTR label, the AUC of $y$ can also be evaluated with respect to different $\alpha$. As $\alpha$ steadily changes from 0 to 1, $y$ transforms from a ranking metric of CTR to a ranking metric of item LTV. By checking the curves of AUC (of CTR) and NDCG (of LTV) w.r.t.\ $\alpha$, one gets a snapshot of the trade-off between the instant reward and the long-term reward.\footnote{Note that $\alpha$ should vary across items according to Equation (\ref{eq:alpha}). However, here we perform a simplified offline analysis by fixing the same $\alpha$ for all items.} The results are shown in Figure \ref{fig:alpha_metric}. Not surprisingly, the AUC decreases while the NDCG increases as $\alpha$ increases. Based on the shape of the curves, we expect the mean of $\alpha$ to be around 0.1, where the AUC curve starts to decrease faster. Correspondingly, $\alpha_{\text{min}}$ and $\alpha_{\text{max}}$ in Equation (\ref{eq:alpha}) are set to 0 and 0.2, respectively.

\begin{figure}[t!] \centering \includegraphics[width=7.5cm]{Alpha.png} \caption{AUC, NDCG@10, NDCG@20 and NDCG@50 curves with different $\alpha$.
As a larger proportion of $y_{rl}$ is adopted over $y_{ctr}$, the AUC decreases but the NDCG of LTV increases, as expected.} \label{fig:alpha_metric} \vspace{-4mm} \end{figure}

\section{CONCLUSION} \label{sec:conclusion} In this paper, we propose a novel RL-LTV framework that addresses the cold-start recommendation problem by considering the longer-term rewards of items, their LTVs. A special form of MDP, the POC-MDP, is employed to model the item life dynamics. Generalized item-level observation and action spaces, as well as parameter-shared networks, help transfer knowledge from mature, historical items to fresh, cold-start items. An off-policy, actor-critic RL agent with an LSTM component is employed to interactively learn from and change the online system with a policy optimizing both the instant reward and the long-term LTV. To the best of our knowledge, this is the first time such policies have been incorporated into an online recommendation system to address the item cold-start issue. We develop a training framework called IE-RDPG to complete the large-scale, itemwise episodic training task. Rigorous experiments show that our algorithm performs much better on the long-term rewards of cold-start items compared with the online state-of-the-art baseline on Taobao, with no evident degradation of the overall online performance during the experiment. For future work, it would be interesting to use RL to study the entire life periods of products, and thereby produce policies not only for cold-start but also for lifelong recommendation. Another possible improvement is to make the model more explainable, so as to explicitly exhibit the lifetime stage transitions of products. \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \input{sections/introduction}
\section{Background: an overview of BBR and BBRv2} \input{sections/background}
\section{A Deep Dive into BBRv2} \input{sections/measurement}
\section{Design and implementation of BBRv2+} \input{sections/motivation-arch}
\section{Evaluation of BBRv2+} \input{sections/BBRv2p-evaluation}
\section{Related work} \input{sections/relatedwork}
\section{Conclusion} \input{sections/conclusion}

\subsection{Evaluation setup} \label{sec:eva:method} Two testbeds are used for the evaluation of BBRv2+. One is the Mininet-based testbed used in \S\ref{sec:measurement}, serving as a controlled environment to evaluate BBRv2+ from various perspectives. The other is based on Mahimahi~\cite{netravali_mahimahi_2015}, a trace-driven emulator that can accurately replay real-world packet-level traces, as illustrated in Fig.~\ref{fig:mahimahi_topo}. The physical server running the two testbeds is the same one as that used to evaluate BBRv2 in \S\ref{sec:measurement}. The toolset for data analysis and traffic generation is also the same as in \S\ref{sec:measurement}. In all experiments in this section, the $\alpha$ (loss threshold) of BBRv2+ is set to 20\%, as this value is suitable for most buffer sizes according to the results in \S\ref{sec:measurement:loss_resilience}.

\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{images/mahimahi_topo} \caption{Mahimahi testbed} \label{fig:mahimahi_topo} \end{figure}

\subsection{Responsiveness to bandwidth dynamics} \label{sec:eva:bw_change} To evaluate BBRv2+'s responsiveness to bandwidth dynamics, we use the same settings as in \S\ref{sec:measurement:bw_dynamics} and run BBRv2+ to obtain a microscopic view of how it reacts to bandwidth dynamics. Fig.~\ref{fig:bw_change_bbr2p} shows the results.

\begin{figure*}[ht] \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{images/refined_bbr2p_rate_vs_buffer_inc} \subcaption{bw. increasing} \label{fig:bw_inc_bbr2p} \end{minipage} \hfill \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{images/refined_bbr2p_rate_vs_buffer_dec} \subcaption{bw. decreasing} \label{fig:bw_dec_bbr2p} \end{minipage} \caption{BBRv2+'s responsiveness to bandwidth increases (a) and decreases (b). The red line represents the \textit{BtlBW} estimation of BBRv2+, and the green line indicates the real bandwidth of the bottleneck. The dark line shows the dynamics of the bottleneck link's queue length.} \label{fig:bw_change_bbr2p} \end{figure*}

In Fig.~\ref{fig:bw_inc_bbr2p}, when there is no bandwidth increment, BBRv2+ enters ProbeTry only for a very short duration and finishes bandwidth probing soon after, which leads only to a transient short standing queue. When the bandwidth is increased, however, BBRv2+ adapts its \textit{BtlBW} estimation to the real bandwidth in a timely manner. Compared with the results of BBR in Fig.~\ref{fig:bw_inc_bbr} and BBRv2 in Fig.~\ref{fig:bw_inc_bbr2}, BBRv2+ is capable of utilizing newly available bandwidth as quickly as BBR, while its guided probing strategy (by the \signame{}) incurs lower queuing delay than BBR. In Fig.~\ref{fig:bw_dec_bbr2p}, when the bandwidth decreases, BBRv2+ notices through the increased RTT that the queuing delay is clearly rising (see \S\ref{sec:design:probebw}). It expires the old \textit{BtlBW} estimation and adapts its \textit{BtlBW} estimation to the available bandwidth.
Compared with the results of BBR and BBRv2 in Fig.~\ref{fig:resp_to_bw_dynamics}, BBRv2+ adapts its sending rate to the decreased bandwidth much faster, which leads to lower queuing delay.

\begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{images/step_trace.png} \caption{An example of traces with bandwidth changing as a step function.} \label{fig:step_bw_trace} \end{figure}

Next, we compare the responsiveness of Cubic, BBR, BBRv2, and BBRv2+ to bandwidth dynamics in our trace-driven emulation testbed Mahimahi, using five synthesized network traces where the bandwidth changes as a step function, as illustrated in Fig.~\ref{fig:step_bw_trace}. Following the settings in~\cite{abbasloo_classic_2020}, we set the buffer size to 1.5MB, the delay to 20ms, and the loss rate to zero. In each experiment, the flow throughput, as well as the sojourn time of each packet in the buffer of the bottleneck link (denoted as queuing delay), is recorded.

\begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{images/step_bw_vs_delay} \caption{Normalized throughput and queuing delay of different CCAs. (markers: average throughput and queuing delay; left end of the lines: 95\%-tile of queuing delay; ellipses: the standard deviations)} \label{fig:step_bw_trace_res} \end{figure}

To compare the overall performance of all CCAs on a network trace, we normalized the average queuing delay and the average throughput of all CCAs to the minimum average queuing delay and the maximum average throughput achieved on that trace, respectively. In addition, we also normalized the 95th-percentile queuing delay of all CCAs on a network trace to the minimum average queuing delay achieved on that trace. Then, we averaged all normalized values over all traces. The results are shown in Fig.~\ref{fig:step_bw_trace_res}. We observe that BBRv2+ achieves significantly higher throughput and lower queuing delay than BBRv2, and lower queuing delay at the cost of slightly lower throughput than BBR. These observations stem from the facts that: \textbf{(1)} BBRv2+ probes for bandwidth at a frequency similar to BBR's, thus achieving high bandwidth utilization as BBR does; \textbf{(2)} BBRv2+ adapts its sending rate to decreased bandwidth faster than BBR, as it quickly updates its \textit{BtlBW} estimation upon increased queuing delay.

\subsection{Resilience to network jitters} \label{sec:eva:jitter} Next, we evaluate the performance of BBRv2+ under network jitters, using the same settings as in \S\ref{sec:measurement:jitter}. The throughput of BBRv2+, Cubic, BBR and BBRv2 under various levels of jitter is shown in Fig.~\ref{fig:bbr2p_tput_vs_jitters}. Unlike that of BBR and BBRv2, the throughput of BBRv2+ does not degrade when the network jitter becomes larger. Fig.~\ref{fig:bbr2p_owin} further plots the average inflight bytes of the four CCAs; the results confirm that the BDP compensation mechanism of BBRv2+ succeeds in increasing the inflight size for higher throughput when the network jitter becomes larger. Nevertheless, the throughput of BBRv2+ is slightly lower than that of Cubic. The reason is that our compensation of the BDP is somewhat conservative; in contrast, Cubic's \textit{cwnd} can grow far beyond the real BDP, as Cubic is not affected by network jitters.
\begin{figure}[ht] \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/all_jitter} \subcaption{Throughput} \label{fig:bbr2p_tput_vs_jitters} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/all_owin} \subcaption{Avg. inflight bytes} \label{fig:bbr2p_owin} \end{minipage} \caption{Resilience to network jitters} \label{fig:bbr2p_resilience_to_jitters} \end{figure}

\subsection{Inter-protocol fairness} \label{sec:eva:inter_fair} The inter-protocol fairness of BBRv2+ is evaluated using the same settings as in \S\ref{sec:measurement:inter_fair}. We consider BBRv2+ with and without the dual-mode mechanism (see \S\ref{sec:design:coexist}) to study the impact of this mechanism on inter-protocol fairness. The results are shown in Fig.~\ref{fig:cc_fair_bbr2p}. Several observations are notable. First, the results demonstrate the efficacy of the dual-mode mechanism. In Fig.~\ref{fig:cc_fair_bbr2p_wo_fallback}, we can observe that BBRv2+ without the dual-mode mechanism is starved by Cubic in deep-buffered cases. This is because BBRv2+ falsely treats the RTT increments caused by Cubic as a signal of bandwidth shrinking, thus constantly yielding bandwidth to Cubic. The problem is eliminated by the dual-mode mechanism, as shown in Fig.~\ref{fig:cc_fair_bbr2p_w_fallback}. Second, compared with the results of BBRv2(20\%, 0.3) in Fig.~\ref{fig:cc_fair_bbr2_20alpha}, we can see that: \textbf{(1)} BBRv2+ provides better inter-protocol fairness than BBRv2(20\%, 0.3) under an extremely shallow buffer (i.e., 0.2$\times$ BDP); \textbf{(2)} BBRv2+'s inter-protocol fairness is similar to that of BBRv2(20\%, 0.3) under other buffer sizes. The reason for the better inter-protocol fairness of BBRv2+ under an extremely shallow buffer is that, thanks to the two-step probing mechanism (see \S\ref{sec:design:probebw}), BBRv2+ does not enter ProbeUp, which is more aggressive than ProbeTry, while BBRv2(20\%, 0.3) periodically enters ProbeUp. Third, compared with the results of BBR in Fig.~\ref{fig:cc_fairness_bbr} and BBRv2 in Fig.~\ref{fig:cc_fairness_bbr2}, BBRv2+ performs no worse than the better of BBR and BBRv2 under different buffer sizes. The reasons are three-fold: \textbf{(1)} under shallow buffers, BBRv2+ achieves inter-protocol fairness similar to that of BBRv2 thanks to its cautiously aggressive bandwidth-probing strategy (see \S\ref{sec:design:probebw}); \textbf{(2)} under moderate buffers, BBRv2+ is close to BBRv2(20\%, 0.3), which has better inter-protocol fairness than BBRv2 in these cases, as explained in \S\ref{sec:measurement:loss_resilience}; \textbf{(3)} under deep buffers, the three CCAs perform similarly, as they all cap their inflight sizes around 2$\times$ BDP and are thus unable to beat loss-based CCAs.
\begin{figure}[ht] \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/cc_fairness_bbr2plus_wo_fallback} \subcaption{Without the dual-mode mechanism} \label{fig:cc_fair_bbr2p_wo_fallback} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/cc_fairness_bbr2plus_w_fallback} \subcaption{With the dual-mode mechanism} \label{fig:cc_fair_bbr2p_w_fallback} \end{minipage} \caption{BBRv2+: Inter-protocol fairness} \label{fig:cc_fair_bbr2p} \end{figure}

\begin{figure}[ht] \centering \begin{minipage}{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{images/rtt_fairness_bbr2plus} \end{minipage} \caption{BBRv2+: RTT fairness} \label{fig:rtt_fair_bbr2p} \end{figure}

\subsection{RTT fairness} \label{sec:eva:rtt_fair} Next, we evaluate the RTT fairness of BBRv2+ using the same setting as in \S\ref{sec:measurement:rtt_fair}. The results are presented in Fig.~\ref{fig:rtt_fair_bbr2p}. Compared with the results of BBR in Fig.~\ref{fig:rtt_fairness_bbr} and those of BBRv2 in Fig.~\ref{fig:rtt_fairness_bbr2}, BBRv2+ has better RTT fairness than BBR and behaves close to BBRv2. These results are expected, as the mechanisms in BBRv2 that improve RTT fairness over BBR (see \S\ref{sec:measurement:rtt_fair}) remain unchanged in BBRv2+.

\subsection{Retransmissions in shallow-buffered networks} \label{sec:eva:retrans} In the following, we evaluate whether BBRv2+ is as aggressive as BBR and thus leads to excessive retransmissions in shallow-buffered networks, using the same setting as in \S\ref{sec:retx_vs_tput}. The results are shown in Fig.~\ref{fig:bbr2p_vs_bbr2}. We observe that when the buffer is extremely shallow (e.g., 0.02$\times$ BDP when the bandwidth is 500Mbps and the RTT is 75ms), BBRv2+ incurs more retransmissions than BBRv2. This is because the bandwidth-probing frequency of BBRv2+ is higher than that of BBRv2. Although BBRv2+ uses a relatively small \emph{pacing\_gain} (1.1) when it starts to probe for more bandwidth in ProbeTry, it still causes buffer overflow when the network buffer is extremely shallow. However, compared with the results of BBR in Fig.~\ref{fig:retx_bbr}, BBRv2+ reduces retransmissions significantly.

\begin{figure}[htb] \centering \begin{minipage}{0.6\linewidth} \centering \includegraphics[width=1\linewidth]{images/loss_rate_bbr2p_no_annot} \end{minipage} \caption{BBRv2+: retransmission rate (100KB buffer)} \label{fig:bbr2p_vs_bbr2} \end{figure}

\subsection{Real-world trace driven emulation} \label{sec:eva:mahimahi} To evaluate how BBRv2+ performs under real network conditions, we compare BBRv2+ with Cubic, BBR, BBRv2, and Orca~\cite{abbasloo_classic_2020} in the emulation-based Mahimahi testbed, using traces collected in real-world networks. Orca\footnote{We directly used the model trained by the authors in our experiments.} is used for comparison as a representative of the state-of-the-art learning-based CCAs.

\noindent\textbf{Trace collection:} We collected traces from WiFi and LTE networks using \emph{saturatr}~\cite{winstein_stochastic_nodate}. In total, 20 network traces were collected, half of which were collected when the collector was stationary relative to the base station (LTE) or the access point (WiFi), and the other half when the collector was moving at high speed (i.e., in vehicles or on high-speed rails). In stationary scenarios, the network bandwidth is usually stable, while in high-mobility scenarios the bandwidth fluctuates greatly.
Examples of stationary and high-mobility traces are shown in Fig.~\ref{fig:mahimahi_traces}. The network delay and loss rate are also measured using \textit{ping}.

\begin{figure}[htb] \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/static_trace} \subcaption{An example of stationary traces} \label{fig:mahimahi_stationary_trace} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/highmobility_trace} \subcaption{An example of high-mobility traces} \label{fig:mahimahi_mobility_trace} \end{minipage} \caption{Examples of traces used in our trace-driven evaluation} \label{fig:mahimahi_traces} \end{figure}

\begin{figure*}[htb] \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.7\linewidth]{images/all_tput_vs_delay_static} \subcaption{Stationary traces} \label{fig:mahimahi_static_tput_vs_delay} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.7\linewidth]{images/all_tput_vs_delay_mobility} \subcaption{High-mobility traces} \label{fig:mahimahi_mobility_tput_vs_delay} \end{minipage} \caption{Normalized throughput and queuing delay of different CCAs. (markers: average throughput and queuing delay, left end of the lines: 95\%-tile of queuing delay, ellipses: the standard deviations of the throughput and queuing delay)} \label{fig:mahimahi_results} \end{figure*}

\noindent\textbf{Experimental results:} The network buffer size is set to 1.5MB. The collected traces, including bandwidth dynamics, network delay, and loss rate, are the inputs of the emulator. Using the same metrics as in \S\ref{sec:eva:bw_change}, the results for the stationary and high-mobility scenarios are shown in Fig.~\ref{fig:mahimahi_static_tput_vs_delay} and Fig.~\ref{fig:mahimahi_mobility_tput_vs_delay}, respectively. In stationary scenarios, BBR, BBRv2, and BBRv2+ perform very close to each other because the bandwidth is usually stable. Cubic shows slightly better throughput most of the time, at the cost of high queuing delays. Orca has the lowest throughput, probably because the network scenarios on which the model was trained differ from our collected traces, which also demonstrates a limitation of learning-based CCAs. In high-mobility scenarios, BBR and BBRv2+ achieve the highest and the second-highest throughput, respectively. Meanwhile, BBRv2 and Cubic fail to achieve consistently high throughput across different high-mobility traces. Compared with BBRv2+, BBR achieves higher throughput at the cost of higher queuing delays, as it is more aggressive. Orca fails to achieve consistently high throughput and low delays in high-mobility scenarios; the results of Orca raise concerns about the generalization ability of learning-based CCAs. The above results of trace-driven emulation using Mahimahi demonstrate that BBRv2+ performs close to BBR and BBRv2 in stationary network scenarios, but shows great improvements over BBRv2 in high-mobility scenarios, as it has better responsiveness to bandwidth dynamics.

\subsection{Summary of experimental results} We can conclude from the above experiments that BBRv2+ succeeds in balancing the aggressiveness of bandwidth probing and the fairness against loss-based CCAs.
With such a balance, which is achieved by neither BBR nor BBRv2, BBRv2+ achieves higher throughput and lower delay than BBRv2 in scenarios where the bandwidth fluctuates, while keeping the advantages of BBRv2 with regard to inter-protocol fairness and reduced retransmissions under shallow buffers. Moreover, the dual-mode mechanism makes BBRv2+ able to co-exist with loss-based CCAs under deep buffers, and the compensation mechanism for BDP estimation effectively enhances the performance of BBRv2+ under high network jitter.

\subsection{Methodology} \label{sec:measurement:method} We utilize Mininet~\cite{noauthor_mininet_nodate} to build an emulation-based testbed. The testbed, whose topology is shown in Fig.~\ref{fig:mininet_topo}, was run on a server with 8 Intel Xeon Platinum cores and 32GB of memory. The operating system is Ubuntu 18.04.5 with BBR and BBRv2~\cite{bbr2_kernel} installed. Linux \textit{tc-netem}~\cite{noauthor_tc-netem8_nodate} is used to emulate different network conditions (e.g., router buffer size, link speed, RTT, random loss rate, jitter). \textit{Iperf3}~\cite{noauthor_esnetiperf_nodate} generates TCP traffic between senders and receivers. During the transmission, various performance metrics (e.g., RTT, throughput, retransmissions, inflight bytes) are measured by \textit{tcpdump}~\cite{noauthor_tcpdumplibpcap_nodate} and \textit{tcptrace}~\cite{noauthor_tcptrace1_nodate}. Moreover, a set of internal variables (e.g., \textit{cwnd}, \textit{pacing\_rate}, \textit{RTprop}, \textit{BtlBW}) in BBR and BBRv2 is reported by a Linux kernel module, and the backlog of the standing queue in the bottleneck routers (R2 and R3) is reported by \textit{tc}~\cite{noauthor_tc8_nodate}. Each set of experiments is repeated five times and the average results are reported.

\begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{images/mininet_topo} \caption{Mininet testbed} \label{fig:mininet_topo} \end{figure}

\subsection{Fairness} \label{sec:measurement:fairness} We first evaluate the fairness of BBRv2. Two types of fairness are investigated: the inter-protocol fairness against loss-based CCAs and RTT fairness. In this set of experiments, two flows start simultaneously, one from H1 to H3 and the other from H2 to H4, and last for three minutes to allow throughput convergence. The bottleneck bandwidth is fixed at 40Mbps without network jitters or random losses. We use Jain's fairness index~\cite{rfc5166} ($\mathcal{F}$) of the two flows as the metric of fairness, calculated according to Eq.~\ref{eq:jain_index}, where $T_i$ is the average throughput of the $i$-th flow.
\begin{equation} \mathcal{F} = \frac{(T_{1} + T_{2})^2}{2\,(T_{1}^2 + T_{2}^2)} \label{eq:jain_index} \end{equation}
$\mathcal{F} = 1$ indicates the maximum fairness, where the two flows have the same average throughput, and $\mathcal{F} = 0.5$ indicates that one flow's throughput is zero and fairness is minimized. As the size of the bottleneck buffer also impacts fairness results, we varied the buffer size to study their relationship.

\subsubsection{Inter-protocol fairness} \label{sec:measurement:inter_fair} In this experiment, the flow from H1 to H3 uses either BBR or BBRv2, and that from H2 to H4 uses Cubic, which is the default CCA in Linux and macOS. The RTTs of the two paths of the two flows are set to 40ms. Fig.~\ref{fig:cc_fairness} shows the inter-protocol fairness results for both BBR and BBRv2.
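For reference, Jain's index generalizes to $n$ flows as $(\sum_i T_i)^2 / (n \sum_i T_i^2)$; a minimal Python sketch of the computation used throughout this section is:
\begin{verbatim}
def jain_index(throughputs):
    """Jain's fairness index of a set of flow throughputs.

    Returns 1.0 when all flows get equal throughput and
    approaches 1/n (0.5 for two flows) when a single flow
    takes all the bandwidth.
    """
    n = len(throughputs)
    total = sum(throughputs)
    return total ** 2 / (n * sum(t * t for t in throughputs))

# e.g. jain_index([20.0, 20.0]) == 1.0
#      jain_index([40.0, 0.0]) == 0.5
\end{verbatim}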
We can observe that compared with BBR, BBRv2 significantly improves Jain's fairness index when the buffer is shallow (i.e., less than 2$\times$ BDP). This is due to the fact that BBRv2 reacts to losses caused by buffer overflow and bounds the inflight size by using \emph{inflight\_hi} and \emph{inflight\_lo}. When the bottleneck buffer becomes deeper, Cubic obtains more bandwidth than BBR or BBRv2. This is because the inflight size of both BBR and BBRv2 is limited to about 2$\times$ BDP, while Cubic's inflight size can go beyond this value under deep buffers. As the two flows experience similar RTTs, a larger inflight size means higher throughput. We also note that BBRv2 is less competitive than BBR under moderate buffers. The reason lies in that BBRv2 is more conservative in bandwidth probing and inflight-cap estimation.

\begin{figure} \centering \begin{minipage}{1\linewidth} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/cc_fairness_bbr} \subcaption{BBR vs Cubic} \label{fig:cc_fairness_bbr} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/cc_fairness_bbr2} \subcaption{BBRv2 vs Cubic} \label{fig:cc_fairness_bbr2} \end{minipage} \caption{Inter-protocol fairness of BBR/BBRv2 under different buffer sizes.} \label{fig:cc_fairness} \end{minipage} \end{figure}

\subsubsection{RTT fairness} \label{sec:measurement:rtt_fair} In this experiment, both flows use the same CCA, either BBR or BBRv2. The path between H1 and H3 has an RTT of 40ms and that between H2 and H4 has an RTT of 150ms. Fig.~\ref{fig:rtt_fairness} shows the RTT fairness results of BBR and BBRv2, where the buffer size is expressed in multiples of the BDP of the path between H2 and H4.

\begin{figure}[htb] \centering \begin{minipage}{1\linewidth} \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/rtt_fairness_bbr} \subcaption{BBR} \label{fig:rtt_fairness_bbr} \end{minipage} \hfill \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/rtt_fairness_bbr2} \subcaption{BBRv2} \label{fig:rtt_fairness_bbr2} \end{minipage} \caption{RTT fairness of BBR/BBRv2 under different buffer sizes.} \label{fig:rtt_fairness} \end{minipage} \end{figure}

When the buffer is quite shallow (i.e., 0.2$\times$ BDP), BBR has good fairness. When the buffer size becomes larger, the BBR flow with the longer RTT gradually occupies all the bandwidth and starves the flow with the shorter RTT. The reason for the poor RTT fairness of BBR is well documented by previous studies~\cite{hock_experimental_2017, ma_fairness_2017}. The bandwidth probing of BBR leads the aggregate sending rate of the two flows to exceed the bottleneck bandwidth, thus forming a persistent queue at the bottleneck link. As the inflight cap of BBR is proportional to \textit{RTprop}, the flow with the longer RTT pours more data into the bottleneck buffer, thus obtaining a larger share of the bottleneck link's capacity. This problem is not severe under shallow buffers, as excess packets are mostly dropped instead of forming a persistent queue. Compared with BBR, BBRv2 has better RTT fairness, especially under deep buffers. The likely reason is three-fold. First, when the buffer size is moderate, losses are triggered by buffer overflow, and then both flows reduce their inflight sizes in proportion to their BDPs. As the flow with the longer RTT has a larger BDP, it reduces its inflight size more than the flow with the shorter RTT.
Second, when BBRv2 flows are cruising at the speed of \textit{BtlBW}, they always try to leave headroom\footnote{BBRv2 always limits its inflight size below $0.85\times$ \emph{inflight\_hi} to leave headroom for faster throughput convergence with other flows, if there are any.} for other flows to explore the bandwidth. Third, BBRv2 enters the ProbeRTT state more often than BBR, thus leading BBRv2 flows to yield occupied capacity more frequently.

\begin{figure}[thb] \begin{minipage}{1\linewidth} \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/loss_rate_bbr_no_annot} \subcaption{BBR} \label{fig:retx_bbr} \end{minipage} \hfill \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/loss_rate_bbr2_no_annot} \subcaption{BBRv2} \label{fig:retx_bbr2} \end{minipage} \caption{The heatmap of the retransmission rate of BBR/BBRv2 under various network conditions. The numbers in the squares are retransmission rates in percent.} \label{fig:retx_bbr_vs_bbr2} \end{minipage} \end{figure}

\subsection{Retransmission and throughput} \label{sec:retx_vs_tput} As one of BBRv2's design goals is to reduce unnecessary retransmissions in shallow-buffered networks, we next investigate whether BBRv2 achieves this goal. The experimental setup is similar to that of previous work~\cite{cao_when_2019}, where the bottleneck bandwidth varies within 10$\sim$750~Mbps and the path RTT within 5$\sim$150~ms, as these values are commonly observed in modern networks~\cite{cao_when_2019, hock_experimental_2017, Huffaker2002DistanceMI}. The buffer size at the bottleneck link is set to 100KB to emulate a shallow-buffered network, because 100KB is less than the BDP of most bandwidth-RTT combinations in our setup. One TCP flow from H1 to H3 runs for 30 seconds and the retransmission rate is recorded for each setup.

\begin{figure} \begin{minipage}{1\linewidth} \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/BBR-BBR2-100K} \subcaption{100KB buffer} \label{fig:tput_bbr_vs_bbr2_shallow} \end{minipage} \hfill \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=1\linewidth]{images/BBR-BBR2-10M} \subcaption{10MB buffer} \label{fig:tput_bbr_vs_bbr2_deep} \end{minipage} \caption{The heatmap of \textit{Tput\_Gain} (in percent) in (a) shallow and (b) deep buffered networks.} \label{fig:tput_bbr_vs_bbr2} \end{minipage} \end{figure}

The heatmaps in Fig.~\ref{fig:retx_bbr_vs_bbr2} show the retransmission rates of BBR and BBRv2 under various network conditions. We can observe that the retransmission rate of BBRv2 is significantly reduced compared with that of BBR, especially when the BDP is larger than 400KB (i.e., the buffer size $\leq$ 0.25$\times$ BDP). The lower retransmission rate of BBRv2 in shallow-buffered networks stems from the fact that BBRv2 reacts to packet losses, while BBR does not. In the Startup and ProbeUp states, BBRv2 tries to send at a rate higher than the bottleneck bandwidth, which leads to excessive losses (i.e., loss rate $\geq$ 2\%). The excessive losses trigger \emph{inflight\_hi} to be set to the current inflight size, which is likely close to the BDP in shallow-buffered networks. As a result, BBRv2's inflight size is bounded below 0.85$\times$ \emph{inflight\_hi} in ProbeCruise, because it tries to leave headroom for other flows to explore bandwidth.
Since BBRv2 flows spend most of their lifecycle in ProbeCruise, the average throughput of BBRv2 is expected to be 15\% lower than the available bandwidth. In other words, BBRv2 trades throughput for fewer retransmissions in shallow-buffered networks. To verify this, we compute the throughput gain of BBR over BBRv2 (\emph{Tput\_Gain}), defined in Eq.~\ref{eq:tput_gain}, where $Tput_{BBR}$ (resp. $Tput_{BBRv2}$) is the average throughput of a BBR (resp. BBRv2) flow over 30 seconds. \begin{equation} Tput\_Gain = \frac{Tput_{BBR} - Tput_{BBRv2}}{Tput_{BBRv2}} \label{eq:tput_gain} \end{equation} Fig.~\ref{fig:tput_bbr_vs_bbr2_shallow} plots the \emph{Tput\_Gain} under various network conditions. We can observe that in the network conditions where BBRv2 reduces the retransmission rate (when the BDP exceeds 400KB), BBRv2 achieves lower throughput than BBR. Specifically, the throughput of BBRv2 is 13\%$\sim$16\% lower than that of BBR in these cases, which coincides with our analysis. In deep-buffered networks, however, packet losses are much less frequent. It is thus expected that BBR and BBRv2 achieve comparable throughput. This is confirmed by the results in Fig.~\ref{fig:tput_bbr_vs_bbr2_deep}, where the buffer size is configured at 10MB, larger than the BDP of most of the bandwidth-RTT combinations in our setup. The throughput differences between BBR and BBRv2 are indeed marginal in these networks. \subsection{Resilience to random losses} \label{sec:measurement:loss_resilience} Several early tests~\cite{cardwell_bbr_105, kfoury_emulation-based_2020, song_understanding_2021} have shown that BBRv2 is less resilient to random losses than BBR, since BBRv2 limits its inflight size by \emph{inflight\_lo} and \emph{inflight\_hi}, both of which react to all types of losses. In BBRv2, two parameters decide how \emph{inflight\_lo} and \emph{inflight\_hi} react to losses: the explicit loss threshold ($\alpha$) and the \emph{inflight\_lo} reduction factor ($\beta$). In our experiments, we investigate BBRv2 variants with different $\alpha$ and $\beta$ under random loss, where each specific variant is referred to as BBRv2($\alpha$, $\beta$). For $\alpha$, we cap it at 20\% to match the maximum loss rate that BBR can tolerate; for $\beta$, we only evaluate the difference between the cases with (i.e., $\beta=0.3$) and without it (i.e., $\beta=0$)\footnote{The default value 0.3 is necessary for BBRv2 to co-exist with Cubic~\cite{bbr2_kernel}.}. In the experiment, the bottleneck bandwidth is set to 40Mbps, and the path RTT is 40ms. The buffer size is set to 32$\times$ BDP to avoid packet loss due to buffer overflow. The random loss rate ranges from 0\% to 30\%. Fig.~\ref{fig:loss_resilience_measurement} reports the average throughput of each CCA against random loss rates. We observe that the throughput of BBRv2 drops significantly after the random loss rate reaches 2\%. The loss threshold $\alpha$ clearly impacts the loss resilience of BBRv2: as $\alpha$ increases, loss resilience improves. For instance, with a 10\% random loss rate, BBRv2(20\%, 0.3) reaches around half of the maximum bandwidth, while the default BBRv2's throughput drops to nearly zero. The impact of $\beta$ is also notable: the loss resilience of BBRv2(20\%, 0.3) is lower than that of BBRv2(20\%, 0), which performs similarly to BBR. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{images/loss_resilience_wo_bbr2p} \caption{Avg.
throughput against different random loss rates (buffer size = 32$\times$ BDP).} \label{fig:loss_resilience_measurement} \end{figure} The above results indicate that BBRv2's loss resilience can be improved by raising $\alpha$. Yet there is a concern: how does raising $\alpha$ impact the retransmission rate in shallow-buffered networks? Recall that BBRv2 alleviates the retransmission issue by setting \emph{inflight\_hi} once the loss rate exceeds $\alpha$, thereby lowering its inflight size (see \S\ref{sec:retx_vs_tput}). \begin{figure} \centering \includegraphics[width=0.85\linewidth]{images/retx_vs_alpha} \caption{Retransmission rates versus loss thresholds ($\alpha$) under networks with various buffer sizes. The error bars represent the standard deviations of the retransmission rate. Note that $\beta$ was fixed at 0.3 for all experiments; the default $\alpha$ in BBRv2 is 2\% (the first data point of every line).} \label{fig:retx_vs_alpha} \end{figure} To investigate this concern, we further extended the experiments by considering more BBRv2 variants ($\alpha$ $\in$ [2\%, 100\%], $\beta$ = 0.3) and more buffer size configurations (buffer size $\in$ \{0.2, 0.5, 1.0, 1.5, 2.0\}$\times$ BDP). Fig.~\ref{fig:retx_vs_alpha} plots the retransmission rates of all these BBRv2 variants under a 0\% random loss rate (to eliminate the impact of random losses on the retransmission rate), showing the impact of buffer size. Two observations are notable. First, the retransmission rate increases when $\alpha$ exceeds a certain turning point, which depends on the bottleneck buffer size. Values of $\alpha$ beyond the turning point are too high to be reached by the transient loss rate, thus limiting the efficacy of \emph{inflight\_hi}. Second, if the buffer is large enough (i.e., 2$\times$ BDP in our experiments), retransmissions are eliminated and the value of $\alpha$ becomes irrelevant. Another concern about raising $\alpha$ is its impact on inter-protocol fairness: the larger $\alpha$ is, the more slowly BBRv2 reacts to losses, which makes BBRv2 more aggressive toward loss-based CCAs. To investigate this concern, we test the inter-protocol fairness of BBRv2(20\%, 0.3) using the same setup as in \S\ref{sec:measurement:inter_fair}, and plot the results in Fig.~\ref{fig:cc_fair_bbr2_20alpha}. In comparison with Fig.~\ref{fig:cc_fairness_bbr2}, we can see that the inter-protocol fairness of BBRv2 is indeed worsened in the case of an extremely shallow buffer (0.2$\times$ BDP), due to the increased aggressiveness caused by a larger $\alpha$. Nevertheless, we also observe that the fairness index is improved under moderate buffers, because the increased aggressiveness also makes BBRv2 less vulnerable to Cubic when the bottleneck buffer becomes larger. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/cc_fairness_bbr2_20alpha} \caption{Inter-protocol fairness of BBRv2(20\%, 0.3).} \label{fig:cc_fair_bbr2_20alpha} \end{figure} \vspace{0.5em} \noindent\textbf{Summary of random loss resilience:} The loss resilience of BBRv2 can be improved by raising the loss threshold $\alpha$. Nevertheless, the threshold $\alpha$ should be carefully tuned according to the bottleneck buffer size, to avoid increasing retransmissions and being too aggressive toward loss-based CCAs in extremely shallow-buffered networks.
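To make the roles of the two parameters concrete, the sketch below illustrates how per-round loss feedback could drive the two inflight bounds. It is a simplification for exposition only, written in Python; the variable and function names are ours and are not the identifiers used in the kernel implementation.

\begin{verbatim}
# Simplified sketch of BBRv2-style loss reaction (illustrative only).
ALPHA = 0.02  # explicit loss threshold (2% by default)
BETA = 0.3    # inflight_lo reduction factor

class Bounds:
    inflight_hi = float("inf")  # long-term inflight cap
    inflight_lo = float("inf")  # short-term inflight bound

def on_round_end(b, lost, delivered, inflight):
    loss_rate = lost / max(1, lost + delivered)
    if loss_rate >= ALPHA:
        # Excessive loss: cap inflight at the currently observed level,
        # which in a shallow-buffered path is likely close to the BDP.
        b.inflight_hi = min(b.inflight_hi, inflight)
    if lost > 0:
        # Any loss tightens the short-term bound by the factor beta.
        b.inflight_lo = min(b.inflight_lo, inflight * (1 - BETA))
\end{verbatim}

A larger \texttt{ALPHA} delays the first branch, which is exactly the trade-off studied above: better random loss resilience, but a weaker \emph{inflight\_hi} cap in shallow-buffered networks.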
\subsection{Responsiveness to bandwidth dynamics} \label{sec:measurement:bw_dynamics} In networks with highly dynamic available bandwidth~\cite{wang_active-passive_2019, winstein_stochastic_nodate, li_measurement_2018}, BBRv2's bandwidth probing may fail to quickly adapt to bandwidth changes. Next, we investigate BBRv2's responsiveness to bandwidth changes.
\begin{figure*}[thb] \centering \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1\linewidth]{images/refined_bbr_rate_vs_buffer_inc} \subcaption{BBR (bw. increasing)} \label{fig:bw_inc_bbr} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1\linewidth]{images/refined_bbr_rate_vs_buffer_dec} \subcaption{BBR (bw. decreasing)} \label{fig:bw_dec_bbr} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1\linewidth]{images/refined_bbr2_rate_vs_buffer_inc} \subcaption{BBRv2 (bw. increasing)} \label{fig:bw_inc_bbr2} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1\linewidth]{images/refined_bbr2_rate_vs_buffer_dec} \subcaption{BBRv2 (bw. decreasing)} \label{fig:bw_dec_bbr2} \end{minipage} \caption{Responsiveness to bandwidth increases (a, c) and decreases (b, d). The red line represents the \textit{BtlBW} estimation of BBR/BBRv2, and the green line indicates the real bandwidth of the bottleneck. The dark line shows the dynamics of the bottleneck link's queue length. Note that the spikes of queue length in (a) and (c) are caused by the periodic bandwidth probing of BBR/BBRv2, and the sudden drop of queue length around 10s in (b) and (d) occurs because BBR/BBRv2 enters the ProbeRTT state.} \label{fig:resp_to_bw_dynamics} \end{figure*}
The experiments are designed as follows. The bandwidth of the bottleneck link is configured to increase or decrease by 5Mbps every 2 seconds, the path delay is set to 40ms, and the buffer size is set to 32$\times$ BDP. The internal variables during flow transmission (including \textit{pacing\_rate} and \textit{BtlBW}, the instantaneous throughput, and the queue length at the bottleneck link) are sampled at an interval of 100ms. Fig.~\ref{fig:resp_to_bw_dynamics} shows how BBR and BBRv2 adapt to bandwidth increases and decreases, respectively. In this figure, the upward and downward spikes of queue length correspond to the actions of BBR/BBRv2 in probing for more bandwidth or draining the bottleneck buffer. We can observe that BBRv2 is less effective than BBR in terms of responsiveness to bandwidth dynamics, resulting in low bandwidth utilization and long queuing delays. As we discussed in \S\ref{sec:background}, to match the interval between Reno loss recovery epochs for better inter-protocol fairness, BBRv2 uses $\min\{\mathrm{rand}(2,3),\frac{BDP}{MSS}\times{}RTT\}$ seconds as its probing interval. This interval can be tens of RTTs, which is too conservative in such a dynamic environment. In other words, BBRv2 improves inter-protocol fairness at the cost of poorer responsiveness to bandwidth dynamics. \subsection{Resilience to network jitters} \label{sec:measurement:jitter} Several works~\cite{wang_active-passive_2019, kumar_tcp_2019} have shown that throughput collapse occurs when BBR operates in high-jitter networks, which are widely deployed, e.g.,
WiFi and 5G networks operating in the mmWave band~\cite{zhang_will_2018, chitimalla_5g_2017, kumar_tcp_2019}, and cellular networks~\cite{kumar_tcp_2019, beshay_link-coupled_2017}, especially in high-mobility scenarios such as high-speed rails~\cite{wang_active-passive_2019}. It is interesting to investigate whether BBRv2 operates well in networks with high jitter. In this experiment, the bottleneck bandwidth is 40Mbps and the path RTT is 40ms. The bottleneck buffer size is set to 32$\times$ BDP to avoid buffer overflow. To emulate jitter, \emph{tc} is used to add jitter following a Gaussian distribution at R3's interface that connects to H3. The mean value of the Gaussian distribution varies from 0 to 120~ms to emulate different degrees of jitter. \begin{figure} \centering \begin{minipage}{0.6\linewidth} \centering \includegraphics[width=1\linewidth]{images/jitter_wo_bbr2p} \end{minipage} \caption{Avg. throughput against different levels of jitter. $x=0$ is equivalent to no jitter.} \label{fig:measure_resilience_to_jitters} \end{figure} Fig.~\ref{fig:measure_resilience_to_jitters} shows the average value and the standard deviation of the throughput of Cubic, BBR, and BBRv2 under various levels of jitter. Compared with Cubic, both BBR and BBRv2 experience low throughput under high jitter. As documented by Kumar et al.~\cite{kumar_tcp_2019}, BBR underestimates \textit{RTprop} in such networks because it uses the minimum RTT over the recent 10s to approximate \textit{RTprop}, leading to \textit{cwnd} exhaustion. This problem still exists in BBRv2, even though BBRv2 updates \textit{RTprop} twice as frequently as BBR (i.e., BBRv2 uses the minimum RTT over the recent 5s to estimate \textit{RTprop}). We also note that the significant throughput degradation starts when the average jitter reaches the jitter-free path RTT (i.e., 40ms). \subsection{Summary and Implications} We observe that BBRv2 improves inter-protocol fairness and RTT fairness, and reduces retransmission rates under shallow buffers, at the cost of slow responsiveness to bandwidth dynamics and low resilience to random losses. First, the root cause of the slow responsiveness is that BBRv2 is over-conservative in bandwidth probing. In other words, it fails to strike a good balance between aggressiveness in probing for more bandwidth and fairness toward loss-based CCAs. Note, however, that recklessly increasing BBRv2's aggressiveness in bandwidth probing may lead BBRv2 to generate overwhelming retransmissions and to share bandwidth unfairly with loss-based CCAs. In the next section, we propose BBRv2+, which incorporates \signame{} to cautiously guide the aggressiveness of bandwidth probing without reducing the fairness against loss-based CCAs. The challenge is how to use this signal effectively and how to avoid being suppressed by loss-based CCAs in deep-buffered networks, as happens to other delay-based CCAs. Second, the resilience to random losses can be improved by raising the loss threshold $\alpha$, where the value of $\alpha$ needs to be set according to the bottleneck buffer size. Last, the throughput degradation of BBR and BBRv2 in high-jitter networks is due to the underestimation of \textit{RTprop}, which in turn leads to a smaller estimation of the BDP. We propose a BDP compensation mechanism that keeps the estimated BDP close to the real BDP. \subsection{Design goals} Guided by the observations above, BBRv2+ aims to improve BBRv2's responsiveness to bandwidth dynamics and its resilience to network jitter, while preserving BBRv2's advantages: good inter-protocol and RTT fairness, and low retransmission rates in shallow-buffered networks. \subsection{Overview} The architecture of BBRv2+ is shown in Fig.~\ref{fig:bbrv2plus_arch}. BBRv2+ incorporates \signame{} in its path model.
Specifically, the \signame{} consists of three state variables tracking minimum RTTs (the first three variables listed in Table~\ref{tab:bbrv2plus_vars}), which reflect the change of queuing delay over time. The \signame{} facilitates quick responsiveness to bandwidth dynamics. In particular, a new sub-state, ProbeTry, is added to the ProbeBW state. In ProbeTry, BBRv2+ slightly speeds up to examine whether this acceleration leads to increased RTTs. In the case of increased RTTs, BBRv2+ quits this probing and moves to the ProbeDown state to drain the queue at the bottleneck link; otherwise, it moves to the ProbeUp state to further explore available bandwidth. BBRv2+ also uses the \signame{} to quickly adapt to bandwidth decreases---it quickly updates its bottleneck bandwidth estimation to the current bandwidth measurement if an obvious increase of RTT is observed while BBRv2+ is not probing for bandwidth. Like other CCAs that use delay-based signals, BBRv2+ would be suppressed when co-existing with loss-based CCAs under deep buffers~\cite{al-saadi_survey_2019}, as the loss-based CCAs constantly fill the buffer, leading BBRv2+ to falsely yield obtained bandwidth. To counter this, BBRv2+ uses a dual-mode mechanism that falls back to BBRv2's state machine when loss-based CCAs co-exist. Finally, BBRv2+ uses a BDP compensation mechanism to address the \textit{cwnd} exhaustion problem caused by network jitters. Our key observation is that in high-jitter networks, the BDP is underestimated because of the underestimation of \textit{RTprop}. The mechanism compensates the BDP estimate by taking recent RTT variations into consideration; this compensation significantly mitigates the underestimation issue. \subsection{Redesign of the ProbeBW state} \label{sec:design:probebw} In the case of bandwidth increments, BBRv2+ needs to start probing for more bandwidth quickly instead of spending time cruising at the current estimated \textit{BtlBW}. Thus, the probing interval needs to be reasonably shortened; we set it to approximately match the probing interval of BBR (8 rounds of RTT). However, if BBRv2+ is already sending at a speed close to the bottleneck bandwidth, this probing interval may result in more packet losses and thus unfairness toward loss-based CCAs in shallow-buffered networks. Thus, a two-step probing mechanism incorporating the \signame{} is introduced in BBRv2+, as shown in Fig.~\ref{fig:bbrv2plus_arch}. A new sub-state ProbeTry, which lasts for two RTTs, is inserted before entering ProbeUp in the state machine. In the first RTT of ProbeTry, BBRv2+ slightly increases its \textit{pacing\_rate} by increasing \emph{pacing\_gain} to 1.1. In the second RTT, BBRv2+ reduces \emph{pacing\_gain} to 1.0 and monitors whether $\mathrm{MinRTT_{curr\_rtt}}$ is larger than $\gamma\times\mathrm{MinRTT_{prev\_rtt}}$, where MinRTT is measured on the ACKs for the packets sent in the previous round, thus reflecting the queuing delay caused by the previous round (see Table~\ref{tab:bbrv2plus_vars}). The rationale for using $\gamma>1$ is to introduce a relaxation factor that tolerates noise in RTT measurements: a small $\gamma$ may lead BBRv2+ to miss some chances to explore bandwidth, while a large $\gamma$ may make BBRv2+ over-aggressive. In our current implementation, we set $\gamma=1.02$ to tolerate noise of up to 2\% in RTT measurements.
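As a rough illustration, the decision taken at the end of ProbeTry can be sketched as follows (in Python; the names are ours rather than those of the actual implementation):

\begin{verbatim}
# Sketch of the two-step ProbeTry decision (illustrative names only).
GAMMA = 1.02  # relaxation factor: tolerate ~2% RTT measurement noise

def probe_try_decision(min_rtt_curr, min_rtt_prev):
    # Second RTT of ProbeTry: did the mild speed-up of the first RTT
    # (pacing_gain = 1.1) build a queue at the bottleneck?
    if min_rtt_curr > GAMMA * min_rtt_prev:
        return "ProbeDown"  # drain the queue built while probing
    return "ProbeUp"        # no delay increase: explore more bandwidth
\end{verbatim}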
It is worth noting that $\gamma$ is a design parameter and can be tuned by designers\footnote{All the design parameters of BBRv2+ in our current implementation are exposed to user-space through the \texttt{/sys/module} interfaces, enabling designers to change the parameters without recompiling the kernel module.}. If $\mathrm{MinRTT_{curr\_rtt}}$ $>$ $\gamma \times \mathrm{MinRTT_{prev\_rtt}}$, BBRv2+ transitions to ProbeDown with a \emph{pacing\_gain} of 0.9 to drain the queue accumulated during the first RTT of ProbeTry. Otherwise, it enters ProbeUp to probe for more bandwidth. To further boost the speed of bandwidth discovery, BBRv2+ also incorporates a continuous probing mechanism based on the \signame{}. Specifically, at the end of ProbeUp, if $\mathrm{MinRTT_{curr\_rtt}}\le\gamma\times\mathrm{MinRTT_{before\_probe}}$, BBRv2+ re-enters ProbeUp. The rationale is that there is possibly more free bandwidth, as no significant increase of queuing delay arose in the current ProbeUp sub-state. In the case of bandwidth decrements, BBRv2+ needs to update its \textit{BtlBW} estimation to new bandwidth measurements as soon as possible. When bandwidth decreases, BBRv2+ sends data faster than the bottleneck bandwidth and packets accumulate in the buffer of the bottleneck link. We thus also leverage the \signame{} to detect bandwidth decrements. If BBRv2+ is in ProbeCruise or ProbeDown, on the receipt of a new ACK in an RTT round, Algorithm~\ref{alg:BBRv2p_advance_filter} is called\footnote{It is called at most once per RTT round to avoid expiring the \textit{BtlBW} estimation too frequently.}. The algorithm is only applied in ProbeCruise or ProbeDown in order to eliminate the impact of the delay variations caused by the ProbeTry and ProbeUp sub-states. In Algorithm~\ref{alg:BBRv2p_advance_filter}, BBRv2+ expires its current \textit{BtlBW} estimation if $\mathrm{MinRTT_{curr\_rtt}}$ is larger than $\theta$ times the recently measured minimum RTT. $\theta$ is a parameter that balances the speed of convergence to new bandwidth against the resistance to noise in bandwidth measurements. A small $\theta$ may lead to throughput oscillation, while a large $\theta$ may reduce BBRv2+'s responsiveness to bandwidth dynamics. We recommend $\theta\in[1.05,1.15]$ based on our experience. \begin{algorithm}[htb] \caption{Advance \textit{BtlBW} max\_filter} \label{alg:BBRv2p_advance_filter} \LinesNumbered \SetKwProg{Fn}{Function}{}{end} \SetKwFunction{expire}{expire\_the\_oldest\_value} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKw{return}{return} \SetKw{true}{true} \SetKw{false}{false} \SetKw{not}{not} \SetKw{and}{and} \SetKw{or}{or} \Input{\texttt{conn}: BBRv2+ TCP connection} target\_rtt $\xleftarrow{}$ $\theta$ $*$ \texttt{conn}.\textit{RTprop} should\_advance $\xleftarrow{}$ (\texttt{conn}.$\mathrm{MinRTT_{curr\_rtt}}$ $>$ target\_rtt) \uIf{should\_advance}{\expire{{\normalfont\ttfamily conn}.\textit{BtlBW}}} \end{algorithm} \subsection{Dual-mode mechanism} \label{sec:design:coexist} Due to the use of the \signame{} to guide the aggressiveness of bandwidth probing, BBRv2+ would be starved by loss-based CCAs under deep buffers, suffering from the same problem as most delay-based CCAs~\cite{al-saadi_survey_2019}. The root cause is that loss-based CCAs constantly fill the bottleneck buffer, so BBRv2+ falsely treats the resulting RTT increases as a signal of bandwidth decrease.
BBRv2+ periodically drains the bottleneck buffer, during which the measured minimum RTT ($\mathrm{MinRTT_{curr\_cruise}}$ listed in Table~\ref{tab:bbrv2plus_vars}) is close to \textit{RTprop} if no loss-based competitor exists. By comparing $\mathrm{MinRTT_{curr\_cruise}}$ with the recorded \textit{RTprop} value, BBRv2+ can therefore estimate the existence of loss-based competitors. If loss-based competitors co-exist, BBRv2+ switches to BBRv2's ProbeBW state, which enables BBRv2+ to co-exist with loss-based CCAs in the same way as BBRv2, which does not yield obtained bandwidth upon RTT increments. Further, if the loss-based competitors no longer exist, BBRv2+ returns to the redesigned ProbeBW state. We note that the dual-mode mechanism does not switch BBRv2+ to BBRv2's ProbeBW state if the bottleneck buffer is very shallow, because in that case loss-based CCAs cannot bloat the bottleneck buffer and BBRv2+ will not be starved. \begin{algorithm}[htb] \caption{The dual-mode mechanism} \label{alg:BBRv2p_switch} \LinesNumbered \SetKwProg{Fn}{Function}{}{end} \SetKwFunction{maxfilter}{max\_filter} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKw{return}{return} \SetKw{true}{true} \SetKw{false}{false} \SetKw{not}{not} \SetKw{and}{and} \SetKw{or}{or} \SetKwFunction{restart}{restart\_from\_startup} \Input{\texttt{conn}: BBRv2+ TCP connection} \uIf{{\normalfont\ttfamily conn}.probe\_bw\_mode $=$ BBRv2{\normalfont+}}{ \label{alg:mode:bbrv2p_start} switch\_thld $\leftarrow$ $\lambda_{1}$ $*$ \texttt{conn}.$\mathrm{\textit{RTprop}}$ \uIf{{\normalfont\ttfamily conn}.$MinRTT_{curr\_cruise}$ $>$ switch\_thld}{ \texttt{conn}.buffer\_filling$++$ } \uElse{ \texttt{conn}.buffer\_filling $\leftarrow$ 0 } \uIf{{\normalfont\ttfamily conn}.buffer\_filling $\geq$ $\eta_{1}$}{ \texttt{conn}.probe\_bw\_mode $\leftarrow$ BBRv2 \restart{{\normalfont\ttfamily conn}} \label{alg:mode:bbrv2p_end} } } \uElse{ switch\_thld $\leftarrow$ $\lambda_{2}$ $*$ \texttt{conn}.$\mathrm{\textit{RTprop}}$ \uIf{{\normalfont\ttfamily conn}.$MinRTT_{curr\_cruise}$ $\leq$ switch\_thld}{ \texttt{conn}.buffer\_empty$++$ } \uElse{ \texttt{conn}.buffer\_empty $\leftarrow$ 0 } \uIf{{\normalfont\ttfamily conn}.buffer\_empty $\geq$ $\eta_{2}$}{ \texttt{conn}.probe\_bw\_mode $\leftarrow$ BBRv2+ } } \end{algorithm} The dual-mode mechanism is detailed in Algorithm~\ref{alg:BBRv2p_switch}, which runs at the end of ProbeCruise. If the sender is running in BBRv2+'s ProbeBW state and has not seen RTT samples close to \textit{RTprop} for a number ($\eta_{1}$) of successive ProbeCruise sub-states, it switches to BBRv2's ProbeBW state and restarts itself from Startup (lines~\ref{alg:mode:bbrv2p_start}--\ref{alg:mode:bbrv2p_end}). We note that \texttt{restart\_from\_startup(conn)} in line~\ref{alg:mode:bbrv2p_end} is a heuristic to quickly regain bandwidth that BBRv2+ may recently have yielded to loss-based competitors. If the sender's ProbeBW state is BBRv2's and it has seen low RTTs for $\eta_{2}$ successive ProbeCruise sub-states, it returns to BBRv2+'s ProbeBW state, because the competitors are most likely gone. We note that the four parameters in Algorithm~\ref{alg:BBRv2p_switch}, $\lambda_{1}$, $\lambda_{2}$, $\eta_{1}$, and $\eta_{2}$, control the sensitivity of BBRv2+ to the co-existence of loss-based CCAs. In practice, we used 1.1, 1.05, 2, and 4 for $\lambda_{1}$, $\lambda_{2}$, $\eta_{1}$, and $\eta_{2}$, respectively.
Nevertheless, in our current implementation these parameters can be tuned from user space to fit specific networks. \subsection{Compensation for BDP estimation} \label{sec:design:comp_cwnd} As we have seen in \S\ref{sec:measurement:jitter}, when network jitter is high, BBRv2 (like BBR) underestimates \textit{RTprop}, and thus the BDP of the network path, leading to \textit{cwnd} exhaustion and hence performance degradation. To boost BBRv2+'s performance under high network jitter, BBRv2+ takes network jitter into account when estimating the BDP of the network path. BBRv2+ compensates the BDP estimation with a component proportional to the RTT variation when network jitter is high, as detailed in Algorithm~\ref{alg:BBRv2p_comp_cwnd}. As instantaneous RTT variations can be very dynamic, to ensure that BBRv2+ can tolerate jitter to the maximum extent, we use the recently measured maximum RTT variation, $\mathrm{Max_{4RTT}(jitter)}$ in Table~\ref{tab:bbrv2plus_vars}, as the indicator of recent jitter. When $\mathrm{Max_{4RTT}(jitter)}$ exceeds $\mu\times\mathrm{\textit{RTprop}}$, the estimated \textit{RTprop} is increased to the sum of the original \textit{RTprop} and the delay variation ($\mathrm{Max_{4RTT}(jitter)}$) to mitigate the underestimation of \textit{RTprop}. We recommend setting $\mu$ to around 0.5, because the performance of BBRv2 starts to degrade when jitter approaches half of \textit{RTprop}, as observed in \S\ref{sec:measurement:jitter}. \begin{algorithm}[htb] \caption{Compensating BDP estimation} \label{alg:BBRv2p_comp_cwnd} \LinesNumbered \SetKwProg{Fn}{Function}{}{end} \SetKwFunction{maxfilter}{max\_filter} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKw{return}{return} \Input{\texttt{conn}: BBRv2+ TCP connection} \Output{the BDP estimation of BBRv2+} jitter $\xleftarrow{}$ \texttt{conn}.$\mathrm{Max_{4RTT}(jitter)}$ threshold $\xleftarrow{}$ $\mu$ $*$ \texttt{conn}.\textit{RTprop} fixed\_\textit{RTprop} $\leftarrow$ \texttt{conn}.\textit{RTprop} \uIf{jitter $>$ threshold}{fixed\_\textit{RTprop} $\leftarrow$ fixed\_\textit{RTprop} $+$ jitter} \return \texttt{conn}.\textit{BtlBW} $*$ fixed\_\textit{RTprop} \end{algorithm} \subsection{Implementation} \label{sec:design:impl} BBRv2+ is implemented as a Linux kernel module ($\sim$2100 LoCs), based on Google's BBRv2 alpha kernel module~\cite{bbr2_kernel}. Therefore, it is easy to deploy BBRv2+ on hosts where BBRv2 is already in use. The parameters of BBRv2+ are exposed to user-space through the \texttt{/sys/module} interfaces, which allows users to change them according to their needs without recompiling the kernel module. The code of BBRv2+ is open-sourced on GitHub~\cite{furongYangfurongBBRv2plus2021} for the research community to further test and improve.
\section{Introduction} In the American system of democracy, local representatives from a state are elected to the national House of Representatives in direct local elections, held in districts of (roughly) equal population. These Congressional districts of a state are redrawn every 10 years, following the decennial census. Because the Congressional representatives of a state are elected in local district elections, there is no guarantee that the political makeup of the state's elected slate of representatives will mirror the political composition of the statewide votes cast in the election. In practice, this seemingly desirable target is missed by a wide mark; for example, in Pennsylvania in 2012, 51\% of the population voted for Democratic representatives, yet Republican representatives won 13 out of 18 district elections. This phenomenon, popularly known as \emph{gerrymandering} --- where political parties carefully draw districts to maximize their advantage in election outcomes --- has been studied on several fronts. Researchers have worked to propose simple tests to quantify gerrymandering with greater granularity than the election outcomes provide \cite{wang,mcdonald}, and to distinguish gerrymandering from natural artifacts of political geography \cite{chenrodden,duke,outliers}. In some states, there has been an earnest effort to reduce political influence over the districting process through the establishment of independent redistricting commissions. For example, in Arizona, redistricting is currently carried out by a commission composed of 10 Democrats, 10 Republicans and 5 Independents. Our goal in this paper is to propose and analyze a protocol for fair redistricting that can be carried out without ``independent'' agents; in particular, a redistricting commission using our protocol can be composed of an even number of members, drawn from the two major political parties of the state. Districting with our protocol would enable reasonable districts to be drawn for a state without requiring effective mechanisms for identifying truly independent agents. Motivated by classical cake-cutting problems with ``I-cut-you-choose'' types of solutions~\cite{BT96,Pro13}, our algorithm leverages competition between two political entities to create a reasonable districting in a turn-based protocol. We will prove that our protocol has desirable properties in idealized settings. \section{Setting and results} We consider the districting problem as a competition between two parties. A \emph{state} will be modeled as a continuously divisible object, some subset of which is loyal to Player 1, and some subset of which is loyal to Player 2. We will sometimes ignore geometry and sometimes pay attention to it, so we really have two models of a state. In the first model, the state is an interval $[0,n]$ ($n$ is the number of districts) and an $n$-\emph{districting} is a collection of $n$ disjoint unit-measure subsets of the interval. In this model, the measure of a subset corresponds to a population, where 1 unit of population is the size of 1 district. In the second model, the state is a subset $X\subset \R^2$ topologically equivalent to a disc, together with a population density $\phi:X\to [0,1]$. We are given that $\int_{X} \phi=n$, and an $n$-districting of $X$ is a division of $X$ into $n$ disjoint \emph{connected} subsets $X_i$, each satisfying $\int_{X_i} \phi=1$. Given a district in a districting of a state, the district is \emph{loyal} to the player with a majority of loyalty in the district.
(In cases of exact ties, we break them arbitrarily, say, to Player 1). The outcome for a player from the districting is the player's \emph{slate}, which is the number of districts loyal to him. We now turn to a discussion of natural districting protocols, culminating with our own. \subsection{One player decides} In this trivial protocol, some external process chooses one player, and that player freely chooses the districting. Subject to moderate legal oversight, this is essentially the protocol currently used in most states. In practice, the external process which chooses the favored player is often control of the state legislature (which itself is influenced by gerrymandering of state-level districts), or, as in Pennsylvania, control of the state supreme court. \subsection{Independent agent protocols} If benevolent independent agents are available in the problem model, then a natural solution is to simply allow independent agents to draw the districting. In spirit, this is the approach of redistricting commissions such as Arizona's. Our goal is to eliminate the trust required of independent agents. Interesting work by Landau, Reid, and Yershov \cite{LRY} and Landau and Su \cite{LandauSu} developed protocols with a moderate dependence on an independent agent. Essentially, an independent agent is used just to choose a suitable division of the state into two parts, and assign one part (and a target number of districts) to each player. Each player then freely districts his part, and the result is combined into a districting of the state. They proved an elegant theoretical guarantee for their protocol under optimum play: Each player will achieve at least the average of their best possible slate and their worst possible slate of representatives, among districtings respecting the split-line chosen by the independent agent. Apart from the dependence on an independent agent, their protocol has one other feature which may dissuade its adoption in practice. Since the outcome of the protocol is a districting in which each player freely chose districtings on their side of the split, the intended result is a districting of a state in which one side is gerrymandered for Player 1, while the other is gerrymandered for Player 2. Although this produces a reasonable outcome in terms of the slates won by each player, which is our primary outcome of interest, such a protocol will not reduce (and in some cases, may even exacerbate) other maligned effects sometimes attributed to gerrymandering, such as entrenchment of incumbent representatives and the rise of political extremism \cite{lindgren2013effect,carson2014reevaluating}. Similar problems plague the I-cut-I-freeze protocol, below. We will give a formalization capturing this phenomenon (Definition \ref{d.target}) and prove that our protocol avoids it (Theorem \ref{t.target}). \subsection{I-cut-I-freeze} A very simple multiround districting protocol is to simply have the players take turns adding districts with the correct population size to an initially empty districting, until the districting is complete. The following version of this ensures that completion is always possible. \subsubsection*{I-cut-I-freeze:} Initialize $n$ to be the number of districts the state is to be divided into. Initially no districts are frozen. On each player's turn (while $n>0$), the player: \begin{enumerate} \item Redistricts the unfrozen part of the state into $n$ districts; \item Chooses one of the new districts to be frozen; \item Updates $n\leftarrow (n-1)$. 
\end{enumerate} One attractive theoretical feature of this protocol is its slate guarantee (ignoring any constraints due to geography); the straightforward proof is omitted. \begin{theorem} In the geometry-free model, if more than $\ell/n$ of the state is loyal to Player $i$, then Player $i$ can achieve a slate of size $\ell$ from an $n$-districting, in the I-cut-I-freeze protocol. \end{theorem} On the other hand, like the independent agent protocols discussed above, the I-cut-I-freeze protocol allows each player to freely draw many districts. In particular, each political party will be able to pack minority populations, reinforce their incumbents, etc. Although from the standpoint of the final slates for each player, competition balances the two sides, these features may make this protocol undesirable nonetheless. To rigorously capture the power afforded each player by the I-cut-I-freeze protocol and the related independent agent protocols, we define the following property of a districting protocol: \begin{definition} \label{d.target} We say that an $n$-districting protocol has the $B$-target property if for any $i\in \{1,2\}$ and any target subset of the state of measure/cardinality $\frac{1}{n}$ of the state's total, Player $i$ has a strategy to ensure that at most a $1/B$ fraction of the target intersects any single district. \end{definition} Note that trivially any protocol has the $1$-target property. Moreover, $1$ is the largest value of $B$ for which the I-cut-I-freeze protocol has the $B$-target property, since, for example, on Player 1's first turn, he can always create a district equal to any choice for the target. We will see that the situation for our proposed protocol is very different. \subsection{I-cut-you-freeze} The I-cut-you-freeze protocol is the main subject of this paper. Essentially, the motivation is to reduce the influence a single player can exert unilaterally on the drawing of any single district. Although each player will still draw $n/2$ districts (up to rounding) that are in the final districting, they will no longer have control over which of the districts they draw in the course of the protocol end up in the final districting. \subsubsection*{I-cut-you-freeze:} Initialize $n$ to be the number of districts the state is to be divided into. At the beginning of the protocol, Player 1 gives a districting of the state into $n$ districts, and passes it to Player 2. On each player's subsequent turn (while $n>0$), the player who has just been given the districting: \begin{enumerate} \item Chooses an unfrozen district to be frozen (in the districting received from the other player); \item Updates $n\leftarrow (n-1)$. \item Redistricts the still unfrozen part of the state into $n$ districts, and passes the new districting back to the other player. \end{enumerate} At the end of the protocol, we have a districting in which half of the districts were drawn by Player 1 and frozen by Player 2, and vice versa. Let us define $\sigma(n,s)$ to be the slate which will be won by Player $1$ under optimum play of the I-cut-you-freeze protocol for $n$-districtings, in the setting where the state is modeled as an interval of length $n$, when the subset of the state loyal to Player $1$ has measure $s$. We characterize the outcome of our protocol asymptotically as follows: \begin{theorem} \label{t.a-seats} We have \[ \lim_{n\to \infty} \frac{\sigma(n,\alpha n)}{n}= \begin{cases} 2\alpha^2 & \text{for } \alpha\leq \tfrac 1 2\\ 1-2(1-\alpha)^2 & \text{for }\alpha>\tfrac 1 2.
\end{cases} \] \end{theorem} \noindent We can also explicitly characterize the small-$n$ behavior in all cases. \begin{theorem} We have $\sigma(n,s)\geq k$ if and only if \begin{equation} s\succeq \begin{cases} \displaystyle \frac{(n-1)!!(2k+[2\nmid n]-2)!!}{2(2k+[2\nmid n]-3)!!(n-2)!!} & \text{for }k\leq n/2\\[15pt] \displaystyle n-\frac{n!!(2(n-k)-[2\nmid n]+1)!!}{2(2(n-k)-[2\nmid n])!!(n-1)!!} & \text{for }k>n/2. \end{cases} \end{equation} Here $\succeq$ is interpreted as $\geq$ if ties are broken in favor of Player 1, and $>$ if they are broken in favor of Player 2; $[2\nmid n]$ is $1$ or $0$ according to the parity of $n$; and $n!!$ is the double factorial $n(n-2)(n-4)\cdots(2\text{ or }1)$. \label{t.seats} \end{theorem} By contrast, for the trivial One-player-decides algorithm (with Player 1 deciding), we would have $\sigma(n,s)\geq k$ if and only if $s\geq k/2$; Figure~\ref{f.curve} illustrates a comparison. \begin{figure}[t] \begin{center} \begin{tikzpicture}[xscale=3,yscale=3] \draw[-|] (0,0) -- (1,0) node[right] {$\alpha$}; \draw (1,0) node[below] {\tiny $1$}; \draw (0,0) node[below] {\tiny $0$}; \draw[-|] (0,0) -- (0,1) node[above] {$\sigma_{10}$}; \draw (0,1) node[left] {\tiny $10$}; \draw (0,0) node[left] {\tiny$0$}; \draw (0,0) -- (.05,0) -- (.05,.1) -- (.1,.1) -- (.1,.2) -- (.15,.2) -- (.15,.3) -- (.2,.3) -- (.2,.4) -- (.25,.4) -- (.25, .5) -- (.3,.5) -- (.3,.6) -- (.35,.6) -- (.35,.7) -- (.4,.7) -- (.4,.8) -- (.45,.8) -- (.45, .9) -- (.5,.9) -- (.5,1) -- (1,1); \draw [thick](0,0) -- (.123047,0) -- (.123047,.1) -- (.245094,.1) -- (.245094,.2) -- (.328125,.2) -- (.328125,.3) -- (.39375,.3) -- (.39375,.4) -- (.45,.4) -- (.45,.5) -- (.5,.5) -- (.5,.6) -- (.555556,.6) -- (.555556,.7) -- (.619048,.7) -- (.619048,.8) -- (.695238,.8) -- (.695238,.9) -- (.796825,.9) -- (.796825,1) -- (1,1); \fill (0.5,0.5) ellipse(.0175cm and .0175cm); \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[xscale=3,yscale=3] \draw[-|] (0,0) -- (1,0) node[right] {$\alpha$}; \draw (1,0) node[below] {\tiny$1$}; \draw (0,0) node[below] {\tiny$0$}; \draw[-|] (0,0) -- (0,1) node[above] {$\sigma_n$}; \draw (0,1) node[left] {\tiny$n$}; \draw (0,0) node[left] {\tiny$0$}; \draw (0,0) -- (.5,1) -- (1,1); \draw[thick,scale=1,domain=0:.5,smooth,variable=\x,black] plot (\x,{2*\x * \x}); \draw[thick,scale=1,domain=.5:1,smooth,variable=\x,black] plot (\x,{1-(2*(1-\x) *(1-\x))}); \fill (0.5,0.5) ellipse(.0175cm and .0175cm); \end{tikzpicture} \end{center} \caption{\label{f.curve} The number of seats $\sigma_n$ won by Player 1 under optimum play is plotted against the fraction $\alpha$ of the state loyal to him. Bold lines are the result of the I-cut-you-freeze algorithm, while light lines are the result of the trivial ``One player decides'' algorithm (with Player 1 deciding). On the left is the case $n=10$, while the right gives the curves in the large-$n$ limit. The point $\alpha=\tfrac 1 2$, $\sigma=\tfrac n 2$ is marked on both curves. By Corollary \ref{c.half}, this point lies on the $\sigma_n(\alpha)$ curve for our protocol for every value of $n$.} \end{figure} The following corollary is immediate from Theorem \ref{t.seats}. It implies, in particular, that majority shares are perfectly aligned, up to rounding. \begin{corollary}\label{c.half} Player 1 wins more than $\lfloor n/2 \rfloor$ seats if and only if his loyal subset has measure $s\succeq n/2$.
\end{corollary} As mentioned above, we also wish to capture the diminished power either player has over any single district under our method compared with the previously discussed independent agent protocols, and the I-cut-I-freeze protocol. It is possible to prove this rigorously even in a geometric setting where districts are required to be connected, as we do in Section \ref{s.geometric}. \begin{theorem}\label{t.target} The I-cut-you-freeze protocol for an $n$-districting has the $B$-target property for $B=\frac{\sqrt{n}}{2}$, in both the geometric and non-geometric models. \end{theorem} \begin{remark} It is natural to wonder why our analysis of slates (Theorems \ref{t.seats} and \ref{t.a-seats}) is carried out in a geometry-free setting, while our analysis of the $B$-target property allows geometry. The problem is that in full generality, a setting respecting geometry can defy analysis of slate outcomes in a way that, we suspect, would not actually correspond well to the real-world behavior of our protocol. For example, suppose we model a state as a topological unit disc $X$, as above, which has not only a population density $\phi:X\to [0,1]$ but also densities $\phi_A$ and $\phi_B$ of those loyal to Players A and B, where $\phi=\phi_A+\phi_B$. The outcome of our protocol will now be highly dependent on the geometric relationship between $\phi_A$ and $\phi_B$; for example, if we allow that $\phi_A>\phi_B$ everywhere, then Player A will win every district no matter what choices the players make, and, in principle, this requires only that $s>n/2$. By contrast, in an application to redistricting in the United States, such a consistent relationship between the player loyalties does not occur. We discuss the applicability of our idealized settings to real-world implementation of our protocol in Section \ref{s.realworld}, but roughly speaking, we believe that Theorems \ref{t.a-seats} and \ref{t.seats} do capture concrete advantages of our protocol over the One-player-decides protocol, which we would expect to persist in the real world; namely, that the protocol is nearly symmetric with respect to which player gets the first move, and produces generally reasonable outcomes. \end{remark} \section{Slates from optimal play} In this section we analyze the I-cut-you-freeze protocol in terms of the relation between the measure of the subset loyal to a player and his slate, namely Theorems \ref{t.a-seats} and \ref{t.seats}. Let us denote the measure of the subset of $[0,n]$ loyal to Player 1 by $s_1^n$, and the measure of the subset of $[0,n]$ loyal to Player 2 by $s_2^n$. Formally, the game can be expressed recursively as the following procedure for a given position, which, with $k=n$, $s_1=s_1^n$ and $A=1$, returns the slate of Player 1. For $k\in \mathbb{N}^+$, $s_1\in [0,k]$, and $A\in \{1,2\}$, we define: \medskip \newcommand{\game}{\mathrm{\textsc{Game}}_{1}} \begin{algorithmic}[1] \Procedure{Game$_1$}{$k,s_1,A$}\Comment{Player $A$ divides first} \State Player $A$ chooses $k$ numbers in $[0,1]$: $x_{k,1},\ldots,x_{k,k}$, such that \[\sum_{i=1}^k x_{k,i} = s_1\] \State Player $B$ chooses an integer $i_k\in [k]$, where $\{A,B\}=\{1,2\}$ \State \textbf{return} $\game(k - 1, s_1-x_{k,i_k},B)$ + $[x_{k,i_k}\geq\tfrac 1 2]$ \EndProcedure \end{algorithmic} \medskip We also set $\game(0,0,A)=0$. Now let \[ \game(n,s_1,1),\,\game(n-1,s_1-x_{n,i_n},2),\dots \] be the procedures (game positions) encountered recursively in the course of the game $\game(n,s_1,1)$.
We call steps 2 and 3 of each such procedure a \emph{round} of the game $\game(n,s_1,1)$, and number the rounds in reverse, beginning with round $n$ and ending with round 1. In particular, round $t$ begins with some player choosing $t$ numbers $x_{t,1},\dots,x_{t,t}$. We let $s_1^{(t)}$ and $s_2^{(t)}$ be the measures of the unfrozen part of the state loyal to Players 1 and 2, respectively, at the beginning of round $t$. In particular, we can set \begin{align*} s_1^{(t)}&=s_1-\sum_{k=t+1}^{n} x_{k,i_k}\\ s_2^{(t)}&=t-s_1^{(t)}. \end{align*} Let $f(k,s_1,A)$ be the output of $\game(k,s_1,A)$ when the two players play optimally. (Note that this function always returns the slate of Player 1, even if $A=2$.) Then we have that \begin{align} \label{f1} f(k,s_1,1) &= \max_{x_{k,1},\ldots,x_{k,k}} \min_{i\in [k]} \left(f(k -1,s_1 - x_{k,i},2) + [x_{k,i}\geq\tfrac 1 2]\right),\quad\text{and}\\ \label{f2} f(k,s_1,2) &= \min_{x_{k,1},\ldots,x_{k,k}} \max_{i\in [k]} \left(f(k -1,s_1 - x_{k,i},1) + [x_{k,i}\geq\tfrac 1 2]\right). \end{align} It is intuitively obvious that $f(k,s_1,A)$ should be monotone with respect to $s_1$, and indeed, this follows immediately from induction on \eqref{f1} and \eqref{f2}: \begin{lemma}\label{lemma:mono} $f(k,s_1,A)\leq f(k,s_1',A)$ if $s_1< s_1'$.\qed \end{lemma} \noindent The following lemma shows that in optimum play, players will divide the unfrozen region into districts with at most two distinct loyalty values. \begin{lemma}\label{lemma:two-value} For any $\game(k,s_1,A)$, there are numbers $\omega\geq \tfrac 1 2>\lambda$ such that under optimum play, Player $A$ will choose each $x_{k,i}$ to be $\omega$ or $\lambda$. \end{lemma} \begin{proof} First consider $A = 1$, and let $W = \{i:x_{k,i}\geq \tfrac 1 2\}$ and $L = [k] - W$. By Lemma \ref{lemma:mono} applied to $\game(k-1,s_1-x_{k,i},2)$, which is encountered at the end of the round, Player 2's optimum move is to choose either $i=\arg\max_{j\in W} x_{k,j}$ or $i=\arg\max_{j\in L} x_{k,j}$. So under optimum play, Player 1 will assign an identical number $\omega$ to districts in $W$, and an identical number $\lambda$ to districts in $L$, where $\omega = \sum_{i\in W} x_{k,i} / |W|$ if $W\neq \emptyset$ and $\lambda = \sum_{i\in L} x_{k,i} / |L|$ if $L\neq \emptyset$. The case of $A=2$ is analogous. \end{proof} In round $t$, if $s_1^{(t)} \geq t/2$, we say Player 1 is stronger and Player 2 is weaker in this round; otherwise, Player 2 is stronger and Player 1 is weaker. (Note that, in general, which player is stronger may change from round to round, even under optimum play.) Lemma \ref{lemma:two-value} implies that Player $A$'s move is thus completely characterized by the choice made for $\omega$ and $\lambda$. Assuming without loss of generality that $A=1$, we see that Player 2's response will result either in the value $f(k-1,s_1-\lambda,2)$ or the value $f(k-1,s_1-\omega,2)+1$, assuming the remaining gameplay is optimal.
In particular, we have the following: \begin{lemma}\label{lemma:best-pairs} Given possible choices $(\omega, \lambda)$ and $(\omega',\lambda')$ for Player $A$ satisfying $\omega \leq \omega'$ and $\lambda \leq \lambda'$, the choice $(\omega,\lambda)$ dominates the choice $(\omega',\lambda')$ if $A = 1$; otherwise, the choice $(\omega',\lambda')$ dominates the choice $(\omega,\lambda)$. \end{lemma} \noindent This immediately implies the following: \begin{lemma}\label{lemma:stronger-strategy} In any $\game(k,s_1,A)$, Player $A$, if stronger, chooses $x_{k,1}=x_{k,2}=\dots=x_{k,k}$ in optimum play. In particular: \[ A\text{ stronger}\implies f(k,s_1,A) = f(k-1,s_1 - s_1 / k, B) + \left[\tfrac{s_1}{k}\geq\tfrac 1 2\right]. \] \end{lemma} The following lemma will serve as a base case, if you will, for our larger analysis, characterizing the outcome of play once the two players are roughly even. \begin{lemma}\label{lemma:close-case} If $(k-1)/2 \leq s_1 < k/2$, \[f(k,s_1,1)=\lfloor k / 2 \rfloor;\] and if $k/2 \leq s_1 < (k+1)/2$, \[f(k,s_1,2)=\lceil k / 2 \rceil.\] \end{lemma} \begin{proof} Assume that $(k-1)/2 \leq s_1 < k/2$; the other case is analogous. We prove the lemma by induction on $k$. When $k \leq 2$, it is trivial. When $k > 2$, \begin{equation} \label{eq:maxmin} f(k,s_1,1)=\max\left\{\begin{array}{l} \max_{\omega,\lambda}\{\min\{f(k-1,s_1-\omega,2) + 1, f(k-1,s_1-\lambda,2)\}\}, \\ f(k-1,s_1-s_1/k,2)+[s_1/k \geq \tfrac 1 2]\end{array}\right\}, \end{equation} where the max over $\omega,\lambda$ is taken over pairs satisfying $\omega \geq \tfrac 1 2 > \lambda$, with the property that there exists $m\in [k-1]$ such that $\omega m+\lambda (k-m) = s_1$. Here we can assume $m = k-1$; otherwise, one can easily find $\omega'$ and $\lambda'$ such that $\omega > \omega' \geq \tfrac 1 2 > \lambda > \lambda'$ and $\omega'(k-1)+\lambda' = s_1$. It remains to analyze each of the expressions in Equation~\eqref{eq:maxmin}. First, since $$s_1-\omega < \frac k 2 - \frac 1 2 = \frac{k-1}{2},$$ we have (by Lemma~\ref{lemma:stronger-strategy}) \[f(k-1,s_1-\omega,2)+1 = f\left(k-2, (s_1 - \omega)\frac{k-2}{k-1}, 1\right) + 1.\] Then, since \begin{align*} (s_1 - \omega)\frac{k-2}{k-1} \geq s_1\frac{(k-2)^2}{(k-1)^2} \geq \frac{1}{2}\cdot \frac{(k-2)^2}{k-1} > \frac{k-3}{2}, \end{align*} and $$ (s_1 - \omega)\frac{k-2}{k-1}<\frac{k-1}{2}\cdot\frac{k-2}{k-1}=\frac{k-2}{2}, $$ we have by the induction assumption that \[f\left(k-2, (s_1 - \omega)\frac{k-2}{k-1}, 1\right) = \left\lfloor \frac{k-2}{2} \right\rfloor.\] Second, $$\frac k 2 > s_1 \geq s_1 -\lambda = \omega(k-1) \geq \frac{k-1}{2},$$ so, by the induction assumption, $f(k-1,s_1-\lambda,2) = \lceil (k-1) / 2 \rceil$.
Third, for $f(k-1,s_1-s_1/k,2)$, since $$s_1 - \frac{s_1}{k} = s_1\frac{k-1}{k} < \frac{k-1}{2}$$ and $s_1 (k-2)/k\geq (k-3)/2$, we have \[f\left(k-1,s_1-\frac{s_1}{k},2\right) = f\left(k-2, s_1 \frac{k-2}{k}, 1\right) = \left\lfloor \frac{k-2}{2}\right\rfloor.\] By Equation~\eqref{eq:maxmin}, we conclude that $$f(k,s_1,1)= \max\{\min\{\lfloor (k-2) / 2 \rfloor + 1, \lceil (k-1) / 2 \rceil\},\lfloor (k-2) / 2 \rfloor\} = \lfloor k / 2 \rfloor.$$ \end{proof} \begin{lemma}\label{lemma:weaker-strategy} In any $\game(k,s_1,A)$, if Player $A$ is weaker and $s_A \succeq 1/2$, the following strategies for the players are optimal: \begin{itemize} \item Let $m = \lfloor 2s_A \rfloor$ if $A =1$, and $m = \lceil 2s_A \rceil - 1$ if $A = 2$. Player $A$ will divide the resources such that his proportion in each district is either 0 or $s_A/m$. \item Player $B$ will choose a district where his loyalty is 1. \end{itemize} In particular: if $s_1 < k/2$, \[f(k,s_1,1) = f(k-1,s_1,2);\] if $s_1 \geq k/2$, \[f(k,s_1,2) = f(k-1,s_1-1,1) + 1.\] \end{lemma} \begin{proof} We write this proof for the case $A = 1$; the case $A=2$ is similar. For $s_1 \geq (k-1) / 2$, it is obvious from the proof of Lemma \ref{lemma:close-case}. For $s_1 < (k-1)/2$, we prove the lemma by induction on $k$. The base case is $k\leq 2$; we have $s_1<1/2$, so Player $1$ cannot win any districts, and the claim on $f$ is obvious. For $k > 2$, we first prove that if Player 2's optimal strategy always chooses a piece where Player 2's loyalty is greater than $1/2$, then Player 1's optimal strategy is as above. Indeed, this implication is clear, since given that Player 2 will choose a piece where his loyalty is greater than $1/2$, Player 1 wishes this piece to have maximum loyalty to Player 2, by monotonicity. In particular, his strategy will only produce districts which have loyalty 1 or less than $1/2$ to Player 2; equivalently, $0$ or more than $1/2$ to himself. (The condition $s_A\succeq 1/2$ ensures that this is always possible.) This gives that $f(k,s_1,1) = f(k-1,s_1,2)$, as desired. Thus it suffices to prove that Player 2's optimal strategy will always choose a district where his loyalty is more than $1/2$. We prove that this is the case in response to any optimal move by Player 1 of the form guaranteed to exist by Lemma \ref{lemma:two-value}. By Lemma \ref{lemma:best-pairs}, we may assume that Player 1 makes a minimal choice of the pair $(\omega,\lambda)$. In particular, for any $\lambda\geq 0$, there is some minimum $\omega_\lambda$ for which the pair $(\omega_\lambda,\lambda)$ is feasible, and an interval $I=[0,\beta]$ such that $\omega_\lambda$ is decreasing with respect to $\lambda$ for $\lambda\in I$ and such that $\omega_\beta=1/2$. In particular, it suffices to prove that for any move by Player 1 of the form $(\omega_\lambda,\lambda)$ for $\lambda\in I$, Player 2's optimal response will be to choose a district where his loyalty is more than $1/2$.
Finally, monotonicity implies that it suffices to consider the case of the pair $(\omega_0,0)$, since Player 2's outcome from choosing a district with minority loyalty worsens, and his outcome from choosing a majority district improves, as $\lambda$ increases. Of course, $\omega_0$ is precisely $s_A/m$, so we may, in fact, assume that Player 1 employs the strategy that this lemma claims is optimal. Now, for the sake of a contradiction, suppose that $s_1^{(k)}< (k-1)/2$ and Player 2 is \emph{strictly} better off choosing a district where he has minority loyalty. Letting $m = \lfloor 2s_1^{(k)} \rfloor$, we have that \[s_1^{(k-2)} = s_1^{(k-1)} \frac{k-2}{k-1} = s_1^{(k)} \frac{m-1}{m}\cdot \frac{k-2}{k-1} < \frac{k-2}{2},\] where the first equality follows from Lemma \ref{lemma:stronger-strategy}. By the induction assumption, in round $k-2$, the players follow the claimed optimal strategies, so \[s_1^{(k-3)} = s_1^{(k-2)} = \frac{s_1^{(k)}(m-1)(k-2)}{m(k-1)}.\] Now we compare this result with the case where Player 2 chooses a majority-loyal district in round $k$ and a minority-loyal district in round $k-2$. It holds that \[s_1^{(k-2)} = s_1^{(k-1)} \frac{k-2}{k-1} = s_1^{(k)} \frac{k-2}{k-1} < \frac{k-2}{2}. \] Let $$m' = \lfloor 2s_1^{(k - 2)} \rfloor = \left\lfloor s_1^{(k)} \frac{k-2}{k-1}\right \rfloor,$$ then \[s_1^{(k-3)} = s_1^{(k-2)} \cdot \frac{m'-1}{m'} = \frac{s_1^{(k)}(m'-1)(k-2)}{m'(k-1)}.\] Since $m' \leq m$, $$\frac{s_1^{(k)}(m'-1)(k-2)}{m'(k-1)} \leq \frac{s_1^{(k)}(m-1)(k-2)}{m(k-1)}.$$ Since in both cases Player 2 wins the same number of districts, this contradicts our assumption that it is strictly optimal for Player 2 to choose a district where he has minority loyalty. \end{proof} \begin{proof}[Proof of Theorems \ref{t.a-seats} and \ref{t.seats}] Combining Lemma \ref{lemma:stronger-strategy}, Lemma \ref{lemma:close-case} and Lemma \ref{lemma:weaker-strategy}, the recurrence relation of $f$ is solved. One can easily find the following: for $\kappa = n,n-2,n-4,\ldots$, $f(n,s_1,1) \geq \lfloor \kappa / 2\rfloor$ if and only if \[s_1 \geq \frac{(n-1)!!(\kappa-2)!!}{2(\kappa-3)!!(n-2)!!};\] and $f(n,s_1,2)\leq n-\lfloor \kappa / 2\rfloor$ if and only if \[s_1 < n - \frac{(n-1)!!(\kappa-2)!!}{2(\kappa-3)!!(n-2)!!}.\] For $\kappa = n-1,n-3,n-5,\ldots$, $f(n,s_1,1) \leq n-\lfloor \kappa / 2\rfloor$ if and only if \[s_1 < n - \frac{n!!(\kappa-2)!!}{2(\kappa-3)!!(n-1)!!};\] and $f(n,s_1,2) \geq \lfloor \kappa / 2\rfloor$ if and only if \[s_1 \geq \frac{n!!(\kappa-2)!!}{2(\kappa-3)!!(n-1)!!}.\] Writing $k=\lfloor \kappa/2\rfloor$ or $k = n - \lfloor\kappa / 2\rfloor+1$ gives Theorem \ref{t.seats} as stated. Theorem \ref{t.a-seats} then follows by Stirling's approximation. \end{proof}
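For concreteness, the thresholds of Theorem \ref{t.seats} are easy to evaluate numerically. The following sketch (in Python; the function names are ours) computes the smallest loyalty measure at which Player 1 secures a slate of size $k$ when ties are broken in his favor; for example, it returns $5.0$ for $n=10$ and $k=6$, consistent with Corollary \ref{c.half}.

\begin{verbatim}
from math import prod

def double_fact(n):  # n!!, with the convention 0!! = (-1)!! = 1
    return prod(range(n, 0, -2)) if n > 0 else 1

def threshold(n, k):
    # Smallest loyalty measure s with sigma(n, s) >= k, per the
    # closed form above (ties broken in favor of Player 1).
    odd = n % 2  # the indicator [2 does not divide n]
    if k <= n / 2:
        return (double_fact(n - 1) * double_fact(2 * k + odd - 2) /
                (2 * double_fact(2 * k + odd - 3) * double_fact(n - 2)))
    m = n - k
    return n - (double_fact(n) * double_fact(2 * m - odd + 1) /
                (2 * double_fact(2 * m - odd) * double_fact(n - 1)))

print(threshold(10, 6))  # 5.0: a bare majority secures 6 of 10 seats
\end{verbatim}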
Then the game can be expressed as the following recursive procedure, whose value for $k=n, s=s_T$ is precisely the maximum measure of the target that intersects any single district, under optimum play. \medskip \newcommand{\gamet}{\mathrm{\textsc{Game}}_{2}} \begin{algorithmic}[1] \Procedure{Game$_2$}{$k,s,A$}\Comment{Player $A$ divides first} \State Player $A$ chooses $k$ numbers in $[0,1]$: $x_{k,1},\ldots,x_{k,k}$, such that \[\sum_{i=1}^k x_{k,i} = s\] \State Player $B$ chooses an integer $i\in [k]$, where $\{A,B\}=\{1,2\}$ \State \textbf{return} $\max\{\gamet(k -1, s-x_{k,i},B), x_{k,i}\}$ \EndProcedure \end{algorithmic} \medskip \begin{theorem} Under optimum play, $\gamet(n,s_T,2)$ has value $\frac{s_T(n-1)!!}{n!!}$, while $\gamet(n,s_T,1)$ has value $\frac{s_T(n-2)!!}{(n-1)!!}$. In particular, the I-cut-you-freeze protocol for an $n$-districting has the $B$-target property in the geometry-free setting for \begin{equation} \label{eq:B} B = \min\left\{\frac{n!!}{(n-1)!!},\frac{(n-1)!!}{(n-2)!!}\right\} \sim \sqrt{\frac {2 n} \pi}. \end{equation} \end{theorem} \begin{proof} We prove the theorem by induction on $n$. When $n = 1$, it is trivial. When $n = k > 1$, first we consider $\gamet(k,s,2)$. If Player 2 chooses $x_{k,1} = \cdots = x_{k,k} = s / k$, then the result will be \[\max\{\gamet(k -1, s-s/k,1), s/k\} = \max\left\{\frac{s(k-1)!!}{k!!}, s/k\right\} = \frac{s(k-1)!!}{k!!}. \] Player 1 can always choose the minimum $x_{k,i}$, which is no larger than $s / k$, so \[\gamet(k,s,2)\geq \gamet(k -1, s-s/k,1)= \frac{s(k-1)!!}{k!!}. \] Therefore, $\gamet(k,s,2)$ has value $\frac{s(k-1)!!}{k!!}$. Similarly, for $\gamet(k,s,1)$, if Player 1 chooses $x_{k,1} = s, x_{k,2} = \cdots = x_{k,k} = 0$, the result will be \[\min\{s, \gamet(k -1, s,2)\} = \min\left\{s, \frac{s(k-2)!!}{(k-1)!!}\right\} = \frac{s(k-2)!!}{(k-1)!!}.\] Also, since Player 2 can always choose the minimum $x_{k,i}$, where $0\leq x_{k,i}\leq s/k$, we have that \begin{align*} \gamet(k,s,1)~&\leq \max\{\gamet(k -1, s,2), s/k\}\\ ~&= \max\left\{\frac{s(k-2)!!}{(k-1)!!}, s/k\right\} = \frac{s(k-2)!!}{(k-1)!!}. \end{align*} Therefore, $\gamet(k,s,1)$ has value $\frac{s(k-2)!!}{(k-1)!!}$. To see the asymptotic equivalence given in Equation \eqref{eq:B}, observe that \[ \min\left\{\frac{n!!}{(n-1)!!},\frac{(n-1)!!}{(n-2)!!}\right\}=\frac{(n'+1)!!}{n'!!} = \frac{(n'+1)\binom{n'}{\frac{n'}{2}}}{2^{n'}} \] for $n'=2\cdot \lfloor \tfrac {n-1} 2 \rfloor$. Standard bounds on the central binomial coefficient also suffice to establish the (nonasymptotic) bound on this expression claimed by Theorem \ref{t.target}. \end{proof} \subsection{The geometric setting} \label{s.geometric} Recall that in the geometric setting, the state is modeled as a subset $X\subseteq \R^2$ topologically equivalent to an open disc, and the measure of a subset $C$ is now $\int_C \phi$. Let $S_T$ be the target subset.
Then, formally, the game can be expressed as the following recursive procedure, in which $T$ is allowed in general to be a finite union of topological open discs, and which for $k=n$, $T=X$ returns the maximum measure of the target over all single districts: \medskip \newcommand{\gameT}{\mathrm{\textsc{Game}}_{3}} \begin{algorithmic}[1] \Procedure{Game$_3$}{$k,T,A$}\Comment{Player $A$ divides first} \State Player $A$ chooses $k$ districts: disjoint topological open discs $d_1,\ldots,d_k$ of equal measure, such that the closure of their union is $T$ \State Player $B$ chooses an integer $i\in [k]$, for $\{A,B\}=\{1,2\}$ \State \textbf{return} $\max\left\{\gameT\left(k -1, \text{cl}\left(\bigcup_{j\in [k]\setminus\{i\}} d_j\right),B\right), r(d_i)\right\}$ \EndProcedure \end{algorithmic} \medskip \noindent Here cl$(\cdot)$ denotes the closure of a set, and $r(d_i)$ denotes the measure of $d_i \cap S_T$. Since $\gameT$ is much more complicated than $\gamet$, it is difficult to characterize its exact output. Our goal, therefore, is to bound $\gameT(n, X,c)$; that is, we need to find a good enough strategy for Player 2 such that for any strategy of Player 1, the measure of the target subset will be small in every district. Specifically, we will show that Player 2 can ensure that every district contains at most $\frac{2}{\sqrt{n}}$ of the target. But before we introduce this strategy, we need to establish several lemmas. \begin{lemma}\label{lemma:divide-district} If a set $C$ in the plane is a topological open disc of measure $s$, and contains a measurable subset $A$ of measure $a$, then for any positive integer $k$, $C$ can be divided into two topological open discs $D$ and $C'$ ($D$, $C'$ are disjoint and the closure of their union is the closure of $C$) such that the measure of $D$ is $s/k$ and the measure of $D \cap A$ is $a/k$. \end{lemma} \begin{proof} Without loss of generality we assume $C$ is an open disc centered at the origin. For any $\theta\in [0,2\pi]$, let $D(\theta)$ be the (open, say) circular sector of $C$ of measure $s/k$ starting from the angle $\theta$, and let $f(\theta)$ be the measure of $D(\theta)\cap A$. We can choose $\theta_1,\dots,\theta_k$ so that $D(\theta_1), \ldots, D(\theta_k)$ partition $C$ (up to boundaries); since the values $f(\theta_1),\ldots,f(\theta_k)$ then sum to $a$, among these circular sectors we can find two circular sectors $D(\theta_i)$ and $D(\theta_j)$ such that $f(\theta_i) \leq a / k$ and $f(\theta_j) \geq a/ k$. Since $\phi$ is bounded, $f$ is a continuous function, and we can find an angle $\theta'$ between $\theta_i$ and $\theta_j$ such that $f(\theta') = a / k$. Thus we take $D=D(\theta')$ and $C'=C\setminus \text{cl}(D)$. \end{proof} By induction, Lemma \ref{lemma:divide-district} gives the following: \begin{lemma}\label{lemma:divide-districts} If a set $C$ in the plane is a topological open disc of measure $s$, and contains a measurable subset $A$ of measure $a$, then for any positive integer $k$, $C$ can be divided into $k$ topological open discs $D_1, D_2, \ldots, D_k$ ($D_1, \ldots, D_k$ are disjoint and the closure of their union is the closure of $C$) such that for any $i \in [k]$, the measure of $D_i$ is $s/k$ and the measure of $D_i \cap A$ is $a/k$. \end{lemma} As play progresses from the initial configuration on $X$, players may choose districts that divide the remainder of the state into disconnected regions. The following graph-theoretic lemma will allow us to control the effects of such strategies. \begin{lemma}\label{lemma:split-graph} Let $G=(V,E)$ be a connected graph with $n>1$ nodes.
Each node $v\in V$ is labeled by a positive number $r(v)$. For any subgraph $C$, let $\bar r(C)$ be the average of $r$ over nodes in $C$. Moreover, let $\mathcal{C}_v$ be the set of connected components of the induced subgraph on $V\setminus \{v\}$. Then for any $c \geq 1$, there always exists a node $v$ such that $r(v) \leq \min\{c, n/2\}\bar r(G)$, and for any component $C \in \mathcal{C}_v$, \begin{itemize} \item $c-1< |C| < n - c$ or $|C| = n - 1$, \item $\bar r(C) \leq \frac{|C|+1}{|C|}\bar r(G)$. \end{itemize} \end{lemma} \begin{figure}[t] \begin{center} \subfigure[$\mathcal C_v = \{C_1,C_2,\ldots,C_m\}$.]{ \label{fig:split-graph} \begin{tikzpicture} \node[circle, draw] (v) at (0,0) {$v$}; \foreach \x in {-1.75, -0.5, 1.75} { \draw (\x, -1) -- (v); \draw (\x, -1) -- (\x - 0.5, - 2.5); \draw (\x, -1) -- (\x + 0.5, - 2.5); \draw (\x - 0.5, -2.5) -- (\x + 0.5, - 2.5); } \node at (-1.75, -2) {$C_1$}; \node at (-0.5, -2) {$C_2$}; \node at (0.625, -2) {$\cdots$}; \node at (1.75, -2) {$C_m$}; \draw[rounded corners=10pt, dashed] (-2.5,-2.75) rectangle (2.5, - 0.75); \node at (-2.25, -0.5) {$\mathcal{C}_v$}; \end{tikzpicture} } \subfigure[$\mathcal C_{v_0} = \{C_1, C_2\}$; $v:\alpha$ means $r(v)=\alpha$.]{ \label{fig:split-graph-example} \begin{tikzpicture} \foreach \x/\y/\i/\r in {2.5/2/0/0, 0/1/1/0, 2.25/1/2/0, 5.25/1/3/0, 0/0/4/1, 1.5/0/5/0, 3/0/6/1, 4.5/0/7/1, 6/0/8/1, 1.5/-1/9/1} \node[draw] (\i) at (\x, \y) {$v_\i:\r$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw (4) -- (5); \draw[rounded corners=10pt, dashed] (-0.67,-1.5) rectangle (3.67, 1.5); \node at (0.25, 1.75) {$\bar r(C_1) = 1/2$}; \draw[rounded corners=10pt, dashed] (3.83,-0.5) rectangle (6.67, 1.5); \node at (5.5, 1.75) {$\bar r(C_2) = 2/3$}; \end{tikzpicture} } \end{center} \caption{Illustration of Lemma~\ref{lemma:split-graph}. (a) shows the structure of $\mathcal{C}_v$, and (b) gives an example of $G$, with $\bar r(G) = 1/2$. For $c=1$, $v_0,v_1,v_2,v_3,v_5$ all satisfy the conditions of the lemma. Taking $v_0$ as an example, $\mathcal C_{v_0} = \{C_1, C_2\}$, $|C_1|=6$, $|C_2|=3$, $\bar r(C_1) = 1/2$ and $\bar r(C_2) = 2/3= (4/3)\bar r(G)$.} \label{fig:example} \end{figure} Figure \ref{fig:example} provides an illustrated example of the statement of Lemma \ref{lemma:split-graph} with 10 nodes. To prove the lemma, we will use the following simple observation: \begin{lemma}\label{lemma:find-edge} Let $G=(V,E)$ be a connected graph, and let $T$ be a spanning tree of $G$. For an edge $e = (u,v)$, let the components of the induced subgraph of $T$ on $V\setminus \{v\}$ be $C_1(e), C_2(e),\ldots,C_{m(e)}(e)$. Without loss of generality, assume $u\in C_1(e)$, and let $C_0(e)$ denote the subtree of $T$ spanned by $\{v\}\cup C_2(e)\cup\dots\cup C_{m(e)}(e)$. Then there exists an edge $e$ such that $\bar r(C_0(e)) \leq \bar r(G)$ and for all $i=2,\ldots,m(e)$, $\bar r(C_i(e)) > \bar r(G)$. \end{lemma} The strict inequality at the end of the lemma's statement may seem confusing, as it could be the case that, say, $r(v)=1$ for all $v\in V$. But, in that case, we can choose $v$ to be a leaf, and then $C_1(e)$ is the rest of the tree, so $C_0(e)$ is just $v$ itself (and the condition on $\bar r(C_i(e))$ for $i\geq 2$ holds vacuously). \begin{proof}[Proof of Lemma~\ref{lemma:find-edge}] We find the edge $(u,v)$ with the following procedure. First, choose an arbitrary edge $e= (u,v)$. If $\bar r(C_0(e)) \leq \bar r(G)$, continue with $e$; otherwise, continue with $(v, u)$.
(The desired inequality must hold for one of the two choices, as they induce the same partition of the vertices, with opposite choices of $C_0(e)$.) Then, while the current edge $e' = (u',v')$ has some index $i$, $2\leq i\leq m(e')$, such that $\bar r(C_i(e')) \leq \bar r(G)$, continue with $e''=(v', w)$ for $w\in C_i(e')$; otherwise, terminate with the current edge, which we call $(u,v)$. Notice that in the former case, $\bar{r}(C_0(e''))=\bar{r}(C_i(e'))\leq \bar{r}(G)$. See Figure \ref{fig:find-edge-example} for an illustration of the procedure. This procedure will terminate because each edge can only be considered once. Moreover, the procedure ensures that for the current edge $e$, $\bar r(C_0(e)) \leq \bar r(G)$. Finally, the termination condition ensures that for all $i=2,\ldots,m(e)$, $\bar r(C_i(e)) > \bar r(G)$. \begin{figure}[p] \begin{center} \subfigure[A spanning tree of the graph shown in Figure \ref{fig:split-graph-example}.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0, 3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[dashed] (4) -- (5); \end{tikzpicture} } \hspace{0.8in} \subfigure[First step: choose $e=(v_5,v_9)$.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0, 3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[rounded corners=8pt, dashed] (0.6,-1.4) rectangle (1.4, -0.6); \node at (2.5, -1.2) {$\bar r(C_0(e)) = 1$}; \draw[rounded corners=10pt, dashed] (-0.4,-0.4) rectangle (3.9, 2.4); \node at (2.5, 2.6) {$\bar r(C_1(e)) = 4/9$}; \end{tikzpicture} } \vspace{0.5cm} \subfigure[Second step: switch to $e=(v_9,v_5)$.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0, 3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[rounded corners=8pt, dashed] (0.6,-1.4) rectangle (1.4, -0.6); \node at (2.5, -1.2) {$\bar r(C_1(e)) = 1$}; \draw[rounded corners=8pt, dashed] (-0.4,-0.4) -- (0.3, -0.4) -- (0.7, 0.7) -- (1.3, 0.7) -- (1.7, -0.4)-- (3.9, - 0.4) -- (3.9, 1) --node[sloped, above]{$\bar r(C_2(e)) = 1/2$} (1.8, 2.4) -- (1.2, 2.4) -- (-0.4, 1.4) -- cycle; \end{tikzpicture} } \hspace{0.8in} \subfigure[Third step: switch to $e=(v_5,v_2)$.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0, 3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[rounded corners=8pt, dashed] (0.6,-1.4) rectangle (1.4, 0.4); \node at (2.6, -1.2) {$\bar r(C_1(e)) = 1/2$}; \draw[rounded corners=8pt, dashed] (-0.4,-0.4) -- (0.4, -0.4) -- (0.4, 0.8) -- (1.5, 1.6) -- (2.4, 0.8) -- (2.4, -0.4)-- (3.9, - 0.4) -- (3.9, 1) --node[sloped, above]{$\bar r(C_2(e)) = 1/2$} (1.8, 2.4) -- (1.2, 2.4) -- (-0.4, 1.4) -- cycle; \draw[rounded corners=8pt, dashed] (1.67,-0.4) rectangle (2.33, 0.4); \node at (2.7, -0.7) {$\bar r(C_3(e)) = 1$}; \end{tikzpicture} } \vspace{.5cm} \subfigure[Fourth step: switch to $e=(v_2,v_0)$.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0,
3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[rounded corners=8pt, dashed] (0.6,-1.4) rectangle (2.3, 1.4); \node at (1.5, -1.7) {$\bar r(C_1(e)) = 1/2$}; \draw[rounded corners=8pt, dashed] (-0.4,-0.4) --node[sloped, above]{$\bar r(C_2(e)) = 1/2$} (-0.4, 1.4) -- (0.4, 1.4) -- (0.4,-0.4) --cycle; \draw[rounded corners=8pt, dashed] (2.45, -0.4)-- (2.45, 1.4) --(3.9, 1.4) -- (3.9, -0.4) --cycle; \node at (3.7, -0.7) {$\bar r(C_3(e)) = 2/3$}; \end{tikzpicture} } \hspace{0.4in} \subfigure[Fifth step: switch to $e=(v_0,v_1)$.]{ \begin{tikzpicture} \foreach \x/\y/\i/\r in {1.5/2/0/0, 0/1/1/0, 1.5/1/2/0, 3/1/3/0, 0/0/4/1, 1/0/5/0, 2/0/6/1, 2.75/0/7/1, 3.5/0/8/1, 1/-1/9/1} \node[circle, draw, inner sep=1pt] (\i) at (\x, \y) {$v_\i$}; \foreach \i/\j in {0/1, 0/2, 0/3, 1/4, 2/5, 2/6, 3/7, 3/8, 5/9} \draw (\i) -- (\j); \draw[rounded corners=8pt, dashed] (0.6,-1.4) rectangle (3.9, 2.4); \node at (2, -1.7) {$\bar r(C_1(e)) = 1/2$}; \draw[rounded corners=8pt, dashed] (-0.4,-0.4) --node[sloped, above]{$\bar r(C_2(e)) = 1$} (-0.4, 0.4) -- (0.4, 0.4) -- (0.4,-0.4) --cycle; \end{tikzpicture} } \caption{The procedure described in Lemma~\ref{lemma:find-edge}, applied to the graph of Figure~\ref{fig:split-graph-example}.} \label{fig:find-edge-example} \end{center} \end{figure} \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:split-graph}] Let $T$ be a spanning tree of $G$. First, if there is a leaf $v$ of $T$ with \[r(v) \leq \min\{c, n/2\}\bar r(G),\] then $v$ trivially satisfies the conditions. Moreover, if $c \geq n / 2$, there must be at least one leaf satisfying the foregoing inequality, because there are at least two leaves in any tree. Therefore, in what follows, we assume that $c < n / 2$, and every leaf $v$ of $T$ satisfies \begin{equation} \label{eq:rv} r(v) > c\bar r(G). \end{equation} Now let the edge whose existence is guaranteed by Lemma \ref{lemma:find-edge} be $e = (u,v)$; we will show that the node $v$ satisfies the conditions of Lemma \ref{lemma:split-graph}. For $S\subseteq [m(e)]$, let $C_S(e) = \bigcup_{i\in S} C_i(e)$. We claim that for all $S \subseteq [m(e)]$, we have \begin{equation} \label{e.rbound}\bar r(C_S(e) \cup \{v\})\leq \bar r(G). \end{equation} To see this, suppose it fails. If $1\notin S$, then we get the contradiction $\bar r(C_0(e)) > \bar r(G)$, since $C_0(e) = (C_S(e) \cup \{v\})\cup \bigcup_{2\leq j\leq m(e),j\notin S} C_j(e)$ and $\bar r(C_j(e))>\bar r(G)$ for $2\leq j\leq m(e)$ by assumption. If $1\in S$, then we get the analogous contradiction $\bar r(G) > \bar r(G)$, since in this case $G = (C_S(e) \cup \{v\})\cup \bigcup_{2\leq j\leq m(e),j\notin S} C_j(e)$. Note that taking $S = \emptyset$ gives $r(v) \leq \bar r(G)$, which, since $\min\{c,n/2\}\geq 1$, implies the lemma's claim on $r(v)$. Moreover, since the $C_i$'s are connected subgraphs of the induced subgraph on $V\setminus \{v\}$ that partition its vertex set, any component $C\in \mathcal{C}_v$ must have the same vertex set as $C_S(e)$ for some $S\subseteq [m(e)]$. Therefore, \eqref{e.rbound} gives that all components $C\in \mathcal{C}_v$ satisfy $\bar r(C \cup \{v\})\leq \bar r(G)$, or, since $r(v)\geq 0$, that \begin{equation*} \bar r(C)\leq \frac{|C|+1}{|C|}\bar r(G). \end{equation*} This is the second requirement the lemma places on components $C\in \mathcal{C}_v$. To establish the first, consider a nonempty $S\subseteq [m(e)]$. Observe that $C_S(e) \cup \{v\}$ contains at least one leaf of $T$.
Using \eqref{eq:rv}, we have \begin{equation} \label{e.fromoneleaf} \bar r(C_S(e) \cup \{v\}) > \frac{c\bar r(G)}{|C_S(e)| + 1}. \end{equation} Now \eqref{e.rbound} and \eqref{e.fromoneleaf} imply that \begin{equation} \label{e.CSbound} |C_S(e)|+1 > c. \end{equation} Moreover, if $|C| \neq n - 1$, that is, $ [m(e)]\setminus S \neq \emptyset$, then \eqref{e.CSbound} gives that \begin{equation}\label{e.sizerange} |C| = n - |C_{[m(e)]\setminus S}(e)|- 1 < n - c, \end{equation} completing the proof. \end{proof} The significant source of complication in \textsc{Game}$_3$ is that the state may be disconnected during play. In order to illustrate our strategy clearly, we will need to classify the components of the active part of the state into three types. At the beginning, if the number of districts making up the state is odd, the initial component (the whole state) is Type 1; otherwise, it is Type 2. If a component (of any type) is split into several smaller components, these components are also assigned Type 1 or Type 2 according to the parity of their size. If a player freezes a district in a component, but keeps the component connected, then the type of the component changes as follows: \begin{itemize} \item Type 1 becomes Type 2; \item Type 2 becomes Type 1 if chosen by Player 1, and becomes Type 3 if chosen by Player 2; \item Type 3 becomes Type 2. \end{itemize} Note that at any point, a component with an even number of districts is Type 2, while a component with an odd number of districts is Type 1 or Type 3. Our strategy for Player 2 is as follows. \begin{itemize} \item If it is Player 2's turn to cut, for each connected component $C$, there is a topological open disc $C'$ whose closure is $C$. Then, by applying Lemma \ref{lemma:divide-districts}, he divides it into districts such that the proportion of the target is the same in every district. \item If it is Player 2's turn to freeze, he plays in a component of Type 1 if available, and otherwise in a component of Type 2. (We will prove that one of these types is always available to him.) To choose the district to freeze, Player 2 regards the presented districting of his selected component as a planar graph, with districts as vertices and edges corresponding to shared boundaries of positive length. He applies Lemma \ref{lemma:split-graph} to this graph with $c=3$ and freezes the district corresponding to the vertex $v$ given by the lemma. \end{itemize} Depending on which player freezes first and the parity of the number of districts in the original state, a given districting game is either of \emph{odd type}, where Player 2 is always presented with an odd number of districts when it is his turn to freeze, or of \emph{even type}, where he encounters an even number of districts when it is his turn to freeze. \begin{lemma}\label{lemma-no-3} In an odd type instance of $\gameT$, there will never exist a component of Type 3 if Player 2 plays as above. \end{lemma} \begin{proof} This is a simple consequence of the parities and Player 2's preference to play in Type 1 components. Note that there are initially no Type 3 components and only Player 2's freezes can create such components. In particular, we prove by induction that immediately before Player 2's $k$th turn to freeze a district, there are no Type 3 components. Note that since the game is of odd type, this induction hypothesis implies that there is at least one Type 1 component before his $k$th turn to freeze.
Player 2's strategy will thus freeze a district in a Type 1 component, and no Type 3 components will be created on his $k$th turn, meaning that immediately before his $(k+1)$st turn to freeze, no Type 3 components will exist, as desired. \end{proof} \begin{lemma} At any point, there exists at most one component of Type 3. If it is Player 2's turn to freeze, there exists at least one component of Type 1 or 2. \end{lemma} \begin{proof} Note that the first claim of the lemma implies the second: indeed, if it is Player 2's turn to freeze, and the only component available is of Type 3, then the game is of odd type, and this contradicts Lemma \ref{lemma-no-3}. To prove the first claim, we proceed again by induction. Suppose a choice by Player 2 increased the number of Type 3 components; in this case, there were no Type 1 components available, and by Lemma \ref{lemma-no-3}, the game is of even type; this implies the number of Type 3 components available was even. Since it is at most 1 by the induction hypothesis, there were no Type 3 components available, and thus the number of Type 3 components can only increase to 1, as claimed. \end{proof} For each connected component $C$, define $r(C)$ as the ratio of the target in $C$, i.e., \[ r(C)= \frac{\int_{C\cap S_T} \phi}{\int_C \phi}.\] The following lemma establishes an upper bound on $r(C)$ according to its type. \begin{lemma}\label{lemma-bound-by-type} Let $r_0 = r(X)$, where $X$ is the initial state. For any sequence $(a_1, a_2, \ldots, a_k)$, let $q(a_1,a_2,\ldots, a_k) = \prod_{i=1}^k \frac{a_i}{a_i-1}$. At any step of the game, for any connected component $C$ consisting of $s$ districts: \begin{itemize} \item if $C$ is Type 1, \[r(C)\leq r_0\cdot q(s+2,s+4,\ldots, n'');\] \item if $C$ is Type 2, \[r(C)\leq r_0\cdot q(s+1,s+3,\ldots, n'');\] \item if $C$ is Type 3, \[r(C)\leq r_0\cdot q(s+1,s+2,s+4,\ldots, n''),\] \end{itemize} where $n'' = 2\lfloor (n-1)/2\rfloor + 1$. \end{lemma} \begin{proof} We prove the lemma by induction over the course of the game. At the beginning of the game, it is trivially true. Now suppose a component $C$ is chosen; there are two cases, according to whether or not the component is split by the frozen district. First, we consider the case where the component is still connected after the move. After removing a district from $C$, let the new component be $C'$ and suppose it contains $s$ districts. If it is Player 1's turn to freeze, then $r(C') = r(C)$, since Player 2 cut $C$ into districts that all have the same proportion of the target; otherwise, $r(C') \leq \frac{s+1}{s}r(C) = q(s+1)r(C)$. For any possible type change, one can easily verify that $r(C')$ satisfies the inequality; here we only take changes from Type 2 to Type 3 as an example: \begin{align*} r(C') ~&\leq q(s+1)r(C) \leq q(s+1) \cdot r_0\cdot q(s+2,s+4,\ldots,n'') \\ ~&= r_0\cdot q(s+1,s+2,s+4,\ldots,n''), \end{align*} where the second inequality holds by the induction assumption. Second, we consider the case where smaller components emerge. Let one of the new components be $C'$ and suppose it contains $s$ districts. If it is Player 1's turn to freeze, then we have $r(C') = r(C)$ and $|C'|\leq |C|-2$. In this case, one can easily verify that $r(C')$ satisfies the inequalities. If it is Player 2's turn to freeze, then we have $r(C') \leq \frac{s+1}{s} r(C)$, and $C$ contains more than $s+c$ districts by Lemma \ref{lemma:split-graph} (as $s=|C'|<|C|-c$). This case is more complicated, and we have to enumerate the types of $C'$ and $C$ -- both can be Type 1 or 2. Let us first take $C'$ of Type 1 and $C$ of Type 2 as an example.
Suppose $C$ contains $s+2a+1$ districts; then \[r(C) \leq r_0 \cdot q(s+2a+2,s+2a + 4, \ldots, n'').\] In order to ensure \[q(s +1) r(C) \leq r_0\cdot q(s + 2, s + 4, \ldots, n''),\] we only need to make sure that \[q(s+1)\leq q(s+2,s+4,\ldots, s+2a).\] Since $c = 3$ and $c<s<s+2a+1-c$, we have $a \geq 2$ and $s \geq 4$, which satisfies the inequality above. Similarly, one can easily verify that when both $C'$ and $C$ are Type 1, $c\geq 2$ is enough; when $C'$ is Type 2, $c\geq 1$ is enough. Since we let $c = 3$ when applying Lemma \ref{lemma:split-graph}, $r(C')$ satisfies the inequalities. \end{proof} \begin{proof}[Proof of Theorem \ref{t.target}] For any component $C$ of any type, by Lemma \ref{lemma-bound-by-type}, \[r(C)\leq\left\{\begin{array}{ll} r_0 \cdot \frac{n''!!(s-2)!!}{(n''-1)!!(s-1)!!}, & \text{if } C \text{ is Type 2}, \\ r_0\cdot\frac{s+1}{s}\cdot \frac{n''!!(s-1)!!}{(n''-1)!!s!!}, & \text{otherwise}. \end{array}\right.\] If Player 1 freezes a district $d$ from $C$, $r(d) = r(C)$. If Player 2 freezes a district $d$ from $C$, by Lemma \ref{lemma:split-graph} (with $c = 3$), $r(d)\leq \min\{3, s/2\}r(C)$ if $s > 1$; otherwise $r(d) = r(C)$. Therefore, for any $d$, \begin{align*} r(d)\leq&~ r_0 \cdot \frac{n''!!}{(n''-1)!!} \cdot \max\left\{2, \max_{k\in \mathbb{N}^+}\left\{ \min\{3, (2k - 1)/2\} \frac{2k}{2k-1} \frac{(2k-2)!!}{(2k-1)!!}\right\} , \right.\\ &~ \left.\max_{k\in \mathbb{N}^+}\left\{\min\{3, k\} \frac{(2k-2)!!}{(2k-1)!!}\right\}\right\}\\ =&~ r_0 \cdot \frac{n''!!}{(n''-1)!!} \cdot \max\left\{2, \frac{3 \cdot 6!!}{7!!}, \frac{3 \cdot 4!!}{5!!}, \frac{2 \cdot 2!!}{3!!}\right\} \\ =&~ 2r_0 \cdot \frac{n''!!}{(n''-1)!!}. \end{align*} Since \[\frac{n''}{n''-1} \leq \sqrt \frac{n''}{n''-2},\] by induction on $n''$, one can easily find \[\frac{n''!!}{(n''-1)!!} \leq \sqrt{n''}\leq \sqrt{n}.\] That is, \[r(d)\leq 2r_0\sqrt{n}.\] In other words, $d$ contains at most $\frac{2}{\sqrt{n}}$ of the target. \end{proof} \section{Limitations and further questions} \label{s.realworld} We have analyzed optimum play of the I-cut-you-freeze protocol in idealized settings; in actual applications to redistricting, real-world constraints would interfere with optimum play. For example, our analysis considers a district with $50.1\%$ loyalty to Player $A$ to be controlled by that player, whereas in a real-world setting, a player would want a more comfortable loyalty margin to consider a district safe. Apart from this thresholding issue, two other simplifications in our model stand out: \begin{itemize} \item \textbf{Geometric constraints on districts.} In the United States, Congressional districts are required to be connected, but in many states, are also required to be geometrically ``nice'' in other less-precise ways. One common term used in state-by-state requirements on districts is that they be ``compact'', which is supposed to limit the extent to which districts have intricate, drawn-out structure. Various metrics have been proposed to quantify the ``compactness'' of a district \cite{horn1993practical}; one of the simplest is the ratio $4\pi A_D/P_D^2$, where $A_D$ and $P_D$ are the area and perimeter of the district, respectively. Note that with this normalization, the measure takes a value in $(0,1]$ (a small numerical sketch of this ratio appears below).
Other common requirements include respect for geographical phenomena such as cities, counties, etc. Our analysis necessarily ignores these geometric constraints on the districts. (It should be noted that there are no precise and agreed-upon definitions of what constitutes a valid district in any particular state, and in practice, many Congressional districts seem to flout natural interpretations of these constraints.) \item \textbf{Mixed populations.} Theorem \ref{t.seats} concerns a model of redistricting in which districts can be assembled with essentially arbitrary collections of voters among those voters remaining in the unfrozen part of the state. In practice, however, Democrats and Republicans are sometimes neighbors, and it is generally impossible to draw a district which is $100\%$ loyal to party $A$ or party $B$, as is sometimes an optimum move in our protocol (e.g., as in Lemma \ref{lemma:weaker-strategy}). \end{itemize} It may be of theoretical interest to analyze our protocol in richer models motivated by these complications. That said, we believe it is reasonable to infer basic real-world properties of our protocol from our rigorous analysis in idealized settings. Let us first consider Theorems \ref{t.a-seats} and \ref{t.seats} concerning the slate which will result from optimum play in our algorithm. For these results, it is reasonable to suspect that the idealized model we work in has a significant effect on the precise results we obtain. In particular, our proof is based on the feasibility of two types of moves for the players: Lemma \ref{lemma:stronger-strategy} requires a player to divide a region into districts with similar proportions of voters loyal to each party, while Lemma \ref{lemma:weaker-strategy} also requires him to draw some districts with pure loyalty for one party. Obviously, neither of these moves can be perfectly emulated in a real-world situation; both players will be handicapped to some degree. Nevertheless, we consider the key conclusion of Theorems \ref{t.a-seats} and \ref{t.seats} to be not the particular formula for the slate won by each player in our protocol, but instead the general feature that our protocol does shift the unbalanced seat/loyalty curve from the ``One-player-decides'' protocol to an (asymptotically) symmetric curve (recall Figure~\ref{f.curve}). In some sense, the key point of Theorems \ref{t.a-seats} and \ref{t.seats} is just that the protocol produces a result within reason, and that neither player gains a significant advantage from the choice of who is assigned the first move in the protocol; we expect that both of these properties would persist in real-world applications. For Theorem \ref{t.target} we believe there is actually relatively little lost in our abstraction of the real-world problem. Much of our analysis (e.g., Lemma \ref{lemma:split-graph}) concerns the case where Player 2 is choosing which district to freeze after Player 1 has divided the state; this part of our analysis holds regardless of what geometric constraints are imposed on the divisions made by Player 1 on his turn. When Player 2 must divide a region, our analysis directs him to divide the target evenly among many districts. In practice, divisions that spread a target among many districts are quite easy to construct, and thus we expect that our protocol would retain a property similar in spirit to the assertion of Theorem \ref{t.target}.
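\medskip \noindent Returning to the compactness ratio mentioned in the first item above: it is straightforward to compute, as the following minimal sketch shows (the function name and sample shapes are our own illustration, not part of any state's legal definition). A disc scores exactly $1$, while an elongated district scores much lower.
\begin{verbatim}
import math

def compactness(area: float, perimeter: float) -> float:
    """Ratio 4*pi*A/P^2: equals 1 for a disc, and approaches 0
    for districts with long, intricate boundaries."""
    return 4.0 * math.pi * area / perimeter ** 2

# A disc of radius 1: area pi, perimeter 2*pi -> score exactly 1.
print(compactness(math.pi, 2.0 * math.pi))   # 1.0
# A 10-by-1 rectangle: area 10, perimeter 22 -> score ~0.26.
print(compactness(10.0, 22.0))
\end{verbatim}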
The main takeaway from Theorem \ref{t.target} is not the precise threshold of $B$ for which the protocol has the $B$-target property, but the fact that neither player will be able to completely direct the composition of any particular district; to some approximation, we expect this property to survive in real-world implementations. \subsection*{Acknowledgments} The first two authors are partially supported by the National Science Foundation and the Sloan Foundation; the second author is additionally supported by the Office of Naval Research. The first author also acknowledges helpful discussions with Maria Chikina on the problem setting considered here. \bibliographystyle{plain}
\section{Introduction and Summary of Results} Traditionally, the analysis of information processing systems is based on a certain modelling of the process that generates the observed data (e.g., an ergodic process). Based on this a-priori model, a processor (e.g., a compression algorithm, a classifier, etc.) is then optimally designed. In practice, there are many cases where insufficient a-priori information about this generating model is available and one must base the design of the processor on the observed data only, under some complexity constraints that the processor must comply with. \subsection{Universal Data Compression with Limited Memory} The Kolmogorov-Chaitin complexity (1968) is the length of the shortest ${\it program}$ that can generate the given individual sequence via a universal Turing machine. More concrete results are achieved by replacing the universal Turing machine model with the more restricted finite-state machine model. The Finite-State (FS) normalised complexity (compression) $H({\bf X})$, measured in bits per input letter, of an individual infinite sequence ${\bf X} $ is the normalised length of the shortest one-to-one mapping of ${\bf X} $ into a binary sequence that can be achieved by any finite-state compression device [1]. For example, the counting sequence 0123456... when mapped into the binary sequence 0,1,00,01,10,11,000,001,010,011,100,101,110,111... is incompressible by any finite-state algorithm. Fortunately, the data that one has to deal with is in many cases compressible. The FS complexity was shown to be asymptotically achieved by applying the LZ universal data compression algorithm [1] to consecutive blocks of the individual sequence. The FS modelling approach was also applied to yield asymptotically optimal universal prediction of individual sequences [9]. Consider now the special case where the FS class of processors is further constrained to include ${\it only}$ block-encoders that process one $N$-string at a time and then start all over again (e.g. due to bounded latency and error-propagation considerations). $H({\bf X})$ is still ${\it asymptotically}$ achievable by the LZ algorithm when applied on-line to consecutive strings of length $N$, as $N$ tends to infinity [1]. But the LZ algorithm may not be the best on-line universal data compression algorithm when the block-length $N$ is ${\it finite}$. It has been demonstrated that if it is a-priori known that ${\bf X}$ is a realization of a stationary ergodic Variable Length Markov Chain (VLMC) that is governed by a tree model, then context-tree coding yields a smaller redundancy than the LZ algorithm ([10], [4]). More recently, it has been demonstrated that context-tree coding yields an optimal universal coding policy (relative to the VLMC assumption) ([2]). Inspired by these results, one may ask whether the optimality of context-tree coding relative to tree models still holds for more general setups. It is demonstrated here that the best possible compression that may be achieved by ${\it any}$ universal data compression algorithm for \textit{finite} $N$-blocks is essentially achieved by context-tree coding for ${\it any}$ individual sequence ${\bf X}$ and not just for individual sequences that are realizations of a VLMC. In the following, a number of quantities are defined that are characterised by non-traditional notations; these seem unavoidable due to end-effects resulting from the finite length of $X_1^N$. These end-effects vanish as $N$ tends to infinity, but must be taken into account here.
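\noindent Before turning to formal definitions, here is a small sketch (our own; the helper name is hypothetical) of the binary image of the counting-sequence example above, i.e., the enumeration $0,1,00,01,10,11,000,\ldots$ of all binary strings ordered by length and then lexicographically within each length.
\begin{verbatim}
def canonical_binary(i: int) -> str:
    """Return the i-th string (0-indexed) of the enumeration
    0, 1, 00, 01, 10, 11, 000, ...: all binary strings ordered
    by length, then lexicographically within each length."""
    length, count = 1, 2
    while i >= count:          # skip past all strings of this length
        i -= count
        length += 1
        count = 2 ** length
    return format(i, f"0{length}b")

# Binary image of the counting sequence 0, 1, 2, ..., 7:
print(",".join(canonical_binary(i) for i in range(8)))
# -> 0,1,00,01,10,11,000,001
\end{verbatim}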
Refer to an arbitrary sequence over a finite alphabet ${\bf A}$, $|{\bf A}|=A$, namely $X^N=X_1^N=X_1, X_2,\ldots,X_N;\ X_i \in {\bf A}$, as being an ${\it individual}$ sequence. Let ${\bf X}=X_1,X_2,...$ denote a semi-infinite sequence over the alphabet ${\bf A}$. Next, an empirical probability distribution $P_{MN}(Z_1^N,N)$ is defined for $N$-vectors that appear in a sequence of length $MN$. The reason for using the notation $P_{MN}(Z_1^N,N)$ rather than, say, $P_{MN}(Z_1^N)$ is due to end-effects, as discussed below. We then define an empirical entropy that results from $P_{MN}(Z_1^N,N)$, namely $H_{MN}(N)$. This quantity is similar to the classical definition of the empirical entropy of $N$-blocks in an individual sequence of length $MN$ and, as one should anticipate, serves as a lower bound for the compression that can be achieved by any $N$-block encoder. Furthermore, $H_{MN}(N)$ is achievable in the impractical case where one is allowed to first scan the long sequence $X_1^{MN}$, generate the corresponding empirical probability $P_{MN}(Z_1^N,N)$ for each $N$-vector $Z_1^N$ that appears in $X_1^{MN}$, and apply the corresponding Huffman coding to consecutive $N$-blocks. Then, define $H({\bf X},N)=\limsup_{M \to \infty}H_{MN}(N)$. It follows that $$H({\bf X})=\limsup_{N \to \infty}H({\bf X},N)$$ is the smallest number of bits per letter that can be asymptotically achieved by any $N$-block data-compression scheme for ${\bf X}$. However, in practice, universal data-compression is executed on-line and the only available information on ${\bf X}$ is the currently processed $N$-block. Next, for $\ell< N$, an empirical probability measure $P_{MN} (Z_1^ {\ell},N)$ is defined for $\ell$-vectors that appear in an $MN$-sequence, which is derived from $P_{MN}(Z_1^N,N)$ by summing up $P_{MN}(Z_1^N,N)$ over the last $N-{\ell}$ letters of $N$-vectors in the $MN$-sequence. Again, observe that due to end-effects, $P_{MN}(Z_1^{\ell},N)$ is different from $P_{MN}(Z_1^{\ell},\ell)$ but converges to it asymptotically, as $M$ tends to infinity. Similarly, an empirical entropy $H_{MN}(\ell,N)$ is derived from $P_{MN}(Z_1^{\ell},N)$. In the analysis that follows below, both $H_{MN}(N)$ and $H_{MN}(\ell,N)$ play an important role. An empirical entropy $H_{MN}(\ell|z_1^{\ell-1},N)$ is associated with each vector $z_1^{\ell-1};1\leq \ell\leq (\log N)^2 $ in $X_1^{MN}$. This entropy is derived from $P_{MN}(z_{\ell}|z_1^{\ell-1},N)=\frac{P_{MN}(z_1^{\ell},N)}{P_{MN}(z_1^{\ell-1},N)}$. Note that this empirical entropy is conditioned on the ${\it particular}$ value of $z_1^{\ell-1}$ and is ${\it not}$ averaged over all $z_1^{\ell-1} \in {\bf A}^{\ell-1}$ relative to $P_{MN}(z_1^{\ell-1},N)$. A context-tree with approximately $N$ leaves, which consists of the $N$ most empirically probable contexts in $X_1^{MN}$, is generated. For each leaf of this tree, choose the one context among the contexts along the path that leads from the root of the tree to this leaf, for which the associated entropy is the smallest. Then, these minimal associated entropies are averaged over the set of leaves of the tree. This average entropy is denoted by $H_u(N,M)$. Note that $H_u(N,M)$ is essentially an empirical conditional entropy, corresponding to a suitably constructed variable-length Markov chain (VLMC). Finally, define $H_u({\bf X},N)= \limsup_{M \to \infty} H_u(N,M)$. It is demonstrated that $\liminf_{N \to \infty}[H({\bf X},N)-H_u({\bf X},N)]\geq 0$.
Thus, for large enough $N$, $H_u({\bf X},N)$, like $H({\bf X},N)$, may also serve as a lower bound on the compression that may be achieved by any encoder for $N$-sequences. The relevance of $H_u({\bf X},N)$ becomes apparent when it is demonstrated in Theorem 2 below that a context-tree universal data-compression scheme, when applied to $N'$-blocks, essentially achieves $H_u({\bf X},N)$ for any ${\bf X}$ if $\log N'$ is only slightly larger than $\log N$, and achieves $H({\bf X})$ as $N'$ tends to infinity. Furthermore, it is shown in Theorem 1 below that among the many compressible sequences ${\bf X}$ for which $H_u({\bf X},N)=H({\bf X}); H({\bf X})< \log A $, there are some for which ${\it no}$ on-line universal data-compression algorithm can achieve any compression at all when applied to consecutive blocks of length $N'$, if $\log N'$ is slightly smaller than $\log N$. Thus, context-tree universal data-compression is essentially optimal. Note that the threshold effect that is described above is expressed in a logarithmic scaling of $N$. At the same time, the logarithmic scaling of $N$ is apparently the natural scaling for the length of contexts in a context-tree with $N$ leaves. \subsection{Application to Universal Classification} A device called a \textit{classifier} (or discriminator) observes an individual training sequence of length $m$, $X_1^m$. The classifier's task is to consider individual test sequences of length $N$ and decide whether the test $N$-sequence has, in some sense, the same properties as those that are captured by the training sequence, or is sufficiently different, according to some appropriate criterion. No a-priori information about the test sequences is available to the classifier aside from the training sequence. A universal classifier $d(X_1^m,{\bf A}^N)$ for $N$-vectors is defined to be a mapping from ${\bf A}^N$ onto $\{0,1\}$. Upon observing $Z_1^N$, the classifier declares $Z_1^N$ to be ${\it similar}$ to one of the $N$-vectors $X_{j+1}^{j+N};j=0,1,...,m-N$ iff $d(X_1^m, Z_{1}^{N}\in {\bf A}^N)=1$ (or, in some applications, if a slightly distorted version $\tilde Z_{1}^{N}$ of $Z_{1}^{N}$ satisfies $d(X_1^m, \tilde Z_{1}^{N})=1$). In the ${\it classical}$ case, the probability distribution of $N$-sequences is known and an optimal classifier accepts all $N$-sequences $Z_1^N$ such that the probability $P(Z_1^N)$ is bigger than some preset threshold. If $X_1^m$ is a realization of a stationary ergodic source one has, by the Asymptotic Equipartition Property (A.E.P.) of information theory, that the classifier's task is tantamount (almost surely for large enough $m$ and $N$) to deciding whether the test sequence is equal to a ``typical'' sequence of the source (or, when some distortion is acceptable, if a slightly distorted version of the test sequence is equal to a ``typical'' sequence). The cardinality of the set of typical sequences is, for large enough $m$ and $N$, about $2^{NH}$, where $H$ is the entropy rate of the source [10]. What should one do when $P(Z_1^N)$ is unknown or does not exist, and the only available information about the generating source is a training sequence $X_1^m $? The case where the training sequence is a realization of an ergodic source with vanishing memory is studied in [11], where it is demonstrated that a certain universal context-tree based classifier is essentially optimal for this class of sources (the fact that context-trees are versatile for classes of renewal sources was recently demonstrated in [20]).
This is in unison with related results on universal prediction ([11], [12], [13], [18], [19]). Universal classification of test sequences relative to a long training sequence is a central problem in computational biology. One common approach is to assume that the training sequence is a realization of a VLMC, and upon viewing the training sequence, to construct an empirical Probabilistic Suffix Tree (PST), the size of which is limited by the available storage complexity of the classifier, and apply a context-tree based classification algorithm [7], [8]. But how should one proceed if there is no a-priori support for the VLMC assumption? In the following, it is demonstrated that the PST approach is essentially optimal for every ${\it individual}$ training sequence, even without the VLMC assumption. Denote by $S_{d}(N,X_1^m,\epsilon)$ a set of $N$-sequences $Z_1^N$ which are declared to be similar to $X_1^m$ (i.e. $d(X_1^m, Z_{1}^{N})=1$), where $d(X_1^m, X_{j+1}^{j+N})=0$ should be satisfied by no more than $\epsilon (m-N+1)$ instances $j=0,1,2,...,m-N$, and where $\epsilon$ is an arbitrarily small positive number. Also, given a particular classifier, let $D_{d}(N,X_1^m,\epsilon)=|S_{d}(N,X_1^m,\epsilon)|$ and let $$ H_{d}(N,X_1^m,\epsilon)=\frac {1}{N}\log D_{d}(N,X_1^m,\epsilon)\, $$ Thus, any classifier is characterised by a certain $H_{d}(N,X_1^m,\epsilon)$. Given $X_1^m$, let $D_{d,min}(N,X_1^m,\epsilon)$ be the smallest achievable $D_{d}(N,X_1^m,\epsilon)$. Denote by $d^{*}$ the particular classifier that achieves $D_{d,min}(N,X_1^m,\epsilon)$ and let $$H_{min}(N,X_1^m,\epsilon )=\frac {1}{N}\log D_{d^{*}}(N,X_1^m,\epsilon)\,$$ For an infinite training sequence ${\bf X}$, let $$H(N,{\bf X},\epsilon)= \limsup_{m \to \infty}H_{min}(N,X_1^m,\epsilon ).$$ Note that $$\bar H({\bf X})=\lim_{\epsilon \to 0}\limsup_{N \to \infty}H_{min}(N,{\bf X},\epsilon) $$ is the topological entropy of ${\bf X}$ [6]. Naturally, if the classifier has the complete list of $N$-vectors that achieve $D_{d,min}(N,X_1^m,\epsilon)$, it can achieve a perfect classification by making $d(X_1^m, Z_{1}^{N})=1$ iff $Z_1^N=X_{j+1}^{j+N}$, for every instance $j=0,1,2,\ldots,m-N$ for which $X_{j+1}^{j+N}$ is in this complete list. The discussion is constrained to cases where $H_{min}(N,X_1^m,\epsilon)>0$. Therefore, when $m$ is large, $D_{d,min}(N,X_1^m,\epsilon)$ grows exponentially with $N$ (e.g. when the test sequence is a realization of an ergodic source with a positive entropy rate). The attention is limited to classifiers that have a storage-space complexity that grows only linearly with $N$. Thus, the long training sequence cannot be stored. Rather, the classifier is constrained to represent the long training sequence by a short ``signature'', and use this short signature to classify incoming test sequences of length $N$. It is shown that it is possible to find such a classifier, denoted by $d(X_1^m,\epsilon, Z_{1}^{N})$, which is essentially optimal in the following sense: An optimal ``$\epsilon$-efficient'' universal classifier $d(X_1^m,\epsilon, Z_{1}^N)$ is defined to be one that satisfies the condition that $d(X_1^m,X_{j+1}^{j+N})=1$ for $(1-\hat\epsilon)(m-N+1)$ instances $j=0,1,...,m-N$, where $\hat\epsilon\leq \epsilon $. This corresponds to the rejection of at most $\epsilon D_{d,min}(N,X_1^m,\epsilon)$ vectors from among the $D_{d,min}(N,X_1^m,\epsilon)$ ${\it typical}$ $N$-vectors in $X_1^m$.
Also, an optimal ``$\epsilon$-efficient'' universal classifier should satisfy the condition that $d(X_1^m,Z_1^N)=1$ is satisfied by no more than $2^{N[H_{min }(N,X_1^m,\epsilon)+\epsilon]}$ $N$-vectors $Z_1^N$. This corresponds to a false-alarm rate of $$\frac{ 2^{N[H_{min }(N,X_1^m,\epsilon)+\epsilon]}- 2^{N H_{min }(N,X_1^m,\epsilon)}}{2^{N\log A}-2^{N H_{min }(N,X_1^m,\epsilon)}}$$ when $N$-vectors are selected independently at random, with an induced uniform probability distribution over the set of $2^{N \log A}-2^{N H_{min }(N,X_1^m,\epsilon)}$ $N$-vectors that should be rejected. Note that the false-alarm rate is thus guaranteed to decrease exponentially with $N$ for any individual sequence $X_1^m$ for which $H_{min }(N,X_1^m,\epsilon)< \log A -\epsilon$. A context-tree based classifier for $N$-sequences, given an infinite training sequence ${\bf X}$ and a storage-complexity of $O(N)$, is shown by Theorem 3 below to be ${\it essentially}$ $\epsilon$-efficient (and therefore ${\it essentially}$ optimal) for any $N \geq N_0({\bf X})$ and some $m=m_0(N,{\bf X})$. Furthermore, by Theorem 3 below, among the set of training sequences for which the proposed classifier is essentially optimal, there are some for which no $\epsilon$-efficient classifier for $N'$-sequences exists, if $\log N'<\log N$, for any $\epsilon < \log A- H_{min}(N,{\bf X},\epsilon)$. Thus, the proposed classifier is essentially optimal. Finally, the following universal classification problem is considered: Given two test-sequences $Y_1^N$ and $Z_1^N$ and ${\it no}$ training data, are these two test-sequences ``similar'' to each other? The case where both $Y_1^N$ and $Z_1^N$ are realizations of some (unknown) finite-order Markov processes is discussed in [14], where an asymptotically optimal divergence measure is derived empirically from $Y_1^N$ and $Z_1^N$. In the context of the individual-sequence approach that is adopted here, this amounts to the following problem: Given $Y_1^N$ and $Z_1^N$, is there a training-sequence ${\bf X}$ for which $H_{min}(N,{\bf X},\epsilon)>0$, such that both $Y_1^N$ and $Z_1^N$ are accepted by some $\epsilon$-efficient universal classifier with linear space complexity? (This problem is reminiscent of the ``common ancestor'' problem in computational biology [15], where one may think of ${\bf X}$ as a training sequence that captures the properties of a possible ``common ancestor'' of two DNA sequences $Y_1^N$ and $Z_1^N$.) This is the topic of the Corollary following Theorem 3 below. \section{Definitions, Theorems and Algorithms} Given $X_{1}^N \in {\bf A}^{N}$, let $c(X_1^N);X_1^N \in {\bf A}^{N}$ be a one-to-one mapping of $X_1^N$ into a binary sequence of length $L(X_1^N)$, which is called the length function of $c(X_1^N)$. It is assumed that $L(X_1^N)$ satisfies the Kraft inequality. For every ${\bf X}$ and any positive integers $M,N$, define the compression of the prefix $X_1^{NM}$ to be: $$ \rho_L({\bf X},N,M)=\max_{i;1 \leq i \leq {N-1}} \frac{1}{NM} \left[\left[\sum_{j=0}^{M-2}L\Bigl(X_{i+jN+1}^{(i+(j+1)N)}\Bigr)\right] +L(X_1^{i})+L\Bigl(X_{(i+1+(M-1)N)}^{NM}\Bigr)\right] $$ Thus, one looks for the compression of the sequence $X_1^{MN}$ that is achieved by successively applying a given length-function $L(X_1^N)$, with the ${\it worst}$ starting ${\it phase}$ $i;i=1,2,...,N-1$.
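\noindent To make the quantity $\rho_L({\bf X},N,M)$ concrete, the following minimal sketch (our own; the toy length function is a stand-in for any length function satisfying the Kraft inequality) computes it directly from the displayed formula.
\begin{verbatim}
def rho(x: str, N: int, L) -> float:
    """Compute rho_L(x, N, M) per the displayed formula: per-letter
    codelength under block coding with block length N, maximised
    over the starting phase i = 1, ..., N-1."""
    MN = len(x)
    M = MN // N                    # assumes N divides len(x)
    worst = 0.0
    for i in range(1, N):          # starting phase
        total = L(x[:i]) + L(x[i + (M - 1) * N:])    # edge pieces
        for j in range(M - 1):                       # full N-blocks
            total += L(x[i + j * N : i + (j + 1) * N])
        worst = max(worst, total / MN)
    return worst

# Toy length function: uncompressed coding of a binary alphabet,
# one bit per letter (any Kraft length function could be plugged in).
L_raw = lambda w: len(w)
print(rho("0110" * 8, 4, L_raw))   # 1.0 bit per letter
\end{verbatim}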
It should be noted that for any given length-function $L(X_1^N)$, one can construct another length-function for $N^2$-vectors, $L(X_1^{N^2})$, such that, when applied successively to the vectors $X_{jN^{2}+1}^{(j+1){N^2}};j=0,1,2,\ldots,M-1$, the resulting compression will be no larger, up to a factor of $O(\frac{1}{N})$, than the compression that is achieved by applying $L\Bigl(X_{i+jN+1}^{(i+(j+1)N)}\Bigr)$ to the successive $N$-vectors of $X_1^{MN^2}$, for any starting phase $1\leq i \leq N-1$. Thus, asymptotically as $M$ and $N$ tend to infinity, the starting phase $i$ has a diminishing effect on the best achievable compression. Observe that by ignoring the terms $\frac{1}{NM}L(X_1^{i})$ and $\frac{1}{NM}L\Bigl(X_{(i+1+(M-1)N)}^{NM}\Bigr)$ (that vanish for large values of $M$) in the expression above, one gets a lower bound on the actual compression. In the following, a lower-bound $H_{MN}(N)$ on $\rho_L({\bf X},N,M)$ is derived, that applies to any length-function $L(X_1^N)$. First, the notion of the ${\it empirical~probability}$ of an $N$-vector in a ${\it finite}$ $MN$-vector is derived, for any two positive integers $N$ and $M$. Given a positive integer $\ell; 1 \leq \ell \leq N$ and a sequence $X_1^{MN}$, define for a vector $Z_1^{\ell}\in {\bf A}^{\ell}$, \begin{equation} P_{MN,i}(Z_1^{\ell},N)=\frac{1}{M-1}\sum_{j=0}^{M-2}\OO_{Z_1^{\ell}} \Bigl(X_{i+j{\ell}}^{i+(j+1){\ell}-1}\Bigr) ; 1 \leq i \leq N \end{equation} and \begin{equation} P_{MN}(Z_1^{\ell},N)=\frac{1}{N}\sum_{i=1}^{N}P_{MN,i}(Z_1^{\ell},N) \end{equation} where $\OO_{Z_1^{\ell}}( X_{j{\ell}+i}^{(j+1){\ell}+i-1})=1$ iff $ X_{j{\ell}+i}^{(j+1){\ell}+i-1}=Z_1^{\ell}$; else $\OO_{Z_1^{\ell}}( X_{j{\ell}+i}^{(j+1){\ell}+i-1})=0.$ Thus, $$ P_{MN}(Z_1^{\ell},{\ell})=\frac{1}{(M-1)N+1} \sum_{i=1}^{(M-1)N+1}\OO_{Z_1^{\ell}}(X_i^{i+{\ell}-1})\,, $$ is the standard definition of the empirical probability of $Z_1^{\ell}$. (As noted in the Introduction, $P_{MN}(Z_{1}^{\ell},N)$ converges to the empirical probability $ P_{MN}(Z_{1}^{\ell},{\ell})$ as $M$ tends to infinity. However, for finite values of $M$, these two quantities are not identical due to end-effects). Let $$ H_{MN}(\ell,N)=-\frac{1}{\ell}\sum_{Z_1^{\ell}\in {\bf A}^{\ell}} P_{MN}(Z_1^{\ell},N)\log P_{MN}(Z_1^{\ell},N) \, $$ and \begin{equation} H_{MN}(N)=H_{MN}(N,N)=-\frac{1}{N}\sum_{Z_1^{N}\in {\bf A}^{N}} P_{MN}(Z_1^{N},N)\log P_{MN}(Z_1^{N},N) \end{equation} then, \begin{proposition} $$\rho_L({\bf X},N,M) \geq H_{MN}(N)$$ and $$\limsup_{M \to \infty}\rho_L({\bf X},N,M) \geq H({\bf X},N)$$ \end{proposition} where $H({\bf X},N)=\limsup_{M \to \infty}H_{MN}(N)$. The proof appears in the Appendix. Thus, the best possible compression of ${X_1^{MN}}$ that may be achieved by any one-to-one encoder for $N$-blocks is bounded from below by $H_{MN}(N)$. Furthermore, $H_{MN}(N)$ is achievable for $N$ that is much smaller than $\log M$, if $c(Z_1^N)$ (and its corresponding length-function $L(Z_1^N)$) is tailored to the individual sequence $X_1^{MN}$: by first scanning $X_1^{MN}$, evaluating the empirical distribution $ P_{MN}(Z_1^N,N)$ of $N$-vectors, and then applying the corresponding Huffman data compression algorithm. However, in practice, the data-compression has to be executed on-line and the only available information on ${X_1^{MN}}$ is the one that is contained in the currently processed $N$-block. The main topic of this paper is to find out how well one can do in such a case, where the ${\it same}$ mapping $c(X_1^N)$ is being applied to successive $N$-vectors of $X_1^{MN}$.
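\noindent The empirical measure and entropy just defined can be computed directly; the following minimal sketch (the function name is our own) pools the $M-1$ stride-$\ell$ windows of each of the $N$ starting phases to form $P_{MN}(Z_1^{\ell},N)$, and then evaluates the per-letter entropy $H_{MN}(\ell,N)$.
\begin{verbatim}
from collections import Counter
from math import log2

def empirical_entropy(x: str, N: int, ell: int) -> float:
    """H_MN(ell, N): per-letter empirical entropy of ell-blocks,
    pooled over the N starting phases."""
    MN = len(x)
    M = MN // N                          # assumes N divides len(x)
    counts = Counter()
    for i in range(N):                   # phase i = 1, ..., N
        for j in range(M - 1):           # j = 0, ..., M-2
            counts[x[i + j * ell : i + (j + 1) * ell]] += 1
    total = N * (M - 1)                  # number of pooled windows
    return -sum((c / total) * log2(c / total)
                for c in counts.values()) / ell

x = "ab" * 50                            # a toy periodic sequence
print(empirical_entropy(x, N=10, ell=1)) # 1.0: 'a','b' equally likely
print(empirical_entropy(x, N=10, ell=2)) # 0.5: only "ab","ba" occur
\end{verbatim}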
Next, given $N$ and $M$, a particular context-tree is generated from $X_1^{MN}$ for each letter $X_i;1 \leq i \leq MN$, and a related conditional empirical entropy $H_u(N,M)$ is defined. It is then demonstrated that for large enough $N$ and $M$, $H_u(N,M)$ may also serve as a lower bound on $\rho_L({\bf X},N,M)$. {\bf Construction of the Context-tree for the letter $Z_i$} 1) Consider contexts which are not longer than $t=\lceil (\log N)^2 \rceil$ and let $K$ be a positive number. 2) Let $ K_1(Z_{1}^N,K)=\min [j-1,t]$ where $j$ is the smallest positive integer such that $ P_{MN}(Z_{1}^{j},N) \leq \frac{1}{K} $, where the probability measure $ P_{MN}(Z_{1}^{j},N)$ for vectors $Z_1^{j} \in {\bf A}^j$ is derived from $X_1^{MN}$. If no such $j$ exists, set $ K_1(Z_{1}^N,K)=-1$, where $Z_{1}^0$ is the null vector. 3) Given $X_1^{MN}$, evaluate $P_{MN}(Z_1^i,N)$. For the $i$-th instance in $Z_1^{N}$, let $Z_1^{i-1}$ be the corresponding suffix. For each particular $z_1^{i-1} \in {\bf A}^{i-1}$ define \begin{equation} H_{MN}(i|z_1^{i-1},N)=-\sum_{z_i \in {\bf A}} \frac{P_{MN}(z_1^i,N)}{P_{MN}(z_{1}^{i-1},N)}\log \frac{P_{MN}(z_1^i, N)}{P_{MN}( z_{1}^ {i-1},N)} \end{equation} 4) Let $j_0 =j_0(z_1^{i-1})$ be the integer for which $$H_{MN}(i|z_{i-j_0}^{i-1},N)=\min_{1 \leq j \leq 1+K_1(z_{i-N}^{i-1},K)}H_{MN} (i|z_{i-j}^{i-1},N).$$ Each such $j_0$ is a node in a tree with about $K$ leaves. The set of all such nodes represents the particular context tree for the $i$-th instance. 5) Finally, \begin{equation} H_u(N,K,M)=\sum_{z_{1}^{i-1} \in {\bf A}^{i-1}} P_{MN}(z_{1}^{i-1},N)H_{MN} (i|z_{i-j_0}^{i-1},N) \, \end{equation} Observe that $H_u(N,K,M)$ is an entropy-like quantity defined by an optimal data-driven tree of variable depth $K_1$, where each leaf that is shorter than $t$ has roughly an empirical probability $\frac{1}{K}$. Set $K=N$ and let $H_u(N,M)=H_u(N,N,M)$. Also, let \begin{equation} H_u({\bf X},N)=\limsup_{M \to \infty} H_u(N,M) \end{equation} and \begin{equation} H_u({\bf X})=\limsup_{N \to \infty}H_u({\bf X},N) \end{equation} The different empirical quantities that are defined above reappear throughout the paper and are therefore summarised in the short Glossary below. {\bf Glossary:} 1) $P_{MN}(z_1^{\ell},N)$: An empirical probability measure on $\ell$-vectors $z_1^{\ell}$ in the vector $X_1^{MN}$. 2) $P_{MN}(z_{1}|z_{-\ell+1}^0,N)=\frac{P_{MN}(z_{-\ell+1}^1, N)}{P_{MN}( z_{-\ell+1}^ {0},N)}$: A conditional empirical probability measure. 3) $H_{MN}(\ell,N)$: An empirical entropy that is derived from $P_{MN}(z_1^{\ell},N)$. 4) $H_{MN}(N)=H_{MN}(N,N)$; $H({\bf X},N)=\limsup_{M \to \infty}H_{MN}(N)$; $H({\bf X})=\limsup_{N \to \infty}H({\bf X},N)$. 5) $H_{MN}(i|z_1^{i-1},N)$: A conditional empirical entropy, conditioned on the particular suffix $z_1^{i-1}$. 6) $K_1(z_{-N+1}^0,K)$ is the smallest positive integer $j$ such that $P_{MN}(z_{-j+1}^0,N) \leq \frac{1}{K}$. 7) $H_u(N,K,M)$: The average of the minimal value of $H_{MN}(1|z_{-j+1}^0,N); 1 \leq j \leq 1+K_1(z_{-N+1}^0,K)$ over $z_{-j+1}^0$; $H_u(N,N,M)$ is denoted by $H_u(N,M)$. 8) $H_u({\bf X},N)= \limsup_{M \to \infty}H_u(N,M)$ ; $H_u({\bf X})= \limsup_{N \to \infty} H_u({\bf X},N)$. Let $$ H({\bf X})=\limsup_{N \to \infty}H({\bf X},N)\, $$ where, as defined above, $H({\bf X},N)= \limsup_{M \to \infty}H_{MN}(N)$.
Then, \begin{lemma} For every individual sequence ${\bf X}$, \begin{equation} \liminf_{N \to \infty} \Bigl[H({\bf X},N)-H_u({\bf X},N)\Bigr]\geq 0 \end{equation} \end{lemma} Hence, $ H_u( {\bf X})\leq H({\bf X}) \,. $ The proof of Lemma 1 appears in the Appendix. Any compression algorithm that achieves a compression $H_u({\bf X},N)$ is therefore \textit{asymptotically} optimal as $N$ tends to infinity. Note that the conditional entropy for a data-driven Markov tree of a uniform depth of, say, $O(\log {\log N})$ may still satisfy Lemma 1 (by the proof of Lemma 1), but the corresponding conditional empirical entropy is lower-bounded by $H_u({\bf X},N)$ for ${\it finite}$ values of $N$. A context-tree data-compression algorithm for $N$-blocks that essentially achieves a compression $H_u({\bf X},N)$ is introduced below, and is therefore asymptotically optimal, but so are other universal data-compression algorithms (e.g., as mentioned above, a simpler context-tree data compression algorithm with a uniform context depth or the LZ algorithm [1]). However, the particular context-tree algorithm that is proposed below is shown to be essentially optimal for \textit{non-asymptotic} values of $N$ as well. Let $\delta$ be an arbitrarily small positive number and let us consider the class $C_{N_0,M_0,\delta}$ of all ${\bf X}$-sequences for which, for some $\hat H$ such that $ \delta <\hat H <(1-2\delta)\log A $, \begin{itemize} \item[1)] $H_{{M_0}N_0}({N_0},N_0)=\hat H$. \item[2)] $ H_u({N_0},K_0,M_0)-\hat H \leq \delta$ where $K_0=N_0$. \end{itemize} \begin{theorem} A) The class $C_{N_0,M_0,\delta}$ is not empty. Every sequence ${\bf X}$ is in the set $C_{N_0,M_0,\delta}$ for large enough $N_0$ and $M_0=M_0(N_0)$. B) Let $ N'=N^{1-\delta}$. For any universal data-compression algorithm for $ N'$-vectors that utilises some length-function $L(Z_1^{ N'})$, there exist {\it some} sequences ${\bf X} \in C_{N_0,M_0,\delta}$ such that for any $M\geq M_0$ and any $N' \geq {N_0}^{\frac{1}{1-\delta}} $: $$ \rho_{L}({\bf X},N',M) \geq (1-\delta)[\log A-\delta] > \hat H$$ for large enough $N_0$. \end{theorem} The proof of Theorem 1 appears in the Appendix. The next step demonstrates that there exists a universal data-compression algorithm which is optimal in the sense that, when it is applied to consecutive $N$-blocks, its associated compression is about $ H_{u}({\bf X},N')$ for \textit{every} individual sequence ${\bf X}$, where $\log N'$ is slightly smaller than $\log N$. \begin{theorem} Let $\delta$ be an arbitrarily small positive number and let $N'=\lfloor N^{1-\delta}\rfloor$. There exists a context-tree universal coding algorithm for $N$-blocks, with a length-function $\hat L(Z_1^{N})$, for which, for ${\it every}$ individual $X_1^{MN}\in {\bf A}^{MN}$, $$ \rho_{\hat L}({\bf X},N,M)\leq H_u({N},N',M) +O\left(\frac{\log N}{N^{\delta}}\right) \, $$ \end{theorem} It should be noted here that it is not claimed that the particular algorithm that is described below yields the smallest possible computational complexity. It suffices to establish the fact that this particular ${\it essentially~optimal}$ universal algorithm indeed belongs to the class of context-tree algorithms. The reader is referred to [2] and [4] for an exhaustive discussion of optimal universal context-tree algorithms, where the issues of minimal redundancy and computational complexity are discussed at length.
Therefore, no attempt was made to minimise or evaluate the computational complexity of this particular ``test-bench'' algorithm. The practitioner reader is referred to [2], where it was established that optimal universal context coding may be realised by an algorithm with a computational complexity that grows only linearly with the block-length $N$. For an enlightening perspective on computationally bounded data-compression algorithms (which are not necessarily ``essentially optimal'' in the sense of Theorem 2) see [21].

\paragraph{Description of the universal compression algorithm:} Consider first the encoding of the first $N$-vector $X_1^N$ (to be repeated for every $X_{(i-1)N+1}^{iN}; ~i=2,3,\ldots,M$). Let $t=\lceil(\log N)^2 \rceil$ and $M'=\frac{N}{t}$ (assuming that $t$ divides $N$). \begin{itemize} \item[A)] Generate the set $T'(X_1^{N})$ that consists of all contexts $x_{1}^{i-1}$ that appear in $X_1^N$ and satisfy $P_N(x_{1}^{i-1},t)\geq \frac{1}{N'}; i \leq t$, where $ N'=N^{1-\delta}$ ([16], [17]). Clearly, $T'(X_1^{N})$ is a context tree with no more than $N^{1-\delta}$ leaves and a maximum depth of $t=\lceil(\log N)^2 \rceil$. The depth $t$ is chosen to be just small enough so as to yield an ${\it implementable}$ compression scheme and, at the same time, still big enough so as to yield an ${\it efficient}$ enough compression. \item[B)] Evaluate $$ H_{u}(t,N',M') =\sum_{x_{1}^{t-1} \in {\bf A}^{t -1}} P_{N}(x_{1-t+1}^{0},t)\min_{0 \leq j \leq {t-1};x_{1-j}^{0} \in T'(X_1^N)}H_{N}(1|x_{1-j}^{0},t) \, $$ Note that $H_{u}(t,K,M'); K=N'$, is a conditional empirical entropy that is derived from an empirical probability measure of $t$-vectors in the particular $N$-vector $X_1^N$, while previously $H_{u}(N,K,M)$ was derived from an empirical probability measure of $N$-vectors in the whole individual sequence $X_1^{MN}$ (see the Glossary at the end of Section B). Also note that the number of computational steps that are involved in the minimisation is $t|T'(X_1^{N})| \leq O(N)$. Let $T_{u}'(X_1^{N})$ be a sub-tree of $T'(X_1^{N})$ whose leaves are the contexts that achieve $H_{u}(t,N',M')$. \item[C)] A length function $\hat L(X_1^N)=\hat L_{1}(X_1^N)+\hat L_{2}(X_1^N)+\hat L_{3}(X_1^N)$ is constructed as follows: \begin{itemize} \item[1)] $\hat L_{1}(X_1^N)$ is the length of an uncompressed binary codeword $\hat m_1(X_1^N)$ that enables the decoder to reconstruct the context tree $T_{u}'(X_1^{N})$, which consists of the set of contexts that achieves $H_{u}(t,N',M')$. This tree has, by construction, at most ${N}^{1-\delta}$ leaves and at most $t$ letters per leaf. It takes at most $1+t{\log A}$ bits to encode a vector of length $t$ over an alphabet of $A$ letters. It also takes at most $1+\log t $ bits to encode the length of a particular context. Therefore, $\hat L_{1}(X_1^N) \leq N^{1-\delta}({t}\log A+\log t+2)\leq$ $[(\log N)^{2} {\log A}+\log((\log N)^{2}) +2]N^{1-\delta}$ bits. \bigskip \item[2)] $\hat L_{2}(X_1^N)$ is the length of a binary word $\hat m_2(X_1^N)$ ($t\log A $ bits long), which is an uncompressed binary mapping of $X_1^{t}$, the first $t$ letters of $X_1^N$. \bigskip \item[3)] Observe that given $\hat m_{1}(X_1^N)$ and $\hat m_{2}(X_1^N)$, the decoder can re-generate $X_1^{t}$ and the sub-tree $ T_{u}'(X_1^N)$ that achieves $H_{u}(t,N',M')$.
Given $T_{u}'(X_1^N)$ and a prefix $X_1^{t}$ of $X_1^N$, $X_{t +1}^N$ is compressed by a context-tree algorithm for FSMX sources [3], [4], which is tailored to the contexts that are the leaves of $ T_{u}'(X_1^N)$, yielding a length function $\hat L_{3}(X_1^N)\leq NH_{u}(t,N',M')+O(1)$, where the small $O(1)$ redundancy is achieved by combining arithmetic coding with Krichevsky-Trofimov mixtures [4]. \end{itemize} \item[D)] Thus, $$\hat L(X_1^N) \leq N[H_{u}(t,N',M')+O((\log N)^{2}{N^{-\delta}})]$$ Repeat steps 1), 2) and 3) above for the $N$-vectors $X_{(i-1)N+1}^{iN};i=2,3,\ldots,M$ and denote by $H_{u,i}(t,N',M')$ the quantity $H_{u}(t,N',M')$ that is derived for the $i$-th $N$-vector. \end{itemize}

\paragraph{Proof of Theorem 2:} Let $N"=N^{1-2\delta}$ and let $ T_{u}"(X_1^{MN})$ be the subset of contexts for which the minimisation that yields $ H_u(N,N",M)$ is achieved. The proof of Theorem~2 follows from the construction and by the convexity of the entropy function, since $$ \frac{1}{M}\sum_{i=0}^{M-1} H_{u,i}\Bigl(t,N',M'\Bigr) \leq H_u(N, N",M)+ O(N^{-\delta})+O\left(\frac {[\log N]^4}{N}\right) $$ where the term $O(N^{-\delta})$ is an upper-bound on the relative frequency of instances in $X_1^N$ that have as a context a leaf of $T'(X_1^{N})$ that is a suffix of a leaf of $ T_{u}"(X_1^{MN})$ and is therefore not included in the set of contexts that achieve $H_u(N,N",M)$. The term $O(\frac{[\log N]^4}{N})$ is due to end-effects and follows from Lemma 2.7 in [5, page 33] (see the proof of Lemma~1 in the Appendix). In conclusion, the proposed algorithm is based on an empirical variable-length Markov chain (VLMC) and is a universal context-tree algorithm but, as mentioned above, not necessarily the best one in terms of computational complexity. This issue and others are thoroughly discussed in [4].

\section{Application to Universal Classification} A device called a \textit{classifier} (or discriminator) observes an individual training sequence of $m$ letters, $X_1^m$. The classifier's task is to consider individual test sequences of length $N$ and decide whether the test $N$-sequence has the same features as those that are captured by the training sequence, or is sufficiently different, according to some appropriate criterion. No a-priori information about the test sequences is available to the classifier aside from the training sequence. Following the discussion in the Introduction, a universal classifier $d(X_1^m,{\bf A}^N)$ for $N$-vectors is defined to be a mapping from ${\bf A}^N$ into $\{0,1\}$. Upon observing $Z_1^N$, the classifier declares $Z_1^N$ to be ${\it similar}$ to one of the $N$-vectors $X_{j+1}^{j+N};j=0,1,\ldots,m-N$, iff $d(X_1^m, Z_{1}^{N})=1$ (or, in some applications, if a slightly distorted version $\tilde Z_{1}^{N}$ of $Z_{1}^{N}$ satisfies $d(X_1^m, \tilde Z_{1}^{N})= 1$). Denote by $S_d(N,\epsilon,X_1^m)$ the set of $N$-sequences $Z_1^N$ which are declared to be similar to $X_1^m$ (i.e. $d(X_1^m, Z_{1}^{N})=1$), where $d(X_1^m, X_{j+1}^{j+N})=0$ should be satisfied by no more than $\epsilon (m-N+1)$ instances $j=0,1,2,\ldots,m-N$, and where $\epsilon$ is an arbitrarily small positive number. Also, given a particular classifier, let $D_{d}(N,X_1^m,\epsilon)= |S_d(N,\epsilon,X_1^m)|$, and let $$ H_{d}(N,X_1^m,\epsilon)=\frac {1}{N}\log D_{d}(N,X_1^m,\epsilon)\,. $$ Thus, any classifier is characterised by a certain $H_{d}(N,X_1^m,\epsilon)$. Given $X_1^m$, let $D_{min}(N,X_1^m,\epsilon)$ be the smallest achievable $D_{d}(N,X_1^m,\epsilon)$.
Denote by $d^*$ the particular classifier that achieves $D_{min}(N,X_1^m,\epsilon)=D_{d^*}(N,X_1^m,\epsilon)$ and let $$H_{min}(N,X_1^m,\epsilon)=\frac {1}{N}\log D_{d^*}(N,X_1^m,\epsilon)\,.$$ Naturally, if the classifier has the complete list of $N$-vectors that achieve $D_{min}(N,X_1^m,\epsilon )$, it can perform a perfect classification by making $d(X_1^m, Z_{1}^{N})=d^*(X_1^m, Z_{1}^{N})=1$ iff $Z_1^N=X_{j+1}^{j+N}$ for every instance $j=0,1,2,\ldots,m-N$ for which $X_{j+1}^{j+N} \in S_{d^{*}}(N,\epsilon,X_1^m)$. The discussion is constrained to cases where $H_{min}(N,X_1^m,\epsilon )>0$. Therefore, when $m$ is large, $D_{min}(N,X_1^m,\epsilon)$ grows exponentially with $N$.

The attention is limited to classifiers that have a storage-space complexity that grows only linearly with $N$. Thus the training sequence cannot be stored within this limited memory. Rather, the classifier should represent the long training sequence by a short ``signature'' and use it to classify incoming test sequences of length $N$. It is shown that it is possible to find such a classifier, denoted by $d(X_1^m,\epsilon, {\bf A}^N)$, that is essentially optimal in the following sense (as discussed and motivated in the Introduction): an $\epsilon$-efficient classifier $d(X_1^m,\epsilon, Z_{1}^N); Z_1^N \in {\bf A}^N$, is defined to be one that satisfies the condition that $d(X_1^m,X_{j+1}^{j+N})=1$ for $(1-\hat\epsilon)(m-N+1)$ instances $j=0,1,\ldots,m-N$, where $\hat\epsilon\leq \epsilon $. This corresponds to a rejection of at most an $\epsilon$-fraction of the $N$-vectors in $X_1^m$. Also, an optimal ``$\epsilon$-efficient'' universal classifier should satisfy the condition that $d(X_1^m,Z_1^N)=1$ is satisfied by no more than $2^{N[H_{min }(N,X_1^m,\epsilon)+\epsilon]}$ $N$-vectors $Z_1^N$. Observe that in the case where ${\bf X}$ is a realization of a finite-alphabet stationary ergodic process, $\lim_{\epsilon \to 0}\lim_{N \to \infty}\limsup_{m \to \infty}H_{min}(N,X_1^m,\epsilon)$ is almost surely equal to the entropy-rate of the source and, for large enough $m$ and $N$, the classifier efficiently identifies typical $N$-vectors without searching the exponentially large list of typical $N$-vectors, by replacing the long training sequence with an ``optimal sufficient statistics'' that occupies a memory of $O(N)$ only.

In the following, a universal context classifier for $N$-vectors with a storage-space complexity that is linear in $N$ is shown to be essentially optimal for large enough $N$ and $m$. \paragraph{Description of the universal classification algorithm:} Assuming that $N$ divides $m$, let $M= \frac{m}{N}$ and let $N"=\lfloor N^{1-2\epsilon}\rfloor$. \begin{itemize} \item[A)] Evaluate $H_{min}(N,X_1^m,\frac{1}{2}{\epsilon})$. This step is carried out by generating an ordered list of the different $N$-vectors that appear in $X_1^m$, according to their decreasing empirical probability $P_{MN}(Z_1^N,N);Z_1^N \in {\bf A}^N$ (see the Glossary in Section B). Let $S_{min}(N,X_1^m,\frac{1}{2}{\epsilon})$ be the smallest set of the most probable $N$-vectors such that $P_{MN}[S_{min}(N,X_1^m,\frac{1}{2}{\epsilon})]\geq 1-\frac{1}{2}{\epsilon}$. Then $H_{min}(N,X_1^m,\frac{1}{2}{\epsilon})=\frac{1}{N}\log |S_{min}(N,X_1^m,\frac{1}{2}{\epsilon})|$. \item[B)] First pass: gather all the contexts that are not longer than $t= {\lceil (\log N)^2\rceil}$ and that each appear at least $MN"$ times in the training sequence $X_1^m$, and generate a context tree. Second pass: compute $ H_u(N,N",M)$, where $H_u(N,N",M)$ is given by Eq (5), with $N"$ replacing $K$.
Let $ T_{u}"(X_1^{m})$ be the subset of contexts for which the minimisation that yields $ H_u(N,N",M)$ is achieved. Clearly, $|T_{u}"(X_1^{m})|\leq N"$. The computational complexity of steps A) and B) is $O(m)$ (using suffix-tree methods for the first pass and dynamic programming for the second pass [16], [17], [4]). Note, however, that steps A) and B) above are preliminary pre-processing steps that are carried out once, prior to the construction of the classifier that is tailored to the training data $X_1^m$, and are not repeated for each test-sequence. The subset $ T_{u}"(X_1^{m})$ is the ``signature'' of $X_1^m$ which, together with the quantity $H_{min}(N,X_1^m,\frac{1}{2}\epsilon)$ and the corresponding set of empirical probabilities $P_{MN}(x_{1}|x_{-i+1}^0,N)$ for every ${x_{-i+1}^0 \in T_u"(X_1^m)}$, is stored in the memory of the classifier (see the Glossary in Section B). The storage complexity is at most $O(N)$. \item[C)] Let $x_{-i+1}^0$ denote a context in the set $ T_{u}"(X_1^{m})$. Compute $$h_{u}(Z_1^N,X_1^m, T_{u}",t)=-\sum_{x_{-i+1}^0 \in T_u"(X_1^m)} P_{N}(x_{-i+1}^0,t)\sum_{x_1 \in {\bf A}}P_N(x_{1}|x_{-i+1}^0,t) \log P_{MN}(x_{1}|x_{-i+1}^0,N)\,,$$ where $P_N(x_{1}|x_{-i+1}^0,t)$ and $P_{N}(x_{-i+1}^0,t)$ are derived from the test sequence $Z_1^N$. \item[D)] Let $S(Z_1^N,\mu)$ be the set of all $\tilde Z_1^N \in {\bf A}^N$ such that $g(\tilde Z_1^N,Z_1^N)\leq \mu$, where $g(\cdot,\cdot)$ is some non-negative distortion function satisfying $g(\tilde Z_1^N,Z_1^N)=0$ iff $\tilde Z_1^N=Z_1^N$. Given a test sequence $Z_1^N$, let $$\Delta (Z_1^N,X_1^m,\mu)=\min_{\tilde Z_1^N \in S(Z_1^N,\mu)}\Bigl[h_{u}(\tilde Z_1^N,X_1^m, T_u",t)- \min\bigl[H_{u}(t,N',M'), H_{min}(N,X_1^m,\tfrac{1}{2}{\epsilon})\bigr]\Bigr]\,, $$ where here $H_{u}(t,N',M')$ is evaluated for $Z_1^N$, $N'=N^{1-\epsilon}$ and $H_{u}(t,N',M')= H_{u}(t,K,M'); K=N'$ (see the Glossary in Section B). Note that if $\mu >0$, the number of computational steps that are involved in the minimisation may grow exponentially with $N$. Now set the particular classifier $\hat d(Z_1^N,\epsilon, X_1^m)$ to satisfy $\hat d(Z_1^N,\epsilon, X_1^m)=1$ iff $\Delta (Z_1^N,X_1^m,\mu) \leq {\epsilon}'$, where ${\epsilon}'$ is set so as to guarantee that $\hat d(X_1^m,\epsilon, X_{j+1}^{j+N})=1$ for at least $(1-\epsilon)(m-N+1)$ instances $j=0,1,\ldots,m-N$ of $X_1^m$. If $H_{u}(t,N',M') +{\epsilon}'>\log A$, set $\hat d(Z_1^N,\epsilon, X_1^m)=1$ for every $Z_1^N \in {\bf A}^N$. \end{itemize} Refer to a test sequence $Z_1^N$ as being ${\epsilon}'$-acceptable (relative to $X_1^m$) iff $\Delta (Z_1^N,X_1^m,\mu) \leq {\epsilon}'$. It should be noted that for some small values of $N$, one may find values of $m$ for which $ H_{{\hat d}}(N,X_1^m,\epsilon)$ is much larger than $H_{min}( {\bf X},\epsilon)$. It should also be noted that the space complexity of the proposed classifier is $O(N)$ and that, if no distortion is allowed (i.e. $\mu =0$), the time complexity of the proposed algorithm is linear in $N$ as well. Now set ${{\epsilon}'}=\frac{1}{2}{\epsilon}^2 +O(N^{-\epsilon})$.
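As an illustration of step D) in the distortion-free case ($\mu=0$), the following sketch evaluates $h_u$ and the acceptance rule. The dictionary layout of the stored ``signature'' statistics is an assumption made for readability, not a prescribed data structure.

\begin{verbatim}
# Illustrative sketch (assumed data layout): the acceptance test of step D)
# with mu = 0, so the minimisation over S(Z_1^N, mu) is trivial.
from math import log2

def h_u(test_stats, p_train):
    """Cross-entropy h_u(Z_1^N, X_1^m, T_u'', t): context and next-letter
    frequencies of the test sequence, scored against the stored training
    probabilities p_train[context][letter]."""
    total = 0.0
    for ctx, (p_ctx, next_probs) in test_stats.items():
        for letter, p in next_probs.items():
            if p > 0.0:
                total -= p_ctx * p * log2(p_train[ctx][letter])
    return total

def accept(test_stats, p_train, H_u_test, H_min_train, eps_prime):
    """Return 1 iff Delta = h_u - min(H_u(t,N',M'), H_min) <= eps'."""
    delta = h_u(test_stats, p_train) - min(H_u_test, H_min_train)
    return 1 if delta <= eps_prime else 0
\end{verbatim}

Here test\_stats maps each context of $T_u"(X_1^m)$ to its empirical probability in $Z_1^N$ together with the empirical next-letter distribution, both derived from the test sequence as in step C).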
With this choice of ${\epsilon}'$, the following holds. \begin{theorem} \quad 1) For any arbitrarily small positive $\epsilon$, the classifier that is described above accepts no more than $2^{NH_{\hat d}(N,X_1^m,\epsilon)}$ $N$-vectors where, if $| S(Z_1^N,\mu)|\leq 2^{N{\epsilon}"}$, $$ \limsup_{N \to \infty}\liminf_{m \to \infty} H_{{\hat d}}(N,X_1^m,\epsilon)\leq H_{min}( {\bf X},\frac{1}{2}{\epsilon})+\frac{1}{2}{\epsilon}^2+{\epsilon}"$$ Observe that $H_{min}( {\bf X},\frac{1}{2}{\epsilon})$ $ \leq H_{min}( {\bf X},{\epsilon}) +{\delta}(\epsilon)$, where $\lim_{\epsilon \to 0}{\delta}(\epsilon)=0$.

2) There exist $m$-sequences $X_1^m$ such that $H_{\hat d}(N,X_1^m,\epsilon)$ is much smaller than $\log A$ and for which no classifier can achieve $\hat H_{min}(N',X_1^m,\epsilon) <\log A -\epsilon$ if $\log N' <\log N$. \end{theorem} Thus, for every $N\geq N_0({\bf X})$, the proposed algorithm is essentially optimal for some $m_0=m_0(N,{\bf X})$ and is characterised by a storage-space complexity that is linear in $N$. Furthermore, if one sets $\mu =0$ (i.e. no distortion), the proposed algorithm is also characterised by a linear time-complexity. The proof of Theorem 3 appears in the Appendix.

Also, it follows from the proof of Theorem 3 that if one generates a training sequence ${\bf X}$ such that $\liminf_{M \to \infty}H_u(N,N,M)=\lim_{M \to \infty}H_u(N, N,M)$ (i.e. a ``stationary'' training sequence), then there always exist positive integers $N_0$ and $m_0=m_0(N_0)$ such that the proposed classifier is ${\it essentially}$ optimal for ${\it any}$ $N>N_0({\bf X})$ and for any $m>m_0(N_0)$, rather than for only some specific values of $m$ that depend on $N$.

Now, let $Y_1^N$ and $Z_1^N$ be two $N$-sequences and assume that no training sequence is available. However, one would still like to test the hypothesis that there exists some training sequence ${\bf X}$ such that both $N$-sequences are acceptable by this essentially $\epsilon$-efficient algorithm with respect to ${\bf X}$. (This is reminiscent of the ``common ancestor'' problem in computational biology, where one may think of ${\bf X}$ as a training sequence that captures the properties of a possible ``common ancestor'' [15] of two DNA sequences $Y_1^N$ and $Z_1^N$.) \begin{corollary} Let $Y_1^N$ and $Z_1^N$ be two $N$-sequences and let $S(Y_1^N,Z_1^N)$ be the union of all their corresponding contexts that are not longer than $t= {\lceil (\log N)^2\rceil}$ and have an empirical probability of at least $\frac{1}{N^{1-\epsilon}}$. If there does not exist a conditional probability distribution $$ P(X_1|X_{-i+1}^0);X_{-i+1}^0 \in S(Y_1^N,Z_1^N)$$ such that $$\sum_{X_1 \in {\bf A}} P_{N,Y_1^N}(X_1|X_{-i+1}^0,t)\log \frac{P_{N,Y_1^N}(X_1|X_{-i+1}^0,t)}{P(X_1|X_{-i+1}^0)} \leq {\epsilon}$$ and, at the same time, $$\sum_{X_1 \in {\bf A}} P_{N,Z_1^N}(X_1|X_{-i+1}^0,t)\log \frac{P_{N,Z_1^N}(X_1|X_{-i+1}^0,t)}{P(X_1|X_{-i+1}^0)} \leq {\epsilon}$$ (where $ P_{N,Y_1^N}(X_1|X_{-i+1}^0,t)=P_N(Y_1|Y_{-i+1}^0,M')$ is empirically derived from $Y_1^N$, $P_{N,Z_1^N}(X_1|X_{-i+1}^0,t)=P_N(Z_1|Z_{-i+1}^0,M')$ is empirically derived from $Z_1^N$, and $M'=\frac{N}{t}$; see the Glossary in Section B), then there does not exist a training sequence ${\bf X}$ and a positive integer $m$ for which $H_{min}(N,X_1^m,\frac{1}{2}\epsilon)> \epsilon$ and $H_{\hat d}(N,X_1^m,\epsilon)\leq H_{min}( N, X_1^m,\frac{1}{2}{\epsilon}) + \frac{1}{2}{\epsilon}^2$ hold and, at the same time, both $Y_1^N$ and $Z_1^N$ are ${\epsilon}'$-acceptable relative to $X_1^m$.
Here, the condition $H_{min}(N,X_1^m,\frac{1}{2}\epsilon)> \epsilon$ guarantees that $X_1^m$ is not a degenerate training sequence and is exponentially ``rich'' in distinct $N$-vectors. \end{corollary} In conclusion, it should be noted that it has not been claimed that the particular ``test-bench'' algorithm that was introduced here as a theoretical tool to establish Theorem 3 yields the smallest possible computational time complexity. Together with the ``individual sequence'' justification for the essential optimality of the context-tree universal data-compression algorithm that was established above, these results may contribute a theoretical ``individual sequence'' justification for the Probabilistic Suffix Tree approach in learning and in computational biology [7], [8].

\section*{Acknowledgement} Helpful discussions with Neri Merhav and helpful comments by the anonymous reviewers are acknowledged with thanks.

\section*{Appendix}

\paragraph{Proof of Proposition 1:} By definition, \begin{multline} \rho_L({\bf X},N,M)\geq \max_{i=1}^{N-1} \frac{1}{NM} \left[\sum_{j=0}^{M-2}L(X_{i+jN}^{i+(j+1)N-1})\right] \geq \frac{1}{N}\sum_{i=1}^N \sum_{Z_1^{N}\in{\bf A}^N}P_{MN,i}(Z_{1}^{N},N)L(Z_1^{N})\\ = \frac{1}{N}\sum_{Z_1^N\in{\bf A}^N} P_{MN}(Z_1^{N},N)L(Z_1^N) \geq H_{MN}(N) \end{multline} which leads to Proposition 1 by the Kraft inequality.

\paragraph{Proof of Lemma 1:} Let $N_0$, $M_0 $ and $M$ be positive numbers and let $\epsilon=\epsilon(M)$ be an arbitrarily small positive number satisfying $\log M_0 >{N_0}\log A$, $ H_u({\bf X}) \geq H_u({\bf X}, N_0)-\epsilon$, and $M\geq {{N_0}}^2$ such that $H_u({M_0}N_0,{M_0}N_0,{M}) \geq H_u({\bf X})-\epsilon$, where $H_u({M_0}N_0,{M_0}N_0,{M})=H_u({M_0}N_0,K,{M})$ with $K={M_0}N_0$ (see the Glossary in Section B). Note that for any vector $Z_{1}^{M_{0}N_0}$ the block-length is $M_{0}N_0$. Thus, the parameter $t$ that determines $ K_1(Z_{1}^{M_{0}N_0},K)$ here is $t=\lceil (\log {M_{0}N_{0}})^2 \rceil$ (see Section B). Thus, $t> N_0^2>N_0$. Therefore, by the properties of the entropy function, by applying the chain-rule to $H_{M{M_0}N_0}(N_0,{M_0}N_0)$ and by Eq~(5), $$ H_{{{M}M_0}N_0}(N_0,{M_0}N_0) \geq H_{{MM_0}N_0}(Z_{N_0}|Z_{1}^{N_0-1},{M_0}N_0)\geq H_u({M_0}N_0,{M_0}N_0,{M})-\frac{\log A}{N_0}$$ $$\geq H_u({\bf X},{N_0})-2\epsilon - \frac{\log A}{N_0}\,, $$ where by Eq (4), the term $\frac{\log A}{N_0}$ is an upper-bound on the total contribution to $H_{{MM_0}N_0}(Z_{N_0}|Z_{1}^{N_0-1},{M_0}N_0)$ by vectors $Z_{1}^{N_0-1}$ for which $P_{M{M_0}N_0}(Z_{1}^{N_0-1},M_{0}N_0)<\frac{1}{M_0{N_0}}$. Now, $ |P_{{M M_0}N_0}(Z_1^{N_0},{M_0}N_0)-P_{{MM_0}N_0}(Z_1^{N_0},N_0)| \leq \frac{{M_0}{N_0}}{{MM_0}N_0}\leq \frac{1}{{N_0}^2}= d_{N_0} \, $. By Lemma 2.7 in [5, page 33], for any two probability distributions $P(Z_1^{N_0})$ and $Q(Z_1^{N_0})$, $$ \left|-\sum_{Z_1^{N_0}\in {\bf A}^{N_0}} P(Z_1^{N_0})\log \frac {P(Z_1^{N_0})} {Q(Z_1^{N_0})}\right| \leq d \log \left(\frac{A^{N_0}}{d}\right)\,, $$ where $d=\max_{Z_1^{N_0} \in {\bf A}^{N_0}}|P(Z_1^{N_0})-Q(Z_1^{N_0})|$. Hence, by letting $d_{N_0}$ play the role of $d$ in the above expression, $$ \Bigl|H_{M{M_0}N_0}(N_0,{M_0}N_0)-H_{M{M_0}N_0}(N_{0},N_{0})\Bigr| \leq \frac{1}{{N_0}^2}\Bigl[N_0 \log {A} +2\log {N_0}\Bigr] $$ and therefore $$ H_{M{M_0}N_0}(N_0,{N_0})\geq H_u({\bf X},N_0)-2\epsilon - \frac{1}{{N_0}^2}\Bigl[N_0 \log {A} +2\log {N_0}\Bigr]-\frac{\log A}{N_0}\,, $$ which, by Eqs~(6) and (7) and by setting $\epsilon =\frac{1}{N_0}$, proves Lemma 1 (Eq (8)).
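The continuity bound that is borrowed from [5] can also be examined numerically. The short illustrative script below compares $|H(P)-H(Q)|$ with $d\log(K/d)$ for a randomly perturbed distribution on $K$ points; the support size $K$ stands in for $A^{N_0}$, and the exact constants of Lemma 2.7 in [5] are not reproduced here, so this is a sanity check rather than a verification.

\begin{verbatim}
# Numerical sanity check (illustrative only): entropy continuity in the
# spirit of Lemma 2.7 of [5], with K playing the role of A**N0.
import random
from math import log2

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0.0)

random.seed(0)
K = 16
p = [random.random() for _ in range(K)]
s = sum(p)
p = [x / s for x in p]
q = [max(x + random.uniform(-1e-3, 1e-3), 1e-12) for x in p]
s = sum(q)
q = [x / s for x in q]
d = max(abs(a - b) for a, b in zip(p, q))   # plays the role of d_{N_0}
print("entropy gap:", abs(entropy(p) - entropy(q)))
print("bound d*log2(K/d):", d * log2(K / d))
\end{verbatim}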
\paragraph{Proof of Theorem~1:} Consider the following construction of $X_1^{NM}$: Let $h$ be an arbitrarily small positive number and $\ell$ be a positive integer, where $h$ and $\ell$ satisfy $N={\ell}2^{{h}\ell}$, and assume that $\ell$ divides $N$. \begin{itemize} \item[1)] Let $S_{\ell,h}$ be a set of some $T'=\frac{N}{\ell}=2^{h \ell}$ distinct $\ell$-vectors from ${\bf A}^{\ell}$. \item[2)] Generate a concatenation $Z_1^{N}$ of the $T'$ distinct $\ell$-vectors in $S_{\ell,h}$. \item[3)] Return to step 2 for the generation of the next $N$-block. \end{itemize} Now, by construction, the $M$ consecutive $N$-blocks are identical to each other. Hence, $ \rho_L({\bf X},N,M)\geq \frac{1}{NM}$ and, by Eq (1), $$\frac{1}{N}\geq P_{MN,i}(Z_1^{\ell},N)\geq \frac {M-1}{MN};\quad i=1,2,\ldots,N.$$ Thus, by construction, $P_{MN}(Z_1^{\ell},N)\geq \frac {M-1}{MN}\,.$ Furthermore, there exists a positive integer $N_0=N_0(h)$ such that for any $N \geq N_0$, $$H_{MN}(\ell,N) \leq \frac{\log N}{\ell} \leq 2h\,,$$ where $$H_{MN}(\ell,N)=-\frac{1}{\ell}\sum_{Z_1^{N} \in {\bf A}^{N}} P_{MN}(Z_1^{\ell},N)\log P_{MN}(Z_1^{\ell},N)\,.$$ Observe that any vector $Z_{j-i}^{j};i+1\leq j \leq MN;1\leq i \leq \ell -1$, except for a subset of instances $j$ with a total empirical probability measure of at most $\frac { 1}{2^{h \ell}}$, is therefore a suffix of $Z_{j-K_1(X_1^N,\hat K)}^{j}$, where $\hat K=N\frac{M}{M-1}$, and $K_1(X_1^N,\hat K) \leq t$ for any $ N>N_0(h)$, where $t=\lceil (\log N)^2 \rceil$. Thus, by applying the chain-rule to $H_{MN}(\ell,N)$, by the convexity of the entropy function and by Eq~(5), \begin{equation} H_u({N},\hat K,M) \leq H_{MN}(Z_1|Z_{-\ell+1}^0,N) \leq H_{MN}(\ell,N)\leq 2h \end{equation} Also, $$\limsup_{M \to \infty}H_u(N,\hat K,M)=\limsup_{M \to \infty}H_u(\hat K,\hat K,M-1)=H_u({\bf X},N)\leq 2h$$ (see the Glossary in Section B).

Consider now the class $\sigma_{\ell,h}$ of all sets like $S_{\ell,h}$ that consist of $2^{h\ell}$ distinct $\ell$-vectors. The next step is to establish that no compression is possible for $N$-sequences which consist of the $2^{h\ell}$ distinct $\ell$-vectors that are selected from some member of the class $\sigma_{\ell,h}$, at least for some such $N$-sequences. Let the \textit{normalised} length-function $\bar L(Z_1^N)$ be defined by $$\bar L(Z_1^N)= -\log \frac {2^{-L(Z_1^N)}}{\sum_{Z_1^N \in {\bf A}^N}2^{-L(Z_1^N)}}\,.$$ Clearly, $\bar L(Z_1^N)\leq L(Z_1^N)$, since $ L(Z_1^N)$ satisfies the Kraft inequality while $\bar L(Z_1^N)$ satisfies it with equality, $2^{-\bar L(Z_1^N)}$ being a probability measure. Then, $$L(Z_1^N) \geq \bar L(Z_1^N)= \sum_{i=0}^{\frac {N}{\ell}-1} \bar L(Z_{i\ell +1}^{(i+1)\ell}|Z_1^{i\ell})\,, $$ where $$ \bar L(Z_{i\ell +1}^{(i+1)\ell}|Z_1^{i\ell})=\bar L(Z_{1}^{(i+1)\ell})- \bar L(Z_1^{i\ell})$$ is a (normalised) conditional length-function that, given $Z_1^{i\ell}$, satisfies the Kraft inequality with equality, since $2^{-\bar L(Z_{i\ell +1}^{(i+1)\ell}|Z_1^{i\ell})}$ is a conditional probability measure. \begin{lemma} For any $h>0$, any arbitrarily small positive number $\delta$, any $N\geq N_0=N_0(\ell,h)$ and any $\bar L(x_1^{\ell}|x_{-N+1}^0)$, there exists a set of $2^{h \ell}$ $\ell$-vectors such that $$\sum_{x_1^{\ell} \in {\bf A}^{\ell}}P_{N}(x_1^{\ell},\ell) \bar L(x_1^{\ell}|x_{-N+1}^0) \geq {\ell} (1-\delta)(\log A -\delta)$$ for all $x_{-N+1}^0$ which are concatenations of $\ell$-vectors from $S_{\ell,h}$, as described above.
\end{lemma} \paragraph{Proof of Lemma 2:} The number of possible sets $S_{\ell, h}$ that may be selected from the $A^{ \ell}$ $\ell$-vectors over ${\bf A}$ is $$M_{\ell,h}= {2^{(\log A) \ell} \choose 2^{h \ell}}\,.$$ Given a particular $\bar L(x_{1}^{\ell}|x_{-N+1}^0)$, consider the collection ${\bf M}_{\ell,h,\delta|x_{-N+1}^0 }$ of all sets $S_{\ell,h,\delta}$ that consist of at most $(1-\delta) 2^{h\ell}$ vectors selected from the set of vectors $\hat x_1^{\ell}$ for which $\bar L(\hat x_{1}^{\ell}|x_{-N+1}^0)\geq (\log A-\delta) \ell$ (observe that there are at least $2^{(\log A) \ell}-2^{(\log A-\delta) \ell}$ such vectors). The collection $ {\bf M}_{\ell,h,\delta |x_{-N+1}^0}$ is referred to as the collection of ``good'' sets $S_{\ell,h,\delta}$ (i.e. sets yielding $\bar L(x_1^{\ell}|x_{-N+1}^0) \leq (\log A-\delta) \ell$). It will now be demonstrated that $ \sum_{x_{-N+1}^0 \in {{\bf A}}^{N}} \left|{\bf M}_{\ell,h,\delta|x_{-N+1}^0 }\right|$ is exponentially smaller than ${M}_{\ell, h}$ if $N < \delta 2^{h \ell}(1-h)\ell$. Hence, for any conditional length-function $\bar L(x_{1}^{\ell}|x_{-N+1}^0)$ and any $x_{-N+1}^0 \in {\bf A}^N$, most of the $M_{\ell,h}$ sets $S_{\ell,h}$ will not contain a ``good'' $S_{\ell,h,\delta} \in {\bf M}_{\ell,h,\delta|x_{-N+1}^0 }$, and therefore less than $\delta 2^{h\ell}$ of the $2^{h \ell}$ $\ell$-vectors in $S_{\ell,h}$ will be associated with an $\bar L(x_1^{\ell}|x_{-N+1}^0)< (\log A- \delta)\ell$. The cardinality of $ {\bf M}_{\ell,h,\delta |x_{-N+1}^0}$ is upper bounded by $$\sum_{j=\delta 2^{h\ell}}^{ 2^{h\ell}} {{2^{\ell \log A}-2^{(\log A-\delta) \ell}}\choose{2^{h\ell}-j}} {{2^{(\log A-\delta) \ell}}\choose {j}}\,.$$ Now, by [3], one has for a large enough positive integer $n$, $$\log_2 {n \choose {pn}}=[h(p) +\epsilon(n)]n\,,$$ where $h(p)=-p\log_2 p -(1-p)\log_2 (1-p)$ and where $\lim_{n \to \infty}\epsilon(n)=0$. Thus, \begin{equation} \log \frac{\sum_{x_{-N+1}^0 \in {\bf A}^N} \left|{\bf M}_{\ell,h,\delta |x_{-N+1}^0}\right|} {M_{\ell,h}} \leq - \delta 2^{h\ell}(1-h)\ell +N +{\epsilon'(N)}N \end{equation} where $\lim_{N \to \infty}{\epsilon'(N)}=0$. Therefore, if $N < \delta 2^{h \ell}(1-h)\ell$, there exists some $S_{\ell,h}$ for which $$\sum_{x_1^{\ell} \in {\bf A}^{\ell}}2^{-h \ell} \bar L(x_1^{\ell}|x_{-N+1}^0) \geq {\ell} (1-\delta)(\log A -\delta)$$ for all $N$-vectors $x_{-N+1}^0 \in {\bf A}^N$. Hence, by construction, there exists some $S_{\ell,h}$ for which $$ L(x_1^N) \geq \bar L(x_1^N)=\sum_{i=0}^{\frac {N}{\ell}-1} \bar L\Bigl(x_{i\ell +1}^{(i+1)\ell}|x_1^{i\ell}\Bigr)\geq N\Bigl[(1-\delta)(\log A-\delta)+{\epsilon'(N)}\Bigr]\,.$$ Therefore, it follows that the class $C_{N_0,M_0,\delta}$ is not empty since, by construction, the sequences that are generated from the sets $S_{\ell,h}$ in the class $\sigma_{\ell,h}$ of cardinality $M_{\ell,h}= {2^{(\log A) \ell} \choose 2^{h \ell}}$ are included in $C_{N_0,M_0,\delta}$ for $h=\frac{\delta}{2}$ and for $\ell$ that satisfies $N_0=\ell2^{h\ell}$. Moreover, by Lemma 1, it follows that every sequence ${\bf X}$ is in the set $C_{N_0,M_0,\delta}$ for large enough $N_0$ and $M_0=M_0(N_0)$. This completes the proof of Lemma 2 and, setting $h=\delta$, the proof of Theorem 1.
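The binomial estimate $\log_2 {n \choose pn}=[h(p)+\epsilon(n)]n$ invoked above is easy to verify numerically; the short illustrative script below does so for $p=1/4$.

\begin{verbatim}
# Illustrative check of log2 C(n, pn) ~ h(p) n for the binary entropy h.
from math import comb, log2

def h(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.25
for n in (100, 1000, 10000):
    print(n, log2(comb(n, int(p * n))) / n, h(p))
# the per-symbol exponent approaches h(0.25) ~ 0.811 as n grows
\end{verbatim}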
\paragraph{Proof of Theorem 3:} Consider the one-to-one mapping of $X_{1}^{N}$ with the following length-function (following the definition of $\hat L(X_1^N)$ in the proof of Theorem 2 in Section B above): \begin{itemize} \item[1)] $L(Z_{1}^{N})=2+N[H_u(t,N',M')+O((\log N)^{2}N^{-\epsilon})]$ if $H_u(t,N',M') \leq H_{min}(N,X_1^m,\frac{1}{2}\epsilon )$ and if $Z_{1}^{N} \in S_{d^*}(N, X_1^m,\frac{1}{2}\epsilon)$, where $H_u(t,N',M')$ is evaluated for $Z_1^N$. \item[2)] $L(Z_{1}^{N})=2+N[H_{min}(N,X_1^m,\frac{1}{2}\epsilon)]$ if $H_u(t,N',M') > H_{min}(N,X_1^m,\frac{1}{2}\epsilon)$ and if $Z_{1}^{N} \in S_{d^*}(N,X_1^m,\epsilon)$. \item[3)] $L(Z_{1}^{N}) \leq 2+N[H_u(t,N',M')+O((\log N)^{2}N^{-\epsilon})]\,$ otherwise. \end{itemize} Now, $$\frac{1}{m-N+1}\sum_{j=0}^{m-N}L(X_{j+1}^{j+N}) \geq H_{MN}(N)\,.$$ For any $N>N_0$ and ${\it some}$ $ M>M_0 =M_0(N_0)$ (see Eq (6) and the Glossary in Section B), $H_u(N,N",M)\leq H_{MN}(N)+\frac{1}{4}{\epsilon}^2$. Thus, $$\frac{1}{m-N+1}\sum_{j=0}^{m-N}L(X_{j+1}^{j+N}) \geq H_u(N,N",M)-\frac{1}{4}{\epsilon}^2\,.$$ Also, by construction (see Section C), $$h_{u}(Z_1^N,X_1^m, T_{u}",t)=-\sum_{z_{-i+1}^0 \in T_u"(X_1^m)} P_{N}(z_{-i+1}^0,t)\sum_{z_1 \in {\bf A}}P_N(z_{1}|z_{-i+1}^0,t) \log P_{MN}(z_{1}|z_{-i+1}^0,N)\,,$$ where $P_N(z_{1}|z_{-i+1}^0,t)$ and $P_{N}(z_{-i+1}^0,t)$ are derived from the test sequence $Z_1^N$. Let $P_{N,j}(x_{-i+1}^1,t)$ denote $P_{N}(x_{-i+1}^1,t)$ where, for $0\leq j \leq m-t$, $P_{N}(x_{-i+1}^1,t)$ is derived from the substring $X_{j+1}^{j+N}$ of $X_1^m$. Thus, $\left|\frac{1}{m-N+1}\sum_{j=0}^{m-N}P_{N,j}(x_{-i+1}^1,t)-P_{MN}(x_{-i+1}^1,t)\right|\leq \frac{(\log N)^2}{N}$ due to end-effects (see the Introduction). Also note that $ -\log P_{MN}(x_{1}|x_{-i+1}^0,N) \leq \log N$ for every $x_{-i+1}^1 \in T"_u(X_1^m)$. Hence, it follows that $$\frac{1}{m-N+1}\sum_{j=0}^{m-N}h_{u}(X_{j+1}^{j+N},X_1^m, T_{u}",t)\leq H_u(N,N",M)+O\left(\frac{[\log N]^3}{N}\right)\,.$$ Thus, $$\frac{1}{m-N+1}\sum_{j=0}^{m-N}\Bigl[h_{u}(X_{j+1}^{j+N},X_1^m, T_{u}",t)-L(X_{j+1}^{j+N})\Bigr] \leq \frac{1}{4}{\epsilon}^2+O\left(\frac{[\log N]^3}{N}\right)\,.$$ Let $$\bar \Delta (Z_{1}^{N},X_1^m)=h_{u}(Z_1^N,X_1^m, T_u",t)- H_u(t,N',M')\,, $$ where $H_u(t,N',M')$ is evaluated for $Z_1^N$. Let $T'(X_1^{N})$ be the set that consists of all contexts $x_{1}^{i-1}$ that appear in $X_1^N$ and satisfy $P_N(x_{1}^{i-1},t)\geq \frac{1}{N'}; i \leq t$, where $ N'=N^{1-\epsilon}$. Note that $\bar \Delta (Z_{1}^{N},X_1^m)$ is similar to a divergence measure: $\bar \Delta (Z_{1}^{N},X_1^m)+ O( N^{-\epsilon})\geq 0$, where the term $O(N^{-\epsilon})$ is an upper-bound on the relative frequency of instances in $Z_1^N$ that have as a context a leaf of $ T_{u}'(Z_1^{N})$ that is a suffix of a leaf of $ T_{u}"(X_1^{m})$ and is therefore not included in the set of contexts that achieve $H_u(N,N",M)$, where $H_u(N,N",M)$ is derived from $Z_1^N$. Thus, since $L(X_1^N)\leq H_u(t,N',M')+\frac{2}{N}+O((\log N)^{2} N^{-\epsilon})$, also $$h_{u}(X_1^N,X_1^m, T_{u}",t)-L(X_1^N)+ O((\log N)^{2} N^{-\epsilon})+\frac{2}{N}\geq 0\,.$$ Therefore, $$ \frac{1}{m-N+1}\sum_{j=0}^{m-N}\Bigl[h_{u}(X_{j+1}^{j+N},X_1^m, T_{u}",t)-L(X_{j+1}^{j+N})\Bigr] +O((\log N)^{2}N^{-\epsilon})+\frac{2}{N}\leq \frac {1}{2}{\epsilon}^2\,.$$ Note that $L(X_{j+1}^{j+N})= \min\bigl[ H_{min}(N,X_1^m,\frac{1}{2}\epsilon), H_u(t,N',M')\bigr]+O((\log N)^{2}N^{-\epsilon})+\frac{2}{N}$ if $Z_1^N \in S_{d^*}(N, X_1^m,\frac{1}{2}\epsilon)$, which is valid for $m(1-\frac{1}{2}\epsilon)$ instances $j$ in $X_1^m$, where $H_u(t,N',M')$ is evaluated for $X_{j+1}^{j+N}$.
Assume that all the other $\frac{m}{2}\epsilon$ instances $j$ are rejected by the classifier (i.e. $\hat d(X_1^m,\epsilon, \tilde Z_{j+1}^{j+N})=0$). Statement 1) of Theorem 3 then follows by the Markov inequality for non-negative random variables and by setting ${\epsilon}'=\frac{1}{2}{\epsilon}^2 +O((\log N)^{2}N^{-\epsilon})$, where (ignoring the effect of the term $O(N^{-\epsilon})$ for large $N$) at most $\frac{1}{2}{\epsilon}m$ instances $j$ with $Z_{1+j}^{N+j} \in S_{d^*}(N, X_1^m,\frac{1}{2}\epsilon)$ are rejected, on top of the $\frac {1}{2}{\epsilon}m$ instances for which $Z_{1+j}^{N+j}$ is not in $S_{d^*}(N, X_1^m,\frac{1}{2}\epsilon)$; this amounts to at most ${\epsilon}m$ rejected instances, as required from an ${\epsilon}$-efficient classifier. Also, since $h_{u}(Z_1^N,X_1^m, T_u",t)$ is a length function, the classifier accepts no more than $2^{N[H_{min}(N,X_1^m,\epsilon)+\frac{1}{2}{\epsilon}^2+ O(N^{-\epsilon})+{\epsilon}"]}$ $N$-vectors, where the term ${\epsilon}"$ is due to the fact that for $\mu>0$ the discussion is limited to cases where $| S(Z_1^N,\mu)|\leq 2^{N{\epsilon}"}$.

Consider the class of sequences ${\bf X}$ that are generated in the proof of Theorem 1 above. It is demonstrated there that for every such individual sequence, $H_u({\bf X},N)\leq 2h$, where $h$ is an arbitrarily small positive number. Thus, there exists an $m$ such that $H_{\hat d}(N,X_1^m,\epsilon ) \leq 2h +\frac{1}{2}{\epsilon}^2+{\epsilon}"$. Now, assume that the classifier is successively fed with every $Z_1^{N'} \in {\bf A}^{N'}$, and consider the list of all the $N'$-vectors that are accepted (i.e. $Z_1^{N'}:\hat d(Z_1^{N'},X_1^m,\epsilon)=1$) and a compression algorithm that assigns a length-function that is equal to $N'H_{\hat d}(N',X_1^m,\epsilon)+1$ to each of the accepted vectors and $N'\log A+1$ to any of the rejected $N'$-vectors. This results in a compression of $H_{\hat d}(N',X_1^m,\epsilon)+\epsilon +\frac{1}{N'}$. Following the proof of Lemma 2 that led to the proof of Theorem 1 above, and observing that even if a ``signature'' of $N'$ bits that is generated from $X_1^m$ is made available to a universal data-compression algorithm for $N'$-vectors, Theorem 1 still holds for some of the sequences that are generated as described above if $\log {N}'< \log N$. This leads to statement 2) of Theorem 3 and completes the proof of Theorem 3.
\section{Introduction} In the last five years we have witnessed a renaissance of charm spectroscopy. Several new charmed states have been observed, using data samples collected by so-called $B$-factories, i.e. $e^+e^-$ storage rings dedicated to the studies of CP violation in the sector of the beauty quark. These machines are running essentially at the center-of-mass (CMS) energy corresponding to the maximum of the $\Upsilon(4S)$ resonance (10.58~GeV/c$^2$). There are three such accelerators and detectors which are currently taking data. The oldest one, which has contributed a lot to heavy flavour physics in the past twenty years, is the CLEO apparatus~\cite{Kubota:1991ww,Briere:2001rn} at the CESR~\cite{CESR} storage ring (Cornell, USA). After collecting a data sample of~16~fb$^{-1}$, the CLEO collaboration has moved since 2003 to the lower energy working point corresponding to the maximum of the $\psi(3770)$. The other two detectors working at $B$-factories, the BaBar~\cite{BABAR} at PEP-II~\cite{PEPII} (Stanford, USA) and Belle~\cite{BELLE} at KEKB~\cite{KEKB} (Tsukuba, Japan), have collected in the last few years enormous data samples corresponding to 370~fb$^{-1}$ and 630~fb$^{-1}$, respectively. The KEKB is, in fact, the record holder as far as the luminosity is concerned, with its peak value of $1.65\times 10^{34}$~cm$^{-2}$s$^{-1}$. It is worthwhile to stress here that the cross-section for the continuum process $e^+e^-\to c\bar{c}$ (1.3~nb) is comparable to the one for the reaction $e^+e^-\to\Upsilon(4S)\to B\bar{B}$ (1.1~nb). As a result, $B$-factories can be safely considered as $c$-factories too. Moreover, charmed hadrons can be reconstructed here relatively easily, due to the ``clean'' environment provided by $e^+e^-$ collisions.

This paper is divided into several chapters, each of them discussing the observation of an individual new state and entitled with its name. The following new meson-like charmed hadrons are discussed: $X(3872)$, $Y(3940)$, $X(3940)$, $\chi^{\prime}_{c2}(3930)$, $Y(4260)$, $h_c$ and the $c\bar{s}$ states $D_{sJ}$. Then the following new observations of charmed baryons are described: $\Sigma_c(2800)$, $\Lambda_c(2940)$, $\Xi_{cx}(2980)$, $\Xi_{cx}(3077)$ and $\Omega_c^{*}$.

\section{X(3872)} The first new charmed resonance, marked as X(3872), was discovered by the Belle collaboration in 2003~\cite{X1BELLE} by analyzing exclusive decays\footnote{charge conjugate modes are included everywhere, unless otherwise specified.} $B^+\rightarrow \pi^+\pi^- J/\psi K^+, J/\psi\to l^+l^-$. The $B$ mesons were reconstructed using two kinematical variables: the energy offset $\Delta E = \sum_i E_i - E_{beam}$ and the beam-constrained mass $M_{bc}=\sqrt{E_{beam}^2 - \left(\sum_i \vec{p_i}\right)^2}$, where $E_i$ and $\vec{p_i}$ are the center-of-mass (CMS) energies and momenta of the selected $B$ meson decay products and $E_{beam}$ is the CMS beam energy. A very narrow peak in the invariant mass spectrum of the $\pi^+\pi^- J/\psi$ system was observed (Fig.~\ref{MASSX}) with a statistical significance above 10~$\sigma$. The mass of the resonance was determined to be ($3872.0\pm 0.6\pm 0.5$) MeV/c$^2$ and its width to be below 2.3 MeV (90\% C.L.), which is consistent with the detector resolution.
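As a simple illustration of these two variables (and not code from any experiment's software), the following sketch computes $\Delta E$ and $M_{bc}$ from CMS daughter energies and momenta, in GeV:

\begin{verbatim}
# Illustrative sketch: Delta E and the beam-constrained mass M_bc,
# computed from CMS daughter energies E_i and momenta p_i (GeV units).
from math import sqrt

def delta_e(energies, e_beam):
    """Delta E = sum_i E_i - E_beam."""
    return sum(energies) - e_beam

def m_bc(momenta, e_beam):
    """M_bc = sqrt(E_beam^2 - |sum_i p_i|^2), momenta as (px, py, pz)."""
    px, py, pz = (sum(p[k] for p in momenta) for k in range(3))
    return sqrt(e_beam ** 2 - (px ** 2 + py ** 2 + pz ** 2))

# For a correctly reconstructed B meson at the Y(4S), Delta E peaks at 0
# and M_bc peaks at the B mass, ~5.28 GeV/c^2 (E_beam ~ 5.29 GeV).
\end{verbatim}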
\begin{figure}[hbt] \begin{center} \includegraphics[height=5.0cm]{mpipijpsi_x_half-box.eps} \caption{The mass distribution of $J/\psi \pi^+\pi^-$ for the X(3872) resonance, as measured by the Belle collaboration.} \label{MASSX} \end{center} \end{figure} The observation of $X(3872)$ was very quickly confirmed by the CDF~\cite{XCDF}, D0~\cite{XD0} and BaBar~\cite{XBABAR} experiments. At first glance the $X(3872)$ would appear as an ideal candidate for one of the as yet unobserved charmonium states. Among $(c\bar{c})$ states, the ones expected to be closest in mass to X are those belonging to the $1D$ and $2P$ multiplets~\cite{CC1}--\cite{CC4}. However, it soon turned out that the properties of none of these states, discussed below, are in agreement with the measured properties of $X(3872)$. This fact stimulated the development of several theoretical models assuming the exotic nature of this new resonance. In particular, the coincidence of the X mass with the $D^0\bar{D^{*0}}$ threshold, i.e. ($3871.3\pm 1.0$)~MeV/c$^2$, has prompted many theoretical speculations that X(3872) may be a so-called deuson~\cite{XDEUSON1}--\cite{XDEUSON4}, i.e. a loosely bound molecular state of these two mesons, or a tetraquark, i.e. a tightly bound open-charm diquark-antidiquark state~\cite{XTETRAQ,FAUSTOV1}. Other models attributed the $X(3872)$ to a $(c\bar{c})$-gluon hybrid meson~\cite{XHYBRID}, a glueball with a $(c\bar{c})$ admixture~\cite{XGLUEB} or the so-called threshold cusp effect~\cite{XCUSP}.

The Belle collaboration has also provided the first evidence for two new decay modes of the $X(3872)$: $X\rightarrow \gamma J/\psi$ and $X\rightarrow \pi^+\pi^-\pi^0 J/\psi$~\cite{X0505037}, observed in exclusive $B$ meson decays to the final states $\gamma J/\psi K$ and $\pi^+\pi^-\pi^0 J/\psi K$, respectively. The yield of the decay $B\rightarrow \gamma J/\psi K$ plotted in bins of the $\gamma J/\psi$ invariant mass (Fig.~\ref{GAMM3PI}{\bf a)}) exhibits an excess of $13.6\pm 4.4$ events (statistical significance of 4$\sigma$). This evidence was recently confirmed by the BaBar collaboration~\cite{X0607050} with a signal yield of $19.2\pm 5.7$ events (3.4$\sigma$). The observation of this decay establishes unambiguously that the charge-conjugation parity of the X(3872) is positive and indicates the presence of a $c\bar{c}$ component in its wave function. The partial width ratio $\Gamma(X\rightarrow \gamma J/\psi)/\Gamma(X\rightarrow \pi^+\pi^- J/\psi)$ amounts to $0.14\pm 0.05$. This result is, in particular, in contradiction with the $\chi_{c1}^{\prime}$ ($1^{++}$ charmonium) assignment for X, as in this case a value around 40 would be expected. The second decay mode $X\rightarrow \pi^+\pi^-\pi^0 J/\psi$ was found to be dominated by the sub-threshold decay $X\rightarrow \omega^* J/\psi$. This is motivated by the fact that the yield of $B$ mesons plotted in bins of the $\pi^+\pi^-\pi^0$ invariant mass (Fig.~\ref{GAMM3PI}{\bf b)}) inside the signal region of the decay $X\rightarrow\pi^+\pi^-\pi^0 J/\psi$ is consistent with zero, except for $M(\pi^+\pi^-\pi^0)>750$~MeV/c$^2$. There, an excess of $12.4\pm 4.1$ events (4.3$\sigma$) is observed. The ratio of branching fractions ${\cal B}(X\to\pi^+\pi^-\pi^0 J/\psi)/{\cal B}(X\to\pi^+\pi^- J/\psi)$ was measured to be $1.0\pm 0.4\pm 0.3$, which implies a large violation of isospin symmetry. This in turn points at the presence of both $u\bar{u}$ and $d\bar{d}$ pairs in the X wave function.
The overall properties of the above two decays are in reasonable agreement with the $D^0\bar{D}^{0*}$ molecule hypothesis. \begin{figure}[tb] \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,40) \put(12,30){\bf a)} \includegraphics[height=3.9cm]{0540_fig2.eps} \end{picture} \end{minipage}\hfill \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,40) \put(12,30){\bf b)} \includegraphics[height=3.9cm]{0540_fig4.eps} \end{picture} \end{minipage} \caption{The yield of $B$ mesons from the decay {\bf a)} $B^0\rightarrow \gamma J/\psi K$, in bins of the $\gamma J/\psi$ invariant mass and {\bf b)} $B^0\rightarrow \pi^+\pi^-\pi^0 J/\psi K$, in bins of the $\pi^+\pi^-\pi^0$ invariant mass, determined by the Belle collaboration from fits to the $\Delta E$ and $M_{bc}$ distributions.} \label{GAMM3PI} \end{figure} The Belle collaboration also attempted to determine the $J^{PC}$ quantum numbers of the X(3872)~\cite{X0505038} by studying the angular distributions of the decay $X\to\pi^+\pi^- J/\psi$, as suggested by J.L.~Rosner~\cite{ROSNER}\index{Rosner, J.L.}. Among the twelve possible $J^{PC}$ assignments, half ($0^{--}$, $0^{+-}$, $1^{--}$, $1^{+-}$, $2^{--}$ and $2^{+-}$) may be discarded due to their negative charge-conjugation parity. The assignments $0^{-+}$ and $0^{++}$ are strongly disfavoured by the analysis of angular distributions. The additional two odd-parity possibilities, $1^{-+}$ and $2^{-+}$, are discarded since for them the dipion invariant mass spectrum is expected to be much softer than observed in the data. The above considerations leave only two assignments, $1^{++}$ and $2^{++}$, as the possible $J^{PC}$ of $X$. The decay angular distributions and the $\pi^+\pi^-$ angular distribution agree well with the $1^{++}$ hypothesis. The assignment $2^{++}$ was disfavoured by the recent observation by Belle~\cite{X0606055} of a near-threshold enhancement in the $D^0\bar{D^0}\pi^0$ invariant mass in $B\to KD^0\bar{D^0}\pi^0$ decays. It corresponds to $23.4\pm 5.6$ signal events (6.4$\sigma$) at a mass of ($3875.4\pm 0.7 \pm 1.1$) MeV/c$^2$, which is around two standard deviations higher than the world average for $X(3872)$~\cite{PDG}. Taking for granted that the observed near-threshold enhancement is due to the $X(3872)$, the decay of a spin-2 state to three pseudoscalars ($D^0\overline{D^0}\pi^0$) would require at least one pair of them to be in a relative $D$ wave. In such a configuration the near-threshold production would be strongly suppressed by a centrifugal barrier. The CDF collaboration~\cite{XCDFPIPI} has recently studied the spin-parity of $X(3872)$ using a high-statistics sample of $\approx 3000$ events of $X(3872)\to\pi^+\pi^- J/\psi$. The shape of the $\pi^+\pi^-$ invariant mass distribution was compared with the predictions corresponding to all relevant $J^{PC}$ values (Fig.~\ref{MCDFPIPI}). It was found that both the $1^{++}$ and $2^{-+}$ assignments fit the data reasonably well. Collecting the above information, it seems most plausible that X(3872) is a deuson. This conjecture is supported in particular by the pattern of its decay modes and the favoured spin-parity assignment $1^{++}$.
\begin{figure}[tb] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(100,70)(0,-7) \includegraphics[height=6.0cm]{mpipi_cdf.eps} \put(-20,-5){$M(\pi^+\pi^-)$ [GeV/c$^2$]} \end{picture} \caption{The dipion mass spectrum for the $X(3872)$ (data points), as measured by the CDF collaboration, together with fits to different $J^{PC}$ hypotheses.} \end{center} \label{MCDFPIPI} \end{figure} \begin{figure}[tbh] \begin{center} \includegraphics[height=5.0cm]{y_fig4.eps} \end{center} \caption{ $B^+\to K^+ \omega J/\psi$ signal yields vs $M(\omega J/\psi)$, as determined by the Belle collaboration. The curve in (a) shows the result of a fit that includes only a phase-space-like threshold function. The curve in (b) corresponds to the result of a fit that includes an $S$-wave Breit-Wigner resonance term.} \label{YYYY} \end{figure} \section{Y(3940)} In 2004 the Belle collaboration observed another new state, denoted as Y(3940) and produced in $B^+\to K\omega J/\psi$ decays~\cite{Y3940}. In this study events with $M(K\omega)<1.6$ GeV/c$^2$ were rejected in order to remove the contribution from $K^*\to K\omega$ decays. A fit to the $\omega J/\psi$ invariant-mass distribution (Fig.~\ref{YYYY}) yielded a signal of $58\pm 11$ events (8.1$\sigma$), corresponding to a mass of ($3943\pm 11\pm 13$) MeV/c$^2$ and a width of ($87\pm 22\pm 26$) MeV. Both the mass and width of $Y$ are in agreement with expectations for a radially excited charmonium state $\chi_{cJ}^{\prime}$. This interpretation is also strengthened by the observation of the corresponding ($b\bar{b}$) decay $\chi_{b1}^{\prime}\to \omega\Upsilon(1S)$~\cite{CHIB}. Such a ($c\bar{c}$) state would, however, decay predominantly to $D\bar{D}^{(*)}$ pairs, which is not observed. Moreover, for the $\chi_{cJ}^{\prime}$ hypothesis one would expect that ${\cal B}(B\to K\chi_{cJ}^{\prime}) < {\cal B}(B\to K\chi_{cJ})=4\times 10^{-4}$. Taking into account the value of the product ${\cal B}(B\to K Y) {\cal B}(Y\to\omega J/\psi) = (7.1\pm 1.3\pm 3.1)\times 10^{-5}$ determined by Belle, this implies that ${\cal B}(Y\to \omega J/\psi) > 12$~\%. Such a value would seem exceptionally high for any charmonium state with a mass above the $D\bar{D}^{(*)}$ threshold. The above drawbacks of the conventional charmonium interpretation of $Y$, in particular the lack of its decay to $D\bar{D}^{(*)}$ and the large ${\cal B}(Y\to\omega J/\psi)$, become advantages under the hypothesis of a $c\bar{c}$-gluon hybrid~\cite{YHYB1}. This is also supported by lattice QCD calculations~\cite{YHYB2}, which indicate that the partial width for the decay $B\to K \omega J/\psi$ is comparable to the value measured by Belle. However, the masses of $c\bar{c}$-gluon mesons predicted by these calculations~\cite{YHYB2}--\cite{YHYB4} are between 4300 and 4500 MeV/c$^2$, i.e. substantially higher than the measured value.

\section{X(3940)} Yet another particle, marked as $X(3940)$, with a mass of 3940 MeV/c$^2$, was observed by the Belle collaboration in the process $e^+e^-\to J/\psi X$~\cite{BY3940}. Its signal was seen in the spectrum of the $J/\psi$ recoil mass (Fig.~\ref{Y3940}), defined as $M_{recoil}(J/\psi)=\sqrt{(E_{CMS}-E^*_{J/\psi})^2-(cp^*_{J/\psi})^2}/c^2$, where $E_{CMS}$ is the center-of-mass energy of the event and $E^*_{J/\psi}$ ($p^*_{J/\psi}$) denote the CMS energy (momentum) of the $J/\psi$, respectively. The previous studies of this process revealed the presence of three states: $\eta_c$, $\chi_{c0}$ and $\eta_c(2S)$.
The new analysis, using significantly higher statistics, provided the observation of a fourth particle in the $J/\psi$ recoil mass spectrum. Its mass was estimated to be $(3943\pm 6\pm 6)$~MeV/c$^2$ and its width to be smaller than 52 MeV (90~\% C.L.). The search for $X(3940)$ decay modes yielded evidence for $X\to D^*\bar{D}$ (${\cal B}=96^{+45}_{-32}\pm 22$~\%). No signal was observed for $X\to D\bar{D}$ (${\cal B}<41$~\% (90~\% C.L.)) and $X\to \omega J/\psi$ (${\cal B}<26$~\% (90~\% C.L.)). The properties of X(3940) match the expectations~\cite{CC4} for the $3^1S_0$ charmonium state, denoted also as $\eta_c^{\prime\prime}$. It is appropriate to stress here that, in spite of the equal measured masses, it is extremely unlikely that the states $X(3940)$ and $Y(3940)$ coincide. The X(3940) decays to $D\bar{D^*}$ and does not decay to $\omega J/\psi$; for the Y(3940) the situation is reversed, as far as the above-mentioned decays are concerned. \begin{figure}[t] \begin{center} \includegraphics[height=5.0cm]{fig1_new.eps} \end{center} \caption{The distribution of masses recoiling against the reconstructed $J/\psi$, measured by the Belle collaboration in inclusive $e^+e^- \to J/\psi X$ events. The four enhancements, from left to right, correspond to the $\eta_c$, $\chi_{c0}$, $\eta_c(2S)$ and a new state $X(3940)$.} \label{Y3940} \end{figure} \begin{figure}[tb] \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,50) \includegraphics[height=5.0cm,width=6.5cm,trim=1 20 1 1]{z3930_1.eps} \put(-15,44){\bf (a)} \put(-40,-3){\bf $M(D\bar{D})$ [GeV/c$^2$]} \end{picture} \end{minipage}\hfill \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,50) \includegraphics[height=5.1cm,width=5.5cm]{z3930_2.eps} \put(-15,44){\bf (b)} \put(-30,-3){\bf $| \cos\theta^*|$} \end{picture} \end{minipage} \caption{{\bf (a)} Invariant mass distribution of $D\overline{D}$ pairs produced in two-photon processes, as measured by Belle. The solid (dashed) curve shows the fits with (without) a resonance component. The histogram corresponds to the distribution of the events from the $D$-mass sidebands. {\bf (b)} The distribution of the angle $\theta^*$ of a $D$ meson relative to the beam axis in the $\gamma\gamma$ CMS frame. The data points correspond to the $3.91 < M(D\bar{D})< 3.95$~GeV/c$^2$ region. The solid histogram shows the yield of background scaled from $M(D\bar{D})$ sidebands. The solid and dashed curves represent expectations for spin-2 and spin-0 hypotheses, respectively. The dotted curve interpolates the non-peak background.} \label{ZZZZ} \end{figure} \section{$\chi^{\prime}_{c2}(3930)$} The Belle collaboration has also performed a search for the production of new resonances in the process $\gamma\gamma\to D\overline{D}$~\cite{ZZZZ}. Here the two-photon processes are studied in the ``zero-tag'' mode, where the final-state electron and positron are not detected and the transverse momentum of the $D\bar{D}$ system is very small. The analysis yielded an observation of a new state, marked as $Z(3930)$, with a mass and width of ($3929\pm 5\pm 2$) MeV/c$^2$ and ($29\pm 10\pm 2$) MeV, respectively (Fig.~\ref{ZZZZ} a)). The statistical significance of the signal amounted to 5.3$\sigma$. The product of the two-photon radiative width and the branching fraction for the decay to $D\bar{D}$ was found to be $\Gamma_{\gamma\gamma}\times {\cal B}(Z(3930)\to D\bar{D}) = (0.18\pm 0.05\pm 0.03)$~keV.
The properties of this new state match the expectations~\cite{CC4,ZSWANSON} for the radially excited ($c\bar{c}$) states $\chi_{c0}^{\prime}$ and $\chi_{c2}^{\prime}$. A study of the angular distribution of the $D$ mesons in the $\gamma\gamma$ CMS frame showed that the spin-2 assignment is strongly favoured over the spin-0 hypothesis. Thus the state $Z(3930)$ can be safely interpreted as the $\chi_{c2}^{\prime}$ $2^3P_2$ charmonium. \begin{figure}[p] \begin{center} \includegraphics[height=5.0cm]{y4260_fig1.eps} \end{center} \caption{The $\pi^+\pi^- J/\psi$ invariant mass spectrum measured by BaBar in the range 3.8--5.0 GeV/c$^2$ and (inset) over a wider range that includes the $\psi(2S)$. The points represent the data and the shaded histogram corresponds to the scaled data from the $J/\psi$ mass sidebands. The solid line shows the result of the single-resonance fit. The dashed curve represents the background component.} \label{FIG1_Y4260} \begin{center} \includegraphics[height=9.0cm]{y4260_fig2.eps} \end{center} \caption{The missing momentum distribution measured by the CLEO collaboration for $\pi^+\pi^- J/\psi$ (top), $\pi^0\pi^0 J/\psi$ (middle) and $ K^+ K^- J/\psi$ (bottom) in the data at $\sqrt{s}=4.26$~GeV (data points) and the signal shape as predicted by MC simulation (histogram), scaled to the net signal size.} \label{FIG2_Y4260} \end{figure} \section{$Y(4260)$} \begin{table}[b] \begin{center} \begin{tabular}{l|ccc} & BaBar & CLEO & Belle (preliminary) \\ \hline Yield (significance) & $125\pm 23$ ($>8 \sigma$) & $14.1^{+5.2}_{-4.2}$ ($4.9\sigma$) & $165^{+24+7}_{-22-23}$ ($>7\sigma$) \\ Mass (MeV/c$^2$) & $4259\pm 8 ^{+2}_{-6}$ & $4283^{+17}_{-16}\pm 4$ & $4295\pm 10 ^{+11}_{-5}$ \\ Width (MeV) & $88\pm 23 ^{+6}_{-4}$ & $70^{+40}_{-25}\pm 5$ & $133^{+26+13}_{-22-6}$ \\ \hline \end{tabular} \caption{The parameters of the $Y(4260)$ resonance, as measured by BaBar, CLEO and Belle.} \label{TAB_Y4260} \end{center} \end{table} The BaBar collaboration has studied the initial-state radiation (ISR) process~\cite{Y4260} $e^+e^-\to\gamma_{ISR}\pi^+\pi^- J/\psi$ and observed a broad resonance in the invariant mass spectrum of $\pi^+\pi^- J/\psi$ near 4.26 GeV/c$^2$ (Fig.~\ref{FIG1_Y4260}). The photon radiated from the initial $e^+e^-$ collision is not detected directly. Since the new state, marked as $Y(4260)$, is produced in ISR events, its spin-parity is well defined as $1^{--}$. The existence of $Y(4260)$ was soon confirmed by the CLEO~\cite{Y4260_CLEO} and Belle~\cite{Y4260_BELLE} collaborations. The relevant parameters of this new state are collected in Table~\ref{TAB_Y4260}. It is worthwhile to note that the values measured so far by the three collaborations are only marginally consistent. The CLEO collaboration has also provided the first observation of $Y(4260)\to\pi^0\pi^0 J/\psi$ ($5.1\sigma$) and found the first evidence for $Y(4260)\to K^+K^- J/\psi$ ($3.7\sigma$)~\cite{Y4260_CLEO} (Fig.~\ref{FIG2_Y4260}). Simultaneously, the $e^+e^-$ cross-sections at $\sqrt{s}=4.26$ GeV were determined for the $\pi^+\pi^- J/\psi$ and $\pi^0\pi^0 J/\psi$ final states to be ($58^{+12}_{-10}\pm 4$)~pb and ($23^{+12}_{-8}\pm 1$)~pb, respectively. The observation of the $\pi^0\pi^0 J/\psi$ mode contradicts the hypothesis that $Y$ is a $\chi_{cJ}\rho$ molecule~\cite{Y4260_MOLEC}. The interpretation of Y as a baryonium state~\cite{Y4260_BARYO} is strongly disfavoured by the fact that the $\pi^0\pi^0 J/\psi$ rate is about half that of $\pi^+\pi^- J/\psi$. The $Y(4260)$ is located at the dip in $R(e^+e^-\to hadrons)$.
A similar drop of the cross-section was also found by Belle in the exclusive reaction $e^+e^-\to D^{*+}D^{*-}$, measured as a function of the CMS energy using ISR events~\cite{Y4260_PAKHL}. This dip could be accommodated as a result of $\psi(3S)-\psi(4S)$ interference, provided that $Y(4260)$ can be interpreted as the conventional charmonium state $\psi(4S)$~\cite{Y4260_CCBAR}. Then, however, the $\psi(3S)$ should exhibit a substantial coupling to $\pi^+\pi^- J/\psi$, which is not observed. Two other viable models describe the $Y(4260)$ as a tetraquark~\cite{Y4260_TETRA} or a $c\bar{c}$-gluon hybrid meson~\cite{Y4260_CCG1}--\cite{Y4260_CCG3}. An unambiguous interpretation of $Y(4260)$ can possibly be obtained as a result of careful studies of its open-charm decays, in particular those with $D$ meson (both $S$- and $P$-wave) pairs. \begin{figure}[p] \begin{center} \includegraphics[height=7.8cm]{ydd_babar.eps} \end{center} \caption{The $D\bar{D}$ mass spectrum for the ISR sample, as measured by BaBar. The arrow indicates the expected position of the $Y(4260)$.} \label{YDD} \begin{center} \includegraphics[height=7.8cm]{y2_babar.eps} \end{center} \caption{The $\pi^+\pi^- J/\psi$ mass spectrum for the ISR sample, as measured by BaBar in the analysis with the detection of the hard photon radiated from the initial $e^+e^-$ collision.} \label{YD2} \end{figure} The BaBar collaboration has studied the exclusive production of the $D\bar{D}$ system ($D=D^0$ or $D^+$) through initial-state radiation~\cite{Y4260_DD1}. As seen in Fig.~\ref{YDD}, the $D\bar{D}$ mass spectrum shows a clear $\psi(3770)$ signal and two further structures, centered around 3.9 and 4.1 GeV/c$^2$. No evidence was found for $Y(4260)\to D\bar{D}$, leading to an upper limit $\frac{{\cal B}(Y(4260)\to D\bar{D})}{{\cal B}(Y(4260)\to \pi^+\pi^- J/\psi)} < 7.6$ (90~\% C.L.). This number is over an order of magnitude smaller than the corresponding value for the $\psi(3770)$, which makes the interpretation of $Y(4260)$ as a conventional $c\bar{c}$ state rather doubtful. The BaBar collaboration has also searched for the processes $e^+e^- \to (J/\psi\gamma\gamma)\gamma_{ISR}$ and $e^+e^- \to (J/\psi\pi^+\pi^-)\gamma_{ISR}$~\cite{Y4260_DD2}, where the hard photon radiated from the initial electron-positron collision is directly detected. In the latter final state the signal of $Y(4260)$ was observed~(Fig.~\ref{YD2}). Its mass and width are consistent with the values originally reported by BaBar in~\cite{Y4260}. In the $(J/\psi\gamma\gamma)\gamma_{ISR}$ final state, no events were found in the $Y(4260)$ mass region in the $J/\psi\eta$, $J/\psi \pi^0$ and $\chi_{c2}\gamma$ distributions. \begin{figure}[tb] \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,50) \includegraphics[height=5.0cm,width=6cm]{hccleo_incl.eps} \put(-15,44){\bf (a)} \end{picture} \end{minipage}\hfill \begin{minipage}[b]{.5\linewidth} \setlength{\unitlength}{1mm} \begin{picture}(70,50) \includegraphics[height=5.0cm,width=7.5cm]{hccleo_excl.eps} \put(-15,44){\bf (b)} \end{picture} \end{minipage} \caption{The recoil mass against $\pi^0$ for {\bf (a)} inclusive (i.e.
no reconstruction of the $\eta_c$) and {\bf (b)} exclusive $\eta_c$ reconstruction in the reaction $\psi(2S)\to \pi^0 h_c \to (\gamma\gamma)(\gamma\eta_c)$, as measured by the CLEO collaboration.} \label{HCCLEO} \end{figure} \section{$h_c$} The CLEO collaboration has observed the $h_c$ ($^1P_1$) state of charmonium in the reaction $\psi(2S)\to \pi^0 h_c \to (\gamma\gamma)(\gamma\eta_c)$~\cite{HCCLEO}. The signal in the $\pi^0$ recoil mass was observed both for the inclusive reaction (Fig.~\ref{HCCLEO} a)), where the decay products of the $\eta_c$ are not identified, and for exclusive processes (Fig.~\ref{HCCLEO} b)), in which $\eta_c$ decays are reconstructed in seven hadronic decay channels ($\sim 10$~\% of all $\eta_c$ decays). The results of the inclusive and exclusive analyses were combined and yielded $M(h_c)=(3524.4\pm 0.6\pm 0.4)$~MeV/c$^2$ (in agreement with~\cite{FAUSTOV3}) and ${\cal B}(\psi(2S)\to \pi^0 h_c)\times {\cal B}(h_c\to\gamma\eta_c) =(4.0\pm 0.8\pm 0.7)\times 10^{-4} $. Together with the well-known mass value of the $^3P_J$ centroid ($<M(^3P_J)>=(3525.36\pm 0.06)$~MeV/c$^2$~\cite{PDG}), this allowed the first determination of the hyperfine splitting for the $P$ states of charmonium: $\Delta M_{hf}=<M(^3P_J)> - M(^1P_1) = (+1.0 \pm 0.6 \pm 0.4)$ MeV/c$^2$. This agrees well with the simplest calculations assuming a potential composed of a vector Coulombic ($\sim 1/r$) term and a scalar confining ($\sim r$) term. They are both spin-independent and as a result the hyperfine splitting should be zero. Larger values of $\Delta M_{hf}$ could be accommodated only after the inclusion of higher-order corrections~\cite{HCTHEOR1,HCTHEOR2}, which is not confirmed by the CLEO measurement. \begin{figure}[p] \begin{center} \centering\epsfig{figure=dsj_babar.eps,height=7.5cm} \end{center} \caption{The $D_s^+\pi^0$ mass distributions for {\bf (a)} the decay $D_s^+\to K^+K^-\pi^+$ and {\bf (b)} $D_s^+\to K^+K^-\pi^+\pi^0$, as measured by BaBar. The solid curves represent the fits described in~\cite{DSJ_BABAR1}.} \label{FIG_DSJ_BABAR} \begin{center} \centering\epsfig{figure=dsj_cleo.eps,height=7.5cm} \end{center} \caption{The mass difference $\Delta M(D_s^{*}\pi^0) = M(D_s\gamma\pi^0) - M(D_s\gamma)$, measured by the CLEO collaboration for {\bf (a)} combinations where the $D_s\gamma$ system is consistent with $D_s^*$ decay and {\bf (b)} $D_s\gamma$ combinations selected from the $D_s^*$ mass sideband regions.} \label{FIG_DSJ_CLEO} \end{figure} \section{$D_{sJ}$ mesons} \begin{figure}[p] \begin{center} \centering\epsfig{figure=dsj_brodzick.eps,height=7cm} \end{center} \caption{$B^+\to\bar{D^0}D^0 K^+$ signal yield vs $M(D^0K^+)$ (data points), as measured by Belle. Additively superimposed histograms denote the contributions from the $D_{sJ}(2700)$ (blue), $\psi(3770)$ (green), $\psi(4160)$ (yellow), threshold (red) and phase-space (navy blue) components.} \label{FIG_BRODZICK} \begin{center} \centering\epsfig{figure=fig_dsj_296.eps,height=7cm,width=17cm} \end{center} \caption{Background-subtracted $DK$ invariant mass distributions measured by the BaBar collaboration for {\bf (a)} $D^0(\to K^-\pi^+)K^+$, {\bf (b)} $D^0(\to K^-\pi^+\pi^0)K^+$, {\bf (c)} $D^+(\to K^-\pi^+\pi^+)K^0_s$ and {\bf (d)} the sum of all modes in the 2.86 GeV/c$^2$ mass range. The curves are the fitted functions described in~\cite{DSJ_BABAR_286}.} \label{FIG_DSJ_BABAR286} \end{figure} The symbol $D_{sJ}$ is often used to mark orbital excitations of the $c\bar{s}$ bound states.
\begin{figure}[p]
\begin{center} \centering\epsfig{figure=dsj_babar.eps,height=7.5cm} \end{center}
\caption{The $D_s^+\pi^0$ mass distributions for {\bf (a)} the decay $D_s^+\to K^+K^-\pi^+$ and {\bf (b)} $D_s^+\to K^+K^-\pi^+\pi^0$, as measured by BaBar. The solid curves represent the fits, described in~\cite{DSJ_BABAR1}.}
\label{FIG_DSJ_BABAR}
\begin{center} \centering\epsfig{figure=dsj_cleo.eps,height=7.5cm} \end{center}
\caption{The mass difference $\Delta M(D_s^{*}\pi^0) = M(D_s\gamma\pi^0) - M(D_s\gamma)$, measured by the CLEO collaboration for {\bf (a)} combinations where the $D_s\gamma$ system is consistent with $D_s^*$ decay and {\bf (b)} $D_s\gamma$ combinations selected from the $D_s^*$ mass sideband regions.}
\label{FIG_DSJ_CLEO}
\end{figure}
\section{$D_{sJ}$ mesons}
\begin{figure}[p]
\begin{center} \centering\epsfig{figure=dsj_brodzick.eps,height=7cm} \end{center}
\caption{$B^+\to\bar{D^0}D^0 K^+$ signal yield vs $M(D^0K^+)$ (data points) as measured by Belle. Additively superimposed histograms denote the contributions from $D_{sJ}(2700)$ (blue), $\psi(3770)$ (green), $\psi(4160)$ (yellow), threshold (red) and phase-space (navy blue) components.}
\label{FIG_BRODZICK}
\begin{center} \centering\epsfig{figure=fig_dsj_296.eps,height=7cm,width=17cm} \end{center}
\caption{Background subtracted $DK$ invariant mass distributions measured by the BaBar collaboration for {\bf (a)} $D^0(\to K^-\pi^+)K^+$, {\bf (b)} $D^0(\to K^-\pi^+\pi^0)K^+$, {\bf (c)} $D^+(\to K^-\pi^+\pi^+)K^0_s$ and {\bf (d)} the sum of all modes in the 2.86 GeV/c$^2$ mass range. The curves are the fitted functions described in~\cite{DSJ_BABAR_286}.}
\label{FIG_DSJ_BABAR286}
\end{figure}
The symbol $D_{sJ}$ is often used to mark orbital excitations of the $c\bar{s}$ bound states. Four such $P$-wave mesons are expected in the framework of potential models inspired by heavy quark symmetry (HQS)~\cite{HQS1,HQS2}. They can be naturally split into two doublets differing in the orbital momentum of the light degrees of freedom ($j_q$). The states with $j_q=3/2$ are predicted to be narrow and were identified as the $D_{s1}(2536)$ (by ARGUS, in 1989) and the $D_{s2}(2573)$ (by CLEO, in 1994). The mesons belonging to the $j_q=1/2$ doublet are expected to be much wider, i.e. more difficult to observe. Two candidates for such states were found in 2003. First the BaBar collaboration provided evidence for the state $D_{sJ}(2317)^+$ (Fig.~\ref{FIG_DSJ_BABAR}), decaying to $D_s^+\pi^0$~\cite{DSJ_BABAR1}. The observation by CLEO of the second state, the $D_{sJ}(2460)$ (Fig.~\ref{FIG_DSJ_CLEO}), in the decay to $D_s^{*+}\pi^0$~\cite{DSJ_CLEO1}, followed almost immediately. Both signals were found in the continuum processes $e^+e^-\to c\bar{c}$. They were soon confirmed by the Belle collaboration, together with additional evidence for their presence in exclusive $B$ meson decays~\cite{DSJ_BELLE1}. Two other decay modes, to the final states $D_s\gamma$ and $D_s\pi^+\pi^-$, were observed for the $D_{sJ}(2460)^+$; the radiative mode implies a spin of at least one. The unexpected properties of the two new mesons, discussed below, called into question their interpretation as ($c\bar{s}$, $j_q=1/2$) bound states. First of all, the widths of both the $D_{sJ}(2317)$ and the $D_{sJ}(2460)$ turned out to be very small, consistent with the experimental resolution ($< 4.6$~MeV and $< 5.5$~MeV, respectively). Their masses, measured to be below the $DK$ and $D^*K$ thresholds, respectively, also appeared to be significantly lower than the HQS expectations. On the other hand, the study of angular distributions of the $D_{sJ}(2317)$ and $D_{sJ}(2460)$ decay products, performed by BaBar~\cite{DSJ_BABAR2} and Belle~\cite{DSJ_BELLE2}, strongly favoured the spin-parity assignments $0^+$ and $1^+$, in agreement with HQS predictions. This motivated a vigorous response from theorists, who proposed several exotic explanations for the two new mesons. In particular, the $D_{sJ}(2317)$ and $D_{sJ}(2460)$ were interpreted as $D^{(*)}K$ molecules~\cite{DSJ_DK1,DSJ_DK2} or as the chiral doublers of the $D_s$ and $D_s^*$~\cite{DSJ_CHIRAL1,DSJ_CHIRAL2}. Assuming that the current HQS mass predictions are wrong (they can in fact be shifted to lower values in the presence of a strong $S$ wave coupling to $D K^{(*)}$), both new $D_{sJ}$ mesons could be comfortably interpreted as conventional $(c\bar{s})$ states. Provided that their predicted masses may be lowered below the respective $D^{(*)}K$ thresholds, the narrow widths of the $D_{sJ}(2317)$ and $D_{sJ}(2460)$ are naturally explained. With such low masses, the observed electromagnetic and isospin-violating decays of the two states would also naturally be pronounced. Thus, the two new $D_{sJ}$ mesons can be interpreted as the conventional states $D_{s0}^*$ and $D_{s1}$~(\cite{FAZIO1}--\cite{FAZIO3}). Yet another charm-strange meson, marked as $D_{sJ}(2700)$ and produced in $B^+\to \bar{D^0}D_{sJ}$, $D_{sJ}\to D^0 K^+$, was observed by Belle~\cite{DSJ_BRODZICK} (Fig.~\ref{FIG_BRODZICK}). This state has a mass of $M=(2715\pm 11 ^{+11}_{-14})$ MeV/c$^2$ and a width $\Gamma=(115\pm 20^{+36}_{-32})$ MeV, and its signal corresponds to $182\pm 30$ events. The study of the $D_{sJ}(2700)$ helicity angle distributions strongly favours the spin-parity assignment $1^-$.
Recent observations concerning the $D_{sJ}$ family are completed by the study of three inclusive processes, $e^+e^-\to D^0 K^+ X$ with $D^0\to K^-\pi^+$, $e^+e^-\to D^0 K^+ X$ with $D^0\to K^-\pi^+\pi^0$, and $e^+e^-\to D^+ K^0_s X$ with $D^+\to K^-\pi^+\pi^+$, performed by BaBar~\cite{DSJ_BABAR_286}. The distributions of the $DK$ invariant mass (Fig.~\ref{FIG_DSJ_BABAR286}) show a clear signal of a new charm-strange meson, marked as $D_{sJ}(2860)$, with a mass of $M=(2856.6\pm 1.5\pm 5.0)$ MeV/c$^2$ and a width $\Gamma =(47\pm 7\pm 10)$ MeV. The decay to two pseudoscalars implies a natural spin-parity for this state ($0^+, 1^-,\ldots$), and the value $J^P=3^-$ is predicted in~\cite{FAZIO4}. According to~\cite{DSJ286_INT}, the $D_{sJ}(2860)$ could be a radial excitation of the $D_{sJ}^*(2317)$. However, other assignments cannot be ruled out. Moreover, a second broad enhancement is observed around 2.69 GeV/c$^2$ (Fig.~\ref{FIG_DSJ_BABAR286}). This state was provisionally labelled $X(2690)^+$, and further input is clearly necessary in order to understand its origin. Its mass was determined to be $M=(2688\pm 4\pm 3)$~MeV/c$^2$ and its width $\Gamma = (112\pm 7\pm 36)$~MeV. It would be very interesting to check whether there is any association between the $D_{sJ}(2700)$ and the $X(2690)$.
\begin{figure}[p]
\setlength{\unitlength}{1mm}
\begin{center}
\begin{picture}(230,60)(-12,0)
\put(-1,20){\rotatebox{90}{\tiny\bf events/(10~{\rm MeV}/$c^2$)}}
\put(43,1){{\large $M(\Lambda_c^+\pi)-M(\Lambda_c^+)$ [GeV/$c^2$]}}
\put(25,45){\large $\Lambda_c^+\pi^-$} \put(63,45){\large $\Lambda_c^+\pi^0$} \put(103,45){\large $\Lambda_c^+\pi^+$}
\centering\epsfig{figure=mizuk_fig1a.eps,height=6cm}
\end{picture}
\end{center}
\caption{$M(\Lambda_c^+\pi)-M(\Lambda_c^+)$ distributions of the selected $\Lambda_c^+\pi^-$ (left), $\Lambda_c^+\pi^0$ (middle), and $\Lambda_c^+\pi^+$ (right) combinations. Data from the $\Lambda_c^+$ signal window (points with error bars) and normalized sidebands (histograms) are shown, together with the fits (solid curves) and their combinatorial background components (dashed).}
\label{FIG_MIZUK}
\end{figure}
\begin{table}[p]
\caption{Parameters of the baryons $\Sigma_c(2800)^0$, $\Sigma_c(2800)^+$ and $\Sigma_c(2800)^{++}$, as measured by Belle.}
\begin{center}
\begin{tabular}{l|cccc}
State & $\Delta M$ [{\rm MeV}/c$^2$] & Width [{\rm MeV}] & Yield/$10^3$ & Significance ($\sigma$) \\ \hline
$\Sigma_c(2800)^0$ & $515.4^{+3.2+2.1}_{-3.1-6.0} $ & $61^{+18+22}_{-13-13}$ & $2.24^{+0.79+1.03}_{-0.55-0.50}$ & ~8.6 \\
$\Sigma_c(2800)^+$ & $~505.4^{+5.8+12.4}_{-4.6-2.0}$ & $62^{+37+52}_{-23-38}$ & $1.54^{+1.05+1.40}_{-0.57-0.88}$ & ~6.2 \\
$\Sigma_c(2800)^{++}$ & $514.5^{+3.4+2.8}_{-3.1-4.9} $ & $75^{+18+22}_{-13-11}$ & $2.81^{+0.82+0.71}_{-0.60-0.49}$ & 10.0 \\ \hline
\end{tabular}
\end{center}
\label{TABLE_MIZUK}
\caption{Parameters of the baryons $\Lambda_c(2880)^+$ and $\Lambda_c(2940)^+$, as determined by BaBar and Belle.}
\begin{center}
\begin{tabular}{l|cccc}
State & Expt.
& Mass [{\rm MeV}/c$^2$] & Width [{\rm MeV}] & Yield (events) \\ \hline
$\Lambda_c(2880)^+$ & BaBar & $2881.9 \pm 0.1 \pm 0.5 $ & $5.8\pm 1.5\pm 1.1$ & $2800\pm 190$ \\
$\Lambda_c(2880)^+$ & Belle & $2881.2 \pm 0.2^{+0.4}_{-0.3}$ & $5.5^{+0.7}_{-0.3}\pm 0.4$ & $880\pm 50\pm 40$ \\ \hline
$\Lambda_c(2940)^+$ & BaBar & $2939.8 \pm 1.3 \pm 1.0 $ & $17.5\pm 5.2\pm 5.9$ & $2280\pm 310$ \\
$\Lambda_c(2940)^+$ & Belle & $2937.9 \pm 1.0^{+1.8}_{-0.4}$ & $10\pm 4\pm 5$ & $210^{+70+100}_{-40-60}$ \\ \hline
\end{tabular}
\end{center}
\label{TABLE_LC2940}
\end{table}
\section{$\Sigma_{c}(2800)$}
The Belle collaboration has provided the first evidence~\cite{MIZUK} for an isotriplet of excited charmed baryons $\Sigma_c(2800)$ decaying into the $\Lambda_c^+\pi^-$, $\Lambda_c^+\pi^0$ and $\Lambda_c^+\pi^+$ final states. As shown in Fig.~\ref{FIG_MIZUK}, clear enhancements around 0.51 GeV/c$^2$ are seen in the distributions of the mass difference $\Delta M (\Lambda_c^+\pi)= M(\Lambda_c^+\pi) - M(\Lambda_c^+)$ for the $\Lambda_c^+\pi^-$, $\Lambda_c^+\pi^0$, and $\Lambda_c^+\pi^+$ combinations. The mass differences $\Delta M$, together with the widths of the $\Sigma_c(2800)$ states, are collected in Table~\ref{TABLE_MIZUK}. These states are identified as the members of the predicted $\Sigma_{c2}$, $J^P=3/2^-$ isospin triplet~\cite{MIZUK_THEOR}. The enhancement near $\Delta M=0.43$ GeV/c$^2$ (cf. Fig.~\ref{FIG_MIZUK}) in the spectra corresponding to the $\Lambda_c^+\pi^-$ and $\Lambda_c^+\pi^+$ combinations is attributed to feed-down from the decay $\Lambda_c(2880)^+\to \Lambda_c^+\pi^+\pi^-$, as verified by reconstructing the $\Lambda_c(2880)$ in the data.
\begin{figure}[ph]
\begin{center} \includegraphics[height=7.0cm]{lc2940_babar.eps} \end{center}
\caption{Invariant mass distribution of $p D^0$ pairs, as measured by the BaBar collaboration (data points). The shaded histogram and open circles correspond to the $D^0$ mass sidebands and wrong-sign $p\overline{D^0}$ candidates, respectively. The inset shows the $pD^0$ mass spectrum in the range 2.9--2.975 GeV/c$^2$.}
\label{FIGBAB_LC2940}
\setlength{\unitlength}{1mm}
\begin{center}
\begin{picture}(100,75)
\put(4,40){\rotatebox{90}{ $N / 2.5$ MeV/c$^2$}}
\put(35,+5){{$m(\Lambda_c^+\pi^+\pi^-)$ [GeV/$c^2$]}}
\centering\epsfig{figure=lc2940_belle.eps,height=8.5cm,width=10cm}
\end{picture}
\end{center}
\caption{The invariant mass distribution of the $\Lambda_c^+\pi^+\pi^-$ combinations, as measured by the Belle collaboration. The plot corresponds to $\Lambda_c\pi^{\pm}$ combinations within the $\Sigma_c(2455)$ mass peak. The signals of the $\Lambda_c(2765)^+$, $\Lambda_c(2880)^+$ and $\Lambda_c(2940)^+$ can be clearly distinguished.}
\label{FIGBEL_LC2940}
\end{figure}
\section{$\Lambda_c(2940)^+$ and $\Lambda_c(2880)^+$}
The charmed baryon $\Lambda_c(2940)^+$ was first observed by the BaBar collaboration in the $p D^0$ final state~\cite{LC2940_BABAR} (Fig.~\ref{FIGBAB_LC2940}). The signal at 2.88 GeV/c$^2$ is due to the decay $\Lambda_c(2880)^+\to p D^0$. This comprises the first observation of this decay channel (the baryon $\Lambda_c(2880)^+$ was first seen by CLEO in the final state $\Lambda_c^+\pi^+\pi^-$~\cite{LC2880_CLEO}). The search for a doubly-charged partner of the $\Lambda_c(2940)^+$, performed by BaBar in the final state $p D^+$, gave negative results~\cite{LC2940_BABAR}. The Belle collaboration has recently reported evidence for another decay mode, $\Lambda_c(2940)^+\to \Sigma_c(2455)^{0,++}\pi^{\pm}$~\cite{LC2940_BELLE} (Fig.~\ref{FIGBEL_LC2940}).
The study of angular distributions of the decay $\Lambda_c(2880)^+\to \Sigma_c^{0,++}\pi^{\pm}$ strongly favours a $\Lambda_c(2880)^+$ spin assignment of $\frac{5}{2}$ over $\frac{3}{2}$ and $\frac{1}{2}$~\cite{LC2940_BELLE}. The parameters of both the $\Lambda_c(2880)^+$ and the $\Lambda_c(2940)^+$, as measured by Belle and BaBar, are in good overall agreement (Table~\ref{TABLE_LC2940}).
\section{$\Xi_{cx}(2980)$ and $\Xi_{cx}(3077)$}
\begin{figure}[th]
\setlength{\unitlength}{1mm}
\begin{picture}(190,50)
\put(13,44){{\bf (a) }}
\begin{minipage}[b]{.5\linewidth}
\begin{center} \epsfig{figure=lmckpi_main.eps,width=\linewidth,height=5.5cm} \end{center}
\end{minipage}\hfill
\begin{minipage}[b]{.5\linewidth}
\begin{center} \epsfig{figure=xicx2.eps,width=\linewidth,height=5.3cm} \put(86,44){{\bf (b) }} \end{center}
\end{minipage}
\end{picture}
\caption{ {\bf (a)}: $M(\Lambda_c^+ K^-\pi^+)$ distribution (points with error bars) together with the fit (solid curve). The dashed region represents the background component corresponding to the wrong-sign combinations $\Lambda_c^+K^+\pi^-$. {\bf (b)}: $M(\Lambda_c^+ K^0_s\pi^-)$ distribution (points) together with the overlaid fitting curve. Both spectra were measured by the Belle collaboration. \label{XICX1}}
\end{figure}
\begin{figure}[hp]
\begin{center} \includegraphics[height=16.0cm]{omegac.eps} \end{center}
\caption{The invariant mass distributions of $\Omega_c^{*0}\to\Omega_c^0\gamma$ candidates with $\Omega_c^0$ reconstructed by BaBar in the decay modes {\bf (a)} $\Omega^-\pi^+$, {\bf (b)} $\Omega^-\pi^+\pi^0$, {\bf (c)} $\Omega^-\pi^+\pi^-\pi^+$ and {\bf (d)} $\Xi^- K^-\pi^+\pi^+$. The points with error bars represent the data, the dashed line corresponds to the combinatorial background and the solid line is the sum of signal and background. The shaded histograms correspond to the mass distribution expected from the mass sidebands of the $\Omega_c^0$.}
\label{OMEGAC}
\end{figure}
At the beginning of this year, the Belle collaboration reported the first observation of two baryons~\cite{XICX_BELLE}, denoted as $\Xi_{cx}(2980)^{+}$ and $\Xi_{cx}(3077)^{+}$, decaying into $\Lambda_c^+ K^-\pi^+$ (Fig.~\ref{XICX1}(a)). The existence of both new particles was quickly confirmed by BaBar~\cite{XICX_BABAR}. Assuming that these states carry charm and strangeness, the above observation would comprise the first example of a baryonic decay in which the initial $c$ and $s$ quarks are carried away by two different final state particles. Most naturally, these two states would be interpreted as excited charm-strange baryons $\Xi_c$. This interpretation is strengthened by the positive results of the search for neutral isospin-related partners of the above states (Fig.~\ref{XICX1}(b)), performed by Belle in the $\Lambda_c^+ K^0_s\pi^-$ final state~\cite{XICX_BELLE}. It yielded evidence for the $\Xi_{cx}(3077)^0$, together with a broad enhancement near the threshold, i.e. in the mass range corresponding to the $\Xi_{cx}(2980)^0$. The preliminary parameters of the states $\Xi_{cx}(2980)^+$ and $\Xi_{cx}(3077)^+$ are collected in Table~\ref{TABLE_CHISTOV}. In the $\Lambda_c^+ K^-\pi^+ (\pi^+)$ final state, the SELEX collaboration~\cite{SELEX} reported the observation of two doubly charmed baryons: the $\Xi_{cc}^+$ with a mass of 3520 MeV/c$^2$ and the $\Xi_{cc}^{++}$ with a mass of 3460 MeV/c$^2$. The studies by Belle~\cite{XICX_BELLE} and BaBar~\cite{XICX_BABAR2} show no evidence for these states. The BaBar collaboration estimated the following 95~\% C.L.
upper limits on the ratios of production cross-sections: $\sigma(\Xi_{cc}^+)\times {\cal B}(\Xi_{cc}^+\to \Lambda_c^+ K^-\pi^+)/\sigma(\Lambda_c^+) < 2.7 \times 10^{-4}$ and $\sigma(\Xi_{cc}^{++})\times {\cal B}(\Xi_{cc}^{++}\to \Lambda_c^+ K^-\pi^+\pi^+)/\sigma(\Lambda_c^+) < 4.0 \times 10^{-4}$ (estimated for $p^*(\Lambda_c) > 2.3$ GeV/c, where $p^*$ denotes the CMS momentum of the $\Lambda_c$). The Belle collaboration studied only the singly charged state, which yielded $\sigma(\Xi_{cc}^+)\times {\cal B}(\Xi_{cc}^+\to \Lambda_c^+ K^-\pi^+)/\sigma(\Lambda_c^+) < 1.5 \times 10^{-4}$ (90~\% C.L.; $p^*(\Lambda_c) > 2.5$ GeV/c).
\begin{table}[tbh]
\caption{Parameters of the new charm-strange baryons $\Xi_{cx}(2980)^{+,0}$ and $\Xi_{cx}(3077)^{+,0}$.}
\vspace{0.4cm}
\begin{tabular}{l|ccccc}
State & Expt. & Mass & Width & Yield & Signif. \\
 & &({\rm MeV/c}$^2$) &({\rm MeV})& (events) & ($\sigma$) \\ \hline
$\Xi_{cx}(2980)^+$ & BaBar & $2967.1\pm 1.9\pm 1.0$ & $23.6\pm 2.8\pm 1.3$ & $284\pm 45\pm 46$ & 7.0 \\
$\Xi_{cx}(2980)^+$ & Belle & $2978.5\pm 2.1\pm 2.0$ & $43.5\pm 7.5\pm 7.0$ & $405.3\pm 50.7$ & 5.7 \\
$\Xi_{cx}(3077)^+$ & BaBar & $3076.4\pm 0.7\pm 0.3$ & $~6.2\pm 1.6\pm 0.5$ & $204\pm 35\pm 12$ & 8.6 \\
$\Xi_{cx}(3077)^+$ & Belle & $3076.7\pm 0.9\pm 0.5$ & $~6.2\pm 1.2\pm 0.8$ & $326.0\pm 39.6$ & 9.2 \\ \hline
$\Xi_{cx}(2980)^0$ & Belle & $2977.1\pm 8.8\pm 3.5$ & $~43.5$ (fixed) & $42.3\pm 23.8$ & 1.5 \\
$\Xi_{cx}(3077)^0$ & Belle & $3082.8\pm 1.8\pm 1.5$ & $~5.2\pm 3.1\pm 1.8$ & $67.1\pm 19.9$ & 4.4 \\ \hline
\end{tabular}
\label{TABLE_CHISTOV}
\end{table}
\section{$\Omega_{c}^{*0}$}
The baryon $\Omega_{c}^{*0}$ was observed by the BaBar collaboration in the radiative decay $\Omega_c^0\gamma$~\cite{OMEGAC_BABAR}. It was the last singly charmed baryon with zero orbital angular momentum that remained to be detected experimentally. The $\Omega_c^0$ was reconstructed in the decays to the final states $\Omega^-\pi^+$, $\Omega^-\pi^+\pi^0$, $\Omega^-\pi^+\pi^-\pi^+$ and $\Xi^- K^-\pi^+\pi^+$ (Fig.~\ref{OMEGAC}). The mass difference between the $\Omega_c^{*0}$ and the $\Omega_c^{0}$ was measured to be $\Delta M = 70.8\pm 1.0 \pm 1.1$~MeV/c$^2$. This agrees with the theoretical predictions of~\cite{OMEGAC_TH1,FAUSTOV2} and is below the one of~\cite{OMEGAC_TH2}.
\section{Summary}
Charm physics has many features of Sleeping Beauty. After the initial publicity at the time of the November revolution, it remained a calm field, aimed at filling the columns of the Particle Data Group booklets with new or more accurate cross-sections, branching ratios, lifetimes, etc. It seems that the $B$ factories acted like the prince who kissed Sleeping Beauty and woke her up right at the beginning of this century. The discovery of a plethora of new charmed states has revitalized charm physics and triggered many new theoretical ideas. Since the $B$ factories are still collecting enormous samples of data, it is rather likely that some new exciting and charming discoveries are just around the corner.
\bigskip
I am very grateful to the organizers of the HQL2006 Conference for their support and all their efforts in making this venue successful. Special thanks to Prof.~S.~Paul.
\vskip 1.5cm
\section{Introduction}
The existence of long-lived vortices in accretion discs was first proposed by \cite{W44} in a model of planet formation. This idea was reintroduced by \cite{BS95} to accelerate the formation of planetesimals through a dust trapping process. Moreover, large scale vortices can lead to a significant transport of angular momentum thanks to the production of density waves \citep{JG05b}. Many physical processes have been introduced in the literature to justify the existence of these vortices, such as Rossby wave instabilities \citep{LLC99}, planetary gap instabilities \citep{VE06,VAAP07}, 3D circulation models \citep{BM05}, MHD turbulence \citep{FN05} and the global baroclinic instability \citep{KB03}.

Baroclinic instabilities in accretion discs have regained interest in the past few years. A first version of these instabilities (the global baroclinic instability, or GBI) was introduced by \cite{KB03} using a purely numerical approach and considering a disc with a radial entropy gradient. However, the presence of this instability was affected by changing the numerical scheme used in the simulations, casting strong doubts on this instability as a real physical process. The linear properties of the GBI were then investigated by \cite{K04} and \cite{JG05}. However, only transient growths were found in this context. \cite{K04} speculated that these transient growths could lead to a nonlinear instability explaining the result of \cite{KB03}, but it is still unclear whether transient growths are relevant for nonlinear instabilities \citep[see e.g.][and references therein for a complete discussion of this point]{LL05}. The nonlinear problem was then studied in local shearing boxes by \cite{JG06} using fully compressible numerical methods. These authors found no instability in the Keplerian disc regime and concluded that the GBI was ``either global or nonexistent''. Nevertheless, baroclinic processes were studied again by \cite{PJS07} using anelastic global simulations and including new physical processes. As in \cite{KB03}, a weak radial entropy gradient was imposed in the simulation. However, a cooling function was also included in their model, in order to force the system to relax to the imposed radial temperature profile. In these simulations, \cite{PJS07} observed the spontaneous formation of vortices for radial entropy gradients compatible with accretion disc thermodynamics. In a subsequent paper \citep{PSJ07}, these vortices were found to survive for several hundred orbits. According to the authors, their disagreement with \cite{JG06} was due to a larger Reynolds number and the use of spectral methods, a possibility already mentioned by \cite{JG06}.

In this paper, we revisit baroclinic instabilities in a local setup (namely the shearing box), both in the incompressible-Boussinesq approximation and in fully compressible simulations. The aim of this paper is to clarify several of the points which have been discussed in the literature for the past 6 years concerning the existence, the nature and the properties of baroclinic instabilities. We show that a subcritical baroclinic instability (or SBI for short) \emph{does exist} in shearing boxes. This instability is \emph{nonlinear} (or subcritical) and \emph{strongly linked to thermal diffusion}, a point already mentioned by \cite{PSJ07}. We also present results concerning the behaviour of the SBI in 3 dimensions, as vortices are known to be unstable to parametric instabilities \citep{LP09}.
We begin in \S\ref{equations} by presenting the equations, introducing the dimensionless numbers and describing the numerical methods used in the rest of the paper. \S\ref{2Dincomp} is dedicated to 2D incompressible-Boussinesq simulations and to a qualitative understanding of the instability. We present in \S\ref{compressibility} our results in compressible shearing boxes and in \S\ref{3Dincomp} the 3D behaviour of the instability. Conclusions and implications of our findings are discussed in \S\ref{conclusions}.
\section{Local model\label{equations}}
\subsection{Equations}
In the following, we will assume a local model for the accretion disc, following the shearing sheet approximation. The reader may consult \cite{HGB95}, \cite{B03} and \cite{RU08} for an extensive discussion of the properties and limitations of this model. As a simplification, we will assume the flow is incompressible, consistent with the small shearing box model \citep{RU08}. The shearing-box equations are found by considering a Cartesian box centred at $r=R_0$, rotating with the disc at angular velocity $\Omega=\Omega(R_0)$. We finally introduce in this box a radial stratification using the Boussinesq approximation \citep{SV60}. Defining $r-R_0 \rightarrow x$ and $R_0\phi \rightarrow y$ as in \cite{HGB95}, one eventually obtains the following set of equations:
\begin{eqnarray}
\nonumber \partial_t \bm{u}+\bm{u\cdot\nabla} \bm{u}&=&-\bm{\nabla} \Pi -2\bm{\Omega \times u}\\
\label{motiongeneral}& &+\,2\Omega S x \bm{e_x}-\Lambda N^2\theta \bm{e_x}+\nu\Delta\bm{u},\\
\label{entropygeneral}\partial_t \theta +\bm{u\cdot\nabla}\theta&=&u_x/\Lambda+\mu\Delta\theta,\\
\label{divv} \bm{\nabla \cdot u}&=&0,
\end{eqnarray}
where $\bm{u}=u_x\bm{e_x}+u_y\bm{e_y}+u_z\bm{e_z}$ is the fluid velocity, $\theta$ the potential temperature deviation, $\nu$ the kinematic viscosity and $\mu$ the thermal diffusivity. In these equations, we have defined the mean shear $S=-r\partial_r \Omega$, which is set to $S=(3/2)\Omega$ for a Keplerian disc. One can easily check that the velocity field $\bm{U}=-Sx\bm{e_y}$ is a steady solution of these equations. In the following we will consider the evolution of the perturbations $\bm{v}$ (not necessarily small) of this profile, defined by $\bm{v}=\bm{u}-\bm{U}$. The generalised pressure $\Pi$ is calculated by solving a Poisson equation derived from the incompressibility condition (\ref{divv}). For homogeneity and consistency with the traditional Boussinesq approach, we have introduced a stratification length $\Lambda$. Note however that $\Lambda$ disappears from the dynamical properties of these equations, as one can renormalise the variables by defining $\theta'\equiv\Lambda \theta$. The stratification itself is controlled by the Brunt-V\"ais\"al\"a frequency $N$, defined for a perfect gas by
\begin{equation}
N^2=-\frac{1}{\gamma \rho}\frac{\partial P}{\partial R}\frac{\partial}{\partial R}\ln\Big(\frac{P}{\rho^\gamma}\Big),
\end{equation}
where $P$ and $\rho$ are assumed to be the background equilibrium profiles and $\gamma$ is the adiabatic index. With these notations, one can recover the ordinary thermodynamical variables as $\theta\equiv \delta \rho /\rho$, where $\delta \rho$ is the density perturbation and $\rho$ is the background density. The stratification length is then defined by $\Lambda\equiv -\partial_R P /(\rho N^2)$.
\subsection{Dimensionless numbers}
The system described above involves several physical processes.
To clarify the regime in which we are working and to ease comparisons with previous work, we define the following dimensionless numbers:
\begin{itemize}
\item The Richardson number $Ri=N^2/S^2$, which compares the shear timescale to the buoyancy timescale. This definition is equivalent to the definition of \cite{JG05}.
\item The Peclet number $Pe=SL^2/\mu$.
\item The Reynolds number $Re=SL^2/\nu$.
\end{itemize}
$L$ is a typical scale of the system, chosen to be the radial box size ($L_x$) in our notations. The linear stability properties of this flow are quite well understood. The flow is linearly stable for axisymmetric perturbations when the Solberg-Ho\"iland criterion is satisfied. This criterion may be written locally as:
\begin{equation}
\label{solberg}2\Omega(2\Omega -S)+N^2>0 \quad \quad \mathrm{Stability,}
\end{equation}
or equivalently, in a Keplerian disc, $Ri>-4/9$. Another linear stability criterion, the Schwarzschild criterion, is often used for convectively stable flows \emph{without rotation or shear}. This criterion reads
\begin{equation}
\label{schwarzschild} N^2>0 \quad \quad \mathrm{Stability},
\end{equation}
or equivalently $Ri>0$. As we will see in the following, this criterion is the relevant one for the SBI. When viscosity and thermal diffusivity are included, the Solberg-Ho\"iland criterion is modified, potentially leading to the Goldreich-Schubert-Fricke (GSF) instability \citep{GS67,F68}. The stability criterion is in that case
\begin{equation}
\mu 2\Omega(2\Omega -S)+\nu N^2>0 \quad \quad \mathrm{Stability.}
\end{equation}
This criterion is satisfied when both (\ref{solberg}) is verified and $\nu/\mu<1$, which corresponds to the regime studied in this paper.
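For concreteness, the three criteria above can be gathered in a small helper routine. The following sketch (in Python) assumes a Keplerian shear $S=(3/2)\Omega$, for which the Solberg-Ho\"iland and GSF criteria reduce to conditions on $Ri$ and on the Prandtl number $\nu/\mu=Pe/Re$; it is only an illustration of the inequalities above, not part of any production code:
\begin{verbatim}
def is_stable(Ri, Pr):
    """Axisymmetric stability criteria for Keplerian shear S = 1.5*Omega.

    Ri -- Richardson number N^2/S^2 (negative when buoyantly unstable)
    Pr -- Prandtl number nu/mu = Pe/Re
    """
    solberg_hoiland = Ri > -4.0 / 9.0   # 2*Omega*(2*Omega-S) + N^2 > 0
    schwarzschild = Ri > 0.0            # N^2 > 0
    gsf = 4.0 / 9.0 + Pr * Ri > 0.0     # mu*2*Omega*(2*Omega-S) + nu*N^2 > 0
    return solberg_hoiland, schwarzschild, gsf

# Fiducial run of the next section: Ri = -0.01, Pr = Pe/Re = 0.01
print(is_stable(-0.01, 0.01))   # -> (True, False, True)
\end{verbatim}
In this regime the flow satisfies all linear axisymmetric stability criteria and violates only the Schwarzschild one, which is precisely the configuration in which the SBI will be found.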
\subsection{Numerical methods}\label{Numerics}
We have used two different codes to study this instability. When using the Boussinesq model (\S\ref{2Dincomp} and \S\ref{3Dincomp}), we have used SNOOPY, a 3D incompressible spectral code. This code uses an implicit scheme for thermal and viscous diffusion, allowing us to study simulations with small $Pe$ without any strong constraint on the CFL condition. The spectral algorithm of SNOOPY has now been used in several hydrodynamic and magnetohydrodynamic studies \citep[e.g.][]{LL05,LL07} and is freely available on the author's website. For compressible simulations (\S\ref{compressibility}), we have used NIRVANA \citep{ZY97}. NIRVANA has been used frequently in the past to study various problems involving MHD turbulence in the shearing box \citep{FP06,PNS04}. In the following, we use the shearing sheet boundary conditions \citep{HGB95} in the radial direction. This is made possible by the use of the Boussinesq approximation, in which only the \emph{gradients} of the background profile appear explicitly, through $\Lambda$ and $N$. Therefore, when $\Lambda$ and $N$ are constant through the box, one can assume that the thermodynamic fluctuation $\theta$ is shear-periodic, consistent with the shearing-sheet approximation.
\section{2D subcritical baroclinic instability in incompressible flows\label{2Dincomp}}
To start our investigation, we consider the simplest case of a 2D $(x,y)$ problem in an infinitely thin disc. This setup is the local equivalent of the 2D global anelastic setup of \cite{PJS07}, and one expects to find similar properties in both cases if the instability is local. In two dimensions, it is easier to consider the vorticity equation instead of (\ref{motiongeneral}). Defining the vertical vorticity of the perturbations by $\omega=\partial_x v_y-\partial_y v_x$, we have:
\begin{equation}
\partial_t\omega+\bm{v\cdot\nabla}\omega-Sx\partial_y\omega=\Lambda N^2\partial_y \theta +\nu\Delta \omega.
\end{equation}
In this formulation, the only source of enstrophy $\langle \omega^2/2 \rangle=\int dxdy\, \omega^2/2$ is the baroclinic term $\Lambda N^2\partial_y \theta$, which will be shown to play an important role for the instability. The simulations presented in this section are computed in a square domain $L_x=L_y=1$ with a resolution of $512\times 512$ grid points using our spectral code. When not explicitly mentioned, the time unit is the shear timescale $S^{-1}$. We choose the fiducial parameters to be $Re=4\times10^5$, $Pe=4\times10^3$ and $Ri=-0.01$, in order to have a flow linearly stable according to the Solberg-Ho\"iland criterion (\ref{solberg}). These parameters are close to the ones used by \cite{PJS07} and are compatible with disc thermodynamics.
\subsection{Influence of initial perturbations}
In the first numerical experiment, we test the effect of the initial conditions, keeping the dimensionless parameters constant. In our initial conditions, we excite randomly the largest wavelengths of the vorticity field with an amplitude $Ap$. This initial condition is slightly different from the one used by \cite{PJS07,PSJ07}, who introduced perturbations in the temperature field. In each experiment we modify the amplitude of the initial perturbation $Ap$, and follow the time evolution of the total enstrophy (Fig.~\ref{subcrit}).
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{PerturbN2=0.01.eps}
\caption{Volume averaged enstrophy for several initial perturbation amplitudes ($Ap$), in arbitrary units. A subcritical instability is observed for finite amplitude perturbations between $Ap=0.2$ and $Ap=0.4$.}
\label{subcrit}%
\end{figure}
The numerical results indicate clearly the presence of a \emph{nonlinear} or \emph{subcritical} transition in the flow. Indeed, we find the ``instability'' only for large enough initial perturbations. This also confirms the result of \cite{JG06}: there is no linear instability in the presence of a weak radial stratification. Moreover, one of the reasons that \cite{JG06} did not observe any transition could be the weak initial perturbations they used (between $10^{-12}$ and $10^{-4}$). Using similar initial conditions, we did not observe any transition either. The existence of the instability in our simulations was checked by doubling the resolution ($1024\times 1024$), keeping all the dimensionless numbers constant. We found no significant difference between the high and low resolution results, showing that our simulations are converged for this problem. When the system undergoes a subcritical transition, it develops long-lived \emph{self-sustained} vortices, as shown for $Ap=1$ in Fig.~\ref{snaps} (top). For such a large Reynolds number, it is known that vortices survive for long times \citep[see e.g.][]{UR04}, even without any baroclinic effect. To check that the observed vortices were really due to the baroclinic term, we have carried out the exact same simulation but without stratification (Fig.~\ref{snaps} bottom). This simulation also shows the formation of vortices, but these are eventually dissipated after a few hundred shear times. At $t=500\,S^{-1}$ the difference between the simulations with and without baroclinicity becomes obvious.
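The initial perturbation used above can be sketched in a few lines. The snippet below is a minimal illustration (the grid size, mode cutoff and normalisation are arbitrary choices, not the exact SNOOPY implementation): it fills the largest Fourier modes of the vorticity with random amplitudes and rescales the result to a peak amplitude $Ap$.
\begin{verbatim}
import numpy as np

N, Ap, kcut = 512, 1.0, 4            # grid points, amplitude, mode cutoff
rng = np.random.default_rng(1)
k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers for a unit box
kx, ky = np.meshgrid(k, k, indexing="ij")
largest = (np.abs(kx) <= kcut) & (np.abs(ky) <= kcut)
w_hat = np.where(largest,
                 rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)),
                 0.0)
w = np.fft.ifft2(w_hat).real         # real-space vorticity perturbation
w *= Ap / np.abs(w).max()            # normalise to the requested amplitude
\end{verbatim}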
\begin{figure*}
\centering
\includegraphics[width=0.33\linewidth]{snapt10_wbc.eps}
\includegraphics[width=0.33\linewidth]{snapt100_wbc.eps}
\includegraphics[width=0.33\linewidth]{snapt500_wbc.eps}
\includegraphics[width=0.33\linewidth]{snapt10_wobc.eps}
\includegraphics[width=0.33\linewidth]{snapt100_wobc.eps}
\includegraphics[width=0.33\linewidth]{snapt500_wobc.eps}
\caption{Evolution of the vorticity in the fiducial case. The top row has a baroclinic term with $Ri=-0.01$; the bottom row has no baroclinicity. We show $t=10$ (left), $t=100$ (middle) and $t=500$ (right).}
\label{snaps}%
\end{figure*}
The vortices observed in these incompressible simulations lead to a very weak and strongly oscillating turbulent transport of angular momentum. One finds typically an \emph{inward} transport with $\alpha=\langle v_x v_y\rangle/SL^2\simeq -3\times 10^{-5}$. This is consistent with the results published by \cite{PSJ07} for global simulations. In several other numerical experiments (not shown here), we have noticed that the amplitude threshold required to trigger the instability depends on $Re$ and $Pe$, a larger Reynolds number being associated with a weaker initial perturbation. This dependency was pointed out by \cite{PJS07}, and it indicates that the amplitude threshold in a realistic disc could be very small (i.e. much smaller than the sound speed). Note however that this threshold might also be scale dependent, a problem which has not been addressed here.
\subsection{Influence of the Richardson number}
To understand how the instability depends on the amplitude of the baroclinic term, we have carried out several runs with $Re=4\times10^5$ and $Pe=4\times10^3$, varying $Ri$ from $-0.02$ to $-0.16$ and from $0.02$ to $0.16$. We compare the resulting shell-integrated enstrophy spectra ($\hat{\omega}^2_k/2$) in Fig.~\ref{spectrumN}. When $Ri<0$ and $|Ri|$ becomes larger, the instability tends to amplify smaller and smaller scales. In particular for $Ri=-0.02$, the dominant scale is clearly the box scale, as already observed for the fiducial run. This trend is also observed looking directly at vorticity snapshots (Fig.~\ref{Risnaps}). The sign of $Ri$ is also of importance for the onset of the SBI. To demonstrate this effect, we show the time history of the total enstrophy for positive and negative $Ri$ in Fig.~\ref{posRi}. In the cases of positive $Ri$, the total enstrophy decays until the flow becomes axisymmetric. Perturbations are then damped very slowly on a viscous timescale. We conclude from these results that a necessary condition for the SBI to appear is a flow which does not satisfy the Schwarzschild criterion (\ref{schwarzschild}).
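The shell-integrated spectra of Fig.~\ref{spectrumN} can be computed with a simple diagnostic. The sketch below (Python; a unit box and a power-of-two grid are assumed) bins $|\hat{\omega}_k|^2/2$ into integer wavenumber shells, so that the sum over shells recovers the volume-averaged enstrophy $\langle\omega^2/2\rangle$:
\begin{verbatim}
import numpy as np

def enstrophy_spectrum(w):
    """Shell-integrated enstrophy spectrum of a 2D vorticity field w."""
    N = w.shape[0]
    w_hat = np.fft.fft2(w) / N**2                  # normalised coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    shell = np.rint(np.hypot(kx, ky)).astype(int)  # integer |k| shells
    return np.bincount(shell.ravel(),
                       weights=0.5 * np.abs(w_hat.ravel())**2)

# By Parseval's theorem, enstrophy_spectrum(w).sum() == np.mean(w**2) / 2.
\end{verbatim}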
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{snapt500_N002.eps}
\quad\quad\quad
\includegraphics[width=0.4\linewidth]{snapt500_N016.eps}
\caption{Vorticity maps for $Ri=-0.02$ (left) and $Ri=-0.16$ (right) at $t=500$. As already shown by the spectra, small scales are dominant for larger $|Ri|$.}
\label{Risnaps}%
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{SpectrumN.eps}
\caption{Enstrophy spectrum as a function of the Richardson number $Ri$. As $|Ri|$ becomes larger, the instability moves to smaller scales.}
\label{spectrumN}%
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{posRi.eps}
\caption{Time history of the total enstrophy for positive and negative Richardson numbers. The instability is observed only for negative $Ri$.}
\label{posRi}%
\end{figure}
\subsection{Influence of thermal diffusion}
The importance of thermal cooling and thermal diffusion was already pointed out by \cite{PSJ07}. To check this dependency in our local model, we have considered several simulations with $Ri=-0.01$, varying $Pe$ from $Pe=20$ to $Pe=16000$. The resulting enstrophy evolution is presented for several of these runs in Fig.~\ref{posPe}. Looking at the snapshots of the vorticity field for these simulations, we find the SBI approximately for $50\le Pe \le 8000$. Assuming a typical vortex size $l\sim 0.25$ (see e.g. Fig.~\ref{snaps}), these Peclet numbers correspond to thermal diffusion times $3\,S^{-1}\le\tau_\mathrm{diff}\le 500\,S^{-1}$ over the vortex size, with an optimum found for $\tau_\mathrm{diff}\simeq 10\,S^{-1}$ ($Pe=250$). Note that the cooling time used by \cite{PSJ07} lies typically in this range of values.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{posPe.eps}
\caption{Time history of the total enstrophy for several Peclet numbers ($Pe$). The instability is observed for $50\le Pe \le 8000$.}
\label{posPe}%
\end{figure}
\subsection{Instability mechanism}
Since the flow is subcritically unstable, no linear analysis can capture this instability entirely. As shown by \cite{JG05}, an ensemble of shearing waves in a baroclinic flow is subject to a transient growth with an amplification going asymptotically like\footnote{Note that the amplification occurs independently of the sign of $Ri$, as already mentioned by \cite{JG05}.} $|Ri|t^{1-4Ri}$ for $|Ri|\ll 1$ when $\mu=\nu=0$, the waves ultimately decaying when $\mu>0$ or $\nu>0$. However, this tells us little about the nonlinear behaviour of such a flow. Indeed, it is known that barotropic Keplerian flows undergo arbitrarily large transient amplifications \citep[see e.g.][]{CZTL03}, but subcritical transitions are yet to be found in that case \citep{HBW99,LL05,JBSG06}. In the SBI case, the subcritical transition happens only for negative $Ri$, as shown above, in flagrant contradiction with the linear amplification described by \cite{JG05}. This is a strong indication that linear theory is of little use for describing this instability. To isolate a mechanism for a subcritical instability, one should start from a nonlinear structure observed in the simulations. For the SBI, these structures are self-sustained vortices. In order to understand how baroclinic effects can feed back on the vortex structure, we initialised our fiducial simulation with a Kida vortex of aspect ratio 4 \citep{K81}. Although this vortex is known to be an exact nonlinear solution of the inviscid equations of motion, it is slowly modified by the explicit viscosity and the baroclinic term, leading to a slow growth of the vortex. We show in Fig.~\ref{Kida-struct} the resulting vortex structure (left) and the associated baroclinic term $\Lambda N^2\partial_y \theta$ (right) at $t=100\,S^{-1}$. Note that these structures are quasi steady and evolve on timescales much longer than the shear timescale. It is clear from these snapshots that the baroclinic feedback tends to amplify the vorticity located inside the vortex, leading to the growth of the vortex itself.
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{kida-wz.eps}
\quad\quad\quad
\includegraphics[width=0.4\linewidth]{kida-bc.eps}
\caption{Vortex structure obtained starting from a Kida vortex. (left) vorticity map.
(right) baroclinic term ($\Lambda N^2\partial_y \theta$) map.}
\label{Kida-struct}%
\end{figure*}
To understand the origin of the baroclinic feedback, let us assume that the physical quantities evolve slowly in time, as is observed in the simulations, so we may write
\begin{eqnarray}
\omega &=& \omega(\epsilon t)\equiv\omega(\tau),\\
\theta &=& \theta(\epsilon t)\equiv \theta(\tau),
\end{eqnarray}
where $\epsilon$ is a small parameter and it is assumed that $|\theta_{\tau}|$ is of the same order as $|{\bf v}\cdot\nabla \theta |$, and similarly for $\omega$. We therefore have to solve:
\begin{eqnarray}
\epsilon\partial_{\tau}\omega+(\bm{v\cdot\nabla}-Sx\partial_y)\omega&=&\Lambda N^2\partial_y\theta+\nu\Delta\omega,\\
\label{eq_vort_0}\epsilon\partial_{\tau}\theta+(\bm{v\cdot\nabla}-Sx\partial_y)\theta&=&v_{x}/\Lambda +\mu\Delta\theta.
\end{eqnarray}
Since the Richardson number is assumed to be small, the baroclinic feedback in (\ref{eq_vort_0}) has to be small. As only this term can lead to an instability, we can assume it scales like $\epsilon$. The role of the viscosity is to prevent the instability from happening, damping vorticity fluctuations. Assuming we are in a regime in which the instability appears, the viscosity has to be of the order of the baroclinic term, scaling like $\epsilon$. At zeroth order in $\epsilon$, we are left with:
\begin{eqnarray}
\label{invisid_0}(\bm{v\cdot\nabla}-Sx\partial_y)\omega&=&0,\\
\label{entrop_0}(\bm{v\cdot\nabla}-Sx\partial_y)\theta&=&v_{x}/\Lambda+\mu\Delta\theta.
\end{eqnarray}
This system of equations describes a quite simple physical system: the vorticity field is a steady solution of the inviscid vorticity equation with a constant shear and without any baroclinic feedback, whereas the potential temperature results from the advection-diffusion of the background entropy profile by the flow structure. A family of steady solutions of the inviscid vorticity equation with a constant shear is known to consist of steady vortices, like the \cite{K81} vortex. More generally, it can be shown that any $\omega$ of the form
\begin{equation}
\omega=F(\psi),
\end{equation}
where $\psi$ is the stream function defined by $\bm{u}=\bm{e_z \times \nabla} \psi$, is a solution of (\ref{invisid_0}). In the following, we assume (\ref{invisid_0}) is satisfied by $\omega$ and we look for a solution to the entropy equation (\ref{entrop_0}). As a further simplification, we neglect the advection of the perturbed entropy $\theta$ by $\bm{v}$, assuming this effect is small compared to the advection by the mean shear, although this does not affect the argument leading to the possibility of instability (see below). Using a Fourier decomposition $\theta(\bm{x})=\int d\bm{k}\,\hat{\theta}(\bm{k})\exp(i\bm{k\cdot x})$, one gets
\begin{equation}
\label{adv_diff}Sk_y\frac{d\hat{\theta}}{dk_x}+\mu k^2\hat{\theta}=\hat{v}_x/\Lambda .
\end{equation}
A solution to this equation can easily be found using standard techniques for first order ordinary differential equations.
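As a sanity check of this solution, equation (\ref{adv_diff}) can also be integrated numerically along $k_x$ for a fixed $k_y$. The sketch below (Python, with $S=\Lambda=1$; the Gaussian $\hat{v}_x$ is purely a placeholder profile) integrates upward from large negative $k_x$, where $\hat{\theta}$ vanishes:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

S, mu, ky = 1.0, 1.0e-2, 2.0 * np.pi
vx_hat = lambda kx: np.exp(-(kx**2 + ky**2) / (2.0 * (2.0 * np.pi)**2))

def rhs(kx, theta):
    # d(theta_hat)/dkx = (vx_hat - mu*(kx^2 + ky^2)*theta_hat) / (S*ky)
    return (vx_hat(kx) - mu * (kx**2 + ky**2) * theta) / (S * ky)

sol = solve_ivp(rhs, [-100.0, 100.0], [0.0], dense_output=True, rtol=1e-8)
print(sol.sol(0.0)[0])   # theta_hat at kx = 0 for this ky
\end{verbatim}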
Assuming $|\hat{v}_x|$ decays to zero when $|\bm{k}|\rightarrow\infty$, one may write the result in the form
\begin{equation}
\label{ent_struct}\hat{\theta}(k_x,k_y)=\frac{1}{\Lambda |Sk_y|}\int_{-\infty}^\infty dk'_x \hat{v}_x(k'_x,k_y)G(k_x,k_x',k_y),
\end{equation}
where we have defined
\begin{eqnarray}
\nonumber G(k_x,k_x',k_y)&=&\mathcal{H}\Big((k_x-k'_x)Sk_y/|Sk_y|\Big) \times\\
& & \exp\Big[\frac{\mu}{Sk_y}\Big(\frac{k_x'^3-k_x^3}{3}+k_y^2[k_x'-k_x]\Big)\Big],
\end{eqnarray}
$\mathcal{H}$ being the Heaviside function. From this expression, it is formally possible to derive the time evolution of the vorticity and look for an instability using the first order term in (\ref{eq_vort_0}). However, this requires an explicit expression for the vorticity, which is \emph{a priori} unknown. Instead, we have chosen to compute the evolution of the total enstrophy of the system $\langle \omega^2/2\rangle=\int dxdy \, \omega^2/2$:
\begin{equation}
\label{final_vort_budget} \partial_t\langle \omega^2/2 \rangle=\Lambda N^2\langle\omega\partial_y\theta\rangle-\nu\langle|\bm{\nabla} \omega|^2 \rangle.
\end{equation}
Evidently, an instability can occur only if the first term on the right hand side is positive. Using (\ref{ent_struct}), it is possible to derive an analytical expression for this term:
\begin{eqnarray}
\nonumber \Lambda N^2\langle\omega\partial_y\theta\rangle&=& -4\pi^2 \Re\Bigg[N^2\int dk_x dk_y dk'_x\\
\label{BCfeedback}& &\frac{|k_y|}{|S||{\bm k'}|^2} \hat{\omega}^*(\bm{k})G(k_x,k'_x,k_y)\hat{\omega}(\bm{k'})\Bigg],
\end{eqnarray}
where ${\bm k'}= (k_x',k_y)$. In this expression, the existence of an instability is controlled by the Green's function $G(k_x,k'_x,k_y)$ and the shape of $\hat{\omega}(\bm{k})$. To our knowledge, it is not possible to derive any general criterion for the instability, as such a criterion would depend on the background vortex considered. It is however possible to derive a criterion in the limit of a large thermal diffusivity $\mu$. In this limit, $G$ is strongly peaked around $k'_x=k_x$, and one can assume in first approximation that $G$ is proportional to a Dirac distribution. More formally, in the limit of large thermal diffusivity one can approximate
\begin{equation}
G(k_x,k_x',k_y)\simeq\mathcal{H}\Big((k_x-k'_x)Sk_y/|Sk_y|\Big)\exp\Big[\frac{\mu}{Sk_y}k^2(k_x'-k_x)\Big]
\end{equation}
when $k_x\simeq k_x'$. It is then possible to derive an expression for the baroclinic feedback as a series expansion in the small parameter $\mu^{-1}$:
\begin{eqnarray}
\nonumber \Lambda N^2\langle\omega\partial_y\theta\rangle&\simeq& -4\pi^2\Re\Big[N^2\int d\bm{k}\Big(\frac{k_y^2}{\mu k^4}|\hat{\omega}(\bm{k})|^2-\\
& &\frac{Sk_y^3}{\mu^2 k^4}\hat{\omega}^*(\bm{k})\partial_{k_x}\big[\hat{\omega}(\bm{k})/k^2\big]+\dots \Big)\Big]\label{growth}
\end{eqnarray}
In this approximation, we find two competing effects. The first term on the right hand side appears when the thermal diffusion completely dominates over the shear in (\ref{adv_diff}). It has a positive feedback on the total enstrophy when $N^2<0$ and can be seen as a source term for the SBI. The second term involves a competition between shear and diffusion. It can be seen in first approximation as a phase mixing term, and the resulting contribution can be either positive or negative, depending on the vorticity background one considers. Physically, this term shears out the entropy structure created by the vortex and, if it is too strong, it kills the positive feedback of the first term.
One should note however that this term will not \emph{reverse} the sign of the baroclinic feedback but will just weaken it by randomising the phase coherence between $\omega$ and $\theta$. To understand the ``growth'' described by (\ref{growth}), one can derive an order of magnitude expression for the growth rate $\gamma=d\ln\,\langle \omega^2\rangle/dt$ by combining (\ref{final_vort_budget}) with (\ref{growth}):
\begin{equation}
\gamma\sim\frac{(-N^2)\sigma^2}{\mu}\phi_\omega (S\sigma^2/\mu)-\frac{\nu}{\sigma^2}.
\end{equation}
In this expression, $\sigma$ is the typical vortex size, with the assumption $\sigma\sim 1/k$. We have included a phase mixing term through the function $\phi_\omega$, as an extension of (\ref{growth}). This function depends on the background vortex solution $\omega(x,y)$ one chooses and cannot be written explicitly in general. As explained above, this phase mixing weakens the baroclinic feedback when $Pe$ is large, but it does not change its intrinsic nature. According to these properties, one expects $\phi_\omega(0)=1$ and $\phi_\omega(x) \rightarrow 0$ when $x\rightarrow \infty$. Although $\phi_\omega$ can only be determined for a specific vortex solution, one can still deduce a few general properties of the SBI. For a very large thermal diffusivity (very small $Pe$) one expects the SBI when
\begin{equation}
\frac{(-N^2)\sigma^4}{\nu\mu} > 1,
\end{equation}
since $\phi_\omega\sim 1$ in this limit. Although not quantitatively equivalent, this first limit corresponds to the $Pe>25$ threshold of our simulations. It is moreover similar to the criterion for the onset of convection (in the absence of shear) based on the Rayleigh number $Ra=-N^2\sigma^4/\nu\mu$. One finds another instability condition, due to phase mixing, when $\mu\rightarrow 0$. Assuming $\phi_\omega(x)$ decays faster than $1/x$ when $x\rightarrow\infty$, one gets a minimum value for $\mu$ by solving
\begin{equation}
\frac{\phi_\omega(S\sigma^2/\mu)}{\mu}>\frac{\nu}{(-N^2) \sigma^4},
\end{equation}
which can be calculated once $\phi_\omega$ is explicitly provided. In our simulations, this second limit corresponds to the $Pe<16000$ threshold. Although not entirely conclusive, this short analytical analysis tends to explain why a relatively strong thermal diffusion is required to get the SBI. This dependency was also pointed out by \cite{PSJ07}, who used both a thermal diffusion and a cooling relaxation time in the entropy equation. Such a cooling time can also be introduced in our analysis in place of the thermal diffusion, but this does not change the underlying physical process. We also note that in this analysis, we have assumed that a finite amplitude background vorticity satisfying (\ref{invisid_0}) existed in the first place, from which we have derived the baroclinic feedback on the same vorticity field. In this sense, this analysis describes a nonlinear instability and is different from the linear approach of \cite{JG05}. In addition, we recall that we made the approximation of only including the background shear in the advection term on the left hand side of equation (\ref{entrop_0}). We remark that the dominant driving term in the limit of large $\mu$ in (\ref{growth}) does not depend on this assumption, because the heat diffusion term dominates in this limit.
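The existence of an instability window in $Pe$ can be illustrated by evaluating the order-of-magnitude growth rate above for a toy mixing function. In the sketch below (Python, units $S=L=1$ so that $N^2=Ri$), the choice $\phi_\omega(x)=e^{-x}$ is purely illustrative; the resulting window is therefore only qualitative and is not expected to reproduce the measured thresholds:
\begin{verbatim}
import numpy as np

Ri, Re, sigma = -0.01, 4.0e5, 0.25     # fiducial parameters, vortex size
nu = 1.0 / Re                          # units S = L = 1
Pe = np.logspace(-2, 4, 400)
mu = 1.0 / Pe
phi = np.exp(-sigma**2 / mu)           # toy phase-mixing function
gamma = (-Ri) * sigma**2 / mu * phi - nu / sigma**2
window = Pe[gamma > 0.0]
print(f"gamma > 0 for Pe in [{window.min():.2g}, {window.max():.2g}]")
\end{verbatim}
As in the simulations, the growth rate is positive only over an intermediate range of Peclet numbers: diffusion that is too strong kills the buoyant driving, while diffusion that is too weak lets phase mixing dominate.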
\subsection{Phenomenological picture}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{streamline.eps}
\caption{Streamline in a vortex undergoing the SBI. A fluid particle is accelerated by buoyancy effects on the A--B and C--D branches. Cooling occurs on the B--C and D--A branches (see text).}
\label{streamline}%
\end{figure}
Knowing the basic mechanisms underlying the SBI, one can tentatively draw a phenomenological picture of this instability. For this exercise, let us consider a single streamline in a vortex subject to the SBI (Fig.~\ref{streamline}, left). For simplicity, we reduce the trajectory followed by a fluid particle in the vortex to a rectangle (Fig.~\ref{streamline}, right). Let us start with a fluid particle at A, moving radially inward toward B. This fluid particle is initially in thermal equilibrium, and we assume the motion from A to B is fast enough to be considered approximately adiabatic. As the particle moves inward, its temperature and density slowly deviate from the background values. If the background entropy profile is convectively unstable (condition \ref{schwarzschild}), the fluid particle moving inward is cooler and heavier than its surroundings, and is consequently subject to an inward acceleration due to gravity (the particle ``falls''). Once the particle reaches B, it drifts azimuthally toward C. On this trajectory, the background density and temperature are constant. Therefore, the fluid particle gets thermalised with the background and, if the cooling is fast enough, it reaches C in thermal equilibrium. Between C and D, the same buoyancy effect as the one between A and B is observed: as the particle moves from C to D, it is always hotter and lighter than its surroundings, creating a buoyancy force directed radially outward on the particle. Finally, a thermalisation episode happens again between D and A, closing the loop. From this picture, it is evident that fluid particles get accelerated by buoyancy forces on the A--B and C--D branches. In the end, considering many particles undergoing this buoyancy cycle, the vortex structure itself gets amplified, explaining our results. Moreover, the role played by cooling or thermal diffusion is evident. If the cooling is too fast, fluid particles tend to get thermalised on the A--B and C--D branches, reducing the buoyancy forces and the efficiency of the cycle. On the other hand, if no cooling is introduced, the particles cannot get thermalised on the B--C and D--A branches. In this case, the work done by buoyancy forces on the A--B trajectory will be exactly opposite to the work done on the C--D trajectory, neutralising the effect on average.
\section{2D SBI in compressible flows\label{compressibility}}
In this section we consider the 2D subcritical baroclinic instability within the framework of a compressible model. We accordingly adopt the compressible counterparts of equations (\ref{motiongeneral})--(\ref{divv}) for an ideal gas with constant ratio of specific heats $\gamma$, in the form
\begin{eqnarray}
\nonumber \frac{D \bm{u}}{Dt}&=&-\frac{1}{\rho}\bm{\nabla}P -\bm{\nabla}\Phi -2\bm{\Omega \times u}\\
\label{motiongeneralcomp}& &+\,2\Omega S x \bm{e_x}+\nabla\cdot\bm{T_v},\\
\label{entropygeneralcomp}\frac{D c^2}{Dt} &=& \frac{(\gamma-1)c^2}{\rho}\frac{D \rho}{Dt} + \frac{1}{\rho}\left({\cal H} + {\cal K}\Delta c^2\right),\\
\label{divvcomp}\partial_t\rho &=&-\bm{\nabla \cdot\rho u}.
\end{eqnarray}
Here $c$ is the isothermal sound speed, which is proportional to the square root of the temperature, $\bm{T_v}$ is a viscous stress tensor, ${\cal H}/(\gamma - 1)$ is the rate of energy production per unit volume, $\Phi$ is a general external potential, and ${\cal K} = \mu \rho$ is a constant, where, as in the incompressible case, $\mu$ is a thermal diffusivity.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{smallboxenst.eps}
\caption{Time history of the enstrophy of the perturbations for the small box. The uppermost curve is for ${\cal A}=0.25$ with no applied viscosity; the next uppermost is the corresponding case with an imposed viscosity. The lowermost curve is for ${\cal A}=0.025$ with applied viscosity, and the next lowermost curve is for the corresponding case with no viscosity.}
\label{smallboxenst}%
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{smallboxvisc_1.eps}
\caption{Vorticity map for the small box simulation with applied viscosity and ${\cal A}=0.25$ at $t=2000$.}
\label{smallboxvisc}%
\end{figure}
For simplicity we adopt a model that may be regarded either as assuming that there is a prescribed energy production rate ${\cal H}$ and that $\gamma$ is close to unity in (\ref{entropygeneralcomp}), so that the compressional heating term $\propto (\gamma-1)$ can be dropped, or as replacing the combination of the compressional heating term and ${\cal H}$ by a new specified energy production/cooling rate. In either view, as the energy production rate is specified, the model is simplified since equation (\ref{entropygeneralcomp}) becomes an equation for $c^2$ alone. We consider shearing box models of extent $L_x$ in the radial direction and $L_y$ in the azimuthal direction. A characteristic constant sound speed, $c_0$, defines a natural scale height $H=c_0/\Omega$. We present simulations using NIRVANA (see section \ref{Numerics}) for a small box with $L_x=L_y/2=0.6H$ and a large box with $L_x=L_y/2=1.2H$. Thus, in terms of the characteristic sound speed, when Keplerian shear is the only motion, relative motions in the small box are subsonic, whereas supersonic relative velocities occur in the big box. We have considered the two resolutions $(N_x,N_y)=(144,288)$ and $(N_x,N_y)=(288,576)$, and tests have shown that these give the same results. We have performed simulations with no applied viscosity and with an applied viscosity corresponding to a Reynolds number $L_x^2\Omega/\nu = 12500$. This was applied as a constant kinematic Navier-Stokes viscosity, but with the stress tensor acting on the velocity ${\bf v}$, the deviation from the background shear, rather than on ${\bf u}$. This is not significant for an incompressible simulation, but it ensures that unwanted phenomena such as the viscous overstability produced by perturbations of the background viscous stress \citep[see][]{KPL93} are absent. The diffusivity corresponding to the viscosity we used is of the same magnitude as the magnetic diffusivity used in \cite{FPLH07}, and as in their case we find that it is adequately resolved.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Bigboxenstrophy.eps}
\caption{Time history of the enstrophy of the perturbations for the large box. The uppermost curve is for ${\cal A}=0.5$ with no applied viscosity. The lower curve is for the corresponding case with an imposed viscosity.
}
\label{Bigboxenstrophy}%
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{bigboxvisc_1.eps}
\quad\quad\quad\quad
\includegraphics[width=0.4\linewidth]{bigboxvisc_2.eps}
\caption{Vorticity map (left) and surface density map (right) for the large box simulation with applied viscosity and ${\cal A}=0.5$ at $t=190$.}
\label{bigboxviscfig}%
\end{figure*}
In order to perform simulations corresponding to the incompressible ones, we need to impose gradients in the state variables that produce a non-zero Richardson number (appropriate for $\gamma =1$),
\begin{equation}
Ri =-\frac{1}{ S^2 \rho}\frac{\partial P}{\partial R}\frac{\partial}{\partial R}\ln\Big(\frac{P}{\rho}\Big).
\end{equation}
Because of the shearing box boundary conditions, the imposed background gradients have to be periodic. Thus we choose an initially uniform density $\rho= \rho_0$ and initial profiles for $P$ and $c^2$ of the form $P =\rho_0c_0^2 (1- C_p\sin(2\pi x/L_x))$ and $c^2 =c_0^2(1 -C_p\sin(2\pi x/L_x))$, where $C_p$ is a constant. For a given value of $H/L_x$, $Ri$ depends only on $C_p$, which was chosen to give a maximum value for this quantity of $-0.07$. Note that in our case, as $C_p$ is small, we have approximately $Ri\propto \cos^2(2\pi x/L_x)$, in contrast to the incompressible simulations where it is constant.
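The value of $C_p$ corresponding to a given target $Ri$ follows from the profiles above. The sketch below (Python) uses my own small-$C_p$ expansion of the compressible $Ri$, assuming uniform $\rho_0$, Keplerian $S=1.5\,\Omega$ and units $c_0=\Omega=1$; it is therefore an order-of-magnitude estimate rather than the exact choice made in the simulations. It inverts $Ri_{\rm min}\simeq -(c_0 C_p k/S)^2$ with $k=2\pi/L_x$:
\begin{verbatim}
import numpy as np

Ri_max = 0.07                       # target |Ri| at the profile extrema
S = 1.5                             # Keplerian shear, units Omega = c0 = 1
for Lx in (0.6, 1.2):               # small and large boxes, in units of H
    k = 2.0 * np.pi / Lx            # radial wavenumber of the profile
    Cp = np.sqrt(Ri_max) * S / k    # from Ri ~ -(c0*Cp*k/S)^2 cos^2(k x)
    print(f"Lx = {Lx}H  ->  C_p ~ {Cp:.3f}")
# -> C_p ~ 0.038 (small box) and ~ 0.076 (large box)
\end{verbatim}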
This demonstrates that our numerical setup is not subject to any linear instability, such as Rossby wave instabilities \citep{LLC99}. In the case with no applied viscosity the enstrophy settles at a low level; this is a consequence of a long-wavelength linear axisymmetric disturbance that, in this run, shows only the very weak decay provided by numerical viscosity. Fig. \ref{smallboxvisc} shows a vorticity map for the small box simulation with applied viscosity and ${\cal A}=0.25$. Anticyclonic vortices are clearly seen in this case, supporting the finding from the incompressible runs that a finite amplitude initial kick is required to generate them. The time history of the evolution of the enstrophy for large box simulations with ${\cal A}=0.5,$ with and without applied viscosity, is shown in Fig. \ref{Bigboxenstrophy}. Corresponding vorticity and surface density maps for the case with applied viscosity are shown in Fig. \ref{bigboxviscfig}. Again the inviscid case is more active, but nonetheless the corresponding maps look very similar. In these cases the anticyclonic vortices are present as in the small box case, but there is increased activity from density waves, as seen in the surface density maps. These waves could be generated by a process similar to the swing amplifier with vorticity source described by \cite{HP09a,HP09b}, although the structures we observe do not strictly correspond to a small scale turbulent flow. The density waves are associated with some outward angular momentum transport. However, the value of $\alpha$ measured from the volume average of the Reynolds stress fluctuates strongly at all times. Accordingly we plot running means as a function of time for the small box and large box with applied viscosity in Fig. \ref{alphaplot}. In the small box case there is a small residual time average $\sim 10^{-4},$ but in the large box this increases to $\sim 3\times 10^{-3}.$ This is clearly a consequence of the fact that the small box is close to the incompressible regime, whereas the large box, with a radial width effectively exceeding a scale height, allows the vortices to grow large enough to become significantly more effective at exciting density waves. \section{The SBI in 3D\label{3Dincomp}} The results presented previously were obtained in a 2D setup. It is however known that vortices like the ones observed in these simulations are linearly unstable to 3D perturbations due to parametric instabilities \citep{LP09}. The question of the survival of these vortices in 3D is therefore of great importance, as their destruction would lead to the disappearance of the SBI. As shown by \cite{LP09}, 3D instabilities involve relatively small scales compared to the vortex size when the aspect ratio\footnote{The aspect ratio of a vortex is defined as the ratio of the azimuthal size to the radial size of the vorticity patch generated by the vortex.} $\chi$ of the vortex is large. This leads to strong numerical constraints on the resolution one has to use to resolve both the 2D SBI and the 3D instabilities. To optimise the computational power, we have carried out all the 3D simulations using our spectral code in the Boussinesq approximation, with a setup similar to \S\ref{2Dincomp} and constant stratification. \subsection{Fiducial simulation\label{3Dnoise}} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{velocity3D-noise.eps} \caption{Time history of the maxima of the 3 components of the velocity field when starting from 2D+3D noise.
Once the SBI has formed vortices, 3D motions due to parametric instabilities appear and balance the 2D instability.} \label{vel3D-noise}% \end{figure} \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{vorticity3D-noise.eps} \includegraphics[width=0.49\linewidth]{vz3D-noise.eps} \caption{3D structure obtained from our fiducial 3D simulation at $t=800$. Left: vertical component of the vorticity. Right: vertical component of the velocity field. We find that 3D instabilities involving vertical motions and vertical structures are localised inside vortex cores.} \label{snap3D-noise}% \end{figure*} We first consider a box extended horizontally to mimic a thin disc with $L\equiv L_x=L_y=8$ and $L_z=1$. We set $Re=1.02\times 10^6$, $Pe=6400$ and $Ri=-0.01$ with a numerical resolution $N_x\times N_y\times N_z=1024\times 512 \times 128$ in order to resolve the small radial structures due to the 3D instabilities. The horizontal boundary conditions are identical to the ones used in 2D, and periodic boundary conditions are used in the vertical direction. Note that we do not include any stratification effect in the vertical direction, in order to reduce computational costs. Initial conditions are to be chosen with care, as 3D random perturbations will generally decay rapidly due to the generation of 3D turbulence everywhere in the box. To avoid this effect, we choose to start with a large amplitude 2D white noise ($\langle \sqrt{v^2}\rangle\sim0.1$) to which we add small 3D perturbations with an amplitude set to 1\% of the 2D perturbation amplitude. We show in Fig.~\ref{vel3D-noise} the time history of the extrema of the velocity field in such a simulation. As expected, vertical motions triggered by 3D instabilities appear at $t=200$. However, these ``secondary'' instabilities \emph{do not have a destructive effect on the SBI}. Instead, an almost steady state is reached at $t\simeq 600$. Looking at the snapshots in the ``saturated state'' at $t=800$ (Fig.~\ref{snap3D-noise}), we observe the production of large scale anticyclonic vortices, as in the 2D case. However, in the core of these vortices (regions of negative vorticity), we also observe the appearance of vertical structures. Looking at the vertical component of the velocity field, one sees clearly that these vertical structures and motions are localised \emph{inside} the vortex core. Although it is likely that the 3D instability observed in this simulation is related to the elliptical instability described by \cite{LP09}, we cannot formally prove this point, as the vortices found in the simulation differ from the ones studied by \cite{LP09}. Despite the presence of 3D turbulence in these vortices, the turbulent transport measured in this simulation is directed inward, with $\alpha\simeq -3\times 10^{-5}$. This is to be expected, as the 3D instabilities extract their energy primarily from the vortex structure and \emph{not} from the mean shear. However, allowing for compressibility will certainly change the turbulent transport, as vortices will then also produce density waves (see section \ref{compressibility}). \subsection{Evolution of a single vortex} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{VortexAspect.eps} \caption{Time history of $v_z$ extrema (top) and of the vortex aspect ratio $\chi$ (bottom).
Once the SBI has formed vortices, 3D motions due to parametric instabilities appear and balance the 2D instability.} \label{vel3D-kida}% \end{figure} \begin{figure*} \centering \includegraphics[width=0.33\linewidth]{kida3DSnap-66.eps} \includegraphics[width=0.33\linewidth]{kida3DSnap-75.eps} \includegraphics[width=0.33\linewidth]{kida3DSnap-83.eps} \caption{Vorticity map from a simulation starting with a 2D Kida vortex plus a weak 3D noise. We show a snapshot at $t=660$ (left), $t=750$ (middle) and $t=830$ (right). After the 3D instability burst, a weak vortex survives and is amplified by the SBI.} \label{kida3DSnap}% \end{figure*} To isolate the interplay between the SBI and 3D elliptical instabilities, we have considered the case of an isolated 2D vortex in a 3D setup. We take the same parameters as in the fiducial case, but we consider this time a Kida vortex of aspect ratio $\chi=6$ as an initial condition. Thanks to these initial conditions, it is possible to follow the evolution of this single vortex, and in particular to measure its aspect ratio as a function of time. This is done with a post-processing script, assuming that the vortex core boundary is located at $\omega_b=0.5[\mathrm{min}(\omega)+\mathrm{max}(\omega)]$. We then measure the vortex aspect ratio as the ratio $l_y/l_x$ of the azimuthal to the radial extent of the boundary defined above (a schematic implementation is sketched at the end of this subsection). One should note, however, that this procedure does not check that the vortex is still an ellipse. Moreover, it gives wrong values for $\chi$ if 3D perturbations create strong fluctuations of $\omega$. We show in Fig.~\ref{vel3D-kida} the result of this procedure, together with the extrema of the vertical velocity for comparison. Starting from $\chi=6$, the first effect of the SBI is to reduce the vortex aspect ratio. This is consistent with the Kida vortex solution, for which \begin{equation} \frac{\omega}{S}=-\frac{1}{\chi}\Big(\frac{\chi+1}{\chi-1}\Big). \end{equation} Therefore, the vorticity amplification in the vortex core due to the baroclinic feedback leads to a reduction of the aspect ratio. When $\chi\simeq 4$, we note the appearance of strong vertical motions. During these ``bursts'' of 3D turbulence, the aspect ratio measurement is no longer reliable. However, the vertical motions ultimately become weak, and a weaker vortex emerges with $\chi \sim 10$. The baroclinic feedback then amplifies the vortex and the cycle starts again. Interestingly, the $\chi<4$ Kida vortices described by \cite{LP09} were shown to be strongly unstable due to a ``horizontal'' instability. The behaviour observed in these simulations tends to indicate that the parametric modes unstable for $\chi>4$ are not fast enough to take over from the SBI. However, as $\chi$ passes through the critical value $\chi=4$, the horizontal modes become unstable and strongly damp the vortex in a few orbits. This interpretation is supported by the snapshots of the velocity field (Fig.~\ref{kida3DSnap}), where growing perturbations are localised inside vortex cores, as observed by \cite{LP09}. The ``burst'' of turbulence observed in the Kida vortex case might be a specific property of this vortex solution. Indeed, on longer timescales, the spatial distribution of vorticity might be modified, leading to a more progressive appearance of 3D instabilities, and a state more similar to the one observed in \S\ref{3Dnoise} could be achieved. However, this extreme example clearly demonstrates that 3D parametric instabilities do not lead to a total destruction of the vortices produced by the SBI.
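For reference, the aspect-ratio measurement described above can be reproduced with a few lines of post-processing. The sketch below is a minimal illustration, not the actual script used for Fig.~\ref{vel3D-kida}; the function name, the NumPy-based implementation and the array conventions are our own assumptions, and periodic shearing-box boundaries are ignored.
\begin{verbatim}
import numpy as np

def vortex_aspect_ratio(omega, x, y):
    # Core boundary threshold, as defined in the text:
    # omega_b = 0.5 * [min(omega) + max(omega)].
    omega_b = 0.5 * (omega.min() + omega.max())
    # Anticyclones have omega < 0, so the core lies below the threshold.
    # omega has shape (Nx, Ny), with omega[i, j] at (x[i], y[j]).
    ix, iy = np.nonzero(omega < omega_b)
    l_x = x[ix.max()] - x[ix.min()]   # radial extent of the core
    l_y = y[iy.max()] - y[iy.min()]   # azimuthal extent of the core
    return l_y / l_x                  # chi = l_y / l_x
\end{verbatim}
As noted above, such a procedure neither checks that the patch is elliptical nor behaves sensibly when 3D fluctuations dominate $\omega$, which is why the measurement is unreliable during the turbulent bursts.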
\section{Conclusions\label{conclusions}} In this paper, we have shown that a subcritical baroclinic instability exists in shearing boxes. Our simulations produce vortices with dynamics similar to the description of \cite{PJS07} and \cite{PSJ07}. We find that the conditions required for the SBI are (1) a negative Richardson number (or, equivalently, a disc unstable according to the radial Schwarzschild criterion); (2) a non-negligible thermal diffusion or a cooling function; and (3) a finite amplitude initial perturbation, as the instability is subcritical\footnote{Note that points (1) and (2) were already mentioned by \cite{PSJ07}, although in a somewhat different form.}. When computed in compressible shearing boxes, the vortices sustained by the SBI produce density waves which could lead to a \emph{weak} outward angular momentum transport ($\alpha\sim 10^{-3}$), a process suggested by \cite{JG05b}. However, we would like to stress that this transport cannot be approximated by a turbulent viscosity, as the transport due to these waves is certainly a non-local process. Several uncertainties remain in the compressible stratified shearing box model. In particular, the imposed temperature profile, subsequently maintained by a heating source, is somewhat artificial; moreover, angular momentum transport causes a significant density redistribution, making the local model deviate from its original form. Clearly one should therefore consider global compressible simulations with a realistic thermodynamic treatment to draw any firm conclusion on the long-term evolution and the angular momentum transport aspect of the SBI. Although the vortices produced by the SBI are anticyclones, the precise pressure structure inside these vortices is not as simple as the description given by \cite{BS95}. In particular, since the vorticity is of the order of the local rotation rate, vortices are not necessarily in geostrophic equilibrium, and pressure minima can be found in the centre of strong enough vortices. Moreover, these vortices interact with each other, which leads to complex streamline structures and vortex merging episodes. This leads us to conclude that the particle concentration mechanism proposed by \cite{BS95} might not work in its present form for SBI vortices. In 3D, vortices are found to be unstable to parametric instabilities. These instabilities generate random gas motions (``turbulence'') in vortex cores, but they generally do not lead to a complete destruction of the vortex structure itself. More importantly, the presence of a weak hydrodynamic turbulence in vortex cores will lead to a diffusion of dust and boulders, balancing the concentration effect described above. In the end, dust concentration inside these vortices might not be large enough to trigger a gravitational instability and collapse to form planetesimals. Given the instability criterion detailed above, one can tentatively explain why \cite{JG06} failed to find the SBI in their simulations. First, we would like to stress that our compressible simulations are no better resolved than theirs. Since these simulations use similar numerical schemes, the argument of a too small Reynolds number proposed by \cite{PSJ07} seems dubious. We note however that \cite{JG06} did not include two other important ingredients: a thermal diffusivity and finite amplitude initial perturbations. As shown in this paper, if either of these ingredients is missing, the flow never exhibits the SBI and rapidly returns to a laminar state.
Note that even if this argument explains the negative result of \cite{JG06}, it also indicates that the SBI should be absent from the simulations presented by \cite{KB03}, as neither thermal diffusion nor a cooling function was used (the initial conditions were not explicitly mentioned). This suggests that either the global baroclinic instability described by \cite{KB03} and the SBI are two separate instabilities, or strong numerical artefacts have produced an artificially large thermal diffusion in the simulations of \cite{KB03}. In the limit of a very small kinematic viscosity, the scale at which the instability appears has to be related to the diffusion length scale $(\mu/S)^{1/2}$. In a disc, one would therefore expect the instability around that scale, provided that the entropy profile has the right slope (condition 1). However, as the instability produces enstrophy, vortices are expected to grow until they reach a size of the order of the disc thickness, where compressible effects balance the SBI source term. We conclude from this argument that even if the SBI itself is found at small scale (e.g. because of a small thermal diffusivity), the end products of this instability are large scale vortices, with a size of the order of a few disc scale heights. We would like to stress that the present work constitutes only a \emph{proof of concept} for the subcritical baroclinic instability. To claim that this instability actually exists in discs, one has to study the disc thermodynamics in a self-consistent manner, a task which is well beyond the scope of this paper. Moreover, it is not possible at this stage to state that the SBI is a solution to the angular momentum transport problem in discs. Although this instability creates some transport through the generation of waves, this transport is weak and large uncertainties remain on its exact value. Finally, the coexistence of the SBI and the MRI \citep{BH91a} is for the moment unclear. In the presence of magnetic fields, vortices produced by the SBI might become strongly unstable because of magnetoelliptic instabilities \citep{MB09}. Although the nonlinear outcome of these instabilities is unknown, one can already suspect that vortices produced by the SBI will be weakened in the presence of magnetic fields. \begin{acknowledgements} GL thanks Charles Gammie, Hubert Klahr, Pierre-Yves Longaretti and Wladimir Lyra for useful and stimulating discussions during the Newton Institute program on the dynamics of discs and planets. We thank our referee G. Stewart for his valuable suggestions on this work. The simulations presented in this paper were performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England. GL acknowledges support by STFC. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction}\label{sec:intro} The magnetorotational instability (MRI, \citealp{BH91}) is considered the most promising mechanism for triggering turbulence and transporting angular momentum in accretion disks. The properties of the MRI depend on magnetic field geometry. Without external field, the MRI serves as a dynamo process that keeps dissipating and re-generating magnetic fields in a self-sustained manner (e.g., \citealp{SHGB96,Davis_etal10,Shi_etal10}). On the other hand, MRI turbulence becomes stronger when the disk is threaded by external (vertical) magnetic flux \citep{HGB95,BaiStone13a}. Such external magnetic flux may be generically present in accretion disks, especially in protoplanetary disks (PPDs), from both observational \citep{Chapman_etal13,Hull_etal14} and theoretical \citep{BaiStone13b,Bai13,Simon_etal13b} points of view. Numerical studies of the MRI turbulence have shown that it tends to generate long-lived large-scale axisymmetric banded density/pressure variations. They are termed zonal flows, with geostrophic balance between radial pressure gradients and the Coriolis force \citep{Johansen_etal09}. In PPDs, zonal flows have the attractive potential to concentrate dust particles into pressure bumps, which may serve as a promising mechanism for planetesimal formation \citep{Dittrich_etal13}, and also as dust traps to overcome the rapid radial drift of mm sized grains \citep{Pinilla_etal12}. Without external magnetic flux, the existence of zonal flows is robust based on local shearing-box simulations \citep{Simon_etal12a}, although they are not unambiguously identified in global simulations \citep{Uribe_etal11,Flock_etal12}. In the presence of net vertical magnetic flux, enhanced zonal flow has been reported from local shearing-box simulations in the ambipolar diffusion dominated outer regions of PPDs \citep{SimonArmitage14}. Such enhanced zonal flow is further found to be associated with the re-distribution of vertical magnetic flux \citep{Bai14}: flux is concentrated into thin shells in the low-density regions of the zonal flow, while the high-density regions have almost zero net vertical magnetic flux (see Figure 8 of \citealp{Bai14}). \begin{figure*} \centering \includegraphics[width=180mm]{demo.eps} \caption{Time evolution of the radial profiles of mean density $\bar{\rho}$ (top) and mean vertical magnetic field $\bar{B}_z$ (bottom) in the midplane region from the ideal MHD, vertically stratified simulations of \citet{BaiStone13a}. Left and right panels correspond to runs B2 (midplane $\beta_0=10^2$) and B4 (midplane $\beta_0=10^4$) in that paper. The average is taken azimuthally and vertically within $z=\pm2H$. The color scales are centered at the mean value, and span the same range relative to the mean. This Figure may be viewed in parallel with the top and bottom panels in Figure 7 of \citet{BaiStone13a}.}\label{fig:demo} \end{figure*} Magnetic flux concentration by MRI turbulence is evident in earlier shearing-box as well as global simulations containing net vertical magnetic flux, although it has not been systematically studied in the literature. For instance, we show in Figure \ref{fig:demo} the time evolution of radial profiles for the mean gas density $\bar{\rho}$ and the mean vertical magnetic field $\bar{B}_z$ around disk midplane extracted from runs B2 and B4 in \citet{BaiStone13a}. These are isothermal ideal magnetohydrodynamic (MHD) stratified shearing-box simulations of the MRI in the presence of relatively strong net vertical magnetic flux. 
The net vertical field is characterized by $\beta_0$, the ratio of gas to magnetic pressure (of the net vertical field) at the disk midplane, with $\beta_0=100$ and $10^4$ respectively. We see from the top panels that strong zonal flows are produced, with density variations of about $30\%$ and $5\%$ around the mean values respectively. The bottom panels show the corresponding magnetic flux distribution. Concentration of magnetic flux in the low-density region of the zonal flow is obvious when $\beta_0=10^2$, and the high-density region contains essentially zero net vertical magnetic flux. With the weaker net vertical field $\beta_0=10^4$, magnetic flux concentration is still evident but weaker. We emphasize that the systems are highly turbulent, with levels of density and magnetic fluctuations much stronger than the mean variations (see Figures 3 and 4 of \citealp{BaiStone13a}). In this work, we systematically explore the phenomenon of magnetic flux concentration by performing a series of local shearing-box simulations (Section 2), both in the ideal MHD regime and in the non-ideal MHD regime with ambipolar diffusion as a proxy for the outer regions of PPDs. All simulations are unstratified and include net vertical magnetic flux. A phenomenological model is presented in Section 3 to address the simulation results. Using this model, we systematically explore parameter space in Section 4. While we focus on unstratified simulations in this work, we have tested that the phenomenological model can be applied equally well to stratified simulations such as shown in Figure \ref{fig:demo}. A possible physical mechanism for magnetic flux concentration, together with its astrophysical implications, is discussed in Section 5. We conclude in Section 6. \section[]{Magnetic Flux Concentration in Shearing-Box Simulations}\label{sec:shear} We first perform a series of unstratified 3D shearing-box simulations using the Athena MHD code \citep{Stone_etal08}. The orbital advection scheme \citep{StoneGardiner10} is always used to remove location-dependent truncation error and increase the time step \citep{FARGO,Johnson_etal08}. The MHD equations are written in Cartesian coordinates for a local disk patch in the corotating frame with angular velocity $\Omega$. With $(x, y, z)$ denoting the radial, azimuthal and vertical coordinates, the equations read \begin{equation} \frac{\partial\rho}{\partial t}+\nabla\cdot(\rho{\boldsymbol v})+v_K\frac{\partial\rho}{\partial y}=0\ ,\label{eq:cont} \end{equation} \begin{equation} \frac{\partial\rho{\boldsymbol v}}{\partial t}+v_K\frac{\partial\rho{\boldsymbol v}}{\partial y} +\nabla\cdot(\rho{\boldsymbol v}{\boldsymbol v}+{\sf T})= -\frac{1}{2}\rho\Omega v_x{\boldsymbol e}_y +2\rho\Omega v_y{\boldsymbol e}_x\ ,\label{eq:momentum} \end{equation} \begin{equation} \frac{\partial{\boldsymbol B}}{\partial t}+v_K\frac{\partial{\boldsymbol B}}{\partial y} =-\frac{3}{2}B_x\Omega{\boldsymbol e}_y+\nabla\times\bigg[{\boldsymbol v}\times{\boldsymbol B} +\frac{({\boldsymbol J}\times{\boldsymbol B})\times{\boldsymbol B}}{\gamma\rho_i\rho}\bigg] \ ,\label{eq:induction} \end{equation} where ${\sf T}\equiv(P+B^2/2){\sf I}-{\boldsymbol B}{\boldsymbol B}$ is the total stress tensor, and $\rho$, $P$, $v_K$, ${\boldsymbol v}$ and ${\boldsymbol B}$ denote gas density, pressure, background Keplerian velocity, background-subtracted velocity, and magnetic field, respectively. We adopt an isothermal equation of state $P=\rho c_s^2$, with $c_s$ being the sound speed.
The unit for magnetic field is such that the magnetic permeability $\mu=1$, and ${\boldsymbol J}=\nabla\times{\boldsymbol B}$ is the current density. The disk scale height is defined as $H\equiv c_s/\Omega$. We set $\rho_0=\Omega=c_s=H=1$ in code units, where $\rho_0$ is the mean gas density. The last term in the induction equation is due to ambipolar diffusion (AD), with $\gamma$ being the coefficient for momentum transfer in ion-neutral collisions, and $\rho_i$ the ion density. The strength of AD is measured by the Elsasser number $Am\equiv\gamma\rho_i/\Omega$, the frequency at which a neutral molecule collides with the ions, normalized to the disk orbital frequency \citep{ChiangMurrayClay07}. We consider both the ideal MHD regime, which corresponds to $Am\rightarrow\infty$, and the non-ideal MHD regime with $Am\sim1$, appropriate for the outer regions of PPDs \citep{Bai11a,Bai11b}. \begin{figure*} \centering \includegraphics[width=180mm]{unstrat_fid1.eps} \caption{Time evolution of the radial profiles of main diagnostic quantities from our fiducial ideal MHD run ID-4-16 (left) and fiducial non-ideal MHD run AD-4-16 (right). For each run, from top to bottom, we show the evolution of mean density $\bar{\rho}$, normalized mean vertical magnetic field $\bar{B}_z/B_{z0}$, magnetic pressure $P_B$, Maxwell stress $M_{xy}$, and turbulent velocity fluctuation $\delta v^2=\delta v_x^2+\delta v_z^2$.}\label{fig:shbox-fid} \end{figure*} All our simulations contain a net vertical magnetic field $B_{z0}$, measured by the initial plasma $\beta_0=2P_0/B_{z0}^2$, the ratio of gas pressure to the magnetic pressure of the net vertical field. We perform a total of 9 runs, listed in Table \ref{tab:shbox}. Typical simulation run times range from $T=1080\Omega^{-1}$ ($\sim172$ orbits) to $T=2700\Omega^{-1}$ ($\sim430$ orbits). Physical run parameters include $\beta_0$ and $Am$, while numerical parameters include simulation box size and resolution. Fiducially, we adopt a box size of $L_x\times L_y\times L_z=4H\times4H\times H$, resolved with $192\times96\times48$ cells for ideal MHD simulations. For non-ideal MHD simulations, we increase the resolution to $256\times128\times64$ cells, which helps better resolve the MRI turbulence (see discussion below). We also explore the effect of horizontal domain size by varying $L_x$ from $2H$ to $16H$ while keeping the same resolution (and $L_y=\max[L_x, 4H]$). We set $\beta_0=1600$ as the standard value, but we also consider $\beta_0=400$ and $6400$ for comparison. All simulations quickly saturate into MRI turbulence within a few orbits. Standard diagnostics of the MRI include the Maxwell stress \begin{equation} M_{xy}\equiv-B_xB_y\ , \end{equation} and the Reynolds stress $\rho v_xv_y$. Their time and volume averaged values, normalized by pressure, give the Shakura-Sunyaev parameters $\alpha_{\rm Max}$ and $\alpha_{\rm Rey}$ respectively. In Table \ref{tab:shbox}, we list these values for all our simulations, averaged from $t=360\Omega^{-1}$ onward. Also listed is the plasma $\beta$ parameter, the ratio of gas to magnetic pressure in the saturated state. We see that in both ideal and non-ideal MHD runs, $\alpha_{\rm Max}$ and $\alpha_{\rm Rey}$ increase with net vertical magnetic flux, as is well known \citep{HGB95,BaiStone11}. Also, they all roughly satisfy the empirical relation $\alpha\beta\approx1/2$ in both ideal MHD and non-ideal MHD cases \citep{Blackman_etal08,BaiStone11}, where $\alpha=\alpha_{\rm Max}+\alpha_{\rm Rey}$.
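As a quick sanity check of this empirical relation, one can multiply the tabulated $\alpha=\alpha_{\rm Max}+\alpha_{\rm Rey}$ by $\langle\beta\rangle$ for each run. The snippet below is a minimal sketch using values transcribed from Table \ref{tab:shbox}; only the numbers are from the table, the rest is illustrative.
\begin{verbatim}
# Check the empirical relation alpha * <beta> ~ 1/2 using
# (alpha_Max, alpha_Rey, <beta>) transcribed from Table 1.
runs = {
    "ID-4-4":  (0.15,   0.055,  3.6),
    "ID-4-16": (0.070,  0.026,  7.4),
    "ID-4-64": (0.034,  0.010,  14.0),
    "AD-4-16": (1.5e-3, 9.1e-4, 2.1e2),
    "AD-4-64": (4.8e-4, 6.0e-4, 5.0e2),
}
for name, (a_max, a_rey, beta) in runs.items():
    print(name, round((a_max + a_rey) * beta, 2))
# The products cluster between ~0.5 and ~0.7, i.e. alpha * beta is
# of order 1/2 for runs spanning two decades in alpha.
\end{verbatim}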
To ensure that our simulations have sufficient numerical resolution, we have computed the quality factor $Q_z\equiv\lambda_{\rm MRI}/\Delta z$ \citep{Noble_etal10}, where $\lambda_{\rm MRI}$ is the characteristic MRI wavelength based on the total (rms) vertical magnetic field strength. For the most unstable wavelength in ideal MHD, we have $\lambda_{\rm MRI}=9.18\beta_z^{-1/2}H$ \citep{HGB95}, where $\beta_z=2P/\overline{B_z^2}$ is the plasma $\beta$ parameter for the vertical field component. In non-ideal MHD with $Am=1$, we find $\lambda_{\rm MRI}=17.47\beta_z^{-1/2}H$ \citep{BaiStone11}. Similarly, one can define $Q_y\equiv\lambda_c/\Delta y$, where $\lambda_c$ is defined the same way as $\lambda_{\rm MRI}$ but using $\beta_\phi$ instead of $\beta_z$. In general, the MRI is well resolved when $Q_y\gtrsim20$ and $Q_z\gtrsim10$ \citep{Hawley_etal11}. We find that all our simulations are well resolved based on this criterion (a short numerical sketch of this diagnostic is given at the end of \S\ref{ssec:fid1}). Further details are provided in Section \ref{sec:param}. In Figure \ref{fig:shbox-fid}, we show the time evolution of the radial profiles of various diagnostic quantities for our fiducial ideal and non-ideal MHD runs. The results are discussed below. Other runs will be discussed in Section \ref{sec:param}. \subsection[]{The Ideal MHD Case}\label{ssec:fid1} In this ideal MHD run, we see that a very strong zonal flow is produced, with density contrast up to $50\%$. At the same time, there is a strong anti-correlation between gas density and mean vertical magnetic field, with most magnetic flux concentrated in the low-density regions. In this fiducial run with radial box size $L_x=4H$, there is just one single ``wavelength" of density and mean field variations. The phase of the pressure maxima drifts slowly in a random way over long timescales, accompanied by a slow radial drift of the mean field profile; but overall, the system achieves a quasi-steady state in terms of density and magnetic flux distributions. Combined with Figure \ref{fig:demo}, we see that both unstratified and stratified shearing-box simulations show a similar phenomenon of magnetic flux concentration and zonal flows. This fact indicates that the same physics is operating, independent of buoyancy. We stress that the mean vertical field, even in the highly concentrated region, is much weaker than the rms vertical field from the MRI turbulence. Therefore, the physics of magnetic flux concentration lies in the intrinsic properties of the MRI turbulence. From the last three panels on the left of Figure \ref{fig:shbox-fid}, we see that the action of the Maxwell stress (which is the driving force of the zonal flow) is bursty. Such behavior corresponds to the recurrence of channel flows followed by dissipation due to magnetic reconnection \citep{SanoInutsuka01}. The Maxwell stress is most strongly exerted in regions where magnetic flux is concentrated. Interestingly, magnetic pressure shows bursty behavior similar to the Maxwell stress, but its strength does not show obvious signs of radial variation. On the other hand, turbulent velocity in the $x-z$ plane, given by $\delta v^2=\delta v_x^2+\delta v_z^2$, is strongest in regions with weaker magnetic flux during each burst\footnote{By contrast, we find $\delta v_y^2$ (after removing the zonal flow) peaks in regions with magnetic fluctuation.}. While we are mostly dealing with time-averaged quantities in this work, one should keep in mind such variability on timescales of a few orbits.
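The quality-factor diagnostic introduced above reduces to a simple function of the plasma $\beta$ of the relevant field component. The following is a minimal sketch in code units (assuming $H=c_s=\Omega=1$ and the prefactors quoted earlier in this section; the function name and example numbers are our own):
\begin{verbatim}
def quality_factor(beta, dx, prefactor=9.18):
    """Q = lambda_MRI / dx, with lambda_MRI = prefactor * beta**-0.5
    in units of H. Use prefactor = 9.18 for ideal MHD and 17.47 for
    Am = 1; beta is beta_z for Q_z, or beta_phi for Q_y."""
    return prefactor * beta**(-0.5) / dx

# Illustrative example: beta_z = 100 on the fiducial ideal-MHD grid
# (48 cells per H vertically) gives Q_z ~ 44, comfortably above the
# Q_z > 10 threshold quoted in the text.
print(quality_factor(100.0, 1.0 / 48))
\end{verbatim}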
\begin{table*} \caption{List of all shearing-box simulations}\label{tab:shbox} \begin{center} \begin{tabular}{c|ccc|ccc|cc|cccc|c}\hline\hline Run & Box size ($H$) & $\beta_0$ & $Am$ & $\alpha_{\rm Max}$ & $\alpha_{\rm Rey}$ & $\langle\beta\rangle$ & $\Delta\rho/\rho_0$ & $\overline B_{z}^{\rm Max}/B_{z0}$ & $\alpha_m$ & $\alpha_t$ & $Q'$ & $\alpha_{xy}$ & $t\ (\Omega^{-1})$\\\hline ID-4-4 & $4\times4\times1$ & $400$ & $\infty$ & $0.15$ & $0.055$& $3.6$ & $0.26$ & $1.8$ & $0.22$ & $0.054$ & $-0.11$ & $-0.12$ & $360-660$\\ ID-4-16 & $4\times4\times1$ & $1600$ & $\infty$ & $0.070$ & $0.026$& $7.4$ & $0.37$ & $2.2$ & $0.086$ & $0.033$ & $-0.081$ & $-0.072$ & $360-720$\\ ID-4-64 & $4\times4\times1$ & $6400$ & $\infty$ & $0.034$ & $0.010$& $14$ & $0.23$ & $2.1$ & $0.032$ & $0.010$ & $-0.050$ & $-0.033$ & $600-780$\\\hline ID-2-16 & $2\times4\times1$ & $1600$ & $\infty$ & $0.083$ & $0.022$& $5.7$ & $0.014$ & $1.33$ & $--$ & $--$ & $--$ & $--$ & $480-600$\\ ID-8-16 & $8\times8\times1$ & $1600$ & $\infty$ & $0.069$ & $0.025$& $7.1$ & $0.44$ & $2.0$ & $0.014$ & $0.032$ & $-0.089$ & $-0.069$ & $1260-1500$\\ ID-16-16 & $16\times16\times1$ & $1600$ & $\infty$ & $0.070$ & $0.024$& $6.9$ & $0.29$ & $1.4$ & $--$ & $--$ & $--$ & $--$ & $1170-1350$\\\hline AD-4-16 & $4\times4\times1$ & $1600$ & $1$ & $1.5$E$-3$ & $9.1$E$-4$ & $2.1$E+$2$ & $0.061$ & $1.6$ & $5.6$E$-3$ & $1.1$E$-3$ & $-0.093$ & $2.6$E$-3$ & $1080-1440$\\ AD-4-64 & $4\times4\times1$ & $6400$ & $1$ & $4.8$E$-4$ & $6.0$E$-4$& $5.0$E+$2$ & $0.081$ & $2.6$ & $1.6$E$-3$ & $4.9$E$-4$ & $-0.064$ & $1.2$E$-3$ & $1200-1440$\\ AD-2-64 & $2\times4\times1$ & $6400$ & $1$ & $4.4$E$-4$ & $4.1$E$-4$ & $5.3$E$+2$ & $0.024$ & $2.0$ & $7.6$E$-3$ & $5.8$E$-4$ & $-0.048$ & $1.1$E$-3$ & $210-450$\\ \hline\hline \end{tabular} \end{center} \end{table*} \subsection[]{The Non-ideal MHD Case}\label{ssec:niruns} On the right of Figure \ref{fig:shbox-fid}, we show the time evolution of the radial profiles of the main diagnostic quantities from our fiducial non-ideal MHD simulation AD-4-16. Zonal flows and magnetic flux concentration are obvious from the plots. One important difference from the ideal MHD case is that the scale on which magnetic flux concentrates is much smaller: we observe multiple shells of concentrated magnetic flux whose width is around $0.5H$ or less. The shells may persist, split, or merge during the evolution, while their locations are well correlated with the troughs in the radial density profile. Many other aspects of the evolution are similar to the ideal MHD case, such as the action of the Maxwell stress and the distribution of $P_B$ and $\delta v^2$. These flux-concentrated shells closely resemble the shells of magnetic flux observed in the stratified simulations shown in Figure 8 of \citet{Bai14}. Again, the similarities indicate that the physics of magnetic flux concentration is well captured in unstratified simulations. In our unstratified non-ideal MHD simulations, the zonal flow is weaker than in the ideal MHD cases, with the amplitude of density variations typically $10\%$ or less. The level of radial density variations in stratified simulations is typically larger \citep{SimonArmitage14,Bai14}. Meanwhile, it appears that magnetic flux concentration is more complete in stratified simulations: most of the magnetic flux is concentrated into the shells, while other regions have nearly zero net vertical flux (again see Figure 8 of \citealp{Bai14}). Also, the flux-concentrated shells are more widely separated in stratified simulations.
Note that these stratified simulations contain an ideal-MHD, more strongly magnetized and fully MRI turbulent surface layer, which may affect the strength of the zonal flow at the disk midplane (via the Taylor-Proudman theorem) as well as the level of magnetic flux concentration. Nonetheless, addressing these differences is beyond the scope of this work. Finally, we note that the zonal flow and magnetic flux concentration phenomena were already present in our earlier AD simulations \citep{BaiStone11,Zhu_etal14b}. These simulations either focused on the Shakura-Sunyaev $\alpha$ parameter, or on the properties of the MRI turbulence, while the radial distribution of magnetic flux was not addressed. \section[]{A Phenomenological Model}\label{sec:model} In this section, we consider our fiducial run ID-4-16 for a detailed case study. We take advantage of the fact that the system achieves a quasi-steady state in its radial profiles of density and magnetic flux, and construct a phenomenological, mean-field interpretation of magnetic flux concentration and enhanced zonal flows. We use an overbar to denote quantities averaged over the $y-z$ dimensions (and a certain period of time), which have radial dependence. We use $\langle\cdot\rangle$ to represent time and volume averaged values in the entire simulation domain in the saturated state of the MRI turbulence. In Figure \ref{fig:prof} we show the radial profiles of some main diagnostic quantities. They are obtained by averaging from $t=360\Omega^{-1}$ to $720\Omega^{-1}$, during which the density and magnetic flux profiles approximately maintain constant phase. Detailed analyses are described below. \begin{figure*} \centering \includegraphics[width=160mm]{shbox_prof.eps} \caption{Radial profiles of various quantities in the saturated state of run ID-4-16, as indicated in the legends in each panel. Dashed-dotted lines are fits to the measured profiles based on the phenomenological model in Section 3.}\label{fig:prof} \end{figure*} \subsection[]{Force balance} The zonal flow is a result of geostrophic balance between the radial pressure gradient and the Coriolis force \begin{equation} \frac{c_s^2}{2\Omega}\frac{\partial\overline \rho}{\partial x}=\overline\rho\overline v_y\approx\rho_0\overline v_y\ . \end{equation} From the top left panel of Figure \ref{fig:prof}, we see that the above formula accurately fits the measured profile of $\overline v_y$. The pressure gradients are driven by radial variations of the Maxwell stress, balanced by mass diffusion \citep{Johansen_etal09}. Using $D_m$ to denote the mass diffusion coefficient, one obtains \begin{equation} \frac{2}{\Omega}\frac{\partial\overline M_{xy}}{\partial x}=-D_m\frac{\partial\overline\rho}{\partial x}\ .\label{eq:massdiff} \end{equation} Therefore, in a periodic box, the density variation should be anti-correlated with the Maxwell stress. Asserting $D_m\equiv\alpha_mc_sH$ and assuming $\alpha_m$ is a constant, we obtain \begin{equation}\label{eq:alpham} \frac{\Delta\overline M_{xy}}{\langle M_{xy}\rangle}\approx-\frac{\alpha_m}{2\alpha_{\rm Max}}\frac{\Delta\overline\rho}{\rho_0}\ , \end{equation} where $\Delta\overline A\equiv\overline{A}-\langle A\rangle$ for any quantity $A$. We can fit the mass diffusion coefficient based on Equation (\ref{eq:alpham}), and obtain $\alpha_m\approx1.2\alpha_{\rm Max}\approx0.086$. Also from the top left panel of Figure \ref{fig:prof}, we see that the fitting result agrees extremely well with the measured profile of $\overline M_{xy}$.
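The fit for $\alpha_m$ amounts to a linear least-squares regression of $\Delta\overline M_{xy}/\langle M_{xy}\rangle$ against $\Delta\overline\rho/\rho_0$, with the slope identified as $-\alpha_m/2\alpha_{\rm Max}$ via Equation (\ref{eq:alpham}). Below is a minimal sketch of such a fit; the function name, array conventions and use of NumPy are illustrative assumptions, as the actual fitting procedure behind Figure \ref{fig:prof} is not specified in more detail.
\begin{verbatim}
import numpy as np

def fit_alpha_m(rho_bar, Mxy_bar, rho0, alpha_max):
    # rho_bar, Mxy_bar: 1D radial profiles averaged over y, z, time.
    # Relative radial variations about the box averages.
    d_rho = (rho_bar - rho_bar.mean()) / rho0
    d_M = (Mxy_bar - Mxy_bar.mean()) / Mxy_bar.mean()
    # Least-squares slope of d_M versus d_rho; the relation above
    # gives slope = -alpha_m / (2 * alpha_Max).
    slope = np.polyfit(d_rho, d_M, 1)[0]
    return -2.0 * alpha_max * slope

# For run ID-4-16, such a fit should return alpha_m ~ 0.086,
# i.e. alpha_m ~ 1.2 alpha_Max, as quoted in the text.
\end{verbatim}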
\subsection[]{Magnetic Flux Evolution}\label{ssec:evolveB} The evolution of vertical magnetic flux is controlled by the toroidal electric field via the induction equation (\ref{eq:induction}) \begin{equation} \frac{\partial\overline{B}_z(x)}{\partial t}=-\frac{\partial\overline{E}_y}{\partial x}\ , \end{equation} where in ideal MHD, the toroidal electric field can be decomposed into \begin{equation} \overline{E}_y=\overline{E}_{y1}+\overline{E}_{y2}=\overline{v_xB_z}-\overline{v_zB_x}\ .\label{eq:Ey} \end{equation} In the above, the first term describes the advective transport of magnetic flux, which acts as a turbulent resistivity \begin{equation} \overline{E}_{y1}=\overline{v_xB_z}\approx\eta_t\overline{J}_y=-\eta_t\partial_x\overline{B}_z\ ,\label{eq:eta_t} \end{equation} where $\eta_t\equiv\alpha_tc_sH$ is the turbulent resistivity. The outcome is that accumulations of magnetic flux tend to be smeared out. We can fit the value of $\alpha_t$ from the profiles of $\overline B_z$ and $\overline E_{y1}$ to obtain $\alpha_t\approx0.033$, which is of the same order as $\alpha_{\rm Max}$. While the data are somewhat noisy, we see from the bottom left panel of Figure \ref{fig:prof} that the profile of $\overline E_{y1}$ is well fitted by Equation (\ref{eq:eta_t}). This is the basic principle for measuring the turbulent resistivity of the MRI \citep{GuanGammie09,LesurLongaretti09,FromangStone09}. The second term in (\ref{eq:Ey}) describes the generation of vertical field by tilting the radial field. Since we expect ${\overline v}_z=0$ and ${\overline B}_x=0$ in the MRI turbulence, its contribution must come from a correlation between $v_z$ and $B_x$, which is primarily responsible for magnetic flux accumulation. The fact that the system achieves a quasi-equilibrium state indicates that their sum $\overline E_y\approx0$. Therefore, the contribution from $\overline E_{y2}$ must balance the turbulent diffusion term $\overline E_{y1}$. This is indeed the case, as we see from Figure \ref{fig:prof}. \subsection[]{Turbulent Diffusivity}\label{ssec:diff} The saturated state of the system has a mean toroidal current $\overline J_y=-\partial\overline B_z/\partial x$ but zero mean toroidal electric field $\overline E_y\approx0$. Applying an isotropic Ohm's law to the system would therefore yield infinite conductivity, since a finite mean current coexists with a vanishing mean electric field. This is obviously not the case. The issue can be resolved if the turbulent conductivity/diffusivity is {\it anisotropic} with strong off-diagonal components. More generally, we write \begin{equation} \overline E_i=\eta_{ik}\overline J_k\ ,\label{eq:aniso} \end{equation} where $i$ and $k$ denote any of the $x, y, z$ components, and one sums over the repeated index $k$. Given the mean $\overline J_y$, we have analyzed all other components of the mean electric field. We find that the mean vertical electric field $\overline E_z$ is consistent with zero, while there is a non-zero mean radial electric field \begin{equation} \overline E_x=\overline E_{x1}+\overline E_{x2}=\overline{v_zB_y}-\overline{\delta v_y\delta B_z}\ . \end{equation} Note that in the second term $\overline E_{x2}$, we have removed the component $\overline v_y\overline B_z$, which corresponds to the advection of vertical field due to disk rotation and is physically irrelevant to the MRI turbulence. In the bottom right panel of Figure \ref{fig:prof}, we show the radial profiles of $\overline E_x$ and $\overline E_{x1}$. We see that $\overline E_x$ is approximately in phase with $-\overline E_{y1}$ and $\overline E_{y2}$.
This observation indicates that in the saturated state, the system is characterized by an anisotropic turbulent diffusivity that is off-diagonal, given by \begin{equation} \overline E_x\approx\eta_{xy}\overline J_y\ . \end{equation} We can fit the value of $\eta_{xy}\equiv\alpha_{xy}c_sH$ to obtain $\alpha_{xy}\approx-0.072\approx-\alpha_{\rm Max}$. In the bottom right panel of Figure \ref{fig:prof}, we see that although the fitting result is not perfect, it captures the basic trend of the radial variations of $\overline E_x$. This is satisfactory, since some features can be smoothed out in the time average owing to the (small) phase drift of the density/magnetic flux profiles. Anisotropic diffusivities in MRI turbulence have been noted by \citet{LesurLongaretti09}, who measured most components of the diffusivity tensor by imposing fixed amplitude mean field variations in Fourier space. In particular, they found that the value of $\eta_{xy}$ is typically negative\footnote{Note that they used a different coordinate system from ours. Our $\eta_{xy}$ corresponds to their $-\eta_{yx}$, and our $\eta_t$ corresponds to their $\eta_{xx}$.}, and the value of $|\alpha_{xy}|$ can be a substantial fraction of $\alpha_{\rm Max}$. Also, they found that $|\eta_{xy}|$ is typically a factor of several larger than the diagonal component, which is our equivalent of $\eta_t$. Our measurements of $\eta_{xy}$ are consistent with their results. \subsection[]{Connection between Anisotropic Diffusivity and Magnetic Flux Concentration} We see from the bottom right panel of Figure \ref{fig:prof} that the contribution to $\overline E_x$ is completely dominated by $\overline E_{x1}=\overline{v_zB_y}$, indicating a correlation between $v_z$ and $B_y$. In Section \ref{ssec:evolveB}, we saw that magnetic flux concentration is mainly maintained by $\overline E_{y2}=-\overline{v_zB_x}$, indicating an anti-correlation between $v_z$ and $B_x$. Since $B_y$ and $-B_x$ are correlated in the MRI turbulence (to give the Maxwell stress $-\overline{B_xB_y}>0$), it is not too surprising that the two correlations are related to one another: $\overline E_{y2}$ and $\overline E_{x1}$ are in phase, as we see in Figure \ref{fig:prof}. Our analysis suggests that magnetic flux concentration is a direct consequence of the anisotropic diffusivity/conductivity in the MRI turbulence. In addition to the turbulent resistivity given by $\overline E_{y1}$, another anisotropic component, resulting from correlations between $v_z$ and the horizontal magnetic field, contributes to both $\overline E_{y2}$ and $\overline E_{x1}$. The latter manifests itself as $\eta_{xy}$, while the former acts to concentrate vertical magnetic flux. \subsection[]{Analogies to the Hall Effect} We note that the generation of $\overline E_x$ from $\overline J_y$ in the presence of a mean vertical field is analogous to the {\it classical} Hall effect. If we empirically set $\eta_{xy}\equiv QB_{z0}$, the electric field in the saturated state may be written as \begin{equation} \overline{\boldsymbol E}\approx Q\overline{\boldsymbol J}\times\overline{\boldsymbol B}\ .\label{eq:Ehc} \end{equation} Since only $\overline J_y$ and $\overline B_z$ are non-zero, this leads to a net $\overline E_x$, consistent with our measurement. The analogy above prompts us to draw another analogy between the {\it microscopic} Hall effect and magnetic flux concentration, which was demonstrated in \citet{KunzLesur13}.
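Before turning to the microscopic analogy, it is worth writing out the only non-vanishing component of Equation (\ref{eq:Ehc}) explicitly (a one-line check added here for clarity, using $\overline B_z\approx B_{z0}$ to leading order in the flux variations): \begin{equation} \overline E_x\approx Q\,(\overline{\boldsymbol J}\times\overline{\boldsymbol B})_x=Q\,\overline J_y\overline B_z\approx QB_{z0}\overline J_y=\eta_{xy}\overline J_y\ , \end{equation} so that the negative fitted $\alpha_{xy}$ translates directly into a negative Hall-like coefficient $Q$.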
For the microscopic Hall effect, the Hall electric field can be written as \begin{equation} \overline{\boldsymbol E}^{h}\approx Q'\overline{{\boldsymbol J}\times{\boldsymbol B}}\ .\label{eq:Ehm} \end{equation} It generates a mean toroidal electric field via \begin{equation}\label{eq:Ehy} \overline E_y^{h}\approx Q'\overline{B_x(\partial_x{B_y})}\approx-Q'\frac{\partial\overline M_{xy}}{\partial x}\ . \end{equation} We can fit $\overline E_{y2}$ using the above relation to obtain $Q'\approx-0.080$ in code units. As seen in Figure \ref{fig:prof}, the radial profile of $\overline E_{y2}$ is fitted very well. Also note that both $Q$ and $Q'$ are negative based on our fitting results for $\overline E_x$ and $\overline E_{y2}$. While different phenomenological considerations are used to arrive at Equations (\ref{eq:Ehc}) and (\ref{eq:Ehm}), the directionality of the two electric field components is in line with the Hall-like interpretation. \begin{figure*} \centering \includegraphics[width=180mm]{ID_history.eps} \caption{Time evolution of the radial profiles of $\overline\rho$ (top) and $\overline B_z/B_{z0}$ (bottom) from our ideal MHD runs with different $\beta_0$: ID-4-4 (left) and ID-4-64 (right).}\label{fig:IDhist} \end{figure*} \begin{figure*} \centering \includegraphics[width=180mm]{ID_Lx_history1.eps} \caption{Time evolution of the radial profiles of $\overline\rho$ and $\overline B_z/B_{z0}$ from our ideal MHD runs with different $L_x$: ID-2-16 (top left), ID-8-16 (bottom left), ID-16-16 (right).}\label{fig:ID_Lx_hist} \end{figure*} Encouraged by the analogies above, if we insert Equation (\ref{eq:Ehy}) as an ansatz for $\overline E_{y2}$, then the evolution of vertical magnetic flux can be written as \begin{equation}\label{eq:Bcon} \frac{\partial\overline{B}_z}{\partial t}\approx\bigg(\eta_t +Q'\frac{d\overline M_{xy}}{d\overline{B}_z}\bigg)\frac{\partial^2\overline{B}_z}{\partial x^2}\ . \end{equation} In general, we expect $\overline M_{xy}$ to increase with net vertical flux until the net vertical field becomes too strong: $d\overline M_{xy}/d\overline B_z>0$ (for $B_z>0$). From our measurement, we have $Q'<0$. Therefore, the second term in Equation (\ref{eq:Bcon}) acts as an {\it anti}-diffusion of vertical magnetic flux. It is likely that this term dominates over turbulent diffusion at the initial evolutionary stage to trigger magnetic flux concentration, while turbulent diffusion catches up at later stages when sufficient magnetic flux concentration is achieved. Together with Equation (\ref{eq:alpham}), we see that the gradient of the Maxwell stress is responsible for both the launching of the zonal flow and magnetic flux concentration. This provides a phenomenological interpretation of why magnetic flux always concentrates toward low-density regions. Given our crude phenomenological treatment, however, we cannot provide a more detailed description of the flux concentration process and phase evolution, nor can we explain its saturation scale and amplitude without involving many unjustified speculations. Our Equation (\ref{eq:Bcon}) closely resembles Equation (26) of \citet{KunzLesur13}, which was used to explain magnetic flux concentration due to the {\it microscopic} Hall effect. The counterpart of our $Q'$ in their paper is positive. Therefore, strong concentration of magnetic flux occurs only when the Hall effect and net vertical field become sufficiently strong (so that $M_{xy}$ decreases with $B_z$).
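To make the anti-diffusive character of Equation (\ref{eq:Bcon}) explicit, one may linearize it about the background (a standard manipulation, added here for clarity): writing $\overline B_z=B_{z0}+\delta B\,e^{ikx+st}$ and treating $\eta_t$ and $d\overline M_{xy}/d\overline B_z$ as constants gives \begin{equation} s=-\bigg(\eta_t+Q'\frac{d\overline M_{xy}}{d\overline B_z}\bigg)k^2\ , \end{equation} so that radial modulations of the vertical flux grow, at any wavenumber, whenever $Q'\,d\overline M_{xy}/d\overline B_z<-\eta_t$.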
In our case, since $Q'$ is negative, magnetic flux concentration is expected even for relatively weak net vertical field. \subsection[]{Summary} In sum, we have decomposed the turbulent diffusivity of the MRI turbulence into two ingredients. There is a conventional, Ohmic-like turbulent resistivity $\eta_t$. In addition, we find correlations of $v_z$ with $B_y$ and $B_x$ in the presence of a vertical magnetic flux gradient. The former leads to an anisotropic diffusivity, which is analogous to the classical Hall effect. The latter effectively leads to anti-diffusion of vertical magnetic flux, which is responsible for magnetic flux concentration, and is analogous to the microscopic Hall effect. We emphasize that anisotropic turbulent conductivity/diffusivity is an intrinsic property of the MRI turbulence. While we draw analogies with the Hall effect, this represents a phenomenological approach and simply reflects our ignorance about the MRI turbulence. The readers should not confuse this analogy with the physical (classical or microscopic) Hall effect, which would lead to polarity dependence (on the sign of $B_{z0}$). Magnetic flux concentration, on the other hand, has {\it no} polarity dependence. \section[]{Parameter Exploration}\label{sec:param} The main results of the series of simulations we have performed to explore parameter space are summarized in Table \ref{tab:shbox}. We follow the procedure in Section \ref{sec:model} to analyze the properties related to zonal flows and magnetic flux concentration. In doing so, we choose a specific time period in each run where the density and magnetic flux profiles maintain approximately constant phase. These periods are listed in the last column of the table. In many cases, multiple periods can be chosen, and we confirm that the fitting results are insensitive to the period selection. To characterize the strength of the zonal flow and magnetic flux concentration, we further include in the table $\Delta\overline\rho/\rho_0$, the relative amplitude of radial density variations, and $\overline B_z^{\rm Max}/B_{z0}$, the ratio of the maximum vertical field in the time-averaged radial profile to its initial background value. Runs ID-2-16 and ID-16-16 never achieve a quasi-steady state in their density and magnetic flux distributions; we thus simply measure $\Delta\overline\rho/\rho_0$ and $\overline B_z^{\rm Max}/B_{z0}$ over some brief periods, leaving the other fitting parameters blank in the table. \subsection[]{Ideal MHD Simulations} All our ideal-MHD simulations have achieved numerical convergence based on the quality factor criterion discussed in Section \ref{sec:shear}. In particular, in the run with the weakest net vertical field, ID-4-64, we find $\overline Q_y>45$ and $\overline Q_z>25$ for any $x$, meaning that the resolution is about twice that needed to properly resolve the MRI. Runs with stronger net vertical field give even larger quality factors. Below we discuss the main simulation results. \subsubsection[]{Dependence on Net Vertical Field Strength} We first fix the simulation domain size ($L_x=4H$) and vary the strength of the net vertical magnetic field to $\beta_0=400$ and $\beta_0=6400$. The time evolution of $\overline\rho$ and $\overline B_z/B_{z0}$ in the two runs is shown in Figure \ref{fig:IDhist}. We see that, in general, an enhanced zonal flow requires relatively strong net vertical magnetic field. Our run ID-4-64 with $\beta_0=6400$ has a notably weaker density contrast of $\sim20\%$, compared with $\sim40\%$ in our fiducial run ID-4-16.
It also takes a longer time for strong concentration of magnetic flux to develop. This is in line with the vertically stratified simulations shown in Figure \ref{fig:demo}. In the limit of zero net vertical flux, the density contrast is further reduced to $\sim10\%$ \citep{Johansen_etal09,Simon_etal12a}. For the selected time periods, we find that the phenomenological description in Section \ref{sec:model} works well for all ideal MHD simulations. There is a systematic trend that the mass diffusion coefficient $\alpha_m$, the turbulent resistivity $\alpha_t$, $Q'$ and $|\alpha_{xy}|$ all increase with increasing net vertical field. In particular, $\alpha_t$ and $\alpha_{xy}$ roughly scale in proportion with $\alpha_{\rm Max}$. We do not extend our simulations to even weaker net vertical field, where the MRI would be under-resolved. On the other hand, we note that without net vertical magnetic flux, oppositely directed mean vertical magnetic fields tend to decay/reconnect, rather than grow spontaneously \citep{GuanGammie09}. Therefore, concentration of vertical magnetic flux occurs only when there is a net vertical magnetic field threading the disk. Combining our unstratified simulation results with the results from the stratified simulations shown in Figure \ref{fig:demo}, we expect strong concentration of magnetic flux and enhanced zonal flows to take place for net vertical fields with $\beta_0\lesssim10^4$. \subsubsection[]{Dependence on Radial Domain Size} In our fiducial run ID-4-16, only one single ``wavelength" of density and magnetic flux variations fits into our simulation box. We thus proceed to perform additional simulations varying the radial domain size, and show the time evolution of their density and magnetic flux profiles in Figure \ref{fig:ID_Lx_hist}. We first notice that when using a smaller box with $L_x=2H$, the zonal structures become much weaker. They appear to be more intermittent, have finite lifetimes, and undergo rapid and random radial drift. One can still see that magnetic flux is concentrated toward low-density regions, although the trend is less pronounced than in our fiducial run. Also, the system never achieves a quasi-steady state in its magnetic flux distribution. We note that most previous unstratified simulations of the MRI adopt an even smaller radial domain size with $L_x=H$ (e.g., \citealp{HGB95,Fleming_etal00,SanoInutsuka01,LesurLongaretti07,Simon_etal09}). Therefore, the intermittent features discussed above would make signatures of magnetic flux concentration hardly noticeable in these simulations. We also note that the phenomenon of magnetic flux concentration should occur in earlier unstratified simulations with relatively large radial domains, such as those of \citet{Bodo_etal08} and \citet{LongarettiLesur10}. Nonetheless, these works have mostly focused on the volume-averaged turbulent transport coefficients rather than sub-structures in the radial dimension. Enlarging the radial domain size to $L_x=8H$, we see that the system initially develops two ``wavelengths" of zonal structures ($t=300-1000\Omega^{-1}$), with magnetic flux concentrated into two radial locations corresponding to the density minima. Later on, however, the two modes merge into one single mode with a much stronger density variation. The magnetic flux in the two radial locations also merges to reside in the new density trough. From this time on, the system maintains a quasi-steady-state configuration.
We find that for the turbulent diffusivities, our model provides excellent fits, and the values of $\alpha_t$, $Q'$ and $\alpha_{xy}$ agree very well with those in the fiducial run ID-4-16. This indicates that the basic turbulent properties are well converged with simulation domain size, and that our phenomenological description of magnetic flux concentration works reasonably well in a wider simulation box. On the other hand, we find that Equation (\ref{eq:alpham}) no longer yields a good fit between the density and Maxwell stress profiles, leaving the value of $\alpha_m$ poorly measured (the reported value represents an underestimate). This is most likely due to the more stochastic nature of the forcing term (Maxwell stress) in a wider simulation box, which has been discussed in \citet{Johansen_etal09}. Further increasing the radial domain size to $L_x=16H$, we find that the system initially breaks into multiple zonal structures. Magnetic flux still concentrates toward low-density regions, but the density and magnetic flux profiles show long-term evolution. Even by running the simulation for more than 400 orbits, no quasi-steady configuration is found. The later evolution of the system is still dominated by a single ``mode" of zonal structure in the entire radial domain, but there are more substructures associated with multiple peaks of the magnetic flux distribution. The overall level of magnetic flux concentration is weaker, with typical $\overline B_z^{\rm Max}/B_{z0}\sim1.5$ or less, and the typical scale of an individual magnetic flux substructure is around $\sim2H$. While we may speculate that this simulation better represents realistic (fully-ionized) disks, we also note that the box size of this run is already large enough that the local shearing-sheet formulation would fail unless the disk is very thin (e.g., aspect ratio $H/R\lesssim0.03$), and we have not included vertical stratification. Overall, in the ideal MHD case, the properties of the zonal flow and magnetic flux concentration do not converge with box size in shearing-box simulations. Finally, we notice that evidence of magnetic flux concentration is already present in earlier global unstratified simulations with net vertical flux. \citet{Hawley01} found in his simulations the formation of a dense ring near the inner radial boundary and various low-density gaps (i.e., zonal flows) within the disk, which were tentatively attributed to a type of ``viscous" instability. \citet{SteinackerPapaloizou02} obtained similar results and identified the trapping of vertical magnetic flux in the density gaps, although they did not investigate further. While shearing-box simulation results do not converge with box size, these global unstratified simulation results lend further support to the robustness of magnetic flux concentration in more realistic settings. \subsection[]{Non-ideal MHD Simulations} \begin{figure*} \centering \includegraphics[width=160mm]{AD_prof1.eps} \caption{Radial profiles of various quantities in the saturated state of run AD-4-16, as indicated in the legends in each panel. Dashed-dotted lines are fits to the measured profiles based on the phenomenological model in Section 3. Note that the green curve in the bottom left panel now shows $E_y$ due to AD.}\label{fig:ADprof} \end{figure*} With strong AD, we first show in Figure \ref{fig:ADprof} the radial profiles of the main diagnostics from our fiducial run AD-4-16, with fitting results overplotted, which complements our discussion in Section \ref{ssec:niruns}.
We first notice from the top right panel that because the MRI turbulence is weaker due to AD, the mean vertical field dominates over the rms fluctuations of the vertical field in the flux-concentrated shells. This is also the case in most stratified shearing-box simulations for the outer regions of PPDs in \citet{Bai14}. Magnetic fluctuations in $B_x$ and $B_y$ do not show a strong trend of radial variation. Secondly, we find that for this run, magnetic flux concentration is still mainly due to turbulent motions. In the bottom left panel of Figure \ref{fig:ADprof}, we also show $\overline E_{y}^{\rm AD}$, the toroidal electric field resulting from AD. We see that the contribution from $\overline E_{y}^{\rm AD}$ is small compared with the other two components $\overline E_{y1}$ and $\overline E_{y2}$. Therefore, the phenomenological description in Section \ref{sec:model} is equally applicable in the non-ideal MHD case. It provides reasonable fits to the mean $\overline E_x$ and $\overline E_y$ profiles. We discuss the role of AD further in Section \ref{sssec:AD}. We also notice that while the radial density variation and Maxwell stress are still anti-correlated, they are not well fitted by relation (\ref{eq:alpham}). Correspondingly, the mass diffusion coefficient $\alpha_m$ is not very well measured. Again, this is likely due to the stochastic nature of the forcing term (Maxwell stress). As we see from Figure \ref{fig:shbox-fid} (3rd panel on the right), many bursts of the Maxwell stress are exerted over an extended range of the radial domain, covering multiple peaks and troughs in the magnetic flux profile. Although on average, regions with strong magnetic flux concentration have stronger Maxwell stress, the ``kicks'' they receive are not as coherent as in the ideal MHD counterpart (3rd panel on the left). In this situation, it is more appropriate to apply the stochastic description of the zonal flow in \citet{Johansen_etal09} rather than the simple form of Equation (\ref{eq:massdiff}). \begin{figure} \centering \includegraphics[width=90mm]{AD_history1.eps} \caption{Time evolution of the radial profiles of $\overline\rho$ and $\overline B_z/B_{z0}$ from our two non-ideal MHD runs with different radial domain size: AD-2-64 (top) and AD-4-64 (bottom).}\label{fig:ADhist} \end{figure} In Figure \ref{fig:ADhist}, we further show the time evolution of the density and magnetic flux profiles from two other runs, AD-2-64 and AD-4-64, with $\beta_0=6400$ and $Am=1$. We see that the evolutionary patterns from the two runs are very similar to each other. They are also qualitatively similar to our run AD-4-16 discussed earlier, with magnetic flux concentrated into thin shells whose sizes are $\lesssim0.5H$. We have also tested the results with a larger box size $L_x=6H$, and find very similar behavior to the smaller-box runs. This is very different from the ideal MHD case, and provides evidence that the properties of magnetic flux concentration converge with simulation box size down to $L_x=2H$ in unstratified simulations. The convergence is mainly due to the small width of the flux-concentrated shells and their small separation. Nevertheless, we again remind the reader that the properties of magnetic flux concentration and zonal flows can be different in the more realistic stratified simulations, as mentioned in Section \ref{ssec:niruns}.
In the bottom panels of Figure \ref{fig:ADhist}, we see that the properties of the zonal flow in our run AD-4-64 show long-term evolution over the more than 100 orbits of the run, and a stronger density contrast develops toward the end of the run (which again may relate to stochastic forcing). Similar long-term evolution behavior was also reported in the unstratified simulations of \citet{Bai14}. Despite the value of $\alpha_m$ being poorly determined, the other quantities $\alpha_t$, $\alpha_{xy}$ and $Q'$ are found to be similar between the two runs AD-2-64 and AD-4-64. Their values are a factor of $\sim2$ smaller than in run AD-4-16 with twice the net vertical field, consistent with expectations of weaker turbulence. In addition, we see that magnetic flux concentration is even more pronounced with the weaker net vertical field $\beta_0=6400$ than with $\beta_0=1600$. Combining these results with those from the stratified simulations of \citet{Bai14}, we see that strong magnetic flux concentration can be achieved with very weak net vertical fields, at least down to field strengths corresponding to $\beta_0=10^5$. We have also computed the quality factors for these two runs with $\beta_0=6400$, and find $\overline Q_y\gtrsim20$ over the entire simulation domain, $\overline Q_z\sim10-15$ in high density regions, and $\overline Q_z\sim20-30$ in low density regions. The small $\overline Q_z$ value in high density regions is mainly due to the weaker (rms) vertical field, hence larger $\beta_z$, resulting from magnetic flux concentration and zonal flows. We see that the relatively high resolution ($64$ cells per $H$ in $z$) that we have adopted for these non-ideal MHD simulations is necessary to guarantee proper resolution of the MRI over the entire simulation domain, especially the high-density regions of the zonal flow. Finally, by comparing run AD-4-16 with run ID-4-16, we see that the Maxwell stress $\alpha_{\rm Max}$ is reduced by a factor of $\sim50$ due to AD. On the other hand, we find that the amplitudes of $\overline E_{y1}$ and $\overline E_{y2}$ are reduced by just a factor of $\sim20$. Since both $\alpha_{\rm Max}$ and $\overline E_y$ result from quadratic combinations of turbulent fluctuations, this indicates that while the turbulence gets weaker, the correlation between $v_z$ and $B_x$ becomes tighter in the AD case. To quantify this, we further define \begin{equation} \delta_x\equiv\frac{\langle|\overline{v_zB_x}|\rangle} {\langle v_z^2\rangle^{1/2}\langle B_x^2\rangle^{1/2}}\ ,\quad \delta_y\equiv\frac{\langle|\overline{v_zB_y}|\rangle} {\langle v_z^2\rangle^{1/2}\langle B_y^2\rangle^{1/2}}\ , \end{equation} where the overbar indicates averaging over the horizontal and vertical domain at individual snapshots, and the angle bracket indicates further averaging over the radial domain and the selected time period in Table \ref{tab:shbox}. For all ideal MHD runs, we consistently find $\delta_x\sim0.07-0.09$ and $\delta_y\sim0.09-0.11$. For all non-ideal MHD runs, we find $\delta_x\sim0.10-0.11$ and $\delta_y\sim0.14-0.16$. It is clear that non-ideal MHD simulations give larger $\delta$ values. In the ideal MHD case, the actual correlations between $v_z$ and $B_x$, $B_y$ are weaker than indicated by the $\delta$ values due to stronger time fluctuations\footnote{If we take the time average before computing the absolute values, we obtain $\delta_{x,y}\sim0.01-0.03$ in the ideal MHD case, and $\delta_{x,y}\sim0.04-0.07$ in the non-ideal MHD runs.}.
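As an illustration of how these diagnostics are evaluated, the following is a minimal sketch (with an assumed array layout and synthetic random data, not our actual analysis pipeline) of the computation of $\delta_x$ from a series of snapshots:

\begin{verbatim}
# Minimal sketch (assumed array layout; synthetic data) of the
# delta_x diagnostic defined above. Arrays have shape (nt, nz, ny, nx);
# the overbar is an average over (y, z) at each snapshot, and the
# angle bracket a further average over x and time.
import numpy as np

def delta_corr(vz, b):
    bar = np.abs((vz * b).mean(axis=(1, 2)))   # |overline{v_z b}|(t, x)
    num = bar.mean()                           # average over t and x
    den = np.sqrt((vz**2).mean() * (b**2).mean())
    return num / den

rng = np.random.default_rng(0)
vz = rng.standard_normal((8, 16, 16, 32))
bx = 0.1 * vz + rng.standard_normal((8, 16, 16, 32))  # partially correlated
print("delta_x = %.3f" % delta_corr(vz, bx))
\end{verbatim}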
Recently, \citet{Zhu_etal14b} noticed that the MRI turbulence with AD has a very long correlation time in the vertical velocity (see their Figures 9 and 13). The more coherent vertical motion in the AD-dominated MRI turbulence might also be related to the stronger correlation between $v_z$ and $B_x$, $B_y$ and the efficient magnetic flux concentration. \subsubsection[]{Role of Ambipolar Diffusion in Magnetic Flux Concentration}\label{sssec:AD} As previously discussed, AD appears to play a minor role in magnetic flux concentration in run AD-4-16, as shown in the bottom left panel of Figure \ref{fig:ADprof}. On the other hand, we find that with a weaker net vertical field, as in run AD-4-64 ($\beta_0=6400$), AD acts to enhance the level of magnetic flux concentration. In Table \ref{tab:shbox}, we see that the value of $\overline B_z^{\rm Max}/B_{z0}$ is systematically higher in run AD-4-64 compared with run AD-4-16. In Figure \ref{fig:AD64prof}, we show the radial profiles of $\overline B_z$ as well as various components of $\overline E_y$. We see that vertical flux is squeezed into thinner shells with much sharper magnetic flux gradients compared with run AD-4-16 (top right panel of Figure \ref{fig:ADprof}). Very interestingly, the AD electric field $\overline E_y^{\rm AD}$ is mostly anti-correlated with $\overline E_{y1}$, suggesting that it plays an anti-diffusive role, and its contribution is comparable to that of $\overline E_{y2}$. We have also checked the simulations in \citet{Bai14}, where the midplane $\beta_0=10^4$--$10^5$, and found that again, the contribution from $\overline E_y^{\rm AD}$ to magnetic flux concentration is comparable to, and sometimes exceeds, that from $\overline E_{y2}$. \begin{figure} \centering \includegraphics[width=90mm]{AD64prof.eps} \caption{Same as the top right and bottom left panels of Figure \ref{fig:ADprof}, but for run AD-4-64.}\label{fig:AD64prof} \end{figure} AD is generally thought to be a diffusive process, which tends to reduce the magnetic field strength by smoothing out field gradients. However, unlike Ohmic resistivity, AD is highly anisotropic. It also preserves magnetic field topology, since it represents ion-neutral drift without breaking field lines. \citet{BrandenburgZweibel94} demonstrated that AD can in fact lead to the formation of sharp magnetic structures, especially near magnetic nulls. This dramatic effect was attributed to two factors: first, magnetic flux drifts downhill along the magnetic pressure gradient; and second, diffusion is reduced in weak-field regions. They also showed via a 2D example that even without magnetic nulls, sharp current structures can be formed. While the situation is different in our case, the stronger concentration of magnetic flux with sharp vertical flux profiles can be considered another manifestation of the tendency of AD to form sharp magnetic structures. A weaker net vertical field leads to weaker MRI turbulence, allowing the effect of AD to stand out more clearly. \section[]{Discussion}\label{sec:discussion} Our simulation results demonstrate that magnetic flux concentration and enhanced zonal flows are a robust outcome of the MRI in the presence of net vertical magnetic flux, at least in shearing-box simulations. We have also tested the results using an adiabatic equation of state with cooling, where we set the cooling time to satisfy $\Omega t_{\rm cool}=1$. We find exactly the same phenomenon as in the isothermal case, with strong zonal flows of similar amplitudes and strong flux concentration.
Magnetic flux concentration in low density regions of the MRI turbulence was also observed in \citet{Zhu_etal13}, where the low density region was carved by a planet. Concentration of magnetic flux enables the planet to open deeper gaps compared with the pure viscous case. Their results further strengthen the notion of magnetic flux concentration as a generic outcome of the MRI turbulence. In broader contexts, the interaction of an external magnetic field with turbulence has been studied since the 1960s. Observations of the solar surface show that magnetic flux is concentrated into discrete and intermittent flux tubes with intricate topology, where the field is above equipartition strength and convection is suppressed. They are separated by convective cells with very little magnetic flux. It is well understood, both theoretically and numerically, that in a convective medium, magnetic flux is expelled from regions of closed streamlines, and concentrates into flux tubes in between the convective cells (e.g., \citealp{Parker63,GallowayWeiss81,Nordlund_etal92}). In the interstellar medium, concentration of magnetic flux in MHD turbulence has also been suggested \citep{Vishniac95,LazarianVishniac96}, via a process which they referred to as turbulent pumping. Our findings differ in several respects from the formation of flux tubes. For example, the distribution of the mean vertical magnetic field is quasi-axisymmetric rather than patchy. Also, the level of concentration is modest, with the mean vertical field typically weaker than the turbulent field, and the overall distribution of magnetic energy is approximately uniform. Nevertheless, our findings add to the wealth of flux concentration phenomena, and deserve more detailed study in the future. \begin{figure} \centering \includegraphics[width=90mm]{schematic.eps} \includegraphics[width=90mm]{pattern1.eps} \caption{Top: schematic illustration of a possible mechanism for magnetic flux concentration due to the MRI, where the bold lines represent magnetic field lines. See the explanation in Section \ref{ssec:physical}. Bottom: a snapshot at $t=300\Omega^{-1}$ from run AD-2-64 showing the azimuthally averaged toroidal (upper) and vertical (lower) magnetic fields. Arrows indicate the azimuthally averaged in-plane velocity field (upper) and magnetic field (lower).}\label{fig:schematic} \end{figure} \subsection[]{A Possible Physical Picture}\label{ssec:physical} Here we describe a possible physical scenario for magnetic flux concentration in the MRI turbulence. It is schematically illustrated in the top panel of Figure \ref{fig:schematic}, which is divided into three stages. We consider the unstable axisymmetric linear MRI modes in the presence of a net vertical magnetic field (stage 1), the so-called ``channel flows''. The channel flows appear as two counter-moving planar streams, and are exact solutions even in the non-linear regime \citep{GoodmanXu94}. The vertical fields are advected by the streams in opposite radial directions, generating radial fields. The radial fields further generate toroidal fields due to the shear. As a result, oppositely directed radial and toroidal fields are produced and grow exponentially across each stream (stage 2). Eventually, the growth is disrupted by parasitic instabilities or turbulence \citep{PessahGoodman09,Latter_etal09}, effectively leading to enhanced reconnection of such strongly amplified, oppositely directed horizontal fields around each stream. The outcome is represented by two field loops in stage 3.
Eventually, these loops are dissipated, and we are back at stage 1. In the picture above, the material in the loops (stage 3) was originally threaded by net vertical flux. However, due to reconnection, this material is pinched off from the original vertical field lines. Therefore, the mass-to-flux ratio along these field lines decreases. In other words, magnetic flux is effectively concentrated into low-density regions. This mechanism resembles the idea of turbulent pumping \citep{Vishniac95,LazarianVishniac96}, but relies on the specific properties of the MRI. In brief, the reconnection process following the development of the channel flows effectively pumps out the gas originally threaded by vertical field lines, which results in magnetic flux concentration. As we have briefly discussed in Section \ref{ssec:fid1}, the evolution of the MRI shows recurrent bursty behavior characteristic of discrete channel flows on large scales, followed by rapid dissipation. The overall behavior is qualitatively similar to the cyclic picture outlined above. A more detailed study carried out by \citet{SanoInutsuka01} lends further support to this picture. In the presence of strong AD, the above picture is more easily visualized, since in the flux-concentrated shells the mean vertical field dominates the turbulent field. In the bottom panels of Figure \ref{fig:schematic}, we show a snapshot of azimuthally averaged field quantities from our run AD-2-64. We see that the flux-concentrated shells (at both $x\sim-0.7H$ and $x\sim0.4H$) show a clear signature of sinusoidal variations in $z$, indicating the development of channel flows. Given the mean vertical field strength with $\beta_0=6400$, the net vertical field in the flux-concentrated shells can be 2-4 times stronger, with $\beta_z\sim400-1600$. The corresponding most unstable wavelength is about $0.5-1H$, consistent with the observed features. In the upper panel, we see that toroidal fields are amplified to relatively strong levels ($\beta_y\sim100$). Oppositely directed toroidal fields are separated by sharp current sheets, ready for reconnection to take place. Also, the location of the current sheets approximately coincides with the location where the radial field in the channel mode changes sign, consistent with expectations. Admittedly, the saturated state of the MRI turbulence, especially in the ideal MHD case, contains a hierarchy of scales on which the processes described above may be taking place. The final result would be a superposition of loop formation and reconnection at all scales. The simple picture outlined here is only meant to be suggestive. More detailed studies are essential to better understand the physical reality of magnetic flux concentration in the MRI turbulence. \subsection[]{Implications for Magnetic Flux Transport} The properties of the MRI turbulence strongly depend on the amount of net vertical magnetic flux threading the disk \citep{HGB95,BaiStone13a}. Therefore, one key question in understanding the physics of accretion disks is whether they possess (or how they acquire) net vertical magnetic flux, and how magnetic flux is transported in the disks. Conventional studies of magnetic flux transport generally treat the turbulent diffusivity as an isotropic resistivity.
Balancing viscous accretion against isotropic turbulent diffusion, it is generally recognized that for a magnetic Prandtl number of order unity (appropriate for the MRI turbulence; e.g., \citealp{GuanGammie09,LesurLongaretti09,FromangStone09}), magnetic flux tends to diffuse outward in thin accretion disks \citep{Lubow_etal94a,GuiletOgilvie12,Okuzumi_etal14}. Our results indicate, at least for thin disks (where the shearing-sheet approximation is valid), that the distribution of magnetic flux in accretion disks is likely non-uniform. \citet{SpruitUzdensky05} showed that if the magnetic flux distribution is patchy, inward dragging of magnetic flux can be much more efficient because of reduced outward diffusion and enhanced angular momentum loss on discrete patches. While in our study magnetic flux concentrates into quasi-axisymmetric shells rather than discrete bundles, we may expect similar effects to operate as a way to help accretion disks capture and retain magnetic flux. Given the highly anisotropic nature of the MRI turbulence, our results also suggest that it is important to consider the full turbulent diffusivity/conductivity tensor in the study of magnetic flux transport. While the full behavior of this tensor is still poorly known, our results have already highlighted its potentially dramatic effect on magnetic flux evolution. Additionally, magnetic flux evolution can also be strongly affected by global effects, which require careful treatment of the disk vertical structure, as well as proper incorporation of various radial gradients that are ignored in the shearing box \citep{Beckwith_etal09,GuiletOgilvie14}. \subsection[]{Zonal Flow and Pressure Bumps} Our results suggest that magnetic flux concentration and zonal flows are intimately connected. In the context of global disks, radial variations of concentrated and diluted mean vertical field lead to variations of the Maxwell stress, or effective viscosity $\nu$. Steady state accretion demands $\nu\Sigma={\rm const}$ \citep{Pringle81}, that is, $\Sigma\propto\nu^{-1}$. Correspondingly, the radial profile of the surface density (hence the midplane gas density) is likely non-smooth. The density/pressure variations drive zonal flows as a result of geostrophic force balance. Therefore, the enhanced zonal flows reported in stratified shearing-box simulations with net vertical magnetic flux \citep{SimonArmitage14,Bai14} are likely a real feature in global disks. In PPDs, the radial pressure profile is crucial for the growth and transport of dust grains (e.g., \citealp{Birnstiel_etal10}), the initial stage of planet formation. Planetesimal formation via the streaming instability favors regions with small radial pressure gradients \citep{Johansen_etal07,BaiStone10c}. Sufficiently strong pressure variations may even reverse the background pressure gradient in localized regions to create pressure bumps, which are expected to trap particles or even planets (e.g., \citealp{KretkeLin12}). Numerical modeling indicates that such pressure bumps are needed in the outer regions of PPDs to prevent rapid radial drift of millimeter-sized grains \citep{Pinilla_etal12}. Realistic stratified global disk simulations with net vertical magnetic flux are numerically difficult, and the results can be affected by boundary conditions. Keeping these potential caveats in mind, radial variations of surface density and Maxwell stress are present in the recent global stratified simulations by \citet{SuzukiInutsuka14}, and pressure bumps are also observed in some of their runs.
Long-lived zonal flows acting as particle traps are also seen in the recent global unstratified simulations by \citet{Zhu_etal14b}. Therefore, we speculate that because of strong magnetic flux concentration, enhanced zonal flows have the potential to create pressure bumps in the outer regions of PPDs. \section[]{Conclusions} In this work, we have systematically studied the phenomenon of magnetic flux concentration using unstratified shearing-box simulations. In the presence of a net vertical magnetic field, the non-linear evolution of the MRI generates an enhanced level of zonal flows, which are banded, quasi-axisymmetric radial density variations in geostrophic balance between the radial pressure gradient and the Coriolis force. We find that vertical magnetic flux strongly concentrates toward the low density regions of the zonal flow, where the mean vertical field can be enhanced by a factor of $\sim2$. High density regions of the zonal flow have much weaker or even zero mean vertical field. In ideal MHD, we find that strong magnetic flux concentration and zonal flow occur when the radial domain size $L_x$ reaches $\sim4H$. The typical length scale of magnetic flux concentration is $\sim 2H$, but the general behavior of flux concentration does not show a clear sign of convergence with increasing simulation box size up to $L_x=16H$. In non-ideal MHD with strong ambipolar diffusion (AD), magnetic flux concentrates into thin shells whose width is typically less than $\sim0.5H$. AD facilitates flux concentration by sharpening the magnetic flux profiles, especially when the net vertical flux is weak. The properties of the system converge when the radial domain size reaches or exceeds $\sim2H$. Concentration of magnetic flux is a consequence of the anisotropic turbulent diffusivity of the MRI. In the saturated state, turbulent resistivity tends to smear out concentrated magnetic flux. This is balanced by an anti-diffusive effect resulting from a correlation $\overline{v_zB_x}$, which is analogous to the microscopic Hall effect. In addition, a correlation $\overline{v_zB_y}$ yields a radial electric field, mimicking the classical Hall effect. We provide a phenomenological description that reasonably fits the simulation results. The physical origin of magnetic flux concentration may be related to the recurrent development of channel flows followed by enhanced magnetic reconnection, a process which reduces the mass-to-flux ratio in localized regions. Systematic studies of turbulent diffusivities in the presence of net vertical magnetic flux are crucial to better understand the onset of magnetic flux concentration, together with its saturation amplitude. They are also important for understanding magnetic flux transport in general accretion disks. The association of magnetic flux concentration with zonal flows also has important consequences for the structure and evolution of PPDs. This relates to many aspects of planet formation, especially the trapping of dust grains and planetesimal formation. In the future, global stratified simulations are essential to provide a realistic picture of the distribution and transport of magnetic flux, as well as the global evolution of accretion disks. \acknowledgments We thank Charles Gammie, Hantao Ji, Julian Krolik, Dong Lai, Ramesh Narayan, John Papaloizou and Zhaohuan Zhu for useful conversations, and an anonymous referee for a useful report.
X.-N.B. is supported by NASA through Hubble Fellowship grant HST-HF2-51301.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. J.M.S. is supported by NSF grants AST-1312203 and AST-1333091. Computation for part of this work was performed on Stampede at the Texas Advanced Computing Center through XSEDE grant TG-AST140001. \bibliographystyle{apj}
\section{Introduction} \label{sec:intro} V838 Monocerotis (henceforth V$838$ Mon) is a dramatic stellar object: in 2002, it brightened nine magnitudes during three outbursts \citep{Brown2002,Goranskii2002}, experienced a variety of distinctive optical and infrared (IR) changes \citep{Wisniewski2003,Lynch2004,Rushton2005,Geballe2007}, and eventually morphed into an L-type supergiant \citep{Evans2003}. V838 Mon is estimated to be located at a distance of 6.2$\pm$1.2 kpc \citep{Sparks2008} and is embedded in a sparse, young cluster with an upper age limit of $\sim$25 Myr and an intervening reddening E(B-V)=0.85 \citep{Afsar2007, Tylenda2012}. Since its outburst, an unresolved B3V companion in the system has been detected through spectroscopic monitoring \citep{Munari2005}. Many theoretical mechanisms have been explored to explain the origin of V$838$ Mon's outbursts \citep{Lawlor2005,Munari2005,Retter2006,Tylenda2006}; the most likely scenario is a stellar merger between two progenitor stars in a formerly triple system \citep{Tylenda2006}. This scenario also predicts X-ray emission from the spin-up of the envelope produced by such a merger \citep{Soker2007}; however, such X-ray emission has not been seen in two epochs of Chandra imagery \citep{Antonini2010}. The light from V$838$ Mon's B3V companion is sufficient to account for the entire luminosity of the variable star measured on sky-survey photographs before its outburst \citep{Afsar2007} \citep[however, see ][for a discussion of the progenitor as its own B3V star]{Barsukova2010}. While V$838$ Mon is a decidedly rare object, several analogs have been reported in the literature. Two emerging classes of cool explosions are luminous red novae (LRNe) and intermediate luminosity red transients (ILRTs). Akin to V$838$ Mon, LRNe are stellar eruptions that have remained extremely cool through the outburst and include V1309 Sco \citep{Tylenda2011} and V4332 Sgr \citep{Martini1999}. Two extragalactic events have also been noted to be LRNe: one in the bulge of M31 \citep{Mould1990} and one in the lenticular galaxy M85 \citep{Kulkarni2007}. It has been proposed that LRNe are stellar mergers. ILRTs are similarly red but too luminous to be explained by stellar mergers: \eg, SN2008S in NGC 6946 \citep{Prieto2008}, NGC300-OT2008 \citep{Bond2009, Thompson2009}, and PTF10fqs in M99 \citep{Kasliwal2011}. Consistent with their discovery in grand spirals, it has been proposed that ILRTs represent electron-capture induced collapse in extreme Asymptotic Giant Branch stars \citep{Kochanek2011}. The local circumstellar environment of V$838$ Mon has been studied extensively since the initial outburst of the system. Based on an observed, short-lived (February -- March, 2002) intrinsic polarization in the system, the outburst geometry was likely non-spherical \citep{Wisniewski2003}. In October 2002, a temporary re-emergence of an intrinsic polarization component occurred. Oriented 90$^{\circ}$ from the original component, the new polarization suggests one of two changes to the system: either a change in the optical depth of the material surrounding the star(s) or a physical shift in the illuminating source(s) \citep{Wisniewski2003b}. Recent interferometric observations \citep{Chesneau2014} have indicated a local circumstellar geometry similar to that suggested by the polarimetric data. V838 Mon is also surrounded by a substantially broader region of nebular material, as diagnosed by the dramatic light echo illuminated by the outburst events \citep{Bond2003}.
Continued observations of V838 Mon's local circumstellar environment have traced the dynamical evolution of the system. Optical and IR photometric monitoring has suggested that the outburst ejecta has advanced past the location of the B3V binary, completely attenuating its signal \citep{Goranskiji2008}, and has condensed to form new circumstellar dust \citep{Wisniewski2008}. To explain the evolution of the spectroscopic and photometric properties observed over time, \citet{Lynch2004} proposed a simple, spherically symmetric model containing a central star with an expanding circumstellar shell that is cooling in a radiatively-dominated quasi-equilibrium manner; this model is consistent with spectral fits to optical and IR data from \citet{Lynch2004}. More recently, \citet{Lynch2007} expanded this conceptual framework to include five components: a central star, two photospheric shells that surround the central star, and two much cooler and more distant expanding shells of gas. Validating the central tenets of these models, including the number of components required and their expansion velocities, has remained elusive due to the previously limited number of epochs (two) of IR data available. In this paper, we discuss the continued evolution of the system in the context of the Lynch et al.~(2004, 2007) models and previously published photometric, spectroscopic, and spectro-polarimetric observations of the system. In \S\ref{sec:observ}, we present optical, near-IR, and mid-IR spectroscopic and mid-IR photometric observations of V$838$ Mon obtained between 2008 and 2012 at the Apache Point Observatory 3.5m, NASA IRTF 3m, and Gemini South 8m telescopes. In \S\ref{sec:halpha}, we show that a low level of H$\alpha$ emission has returned to the system, possibly due to excitation of the gas by the B3V companion as seen through the optically thinning gas and dust ejecta, or due to magnetic activity produced by an emerging dynamo effect from the proposed stellar merger. In \S\ref{sec:spectyp}, we spectrally type V$838$ Mon in both the optical (M7 supergiant) and near infrared (L3 supergiant) and discuss the discrepancy between the two identifications. In \S\ref{sec:irseds}, we fit our 2009 observations to the \citet{Lynch2004} model for the expanding warm dust envelope and find our data are consistent with the previous two epochs of data. Finally, in \S\ref{sec:results}, we discuss the implications of the results presented in the previous sections. \section{Observations} \label{sec:observ} Optical spectra were taken using the Astrophysical Research Consortium Echelle Spectrograph (ARCES) and the Dual Imaging Spectrograph (DIS) instruments on Apache Point Observatory's (APO) $3.5$m telescope between 2008 and 2012 (see Table \ref{tab:observation_table}). ARCES \citep{Wang2003} is a high resolution, cross-dispersed visible light spectrograph \footnote[16]{http://www.apo.nmsu.edu/arc35m/Instruments/ARCES/} that obtains spectra between 3600-10000 $\AA$ with a resolution of R$\sim$31500. Standard bias, flat field, and ThAr lamp exposures for the echelle were also obtained on every night. These data were reduced using standard techniques in IRAF\footnote[17]{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}.
DIS is a medium dispersion spectrograph with separate red and blue channels.\footnote[18]{http://www.apo.nmsu.edu/arc35m/Instruments/DIS/} Our DIS observations were taken using the default low-resolution blue and red gratings (B400/R300), yielding a resolution R$\sim$800 and a total effective wavelength coverage of $4500-9000 \AA$. Bias, flat field, and HeNeAr lamp exposures for DIS were obtained on every night, and nightly observations of the flux standard stars G191B2b, Feige34, or Feige110 were used to flux calibrate these data. These data were reduced using standard techniques in IRAF. We obtained two epochs of observations of V838 Mon (Table \ref{tab:observation_table}) using SpeX \citep{Rayner2003}, a medium-resolution $0.8$-$5.4$ $\mu$m spectrograph located at NASA's Infrared Telescope Facility (IRTF).\footnote[19]{http://irtfweb.ifa.hawaii.edu/$\sim$spex/} We used the 0$\farcs$3 x 15$\farcs$0 slit for our V838 Mon observations, providing R$\sim$2000 spectra from 0.8-2.5 $\mu$m, and also obtained observations of the A0V star HD 53205 to provide telluric correction. These data were reduced using Spextool \citep{Vacca2003,Cushing2004}. Mid-IR low resolution spectroscopy and imaging of V838 Mon were obtained with T-ReCS (Thermal-Region Camera Spectrograph), located at the Gemini South Observatory (GSO).\footnote[20]{http://www.gemini.edu/sciops/instruments/trec/} As summarized in Table \ref{tab:observation_table}, we obtained low resolution (R$\sim$100) spectroscopy in the N-band, narrow-band photometry near the N-band (Si-5: 11.7 $\mu$m and Si-6: 12.3 $\mu$m), and broad-band photometry in the Q-band (Qa: 18.3 $\mu$m). Standard telescope chopping and nodding was used to mitigate the effects of the thermal background of the sky in the mid-IR. These data were reduced using the \textit{midir} Gemini/IRAF software package. We also present two epochs of mid-IR photometry of V838 Mon obtained with the Broadband Array Spectrograph System (BASS), mounted on NASA's IRTF (see Table \ref{tab:observation_table}). BASS is an infrared array prism spectrograph \citep{Hackwell1990} that covers the $2.9$-$13.5$ $\mu$m spectral region simultaneously at a resolving power of 25-120, depending on wavelength, and records these data on 116 back-illuminated blocked impurity band Si:As detectors.\footnote[21]{http://www.aero.org/capabilities/remotesensing/bass.html} Our BASS observations were calibrated using $\alpha$ Tau as a reference star. We note that relative calibrations for the reference stars $\alpha$ Tau, $\alpha$ CMa, $\beta$ Gem, $\alpha$ Boo, and $\alpha$ Lyr have remained constant to about 1\% over repeated tests. \section{H$\alpha$ Evolution} \label{sec:halpha} The H$\alpha$ line has proven to be an important diagnostic of the evolution of V838 Mon's circumstellar environment, helping to trace the initial expansion of ejecta from the 2002 outbursts \citep{Wisniewski2003} and, in 2006, probing the likely interaction of this expanding ejecta with the B3V companion \citep{Munari2007}. We discuss the continued monitoring of H$\alpha$ between October 2008 and September 2012 below. In Figure~\ref{f:DIS_halpha}, we present three optical spectra of V$838$ Mon taken with the low resolution (R $\sim 800$) DIS spectrograph on APO. We detect no convincing evidence of H$\alpha$ emission at this resolution, which is consistent with conclusions drawn by \citet{Bond2009b}.
However, the three optical spectra of V$838$ Mon taken with the high resolution (R $\sim 31500$) Echelle spectrograph on APO during the same epoch as our low resolution spectra clearly reveal the presence of a low level of H$\alpha$ emission (Figure~\ref{f:ECHELLE_halpha}). This low level of emission is still present in our fourth epoch of high resolution spectra, obtained in 2012. We do not observe any significant variability in the H$\alpha$ emission. These results are consistent with the report of low level H$\alpha$ emission being present in 2009-epoch high resolution spectra by \citet{Tylenda2011}, and extend the time period over which the system exhibits this emission. One possible origin for this low level emission could be excitation of the expanding dust and gas envelope by the B3V companion. \citet{Bond2009b} suggests that the behavior of H$\alpha$ from 2002 to 2009, which included a brief return of the line appearing in emission followed by a return to an apparent pure absorption profile, was due to the B3V companion becoming completely engulfed by the expanding envelope. However, our high resolution spectra suggest that perhaps this envelope is not completely optically thick. Alternatively, the observed low level H$\alpha$ emission could be caused by magnetic activity in the primary star, which is an expected byproduct of the proposed stellar merger event of 2002 \citep{Soker2007}. We discuss the plausibility of each of these scenarios in \S\ref{sec:results}. \section{Spectral Typing} \label{sec:spectyp} To better understand the properties of the central star, we compare both the DIS optical spectrum and the SpeX IR spectrum with those of other cool objects. For the purposes of this discussion, we assume that the dust shell has little or no effect on the spectral features, and have corrected for a reddening of $E{(B-V)} = 0.85$. \subsection{Optical Spectral Type} \label{sec:opttype} As a starting point in our analysis, we compare the optical spectrum of V$838$ Mon to M5--M9 dwarf \citep{Bochanski2007} and giant \citep{Pickles1998} spectroscopic templates (shown in Figure~\ref{f:optical_comp}). We used the Hammer spectral typing software \citep{Covey2007} to estimate the optical spectral type based on the characteristics of M dwarf optical spectra. We obtain a best fit type of M7, consistent with the \citet{Tylenda2011} type of M6.3. An examination of the specific optical features, however, indicates that this type is primarily based on the strength of the TiO bands, and that most optical features are more sensitive to V$838$ Mon's surface gravity than to its surface temperature. The optical spectrum of V$838$ Mon shows strong, sharp TiO and VO features, with only weak CaH absorption at 6750\AA~and no sign of FeH absorption at 8600\AA. The presence and strength of TiO and VO are consistent with an oxygen-rich stellar atmosphere, and the absence of FeH and CaH bands indicates a very low gravity atmosphere; those bands are weak or absent in M giants \citep{Evans2003}. The sharpness of the TiO and VO bands and the \ion{K}{1} doublet at 7700\AA~compared to both the dwarf and giant templates indicates that it is likely to have an even lower surface gravity (consistent with a supergiant classification). The TiO bands at 6150\AA, 6651\AA, and 7053\AA~each absorb to zero flux, indicating no detectable contribution from the B3 component detected by \citet{Munari2005}.
There is no detection of the 8183/8195\AA~\ion{Na}{1} doublet, which is typically in absorption for late-M dwarfs and in emission for late-M giants \citep{Schiavon1997}. It is possible that the lack of emission is due to the lower gravity of V$838$ Mon compared to M giants. The absence of this feature is perhaps not remarkable due to the weak or absent \ion{Na}{1} absorption both in the optical \citep[the $\sim$5893\AA~doublet is very weak in the 2009 spectrum]{Tylenda2011} and the near-infrared (we do not detect the \ion{Na}{1} doublets at $\sim$1.14 and $\sim$2.21$\mu$m; see Section~\ref{sec:irtype}). \subsection{Infrared Spectral Type} \label{sec:irtype} \citet{Evans2003} noted that the 0.8--2.4$\mu$m spectrum of V$838$ Mon appeared to be that of an L supergiant star. There are no other objects with an L supergiant classification, so we select a range of spectra for comparison to the SpeX observations of V$838$ Mon, drawing from both late-M and L dwarf spectra and late-M giant and supergiant spectra. In Figure~\ref{f:ir_comp}, we show the SpeX spectrum of V$838$ Mon compared to publicly available infrared spectra \citep{Cushing2005a,Rayner2009} of the M8V LP 412-31 (similar to its initial optical classification), the M8/9III IRAS 14303-1042 (an example late-M giant), the M7I MY Cep (an example late-M supergiant), and the L3V 2MASS J00361617+1821104 (possibly similar in surface temperature) from the IRTF Spectral Library\footnote[22]{http://irtfweb.ifa.hawaii.edu/\~spex/IRTF\_Spectral\_Library/index.html}. Each of these spectra shares some spectral features with V$838$ Mon. The best match for the overall shape of the V$838$ Mon spectrum is a mid-L dwarf, so we apply L dwarf classification indices to the infrared spectrum of V$838$ Mon. L dwarfs are classified not only by their spectral type/T$_{\rm eff}$, but also by their surface gravity \citep[which shows a dependency on age;][]{Burrows1997}. Based on the strength of its H$_2$O bands, which primarily trace effective temperature, V$838$ Mon has an infrared spectral type of L3. The \citet{Allers2013} gravity indices (based on the strength of \ion{K}{1}, VO, and FeH) indicate a very low gravity. The \ion{K}{1} and FeH are in fact completely absent from the spectrum, consistent with the very low surface gravity L supergiant classification of \citet{Evans2003}. Despite the L3 classification, the detailed spectral features of V$838$ Mon are not a close match to those of L3 dwarfs, as can be seen in Figure~\ref{f:ir_comp}. The TiO bands between 0.8 and 0.9 microns, discussed in \S\ref{sec:opttype}, are not typically present in L dwarf spectra. The very low surface gravity of V$838$ Mon results in the absence of FeH, \ion{K}{1} and \ion{Na}{1} between 1.1 and 1.3$\mu$m. Another dramatic difference between L dwarfs and V$838$ Mon is the presence of AlO A-X absorption bands \citep[first identified in V$838$ Mon by][]{Bernstein2003} at 1.226, 1.242, 1.646, and 1.648$\mu$m. These bands are consistent with oxygen-rich, low temperature environments and have been observed in a handful of AGB stars \citep{Banerjee2012}. The best match for the detailed $J$-band spectrum of V$838$ Mon is that of the M8/9III IRAS 14303-1042; both have TiO, VO, and H$_2$O absorption features, though the specific shapes and strengths of the features are not well matched. The $H$-band spectrum of V$838$ Mon is dominated by H$_2$O, CO, and OH, resulting in detailed features very similar to those of the M7I MY Cep (excepting the AlO absorption bands, present only in V$838$ Mon).
The $K$-band spectra of all three comparison stars are a good overall match to V$838$ Mon, but the CO bands appear different in both shape and strength, possibly due to the complexities of the dust shell surrounding V$838$ Mon. The CO bands in our 2008 spectrum are qualitatively similar to those detected in the \citet{Geballe2007} low-resolution 2006 infrared spectra. \citet{Geballe2007} analyzed the CO bands in detail using a high resolution (R$\sim$18000) spectrum taken within a few months of the low resolution detection; this high-resolution spectrum revealed distinct velocity components within the CO spectrum. The main component of the CO was at photospheric temperatures, but with a radial velocity of $-15$~km~s$^{-1}$ relative to the stellar radial velocity, indicating the gas was still settling onto the surface of the star. Other components of the CO absorption were associated with the more extended dust shell. In our 2008 spectrum, V$838$ Mon's CO bands are stronger than those of all the comparison objects, which may indicate that some of the extended dust shell components still contribute to the absorption, but we do not have the velocity resolution to investigate the CO bands in more detail. V$838$ Mon is relatively free of molecular absorption near 1.3 and 2.2$\mu$m, revealing weak absorption lines. These lines appear qualitatively similar to those of the M7I MY Cep and are likely a feature of supergiant atmospheres. Using the list of infrared lines in Arcturus \citep{Rayner2009}, we tentatively identify a \ion{Mn}{1} $\lambda$ 1.2903, 1.2980$\mu$m doublet; an \ion{Al}{1} $\lambda$1.3127, 1.3153$\mu$m doublet; and a \ion{Si}{1} $\lambda$1.3181$\mu$m absorption line in the 1.25--1.32$\mu$m spectral region. Near $\sim$2.2$\mu$m, we tentatively identify \ion{Si}{1} at 2.2069$\mu$m (not the \ion{Na}{1} doublet seen in the dwarf spectra); a \ion{Ti}{1} $\lambda$2.2217, 2.2239, 2.2280$\mu$m triplet; and a \ion{Ca}{1} $\lambda$2.2614, 2.2631, 2.2633, 2.2657$\mu$m quadruplet. Both regions have additional lines of similar strength that overlap with multiple atomic transitions and so could not be positively identified at this resolution. \subsection{Estimating T$_{\rm eff}$ From Spectra} The optical and infrared spectra of V$838$ Mon, at first glance, seem to indicate different spectral types, and therefore different surface temperatures (T$_{\rm eff}$). Adopting the M7 optical spectral type and extrapolating the supergiant spectral type/T$_{\rm eff}$ relation of \citet{Levesque2005} gives a surface temperature of T$_{\rm eff}\sim3000$~K \citep[similar to the temperature quoted by][]{Tylenda2011}. There is no supergiant temperature scale for L spectral types, but an L3 dwarf spectral type corresponds to T$_{\rm eff}\sim1800$~K \citep{Stephens2009}. One possible way of reconciling those two temperatures is an application of a multi-layer photosphere \citep[e.g.,][]{Lynch2007}, but here we estimate T$_{\rm eff}$ based on the comparison of V$838$ Mon to other stars. As discussed in \S~\ref{sec:opttype}, the optical spectrum of V$838$ Mon matches an M7 dwarf best because the TiO bands are strongest in M7 photospheres. The fraction of titanium contained in TiO molecules (vs.~atomic \ion{Ti}{1}) increases with decreasing temperature, hitting a maximum (at M dwarf atmospheric pressures) at T$_{\rm eff}\sim2000$~K \citep{Lodders2006}. At temperatures cooler than T$_{\rm eff}\lesssim2400$~K, TiO begins to condense onto dust grains (e.g., perovskite) \citep{Burrows1999}, so TiO disappears in M7 and later dwarfs.
The formation of dust grains is more sensitive to atmospheric pressure than the formation of TiO, so at low surface gravity, a low temperature atmosphere will continue to have strong TiO bands at T$_{\rm eff}\lesssim2400$~K. The remaining indicators of cool temperatures in optical spectra (e.g., the transition from late-M to L dwarfs) are pressure sensitive: gravity broadening of the \ion{K}{1} doublet and the strengthening of FeH. Additionally, the infrared spectrum of V$838$ Mon appears cooler than that of the M7I MY Cep due to stronger TiO, VO, and H$_2$O bands. Therefore, the spectrum of V$838$ Mon may be both very cool and of very low gravity: a prototype for L supergiants in the optical. M giants are T$\sim200$--$400$~K warmer than dwarfs of the same optical type, and if this trend continues, the surface temperature of V$838$ Mon is likely to be T$_{\rm eff}\sim2000$--$2200$~K. This range of temperatures is also consistent with the formation of strong TiO bands without significant depletion of TiO onto dust. \section{Infrared SEDs} \label{sec:irseds} In this section, we fit our visible and infrared data with a two-component blackbody curve. We consider the physical interpretation of this fit in the context of the \citet[][henceforth, L04]{Lynch2004} model of V$838$ Mon's post outburst evolution. Our goal is to constrain the evolution of the ejecta with our new epoch of data. The L04 model contains three components. \begin{enumerate} \item{The central star's photospheric emission, which is modeled as an inner blackbody source with a homogeneous outer, cooler absorbing layer. The central star is described by a radius, $R_{p}$, and a blackbody temperature, $T^{p}_{BB}$.} \item{A warm shell, which is modeled with several parameters, including an expanding radius, $R_{s}$, and a vibrational/blackbody emission temperature, $T^{s}_{BB}$.} \item{A cold, outer shell, modeled in a similar fashion to the warm, expanding shell, but with an emission temperature set to zero. Here, the quantity of greatest significance is the absorber column (resulting in an effective emissivity, $\epsilon$, of the system).} \end{enumerate} We also considered the \citet[][henceforth, L07]{Lynch2007} model of V$838$ Mon's ejecta, which contains five components (a central star, two photospheric shells that surround the central star, and two much cooler and more distant expanding shells of gas). This model is essentially the same as the L04 model, except that the two photospheric layers are considered separately from the central star, and are each assumed to be homogeneous and characterized by a single equilibrium temperature and column amounts for the molecular species. While the added components are physically reasonable and lead to refined predictions for $T^{p}_{BB}$, the added complexity beyond this revision did not help us build additional insight from our two-component blackbody fit. Therefore, our remaining discussion will focus primarily on comparisons with the L04 model. Best fits from L04 and L07 can be found in Table~\ref{table:model}. We note that the generally accepted values for the velocity of the expanding cloud in 2003 were within 100-400 km s$^{-1}$ (see L04 and references therein). We assume the velocity of the expanding cloud is $v_{s}=160$ km s$^{-1}$ and the distance to V$838$ Mon is $D_{\rm V838\,Mon}=6$ kpc \citep{Sparks2008} to generate model predictions.
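For reference, the adopted expansion velocity corresponds to
\[
v_{s} = 160~{\rm km~s^{-1}} \times \frac{3.156\times10^{7}~{\rm s~yr^{-1}}}{1.496\times10^{8}~{\rm km~AU^{-1}}} \approx 33.75~{\rm AU~yr^{-1}},
\]
so over the $\approx$6.8 yr elapsed between the 2002 outbursts and January 2009 the warm shell is predicted to expand to $R_{s}\approx230$ AU. As an illustration of the fitting procedure used in this section, the following is a minimal sketch of a two-component blackbody fit (this is not our actual reduction pipeline; the wavelength grid is arbitrary and the synthetic ``data'' are generated from the best-fit values quoted below):

\begin{verbatim}
# Minimal sketch of a two-component blackbody fit (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # Planck, c, Boltzmann (cgs)
AU, KPC = 1.496e13, 3.086e21               # cm
D = 6.0 * KPC                              # adopted distance to V838 Mon

def planck_nu(nu, T):
    # Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def two_bb(nu, Tp, Rp, Ts, Rs):
    # Star (Tp, Rp) plus shell (Ts, Rs), radii in AU; returns Jy
    f = np.pi * planck_nu(nu, Tp) * (Rp * AU / D)**2 \
      + np.pi * planck_nu(nu, Ts) * (Rs * AU / D)**2
    return f / 1.0e-23

# Synthetic SED built from the best-fit parameters quoted in the text
lam_um = np.logspace(np.log10(0.5), np.log10(25.0), 30)
nu = c / (lam_um * 1.0e-4)
flux_jy = two_bb(nu, 2370.0, 4.3, 285.0, 263.0)

popt, _ = curve_fit(two_bb, nu, flux_jy, p0=[2000.0, 4.0, 300.0, 200.0])
print("T_p=%.0f K, R_p=%.1f AU, T_s=%.0f K, R_s=%.0f AU" % tuple(popt))
\end{verbatim}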
For January 2009, the L04 model predicts: $R_{p} \sim 4.2$ AU (no evolution from 2005); $T^{p}_{BB} \sim 2100$ K (no evolution from 2005); $R_{s} \sim 230.5$ AU (an elapsed time of 6 years 10.5 months at an expansion rate of 33.75 AU/yr); and $T^{s}_{BB} \sim 283$ K (the temperature of the cloud in 2003 $\times$ 0.38). We fit our 2009 SED data, consisting of both de-reddened optical and infrared data, with a two-component blackbody curve (Figure \ref{f:bb}). Our simultaneous fit yielded $T_{p}=2370\pm400$ K and $R_{p}=4.3\pm0.2$ AU for the primary star, and $T_{s}=285\pm2$ K, $R_{s}=263\pm10$ AU, and a veiling of $35\%\pm20\%$ for the warm expanding dust envelope. We note that the T$_{\rm eff}$ identified in \S\ref{sec:spectyp} using spectral analysis (T$_{\rm eff}\sim2000$--$2200$ K) falls within the 3$\sigma$ errors of our best fit for the temperature of the primary, $T_p=2370\pm400$ K. Our results are consistent with the predictions of the L04 model. We find only a slightly larger radius for the expanding dust shell ($R_{s} \sim 263\pm10$ AU) than is predicted ($R_{s} \sim 230.5$ AU) using an expansion rate of $v_{s}=160$ km s$^{-1}$. Moreover, the blackbody temperature we determine for this shell ($T^{s}_{BB} \sim 285\pm2$ K) is consistent with that predicted by the L04 model. \section{Discussion \& Conclusions} \label{sec:results} In this section, we discuss and present our conclusions regarding V$838$ Mon's spectral type, expanding outburst ejecta, and ongoing H$\alpha$ emission. We also comment on the lack of new dust formation (relative to 2008 observations) and place this in the context of the evolution of the system as a whole. In \S\ref{sec:spectyp}, we concluded that the IR spectral slope of V$838$ Mon is best matched to an L3 supergiant. While the optical data are best matched by a spectral classification of M7 (as matched to a dwarf template), this classification does not take into account the effects of low gravity on the persistence of various spectral features. Significantly, TiO can remain present in supergiant photospheres at temperatures as low as 2000--2200 K. Given the complete lack of L supergiant data to type against within the optical regime, we conclude that V$838$ Mon could plausibly be categorized as an L supergiant in both the IR and optical regimes. In this regard, we speculate that V$838$ Mon could be a prototype for what an L supergiant optical spectrum should look like. In \S\ref{sec:irseds}, we used our 2009 optical to mid-IR observations as a third epoch of data to test the \citet{Lynch2004} model of the expanding ejecta envelope. Our data indicate the dust shell has continued to cool at a rate similar to that suggested by the \citet{Lynch2004} model. If the shell was ejected at a velocity of $160 \>{\rm km}\,{\rm s}^{-1}$, the 2009 IR data are consistent with a 285$\pm$2 K shell of material which has expanded to a radius of 263$\pm$10 AU and is attenuated by $\sim35\%\pm20\%$ by an outer, cooler shell. In this best fit to the model, the radius of the L3 primary is $R_{p}\sim4.3\pm0.2$ AU with $T_{p}\sim2370\pm400$ K; as noted in \S\ref{sec:irseds}, this temperature is consistent within 3$\sigma$ with the T$_{\rm eff}\sim2000$--$2200$ K estimated from our spectral analysis. These results are consistent with the recent IR interferometric analysis by \citet{Chesneau2014}.
Using the Very Large Telescope Interferometer between October 2011 and February 2012, and adopting a distance to V$838$ Mon of $D=6.1\pm0.6$ kpc, \citet{Chesneau2014} deduce that there is a dust shell distributed from about 130 to 300 AU around V$838$ Mon. They note the dust is distributed in a flattened structure, which could be interpreted as a relic of the genesis event or could be influenced by the embedded B3V companion. They also note that as of 2014, the radius of V$838$ Mon's photosphere has decreased by about 40\% from the radius determined after the outbursting events, to $3.5\pm0.6$ AU. We note that the radius of the primary that we determined ($4.3\pm0.2$ AU) is consistent, within the error bars, with the radius determined by \citet{Chesneau2014}. We consider here whether the dust is newly formed material or persisting matter that has expanded to a larger spatial distribution. \citet{Wisniewski2008} previously found substantial dust production in the 2006$-$2007 epoch, which was reflected in an IR excess in the 10--80$\mu$m wavelength range. We repeat their analysis on our 2008$-$2009 IR photometry. In Figure~\ref{f:photo}, we plot our new results together with the data presented in \citet{Wisniewski2008}. As is evident, our 2008$-$2009 IR photometry shows no appreciable change from the 2006$-$2007 epoch. We conclude there has been no substantive change in the dust production or composition. In \S\ref{sec:halpha}, we showed that the level of H$\alpha$ activity has declined since the line's brief return to significant emission in 2006. While H$\alpha$ emission is not distinguishable in the low resolution spectra obtained during 2008-2009, all high-resolution (R $\sim 31500$) spectra obtained between 2008 and 2012 do show the ongoing presence of a small amount of H$\alpha$ emission. We suggest two possible origins for the observed weak H$\alpha$ emission. The emission could originate from magnetic activity in the primary star, which is an expected phenomenon \citep{Soker2007} if the 2002 outburst was caused by a stellar merger event \citep{Tylenda2006}. \citet{Antonini2010} found highly variable X-ray emission associated with V$838$ Mon: in 2008, strong X-ray emission was detected with XMM-Newton, but in 2009 and 2010 no such emission was found using Chandra. From this, \citet{Antonini2010} suggest that the magnetic activity itself might be highly variable. We do not detect significant H$\alpha$ variability in our high resolution spectra that would support the purported variable magnetic activity; however, we cannot rule out that this is due to the sparse, non-contemporaneous sampling of the optical and X-ray data. Alternatively, the emission may simply be caused by the continued excitation of \ion{H}{1} in the expanding shell of gas and dust by the B3V binary companion, which would indicate that this shell is not completely optically thick. To ultimately disentangle which scenario is generating the H$\alpha$ emission, a simultaneous X-ray and optical monitoring campaign is needed. As \citet{Antonini2010} note, it is implausible to generate high X-ray luminosities from the simple interactions of infalling matter with the matter above the stellar photosphere unless extreme accretion rates and infall velocities are involved. Thus catching V$838$ Mon in the act of producing strong X-ray emission would be a strong indicator that the stellar merger is the ultimate source of any ongoing H$\alpha$ activity.
Even if no X-ray emission is detected, it is certainly worthwhile to continue to monitor the H$\alpha$ activity and search for the reappearance of the B3V companion. Further evolution (or lack thereof) will put strong constraints on both the material surrounding V$838$ Mon and the genesis event that fueled the outbursts. \section{Acknowledgments} \label{sec:acknow} This work is supported at The Aerospace Corporation by the Independent Research and Development program. SRL acknowledges support from the Michigan Society of Fellows. MMK acknowledges generous support from the Hubble Fellowship and Carnegie-Princeton Fellowship. \begin{table*}[!th] \caption{Summary of optical and IR spectroscopic and photometric data.}\label{tab:observation_table} \vspace{-0.15in} \center{\begin{tabular}{lccr} \tableline Date & Observatory & Instrument & Wavelength Coverage/Filters \\ \tableline 2008 Oct 12 & APO & ARCES & $3600$-$10000$ $\AA$\\ 2009 Nov 01 & & &\\ 2009 Nov 27 & & &\\ 2010 Feb 27 & & &\\ 2010 Mar 25 & & &\\ 2012 Sep 21 & & &\\ \tableline 2008 Oct 12 & APO & DIS & blue: $4500$-$5500$ $\AA$ \\ 2008 Nov 23 & & & red: $6000$-$9000$ $\AA$\\ 2009 Jan 15 & & & \\ \tableline 2009 Apr 19 & Gemini South & T-ReCS & N Lo-Res: 7.7-12.97 $\mu$m\\ 2009 Sep 20 & & & Si-5: $11.09$-$12.22$ $\mu$m\\ & & & Si-6: $11.74$-$12.92$ $\mu$m\\ & & & Qa: $17.57$-$19.08$ $\mu$m\\ \tableline 2008 Nov 26 & NASA IRTF & SpeX & $0.8$-$2.5$ $\mu$m\\ 2009 Jan 10 & & &\\ \tableline 2008 Sep 05 & NASA IRTF & BASS & $2.9$-$13.5$ $\mu$m\\ 2009 Dec 02 & & &\\ \tableline \end{tabular}} \vspace{0.1in} \end{table*} \pagebreak \begin{table*}[!th] \caption{Best fit parameters for the two-component model.} \vspace{-0.15in} \begin{center} \begin{tabular}{lcccccr} \tableline &Year&$R_{p}$ (AU)&$T^{p}_{BB}$ (K)&$R_{s}$ (AU)&$T^{s}_{BB}$ (K)&$\epsilon$\\ \tableline L04 & 2003& 5.6 & 2100 & 28 & 750 & 0.67\\ L07 & 2005& 4.2 & 2100 & 129.25 & 375 &\\ Predictions based on L04 model & 2009 & 4.2 (adopting L07 value) & 2100 & 230.5 & 283 & 0.67\\ Best fit to observed data (current work) & 2009 & 4.3$\pm$0.2 & 2370$\pm$400 & 263$\pm$10 & 285$\pm$2& 0.65$\pm$0.2\\ \tableline \end{tabular} \end{center} \vspace{0.1in} \tablecomments{Tabulated values include the radius of the primary star ($R_{p}$), the blackbody temperature of the primary star ($T^{p}_{BB}$), the radius ($R_{s}$) and temperature ($T^{s}_{BB}$) of the expanding dust shell, and a uniform attenuation factor ($\epsilon$). Note that L04 and L07 assume the velocity of the expanding cloud is $v_{s}=160$ km s$^{-1}$ and the distance to V$838$ Mon is $D_{\rm V838\,Mon} \sim 6$ kpc.}\label{table:model} \end{table*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig1_DIS_COLOR}} \caption{We monitored the optical spectrum of V$838$ Mon at low resolution (R $\sim 800$) with the DIS instrument on the $3.5$m Apache Point Observatory telescope from 2008$-$2009. We find no convincing evidence of H$\alpha$ emission at this resolution, consistent with that noted by \citet{Bond2009b}.} \label{f:DIS_halpha} \end{figure*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig2_ECHELLE_COLOR_5spec}} \caption{We also monitored V$838$ Mon using the high resolution (R $\sim 31500$) Echelle spectrograph on the $3.5$m Apache Point Observatory telescope. At this resolution, we see clear evidence of a low level of H$\alpha$ emission from October 2008 $-$ September 2012.
Note, we have added a fiducial offset to each spectrum for ease of comparison.} \label{f:ECHELLE_halpha} \end{figure*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig3_opt_v838}} \caption{Comparison of our low-resolution ($R \sim 800$), extinction-corrected ($E(B-V) = 0.85$) optical spectrum of V$838$ Mon (black) with M dwarf templates (red) from \citet{Bochanski2007} and M giant templates (green) from \citet{Pickles1998}. Prominent molecular and atomic features are labeled. Due to the dramatic TiO bands, the best match spectra are M7. We note that the optical spectrum of V$838$ Mon shows narrower molecular bands and atomic lines than both the dwarfs and giants due to its low surface gravity.} \label{f:optical_comp} \end{figure*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig4_ir_v838_lxd}} \caption{Comparison of our IRTF SpeX data (black lines) to spectra from the IRTF Spectral Library \citep[colored lines;][]{Cushing2005a, Rayner2009}. The top panel shows the entire range of the SpeX data, while the next three panels show the $J$-, $H$- and $K$-band respectively. The spectral types of the comparison spectra are given in the top left corner of the top panel. None of the comparison spectra are a perfect match for the V$838$ Mon spectrum. The L3V is the best match for the overall spectral slope, while the M8/9III is the best match in the $J$-band due to the dramatic TiO bands. The M7I spectrum is the best match in the $H$- and $K$-bands.} \label{f:ir_comp} \end{figure*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig5_BB_bestfit_0.00_sigma_v_legend.eps}} \caption{We fit a two-component blackbody model to V$838$ Mon, including a 2370 K component from the primary star and a 285 K component from the warm expanding envelope. These data provide observational support for the \citet{Lynch2004} model.} \label{f:bb} \end{figure*} \pagebreak \begin{figure*}[!H] \center{\includegraphics[width=1\textwidth]{fig6_PHOTOMETRIC_FLUX_COLOR}} \caption{Our 2008$-$2009 IR photometry shows no appreciable change from the 2006$-$2007 epoch \citep{Wisniewski2008}. We conclude there has been no substantive change in the dust production or composition.} \label{f:photo} \end{figure*} \pagebreak \clearpage \newcommand{\noop}[1]{}
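The two-component fit shown in Figure~\ref{f:bb} and summarized in Table~\ref{table:model} can be evaluated schematically as the sum of two blackbodies, each diluted by its solid angle $(R/D)^2$ and scaled by the uniform attenuation factor $\epsilon$; this functional form is our reading of the \citet{Lynch2004} model, and the sketch below (Python, using the best-fit parameters) is an illustration under that assumption, not the fitting code itself.
\begin{verbatim}
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # SI constants
AU, KPC = 1.496e11, 3.086e19                # metres

def planck_lambda(wl, T):
    """Planck function B_lambda in W m^-3 sr^-1."""
    return (2.0 * H * C**2 / wl**5) / math.expm1(H * C / (wl * KB * T))

def two_component_flux(wl, eps=0.65, Rp=4.3, Tp=2370.0,
                       Rs=263.0, Ts=285.0, D=6.0):
    """F_lambda of primary star + dust shell (assumed model form)."""
    dil_p = math.pi * (Rp * AU / (D * KPC))**2   # stellar dilution
    dil_s = math.pi * (Rs * AU / (D * KPC))**2   # shell dilution
    return eps * (planck_lambda(wl, Tp) * dil_p
                  + planck_lambda(wl, Ts) * dil_s)

for wl_um in (1.25, 2.2, 5.0, 12.0, 20.0):       # near- to mid-IR
    wl = wl_um * 1e-6
    print(wl_um, wl * two_component_flux(wl))    # lambda*F_lambda, W m^-2
\end{verbatim}
In this form the hot component dominates in the near-IR and the 285 K shell dominates in the mid-IR, which is the behaviour probed by the photometry of Figure~\ref{f:photo}.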
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Highest weight categories were first introduced in \cite{CPS88} with the purpose of axiomatizing certain phenomena arising naturally in the representation theory of complex semisimple Lie algebras. This prompted the definition of a quasi-hereditary algebra (see \cite{Scott87}), a notion which the authors showed to capture precisely the finite-dimensional algebras whose module categories are equivalent to highest weight categories. Since their introduction, examples of quasi-hereditary algebras have been found to be abundant in representation theory. Classical examples include hereditary algebras, algebras of global dimension two, Schur algebras and blocks of BGG category $\mathcal{O}$. The main protagonists of the representation theory of quasi-hereditary algebras are the standard modules. These are certain quotients of the indecomposable projective modules which depend on a chosen partial order on the indexing set of the isomorphism classes of simple modules. For blocks of BGG category $\mathcal{O}$, this order is the Bruhat order. Associated to the standard modules is the category $\mathcal{F}(\Delta)$: the full subcategory of the module category consisting of those modules which admit a filtration whose subquotients are standard modules. Given a quasi-hereditary algebra, natural objectives of its representation theory are understanding the standard modules and the associated category $\mathcal{F}(\Delta)$. In the endeavor of understanding $\mathcal{F}(\Delta)$ for a general quasi-hereditary algebra, a breakthrough was made by König, Külshammer and Ovsienko in \cite{kko}. They showed that any quasi-hereditary algebra $A$ is Morita equivalent to an algebra $\Lambda$ which admits a very particular subalgebra $B$, called a regular exact Borel subalgebra. This subalgebra has the surprising property that the image of its module category under the functor $\Lambda\otimes_B \blank$ is equivalent to $\mathcal{F}(\Delta)$. In general, the problem of determining $\Lambda$ and $B$ for an arbitrary quasi-hereditary algebra $A$ may be very hard. One way to do this involves determining an $A_\infty$-structure on $\operatorname{Ext}_A^\ast(\Delta,\Delta)$, the algebra of extensions between standard modules over $A$, which can be arduous. Some examples of where this has been done can be found in \cite{KlamtStroppel, Thuresson22}. However, in the situations studied in this paper, the aforementioned $A_\infty$-structures can be easily computed. Moreover, we are able to use criteria from \cite{CONDE21} to check when our algebras themselves contain regular exact Borel subalgebras. A given associative algebra can have different quasi-hereditary structures, depending on the partial order mentioned above. At the same time, two different orders may yield identical quasi-hereditary structures. This phenomenon motivates the definition of the essential order, an order with the property that two quasi-hereditary structures coincide if and only if they generate the same essential order. The algebras considered in this article are hereditary, and hereditary algebras are precisely those algebras which are quasi-hereditary with respect to any total order on an indexing set of the isomorphism classes of their simple modules. A natural question to ask is then how many different quasi-hereditary structures there are. When the algebra in question admits a duality on its module category which preserves simple modules, the structure is essentially unique, as shown by Coulembier in \cite{Kevin2020}.
This contrasts the algebras studied in this article with the examples from \cite{KlamtStroppel, Thuresson22}, as the algebras appearing there admit a simple-preserving duality. In a recent paper by Flores, Kimura and Rognerud, \cite{FKR}, the authors showed that when $A$ is the path algebra of a uniformly oriented linear quiver, the different quasi-hereditary structures are counted by the Catalan numbers. Moreover, they counted the quasi-hereditary structures of path algebras of more complicated quivers by means of a combinatorial process called ``deconcatenation''. The main goal of the present article is to use the combinatorial techniques of Flores, Kimura and Rognerud to further study the quasi-hereditary structures of some of the algebras considered in \cite{FKR}. In particular, we want to describe the $\operatorname{Ext}$-algebras of their standard modules and their regular exact Borel subalgebras. The following is a brief description of the main results of the present article. For a quasi-hereditary algebra $A$, denote by $\Delta$ the direct sum of standard modules, one from each isomorphism class, and denote by $\operatorname{Ext}_{A}^\ast(\Delta,\Delta)$ the algebra of extensions between standard modules. \begin{enumerate}[(A)] \item Let $A_n$ be the path algebra of the following quiver: $$\xymatrix{1\ar[r] & 2 \ar[r] & \dots \ar[r] & n-1\ar[r] & n}$$ For any partial order $\trianglelefteq$ on $\{1,\dots, n\}$ such that $(A_n,\trianglelefteq)$ is quasi-hereditary, we construct a graded quiver $Q$ and an admissible ideal $I\subset KQ$, such that there is an isomorphism of graded associative algebras $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)\cong KQ/I$. According to results from \cite{FKR}, each quasi-hereditary structure corresponds to a unique binary tree, and we show that this tree encodes all necessary information about extensions between the standard modules. For all details, we refer to Section~3. \item Let $A_n$ be as in (A). For any partial order $\trianglelefteq$ on $\{1,\dots, n\}$ such that $(A_n,\trianglelefteq)$ is quasi-hereditary, we check that $A_n$ admits a regular exact Borel subalgebra. We show that $A_n$ always admits a regular exact Borel subalgebra $B$ which contains the idempotents $e_1,\dots, e_n$ and find a minimal generating set for $B$. For all details, we refer to Section~3.2. \item Let $Q^1\sqcup Q^2$ be a deconcatenation of the quiver $Q$ at a sink or source $v$. Put $A=KQ$ and $A^\ell=KQ^\ell$, for $\ell=1,2$. We describe $\operatorname{Ext}_A^\ast(\Delta,\Delta)$ up to isomorphism in terms of the Ext-algebras of standard modules over $A^\ell$, via a certain ``gluing'' process. \item Given that $A^\ell$ admits a regular exact Borel subalgebra $B^\ell$, for $\ell=1,2$, we show the following: \begin{enumerate}[(i)] \item If $v$ is a source, $A$ admits a regular exact Borel subalgebra. \item If $v$ is a sink, then $A$ admits a regular exact Borel subalgebra if and only if $v$ is minimal or maximal with respect to the essential order on $Q_0$. \end{enumerate} In the cases where $A$ admits a regular exact Borel subalgebra, we construct a regular exact Borel subalgebra of $A$ from $B^1$ and $B^2$, using a similar ``gluing'' as in (C). \end{enumerate} This article is organized in the following way: In Section~2 we give the necessary background on quasi-hereditary algebras and fix some notation. In Section~3, we recall the results of \cite{FKR} on the quasi-hereditary structures of $A_n$ and expand upon them, proving (A).
In Subsection~3.1, we compute the quiver and relations of the Ringel dual of $A_n$. Subsection~3.2 is devoted to the description of the regular exact Borel subalgebras of $A_n$. Subsection~3.3 briefly discusses $A_\infty$-structures on $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$. In Section~4, we give the background on deconcatenations from \cite{FKR} and prove (C). Subsection~4.1 discusses how regular exact Borel subalgebras behave under deconcatenations and contains the proof of (D). In Subsection~4.2, we apply our results to the case where $Q$ is a linear quiver with arbitrary orientation. \section{Notation and background} Throughout, let $K$ be an algebraically closed field. For a quiver $Q=(Q_0, Q_1)$, denote by $KQ$ the path algebra of $Q$. Let $I\subset KQ$ be an admissible ideal and let $A$ be the quotient $A=KQ/I$. We take $A$-module to mean finite-dimensional left $A$-module if nothing else is stated. For an arrow $\alpha\in Q_1$, denote by $s(\alpha)$ and $t(\alpha)$ the starting and terminal vertex of $\alpha$, respectively. We adopt the convention of writing composition of arrows from right to left. That is, for vertices and arrows arranged as $$\xymatrix{x\ar[r]^-{\alpha} & y\ar[r]^-{\beta} &z,}$$ we write the composition ``first $\alpha$, then $\beta$'' as $\beta \alpha$. We extend the notation of starting and terminal vertex to paths in $Q$, so if $p=\alpha_n \dots \alpha_1$, then we put $s(p)=s(\alpha_1)$ and $t(p)=t(\alpha_n)$. The isomorphism classes of the simple $A$-modules are indexed by the vertex set, $Q_0$, of $Q$. Denote by $L(i)$ the simple $A$-module associated to the vertex $i$. Denote by $P(i)$ and $I(i)$ the projective cover and injective envelope of $L(i)$, respectively. For an $A$-module $M$ and a simple module $L(i)$, denote by $[M:L(i)]$ the Jordan-Hölder multiplicity of $L(i)$ in $M$. We denote the category of finite-dimensional left $A$-modules by $A\operatorname{-mod}$. \begin{definition}\cite{CPS88} Let $A$ be a finite-dimensional algebra. Let $\{1,\dots, n\}$ be an indexing set for the isomorphism classes of simple $A$-modules and let $\trianglelefteq$ be a partial order on $\{1,\dots, n\}$. Let the \emph{standard module} at $i$, denoted by $\Delta(i)$, be the largest quotient of $P(i)$ whose composition factors $L(j)$ are such that $j\trianglelefteq i$. The algebra $A$ is said to be \emph{quasi-hereditary} with respect to $ \trianglelefteq$ if the following hold. \begin{enumerate}[(QH1)] \item There is a surjection $P(i)\twoheadrightarrow \Delta(i)$ whose kernel admits a filtration with subquotients $\Delta(j)$, where $j\triangleright i$. \item There is a surjection $\Delta(i) \twoheadrightarrow L(i)$ whose kernel admits a filtration with subquotients $L(j)$, where $j\triangleleft i$. \end{enumerate} Equivalently, let the \emph{costandard module} at $i$, denoted by $\nabla(i)$, be the largest submodule of $I(i)$ whose composition factors $L(j)$ are such that $j\trianglelefteq i$. The algebra $A$ is said to be \emph{quasi-hereditary} with respect to $\trianglelefteq$ if the following hold. \begin{enumerate}[(QH1)$^\prime$] \item There is an injection $\nabla(i)\hookrightarrow I(i)$ whose cokernel admits a filtration with subquotients $\nabla(j)$, where $j\triangleright i$. \item There is an injection $L(i)\hookrightarrow \nabla(i)$ whose cokernel admits a filtration with subquotients $L(j)$, where $j\triangleleft i$.
\end{enumerate} \end{definition} For a quasi-hereditary algebra $A$, it is natural to consider two particular subcategories of its module category, $A\operatorname{-mod}$. These are $\mathcal{F}(\Delta)$, the full subcategory of $A\operatorname{-mod}$ consisting of those modules which admit a filtration by standard modules, and $\mathcal{F}(\nabla)$, the full subcategory of $A\operatorname{-mod}$ consisting of those modules which admit a filtration by costandard modules. For an $A$-module $M\in \mathcal{F}(\Delta)$ (or $\mathcal{F}(\nabla)$), the number of occurrences of a particular standard module $\Delta(i)$ (or costandard module $\nabla(i)$) as a subquotient of a filtration of $M$ is well-defined, and we denote this number by $(M:\Delta(i))$ (or $(M:\nabla(i))$). In general, refining the partial order $\trianglelefteq$ may produce different standard and costandard modules. Traditionally, only so-called adapted orders on $\{1,\dots, n\}$ are considered in order to avoid this. \begin{definition}\cite{DlabRingel} Let $A$ be a finite-dimensional algebra. Let $\{1,\dots, n\}$ be an indexing set for the isomorphism classes of simple $A$-modules and let $\trianglelefteq$ be a partial order on $\{1,\dots, n\}$. We say that $\trianglelefteq$ is \emph{adapted} to $A$ if and only if for any $A$-module $M$ such that $$\operatorname{top}M\cong L(i)\quad \textrm{and}\quad \operatorname{soc}M\cong L(j),$$ where $i$ and $j$ are incomparable, there is $1\leq k\leq n$ such that $i\triangleleft k$, $j\triangleleft k$ and $[M:L(k)]>0$. \end{definition} \begin{lemma}\cite{CondeThesis} \label{lemma: q.h algebra has adapted order} If $(A, \trianglelefteq)$ is a quasi-hereditary algebra, then $\trianglelefteq$ is adapted to $A$. \end{lemma} Two quasi-hereditary structures $(A, \trianglelefteq_1)$ and $(A,\trianglelefteq_2)$ are said to be equivalent if the sets of standard (and costandard) modules with respect to $\trianglelefteq_1$ and $\trianglelefteq_2$ coincide. We denote this relationship by $\trianglelefteq_1 \sim \trianglelefteq_2$. Then, more precisely: $$\trianglelefteq_1\sim\trianglelefteq_2 \iff \Delta_1(i)=\Delta_2(i)\ \textrm{and}\ \nabla_1(i)=\nabla_2(i),\ \forall i\in Q_0.$$ Equivalence of different quasi-hereditary structures is also captured precisely by the \emph{essential order}. \begin{definition}\cite[Definition~1.2.5]{Kevin2020} Let $(A,\trianglelefteq)$ be a quasi-hereditary algebra. Define the \emph{essential order} $\trianglelefteq^e$ of $\trianglelefteq$ as the partial order transitively generated by the relations $$i\trianglelefteq^e j \quad \textrm{whenever}\quad \left[\Delta(j):L(i)\right] >0 \quad \textrm{or}\quad (P(i):\Delta(j))>0.$$ \end{definition} The essential order is related to equivalence of quasi-hereditary structures via $$\trianglelefteq_1\sim \trianglelefteq_2 \iff \trianglelefteq_1^e=\trianglelefteq_2^e.$$ \subsection{Gluing of subalgebras} At various points in the present article, there will appear algebras which, intuitively, arise from gluing two subspaces of some ambient algebra along a shared one-dimensional subspace. Let $X$ and $Y$ be subspaces of some algebra $A$, which are closed under multiplication. Assume that $\dim (X\cap Y)=1$ and choose complements $X^\prime$ and $Y^\prime$ of $X\cap Y$ in $X$ and $Y$, respectively. Assume moreover that $X^\prime \cdot Y^\prime=Y^\prime \cdot X^\prime =0$ and that $1_A\in \operatorname{span}(X, Y)$. Put $$C=X^\prime \oplus Y^\prime \oplus (X\cap Y).$$ Then, $C$ is a subalgebra of $A$. We call $C$ the \emph{gluing} of $X$ and $Y$ and write $C=X\diamond Y$.
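For a minimal illustration (our own example, modelled on the situation in which the construction will be applied later): let $A=KQ$ for the quiver $\xymatrix{1\ar[r]^-{\alpha} & 2 & 3\ar[l]_-{\beta}}$, and put $X=\operatorname{span}(e_1, e_2, \alpha)$ and $Y=\operatorname{span}(e_2, e_3, \beta)$. Both subspaces are closed under multiplication and $X\cap Y=\operatorname{span}(e_2)$ is one-dimensional. Choosing the complements $X^\prime=\operatorname{span}(e_1, \alpha)$ and $Y^\prime=\operatorname{span}(e_3, \beta)$, one checks directly that $X^\prime \cdot Y^\prime=Y^\prime \cdot X^\prime=0$, since no two of the spanning paths are composable, and that $1_A=e_1+e_2+e_3\in \operatorname{span}(X, Y)$. Hence $$X\diamond Y=X^\prime\oplus Y^\prime\oplus (X\cap Y)=A.$$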
Note that when $X$ and $Y$ are graded, there is a natural grading on $X\diamond Y$ induced by the gradings on $X$ and $Y$. \section{The path algebra of $\mathbb{A}_n$} In this section, we consider the path algebra of the uniformly oriented linear quiver $$\mathbb{A}_n:\xymatrix{1\ar[r]^-{\alpha_1} & 2 \ar[r]^-{\alpha_2} & \dots \ar[r]^-{\alpha_{n-2}}&n-1 \ar[r]^-{\alpha_{n-1}}& n}.$$ Throughout the section, we put $A_n=K\mathbb{A}_n$. The simple modules over $A_n$ are indexed by the vertices $1,\dots, n$. Moreover, the algebra $A_n$ is hereditary, and therefore, it is quasi-hereditary with respect to any order $\trianglelefteq$ on $\{1,\dots, n\}$ which is adapted to $A_n$ \cite{DlabRingel}. Recall that the indecomposable $A_n$-modules, up to isomorphism, are given by the interval modules, which are modules having Loewy diagrams of the following form: $$M(i,j):\vcenter{\xymatrixrowsep{0.3cm}\xymatrix{ i \ar[d] \\ i+1 \ar[d] \\ \vdots \ar[d] \\ j-1 \ar[d]\\ j }}$$ With this notation, we have $$L(i)=M(i,i),\quad P(i)=M(i,n)\quad \textrm{and}\quad I(i)=M(1,i),\quad \forall 1\leq i\leq n.$$ Homomorphisms and extensions between interval modules are well understood, and we summarize this information in the following proposition. Note that \cite{OpThom10} uses different conventions than the present article. \begin{proposition}\cite[Theorem~3.6]{OpThom10} \label{proposition:homs and ext between interval modules} Let $M(i_1,j_1)$ and $M(i_2, j_2)$ be interval modules. Then, we have \begin{enumerate}[(i)] \item $$\dim \operatorname{Hom}_{A_n}(M(i_1,j_1), M(i_2, j_2))=\begin{cases} 1 & \textrm{if } i_2\leq i_1 \leq j_2\leq j_1;\\ 0 & \textrm{otherwise}. \end{cases}$$ \item $$\dim\operatorname{Ext}_{A_n}^1(M(i_1, j_1),M(i_2, j_2))=\begin{cases} 1 & \textrm{if }i_1+1\leq i_2\leq j_1+1\leq j_2; \\ 0 & \textrm{otherwise}. \end{cases}$$ \end{enumerate} \end{proposition} \begin{definition} A \emph{binary tree} $T$ is either the empty set or a triple $(s, L, R)$, where $s$ is a singleton set, called the \emph{root} of $T$, and $L$ and $R$ are two binary trees, called the \emph{left} and \emph{right} subtrees of $s$, respectively. The empty set has one \emph{leaf}, and the set of \emph{leaves} of $T=(s, L, R)$ is the (disjoint) union of the sets of leaves of $L$ and $R$. A \emph{binary search tree} is a binary tree whose vertices are labeled by integers in such a way that if a vertex $x$ is labeled by $k$, then the vertices of the left subtree of $x$ are labeled by integers less than $k$, and the vertices of the right subtree of $x$ are labeled by integers greater than $k$. If $T$ is a binary tree with $n$ vertices, there exists a unique labelling of the vertices of $T$ by the integers $1,\dots, n$, turning $T$ into a binary search tree. The procedure by which this labelling is obtained is known as the \emph{in-order algorithm}, which recursively visits the left subtree, then the root, then the right subtree. The first vertex visited is labeled by 1, the second by 2, and so on. \end{definition} \begin{example}Consider the following binary tree with 6 vertices.
$$ \begin{tikzpicture} \node(a) at (0,0) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(b) at (-2,-1) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(c) at (2,-1) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(d) at (-3,-2) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(e) at (-1,-2) {}; \node(f) at (-3.5,-3) {}; \node(g) at (-2.5,-3) {}; \node(h) at (3,-2) {}; \node(i) at (1,-2) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(j) at (1.5,-3) [shape=circle,draw, fill=lightgray, thick] {\phantom{1}}; \node(k) at (0.5,-3) {}; \node(l) at (1.25,-4) {}; \node(m) at (1.75, -4) {}; \draw[thick] (a) to (b); \draw[thick] (a) to (c); \draw[thick] (b) to (d); \draw[thick] (b) to (e); \draw[thick] (d) to (f); \draw[thick] (d) to (g); \draw[thick] (c) to (i); \draw[thick] (c) to (h); \draw[thick] (i) to (j); \draw[thick] (i) to (k); \draw[thick] (j) to (l); \draw[thick] (j) to (m); \end{tikzpicture}$$ With the in-order algorithm, the vertices of the tree are labeled as follows, creating a binary search tree. $$ \begin{tikzpicture} \node(a) at (0,0) [shape=circle,draw, fill=lightgray, thick] {3}; \node(b) at (-2,-1) [shape=circle,draw, fill=lightgray, thick] {2}; \node(c) at (2,-1) [shape=circle,draw, fill=lightgray, thick] {6}; \node(d) at (-3,-2) [shape=circle,draw, fill=lightgray, thick] {1}; \node(e) at (-1,-2) {}; \node(f) at (-3.5,-3) {}; \node(g) at (-2.5,-3) {}; \node(h) at (3,-2) {}; \node(i) at (1,-2) [shape=circle,draw, fill=lightgray, thick] {4}; \node(j) at (1.5,-3) [shape=circle,draw, fill=lightgray, thick] {5}; \node(k) at (0.5,-3) {}; \node(l) at (1.25,-4) {}; \node(m) at (1.75, -4) {}; \draw[thick] (a) to (b); \draw[thick] (a) to (c); \draw[thick] (b) to (d); \draw[thick] (b) to (e); \draw[thick] (d) to (f); \draw[thick] (d) to (g); \draw[thick] (c) to (i); \draw[thick] (c) to (h); \draw[thick] (i) to (j); \draw[thick] (i) to (k); \draw[thick] (j) to (l); \draw[thick] (j) to (m); \end{tikzpicture}$$ \end{example} Any binary search tree $T$ with $n$ vertices induces a partial order on the set of its vertices $\{1,\dots, n\}$. We denote this partial order by $\trianglelefteq_T$. It is defined in the following way. $$i\triangleleft_T j \iff i\textrm{ labels a vertex in the left or right subtree of the vertex labeled by }j.$$ In the above example, the partial order $\trianglelefteq_T$ would be given by: $$1\triangleleft_T 2 \triangleleft_T 3, \quad 5 \triangleleft_T 4 \triangleleft_T 6 \triangleleft_T 3.$$ At this point, it is natural to ask for which binary trees $T$ we obtain a quasi-hereditary algebra $(A_n, \trianglelefteq_T)$. Recall that, since $A_n$ is hereditary, $A_n$ is quasi-hereditary with respect to any partial order which is adapted to $A_n$. It turns out that any partial order $\trianglelefteq$ with respect to which $A_n$ is quasi-hereditary is equivalent (that is, produces the same set of standard and costandard modules) to a partial order produced by a binary tree. Conversely, for any binary tree $T$, the order $\trianglelefteq_T$ makes $A_n$ quasi-hereditary. More precisely, we have the following. \begin{proposition}\cite[Proposition~4.4]{FKR} Let $n$ be a natural number and let $\mathcal{T}$ be the set of binary trees with $n$ vertices. Denote by $\mathcal{A}$ the set of adapted orders to $A_n$. For any partial order $\trianglelefteq$ in $\mathcal{A}$, denote by $\overline{\trianglelefteq}$ the equivalence class of $\trianglelefteq$ with respect to the relation $\sim$.
Then, the map from $\mathcal{T}$ to $\faktor{\mathcal{A}}{\sim}$ defined by $$T\mapsto \overline{\trianglelefteq_T}$$ is a bijection. \end{proposition} Now, given that any quasi-hereditary structure on $A_n$ may be associated to a binary tree, it is natural to describe this structure in terms of the associated binary tree. The following proposition is stated in the proof of \cite[Proposition~4.4]{FKR}. For the convenience of the reader, we give a proof. \begin{proposition}\label{proposition:standard and costandard modules over tree order} Let $T$ be a binary search tree and let $\trianglelefteq_T$ be the associated adapted order to $A_n$. Then, we have the following. \begin{enumerate}[(i)] \item The composition factors of the standard module $\Delta(i)$ are indexed by $i$, together with the labels of the vertices in the right subtree of the vertex labeled by $i$. \item The composition factors of the costandard module $\nabla(i)$ are indexed by $i$, together with the labels of the vertices in the left subtree of the vertex labeled by $i$. \end{enumerate} \end{proposition} \begin{proof} By definition, the standard module $\Delta(i)$ is a quotient of the indecomposable projective module $P(i)$. Since $P(i)=M(i,n)$, this implies that there exists an integer $s_i$, which satisfies $i\leq s_i\leq n$, such that $\Delta(i)=M(i,s_i)$. The composition factors of $\Delta(i)$ are then $L(i), \dots, L(s_i)$, all occurring with Jordan-Hölder multiplicity 1. Denote by $v_i$ the vertex labeled by $i$. The composition factors $L(j)$ of $\Delta(i)$ must satisfy $v_j\trianglelefteq_T v_i$. By definition of $\trianglelefteq_T$, the vertices $v_j$ such that $v_j \triangleleft_T v_i$ are exactly the vertices of the left and right subtree of $v_i$. By construction of $\trianglelefteq_T$, the vertices in the left subtree of $v_i$ are labeled by integers less than $i$, which shows that the corresponding simple modules may not be composition factors of $\Delta(i)$. Since $\Delta(i)$ is the maximal quotient of $P(i)$ such that its composition factors $L(j)$ satisfy $v_j\trianglelefteq_T v_i$, we are done. The argument for the form of the costandard modules is similar. \qedhere \end{proof} For a vertex $v$ of $T$, denote by $\ell(v)$ and $r(v)$ the vertices immediately down and to the left or down and to the right of $v$, respectively. If such vertices do not exist, we write $\ell(v)=\emptyset$ or $r(v)=\emptyset$. \begin{proposition}\label{proposition:extensions and homes between standard modules} Let $T$ be a binary search tree and let $\trianglelefteq_T$ be the associated adapted order to $A_n$. \begin{enumerate}[(i)] \item Suppose that $i$ labels the vertex $\ell(v)$ and that $j$ labels the vertex $v$. Then, we have $$\dim \operatorname{Ext}^1_{A_n}(\Delta(i),\Delta(j))=1.$$ \item Suppose that $i$ labels the vertex $r(v)$ and that $j$ labels the vertex $v$. Then, we have $$\dim \operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))=1.$$ \end{enumerate} \end{proposition} \begin{proof} Put $\Delta(i)=M(i,s_i)$ and $\Delta(j)=M(j, s_j)$. \begin{enumerate}[(i)] \item Note that the first vertex visited by the in-order algorithm, after visiting the right subtree of $\ell(v)$, is $v$. Therefore, by Proposition \ref{proposition:standard and costandard modules over tree order}, we have $j=s_i+1$. The condition of Proposition \ref{proposition:homs and ext between interval modules}, part (ii), is then $$i+1\leq s_i+1\leq s_i+1 \leq s_j,$$ which is clearly satisfied. \item With the in-order algorithm, the vertex $r(v)$ is visited after $v$.
Then, using Proposition \ref{proposition:standard and costandard modules over tree order}, we know that if $\Delta(j)=M(j,s_j)$, then $\Delta(i)=M(j+k,s_j)$, for some $k\leq s_j-j$. The condition of Proposition \ref{proposition:homs and ext between interval modules}, part (i), is then $$j\leq j+k \leq s_j \leq s_j,$$ which is clearly satisfied.\qedhere \end{enumerate} \end{proof} \begin{lemma}\label{lemma:no hom from standard to left subtree and no ext from standard to right subtree} \begin{enumerate}[(i)] \item Let $v_i$ and $v_j$ be vertices labeled by $i$ and $j$, respectively. Assume that $v_j$ is in the left subtree of $v_i$. Then, we have $$\operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))=\operatorname{Hom}_{A_n}(\Delta(j),\Delta(i))=0.$$ \item Let $v_i$ and $v_j$ be vertices labeled by $i$ and $j$, respectively. Assume that $v_j$ is in the right subtree of $v_i$. Then, we have $$\operatorname{Ext}^1_{A_n}(\Delta(i),\Delta(j))=\operatorname{Ext}^1_{A_n}(\Delta(j),\Delta(i))=0.$$ \item Let $v_i$ and $v_j$ be vertices labeled by $i$ and $j$, respectively. Assume that $v_i$ is not in the subtree of $v_j$ and that $v_j$ is not in the subtree of $v_i$. Then, we have $$\operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))=\operatorname{Hom}_{A_n}(\Delta(j),\Delta(i))=\operatorname{Ext}^1_{A_n}(\Delta(i),\Delta(j))=\operatorname{Ext}^1_{A_n}(\Delta(j),\Delta(i))=0.$$ \end{enumerate} \end{lemma} \begin{proof} By the proof of Proposition \ref{proposition:standard and costandard modules over tree order}, we have $\Delta(i)=M(i, s_i)$ and $\Delta(j)=M(j,s_j)$, where the integers $$i+1,i+2, \dots, s_i$$ label the vertices in the right subtree of $v_i$ and the integers $$j+1,j+2,\dots, s_j$$ label the vertices in the right subtree of $v_j$. \begin{enumerate}[(i)] \item According to the in-order algorithm, the integers $j+1,\dots, s_j$ are less than $i$. We immediately have $$\operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))=0,$$ since $v_j\triangleleft_T v_i$ and $A_n$ is quasi-hereditary. Moreover, by Proposition \ref{proposition:homs and ext between interval modules}, part (i), we have $$\operatorname{Hom}_{A_n}(\Delta(j),\Delta(i))=0,$$ since the condition $i\leq j \leq s_i \leq s_j$ is not satisfied, because $j<i$. \item Since $v_j$ is in the right subtree of $v_i$, we have $s_j \leq s_i$. We immediately have $$\operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))=0,$$ since $v_j\triangleleft_T v_i$ and $A_n$ is quasi-hereditary. Moreover, by Proposition \ref{proposition:homs and ext between interval modules}, part (ii), we have $$\operatorname{Ext}_{A_n}^1(\Delta(j),\Delta(i))=0,$$ since the condition $i+1\leq j\leq s_i+1\leq s_j$ is not satisfied, because $s_j\leq s_i$. \item This is immediate, since $A_n$ is quasi-hereditary and the vertices $v_i$ and $v_j$ are incomparable with respect to $\trianglelefteq_T$. \qedhere \end{enumerate} \end{proof} \begin{example} Consider the following binary search tree, $T$, with vertices labeled according to the in-order algorithm.
$$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {4}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-2,-1) {2}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (2,-1) {5}; \node(d) [shape=circle, draw, thick, fill=lightgray] at (-1,-2) {3}; \node(e) [shape=circle, draw, thick, fill=lightgray] at (-3,-2) {1}; \node(f) at (1,-2) {}; \node(g) [shape=circle, draw, thick, fill=lightgray] at (3,-2) {6}; \node(h) at (-3.5,-3) {}; \node(i) at (-2.5,-3) {}; \node(j) at (-1.5,-3) {}; \node(k) at (-0.5,-3) {}; \node(n) at (2.5,-3) {}; \node(o) at (3.5,-3) {}; \draw[thick] (a) to (b); \draw[thick] (b) to (e); \draw[thick] (e) to (h); \draw[thick] (e) to (i); \draw[thick] (b) to (d); \draw[thick] (d) to (j); \draw[thick] (d) to (k); \draw[thick] (a) to (c); \draw[thick] (c) to (f); \draw[thick] (c) to (g); \draw[thick] (g) to (n); \draw[thick] (g) to (o); \end{tikzpicture}$$ The partial order $\trianglelefteq_T$ is given by: $$1\triangleleft_T 2 \triangleleft_T 4,\quad 3\triangleleft_T 2 \triangleleft_T 4,\quad \textrm{and}\quad 6\triangleleft_T 5 \triangleleft_T 4.$$ For the standard and costandard modules, we have \begin{align*}\Delta(1)& \cong L(1),\quad \Delta(2)\cong M(2,3),\quad\Delta(3)\cong L(3),\quad \Delta(4)\cong P(4), \quad \Delta(5)\cong P(5),\quad \Delta(6)\cong L(6), \\ \nabla(1)&\cong L(1),\quad \nabla(2)\cong M(1,2),\quad \nabla(3)\cong L(3),\quad \nabla(4)\cong M(1,4),\quad \nabla(5)\cong L(5),\quad \textrm{and}\quad \nabla(6)\cong L(6),\end{align*} which we see by applying Proposition \ref{proposition:standard and costandard modules over tree order}. Moreover, we have non-split short exact sequences $$\Delta(2)\hookrightarrow I(3)\twoheadrightarrow \Delta(1)\quad \textrm{and}\quad \Delta(4) \hookrightarrow P(2)\twoheadrightarrow \Delta(2),$$ giving us the extensions guaranteed by part (i) of Proposition \ref{proposition:extensions and homes between standard modules}. Next, we see there are natural monomorphisms $\Delta(3)\hookrightarrow \Delta(2)$ and $\Delta(6)\hookrightarrow\Delta(5)\hookrightarrow\Delta(4),$ giving us the homomorphisms guaranteed by part (ii) of Proposition \ref{proposition:extensions and homes between standard modules}. Lastly, we have \begin{align*} \dim \operatorname{Hom}_{A_n}(\Delta(1),\Delta(2))&=\dim \operatorname{Hom}_{A_n}(\Delta(2),\Delta(1))=\dim \operatorname{Hom}_{A_n}(\Delta(1),\Delta(3))=\dim \operatorname{Hom}_{A_n}(\Delta(3),\Delta(1))=0;\\ \dim \operatorname{Hom}_{A_n}(\Delta(2),\Delta(4))&=\dim \operatorname{Hom}_{A_n}(\Delta(4),\Delta(2))=\dim \operatorname{Hom}_{A_n}(\Delta(1),\Delta(4))=\dim \operatorname{Hom}_{A_n}(\Delta(4),\Delta(1))=0;\\ \dim \operatorname{Hom}_{A_n}(\Delta(2),\Delta(5))&=\dim\operatorname{Hom}_{A_n}(\Delta(5),\Delta(2)) =\dim \operatorname{Hom}_{A_n}(\Delta(2),\Delta(6))=\dim \operatorname{Hom}_{A_n}(\Delta(6),\Delta(2))=0;\\ \dim \operatorname{Hom}_{A_n}(\Delta(1),\Delta(5))&=\dim \operatorname{Hom}_{A_n}(\Delta(5),\Delta(1))=\dim \operatorname{Hom}_{A_n}(\Delta(1),\Delta(6))=\dim \operatorname{Hom}_{A_n}(\Delta(6),\Delta(1))=0, \end{align*} as prescribed by Lemma \ref{lemma:no hom from standard to left subtree and no ext from standard to right subtree}. \end{example} \begin{lemma} \label{lemma:multiplication map in right subtree is zero} Let $v, r(v)$ and $\ell(r(v))$ be vertices labeled by $i, j$ and $k$, respectively.
Then, the multiplication map $$\operatorname{Hom}_{A_n}( \Delta(j),\Delta(i)) \times \operatorname{Ext}^1_{A_n}(\Delta(k),\Delta(j))\to \operatorname{Ext}_{A_n}^1(\Delta(k),\Delta(i))$$ is the zero map. \end{lemma} \begin{proof} Note that the vertices $i,j$ and $k$ are configured in the following way: $$\begin{tikzpicture} \node(a)[shape=circle,draw, fill=lightgray, thick] at (0,0) {$i$}; \node(b) [shape=circle,draw, fill=lightgray, thick] at (1,-1) {$j$}; \node(c) [shape=circle,draw, fill=lightgray, thick] at (0.5, -2) {$k$}; \draw[thick] (a) to (b); \draw[thick] (b) to (c); \end{tikzpicture}$$ The multiplication map in question produces an extension in the space $\operatorname{Ext}_{A_n}^1(\Delta(k),\Delta(i))$. But since $\ell(r(v))$ is a vertex in the right subtree of $v$, this space is zero, by part (ii) of Lemma \ref{lemma:no hom from standard to left subtree and no ext from standard to right subtree}. \end{proof} \begin{lemma}\label{lemma:multiplication in left subtree is non-zero} Let $v$, $\ell(v)$ and $r(\ell(v))$ be vertices labeled by $i,j$ and $k$, respectively. Then, the multiplication map $$\operatorname{Ext}_{A_n}^1(\Delta(j),\Delta(i))\times \operatorname{Hom}_{A_n}(\Delta(k),\Delta(j))\to \operatorname{Ext}_{A_n}^1(\Delta(k),\Delta(i))$$ is non-zero. \end{lemma} \begin{proof} We start by observing that the vertices labeled by $i$, $j$ and $k$ are configured in the following way: $$\begin{tikzpicture} \node(a)[shape=circle,draw, fill=lightgray, thick] at (0,0) {$i$}; \node(b) [shape=circle,draw, fill=lightgray, thick] at (-1,-1) {$j$}; \node(c) [shape=circle,draw, fill=lightgray, thick] at (-0.5, -2) {$k$}; \draw[thick] (a) to (b); \draw[thick] (b) to (c); \end{tikzpicture}$$ According to the in-order algorithm, we have $j<k<i$. Note that, since $A_n$ is hereditary, a minimal projective resolution of a standard module $\Delta(x)$ is of the form $$\xymatrix{\ker p_x \ar[r] & P(x) \ar[r]^-{p_x} & \Delta(x)}.$$ Consider the following picture: $$\xymatrix{ &\ker p_k \ar[r] \ar[d]^-f &P(k) \ar[r]^-{p_k} \ar[d]& \Delta(k) \\ &\ker p_j \ar[r] \ar[d]^-g &P(j) \ar[r]^-{p_j} & \Delta(j) \\ \ker p_i \ar[r] &P(i) \ar[r]^-{p_i} & \Delta(i) }$$ An extension from $\Delta(k)$ to $\Delta(i)$ is represented by a chain map, which has one (possibly) non-zero component, namely the map $g\circ f$. Now, assume that $\Delta(k)=M(k,s_k)$ and $\Delta(i)=M(i, s_i)$. Then, we have $\ker p_k=P(s_k+1)$ and $\ker p_i=P(s_i+1)$. Since $k<i$ and $s_k<s_i$, we have $\operatorname{Hom}_{A_n}(P(k),P(i))=\operatorname{Hom}_{A_n}(\ker p_k, \ker p_i)=0.$ Therefore, the chain map, if it is non-zero, cannot be null-homotopic. By construction, $f$ is the unique (up to a scalar) non-zero map from $M(s_k+1, n)$ to $M(s_j+1,n)$. Next, note that in our configuration, the first vertex visited by the in-order algorithm, after visiting the entire right subtree of $\ell(v)$, is $v$. This implies that $s_j+1=i$, so that the map $g$ in the above picture is some scalar multiple of the identity homomorphism on $P(i)$. This shows that the composition $g\circ f$, and hence the chain map representing our extension, is non-zero. \end{proof} For two interval modules $M(i_1, j_1)$ and $M(i_2, j_2)$, the space $\operatorname{Hom}_{A_n}(M(i_1, j_1), M(i_2, j_2))$ is one-dimensional if it is non-zero.
Fix a basis of this space, corresponding to the following homomorphism of representations: $$\xymatrix{ K & \dots \ar[l] & K \ar[l] \ar[d]^1& \dots \ar[l] \ar[d]^1& K \ar[l] \ar[d]^1\\ & & K & \dots \ar[l] & K \ar[l] & \dots \ar[l] & K\ar[l] }$$ Note that this choice of basis also gives an obvious choice of basis for the spaces $\operatorname{Ext}_{A_n}^1(M(i_1, j_1), M(i_2, j_2))$. Suppose now that we have chosen such basis vectors $$y\in \operatorname{Hom}_{A_n}(\Delta(i), \Delta(j));\quad x\in \operatorname{Ext}_{A_n}^1(\Delta(j), \Delta(k));\quad z\in \operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(k));$$ and that the vertices labeled by $k$, $j$ and $i$ play the roles of $v$, $\ell(v)$ and $r(\ell(v))$ in Lemma \ref{lemma:multiplication in left subtree is non-zero}. Then, we have $xy=z$. \begin{theorem}\label{theorem:ext-algebra of A_n} Let $T$ be a binary search tree with $n$ vertices, with the vertices labeled according to the in-order algorithm, and let $\trianglelefteq_T$ be the associated adapted order to $A_n$. Construct a quiver $Q$ in the following way: \begin{enumerate}[(i)] \item The vertices of $Q$ are $1,2,\dots, n$. \item We draw an edge between $i$ and $j$ if and only if the integers $i$ and $j$ label either a set of vertices $\{v, \ell(v)\}$ or $\{v, r(v)\}$. If $i$ labels $\ell(v)$ and $j$ labels $v$, the orientation of the edge is from $\ell(v)$ to $v$, of degree 1, and is denoted by $\varepsilon_i^j$. If $i$ labels $r(v)$ and $j$ labels $v$, the orientation of the edge is from $r(v)$ to $v$, of degree 0, and is denoted by $f_i^j$. \end{enumerate} Let $I\subset KQ$ be the ideal generated by the following elements: \begin{enumerate}[($\bullet$)] \item $f_j^k \varepsilon_i^j$, for all $i,j$ and $k$; \item $\varepsilon_j^k \varepsilon_i^j$, for all $i,j$ and $k$; \end{enumerate} Then, there is an isomorphism of graded algebras $\faktor{KQ}{I}\cong \operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$. \end{theorem} \begin{example} Consider the binary search tree $$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {4}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-2,-1) {2}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (2,-1) {5}; \node(d) [shape=circle, draw, thick, fill=lightgray] at (-1,-2) {3}; \node(e) [shape=circle, draw, thick, fill=lightgray] at (-3,-2) {1}; \node(f) at (1,-2) {}; \node(g) [shape=circle, draw, thick, fill=lightgray] at (3,-2) {6}; \node(h) at (-3.5,-3) {}; \node(i) at (-2.5,-3) {}; \node(j) at (-1.5,-3) {}; \node(k) at (-0.5,-3) {}; \node(n) at (2.5,-3) {}; \node(o) at (3.5,-3) {}; \draw[thick] (a) to (b); \draw[thick] (b) to (e); \draw[thick] (e) to (h); \draw[thick] (e) to (i); \draw[thick] (b) to (d); \draw[thick] (d) to (j); \draw[thick] (d) to (k); \draw[thick] (a) to (c); \draw[thick] (c) to (f); \draw[thick] (c) to (g); \draw[thick] (g) to (n); \draw[thick] (g) to (o); \end{tikzpicture}$$ Then, the corresponding quiver $Q$ is $$\xymatrix{ & & 4\\ & 2\ar[ru]^-{\varepsilon_2^4} & & 5\ar@{-->}[lu]_-{f_5^4}\\ 1 \ar[ru]^-{\varepsilon_1^2} & & 3 \ar@{-->}[lu]_-{f_3^2} & &6\ar@{-->}[lu]_-{f_6^5} }$$ and the ideal $I=\langle \varepsilon_2^4\varepsilon_1^2\rangle$. The full arrows, $\varepsilon_1^2$ and $\varepsilon_2^4$, are of degree 1, while the dashed arrows, $f_3^2, f_5^4$ and $f_6^5$ are of degree 0.
\end{example} \begin{proof}[Proof of Theorem \ref{theorem:ext-algebra of A_n}] Fix basis vectors $\varphi_i^j\in \operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))$ and $e_i^j\in \operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))$ of all non-zero spaces of the form $\operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))$ and $\operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))$, as discussed prior to Theorem \ref{theorem:ext-algebra of A_n}. Define a map $\Phi:KQ\to \operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$ by $$\varepsilon_i^j \mapsto e_i^j,\quad f_i^j\mapsto \varphi_i^j, \quad e_i\mapsto 1_{\Delta(i)},$$ sending a path to the product of the images of its arrows, and extend by linearity; this makes $\Phi$ a homomorphism of algebras. By applying Lemma \ref{lemma:multiplication map in right subtree is zero} and Lemma \ref{lemma:no hom from standard to left subtree and no ext from standard to right subtree}, along with the fact that there are no extensions of degree greater than 1, since $A_n$ is hereditary, we see that $I\subset \ker \Phi$, so that $\Phi$ descends to a well-defined homomorphism of algebras $\faktor{KQ}{I}\to \operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$, which we again denote by $\Phi$. Note that $\Phi$ is compatible with the chosen bases on products of the form $\varepsilon_j^k f_i^j$: by the discussion preceding the theorem, we have \begin{align*} \Phi(\varepsilon_j^k f_i^j)&=\Phi(\varepsilon_j^k)\Phi(f_i^j)=e_j^k\varphi_i^j=e_i^k. \end{align*} As the image of $\Phi$ contains bases of $\operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))$ and $\operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))$ for all $i$ and $j$, $\Phi$ is surjective. Next, we compare dimensions. Consider the degree 0 part of the space $e_j \faktor{KQ}{I}e_i$. The dimension of this space is equal to the number of paths of degree 0 from $i$ to $j$ in $Q$. Clearly, such a path must be a product of the form $f_{k_s}^j f_{k_{s-1}}^{k_s} \dots f_{k_1}^{k_2}f_i^{k_1}$. Since vertices in $Q$ have out-degree at most 1, it follows that such a path is unique. So the dimension of the degree 0 part of $e_j\faktor{KQ}{I}e_i$ is either 0 or 1. By construction of $Q$, it is now clear that this dimension coincides with $\dim \operatorname{Hom}_{A_n}(\Delta(i),\Delta(j))$. Next, consider the degree 1 part of the space $e_j\faktor{KQ}{I}e_i$. Similarly, the dimension of this space is either 0 or 1, depending on the number of paths of degree 1 from $i$ to $j$ in $Q$. Such a path consists of a (possibly empty) chain of arrows of degree 0 followed by a single arrow of degree 1, that is, it is either of the form $\varepsilon_i^j$ or of the form $\varepsilon_k^j f_{k_s}^{k}\dots f_i^{k_1}$. Again, by construction of $Q$, we see that this dimension coincides with $\dim\operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))$. Therefore, $\Phi$ is a surjective linear map between vector spaces of the same dimension, hence an isomorphism. \end{proof} \subsection{Ringel dual} We recall that, for any quasi-hereditary algebra $A$, there exists a module $\mathtt{T}$, called the characteristic tilting module, whose additive closure equals $\mathcal{F}(\Delta)\cap \mathcal{F}(\nabla)$. The characteristic tilting module decomposes as $$\mathtt{T}=\bigoplus_{i=1}^n T(i),$$ where each $T(i)$ is indecomposable. The Ringel dual is then, by definition, $R(A)=\operatorname{End}(\mathtt{T})^{\operatorname{op}}$. In this section, we compute the quiver and relations of the Ringel dual of $(A_n,\trianglelefteq_T)$ using the associated binary search tree. The following statement is part of the proof of Theorem~4.6 in \cite{FKR}. For the convenience of the reader, we give a proof. \begin{proposition}\label{proposition:comp factors of characteristic tilting modules} Let $v$ be a vertex labeled by $i$.
The indecomposable summand $T(i)$ of the characteristic tilting module is the interval module $M(t_i, s_i)$, where $t_i$ is the least integer labelling a vertex in the left subtree of $v$ and $s_i$ is the greatest integer labelling a vertex in the right subtree of $v$ (with the convention that $t_i=i$ if the left subtree is empty, and $s_i=i$ if the right subtree is empty). \end{proposition} \begin{proof} To prove that $T(i)=M(t_i, s_i)$, it suffices to show that $\Delta(i)$ is isomorphic to a submodule of $M(t_i, s_i)$, that the cokernel of this monomorphism admits a filtration by standard modules, and that $M(t_i, s_i)$ is contained in $\mathcal{F}(\Delta)\cap \mathcal{F}(\nabla)$ \cite[Proposition~2]{Ringel1991}. We proceed by induction on the size of the subtree of $v$. The basis of the induction is clear, since if $\ell(v)=r(v)=\emptyset$, we have $\Delta(i)=\nabla(i)=L(i)=T(i)$. Since $t_i\leq i$, it is clear that $\Delta(i)$ embeds into $M(t_i, s_i)$. Clearly, the cokernel of this embedding is the module $M(t_i, i-1)$. Suppose that $j$ labels the vertex $\ell(v)$. The composition factors of $M(t_i, i-1)$ are labeled by the vertices in the subtree of $\ell(v)$, so by the induction hypothesis, $M(t_i, i-1)=T(j)$. In other words, we have a short exact sequence $$0\to \Delta(i) \hookrightarrow M(t_i, s_i) \twoheadrightarrow T(j)\to 0.$$ Since $T(j)\in \mathcal{F}(\Delta)\cap \mathcal{F}(\nabla)$ and $\Delta(i)\in\mathcal{F}(\Delta)$, we have $M(t_i, s_i)\in \mathcal{F}(\Delta)$. By a similar argument, we see that there is a short exact sequence $$0 \to T(k) \hookrightarrow M(t_i, s_i) \twoheadrightarrow \nabla(i)\to 0,$$ where $k$ labels the vertex $r(v)$. Then, since $T(k)\in \mathcal{F}(\Delta)\cap \mathcal{F}(\nabla)$ and $\nabla(i)\in\mathcal{F}(\nabla)$, we get that $M(t_i, s_i)\in \mathcal{F}(\nabla)$. Now, $M(t_i, s_i)$ is a module such that $\Delta(i)$ embeds into it, the cokernel of this embedding is contained in $\mathcal{F}(\Delta)$, and $M(t_i, s_i)\in \mathcal{F}(\Delta)\cap \mathcal{F}(\nabla)$. This confirms that $M(t_i, s_i)=T(i)$. \end{proof} \begin{corollary}\label{corollary:proposition:comp factors of characteristic tilting modules} Let the vertices $v$, $\ell(v)$ and $r(v)$ be labeled by $i, j$ and $k$, respectively. Then, for any $v$, there is an epimorphism $p_i^j:T(i)\twoheadrightarrow T(j)$ and a monomorphism $q_k^i:T(k)\hookrightarrow T(i).$ \end{corollary} \begin{proposition} \label{proposition: homs between summands of tilting module} Let the vertices $\ell(\ell(v)), \ell(v),v, r(v)$ and $r(r(v))$ be labeled by $x, j, i, k$ and $y$, respectively. Let $p_i^j, p_j^x, q_k^i$ and $q_y^k$ be the homomorphisms from Corollary \ref{corollary:proposition:comp factors of characteristic tilting modules}.
Then, we have: \begin{enumerate}[(i)] \item $p_j^x p_i^j\neq 0$ and $q_k^i q_y^k \neq 0$; \item $p_i^j q_k^i=0;$ \item $\operatorname{Hom}_{A_n}(T(j), T(i))=0$; \item $\operatorname{Hom}_{A_n}(T(i), T(k))=0.$ \end{enumerate} \end{proposition} \begin{proof} The vertices are configured in the following way: $$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {$i$}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-2,-1) {$j$}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (2,-1) {$k$}; \node(e) [shape=circle, draw, thick, fill=lightgray] at (-3,-2) {$x$}; \node(f) at (1,-2) {}; \node(g) [shape=circle, draw, thick, fill=lightgray] at (3,-2) {$y$}; \node(h) at (-3.5,-3) {}; \node(i) at (-2.5,-3) {}; \node(j) at (-1.5,-3) {}; \node(k) at (-0.5,-3) {}; \node(n) at (2.5,-3) {}; \node(o) at (3.5,-3) {}; \draw[thick] (a) to (b); \draw[thick] (b) to (e); \draw[thick] (e) to (h); \draw[thick] (e) to (i); \draw[thick] (a) to (c); \draw[thick] (c) to (f); \draw[thick] (c) to (g); \draw[thick] (g) to (n); \draw[thick] (g) to (o); \end{tikzpicture}$$ \begin{enumerate}[(i)] \item This is immediate, as $p_j^xp_i^j$ is the composition of two epimorphisms and $q_k^iq_y^k$ is the composition of two monomorphisms. \item Applying Proposition \ref{proposition:comp factors of characteristic tilting modules}, we see that $T(k)$ and $T(j)$ have no common composition factors, which implies that $\operatorname{Hom}_{A_n}(T(k),T(j))=0$, yielding the statement. \item Put $T(j)=M(t_j, s_j)$ and $T(i)=M(t_i, s_i)$, where $t_j=t_i$ and $s_j< s_i$. Then, using Proposition \ref{proposition:homs and ext between interval modules}, the condition for $\operatorname{Hom}_{A_n}(T(j), T(i))$ being non-zero is $$t_i\leq t_j \leq s_i \leq s_j,$$ which is not satisfied, since $s_j < s_i$. \item Similar to (iii). \qedhere \end{enumerate} \end{proof} With these observations established, we are ready to describe the Ringel dual $R(A_n)$. \begin{theorem}\label{theorem:ringel dual of A_n} Let $T$ be a binary search tree with $n$ vertices, with the vertices labeled according to the in-order algorithm, and let $\trianglelefteq_T$ be the associated adapted order to $A_n$. Construct a quiver $Q$ in the following way: \begin{enumerate}[(i)] \item The vertices of $Q$ are $1,\dots, n$. \item We draw an edge between $i$ and $j$ if and only if the integers $i$ and $j$ label either a set of vertices $\{v,\ell(v)\}$ or $\{v,r(v)\}$. If $i$ labels $v$ and $j$ labels $\ell(v)$, the edge is denoted by $f_i^j$ and its orientation is from $i$ to $j$. If $i$ labels $v$ and $j$ labels $r(v)$, the edge is denoted by $g_j^i$ and its orientation is from $j$ to $i$. \end{enumerate} Let $I\subset KQ$ be the ideal generated by the elements $f_i^j g_k^i$, for all $i,j$ and $k$. Then, there is an isomorphism of algebras $\faktor{KQ}{I}\cong\operatorname{End}(\mathtt{T})$, and consequently, $R(A_n)\cong \faktor{KQ}{I}^{\operatorname{op}}$.
\end{theorem} \begin{example} Consider the binary search tree $$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {4}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-2,-1) {2}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (2,-1) {5}; \node(d) [shape=circle, draw, thick, fill=lightgray] at (-1,-2) {3}; \node(e) [shape=circle, draw, thick, fill=lightgray] at (-3,-2) {1}; \node(f) at (1,-2) {}; \node(g) [shape=circle, draw, thick, fill=lightgray] at (3,-2) {6}; \node(h) at (-3.5,-3) {}; \node(i) at (-2.5,-3) {}; \node(j) at (-1.5,-3) {}; \node(k) at (-0.5,-3) {}; \node(n) at (2.5,-3) {}; \node(o) at (3.5,-3) {}; \draw[thick] (a) to (b); \draw[thick] (b) to (e); \draw[thick] (e) to (h); \draw[thick] (e) to (i); \draw[thick] (b) to (d); \draw[thick] (d) to (j); \draw[thick] (d) to (k); \draw[thick] (a) to (c); \draw[thick] (c) to (f); \draw[thick] (c) to (g); \draw[thick] (g) to (n); \draw[thick] (g) to (o); \end{tikzpicture}$$ Then, the corresponding quiver $Q$ is $$\xymatrix{ & & 4 \ar[ld]_-{f_4^2}\\ & 2 \ar[ld]_-{f_2^1}& & 5 \ar[lu]_-{g_5^4}\\ 1 & & 3 \ar[lu]_-{g_3^2}& &6 \ar[lu]_-{g_6^5} }$$ and the ideal $I\subset KQ$ is generated by the elements $f_4^2 g_5^4$ and $f_2^1 g_3^2$. \end{example} \begin{proof}[Proof of Theorem~\ref{theorem:ringel dual of A_n}] Recall the discussion prior to Theorem \ref{theorem:ext-algebra of A_n}, which implies that for composable homomorphisms between indecomposable summands of $\mathtt{T}$, we may choose basis vectors \begin{align*} x \in \operatorname{Hom}_{A_n}(T(i), T(j));\quad y\in \operatorname{Hom}_{A_n}(T(j), T(k));\quad z\in \operatorname{Hom}_{A_n}(T(i), T(k)) \end{align*} such that $yx=z$. Define a map $\tilde{\Psi}: KQ\to \operatorname{End}(\mathtt{T})$ by $$f_i^j \mapsto p_i^j,\quad g_i^j \mapsto q_i^j,\quad e_i \mapsto 1_{T(i)}$$ and extend it to a homomorphism of algebras by sending a path to the composition of the images of its arrows. Using Proposition \ref{proposition: homs between summands of tilting module}, part (ii), we see that $I\subset \ker \tilde{\Psi}$, so that $\tilde\Psi$ factors uniquely through the quotient $\faktor{KQ}{I}$. This gives us a well-defined homomorphism of algebras $\Psi:\faktor{KQ}{I}\to \operatorname{End}(\mathtt{T})$. Since the homomorphisms $p_i^j, q_k^i$ and $1_{T(i)}$ generate $\operatorname{End}(\mathtt{T})$, $\Psi$ is surjective. It is now easy to see that $\dim \faktor{KQ}{I}=\dim \operatorname{End}(\mathtt{T})$, so that $\Psi$ is a surjective linear map between spaces of the same dimension, hence an isomorphism. \end{proof} \subsection{Regular exact Borel subalgebras of $A_n$} In this section, we investigate when $(A_n, \trianglelefteq_T)$ admits a regular exact Borel subalgebra. Recall the following. \begin{definition}\cite[Definition~3.4]{konig1995exact, bkk}\label{definition:exact borel subalg} Let $(\Lambda,\trianglelefteq)$ be a quasi-hereditary algebra with $n$ simple modules, up to isomorphism.
Then, a subalgebra $B\subset \Lambda$ is called an \emph{exact Borel subalgebra} provided that \begin{enumerate}[(i)] \item $B$ also has $n$ simple modules up to isomorphism and $(B,\trianglelefteq)$ is quasi-hereditary with simple standard modules, \item the functor $\Lambda\otimes_B\blank$ is exact, and \item there are isomorphisms $\Lambda\otimes_B L_B(i)\cong \Delta_\Lambda(i).$ \end{enumerate} If, in addition, the map $\operatorname{Ext}_B^k(L_B(i),L_B(j))\to \operatorname{Ext}_\Lambda^k(\Lambda\otimes_B L_B(i), \Lambda\otimes_B L_B(j))$ induced by the functor $\Lambda\otimes_B\blank$ is an isomorphism for all $k\geq 1$ and $i,j\in\{1,\dots, n\}$, $B\subset \Lambda$ is called a \emph{regular exact Borel subalgebra}. If this map is an isomorphism for all $k\geq 2$ and an epimorphism for $k=1$, $B\subset \Lambda$ is called a \emph{homological} exact Borel subalgebra. \end{definition} \begin{theorem}\label{theorem:when does KA_n have regular exact borel}\cite[Theorem~D, part 5]{CONDE21} Every quasi-hereditary algebra equivalent to $(A_n, \trianglelefteq_T)$ has a regular exact Borel subalgebra if and only if $\operatorname{rad}\Delta(i)$ is contained in $\mathcal{F}(\nabla)$, for all $i\in\{1,\dots, n\}$. \end{theorem} \begin{proposition}\label{proposition:A_n has a regular exact borel} $(A_n, \trianglelefteq_T)$ has a regular exact Borel subalgebra. \end{proposition} \begin{proof} Let the vertex $v$ be labeled by $i$ and consider $\Delta(i)=M(i, s_i)$. Then, $\operatorname{rad}\Delta(i)=M(i+1, s_i)$. If $r(v)=\emptyset$, then $\Delta(i)=L(i)$ and $\operatorname{rad}\Delta(i)=0$, which trivially lies in $\mathcal{F}(\nabla)$; so assume $r(v)\neq\emptyset$ and let the vertex $r(v)$ be labeled by $k$. According to the in-order algorithm, the vertex visited immediately after $v$ is the left-most vertex in the subtree of $r(v)$. This shows, in our earlier notation, that $t_k=i+1$. This fact implies that $$M(i+1, s_i)=M(t_k, s_i)=M(t_k, s_k)=T(k),$$ according to the proof of Proposition \ref{proposition:comp factors of characteristic tilting modules}. Since $T(k)\in \mathcal{F}(\nabla)$, the statement follows from Theorem \ref{theorem:when does KA_n have regular exact borel}. \end{proof} Thanks to the observations made earlier in the current section, it is fairly easy to find the quiver, $Q_B$, of a regular exact Borel subalgebra $B\subset A_n$. Recall that, if $B\subset A_n$ is a regular exact Borel subalgebra, then there are isomorphisms of vector spaces $$\operatorname{Ext}_B^k(L(i),L(j))\cong \operatorname{Ext}_{A_n}^k(\Delta(i),\Delta(j)),$$ for all $k\geq 1$ and $i,j\in\{1,\dots, n\}$. In particular, the spaces above have the same dimension. In our setup, these spaces are zero for $k> 1$. It is well-known that $$\dim \operatorname{Ext}_B^1(L(i),L(j))= \textrm{number of arrows from $i$ to $j$ in $Q_B$}.$$ This means that we can determine the quiver $Q_B$ by computing extensions between standard modules over $A_n$. To this end, we apply Theorem \ref{theorem:ext-algebra of A_n}, resulting in the following. \begin{proposition}\label{proposition:quiver of regular exact borel} Let $B\subset A_n$ be a regular exact Borel subalgebra. Then, there is an isomorphism of algebras $B\cong KQ_B$, where $Q_B$ is the following quiver. \begin{enumerate}[(i)] \item The vertex set of $Q_B$ is $\{1,\dots, n\}$. \item There is an arrow $i\to j$ if and only if $\dim \operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))=1$. \end{enumerate} \end{proposition} Next, we wish to determine how to realize $B$ as a subalgebra of $A_n$. In this endeavour, we are aided by the following results.
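Before stating them, we illustrate Proposition \ref{proposition:quiver of regular exact borel} computationally. The following small sketch (in Python; the helper names are ours and purely illustrative) labels a tree by the in-order algorithm, reads off the intervals $\Delta(i)=M(i, s_i)$, and lists the arrows of $Q_B$; here we use that, as one checks from Proposition \ref{proposition:homs and ext between interval modules}, part (ii), $\operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))\neq 0$ if and only if $j=s_i+1$.
\begin{verbatim}
# Sketch: from a binary tree to the intervals Delta(i) = M(i, s_i)
# and to the arrows of Q_B. Helper names are illustrative only.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right, self.label = left, right, None

def inorder(t, nodes=None):
    """Label the vertices 1, 2, ... by the in-order algorithm."""
    if nodes is None:
        nodes = []
    if t is not None:
        inorder(t.left, nodes)
        t.label = len(nodes) + 1
        nodes.append(t)
        inorder(t.right, nodes)
    return nodes

def s(v):
    """s_i: the greatest label in the subtree rooted at v."""
    while v.right is not None:
        v = v.right
    return v.label

# The running example: root 4 with left child 2 (children 1 and 3)
# and right child 5 (right child 6).
tree = Node(Node(Node(), Node()), Node(None, Node()))
nodes = inorder(tree)
n = len(nodes)
for v in nodes:
    print(f"Delta({v.label}) = M({v.label}, {s(v)})")
print("arrows of Q_B:", [(v.label, s(v) + 1) for v in nodes if s(v) < n])
\end{verbatim}
For the running example this prints, among others, $\Delta(5)=M(5,6)$, together with the arrows $1\to 2$, $2\to 4$ and $3\to 4$ of $Q_B$, in agreement with the quiver $Q$ of the example following Theorem \ref{theorem:ext-algebra of A_n} (the arrow $3\to 4$ corresponds to the degree 1 path $\varepsilon_2^4 f_3^2$).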
\begin{theorem}[Wedderburn-Mal'tsev]\cite{Wedderburn, Mal'tsev}\label{theorem:wedderburn} Let $A$ be a finite-dimensional algebra over an algebraically closed field. Then, there is a semi-simple subalgebra $S$ of $A$ such that $$A=\operatorname{rad}A\oplus S$$ as a vector space. Moreover, if $T$ is another semi-simple subalgebra of $A$ such that $A=\operatorname{rad}A\oplus T$, then there is an inner automorphism $\sigma$ of $A$ such that $\sigma(S)=T$. \end{theorem} We remark that if $S$ is a semi-simple subalgebra as in the theorem and $T$ is another semi-simple subalgebra with $\dim_KS=\dim_KT$, then necessarily $A=\operatorname{rad}A\oplus T$, as any semi-simple subalgebra intersects $\operatorname{rad}A$ trivially. Consequently, $S$ and $T$ are conjugate. Let $A=KQ/I$ be the path algebra of some quiver modulo an admissible ideal and let $S\subset A$ be the subalgebra $S=\langle e_1,\dots, e_n\rangle$. Then, $S$ is semi-simple and $A=\operatorname{rad}A\oplus S$. If $\{d_1,\dots, d_n\}$ is some other complete set of primitive orthogonal idempotents, then $T=\langle d_1,\dots, d_n\rangle$ is another semi-simple subalgebra of $A$ with $\dim_KS=\dim_K T$, so $S$ and $T$ must be conjugate. We conclude that for any two complete sets of primitive orthogonal idempotents in $A$, there is an inner automorphism of $A$ mapping one to the other. \begin{theorem}\cite[Proposition~3.4]{Zhangunique}\label{theorem: inner auto preserves borel} Let $A$ be a finite-dimensional quasi-hereditary algebra, let $B\subset A$ be an exact Borel subalgebra and let $\sigma$ be an inner automorphism of $A$. Then, $C=\sigma(B)$ is again an exact Borel subalgebra. \end{theorem} \begin{theorem}\cite[Theorem~3.6]{Zhangunique}\label{theorem: uniqueness of borel} Let $A$ be a finite-dimensional quasi-hereditary algebra and let $B\subset A$ be an exact Borel subalgebra. Then, $B\subset A$ is unique up to an inner automorphism of $A$, that is, if $B^\prime \subset A$ is another exact Borel subalgebra, then $B$ and $B^\prime$ are conjugate. \end{theorem} Let $\{d_1,\dots, d_n\}$ be a complete set of primitive orthogonal idempotents in $B$. For any complete set of primitive orthogonal idempotents $\{f_1,\dots, f_n\}$ of $A_n$, there is an inner automorphism $\sigma$ of $A_n$ mapping $\{d_1,\dots, d_n\}$ to $\{f_1,\dots, f_n\}$. But by Theorem \ref{theorem: inner auto preserves borel}, $\sigma(B)$ is an exact Borel subalgebra of $A_n$, and by construction it contains the idempotents $\{f_1,\dots, f_n\}$. We conclude that for any complete set of primitive orthogonal idempotents $\{f_1,\dots, f_n\}$ in $A_n$, there is an exact Borel subalgebra $C$ containing them. In particular, there is an exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. \begin{example} Consider $A_2$, the path algebra of $\xymatrix{1\ar[r]^-\alpha & 2}$ and let $B\subset A_2$ be a regular exact Borel subalgebra. Fix the order $1\triangleleft 2$. Then we have $$\Delta(1)\cong L(1)\quad\textrm{and}\quad \Delta(2)\cong L(2).$$ Therefore, the quiver of $B$ coincides with the quiver of $A_2$, and the (unique) regular exact Borel subalgebra is $B=A_2$. If we instead fix the order $2\triangleleft 1$, we have $$\Delta(1)\cong P(1)\quad\textrm{and}\quad \Delta(2)\cong P(2).$$ Since there are no non-split extensions between projective modules, the quiver of $B$ has no arrows. Thus, we have $B\cong K\times K$. The obvious choice for $B$ is then the subalgebra $B=\langle e_1,e_2\rangle$. Suppose $xe_1+ye_2+z\alpha\in A_2$ is an idempotent.
Next, we want to argue as in the discussion after Theorem \ref{theorem: uniqueness of borel} to conclude that there is always a regular exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. That argument relies on \cite[Proposition~3.4]{Zhangunique}, which states that inner automorphisms of quasi-hereditary algebras preserve exact Borel subalgebras. Our next goal is to extend this statement to regular and homological exact Borel subalgebras. For any automorphism $\sigma$ of $A$, there is an endofunctor $\Phi_\sigma$ of $A\operatorname{-mod}$ defined by taking an $A$-module $V$ to the module having the same underlying vector space, but with the action defined by $a\cdot_\sigma v \coloneqq \sigma(a)\cdot v$. For ease of notation, we denote $\Phi_\sigma(V)$ by ${_\sigma}V$, indicating that the action of $A$ is twisted by the automorphism $\sigma$. If $B\subset A$ is a subalgebra, then so is $C\coloneqq \sigma(B)$. It is clear that $\Phi_\sigma$ restricts, in a natural way, to a functor $C\operatorname{-mod}\to B\operatorname{-mod}$. Let $b_1,\dots, b_n$ be some complete set of primitive orthogonal idempotents for $B\subset A$. Then, the set $\{b_1,\dots, b_n\}$ indexes the isomorphism classes of simple $B$-modules, and we denote by $L_B(i)$, where $1\leq i\leq n$, the simple module having as a basis the idempotent $b_i$ and the action defined by letting $b_i$ act as the identity and other basis elements as 0. Since $\sigma:B\to C$ is an isomorphism, $\sigma(b_1),\dots, \sigma(b_n)$ is a complete set of primitive orthogonal idempotents for $C$, and putting $\sigma(b_i)=c_i$, for $1\leq i\leq n$, we denote by $L_C(i)$ the simple $C$-module having as a basis the idempotent $c_i$ and the action defined by letting $c_i$ act as the identity and other basis elements as 0. With this notation, consider the module ${_\sigma} L_C(i)$ and fix the basis $c_i$. On this basis, the idempotent acts as $b_i\cdot_\sigma c_i\coloneqq \sigma(b_i)\cdot c_i=c_i\cdot c_i=c_i$. Thus, ${_\sigma}L_C(i)=L_B(i)$. \begin{lemma}\label{lemma:inner auto preserves regular exact borel} Let $A$ be an algebra and let $B\subset A$ be a subalgebra. Let $\sigma$ be an inner automorphism of $A$, given by $\sigma(x)=u^{-1}xu$ for some invertible element $u$ and $x\in A$. Put $C=\sigma(B)$. \begin{enumerate}[(i)] \item There is an isomorphism of $A$-$A$-bimodules $A\cong {_\sigma}A$ given by left multiplication with $u^{-1}$. \item For any $C$-module $M$, there is an isomorphism of left $A$-modules ${_\sigma}A\otimes_C M\to A\otimes_B {_\sigma}M.$ \item For any $C$-module $M$ there is an isomorphism of left $A$-modules $A\otimes_C M\to A\otimes_B {_\sigma}M$.
\end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item That the map $x\mapsto u^{-1}x$ is a linear bijection on $A$ is clear, so we check that it is a homomorphism of bimodules. Indeed, $$u^{-1}(axb)=u^{-1}a u\, u^{-1} x b=\sigma(a) (u^{-1}x)b=a\cdot_\sigma (u^{-1}x) \cdot b.$$ \item Define a map $\varphi: {_\sigma}A\otimes_C M \to A\otimes_B {_\sigma}M$ on generators by $a\otimes m \mapsto \sigma^{-1}(a)\otimes m$ and extend by linearity. Then, $\varphi$ is well-defined: \begin{align*} \varphi(ac\otimes_C m)&=\sigma^{-1}(ac)\otimes_B m=\sigma^{-1}(a)\sigma^{-1}(c)\otimes_B m=\sigma^{-1}(a)\otimes_B \sigma^{-1}(c)\cdot_\sigma m\\ &=\sigma^{-1}(a)\otimes_B c\cdot m=\varphi(a\otimes_C c\cdot m), \end{align*} for any $c\in C$. Moreover, $\varphi$ is a homomorphism of left $A$-modules: \begin{align*} \varphi(x\cdot_{\sigma} (a\otimes_C m))&=\varphi (\sigma(x)a\otimes_C m)=\sigma^{-1}(\sigma(x)a)\otimes_B m=x\sigma^{-1}(a)\otimes_B m=x\cdot( \sigma^{-1}(a)\otimes_B m)=x\cdot \varphi(a\otimes_C m) \end{align*} for any $x\in A$. Define a map $\psi:A\otimes_B {_\sigma}M \to {_\sigma}A\otimes _C M$ on generators by $a\otimes m \mapsto \sigma(a)\otimes m$ and extend by linearity. Then, $$\psi \varphi(a\otimes_C m)=\psi (\sigma^{-1}(a)\otimes_B m)=a\otimes_Cm\quad \text{and}\quad \varphi \psi(a\otimes_Bm)=\varphi(\sigma(a)\otimes_Cm)=a\otimes_B m,$$ so $\varphi$ and $\psi$ are mutually inverse bijections. We conclude that $\varphi$ is an isomorphism. \item Combine (i) and (ii). \qedhere \end{enumerate} \end{proof} \begin{theorem}\label{theorem:inner automorphisms preserve regular exact borel subalgebras} Let $A$ be a basic quasi-hereditary algebra and let $B\subset A$ be a regular (respectively, homological) exact Borel subalgebra. Let $\sigma$ be an inner automorphism of $A$. Then, $\sigma(B)$ is again a regular (respectively, homological) exact Borel subalgebra of $A$. \end{theorem} \begin{proof} Put $C=\sigma(B)$. Due to Theorem \ref{theorem: inner auto preserves borel}, we know that $C$ is an exact Borel subalgebra. We interpret elements of the spaces $\operatorname{Ext}_B^k(L_B(i),L_B(j))$ and $\operatorname{Ext}_C^k(L_C(i),L_C(j))$ as exact sequences of length $k+2$, modulo equivalence. Since $A \otimes_B \blank$ and $A\otimes_C\blank$ are exact functors, applying them to exact sequences in $\operatorname{Ext}_B^k(L_B(i),L_B(j))$ and $\operatorname{Ext}_C^k(L_C(i),L_C(j))$ produces exact sequences in the spaces $$\operatorname{Ext}_A^k(A\otimes_B L_B(i), A\otimes_B L_B(j))\quad \textrm{and}\quad \operatorname{Ext}_A^k(A\otimes_C L_C(i), A\otimes_C L_C(j)).$$ Functoriality ensures that these maps are well-defined. Denote them by $F_B$ and $F_C$, respectively. We denote by ${_\sigma}\blank$ the map $\operatorname{Ext}_C^k(L_C(i),L_C(j))\to \operatorname{Ext}_B^k(L_B(i),L_B(j))$ defined by taking an exact sequence $$L_C(j) \to M_k \to \dots \to M_1 \to L_C(i) \in \operatorname{Ext}_C^k(L_C(i),L_C(j))$$ to the sequence \begin{align*} _\sigma L_C(j) \to _\sigma M_k \to &\dots \to _\sigma M_1 \to _\sigma L_C(i) \\ &=\\ L_B(j) \to _\sigma M_k \to &\dots \to _\sigma M_1 \to L_B(i). \end{align*} Consider the following diagram.
$$\xymatrix{ \operatorname{Ext}_B^k(L_B(i), L_B(j)) \ar[d]_-{F_B} & & \operatorname{Ext}_C^k(L_C(i), L_C(j)) \ar[d]^-{F_C} \ar[ll]_-{{_\sigma}\blank}\\ \operatorname{Ext}_A^k(A\otimes_B L_B(i),A\otimes_B L_B(j)) \ar[rd]_-{\sim} & & \operatorname{Ext}_A^k(A\otimes_C L_C(i), A\otimes_C L_C(j)) \ar@{-->}[ll] \ar[ld]^-{\sim} \\ &\operatorname{Ext}_A^k(\Delta(i),\Delta(j)) }$$ Let $L_C(j) \to M_k \to \dots \to M_1 \to L_C(i)$ be an exact sequence, interpreted as an element of the space $\operatorname{Ext}_C^k(L_C(i), L_C(j))$. Applying first ${_\sigma}\blank$ and then $F_B$, we obtain $$A\otimes_B L_B(j)\to A\otimes_B{_\sigma} M_k\to \dots \to A\otimes_B{_\sigma}M_1 \to A\otimes_B L_B(i).$$ If we instead apply $F_C$ to our original sequence $L_C(j)\to M_k \to \dots \to M_1 \to L_C(i)$, we get $$A\otimes_C L_C(j) \to A\otimes_C M_k \to \dots \to A\otimes_C M_1 \to A\otimes_C L_C(i).$$ Applying Lemma \ref{lemma:inner auto preserves regular exact borel}, we know there is an isomorphism of left $A$-modules, given by the composite $$A\otimes_C M \to {_\sigma}A\otimes_C M \to A\otimes_B {_\sigma}M.$$ This means that we may choose the dashed arrow in the diagram to be the map \begin{align*} A\otimes_C L_C(j)\to A\otimes_C M_k \to & \dots\to A\otimes_C M_1 \to A\otimes_C L_C(i) \\ &\mapsto \\ A\otimes_B L_B(j) \to A\otimes_B {_\sigma} M_k \to &\dots \to A\otimes_B {_\sigma}M_1 \to A\otimes_B L_B(i), \end{align*} which is an isomorphism and makes the top square commute, by construction. Since ${_\sigma}\blank$, $F_B$ and the dashed arrow are isomorphisms, it follows that $F_C$ is an isomorphism. For the second statement, the same argument as above ensures that the maps $\operatorname{Ext}^k_C(L_C(i),L_C(j))\to \operatorname{Ext}^k_A(A\otimes_C L_C(i),A\otimes_C L_C(j))$ are isomorphisms for $k\geq 2$. For $k=1$, the above diagram implies that we can solve for $F_C$, writing it as a composition of two isomorphisms and an epimorphism; hence $F_C$ is an epimorphism. \end{proof} \begin{corollary}\label{corollary:theorem:isomorphisms preserve regular exact borel subalgebras} Let $A=KQ/I$ be a basic quasi-hereditary algebra containing a regular exact Borel subalgebra. Then, all exact Borel subalgebras of $A$ are regular. In particular, $A$ admits a regular exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. \end{corollary} \begin{proof} By assumption, there exists a regular exact Borel subalgebra $B$ of $A$. Let $C$ be an exact Borel subalgebra of $A$. By Theorem \ref{theorem: uniqueness of borel}, there exists an inner automorphism $\sigma$ of $A$ such that $\sigma(B)=C$, and since inner automorphisms preserve regular exact Borel subalgebras, by Theorem \ref{theorem:inner automorphisms preserve regular exact borel subalgebras}, $C$ is a regular exact Borel subalgebra. In the discussion following Theorem \ref{theorem: uniqueness of borel}, we saw that $A$ admits an exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. By the above argument, it is regular. \end{proof} \begin{proposition}\label{proposition:what paths are in regular exact borel} Let $B\subset A_n$ be a regular exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$ and suppose that $\Delta(i)=M(i, s_i)$. \begin{enumerate}[(i)] \item If $i\leq s_i<n$, then $\alpha_j\dots \alpha_i \notin B$, for any $i\leq j<s_i$, and $\alpha_{s_i}\dots \alpha_i \in B$. \item If $s_i=n$, that is, if $\Delta(i)$ is projective, then $\alpha_j\dots \alpha_i \notin B$, for any $i\leq j<n$.
\end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[(i)] \item Assume towards a contradiction that $\alpha_j \dots \alpha_i \in B$ for some $i\leq j<s_i$. Let $p\in A_n$ be a path and consider the generator $p\otimes e_i$ of $A_n\otimes_B L(i)$. If $s(p)\neq i$, then $$e_{j+1}\cdot (p\otimes e_i)= e_{j+1}\cdot (p\otimes e_{s(p)}e_i)=0,$$ since $e_{s(p)}\in B$. Similarly, if $t(p)\neq j+1$, then $$e_{j+1}\cdot (p\otimes e_i)=e_{j+1}e_{t(p)} p\otimes e_i=0.$$ If $s(p)=i$ and $t(p)=j+1$, then $p=\alpha_j\dots \alpha_i$, which we assume is contained in $B$. We get $$\alpha_j\dots \alpha_i \otimes e_i=e_{j+1}\otimes \alpha_j \dots \alpha_i e_i=0,$$ so that $e_{j+1} \cdot (A_n\otimes_B L(i))=0.$ However, $e_{j+1}\cdot \Delta(i)\neq 0$, which is our desired contradiction. For the second statement, assume towards a contradiction that $\alpha_{s_i}\dots \alpha_i\notin B$. Then, $\alpha_{s_i}\dots \alpha_i \otimes e_i$ is a non-zero element of $A_n \otimes_B L(i)$. To see this, note that $\alpha_j\dots \alpha_i \notin B$ for any $i\leq j<s_i$, by the first part. It follows that also $e_{s_i+1} \cdot (\alpha_{s_i}\dots \alpha_i \otimes e_i)\neq 0$. Now, we see that $e_{s_i+1}\cdot \Delta(i)=0$, while $e_{s_i+1}\cdot (A_n\otimes_B L(i))\neq 0$, leading to a similar contradiction as above. \item Similar to the first part of (i). We remark that this is a separate case only because the second statement of part (i) does not make sense when $\Delta(i)$ is projective. \qedhere \end{enumerate} \end{proof} \begin{proposition}\label{proposition:generating set for regular exact borel} Let $B\subset A_n$ be a regular exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. Let $S_B$ be the set of paths which are contained in $B$ by Proposition \ref{proposition:what paths are in regular exact borel} together with the idempotents $e_1,\dots, e_n$. Then, $S_B$ is a minimal generating set for $B$. \end{proposition} \begin{proof} It is well known that any basic and connected finite-dimensional algebra over an algebraically closed field $K$ is isomorphic to the quotient of the path algebra of some quiver by some admissible ideal. We recall the construction of this isomorphism. For details, we refer to \cite[Theorem~3.7]{ASS}. Let $\Phi:KQ_B\to B$ be the isomorphism mentioned above. Then, $\Phi$ is defined on the trivial paths in $Q_B$ by $\Phi(e_i)=e_i$, for $1\leq i\leq n$. Note that this makes sense, as we may talk about the idempotents $e_i$ as elements of $KQ_B$ and of $B$. Consider the set of arrows $i\to j$ in $Q_B$. As discussed prior to Proposition \ref{proposition:quiver of regular exact borel}, the cardinality of this set equals $\dim \operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))$, which may be 0 or 1, according to Theorem \ref{theorem:ext-algebra of A_n}. Therefore, this set is either empty or contains a single arrow, which we denote by $x_i^j$. The arrow $x_i^j$ should be mapped to an element $y\in \operatorname{rad}B$, such that the residue class $y+\operatorname{rad}^2B$ forms a basis of $e_j \faktor{\operatorname{rad}B}{\operatorname{rad}^2B}e_i$. This is achieved by defining $\Phi(x_i^j)=\alpha_{j-1}\dots \alpha_i$. Since $\Phi$ is an isomorphism, the idempotents $e_i$, together with the paths of the form $\Phi(x_i^j)$, constitute a minimal generating set for $B$. Recall that the arrow $x_i^j\in Q_B$ exists precisely when $\dim \operatorname{Ext}_{A_n}^1(\Delta(i),\Delta(j))=1$. Consider the following picture.
The dashed edge between $b$ and $c$ signifies that $c$ is a vertex in the right subtree of $b$ (not necessarily the vertex $r(b)$), such that there exists a non-zero homomorphism from $\Delta(c)$ to $\Delta(b)$. $$\begin{tikzpicture} \node(a)[shape=circle,draw, fill=lightgray, thick] at (0,0) {$a$}; \node(b) [shape=circle,draw, fill=lightgray, thick] at (-1,-1) {$b$}; \node(c) [shape=circle,draw, fill=lightgray, thick] at (-0.25, -3) {$c$}; \draw[thick] (a) to (b); \draw[thick,dashed] (b) to (c); \end{tikzpicture}$$ We know, using Theorem \ref{theorem:ext-algebra of A_n}, that possible non-zero extensions between standard modules appear as either $\operatorname{Ext}_{A_n}^1(\Delta(b), \Delta(a))$ or as $\operatorname{Ext}_{A_n}^1(\Delta(c),\Delta(a))$, where $a$, $b$ and $c$ are configured as above. Since $b$ labels $\ell(a)$, the vertex $a$ is the first vertex visited immediately after visiting the entire subtree rooted at $b$, that is, the left subtree of $a$. Therefore, $a=s_b+1$. Since $c$ is in the right subtree of $b$, we have $s_b=s_c$, and therefore $a=s_c+1$. Plugging this into the above, we see that the generators of $B$ are of the form $$\Phi(x_i^{s_i+1})=\alpha_{s_i}\dots \alpha_i,$$ whence it follows that $S_B$ is a minimal generating set. \qedhere \end{proof} \begin{corollary}\label{corolllary:proposition:generating set for regular exact borel} Let $B\subset A_n$ be the exact Borel subalgebra containing the idempotents $e_1,\dots, e_n$. Then, the exact Borel subalgebras of $A_n$ are precisely those given as $C=\langle u^{-1}S_Bu\rangle$, where $u\in A_n$ is some invertible element. \end{corollary} \begin{example} We consider the binary search tree $$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {4}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-2,-1) {2}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (2,-1) {5}; \node(d) [shape=circle, draw, thick, fill=lightgray] at (-1,-2) {3}; \node(e) [shape=circle, draw, thick, fill=lightgray] at (-3,-2) {1}; \node(f) at (1,-2) {}; \node(g) [shape=circle, draw, thick, fill=lightgray] at (3,-2) {6}; \node(h) at (-3.5,-3) {}; \node(i) at (-2.5,-3) {}; \node(j) at (-1.5,-3) {}; \node(k) at (-0.5,-3) {}; \node(n) at (2.5,-3) {}; \node(o) at (3.5,-3) {}; \draw[thick] (a) to (b); \draw[thick] (b) to (e); \draw[thick] (e) to (h); \draw[thick] (e) to (i); \draw[thick] (b) to (d); \draw[thick] (d) to (j); \draw[thick] (d) to (k); \draw[thick] (a) to (c); \draw[thick] (c) to (f); \draw[thick] (c) to (g); \draw[thick] (g) to (n); \draw[thick] (g) to (o); \end{tikzpicture}$$ We saw, in the example following Theorem \ref{theorem:ext-algebra of A_n}, that the Ext-algebra of standard modules over $A_n$ is given by the quiver $$\xymatrix{ & & 4\\ & 2\ar[ru]^-{\varepsilon_2^4} & & 5\ar@{-->}[lu]_-{f_5^4}\\ 1 \ar[ru]^-{\varepsilon_1^2} & & 3 \ar@{-->}[lu]_-{f_3^2} & &6\ar@{-->}[lu]_-{f_6^5} }$$ modulo the relations $I=\langle \varepsilon_2^4\varepsilon_1^2\rangle$. Here, there are three non-zero spaces of extensions, namely $$\dim\operatorname{Ext}_{A_n}^1(\Delta(1),\Delta(2))=\dim\operatorname{Ext}_{A_n}^1(\Delta(2),\Delta(4))=\dim\operatorname{Ext}_{A_n}^1(\Delta(3),\Delta(4))=1.$$ Letting $B\subset A_n$ be the regular exact Borel subalgebra containing $e_1,\dots, e_n$, we see that the quiver of $B$ is $$\xymatrix{ 1 \ar[r]^-a & 2 \ar@/^1pc/[rr]^-b & 3 \ar[r]^-c & 4 & 5 & 6. }$$ The arrows $a$, $b$ and $c$ correspond to the paths $\alpha_1, \alpha_3\alpha_2$ and $\alpha_3$, respectively. A generating set $S$ for $B$ is then $$S=\{e_1, e_2, e_3, e_4, e_5, e_6, \alpha_1, \alpha_3, \alpha_3\alpha_2\}.$$ \end{example}
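The data in this example is governed entirely by the combinatorics of the binary search tree, so it can be computed mechanically. The following sketch (an illustration under the conventions above, not part of the formal development) computes the values $s_i$ and lists the arrows $i\to s_i+1$ of $Q_B$; by Propositions \ref{proposition:what paths are in regular exact borel} and \ref{proposition:generating set for regular exact borel}, these arrows exist precisely for the labels $i$ with $s_i<n$, and correspond to the generators $\alpha_{s_i}\dots \alpha_i$.

\begin{verbatim}
# A binary search tree is encoded as nested tuples (left, key, right),
# with None for an empty subtree; the keys are assumed to be 1..n placed
# in binary-search-tree position, so the in-order label of a vertex is
# its key.

def rightmost(t):
    # s_i: the label of the right-most vertex in the subtree rooted at t
    while t[2] is not None:
        t = t[2]
    return t[1]

def s_values(t, s):
    if t is None:
        return
    left, key, right = t
    s[key] = rightmost(t)
    s_values(left, s)
    s_values(right, s)

def borel_quiver(t, n):
    s = {}
    s_values(t, s)
    # one arrow i -> s_i + 1 for each label i with s_i < n; the arrow
    # corresponds to the generator alpha_{s_i} ... alpha_i of B
    return [(i, s[i] + 1) for i in sorted(s) if s[i] < n]

# the binary search tree from the example above:
T = (((None, 1, None), 2, (None, 3, None)), 4, (None, 5, (None, 6, None)))
print(borel_quiver(T, 6))   # prints [(1, 2), (2, 4), (3, 4)]
\end{verbatim}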
\subsection{$A_\infty$-structure on $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$} The notion of an $A_\infty$-algebra is meant to capture the idea of an algebra which is not strictly associative, but associative only up to a system of ``higher homotopies''. There is also the natural ``multi-object'' version, called an $A_\infty$-category, which we define below. In this section we are particularly interested in $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$, which always carries the structure of an $A_\infty$-algebra. However, we will view $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$ as an $A_\infty$-category with $n$ objects, given by the standard modules. This will be useful since it enables us to enforce compatibility of the higher homotopies with the natural idempotents of $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$: the identity homomorphisms $1_{\Delta(i)}$. \begin{definition}\cite{Keller2} Let $K$ be a field. An $A_\infty$-category $\mathcal{A}$ consists of the following: \begin{enumerate}[(i)] \item a class of objects $\operatorname{Ob}(\mathcal{A})$; \item for each pair of objects $A$ and $B$, a $\mathbb{Z}$-graded vector space $\operatorname{Hom}_{\mathcal{A}}(A,B)$; \item for all $n\geq 1$ and objects $A_0,\dots, A_n$, a homogeneous linear map $$m_n: \operatorname{Hom}_{\mathcal{A}}(A_{n-1},A_n) \otimes \operatorname{Hom}_{\mathcal{A}}(A_{n-2},A_{n-1}) \otimes \dots \otimes \operatorname{Hom}_{\mathcal{A}}(A_0, A_1) \to \operatorname{Hom}_{\mathcal{A}}(A_0, A_n)$$ of degree $2-n$ such that $$\sum_{r+s+t=n}(-1)^{r+st}m_{r+1+t}(1^{\otimes r} \otimes m_s \otimes 1^{\otimes t})=0.$$ \end{enumerate} \end{definition} Viewing $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$ as an $A_\infty$-category as outlined above amounts to the following: if we have extensions $\varepsilon_i$ arranged as $$\xymatrix{ \Delta(i_1) \ar[r]^-{\varepsilon_1} & \Delta(i_2) \ar[r]^-{\varepsilon_2} & \dots \ar[r] & \Delta(i_\ell) \ar[r]^-{\varepsilon_{\ell}} & \Delta(i_ {\ell+1}) }$$ then $m_\ell(\varepsilon_\ell,\dots, \varepsilon_1)$ is an extension from $\Delta(i_1)$ to $\Delta(i_{\ell+1})$. Writing out the $A_\infty$-relations for the first few $n$, we get: \begin{enumerate} \item[$n=1$:] $m_1m_1=0$, meaning that each $\operatorname{Hom}$-space is a complex. \item[$n=2$:] $m_1m_2=m_2(m_1\otimes 1 + 1\otimes m_1)$, meaning that $m_1$ is a derivation with respect to $m_2$. \item[$n=3$:] $m_2(1\otimes m_2-m_2\otimes 1)=m_1m_3+m_3(m_1\otimes1\otimes1+1\otimes m_1\otimes 1+ 1\otimes 1\otimes m_1)$, meaning that $m_2$ is associative up to a homotopy given by $m_3$. \end{enumerate} When evaluating at actual elements, even more signs appear according to the Koszul sign rule: $$(f\otimes g)(x\otimes y)=(-1)^{|g||x|}f(x)\otimes g(y).$$ Here, $f$ and $g$ are homogeneous maps and $x$ and $y$ are homogeneous elements. The vertical bars denote degree. A graded category $\mathcal{A}$ such that any $A_\infty$-structure on it satisfies $m_n=0$ for $n\geq 3$ is called \emph{intrinsically formal}.
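To illustrate the sign rule, note that $m_1$ is homogeneous of degree $1$, so that, when $1\otimes m_1\otimes 1$ is evaluated at homogeneous elements, the operator $m_1$ first has to move past the left-most tensor factor:
$$(1\otimes m_1\otimes 1)(\varphi_3\otimes \varphi_2\otimes \varphi_1)=(-1)^{|\varphi_3|}\,\varphi_3\otimes m_1(\varphi_2)\otimes \varphi_1.$$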
\begin{proposition}\label{proposition:a-infty on A_n is trivial} $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$ is intrinsically formal. \end{proposition} \begin{proof} Let $\varphi_1,\dots, \varphi_\ell$ be homogeneous extensions in $\operatorname{Ext}_{A_n}^\ast(\Delta,\Delta)$. Put $i=\sum_{j=1}^\ell \deg \varphi_j$. Since $m_\ell$ is of degree $2-\ell$, the extension $m_\ell(\varphi_\ell,\dots, \varphi_1)$ is of degree $2-\ell+i$. Because $A_n$ is hereditary, the extension $m_\ell(\varphi_\ell,\dots, \varphi_1)$ is of degree 0 or 1 if it is non-zero. This implies that there are the following cases: $$2-\ell+i=\begin{cases} 0, & \textrm{if } i=\ell-2 \\ 1, & \textrm{if } i=\ell-1. \end{cases}$$ Assume that $i=\ell-1$ and that $m_\ell(\varphi_\ell,\dots,\varphi_1)$ produces an extension in $\operatorname{Ext}_{A_n}^1(\Delta(c),\Delta(a))$. We know that if such an extension is non-zero, the vertices $a$ and $c$ must be configured in the following way: here the dashed edge represents a path which is the concatenation of edges between a vertex $v$ and $r(v)$. $$\begin{tikzpicture} \node(a)[shape=circle,draw, fill=lightgray, thick] at (0,0) {$a$}; \node(b) [shape=circle,draw, fill=lightgray, thick] at (-1,-1) {$b$}; \node(c) [shape=circle,draw, fill=lightgray, thick] at (-0.25, -3) {$c$}; \draw[thick] (a) to (b); \draw[thick,dashed] (b) to (c); \end{tikzpicture}$$ Since $i=\ell-1$, there is precisely one argument $\varphi_j$ of $m_\ell(\varphi_\ell,\dots,\varphi_1)$ which is a homomorphism rather than a proper extension. Consider the following picture, where the dashed edge represents the homomorphism $\varphi_j:\Delta(x)\to \Delta(y)$ and the wavy arrows from $c$ to $x$ and from $y$ to $a$ represent the compositions $\varphi_{j-1},\dots, \varphi_1$ and $\varphi_\ell,\dots,\varphi_{j+1}$, respectively. $$\begin{tikzpicture} \node(a)[shape=circle,draw, fill=lightgray, thick] at (0,0) {$a$}; \node(b) [shape=circle,draw, fill=lightgray, thick] at (-1,-1) {$y$}; \node(c) [shape=circle,draw, fill=lightgray, thick] at (-0.5, -2.5) {$x$}; \node(d) [shape=circle,draw, fill=lightgray, thick] at (-1.5, -3.5) {$c$}; \draw[thick, <-,snake=snake, segment amplitude=.4mm, segment length=2mm] (a) to (b); \draw[thick,dashed, <-] (b) to (c); \draw[thick, ->,snake=snake, segment amplitude=.4mm, segment length=2mm] (d) to (c); \end{tikzpicture}$$ Note that either of the wavy edges may represent a path of length $>1$ in the tree, rather than a single edge. Moreover, depending on $j$, the edge between $c$ and $x$ or the edge between $y$ and $a$ may be ``empty''. We see that it is impossible to obtain the allowed configuration from the previous picture, so the extension $m_\ell(\varphi_\ell,\dots,\varphi_1)$ vanishes in this case. If $i=\ell-2$, the outcome of $m_\ell(\varphi_\ell,\dots, \varphi_1)$ is an element of $\operatorname{Hom}_{A_n}(\Delta(c),\Delta(a))$. Since $\ell\geq 3$, there is at least one argument $\varphi_j$ of $m_\ell(\varphi_\ell,\dots,\varphi_1)$ which is a proper extension. According to the allowed configuration above, this means that $c$ is in the left subtree of $a$. Using Theorem \ref{theorem:ext-algebra of A_n}, we get that $\operatorname{Hom}_{A_n}(\Delta(c),\Delta(a))=0$, so $m_\ell(\varphi_\ell,\dots,\varphi_1)=0$ in this case as well. \end{proof} \section{$\operatorname{Ext}$-algebras of standard modules over deconcatenations} In the previous section, we studied the path algebra of $\mathbb{A}_n$ with linear orientation. To extend this study to arbitrary orientations, we apply the method of ``deconcatenation'' of a quiver $Q$ at a sink or a source. In \cite{FKR}, the authors use this method to study the different quasi-hereditary structures of the path algebra $KQ$ in terms of the quasi-hereditary structures on path algebras of certain subquivers of $Q$. In the present section, we extend this study to the $\operatorname{Ext}$-algebra of standard modules. \begin{definition}\cite[Definition~3.1]{FKR} Let $Q$ be a finite connected quiver and let $v$ be a vertex of $Q$ which is a sink or a source.
A \emph{deconcatenation} of $Q$ at the vertex $v$ is a union $Q^1\sqcup \dots \sqcup Q^\ell$ of full subquivers $Q^i$ of $Q$ such that the following hold. \begin{enumerate}[(i)] \item Each $Q^i$ is a connected full subquiver of $Q$ with $v\in Q^i_0$. \item There holds $Q_0=\left(Q_0^1\backslash\{v\}\right)\sqcup \dots \sqcup \left(Q_0^\ell\backslash\{v\}\right) \sqcup \{v\}$ and $Q_0^i\cap Q_0^j=\{v\}$, for all $i,j\in \{1,\dots, \ell\}$ such that $i\neq j$. \item For $x\in Q_0^i\backslash\{v\}$ and $y\in Q_0^j\backslash\{v\}$, where $i,j\in\{1,\dots, \ell\}$ are such that $i\neq j$, there are no arrows between $x$ and $y$ in $Q$. \end{enumerate} \end{definition} \begin{example} Consider the quiver $Q=\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l]& 3\ar[r] \ar[l] & 4\ar[r] & 5}$. We see that $Q$ has a deconcatenation at the source $3$: $$(\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l] & 3\ar[l]}) \sqcup (\xymatrixcolsep{0.5cm}\xymatrix{3 \ar[r] &4 \ar[r] & 5}).$$ \end{example} We follow the notation of \cite{FKR} and put \begin{align*} A^\ell = \faktor{A}{\langle e_u \ |\ u\in Q_0 \backslash Q_0^\ell \rangle }. \end{align*} For a deconcatenation $Q^1\sqcup Q^2$ in two parts, we moreover write $\overline{1}=2$ and $\overline{2}=1$. Then, $A$ surjects onto $A^\ell$, and consequently, there is a fully faithful and exact functor $F^\ell: A^\ell\operatorname{-mod}\to A\operatorname{-mod}$. Regarding $A^\ell\operatorname{-mod}$ as a full subcategory of $A\operatorname{-mod}$ via the functor $F^\ell$, we remark that an $A$-module $M$ is an $A^\ell$-module if and only if $e_uM=0$ for any $u\in Q_0\backslash Q_0^{\ell}$. \begin{lemma}\label{lemma:morphisms lift between from part of deconcatenation} Let $M$ be an $A^\ell$-module, let $N$ be an $A$-module and consider the quotient $\overline{N}=\faktor{N}{\sum e_u\cdot N}$, where the sum ranges over all $u\in Q_0^{\overline{\ell}}\backslash\{v\}$. Then, $\overline{N}$ is an $A^\ell$-module and there are isomorphisms of vector spaces $$\operatorname{Hom}_A(M, \overline{N})\cong\operatorname{Hom}_{A^\ell}(M, \overline{N})\cong \operatorname{Hom}_A(M, N).$$ \end{lemma} \begin{proof}The first isomorphism is immediate from the fact that $F^\ell$ is fully faithful. To see the second isomorphism, put $e^\ell=\sum e_u$, where the sum is as in the statement of the lemma. Then, there is an isomorphism of $A$-modules $$\overline{N}\cong\operatorname{Hom}_A(A/\langle e^\ell \rangle, N).$$ Next, note that by tensor-$\operatorname{Hom}$ adjunction, there is an adjoint pair of functors $(F^\ell, \operatorname{Hom}_A(A/\langle e^\ell\rangle, \blank))$. Then, $$\operatorname{Hom}_A(M, N)\cong \operatorname{Hom}_A(F^\ell(M), N)\cong \operatorname{Hom}_{A^\ell}(M, \operatorname{Hom}_A(A/\langle e^\ell \rangle, N))\cong \operatorname{Hom}_{A^\ell}(M, \overline{N}).$$ \end{proof} \begin{lemma}\cite[Lemma~3.3]{FKR}\label{lemma:elem. properties of deconcatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at a sink or source $v$ and let $\ell=1,2$. \begin{enumerate}[(i)] \item For all $i\in Q_0^\ell$, we have $L(i)\cong L^\ell(i)$. \item For all $i\in Q_0^\ell \backslash \{v\}$, we have $P(i)\cong P^\ell(i)$ and $I(i)\cong I^\ell(i)$. \item If $v$ is a sink, we have $L(v)\cong P(v) \cong P^\ell(v)$, for $\ell=1,2$. \item If $v$ is a source, we have $L(v) \cong I(v)\cong I^\ell(v)$, for $\ell=1,2$. \item For any non-zero $A$-module $M$, if both $\operatorname{top}M$ and $\operatorname{soc}M$ are simple, then we have either $M\in A^1\operatorname{-mod}$ or $M\in A^2 \operatorname{-mod}$. \item Let $M$ be an $A^\ell$-module and let $i\in Q_0$ be a vertex. If $[M :L(i)]\neq 0$, then $i\in Q_0^\ell$. \end{enumerate} \end{lemma}
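To illustrate items (v) and (vi), consider again the deconcatenation of $\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l]& 3\ar[r] \ar[l] & 4\ar[r] & 5}$ at the source $3$ from the example above. The $A$-module
$$M:\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{3\ar[d]\\2\ar[d]\\1}}$$
has simple top and simple socle, and indeed lies in $A^1\operatorname{-mod}$, in accordance with item (v). On the other hand, the projective module $P(3)$ has composition factors in both $Q_0^1$ and $Q_0^2$, so by item (vi) it lies in neither $A^1\operatorname{-mod}$ nor $A^2\operatorname{-mod}$; note that its socle $L(1)\oplus L(5)$ is not simple, so item (v) does not apply to it.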
Given a partial order $\trianglelefteq$ on $Q_0$ and a deconcatenation $Q^1\sqcup Q^2$ of $Q$ at a sink or source $v$, we may restrict $\trianglelefteq$ to obtain a partial order $\trianglelefteq^\ell=\trianglelefteq \mid_{Q_0^\ell}$ on $Q_0^\ell$. \begin{lemma}\cite[Lemma~3.4]{FKR}\label{lemma:qh-structure on parts from qh-structure of whole} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at a sink or source $v$ and let $\trianglelefteq$ be a partial order on $Q_0$. Denote by $\Delta$ and $\nabla$ the sets of standard and costandard $A$-modules associated to $\trianglelefteq$, respectively. Denote by $\Delta^\ell$ and $\nabla^\ell$ the sets of standard and costandard $A^\ell$-modules, respectively. \begin{enumerate}[(i)] \item For any $i\in Q_0^\ell\backslash\{v\}$, there are isomorphisms of $A$-modules $\Delta(i)\cong \Delta^\ell(i)$ and $\nabla(i)\cong \nabla^\ell(i)$, for $\ell=1,2$. \item If $v$ is a sink, there are isomorphisms of $A$-modules $L(v)\cong \Delta(v)\cong \Delta^\ell(v)$. \item If $v$ is a source, there are isomorphisms of $A$-modules $L(v)\cong \nabla(v) \cong \nabla^\ell(v)$. \item If $A$ is quasi-hereditary with respect to $\trianglelefteq$, then $A^\ell$ is quasi-hereditary with respect to $\trianglelefteq^\ell$, for $\ell=1,2$. \end{enumerate} \end{lemma} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at a source or sink $v$ and let $\trianglelefteq^\ell$ be a partial order on $Q_0^\ell$ for $\ell=1,2$. Recall that $\overline{1}=2$ and $\overline{2}=1$. Define a partial order $\trianglelefteq=\trianglelefteq(\trianglelefteq^1,\trianglelefteq^2)$ on $Q_0$ as follows. For $i,j\in Q_0$, we say that $i\triangleleft j$ if one of the following holds. \begin{enumerate}[(i)] \item We have $i,j\in Q_0^\ell$ and $i\triangleleft^\ell j$, for some $\ell$. \item We have $i\in Q_0^\ell$, $j\in Q_0^{\overline{\ell}}$, $i\triangleleft^\ell v$ and $v\triangleleft^{\overline{\ell}} j$. \end{enumerate} \begin{lemma}\cite[Lemma~3.5]{FKR}\label{lemma:qh-structure on whole from qh-structure on deconcatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at a sink or source $v$. Let $\trianglelefteq^\ell$ be a partial order on $Q_0^\ell$ and denote by $\Delta^\ell$ and $\nabla^\ell$ the sets of standard and costandard $A^\ell$-modules, associated to $\trianglelefteq^\ell$, for $\ell=1,2$, respectively. Denote by $\Delta$ and $\nabla$ the sets of standard and costandard $A$-modules, respectively, associated to $\trianglelefteq=\trianglelefteq(\trianglelefteq^1,\trianglelefteq^2)$. Then, we have the following. \begin{enumerate}[(i)] \item For any $i\in Q_0^\ell \backslash\{v\}$, there are isomorphisms of $A$-modules $\Delta(i)\cong \Delta^\ell(i)$ and $\nabla(i)\cong \nabla^\ell(i)$. \item If $v$ is a sink, there are isomorphisms of $A$-modules $L(v)\cong \Delta(v)\cong \Delta^\ell(v)$. \item If $v$ is a source, there are isomorphisms of $A$-modules $L(v)\cong \nabla(v)\cong \nabla^\ell(v)$. \item If $\trianglelefteq^\ell$ defines a quasi-hereditary structure on $A^\ell$ for $\ell=1$ and $\ell=2$, then $\trianglelefteq=\trianglelefteq(\trianglelefteq^1,\trianglelefteq^2)$ defines a quasi-hereditary structure on $A$. \end{enumerate} \end{lemma}
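To illustrate the construction of $\trianglelefteq(\trianglelefteq^1,\trianglelefteq^2)$, consider once more the deconcatenation of $\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l]& 3\ar[r] \ar[l] & 4\ar[r] & 5}$ at the source $v=3$, and equip $Q_0^1$ and $Q_0^2$ with the total orders $1\triangleleft^1 2\triangleleft^1 3$ and $3\triangleleft^2 4\triangleleft^2 5$. Condition (i) recovers these relations, while condition (ii) applies to every pair $i\in\{1,2\}$ and $j\in\{4,5\}$, since $i\triangleleft^1 v$ and $v\triangleleft^2 j$. Hence $\trianglelefteq(\trianglelefteq^1,\trianglelefteq^2)$ is the natural total order $1\triangleleft 2\triangleleft 3\triangleleft 4\triangleleft 5$ on $Q_0$.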
We have the following observation about homomorphism spaces between indecomposable projective modules. \begin{lemma}\label{lemma:no homs between projectives in different parts of deconcatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at the sink or source $v$. Then, for $i\in Q_0^1\backslash\{v\}$ and $j\in Q_0^2\backslash\{v\}$, there holds $\operatorname{Hom}_A(P(i),P(j))=\operatorname{Hom}_A(P(j),P(i))=0$. \end{lemma} \begin{proof} We have $\operatorname{Hom}_A(P(i),P(j))\cong e_iAe_j$, which has a basis given by the paths from $j$ to $i$. Any such path would have to pass through $v$, the only common vertex of $Q^1$ and $Q^2$; this is impossible, since a sink has no outgoing arrows and a source has no incoming arrows. The same argument applies to $\operatorname{Hom}_A(P(j),P(i))$. \end{proof} \begin{proposition}\label{proposition:no ext between different parts of concatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at the sink or source $v$. Then, for $i\in Q^1_0\backslash\{v\}$ and $j\in Q^2_0\backslash\{v\}$, there holds $\operatorname{Ext}_A^k(\Delta(i),\Delta(j))=\operatorname{Ext}_A^k(\Delta(j),\Delta(i))=0$, for all $k\geq 0$. \end{proposition} \begin{proof} Let $P^\bullet\to \Delta(i)$ be a minimal projective resolution. We claim that each term $P^k$ of the resolution $P^\bullet$ decomposes into a direct sum of indecomposable projective modules as $$P^k=\bigoplus_{\ell=1}^n P(\ell)^{m_{\ell,k}},$$ where, if $m_{\ell,k}>0$, then $\ell \in Q_0^1\backslash\{v\}$. We proceed by induction. The base case is clear, as $P^0=P(i)$. Consider the following picture. $$\xymatrixcolsep{0.7cm}\xymatrixrowsep{0.5cm}\xymatrix{ \dots \ar[r]^-{d_{k+2}} & P^{k+1} \ar[rr]^{d_{k+1}} \ar@{->>}[rd]& & P^k \ar[r]^-{d_k}& \dots \\ & & \ker d_k \ar@{^{(}->}[ur] }$$ Assume that the module $P^k$ is of the form indicated above. By construction, the module $P^{k+1}$ is a projective cover of the module $\ker d_k$. Therefore, no direct summand of $P^{k+1}$ is contained in the kernel of the epimorphism $P^{k+1}\twoheadrightarrow \ker d_k$. Consequently, no direct summand of $P^{k+1}$ is contained in the kernel of the map $d_{k+1}:P^{k+1}\to P^k$. Fixing some decomposition of the modules $P^{k+1}$ and $P^k$ into direct sums of indecomposable projective modules, the map $d_{k+1}$ is given by a matrix, whose entries are homomorphisms $f\in \operatorname{Hom}_A(P(a),P(b))$, where $a$ and $b$ are such that $P(a)$ is a direct summand of $P^{k+1}$ and $P(b)$ is a direct summand of $P^k$. Since $\operatorname{Hom}_A(P(a),P(b))\cong e_aA e_b$, and $b\in Q_0^1\backslash \{v\}$ by the inductive assumption, if $f\neq 0$, we must have $a\in Q_0^1\backslash\{v\}$. This proves the claim. Next, apply the functor $\operatorname{Hom}_A(\blank, \Delta(j))$ to the projective resolution $P^\bullet$. The resulting complex looks as follows: $$\xymatrixcolsep{0.3cm}\xymatrix{0\ar[r] & \operatorname{Hom}_A(P(i),\Delta(j)) \ar[r] & \operatorname{Hom}_A(P^1,\Delta(j)) \ar[r] & \operatorname{Hom}_A(P^2,\Delta(j))\ar[r] & \dots} $$ By our previous claim, we have $$P^k=\bigoplus_{\ell=1}^n P(\ell)^{m_{\ell,k}},$$ where we may now assume that $\ell\in Q_0^1\backslash \{v\}$. By additivity of the Hom-functor, we have $$\operatorname{Hom}_A(P^k, \Delta(j))\cong\bigoplus_{\ell=1}^n \operatorname{Hom}_A(P(\ell),\Delta(j))^{m_{\ell,k}}.$$ Consider the following picture. $$\xymatrixcolsep{0.5cm}\xymatrixrowsep{0.5cm}\xymatrix{P(\ell) \ar@{-->}[d]_-{\exists h} \ar[rd]^-f\\ P(j) \ar@{->>}[r]_-{p_j} & \Delta(j)}$$ By projectivity, any homomorphism $f$ factors as $f=p_j\circ h$. But, according to Lemma \ref{lemma:no homs between projectives in different parts of deconcatenation}, we have $$\operatorname{Hom}_A(P(\ell),P(j))=0,$$ since $\ell\in Q_0^1\backslash\{v\}$ and $j\in Q_0^2\backslash\{v\}$. This proves that $\operatorname{Ext}_A^k(\Delta(i),\Delta(j))=0$. It is clear that we may argue in the same way to get the conclusion $\operatorname{Ext}_A^k(\Delta(j),\Delta(i))=0$.
\qedhere \end{proof} \begin{lemma}\label{lemma:projective resolution of module in A^l does not contain P(v)} Assume that $v$ is a source. Let $i\in Q_0^\ell \backslash\{v\}$ and let $P^\bullet \to \Delta^\ell(i)$ be a minimal projective resolution of $\Delta^\ell(i)$ as an $A^\ell$-module. Then no term $P^k$ of the projective resolution $P^\bullet$ contains the projective module $P^\ell(v)$ as a direct summand. \end{lemma} \begin{proof} We proceed by induction. The base case is clear, as $P^0=P^\ell(i)$. Consider the following picture. $$\xymatrixcolsep{0.7cm}\xymatrixrowsep{0.5cm}\xymatrix{ \dots \ar[r]^-{d_{k+2}} & P^{k+1} \ar[rr]^{d_{k+1}} \ar@{->>}[rd]& & P^k \ar[r]^-{d_k}& \dots \\ & & \ker d_k \ar@{^{(}->}[ur] }$$ Assume that the statement holds for the module $P^k$ and assume towards a contradiction that the module $P^{k+1}$ contains a direct summand $P^\ell(v)$. By construction, $P^{k+1}$ is a projective cover of $\ker d_k$, thus we conclude that the module $\operatorname{top} \ker d_k$ contains a direct summand $L^\ell(v)$. This fact, in turn, implies that there is a direct summand $P^\ell(t)$ of $P^k$, such that $\left[ P^\ell(t):L^\ell(v)\right]>0$. Since a basis of $P^\ell(t)$ is given by paths in $A^\ell$ starting in $t$, this implies that there is a path in $A^\ell$ from $t$ to $v$. This is our desired contradiction, as $v$ was assumed to be a source. \end{proof} \begin{proposition}\label{proposition:ext between standards in deconcatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation at a sink or source $v$. \begin{enumerate}[(i)] \item If $i,j\in Q_0^\ell \backslash\{v\}$, there holds $\operatorname{Ext}_{A^\ell}^k(\Delta^\ell(i),\Delta^\ell(j))\cong \operatorname{Ext}_A^k(\Delta(i),\Delta(j)),$ for all $k\geq 0$. \item If $v$ is a sink, then $\operatorname{Ext}^k_{A^\ell}(\Delta^\ell(v),\Delta^\ell(j))\cong \operatorname{Ext}_A^k(\Delta(v),\Delta(j))$, for all $k\geq 0$. \item If $v$ is a sink, then $\operatorname{Ext}^k_{A^\ell}(\Delta^\ell(i),\Delta^\ell(v))\cong \operatorname{Ext}_A^k(\Delta(i),\Delta(v))$, for all $k\geq 0$. \end{enumerate} \end{proposition} \begin{proof}Let $P^\bullet \to \Delta^\ell(i)$ and $Q^\bullet \to \Delta^\ell(j)$ be minimal projective resolutions. By Lemma \ref{lemma:elem. properties of deconcatenation}, there is an isomorphism of $A$-modules $P^\ell(x)\cong P(x)$, for all $x\in Q_0^\ell\backslash\{v\}$. Note that this isomorphism is the one induced by the functor $F^\ell$, that is, $F^\ell(P^\ell(x))\cong P(x)$. Similarly, we have $F^\ell(\Delta^\ell(i))\cong \Delta(i)$ and $F^\ell(\Delta^\ell(j))\cong \Delta(j)$. \begin{enumerate}[(i)] \item The functor $F^\ell:A^\ell\operatorname{-mod}\to A\operatorname{-mod}$ is fully faithful and exact. If $v$ is a source, then by Lemma \ref{lemma:projective resolution of module in A^l does not contain P(v)}, the terms of the projective resolution $P^\bullet \to \Delta^\ell(i)$ do not contain any direct summands isomorphic to $P^\ell(v)$; if $v$ is a sink, then $P^\ell(v)\cong P(v)$ is projective also as an $A$-module. In either case, $F^\ell$ takes the terms of $P^\bullet$ to projective $A$-modules, and we conclude that $F^\ell(P^\bullet)\to F^\ell(\Delta^\ell(i))\cong \Delta(i)$ is a minimal projective resolution. Similarly, we have that $F^\ell(Q^\bullet) \to F^\ell(\Delta^\ell(j))\cong \Delta(j)$ is a minimal projective resolution. The statement follows. \item If $v$ is a sink, we have $P^\ell(v)\cong L(v)\cong P(v)$ as $A$-modules, by Lemma \ref{lemma:elem. properties of deconcatenation}, and $\Delta^\ell(v)\cong L(v)\cong \Delta(v)$ as $A$-modules, by Lemma \ref{lemma:qh-structure on parts from qh-structure of whole}.
Thus, the statement follows by applying the functor $F^\ell$ to the minimal projective resolutions of $\Delta^\ell(v)$ and $\Delta^\ell(j)$. \item Similar to (ii). \qedhere \end{enumerate} \end{proof} We recall the notation of \cite{FKR}, where $\overline{1}=2$ and $\overline{2}=1$. \begin{lemma}\label{lemma:homs into projective at source} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at the source $v$ and let $i\in Q_0^\ell \backslash \{v\}$. Then, there are isomorphisms of vector spaces \begin{align*} \operatorname{Hom}_A(P(i),P^\ell(v))\cong \operatorname{Hom}_A(P(i),P(v))\quad \textrm{and}\quad \operatorname{Hom}_A(P^\ell(v),P(i))\cong \operatorname{Hom}_A(P(v),P(i)), \textrm{ for }\ell=1,2. \end{align*} \end{lemma} \begin{proof} The first statement is precisely Lemma~\ref{lemma:morphisms lift between from part of deconcatenation} applied to the modules $P(i)$ and $P^\ell(v)$. Note that in the notation of Lemma~\ref{lemma:morphisms lift between from part of deconcatenation}, we have $P^\ell(v)=\overline{P(v)}$. For the second statement, let $p:P(v)\to P^\ell(v)$ denote the natural epimorphism. Then, there is a short exact sequence $$\xymatrix{0 \ar[r] & \ker p \ar[r] & P(v) \ar[r]^-p & P^\ell(v) \ar[r] & 0}.$$ Note that $P^\ell(v)$ is an $A^\ell$-module while $\ker p$ is an $A^{\overline{\ell}}$-module. Applying $\operatorname{Hom}_A(\blank, P(i))$ to this sequence, we obtain: $$\xymatrix{ 0\ar[r] & \operatorname{Hom}_A(P^\ell(v), P(i)) \ar[r] & \operatorname{Hom}_A(P(v),P(i)) \ar[r] & \operatorname{Hom}_A(\ker p, P(i)) }$$ The space $\operatorname{Hom}_A(\ker p, P(i))$ is zero because of our previous observation, so we have $$\operatorname{Hom}_A(P^\ell(v), P(i))\cong \operatorname{Hom}_A(P(v), P(i))$$ since the Hom-functor is left exact. \qedhere \end{proof} \begin{lemma}\label{lemma:projective resolution of standard at source} Let $v$ be a source, let $P^\bullet \to \Delta^1(v)$ be a minimal projective resolution with terms $P^k$, $k\geq 0$ and let $Q^\bullet \to \Delta^{2}(v)$ be a minimal projective resolution with terms $Q^k$, $k\geq 0$. Then, there is a minimal projective resolution $R^\bullet \to \Delta(v)$, with terms $R^k=P^k\oplus Q^k$, for $k\geq 1$, and $R^0=P(v)$. \end{lemma} \begin{proof} We proceed by induction. For the case $k=1$, consider the projection $p:P(v)\to \Delta(v)$. The module $\ker p$ has a basis consisting of paths $q$ in $A$, such that $s(q)=v$ and $t(q)=j$ with $j\ntriangleleft v$. Let $x_1,\dots, x_n$ be those paths that satisfy $t(x_i)\in Q_0^1$ and let $y_1, \dots, y_m$ be those paths that satisfy $t(y_j)\in Q_0^2$. Put $X=\operatorname{span}\{x_1,\dots, x_n\}$ and $Y=\operatorname{span}\{y_1,\dots, y_m\}$. Clearly, $\ker p=X\oplus Y$ as vector spaces. Since the action of $A$ on $X$ and $Y$ is given by left multiplication, we see that, in fact, $\ker p=X\oplus Y$ as $A$-modules. It is clear that, as an $A^1$-module, $X$ is isomorphic to the kernel of the projection $p^1: P^1(v)\twoheadrightarrow \Delta^1(v)$, so $P^1$ is a projective cover of $X$. Similarly, as an $A^2$-module, $Y$ is isomorphic to the kernel of the projection $p^2: P^2(v)\twoheadrightarrow \Delta^2(v)$, so $Q^1$ is a projective cover of $Y$. When $k=2$, we consider the picture: $$\xymatrix{ R^2 \ar[r] \ar[d]&P^1\oplus Q^1 \ar[rd] \ar[rr]^-{(d^1, e^1)} & & P(v) \\ \ker d^1\oplus \ker e^1 \ar[ru]& & X\oplus Y \ar[ru] }$$ It is clear that we may view the map from $P^1\oplus Q^1$ to $P(v)$ as $(d^1, e^1)$.
It then follows that its kernel is the direct sum $\ker d^1\oplus \ker e^1$, which has a projective cover $R^2=P^2\oplus Q^2$. This finishes the base case of the induction. Consider the following picture: \begin{align*} \xymatrix@C=2cm{ R^{k+1} \ar[d]\ar[r]^-{f^{k+1}} &P^k\oplus Q^k \ar[r]^-{\left(\begin{smallmatrix} d^k &0 \\ 0 & e^k \end{smallmatrix}\right)} & P^{k-1}\oplus Q^{k-1} \\ \ker d^k\oplus \ker e^k \ar[ru] } \end{align*} That the right matrix above is diagonal follows from Lemma \ref{lemma:no homs between projectives in different parts of deconcatenation} and the induction hypothesis. Then, the kernel of the map $\left( \begin{smallmatrix} d^k & 0 \\ 0 & e^k \end{smallmatrix}\right)$ is isomorphic to $\ker d^k \oplus \ker e^k$, and a projective cover is therefore $P^{k+1}\oplus Q^{k+1}$. This shows that $R^{k+1}=P^{k+1}\oplus Q^{k+1}$ and $f^{k+1}=\left(\begin{smallmatrix} d^{k+1} & 0 \\ 0 & e^{k+1} \end{smallmatrix}\right)$.\qedhere \end{proof} \begin{proposition}\label{proposition:ext to and from source in deconcatenation} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at the source $v$ and let $i\in Q_0^\ell \backslash \{v\}$. Then, there are isomorphisms of vector spaces \begin{align*} \operatorname{Ext}^k_{A^\ell}(\Delta^\ell(i),\Delta^\ell(v))&\cong \operatorname{Ext}^k_{A}(\Delta(i),\Delta(v))\quad \textrm{and}\quad \operatorname{Ext}^k_{A^\ell}(\Delta^\ell(v),\Delta^\ell(i))\cong \operatorname{Ext}^k_{A}(\Delta(v),\Delta(i)), \end{align*} for all $k\geq 0$. \end{proposition} \begin{proof} Let $P^\bullet\to \Delta^\ell(i)$ and $Q^\bullet \to \Delta^\ell(v)$ be minimal projective resolutions. Let $\varepsilon^\ell: P^\bullet \to Q^\bullet[k]$ be a chain map. \begin{align*} \xymatrix{ \dots \ar[r]& P^{k+2}\ar[r]^{d^{k+2}} \ar[d]^-{\varepsilon_2^\ell} & P^{k+1} \ar[d]^-{\varepsilon_1^\ell}\ar[r]^{d^{k+1}}& P^k \ar[d]^-{\varepsilon_0^\ell} \ar[r] & \dots\\ \dots \ar[r]& Q^{2}\ar[r]_-{\delta^2} & Q^{1}\ar[r]_-{\delta^1} & P^\ell(v) } \end{align*} According to Lemma \ref{lemma:projective resolution of standard at source}, there is a minimal projective resolution $T^\bullet\to \Delta(v)$ with terms $$T^k=\begin{cases} Q^k \oplus R^k, & \textrm{if }k\geq 1\\ P(v), & \textrm{if } k=0, \end{cases}$$ where the modules $R^k$ constitute a minimal projective resolution of $\Delta^{\overline{\ell}}(v)$. Define a map $\varepsilon: P^\bullet \to T^\bullet[k]$ with components given by $$\varepsilon_n=\begin{cases} \left(\begin{smallmatrix} \varepsilon_n^\ell \\ 0 \end{smallmatrix}\right), & \textrm{if } n\geq 1 \\ \Phi(\varepsilon_0^\ell), & \textrm{if }n=0. \end{cases}$$ Here, $\Phi: \operatorname{Hom}_A(P^k, P^\ell(v))\to \operatorname{Hom}_A(P^k, P(v))$ is the isomorphism obtained in the proof of Lemma \ref{lemma:homs into projective at source}, so that $\varepsilon_0$ is uniquely defined by the equation $p\varepsilon_0=\varepsilon_0^\ell$, where $p:P(v)\to P^\ell(v)$ is the natural projection. \begin{align*} \xymatrix{ \dots \ar[r]& P^{k+2}\ar[r]^{d^{k+2}} \ar[d]^-{\varepsilon_2} & P^{k+1} \ar[d]^-{\varepsilon_1}\ar[r]^{d^{k+1}}& P^k \ar[d]^-{\varepsilon_0} \ar[r] & \dots\\ \dots \ar[r]& Q^{2}\oplus R^2\ar[r]_-{\left(\begin{smallmatrix} \delta^2 & 0 \\ 0 & \partial^2 \end{smallmatrix}\right)} & Q^{1} \oplus R^1\ar[r] & P(v) } \end{align*} We claim that $\varepsilon$ is a chain map. Consider the following diagram, where $n\geq 1$.
$$\xymatrixcolsep{2cm}\xymatrix{ P^{k+n+1}\ar[d]_-{\varepsilon_{n+1}} \ar[r]^-{d^{k+n+1}} & P^{k+n} \ar[d]^-{\varepsilon_n} \\ Q^{n+1} \oplus R^{n+1} \ar[r]_-{\left(\begin{smallmatrix} \delta^{n+1} & 0 \\ 0 & \partial^{n+1} \end{smallmatrix}\right)}& Q^n\oplus R^n }$$ We have \begin{align*} \left(\begin{smallmatrix} \delta^{n+1} & 0 \\ 0 & \partial^{n+1} \end{smallmatrix}\right) \varepsilon_{n+1}&=\left(\begin{smallmatrix} \delta^{n+1} & 0 \\ 0 & \partial^{n+1} \end{smallmatrix}\right)\left(\begin{smallmatrix} \varepsilon_{n+1}^\ell \\ 0 \end{smallmatrix}\right)=\left(\begin{smallmatrix} \delta^{n+1}\varepsilon_{n+1}^\ell \\0 \end{smallmatrix}\right)=\left(\begin{smallmatrix} \varepsilon_n^\ell d^{k+n+1} \\ 0 \end{smallmatrix}\right)=\left(\begin{smallmatrix} \varepsilon_n^\ell\\0 \end{smallmatrix}\right) d^{k+n+1}=\varepsilon_n d^{k+n+1} \end{align*} since $\delta^{n+1}\varepsilon_{n+1}^\ell=\varepsilon_n^\ell d^{k+n+1}$ by virtue of $\varepsilon^\ell$ being a chain map. It remains to check the following square. $$\xymatrix{ P^{k+1} \ar[d]_-{\varepsilon_1}\ar[r]^{d^{k+1}} & P^k \ar[d]^-{\varepsilon_0} \\ Q^{1} \oplus R^1\ar[r] & P(v) }$$ In the proof of Lemma \ref{lemma:projective resolution of standard at source}, we saw that the kernel of the projection $P(v)\twoheadrightarrow \Delta(v)$ is isomorphic to $X\oplus Y$, where $X$ is isomorphic to the kernel of the projection $P^1(v)\twoheadrightarrow \Delta^1(v)$ and $Y$ is isomorphic to the kernel of the projection $P^2(v)\twoheadrightarrow \Delta^2(v)$. Consider the following pictures: $$\xymatrix{ Q^1 \ar[rr]^-{\delta^1} \ar[rd]& & P^1(v) \ar[r] & \Delta^1(v)\\ & X \ar[ru] },\quad \xymatrix{ R^1 \ar[rr]^-{\partial^1}\ar[rd]& & P^2(v) \ar[r] & \Delta^2(v)\\ & Y \ar[ru] }$$ By construction, $\delta^1$ and $\partial^1$ are radical maps. Since $\operatorname{rad}P^1(v)$ and $\operatorname{rad}P^2(v)$ are isomorphic to submodules of $P(v)$, we may view $\delta^1$ and $\partial^1$ as maps $\delta^1: Q^1\to P(v)$ and $\partial^1: R^1\to P(v)$, respectively. Then the bottom map in the above square can be written as $(\delta^1,\partial^1):Q^1 \oplus R^1\to P(v)$. Consider the following picture: $$\xymatrix{ P^{k+1} \ar[d]_-{\varepsilon_1} \ar[r]^-{d^{k+1}} & P^k \ar[r]^-{\varepsilon_0^\ell} \ar[d]^-{\varepsilon_0} & P^\ell(v)\\ Q^1\oplus R^1 \ar[r]_-{(\delta^1,\partial^1)} & P(v) \ar@{->>}[ru]_-{p} }$$ By construction, the triangle on the right commutes. Since $\varepsilon^\ell$ is a chain map, we have $$(\delta^1,\partial^1)\varepsilon_1=(\delta^1, \partial^1)\left(\begin{smallmatrix} \varepsilon_1^\ell \\ 0 \end{smallmatrix}\right)=\delta^1 \varepsilon_1^\ell=\varepsilon_0^\ell d^{k+1}.$$ Moreover, the image of $\delta^1\varepsilon_1^\ell=\varepsilon_0^\ell d^{k+1}$ is contained in the submodule $\operatorname{rad}(A^\ell)\cdot P(v)$, on which $p$ acts as the identity. This implies that the perimeter of the diagram commutes, and since $p$ is an isomorphism on $\operatorname{rad}(A^\ell)\cdot P(v)$, the square commutes. Let $e^\ell:P^\bullet \to Q^\bullet[k]$ be another chain map. We claim that if $\varepsilon^\ell$ and $e^\ell$ are homotopic, so are $\varepsilon$ and $e$, where $e: P^\bullet \to T^\bullet[k]$ is the chain map with components $$e_n=\begin{cases} \left(\begin{smallmatrix} e_n^\ell \\ 0 \end{smallmatrix}\right), & \textrm{if } n\geq 1 \\ \Phi(e_0^\ell), & \textrm{if }n=0.
\end{cases}$$ $$\xymatrixcolsep{2cm}\xymatrix{ P^{k+n+1}\ar[r]^-{d^{k+n+1}} \ar[d] & P^{k+n} \ar[ld]_-{h_{n+1}}\ar[d]^-{\varepsilon_n^\ell}_-{e_n^\ell} \ar[r]^-{d^{k+n}} &P^{k+n-1} \ar[d]\ar[ld]_-{h_n}\\ Q^{n+1} \ar[r]_-{\delta^{n+1}} & Q^n \ar[r]_-{\delta^n}& Q^{n-1} }$$ Assume that $h_n d^{k+n}+\delta^{n+1}\ h_{n+1} =\varepsilon_n^\ell-e_n^\ell$, for $n\geq 1$. Consider the picture: $$\xymatrixcolsep{2cm}\xymatrixrowsep{1.5cm}\xymatrix{ P^{k+n+1}\ar[r]^-{d^{k+n+1}} \ar[d] & P^{k+n}\ar[d]^-{\left(\begin{smallmatrix} \varepsilon_n^\ell\\ 0 \end{smallmatrix}\right)}_-{\left(\begin{smallmatrix} e_n^\ell\\ 0 \end{smallmatrix}\right)} \ar[r]^-{d^{k+n}} \ar[ld]_-{\left(\begin{smallmatrix} h_{n+1}\\0 \end{smallmatrix}\right)}&P^{k+n-1} \ar[ld]_-{\left(\begin{smallmatrix} h_{n}\\0 \end{smallmatrix}\right)} \ar[d]\\ Q^{n+1}\oplus R^{n+1} \ar[r]_-{\left(\begin{smallmatrix} \delta^{n+1} & 0 \\ 0 & \partial^{n+1} \end{smallmatrix}\right)} & Q^n\oplus R^n \ar[r]_-{\left(\begin{smallmatrix} \delta^{n} & 0 \\ 0 & \partial^{n} \end{smallmatrix}\right)}& Q^{n-1}\oplus R^{n-1} }$$ Then, we have $$\left(\begin{smallmatrix} \delta^{n+1} & 0 \\ 0 & \partial^{n+1} \end{smallmatrix}\right) \left(\begin{smallmatrix} h_{n+1}\\0 \end{smallmatrix}\right) + \left(\begin{smallmatrix} h_n\\0 \end{smallmatrix}\right)d^{k+n}=\left(\begin{smallmatrix} \delta^{n+1} h_{n+1}\\0 \end{smallmatrix}\right) + \left(\begin{smallmatrix} h_n d^{k+n}\\0 \end{smallmatrix}\right) =\left(\begin{smallmatrix} \varepsilon_n^\ell-e_n^\ell\\0 \end{smallmatrix}\right).$$ It remains to check the case $n=0$. Consider the following picture. $$\xymatrixcolsep{2cm}\xymatrix{ P^{k+1}\ar[r]^-{d^{k+1}} \ar[d] & P^{k} \ar[ld]_-{h_{1}}\ar[d]^-{\varepsilon_0^\ell}_-{e_0^\ell} \ar[r]^-{d^{k}} &P^{k-1} \ar[d]\ar[ld]_-{h_0}\\ Q^{1} \ar[r]_-{\delta^{1}} & P^\ell(v) \ar[r]& 0 }$$ Assume that $h_0\circ d^k + \delta^1 h_1=\varepsilon_0^\ell-e_0^\ell$ as maps $P^k\to P^\ell(v)$. To finish our homotopy, we take the map $\Phi(h_0)$, where $$\Phi:\operatorname{Hom}_A(P^k, P^\ell(v))\to \operatorname{Hom}_A(P^k, P(v))$$ is the isomorphism constructed in the proof of Lemma \ref{lemma:homs into projective at source}. $$\xymatrixcolsep{2cm}\xymatrixrowsep{1.5cm}\xymatrix{ P^{k+1} \ar[d]\ar[r]^-{d^{k+1}} & P^k \ar[ld]_-{\left(\begin{smallmatrix} h_1\\0 \end{smallmatrix}\right)} \ar[r]^-{d^k} \ar[d]^-{\varepsilon_0}_-{e_0} & P^{k-1}\ar[ld]_-{\Phi(h_0)} \ar[d]^-{h_0}\\ Q^1\oplus R^1 \ar[r]_-{(\delta^1,\partial^1)} & P(v) \ar@{->>}[r]_-{p} & P^\ell(v) }$$ Note that the maps $\varepsilon_0, e_0$ and $\Phi(h_0)$ are defined by the equations $$p\varepsilon_0=\varepsilon_0^\ell,\quad pe_0=e_0^\ell,\quad\textrm{and}\quad p\Phi(h_0)=h_0.$$ Using this, we obtain $\delta^1h_1 + p\Phi(h_0)d^k=p(\varepsilon_0-e_0)$. Since the image of $\delta^1 h_1$ is contained in the submodule $\operatorname{rad}P^\ell(v)$, which in a natural way is a submodule of $P(v)$ on which $p:P(v)\to P^\ell(v)$ acts as the identity, we have $\delta^1 h_1=p\delta^1 h_1$ as maps $P^k\to P^\ell(v)$. This means that we have the equality $$p(\delta^1h_1 + \Phi(h_0)d^k)=p(\varepsilon_0- e_0)$$ as maps $P^k\to P^\ell(v)$. Now, since $P^k$ is an $A^\ell$-module, we may argue as in the proof of Lemma \ref{lemma:homs into projective at source}, to conclude that the images of $\delta^1 h_1, \Phi(h_0), \varepsilon_0$ and $e_0$ are all contained in the submodule $\operatorname{rad}(A^\ell)\cdot P(v)\subset P(v)$, on which $p$ acts as the identity. 
This implies that $$\varepsilon_0 - e_0= \delta^1h_1 + \Phi(h_0)d^k= (\delta^1,\partial^1)\left(\begin{smallmatrix} h_1 \\0 \end{smallmatrix}\right)+\Phi(h_0)d^k,$$ confirming the claim that $\varepsilon$ and $e$ are homotopic. Thus, we have shown that the map $$\Upsilon:\operatorname{Ext}_{A^\ell}^k(\Delta^\ell(i),\Delta^\ell(v)) \to \operatorname{Ext}_A^k(\Delta(i),\Delta(v)),$$ defined by $\varepsilon^\ell \mapsto \varepsilon$, is well-defined. Next, note that, since $i\in Q_0^\ell\backslash\{v\}$, the terms of $P^\bullet$ do not contain indecomposable direct summands isomorphic to $P^\ell(v)$, according to Lemma \ref{lemma:projective resolution of module in A^l does not contain P(v)}. Then, since the indecomposable direct summands of $R^s$ are of the form $P(j)$, for $j\in Q_0^{\overline{\ell}}\backslash\{v\}$ (the argument of Lemma \ref{lemma:projective resolution of module in A^l does not contain P(v)} applies equally to the positive-degree terms of the resolution of $\Delta^{\overline{\ell}}(v)$, as $v$ is a source), we have $\operatorname{Hom}_A(P^{k+s}, R^s)=0$, for any $s\geq 1$, according to Lemma \ref{lemma:no homs between projectives in different parts of deconcatenation}. Thus, we have an isomorphism of vector spaces $$\operatorname{Hom}_{A^\ell} (P^{k+s}, Q^s) \cong \operatorname{Hom}_A(P^{k+s}, T^s),$$ for all $s\geq 1$, given by $f\mapsto \left(\begin{smallmatrix} f \\ 0 \end{smallmatrix}\right)$ (note that $T^s=Q^s\oplus R^s$ for $s\geq 1$). When $s=0$, we have the isomorphism $\operatorname{Hom}_{A^\ell}(P^k, Q^0)\cong \operatorname{Hom}_A(P^k, T^0)$, according to Lemma \ref{lemma:homs into projective at source}, since $Q^0=P^\ell(v)$ and $T^0=P(v)$. This means that we have an isomorphism of vector spaces $$\xi^s: \operatorname{Hom}_{A^\ell}(P^{k+s}, Q^s)\cong \operatorname{Hom}_{A}(P^{k+s}, T^s),$$ for all $s\geq 0$. We note that, by construction, $\left(\Upsilon(\varepsilon^\ell)\right)_n=\xi^n(\varepsilon^\ell_n)$ for all $n\geq 0$, which shows that $\Upsilon$ is a linear isomorphism. This proves the first statement. The second statement is proved similarly. \end{proof} Let $\operatorname{Ext}_{A^\ell}^\ast(\Delta^\ell,\Delta^\ell)$ denote the $\operatorname{Ext}$-algebra of standard modules over $A^\ell$, for $\ell=1,2$. Similarly, let $\operatorname{Ext}_A^\ast(\Delta,\Delta)$ denote the $\operatorname{Ext}$-algebra of standard modules over $A$. Fix the basis $1_{\Delta(v)}$ of the space $\operatorname{End}_A(\Delta(v))$. \begin{theorem}\label{theorem:pasting of ext-algebras of parts of concatenation isomorphic to whole ext-algebra} Let $Q^1\sqcup Q^2$ be a deconcatenation of $Q$ at a sink or source $v$. Then, there is an isomorphism of graded algebras $\operatorname{Ext}_A^\ast(\Delta,\Delta)\cong \operatorname{Ext}_{A^1}^\ast(\Delta^1,\Delta^1)\diamond \operatorname{Ext}_{A^2}^\ast(\Delta^2,\Delta^2)$. \end{theorem} \begin{proof} We start by noting that, according to Propositions \ref{proposition:ext between standards in deconcatenation} and \ref{proposition:ext to and from source in deconcatenation}, there holds $$\operatorname{Ext}^k_{A^\ell}(\Delta^\ell(i),\Delta^\ell(j))\cong \operatorname{Ext}_A^k(\Delta(i),\Delta(j)),$$ for all $i,j\in Q_0^\ell$ and all $k\geq 0$. Put $C=\operatorname{Ext}_{A^1}^\ast(\Delta^1,\Delta^1)\diamond \operatorname{Ext}_{A^2}^\ast(\Delta^2,\Delta^2).$ By definition of the multiplication on $C$, for any $x\in \operatorname{Ext}_{A^1}^\ast(\Delta^1,\Delta^1)$ and $y\in \operatorname{Ext}_{A^2}^\ast(\Delta^2,\Delta^2)$ such that $\deg x>0$ and $\deg y>0$, there holds $x\cdot_C y=y\cdot_C x=0$.
Similarly, according to Proposition \ref{proposition:no ext between different parts of concatenation}, we have $$\operatorname{Ext}_{A^1}^{>0}(\Delta^1,\Delta^1)\cdot \operatorname{Ext}_{A^2}^{>0}(\Delta^2,\Delta^2)=\operatorname{Ext}_{A^2}^{>0}(\Delta^2,\Delta^2)\cdot\operatorname{Ext}_{A^1}^{>0}(\Delta^1,\Delta^1)=0.$$ Together, these facts imply that there is a homomorphism of graded algebras $\Phi:C\to \operatorname{Ext}_A^\ast(\Delta,\Delta)$. Note that, because of Proposition \ref{proposition:no ext between different parts of concatenation}, the subspaces $\Phi(\operatorname{Ext}_{A^\ell}^\ast(\Delta^\ell,\Delta^\ell))$ generate $\operatorname{Ext}_A^\ast(\Delta,\Delta)$, showing that $\Phi$ is surjective. Lastly, note that $\dim_K C=\dim_K \operatorname{Ext}_A^\ast(\Delta,\Delta)$, so that $\Phi$ is a surjective linear map between vector spaces of the same dimension, hence a linear isomorphism. \end{proof} \begin{example} We consider the deconcatenation $(\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l] & 3\ar[l]}) \sqcup (\xymatrixcolsep{0.5cm}\xymatrix{3 \ar[r] &4 \ar[r] & 5})$ of the quiver $$Q=\xymatrixcolsep{0.5cm}\xymatrix{1 & 2 \ar[l]& 3\ar[r] \ar[l] & 4\ar[r] & 5}.$$ We draw the Loewy diagrams of the projective and standard modules over $A^1$. \begin{align*} P^1(1)&: 1,\quad P^1(2): \vcenter{\xymatrixrowsep{0.5cm}\xymatrix{2\ar[d]\\1}},\quad P^1(3):\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{3\ar[d]\\2\ar[d]\\1}},\quad \Delta^1(1)\cong P^1(1),\quad \Delta^1(2)\cong P^1(2),\quad \Delta^1(3)\cong P^1(3). \end{align*} As the standard modules over $A^1$ are projective, there are no non-split extensions between them. We see that there are homomorphisms $f:\Delta^1(1)\to \Delta^1(2)$, $g:\Delta^1(2)\to \Delta^1(3)$ and $h:\Delta^1(1)\to \Delta^1(3)$, such that $h=g\circ f$. The quiver of the Ext-algebra $\operatorname{Ext}_{A^1}^\ast(\Delta^1,\Delta^1)$ is then $$\xymatrix{1 \ar@{-->}[r]^-f & 2 \ar@{-->}[r]^-g & 3},$$ with $\deg f=\deg g=0$, subject to no additional relations. Next, we draw the Loewy diagrams of the projective and standard modules over $A^2$. \begin{align*} P^2(3)&: \vcenter{\xymatrixrowsep{0.5cm}\xymatrix{3\ar[d]\\4\ar[d]\\5}},\quad P^2(4): \vcenter{\xymatrixrowsep{0.5cm}\xymatrix{4\ar[d]\\5}},\quad P^2(5):5,\quad \Delta^2(3):3,\quad \Delta^2(4):4,\quad \Delta^2(5):5. \end{align*} As the standard modules over $A^2$ are simple, we see that we have non-split extensions $\alpha\in \operatorname{Ext}_{A^2}^1(\Delta^2(3),\Delta^2(4))$ and $\beta\in \operatorname{Ext}_{A^2}^1(\Delta^2(4),\Delta^2(5))$, such that $\beta\alpha=0$. The quiver of the Ext-algebra $\operatorname{Ext}_{A^2}^\ast(\Delta^2,\Delta^2)$ is then $$\xymatrix{3 \ar[r]^-\alpha & 4 \ar[r]^-\beta& 5},$$ subject to the relation $\beta\alpha=0$. Finally, we draw the Loewy diagrams of the projective and standard modules over $A$. \begin{align*} P(1)&:\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{1}},\quad P(2):\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{2\ar[d]\\1}},\quad P(3):\vcenter{\xymatrixcolsep{0.5cm}\xymatrixrowsep{0.5cm}\xymatrix{&3 \ar[ld]\ar[rd]\\2\ar[d] & & 4\ar[d]\\1 & & 5}},\quad P(4):\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{4 \ar[d]\\5}},\quad P(5):\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{5}}, \\ \Delta(1)&\cong P(1),\quad \Delta(2)\cong P(2),\quad \Delta(3):\vcenter{\xymatrixrowsep{0.5cm}\xymatrix{3\ar[d]\\2\ar[d]\\1}},\quad \Delta(4)\cong L(4),\quad \Delta(5)\cong P(5).
\end{align*} We see that there are homomorphisms $f^\prime:\Delta(1)\to \Delta(2)$, $g^\prime: \Delta(2) \to \Delta(3)$ and $h^\prime: \Delta(1)\to \Delta(3)$, such that $h^\prime=g^\prime\circ f^\prime$. Additionally, we see that there are non-split extensions $\alpha^\prime\in \operatorname{Ext}_A^1(\Delta(3),\Delta(4))$ and $\beta^\prime\in \operatorname{Ext}_{A}^1(\Delta(4),\Delta(5))$, such that $\beta^\prime \alpha^\prime=0$. The composition $\alpha^\prime g^\prime$ produces an extension in $\operatorname{Ext}_A^1(\Delta(2),\Delta(4))$, and is therefore equal to zero, according to Proposition \ref{proposition:ext between standards in deconcatenation}. Therefore, the Ext-algebra $\operatorname{Ext}_A^\ast(\Delta,\Delta)$ is given by the quiver $$\xymatrix{1 \ar@{-->}[r]^-{f^\prime} & 2 \ar@{-->}[r]^-{g^\prime} & 3\ar[r]^-{\alpha^\prime} &4 \ar[r]^-{\beta^\prime} & 5},$$ subject to the relations $\beta^\prime \alpha^\prime=0$ and $\alpha^\prime g^\prime=0$. We observe that this algebra coincides with the algebra $$C=\operatorname{Ext}_{A^1}^\ast(\Delta^1,\Delta^1)\diamond\operatorname{Ext}_{A^2}^\ast(\Delta^2,\Delta^2).$$ \end{example} \subsection{Exact Borel subalgebras under deconcatenations} Consider a deconcatenation $Q^1\sqcup Q^2$ of $Q$ at the sink or source $v$. Let $B^1\subset A^1$ be a subalgebra having a basis $\{b_1,\dots, b_n, e_v\}$ and let $B^2\subset A^2$ be a subalgebra with basis $\{c_1,\dots, c_m, e_v\}$. Then $B=B^1\diamond B^2$ is a subalgebra of $A$. \begin{proposition}\label{proposition:gluing at source admits borel} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at the source $v$ and assume that we have regular exact Borel subalgebras $B^1\subset A^1$ and $B^2\subset A^2$. Then, $A$ admits a regular exact Borel subalgebra. \end{proposition} \begin{proof} For any $i\in Q_0^\ell \backslash\{v\}$, we have $\Delta(i)\cong \Delta^\ell(i)$. Let $0\subset M_n\subset \dots \subset M_0=\operatorname{rad}\Delta^\ell(i)$ be a filtration with costandard subquotients. We claim that this is also a filtration of $\operatorname{rad}\Delta(i)$. Consider a subquotient of the filtration $$\faktor{M_k}{M_{k+1}}\cong \nabla^\ell(j).$$ Then, $\nabla^\ell(j)\cong \nabla(j)$ for all $j\in Q_0^\ell$. For $j\neq v$, we appeal to Lemma \ref{lemma:qh-structure on parts from qh-structure of whole}. For $j=v$, this is automatic as $\nabla(v)$ is simple by virtue of $v$ being a source. Left to check is that we have a $\nabla$-filtration of $\operatorname{rad}\Delta(v)$. Since $v$ is a source, we have a direct sum decomposition: $$\operatorname{rad}\Delta(v)=\operatorname{rad}\Delta^1(v)\oplus \operatorname{rad}\Delta^2(v).$$ Since $A^1$ has a regular exact Borel subalgebra, $\operatorname{rad}\Delta^1(v)$ has a filtration whose subquotients are members of the set $$\{\nabla(i)\ |\ i\in Q_0^1\backslash\{v\} \}.$$ To see that $\nabla(v)$ does not occur as a composition factor, note that $[\operatorname{rad}\Delta(v):L(v)]=0$. Indeed, an occurrence of a composition factor $L(v)$ would imply that $[P(v):L(v)]\geq 2$, contradicting the fact that $v$ is a source. Similarly, $\operatorname{rad}\Delta^2(v)$ has a filtration whose subquotients are members of the set $$\{\nabla(i)\ | \ i\in Q_0^2\backslash\{v\}\}.$$ We conclude that $\operatorname{rad}\Delta(v)\in \mathcal{F}(\nabla)$, proving the statement. \end{proof} \begin{proposition}\label{proposition:gluing of deconcatenation admits regular exact borel} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at the sink $v$.
Assume further that we have regular exact Borel subalgebras $B^1 \subset A^1$, $B^2\subset A^2$ and that the vertex $v$ is minimal with respect to the essential order on $Q_0$. Then, there exists a regular exact Borel subalgebra $B\subset A$. \end{proposition} \begin{proof} For any $i\in Q_0^\ell \backslash\{v\}$, we have $\Delta(i)\cong \Delta^\ell(i)$. Let $0\subset M_n\subset \dots \subset M_0=\operatorname{rad}\Delta^\ell(i)$ be a filtration with costandard subquotients. We claim that this is also a filtration of $\operatorname{rad}\Delta(i)$. Consider a subquotient of the filtration $$\faktor{M_k}{M_{k+1}}\cong \nabla^\ell(j).$$ Then $\nabla^\ell(j)\cong \nabla(j)$ for all $j\in Q_0^\ell\backslash\{v\}$, according to Lemma \ref{lemma:qh-structure on parts from qh-structure of whole}. For $j=v$, we claim that when $v$ is a sink, $v$ is minimal in the essential order on $Q_0$ if and only if $\nabla(v)$ is simple. Indeed, if $v$ is minimal, then $\nabla(v)$ is clearly simple. Conversely, assume that $\nabla(v)$ is simple and that there exists another vertex $k$ such that $k\triangleleft^e v$. Then, we have $$[\Delta(v):L(k)]>0 \quad \textrm{or} \quad (P(k):\Delta(v))>0.$$ The first condition cannot be satisfied, since $\Delta(v)$ is simple by virtue of $v$ being a sink. According to \cite[Lemma 2.5]{DlabRingel}, we have $$(P(k):\Delta(v))=[\nabla(v):L(k)],$$ so this condition cannot be satisfied either, since $\nabla(v)$ was assumed to be simple. Therefore, for $i\in Q_0^\ell\backslash\{v\}$, we have $\operatorname{rad}\Delta(i)\in \mathcal{F}(\nabla)$, since $A^1$ and $A^2$ admit regular exact Borel subalgebras, by Theorem \ref{theorem:when does KA_n have regular exact borel}. Since $v$ is a sink, $\Delta(v)\cong L(v)$, which implies that $\operatorname{rad}\Delta(v)=0$, so there is nothing more to show. \end{proof} \begin{lemma}\label{lemma:costandard at sink only one with composition factor} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at the sink $v$. Then, for all $i\in Q_0$, we have $\left[\nabla(i):L(v)\right]>0$ if and only if $i=v$. \end{lemma} \begin{proof} By definition, $\nabla(i)$ is a submodule of $I(i)$, so $\left[\nabla(i):L(v)\right]>0$ if and only if there is a path from $v$ to $i$, not passing through a vertex greater than $i$ in $Q_0$. For $i\neq v$, such a path has positive length and therefore begins with an arrow leaving $v$, which is impossible since $v$ is a sink. For $i=v$, the trivial path witnesses $\left[\nabla(v):L(v)\right]>0$. \end{proof} From Lemma \ref{lemma:costandard at sink only one with composition factor}, it is also clear that for any module $M\in \mathcal{F}(\nabla)$, the costandard module $\nabla(v)$ appears as a subquotient in the filtration of $M$ if and only if $\left[M:L(v)\right]>0$. Now we are ready to provide a necessary and sufficient condition for $A$ to admit a regular exact Borel subalgebra $B\subset A$, where $A=KQ$ and $Q=Q^1\sqcup Q^2$ is a deconcatenation at a sink $v$. \begin{proposition}\label{proposition:sufficient and necessary condition for borel when v is a sink and costandard at v not simple} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at the sink $v$ and assume that $B^1\subset A^1$ and $B^2\subset A^2$ are regular exact Borel subalgebras. Moreover, assume that $v$ is not minimal with respect to the essential order on $Q_0$. Then, the following are equivalent. \begin{enumerate}[(i)] \item There exists a regular exact Borel subalgebra $C\subset A$. \item $\left[\operatorname{rad}\Delta(i):L(v)\right]=0$, for all $i\in Q_0$. \item The vertex $v$ is maximal with respect to the essential order on $Q_0$.
\end{enumerate} \end{proposition} \begin{proof} We first show the equivalence (i) $\iff$ (ii). We assume that (ii) holds and claim that $\operatorname{rad}\Delta(i)\in \mathcal{F}(\nabla)$, for all $i\in Q_0$. If $i=v$, we have $\Delta(v)\cong L(v)$ and consequently $\operatorname{rad}\Delta(v)=0$, so there is nothing to prove. Assume that $i\neq v$, where $i\in Q_0^\ell$. Then, $\Delta^\ell(i)\cong \Delta(i)$. According to the remark after Lemma \ref{lemma:costandard at sink only one with composition factor}, no $\nabla$-filtration of $\operatorname{rad}\Delta^\ell(i)$ contains $\nabla^\ell(v)$ as a subquotient. Therefore, any $\nabla$-filtration of $\operatorname{rad}\Delta^\ell (i)$ is automatically a $\nabla$-filtration of $\operatorname{rad}\Delta(i)$, since all occurring subquotients in the filtration of $\operatorname{rad}\Delta^\ell(i)$ are of the form $\nabla^\ell(j)$ with $j\neq v$. Conversely, if $C\subset A$ is a regular exact Borel subalgebra, then $\operatorname{rad}\Delta(i)\in \mathcal{F}(\nabla)$ for $1\leq i\leq n$. Let $i$ be such that $[\operatorname{rad}\Delta(i):L(v)]>0$ and assume without loss of generality that $i\in Q_0^1\backslash\{v\}$. Then, $\nabla(v)$ occurs in the $\nabla$-filtration of $\operatorname{rad}\Delta(i)$, according to the remark after Lemma~\ref{lemma:costandard at sink only one with composition factor}. Note that since $v$ is assumed not to be minimal in the essential order, $\nabla(v)$ is not simple. Since $Q^2$ is connected, $v$ has at least one neighbour, $j\in Q_0^2\backslash\{v\}$. If $j\triangleleft^e v$, then $L(j)$ is a composition factor in $\nabla(v)$, which implies that $[\operatorname{rad}\Delta(i):L(j)]>0$ since $\nabla(v)$ occurs in the $\nabla$-filtration of $\operatorname{rad}\Delta(i)$. This is a contradiction, since $\operatorname{rad}\Delta(i)$ is an $A^1$-module. If $v\triangleleft^e j$, then $[\operatorname{rad}\Delta(j):L(v)]>0$, so that $\nabla(v)$ is a subquotient of $\operatorname{rad} \Delta(j)$. This contradicts $\nabla(v)$ being a subquotient of $\operatorname{rad}\Delta(i)$, unless $\nabla(v)$ is simple. However, we assumed that $v$ is not minimal in the essential order, guaranteeing that $\nabla(v)$ is not simple. If $v$ and $j$ are incomparable, we consider the module $M$, given by the following Loewy diagram: $$\xymatrixrowsep{0.4cm}\xymatrix{ j \ar[d] \\ v}$$ Since $A^2$ is quasi-hereditary, $\trianglelefteq^e$ is adapted to $A^2$, according to Lemma~\ref{lemma: q.h algebra has adapted order}. Then, by definition, there is a vertex $k$ such that $v\triangleleft^e k$, $j\triangleleft^e k$ and $[M:L(k)]\neq 0$. This is a contradiction, since $M$ has no other composition factors besides $L(v)$ and $L(j)$. This proves (i) $\iff$ (ii). Next, we prove (ii) $\iff$ (iii). Assume (ii) and that $v$ is not maximal. Then there is another vertex $k$, such that $v\triangleleft^e k$. Then, by definition of the essential order, we have $$\left[\Delta(k):L(v)\right]>0\quad \textrm{or}\quad (P(v):\Delta(k))>0.$$ The first condition cannot be satisfied, as it contradicts (ii), and neither can the second, since $P(v)$ is simple by virtue of $v$ being a sink. Conversely, assume that $v$ is maximal and consider $P(i)$, for $i\in Q_0$. If $\left[P(i):L(v)\right]=0$, there is nothing to show, as $\operatorname{rad}\Delta(i)$ is a submodule of a quotient of $P(i)$.
Otherwise, if $\left[P(i):L(v)\right]>0$, there still holds $\left[\operatorname{rad}\Delta(i):L(v)\right]=0$, since by definition the composition factors $L(t)$ of $\operatorname{rad}\Delta(i)$ satisfy $t\triangleleft i$, which for $t=v$ would contradict the maximality of $v$. \end{proof} \begin{proposition}\label{proposition:gluing of borels is borel} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at a sink or a source $v$. Assume that there are regular exact Borel subalgebras $B^1\subset A^1$ and $B^2\subset A^2$ containing the sets of idempotents $\{e_i \ |\ i\in Q_0^1\}$ and $\{e_j \ |\ j\in Q_0^2\}$, respectively, and that $A$ admits a regular exact Borel subalgebra. Then $B^1\diamond B^2\subset A$ is a regular exact Borel subalgebra of $A$. \end{proposition} \begin{proof} Consider the regular exact Borel subalgebra $B^\ell$, where $\ell=1,2$. We have seen that $B^\ell=KQ_{B^\ell}$, where $Q_{B^\ell}$ is the following quiver. \begin{enumerate}[(i)] \item The set of vertices of $Q_{B^\ell}$ is ${Q_{B^\ell}}_0=Q_0^\ell$. \item There is an arrow $i\to j$ in $Q_{B^\ell}$ if and only if $\dim \operatorname{Ext}^1_{A^\ell}(\Delta^\ell(i),\Delta^\ell(j))=1$. \end{enumerate} Let $C$ be a regular exact Borel subalgebra of $A$. Then, $C=KQ_C$ for some quiver $Q_C$ with vertex set $Q_0$. We observe that, by Propositions \ref{proposition:ext between standards in deconcatenation} and \ref{proposition:ext to and from source in deconcatenation}, we have $$\dim\operatorname{Ext}^1_{A^\ell}(\Delta^\ell(i),\Delta^\ell(j))=\dim\operatorname{Ext}^1_{A}(\Delta(i),\Delta(j)),$$ for $i,j\in Q_0^\ell$. Using that $B^1$ and $B^2$ are regular exact Borel subalgebras, we get $$\dim\operatorname{Ext}^1_{B^\ell}(L(i),L(j))=\dim\operatorname{Ext}^1_{A^\ell}(\Delta^\ell(i),\Delta^\ell(j))=\dim\operatorname{Ext}^1_{A}(\Delta(i),\Delta(j))=\dim\operatorname{Ext}^1_C(L(i),L(j)).$$ This shows that for any $i,j\in Q_0^\ell$, there is an arrow $i\to j$ in the quiver of $C$ if and only if there is an arrow $i\to j$ in the quiver of $B^\ell$. Next, note that if $i\in Q_0^1\backslash\{v\}$ and $x\in Q_0^2\backslash\{v\}$, then there are no arrows between $i$ and $x$ in $Q_C$, because $$\dim \operatorname{Ext}_C^1(L(i),L(x))=\dim \operatorname{Ext}_{A}^1(\Delta(i),\Delta(x))=0,$$ according to Proposition \ref{proposition:no ext between different parts of concatenation}. This means that the quivers of $C$ and $B^1\diamond B^2$ coincide, so that $C$ and $B^1\diamond B^2$ are isomorphic. This shows that $B^1\diamond B^2$ has $n$ simple modules up to isomorphism and is quasi-hereditary with simple standard modules. Next, we check that the functor $A\otimes_{B^1\diamond B^2}\blank$ is exact. To this end, we claim that $A$ is projective as a right $B^1\diamond B^2$-module. Consider the decomposition of right $A^1$-modules $$A^1\cong \bigoplus_{i\in Q_0^1} P^1(i).$$ Since $A^1\otimes_{B^1}\blank$ is an exact functor, $A^1$ is projective as a right $B^1$-module. Because $B^1\subset A^1$ is a subalgebra, the decomposition above is also a decomposition of right $B^1$-modules. This, in turn, implies that the summands $P^1(i)$ above are projective as right $B^1$-modules. Because $B^1\diamond B^2$ surjects onto $B^1$, each $B^1$-module has the natural structure of a $B^1\diamond B^2$-module (see the remark prior to Lemma \ref{lemma:elem. properties of deconcatenation}). Then, the above decomposition is also a decomposition of right $B^1\diamond B^2$-modules, since the action of $\operatorname{rad}B^2$ on each summand is trivial.
Now, consider the decomposition $$A\cong \bigoplus_{i\in Q_0^1\backslash\{v\}} P(i) \oplus \bigoplus_{j\in Q_0^2\backslash\{v\}} P(j) \oplus P(v)$$ of right $A$-modules. By the above argument, the summands $P(i)$ in the first term are projective as right $B^1\diamond B^2$-modules (note that for $i\in Q_0^1\backslash\{v\}$, we have $P(i)\cong P^1(i)$). Similarly, the summands of the second term are projective right $B^1\diamond B^2$-modules. Left to check is that $P(v)$ is a projective right $B^1\diamond B^2$-module. Let $v$ be a sink. Then $P(v)\cong L(v)$ and there is nothing to show. Let $v$ be a source. As noted earlier, $P^\ell(v)$ is projective as a right $B^\ell$-module. Since $v$ is a source also in the quiver of $B^\ell$, the module $P_{B^\ell}^\ell(v)$ is the unique indecomposable projective $B^\ell$-module having $L(v)$ as a composition factor. This implies that there is an isomorphism $f^\ell$ of right $B^\ell$-modules: $$f^\ell: P^\ell(v)\to P_{B^\ell}^\ell(v) \oplus M^\ell,$$ where $M^\ell$ is a direct sum of indecomposable projective $B^\ell$-modules, with some multiplicities. Note that $M^\ell$ does not contain $P_{B^\ell}^\ell(v)$ as a direct summand. We claim that there is an isomorphism of right $B^1\diamond B^2$-modules $$f:P(v)\to P_{B^1\diamond B^2}(v) \oplus M^1 \oplus M^2.$$ Define $f$ on paths in $P(v)$ by $$p\mapsto \begin{cases} f^1(p) & \textrm{if }p\in A^1\backslash \operatorname{span}(e_v) \\ f^2(p) & \textrm{if }p\in A^2\backslash \operatorname{span}(e_v) \\ e_v & \textrm{if }p=e_v. \end{cases}$$ It is clear that $f$ is bijective and a homomorphism of $B^1\diamond B^2$-modules, hence an isomorphism. Next, we check that $A\otimes_{B^1\diamond B^2}L(i)\cong \Delta(i)$, for all $i\in Q_0$. When $i\in Q_0^\ell\backslash\{v\}$, we know that $\Delta^\ell(i)\cong \Delta(i)$. In particular, $\Delta(i)$ is an $A^\ell$-module. Let $\varphi:A^\ell\otimes_{B^{\ell}} L(i)\to \Delta(i)$ be an isomorphism. The module $A\otimes_{B^1\diamond B^2}L(i)$ is generated by elements of the form $p\otimes e_i$, where $p$ is a path in $A$. Note that, if $p$ is a path in $A^{\overline{\ell}}$ not equal to $e_v$, then $$p\otimes e_i=p e_{s(p)} \otimes e_i= p\otimes e_{s(p)}e_i=0.$$ To see this, note that $p$ is a path in $A^{\overline{\ell}}$ and $i\in Q_0^\ell\backslash\{v\}$, so that $i\neq s(p)$. Then, we may define a homomorphism $$\tilde{\varphi}: A\otimes_{B^1\diamond B^2}L(i)\to \Delta(i)$$ on the non-zero generators of $A\otimes_{B^1\diamond B^2}L(i)$ by $$p\otimes_{B^1\diamond B^2} e_i \mapsto \varphi(p\otimes_{B^\ell} e_i),$$ which is clearly an isomorphism. It remains to check that $A\otimes_{B^1\diamond B^2} L(v)\cong \Delta(v)$. When $v$ is a sink, we have $\Delta(v)\cong L(v)$, so the assertion is clear, given that $A^\ell\otimes_{B^{\ell}} L(v)\cong \Delta(v)$. Assume that $v$ is a source and let $$f_1: A^1\otimes_{B^1}L(v)\to \Delta^1(v),\quad \textrm{and} \quad f_2:A^2\otimes_{B^2}L(v)\to \Delta^2(v)$$ be isomorphisms. The module $A\otimes_{B^1\diamond B^2} L(v)$ is generated by elements of the form $p\otimes e_v$, where $p\in A$ is some path. Note that with the exception of $e_v$, every such $p$ is contained in $A^1$ or in $A^2$. Define a map $$f:A\otimes_{B^1\diamond B^2} L(v)\to \Delta(v)$$ on generators by $$p\otimes e_v \mapsto \begin{cases} f_1(p\otimes e_v) & \textrm{if }p\in A^1\backslash \operatorname{span}(e_v),\\ f_2(p\otimes e_v) & \textrm{if }p\in A^2\backslash \operatorname{span}(e_v),\\ e_v\otimes e_v & \textrm{if }p=e_v.
\end{cases}$$ Note that, if $p\in A^1$ and $q\in A^2$, then $p\otimes e_v$ and $q\otimes e_v$ are either zero or linearly independent, except for when $p=q=e_v$. Then, $f$ is a well-defined homomorphism because $f_1$ and $f_2$ are. Similarly, $f$ is injective because $f_1$ and $f_2$ are and the intersection of their images equals $\operatorname{span}(e_v)$. For surjectivity, note that as a vector space, we have $\Delta(v)=\operatorname{span}(e_v)\oplus \operatorname{rad}\Delta^1(v)\oplus \operatorname{rad}\Delta^2(v).$ To finish the proof, note that since $B^1\diamond B^2$ and $C$ are both exact Borel subalgebras, they are conjugate by Theorem~\ref{theorem: uniqueness of borel}. By Theorem~\ref{theorem:inner automorphisms preserve regular exact borel subalgebras}, $B^1\diamond B^2$ is regular since $C$ is. \end{proof} We conclude this section with an easy observation about the behavior of (certain) $A_\infty$-structures under deconcatenations. \begin{proposition}\label{proposition:concatenation of formals is formal} Let $Q=Q^1\sqcup Q^2$ be a deconcatenation at a sink or a source $v$ and assume that $\operatorname{Ext}_{A^\ell}^\ast(\Delta^\ell,\Delta^\ell)$ is intrinsically formal, for $\ell=1,2$. Then, $\operatorname{Ext}_{A}^\ast(\Delta,\Delta)$ is intrinsically formal. \end{proposition} \begin{proof} Let $m_n$ denote the higher multiplications on $\operatorname{Ext}_A^\ast(\Delta,\Delta)$ and consider $m_n(\varphi_n,\dots, \varphi_1)$, where $\varphi_i\in \operatorname{Ext}_A^\ast(\Delta,\Delta)$ for $1\leq i \leq n$. Suppose that $m_n(\varphi_n,\dots, \varphi_1)\in \operatorname{Ext}_A^k(\Delta(a),\Delta(b))$, for some $a, b\in Q_0$ and $k\geq 0$. If $a,b\in Q_0^\ell$, then $m_n(\varphi_n,\dots, \varphi_1)=0$, because $\operatorname{Ext}_{A^\ell}^\ast(\Delta^\ell,\Delta^\ell)$ is intrinsically formal. If $a\in Q_0^{\ell}\backslash\{v\}$ and $b\in Q_0^{\overline{\ell}}\backslash\{v\}$, then $m_n(\varphi_n,\dots, \varphi_1)=0$, according to Proposition~\ref{proposition:no ext between different parts of concatenation}. If $a=b=v$, then $m_n(\varphi_n,\dots,\varphi_1)=0$, since standard modules have no non-split self-extensions. \end{proof} \subsection{Regular exact Borel subalgebras for path algebras of linear quivers} Let $Q$ be the linear quiver $$\xymatrix{1\ar@{-}[r] & 2 \ar@{-}[r] &\dots \ar@{-}[r] & n-1 \ar@{-}[r] & n}$$ where the edges may be of either orientation, and put $\Lambda=KQ$. Denote by $\mathbf{A}_m$ and $\mathbf{B}_\ell$ the special cases $$\xymatrix{1 \ar[r] & \dots \ar[r] & m}\quad\textrm{and}\quad \xymatrix{1 & \dots \ar[l] & \ell\ar[l]},$$ respectively. By Proposition \ref{proposition:A_n has a regular exact borel}, $K\mathbf{A}_m$ admits a regular exact Borel subalgebra, regardless of the partial order on the vertices. We remark that all the statements from Section 2, describing the structure of $K\mathbf{A}_m$ in terms of its binary search tree, remain true for $K\mathbf{B}_\ell$, if we swap ``left'' and ``right''. For instance, over $K\mathbf{B}_\ell$, the composition factors of a standard module $\Delta_{K\mathbf{B}_\ell}(i)$ are found in the \emph{left} subtree of the vertex labeled by $i$, rather than the right. From this, it follows that also $K\mathbf{B}_\ell$ admits a regular exact Borel subalgebra, regardless of the partial order on the vertices. Now, consider $\Lambda=KQ$. The quiver $Q$ admits an iterated deconcatenation $$Q=Q^1\sqcup \dots \sqcup Q^s,$$ where, for each $1\leq r\leq s$, the quiver $Q^r$ is either $\mathbf{A}_m$ or $\mathbf{B}_\ell$.
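This iterated deconcatenation can be computed mechanically: one simply splits the sequence of arrow orientations of $Q$ at every sink and source. The following Python sketch makes this explicit; the encoding of $Q$ by a list of arrow directions and the name \texttt{deconcatenate} are illustrative choices of ours, not notation from the references.
\begin{verbatim}
# Deconcatenate a linear quiver, encoded by its arrow directions
# ('r' for i -> i+1, 'l' for i <- i+1), into maximal pieces of
# type A_m (all 'r') and B_l (all 'l').  Consecutive pieces share
# exactly one vertex, which is a sink or a source of Q.
def deconcatenate(directions):
    pieces, start = [], 0
    for i in range(1, len(directions)):
        if directions[i] != directions[i - 1]:
            # a direction change at position i marks a sink or source
            pieces.append((start + 1, i + 1, directions[i - 1]))
            start = i
    pieces.append((start + 1, len(directions) + 1, directions[-1]))
    return pieces

# The quiver 1 -> 2 -> 3 <- 4 <- 5 -> 6 -> 7 from the example below:
print(deconcatenate(['r', 'r', 'l', 'l', 'r', 'r']))
# [(1, 3, 'r'), (3, 5, 'l'), (5, 7, 'r')]
\end{verbatim}
Each triple records the first vertex, the last vertex and the common orientation of one piece $Q^r$.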
Recall that the partial order on the set of vertices of $Q$ is constructed in the following way. When $Q=Q^1\sqcup Q^2$, with $Q_0^1\cap Q_0^2=\{v\}$ and $i,j\in Q_0$, we say that $i\triangleleft j$ if one of the following holds. \begin{enumerate}[(i)] \item We have $i,j\in Q_0^\ell$ and $i\triangleleft^\ell j$, for some $\ell$. \item We have $i\in Q_0^\ell$, $j\in Q_0^{\overline{\ell}}$, $i\triangleleft^\ell v$ and $v\triangleleft^{\overline{\ell}} j$. \end{enumerate} It is clear that, in this setting, $v$ is maximal in $Q_0$ if and only if it is maximal in both $Q_0^\ell$ and $Q_0^{\overline{\ell}}$. We conclude that we have a special case of Proposition \ref{proposition:gluing of deconcatenation admits regular exact borel} as follows. \begin{theorem}\label{theorem:when does path algebra of linear quiver have borel} The algebra $\Lambda$ admits a regular exact Borel subalgebra $C\subset \Lambda$ if and only if each vertex $v\in Q_0$ which is a sink at which a deconcatenation occurs is minimal or maximal with respect to the essential order on $Q_0$. \end{theorem} \begin{example} Consider the quiver $$Q=\xymatrix{ 1 \ar[r] & 2 \ar[r] & 3 & 4 \ar[l] & 5\ar[l] \ar[r] & 6 \ar[r] &7. }$$ Deconcatenating $Q$ as far as possible, we find that $$Q=(\xymatrix{1\ar[r]^-x & 2 \ar[r]^-y & 3})\sqcup(\xymatrix{3 & 4 \ar[l]_-a & 5\ar[l]_-b})\sqcup (\xymatrix{5 \ar[r]^-u & 6 \ar[r]^-v & 7}).$$ Suppose the order on $Q_0$ is the one given by the following orders on $Q_0^1, Q_0^2$ and $Q_0^3$: $$1\triangleleft_T 3, 2\triangleleft_T 3, \quad 4\triangleleft_T 5 \triangleleft_T3,\quad 6\triangleleft_T 5 \triangleleft_T 7,$$ corresponding to binary search trees $$\begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {3}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-1,-1) {2}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (1,-1) {1}; \node(d) at (0,-2) {}; \draw[thick] (a) to (b); \draw[thick] (a) to (c); \end{tikzpicture}\quad \begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {3}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (1,-1) {5}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (0.5, -2) {4}; \draw[thick] (a) to (b); \draw[thick] (b) to (c); \end{tikzpicture}\quad \begin{tikzpicture} \node(a) [shape=circle, draw, thick, fill=lightgray] at (0,0) {7}; \node(b) [shape=circle, draw, thick, fill=lightgray] at (-1, -1) {5}; \node(c) [shape=circle, draw, thick, fill=lightgray] at (-0.5, -2) {6}; \draw[thick] (a) to (b); \draw[thick] (b) to (c); \end{tikzpicture}$$ We compute the standard modules: $$\Delta(1)\cong L(1), \quad \Delta(2)\cong L(2),\quad \Delta(3)\cong L(3), \quad \Delta(4)\cong L(4),\quad \Delta(5)\cong M(4,5),\quad \Delta(6)\cong L(6),\quad \Delta(7)\cong L(7).$$ The only standard module with non-trivial radical is $\Delta(5)$ with $\operatorname{rad}\Delta(5)\cong L(4)\cong \nabla(4)$, so $\operatorname{rad}\Delta(i)\in \mathcal{F}(\nabla)$ for all $1\leq i\leq 7$. Then, $A^1, A^2$ and $A^3$ have regular exact Borel subalgebras given by $$(\xymatrix{1\ar[r]^-x & 2 & 3}),\quad (\xymatrix{ 3 & 4\ar[r]^-b & 5}),\quad (\xymatrix{5\ar@/^1pc/[rr]^-{vu} & 6 & 7}).$$ We note that, since $3$ is maximal in the first and second quivers, $A$ has a regular exact Borel subalgebra, by Proposition \ref{proposition:gluing of borels is borel}, given by the following quiver.
$$\xymatrix{ 1 \ar[r]^-x & 2 & 3 & 4\ar[r]^-b & 5 \ar@/^1pc/[rr]^-{vu} & 6 & 7 }.$$ \end{example} Similarly to the above, we now wish to extend the statement of Proposition \ref{proposition:a-infty on A_n is trivial} to the algebra $\Lambda$ discussed in this section. \begin{proposition}\label{proposition:a-infty on linear quiver is trivial} $\operatorname{Ext}_{\Lambda}^\ast(\Delta,\Delta)$ is intrinsically formal. \end{proposition} \begin{proof} This follows immediately from Proposition~\ref{proposition:a-infty on A_n is trivial} and Proposition~\ref{proposition:concatenation of formals is formal}. \end{proof} In view of the situations considered in \cite{FKR}, it would be interesting to do a similar investigation with quivers of Dynkin type $\mathbb{D}$ and $\mathbb{E}$. \subsection{A nonlinear example} Consider the algebra $A$, given by the quiver $$\xymatrix{ 1 \ar@<0.5ex>[r]^-a \ar@<-0.5ex>[r]_-{b} & 2 & 3\ar[r]^-c \ar@/^1.5pc/[ll]^-d \ar@/_1.5pc/[ll]_-e& 4}$$ subject to the relations $ae=bd$, $ad=0$ and $be=0$. With the usual order $1\triangleleft2\triangleleft3\triangleleft4$, the standard modules over $A$ are $$\Delta(1)\cong L(1),\quad \Delta(2)\cong L(2),\quad \Delta(3):\vcenter{ \xymatrixrowsep{0.5cm}\xymatrixcolsep{0.5cm}\xymatrix{ & 3 \ar[ld]_-d \ar[rd]^-e \\ 1 \ar[rd]_-b & &1\ar[ld]^-a\\ & 2 }},\quad \textrm{and} \quad \Delta(4)\cong L(4).$$ The only standard module with non-trivial radical is $\Delta(3)$, whose radical is isomorphic to the costandard module $\nabla(2)$. Therefore, $A$ admits a regular exact Borel subalgebra $B\subset A$, which we find is given by the quiver $$\xymatrix{ 1 \ar@<0.5ex>[r]^-a \ar@<-0.5ex>[r]_-{b} & 2 & 3 \ar[r]^-c& 4}.$$ Consider now the algebra $\Lambda$ given by the quiver $$\xymatrix{ 1 \ar@<0.5ex>[r]^-a \ar@<-0.5ex>[r]_-{b} & 2 & 3\ar[r]^-c \ar@/^1.5pc/[ll]^-d \ar@/_1.5pc/[ll]_-e& 4 & \ar[l]_-{c^\prime}5 \ar@/^1.5pc/[rr]^-{e^\prime} \ar@/_1.5pc/[rr]_-{d^\prime} & 6 & 7 \ar@<-0.5ex>[l]_-{a^\prime} \ar@<0.5ex>[l]^-{b^\prime}}$$ subject to the relations $$ae=bd, \quad a^\prime e^\prime=b^\prime d^\prime,\quad ad=0,\quad a^\prime d^\prime=0,\quad be=0,\quad \textrm{and}\quad b^\prime e^\prime=0,$$ with the order on the vertices being $1\triangleleft2\triangleleft3\triangleleft4$ and $7\triangleleft6\triangleleft5\triangleleft4$. Then, $4$ is maximal and Proposition \ref{proposition:sufficient and necessary condition for borel when v is a sink and costandard at v not simple} applies, and $\Lambda$ has a regular exact Borel subalgebra $C\subset \Lambda$, given by the quiver $$\xymatrix{1 \ar@<0.5ex>[r]^-a \ar@<-0.5ex>[r]_-{b}& 2 & 3\ar[r]^-{c} & 4 & 5\ar[l]_-{c^\prime} & 6 & 7 \ar@<-0.5ex>[l]_-{a^\prime} \ar@<0.5ex>[l]^-{b^\prime}},$$ according to Proposition \ref{proposition:gluing of borels is borel}. \newpage\printbibliography \end{document}
\section{Introduction} The concept of fault diameter of Cartesian product graphs was first described in \cite{1}, but the upper bound was wrong, as shown by Xu, Xu and Hou, who provided a small counterexample and corrected the mistake \cite{0}. More precisely, denote by $\mathcal{D}^V_{a}(G)$ the fault diameter of a graph $G$, the maximum diameter of $G$ after deletion of any $a$ vertices, and $G\Box H$ the Cartesian product of graphs $G$ and $H$. Xu, Xu and Hou proved \cite{0} $$\mathcal{D}^V_{a+b+1}(G\Box H)\leq \mathcal{D}^V_{a}(G)+\mathcal{D}^V_{b}(H)+1, $$ while the claimed bound in \cite{1} was $ \mathcal{D}^V_{a}(G)+\mathcal{D}^V_{b}(H)$. (Our notation here slightly differs from notation used in \cite{1} and \cite{0}.) The result was later generalized to graph bundles in \cite{zer-ban} and generalized graph products (as defined by \cite{Bermond}) in \cite{KITAJCIdiameter}. Here we show that in most cases of Cartesian graph bundles the bound can indeed be improved to the one claimed in \cite{1}. Methods used involve the theory of mixed connectivity and recent results on mixed fault diameters \cite{3fd,mpsv,fdsv,fdsv2}. For completeness, we also give the analogous improved upper bound for edge fault diameter. The rest of the paper is organized as follows. In the next section we recall that the graph products and graph bundles often appear as practical interconnection network topologies because of some attractive properties they have. In Section \ref{Preliminaries} we provide general definitions, in particular of the connectivities. Section \ref{Bundles} introduces graph bundles and recalls relevant previous results. The improved bounds are proved in Section \ref{Results}. \section{Motivation - interconnection networks} Graph products and bundles belong to a class of frequently studied interconnection network topologies. For example meshes, tori, hypercubes and some of their generalizations are Cartesian products. It is less known that some other well-known interconnection network topologies are Cartesian graph bundles, for example twisted hypercubes \cite{cull,efe} and multiplicative circulant graphs \cite{ivan}. In the design of large interconnection networks several factors have to be taken into account. A usual constraint is that each processor can be connected to a limited number of other processors and that the delays in communication must not be too long. Furthermore, an interconnection network should be fault tolerant, because practical communication networks are exposed to failures of network components. Both failures of nodes and failures of connections between them happen, and it is desirable that a network is robust in the sense that a limited number of failures does not break down the whole system. A lot of work has been done on various aspects of network fault tolerance, see for example the survey \cite{Bermond} and the more recent papers \cite{hung,sun,yin}. In particular, the fault diameter with faulty vertices, which was first studied in \cite{1}, and the edge fault diameter have been determined for many important networks recently \cite{zer-ban,zbedge,ZB,erves,day,4,7,0}. Usually either only edge faults or only vertex faults are considered, while the case when both edges and vertices may be faulty is studied rarely. For example, \cite{hung,sun} consider Hamiltonian properties assuming a combination of vertex and edge faults.
In recent work on fault diameter of Cartesian graph products and bundles \cite{zer-ban,zbedge,ZB,erves}, analogous results were found for both fault diameter and edge fault diameter. However, the proofs for vertex and edge faults are independent, and our effort to see how results in one case may imply the others was not successful. A natural question is whether it is possible to design a uniform theory that covers simultaneous faults of vertices and edges. Some basic results on edge, vertex and mixed fault diameters for general graphs appear in \cite{3fd}. In order to study the fault diameters of graph products and bundles under mixed faults, it is important to understand generalized connectivities. Mixed connectivity, which generalizes both vertex and edge connectivity, together with some basic observations for any connected graph, is given in \cite{mpsv}. We are not aware of any earlier work on mixed connectivity. A closely related notion is the connectivity pairs of a graph \cite{Harary}, but after Mader \cite{Mader} showed that the claimed proof of the generalized Menger's theorem is not valid, work on connectivity pairs seems to be very rare. Upper bounds for the mixed fault diameter of Cartesian graph bundles are given in \cite{fdsv,fdsv2}, which in some cases also improve previously known results on vertex and edge fault diameters on these classes of Cartesian graph bundles \cite{zer-ban,erves}. However, results in \cite{fdsv} address only the number of faults given by the connectivity of the fibre (plus one vertex), while the connectivity of the graph bundle can be much higher when the connectivity of the base graph is substantial, and results in \cite{fdsv2} address only the number of faults given by the connectivity of the base graph (plus one vertex), while the connectivity of the graph bundle can be much higher when the connectivity of the fibre is substantial. An upper bound for the mixed fault diameter that would take into account both types of faults remains an interesting open research problem. \section{Preliminaries} \label{Preliminaries} A {\em simple graph} $G=(V,E)$ is determined by a {\em vertex set} $V=V(G)$ and a set $E=E(G)$ of (unordered) pairs of vertices, called {\em edges}. As usual, we will use the short notation $uv$ for the edge $\{u,v\}$. For an edge $e=uv$ we call $u$ and $v$ its {\em endpoints}. It is sometimes convenient to consider the union of \emph{elements} of a graph, $S(G)= V(G) \cup E(G)$. Given $X \subseteq S(G)$, the set $S(G) \setminus X$ is a subset of elements of $G$. However, note that in general $S(G) \setminus X$ may not induce a graph. As we need notation for subgraphs with some missing (faulty) elements, we formally define $G\setminus X$, the subgraph of $G$ after deletion of $X$, as follows: \begin{definition} Let $X \subseteq S(G)$, and $X=X_E \cup X_V$, where $X_E \subseteq E(G)$ and $X_V \subseteq V(G)$. Then $G\setminus X$ is the subgraph of $(V(G), E(G)\setminus X_E)$ induced on vertex set $V(G)\setminus X_V$. \end{definition} A {\em walk} between vertices $x$ and $y$ is a sequence of vertices and edges $v_0,$ $e_1,$ $v_1,$ $e_2,$ $v_2,$ $\dots,$ $v_{k-1},$ $e_k,$ $v_k$ where $x=v_0$, $y=v_k$, and $e_i = v_{i-1}v_i$ for each $i$. A walk with all vertices distinct is called a {\em path}, and the vertices $v_0$ and $v_k$ are called the {\em endpoints} of the path. The {\em length} of a path $P$, denoted by $\ell (P)$, is the number of edges in $P$.
The {\em distance} between vertices $x$ and $y$, denoted by $d_G(x,y)$, is the length of a shortest path between $x$ and $y$ in $G$. If there is no path between $x$ and $y$, we write $d_G(x,y) = \infty$. The {\em diameter} of a connected graph $G$, $\mathcal{D}(G)$, is the maximum distance between any two vertices in $G$. A path $P$ in $G$, defined by a sequence $x=v_0,e_1,v_1,e_2,v_2,\dots,v_{k-1},e_k,v_k=y$, can alternatively be seen as a subgraph of $G$ with $V(P) =\{v_0,v_1,v_2,\dots,v_k\}$ and $E(P) =\{e_1,e_2,\dots,e_k\}$. Note that the reverse sequence gives rise to the same subgraph. Hence we use $P$ for a path either from $x$ to $y$ or from $y$ to $x$. A graph is {\em connected} if there is a path between each pair of vertices, and is {\em disconnected} otherwise. In particular, $K_1$ is by definition disconnected. The {\em connectivity} (or {\em vertex connectivity}) $\kappa (G)$ of a connected graph $G$, other than a complete graph, is the smallest number of vertices whose removal disconnects $G$. For complete graphs, $\kappa (K_n)=n-1$. We say that $G$ is {\em $k$-connected} (or {\em $k$-vertex connected}) for any $0<k \leq \kappa (G)$. The {\em edge connectivity} $\lambda (G)$ of a connected graph $G$ is the smallest number of edges whose removal disconnects $G$. A graph $G$ is said to be {\em $k$-edge connected} for any $0<k \leq \lambda (G)$. It is well known that (see, for example, \cite{m}, page 224) $\kappa (G) \leq \lambda (G) \leq \delta_G, $ where $\delta_G$ is the smallest vertex degree of $G$. Thus if a graph $G$ is $k$-connected, then it is also $k$-edge connected. The converse does not hold in general. The mixed connectivity generalizes both vertex and edge connectivity \cite{mpsv,fdsv}. Note that the definition used in \cite{fdsv} and here slightly differs from the definition used in a previous work \cite{mpsv}. {\begin{definition} Let $G$ be any connected graph. A graph $G$ is \emph{$(p,q)$+connected} if $G$ remains connected after removal of any $p$ vertices and any $q$ edges. \end{definition}} We wish to remark that the mixed connectivity studied here is closely related to connectivity pairs as defined in \cite{Harary}. Briefly speaking, a connectivity pair of a graph is an ordered pair $(k,\ell)$ of two integers such that there is some set of $k$ vertices and $\ell$ edges whose removal disconnects the graph and there is no set of $k-1$ vertices and $\ell$ edges or of $k$ vertices and $\ell-1$ edges with this property. Clearly $(k,\ell)$ is a connectivity pair of $G$ exactly when: (1) $G$ is $(k-1,\ell)$+connected, (2) $G$ is $(k,\ell-1)$+connected, and (3) $G$ is not $(k,\ell)$+connected. In fact, as shown in \cite{mpsv}, (2) implies (1), so $(k,\ell)$ is a connectivity pair exactly when (2) and (3) hold. From the definition we easily observe that any connected graph $G$ is $(0,0)$+connected, $(p,0)$+connected for any $p<\kappa(G)$ and $(0,q)$+connected for any $q<\lambda(G)$. In our notation $(i,0)$+connected is the same as $(i+1)$-connected, i.e. the graph remains connected after removal of any $i$ vertices. Similarly, $(0,j)$+connected means $(j+1)$-edge connected, i.e. the graph remains con\-nected after removal of any $j$ edges. Clearly, if $G$ is a $(p,q)$+connected graph, then $G$ is $(p^{\prime},q^{\prime})$+connected for any $p^{\prime}\leq p$ and any $q^{\prime}\leq q$. Furthermore, for any connected graph $G$ with $k<\kappa(G)$ faulty vertices, at least $k$ edges are not working.
Roughly speaking, graph $G$ remains connected if any faulty vertex in $G$ is replaced with a faulty edge. It is known \cite{mpsv} that if a graph $G$ is $(p,q)$+connected and $p>0$, then $G$ is $(p-1,q+1)$+connected. Hence for $p>0$ we have a chain of implications: $(p,q)$+connected $\Longrightarrow$ $(p-1,q+1)$+connected $\Longrightarrow\dots\Longrightarrow$ $(1,p+q-1)$+connected $\Longrightarrow$ $(0,p+q)$+connected, which generalizes the well-known proposition that any $k$-connected graph is also $k$-edge connected. Therefore, if a graph $G$ is $(p,q)$+connected, then $p<\kappa(G)$ and $p+q<\lambda(G)$. Note that by our definition the complete graph $K_n$, $n\geq 2$, is $(n-2,0)$+connected, and hence $(i,j)$+connected for any $i+j\leq n-2$. Graph $K_2$ is $(0,0)$+connected, and mixed connectivity of $K_1$ is not defined. If for a graph $G$ we have $\kappa(G)=\lambda(G) =k$, then $G$ is $(i,j)$+connected exactly when $i+j< k$. However, if $2\leq \kappa(G)<\lambda(G)$, the question whether $G$ is $(i,j)$+connected for $1\leq i<\kappa(G)\leq i+j < \lambda(G)$ is not trivial. The example below shows that in general the knowledge of $\kappa(G)$ and $\lambda(G)$ is not enough to decide whether $G$ is $(i,j)$+connected. \begin{example} \label{pr} For the graphs in Fig. \ref{sl} we have $\kappa(G_1)=\kappa(G_2)=2$ and $\lambda(G_1)=\lambda(G_2)=3$. Both graphs are $(1,0)$+connected (and hence $(0,1)$+connected) and $(0,2)$+connected. Graph $G_1$ is not $(1,1)$+connected, while graph $G_2$ is. \end{example} \begin{figure}[htb] \begin{center} \includegraphics[width=4.0in]{grafMP12.eps} \caption{Graphs $G_1$ and $G_2$ from Example \ref{pr}.} \label{sl} \end{center} \end{figure} {\begin{definition} Let $G$ be a $k$-edge connected graph and $0\leq a < k$. The {\em $a$-edge fault diameter} of $G$ is $$\mathcal{D}^E_a(G)=\max{\{\mathcal{D}(G\setminus X)\ | \ X\subseteq E(G),\left|X\right|=a\}}.$$ \end{definition}} {\begin{definition} Let $G$ be a $k$-connected graph and $0\leq a < k$. The {\em $a$-fault diameter} (or {\em $a$-vertex fault diameter}) of $G$ is $$\mathcal{D}^V_a(G)=\max{\{\mathcal{D}(G\setminus X)\ | \ X\subseteq V(G),\left|X\right|=a\}}.$$ \end{definition}} Note that $\mathcal{D}^E_a(G)$ is the largest diameter among the diameters of subgraphs of $G$ with $a$ edges deleted, and $\mathcal{D}^V_a(G)$ is the largest diameter over all subgraphs of $G$ with $a$ vertices deleted. In particular, $\mathcal{D}^E_0(G)= \mathcal{D}^V_0(G)= \mathcal{D}(G)$, the diameter of $G$. For $p \geq \kappa (G)$ and for $q \geq \lambda (G)$ we set $\mathcal{D}^V_p(G) = \infty$ and $\mathcal{D}^E_q(G) = \infty$, respectively, as some of the subgraphs are disconnected. It is known \cite{3fd} that for any connected graph $G$ the inequalities below hold. \begin{enumerate} \item $\mathcal{D}(G) =\mathcal{D}^E_0(G) \leq \mathcal{D}^E_1(G) \leq \mathcal{D}^E_2(G) \leq \ldots \leq \mathcal{D}^E_{\lambda (G)-1} (G)< \infty$. \item $\mathcal{D}(G) =\mathcal{D}^V_0(G) \leq \mathcal{D}^V_1(G) \leq \mathcal{D}^V_2(G) \leq \ldots \leq \mathcal{D}^V_{\kappa (G)-1} (G)< \infty$. \end{enumerate} \begin{definition} \label{MFD} Let $G$ be a $(p,q)$+connected graph. The {\em $(p,q)$-mixed fault dia\-meter} of $G$ is $$\mathcal{D}_{(p,q)} (G)=\max{\{\mathcal{D}(G\setminus (X\cup Y))\ | \ X\subseteq V(G), Y\subseteq E(G), |X|=p, |Y|=q\}}.$$ \end{definition} Note that by Definition \ref{MFD} the endpoints of edges of set $Y$ can be in $X$.
In this case we may get the same subgraph of $G$ by deleting $p$ vertices and fewer than $q$ edges. It is however not difficult to see that the diameter of such a subgraph is smaller than or equal to the diameter of some subgraph of $G$ where exactly $p$ vertices and exactly $q$ edges are deleted. So the condition that the endpoints of edges of set $Y$ are not in $X$ need not be included in Definition \ref{MFD}. The mixed fault diameter $\mathcal{D}_{(p,q)} (G)$ is the largest diameter among the diameters of all subgraphs obtained from $G$ by deleting $p$ vertices and $q$ edges, hence $\mathcal{D}_{(0,0)}(G)= \mathcal{D}(G)$, $\mathcal{D}_{(0,a)}(G)=\mathcal{D}^E_a(G)$ and $\mathcal{D}_{(a,0)}(G)=\mathcal{D}^V_a(G)$. Let $\mathcal{H}_a^V=\{G\setminus X\ | \ X\subseteq V(G), |X|=a\}$ and $\mathcal{H}_b^E=\{G\setminus X\ | \ X\subseteq E(G), |X|=b\}.$ It is easy to see that \begin{enumerate} \item $\max{\{\mathcal{D}^E_b(H) \ | \ H\in \mathcal{H}_a^V\}} = \mathcal{D}_{(a,b)}(G)$, \item $\max{\{\mathcal{D}^V_a(H) \ | \ H\in \mathcal{H}_b^E\}} = \mathcal{D}_{(a,b)}(G)$. \end{enumerate} In previous work \cite{3fd} on vertex, edge and mixed fault diameters of connected graphs the following theorem has been proved. {\begin{theorem}\label{main_mixed} Let $G$ be $\left( p,q\right)$+connected graph and $p>0$. \begin{itemize} \item If $q>0$, then $\mathcal{D}^{E}_{p+q} (G) \leq \mathcal{D}_{(1,p+q-1)}(G) \leq \dots \leq \mathcal{D}_{(p,q)} (G)$. \item If $q=0$, then $\mathcal{D}^{E}_{p} (G) \leq \mathcal{D}_{(1,p-1)} (G)\leq \dots \leq \mathcal{D}_{(p-1,1)} (G)\leq \mathcal{D}^{V}_{p} (G)+1$. \end{itemize} \end{theorem}} Note that for a $(p+1)$-connected graph $G$, $p>0$, we have either $$\mathcal{D}^{E}_{p} (G) \leq \mathcal{D}_{(1,p-1)} (G)\leq \dots \leq \mathcal{D}_{(p-1,1)} (G)\leq \mathcal{D}^{V}_{p} (G)$$ or $$\mathcal{D}^{E}_{p} (G) \leq \mathcal{D}_{(1,p-1)} (G)\leq \dots \leq \mathcal{D}_{(p-1,1)} (G)= \mathcal{D}^{V}_{p} (G)+1.$$ For example, complete graphs, complete bipartite graphs, and cycles are graphs with $\mathcal{D}_{(p-1,1)} (G)= \mathcal{D}^{V}_{p}(G)+1$ for all meaningful values of $p$. More examples of both types of graphs can be found in \cite{3fd}. \section{Fault diameters of Cartesian graph bundles} \label{Bundles} Cartesian graph bundles are a generalization of Cartesian graph products, first studied in \cite{PiVr,ST_P_V}. Let $G_1$ and $G_2$ be graphs. The {\em Cartesian product} of graphs $G_1$ and $G_2$, $G=G_1\Box G_2$, is defined on the vertex set $V(G_1)\times V(G_2)$. Vertices $(u_1,v_1)$ and $(u_2,v_2)$ are adjacent if either $u_1u_2\in E(G_1)$ and $v_1=v_2$ or $v_1v_2\in E(G_2)$ and $u_1=u_2$. For further reading on graph products we recommend \cite{3}. {\begin{definition} Let $B$ and $F$ be graphs. A graph $G$ is a {\em Cartesian graph bundle with fibre $F$ over the base graph} $B$ if there is a {\em graph map} $p:G\rightarrow B$ such that for each vertex $v\in V(B)$, $p^{-1}(\{v\})$ is isomorphic to $F$, and for each edge $e=uv\in E(B)$, $p^{-1}(\{e\})$ is isomorphic to $F \Box K_2$. \end{definition}} More precisely, the mapping $p:G\rightarrow B$ maps graph elements of $G$ to graph elements of $B$, i.e. $p: V(G) \cup E(G) \rightarrow V(B) \cup E(B)$. In particular, here we also assume that the vertices of $G$ are mapped to vertices of $B$ and the edges of $G$ are mapped either to vertices or to edges of $B$. We say an edge $e\in E(G)$ is {\em degenerate} if $p(e)$ is a vertex. Otherwise we call it {\em nondegenerate}.
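Such bundles are straightforward to generate explicitly. The following Python sketch (assuming the \texttt{networkx} library; the helper name \texttt{cycle\_bundle} and the shift parameter are our illustrative choices, not constructions from the cited papers) builds a Cartesian graph bundle with fibre $C_n$ over the base $C_m$: the degenerate edges form the copies of the fibre, and neighbouring copies are glued by the identity over all base edges except one, where a cyclic shift is used instead.
\begin{verbatim}
import networkx as nx

def cycle_bundle(m, n, s):
    # Bundle with fibre C_n over base C_m; over the base edge
    # {m-1, 0} the fibres are glued by the shift x -> x + s (mod n).
    G = nx.Graph()
    for u in range(m):
        for x in range(n):
            G.add_edge((u, x), (u, (x + 1) % n))   # degenerate edges
    for u in range(m - 1):
        for x in range(n):
            G.add_edge((u, x), (u + 1, x))         # identity gluing
    for x in range(n):
        G.add_edge((m - 1, x), (0, (x + s) % n))   # shifted gluing
    return G

twisted = cycle_bundle(4, 4, 1)
\end{verbatim}
With $s=0$ the construction returns the Cartesian product $C_m\Box C_n$, while with $m=n=4$ and $s=1$ it returns the twisted torus of Fig. \ref{iliac} in Section \ref{Results}.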
The mapping $p$ will also be called the {\em projection} (of the bundle $G$ to its base $B$). Note that each edge $e=uv \in E(B)$ naturally induces an isomorphism $\varphi_e :p^{-1}(\{u\})\rightarrow p^{-1}(\{v\})$ between two fibres. It may be interesting to note that while it is well-known that a graph can have only one representation as a product (up to isomorphism and up to the order of factors) \cite{3}, there may be many different graph bundle representations of the same graph \cite{ZmZe2002b}. Here we assume that the bundle representation is given. Note that in some cases a representation of $G$ as a graph bundle can be found in polynomial time \cite{ImPiZe,ZmZe2000,ZmZe2001a,ZmZe2002b,ZmZe2002a,directed}. For example, one of the easy classes is that of Cartesian graph bundles over a triangle-free base \cite{ImPiZe}. Note that a graph bundle over a tree $T$ (as a base graph) with fibre $F$ is isomorphic to the Cartesian product $T\Box F$ (this is not difficult to see, and appears already in \cite{PiVr}), i.e. we can assume that all isomorphisms $\varphi_e$ are identities. For later reference, note that for any path $P\subseteq B$, $p^{-1}(P)$ is a Cartesian graph bundle over the path $P$, and one can define coordinates in the product $P\Box F$ in a natural way. In recent work on fault diameter of Cartesian graph products and bundles \cite{zer-ban,zbedge,ZB,erves}, analogous results were found for both fault diameter and edge fault diameter. \begin{theorem} \cite{zer-ban} \label{VPsv} Let $F$ and $B$ be $k_F$-connected and $k_B$-connected graphs, respectively, $0\leq a < k_F$, $0\leq b < k_B$, and $G$ a Cartesian bundle with fibre $F$ over the base graph $B$. Then $$\mathcal{D}^V_{a+b+1}(G)\leq \mathcal{D}^V_{a}(F)+\mathcal{D}^V_{b}(B)+1.$$ \end{theorem} \begin{theorem} \cite{erves} \label{EPsv} Let $F$ and $B$ be $k_F$-edge connected and $k_B$-edge connected graphs, respectively, $0\leq a < k_F$, $0\leq b < k_B$, and $G$ a Cartesian bundle with fibre $F$ over the base graph $B$. Then $$\mathcal{D}^E_{a+b+1}(G)\leq \mathcal{D}^E_{a}(F)+\mathcal{D}^E_{b}(B)+1.$$ \end{theorem} Before stating a theorem on bounds for the mixed fault diameter we recall a theorem on mixed connectivity. \begin{theorem} \cite{mpsv} \label{mpsv2} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, graph $F$ be $(p_{F},q_{F})$+connected and graph $B$ be $(p_{B},q_{B})$+connec\-ted. Then Cartesian graph bundle $G$ is $(p_{F}+p_{B}+1,q_{F}+q_{B})$+connected. \end{theorem} In recent work \cite{fdsv,fdsv2}, an upper bound for the mixed fault diameter of Cartesian graph bundles, $\mathcal{D}_{(p+1,q)} (G)$, in terms of mixed fault diameter of the fibre and diameter of the base graph and in terms of diameter of the fibre and mixed fault diameter of the base graph, respectively, is given. \begin{theorem} \cite{fdsv} \label{MPsvBsp} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, where graph $F$ is $(p,q)$+connected, $p+q>0$, and $B$ is a connected graph with diameter $\mathcal{D}(B)>1.$ Then we have: \begin{itemize} \item If $q>0$, then $\mathcal{D}_{(p+1,q)} (G)\leq \mathcal{D}_{(p,q)}(F)+ \mathcal{D}(B).$ \item If $q=0$, then $\mathcal{D}^{V}_{p+1} (G)\leq \max \{ \mathcal{D}^{V}_{p} (F), \mathcal{D}_{(p-1,1)}(F)\}+ \mathcal{D}(B).$ \end{itemize} \end{theorem} Theorem \ref{MPsvBsp} improves Theorems \ref{VPsv} and \ref{EPsv} for $a>0$ and $b=0$.
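On small instances, bounds of this type can be verified directly from Definition \ref{MFD} by exhaustive search over all fault patterns. The following brute-force Python sketch (again assuming \texttt{networkx}; it is exponential in $p+q$, so it is only a checking tool, not part of the theory) computes $\mathcal{D}_{(p,q)}(G)$, and hence also $\mathcal{D}^V_p(G)=\mathcal{D}_{(p,0)}(G)$ and $\mathcal{D}^E_q(G)=\mathcal{D}_{(0,q)}(G)$.
\begin{verbatim}
import itertools
import networkx as nx

def mixed_fault_diameter(G, p, q):
    # Maximum diameter over all deletions of p vertices and q edges;
    # returns infinity if G is not (p,q)+connected.
    worst = 0
    for X in itertools.combinations(G.nodes, p):
        for Y in itertools.combinations(G.edges, q):
            H = G.copy()
            H.remove_edges_from(Y)
            H.remove_nodes_from(X)
            # K_1 counts as disconnected, as in the Preliminaries
            if H.number_of_nodes() < 2 or not nx.is_connected(H):
                return float('inf')
            worst = max(worst, nx.diameter(H))
    return worst

C4 = nx.cycle_graph(4)
print(mixed_fault_diameter(C4, 1, 0))  # D^V_1(C_4) = 2
print(mixed_fault_diameter(C4, 0, 1))  # D^E_1(C_4) = 3
\end{verbatim}
The two printed values agree with Example \ref{V2} below, where $\mathcal{D}^{E}_{1}(C_{4})=\mathcal{D}^{V}_{1}(C_{4})+1=3$.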
\\ Let $G$ be a Cartesian graph bundle with fibre $F$ over the connected base graph $B$ with diameter $\mathcal{D}(B)>1$, and let $a>0$. If graph $F$ is $(a+1)$-connected, i.e. $(a,0)$+connected, then by Theorem \ref{MPsvBsp} we have an upper bound for the vertex fault diameter $\mathcal{D}^{V}_{a+1} (G) \leq \mathcal{D}^{V}_{a} (F)+ \mathcal{D}(B)+1$ for any graph $F$. Similarly, $\mathcal{D}^{V}_{a+1} (G)\leq \mathcal{D}^{V}_{a} (F)+ \mathcal{D}(B) $ if $\mathcal{D}_{(a-1,1)}(F) \leq \mathcal{D}^{V}_{a} (F)$ holds.\\ If graph $F$ is $(a+1)$-edge connected, i.e. $(0,a)$+connected, then by Theorems \ref{main_mixed} and \ref{MPsvBsp} we have an upper bound for the edge fault diameter $\mathcal{D}^{E}_{a+1} (G) \leq \mathcal{D}_{(1,a)}(G)\leq \mathcal{D}^{E}_{a} (F)+ \mathcal{D}(B)$. \begin{theorem} \cite{fdsv2} \label{MPsvFsp} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, graph $F$ be a connected graph with diameter $\mathcal{D}(F)>1$, and graph $B$ be $\left(p,q\right)$+connected, $p+q>0$. Then we have: \begin{itemize} \item If $q>0$, then $\mathcal{D}_{(p+1,q)} (G)\leq \mathcal{D}(F) + \mathcal{D}_{(p,q)} (B).$ \item If $q=0$, then $\mathcal{D}^{V}_{p+1} (G)\leq \mathcal{D}(F) + \max \{ \mathcal{D}^{V}_{p} (B), \mathcal{D}_{(p-1,1)}(B)\}.$ \end{itemize} \end{theorem} Theorem \ref{MPsvFsp} improves Theorems \ref{VPsv} and \ref{EPsv} for $a=0$ and $b>0$. \\ Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, graph $F$ be a connected graph with diameter $\mathcal{D}(F)>1$, and let $b>0$. If graph $B$ is $(b+1)$-connected, i.e. $(b,0)$+connected, then by Theorem \ref{MPsvFsp} we have an upper bound for the vertex fault diameter $\mathcal{D}^{V}_{b+1} (G)\leq \mathcal{D}(F) + \mathcal{D}^{V}_{b} (B)+1$ for any graph $B$. Similarly, $\mathcal{D}^{V}_{b+1} (G)\leq \mathcal{D}(F) + \mathcal{D}^{V}_{b} (B)$ if $\mathcal{D}_{(b-1,1)}(B) \leq \mathcal{D}^{V}_{b} (B)$ holds. \\ If graph $B$ is $(b+1)$-edge connected, i.e. $(0,b)$+connected, then by Theorems \ref{main_mixed} and \ref{MPsvFsp} we have an upper bound for the edge fault diameter $\mathcal{D}^{E}_{b+1} (G) \leq \mathcal{D}_{(1,b)}(G)\leq \mathcal{D} (F)+ \mathcal{D}^{E}_{b} (B)$. In the case when $a=b=0$ the fault diameter is determined exactly. \begin{proposition} \cite{fdsv} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, and graphs $F$ and $B$ be connected graphs with diameters $\mathcal{D}(F)>1$ and $\mathcal{D}(B)>1$. Then $$\mathcal{D}^{V}_{1} (G)= \mathcal{D}^{E}_{1} (G)=\mathcal{D}(G)=\mathcal{D}(F) + \mathcal{D}(B).$$ \end{proposition} In other words, the diameter of a nontrivial Cartesian graph bundle does not change when one element is faulty. Here we improve the results of Theorems \ref{VPsv} and \ref{EPsv} for positive $a$ and $b$. \section{The results - improved bounds} \label{Results} Before stating and proving the main theorems, we introduce some notation used in this section. \\ Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$. The {\em fibre of vertex} $x \in V(G)$ is denoted by $F_x$, formally, $F_x = p^{-1}(\{p(x)\})$. We will also use notation $F(u)$ for the fibre of the vertex $u \in V(B)$, i.e. $F(u) = p^{-1}(\{u\})$. Note that $F_x = F(p(x))$. We will also use shorter notation $x\in F(u)$ for $x\in V(F(u))$. \\ Let $u,v\in V(B)$ be distinct vertices, and $Q$ be a path from $u$ to $v$ in $B$, and $x\in F(u)$.
Then the {\em lift of the path $Q$ to the vertex} $x\in V(G)$, $\tilde Q_x$, is the path from $x\in F(u)$ to a vertex in $F(v)$, such that $p(\tilde Q_x)=Q$ and $\ell(\tilde Q_x)=\ell(Q)$. Let $x,x'\in F(u)$. Then $\tilde Q_x$ and $\tilde Q_{x'}$ have different endpoints in $F(v)$ and are disjoint paths if and only if $x\neq x'$. In fact, two lifts $\tilde Q_x$ and $\tilde Q_{x'}$ are either disjoint, $\tilde Q_x \cap \tilde Q_{x'} = \emptyset$, or equal, $\tilde Q_x = \tilde Q_{x'}$. We will also use notation $\tilde Q$ for lifts of the path $Q$ to any vertex in $F(u)$. \\ Let $Q$ be a path from $u$ to $v$ and $e=uw\in E(Q)$. We will use notation $Q \setminus e$ for the subpath from $w$ to $v$, i.e. $Q \setminus e = Q \setminus \{u,e\}= Q \setminus \{u\}$.\\ Let $G$ be a graph and $X\subseteq S(G)$ be a set of elements of $G$. A path $P$ from a vertex $x$ to a vertex $y$ {\em avoids} $X$ in $G$, if $S(P)\cap X=\emptyset $, and it {\em internally avoids} $X$, if $(S(P)\setminus \{x,y\})\cap X=\emptyset$. \subsection{Vertex fault diameter of Cartesian graph bundles} \begin{theorem} \label{VFD} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, graphs $F$ and $B$ be $k_F$-connected and $k_B$-connected, respectively, and let $0< a < k_F$, $0< b < k_B$. If for fault diameters of graphs $F$ and $B$, $\mathcal{D}_{(a-1,1)}(F)\leq \mathcal{D}^{V}_{a} (F)$ and $\mathcal{D}_{(b-1,1)}(B)\leq \mathcal{D}^{V}_{b} (B)$ hold, then $$\mathcal{D}^V_{a+b+1}(G)\leq \mathcal{D}^V_{a}(F)+\mathcal{D}^V_{b}(B).$$ \end{theorem} \begin{pf*}{Proof.} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, graph $F$ be $(a+1)$-connected, $ a >0$, graph $B$ be $(b+1)$-connected, $b >0$, and let $\mathcal{D}_{(a-1,1)}(F)\leq \mathcal{D}^{V}_{a} (F)$, $\mathcal{D}_{(b-1,1)}(B)\leq \mathcal{D}^{V}_{b} (B)$. Then $\mathcal{D}^{V}_{a} (F) \geq 2$, $\mathcal{D}^{V}_{b} (B) \geq 2$, and the Cartesian bundle $G$ is $(a+b+2)$-co\-nnec\-ted. Let $X\subseteq V(G)$ be a set of faulty vertices, $\left|X\right|= a+b+1$, and let $x,y\in V(G)\setminus X$ be two distinct nonfaulty vertices in $G$. We shall consider the distance $d_{G\setminus X}(x,y)$. \begin{itemize} \item Suppose first that $x$ and $y$ are in the same fibre, i.e. $p(x) = p(y)$.\\ If $\left| X \cap V(F_x)\right|\leq a$, then $d_{G\setminus X}(x,y)\leq \mathcal{D}^{V}_{a} (F)$.\\ If $\left| X \cap V(F_x)\right| > a$, then outside of fibre $F_x$ there are at most $b$ faulty vertices. As graph $B$ is $(b+1)$-connected, there are at least $b+1$ neighbors of vertex $p(x)$ in $B$. Therefore there exists a neighbor $v$ of the vertex $p(x)$ in $B$, such that $\left| X \cap F(v)\right|=0$, and there is a path $x\rightarrow x^{\prime} \stackrel{P}{\rightarrow}y^{\prime}\rightarrow y$, which avoids $X$, where $x^{\prime},y^{\prime} \in F(v)$ and $\ell(P)\leq \mathcal{D}(F)$. Thus $d_{G\setminus X}(x,y)\leq 1+\mathcal{D} (F)+1\leq \mathcal{D}^{V}_{a} (F)+\mathcal{D}^{V}_{b} (B)$. \item Now assume that $x$ and $y$ are in distinct fibres, i.e. $p(x) \neq p(y)$. Let $X_B=\{v \in V(B)\setminus \{p(x),p(y)\} ; \left| X \cap F(v)\right|>0\}$. We distinguish two cases. \begin{enumerate} \item If $\left|X_B\right| \geq b$, then let $X_B^{\prime} \subseteq X_B$ be an arbitrary subset of $X_B$ with $\left| X_B^{\prime} \right|=b$. The subgraph $B \setminus X_B^{\prime}$ is a connected graph and there exists a path $Q$ in $B \setminus X_B^{\prime}$ from $p(x)$ to $p(y)$ with $\ell(Q)\leq \mathcal{D}^{V}_{b} (B)$.
In $p^{-1}(Q)=F\Box Q$ there are at most $a+1$ faulty vertices. Let $x^{\prime} \in F_y$ be the endpoint of the path $\tilde{Q}_x$, the lift of $Q$. We distinguish two cases. \begin{enumerate} \item If $x^{\prime}=y$, then $\tilde{Q}_x$ is a path from $x$ to $y$ in $G$. If $\tilde{Q}_x$ avoids $X$, then $d_{G\setminus X} (x,y)\leq \ell(Q)\leq \mathcal{D}^{V}_{b} (B)$. If $\tilde{Q}_x$ does not avoid $X$, then there are at most $a$ faulty vertices outside of the path $\tilde{Q}_x$ in $F\Box Q$. As the graph $F$ is $(a+1)$-connected, there are at least $a+1$ neighbors of $x$ in $F_x$. Since there are more neighbors than faulty vertices (outside of $\tilde{Q}_x$ in $F\Box Q$), there exists a neighbor $v\in V(F_x)$ of $x$, such that the lift $\tilde{Q}_v$ avoids $X$. The endpoint of the path $\tilde{Q}_v$ in fibre $F_y$ is a neighbor of $y$, therefore $d_{G\setminus X}(x,y) \leq 1+ \ell(Q) +1\leq \mathcal{D}^{V}_{a} (F)+ \mathcal{D}^{V}_{b} (B)$. \item Let $x^{\prime}\neq y$. If $\left|V(F_x) \cap X \right|=a+1$ or $\left|V(F_y) \cap X \right|=a+1$, then obviously $d_{G\setminus X}(x,y) \leq \ell(Q) + \mathcal{D}(F)\leq \mathcal{D}^{V}_{b} (B)+ \mathcal{D}^{V}_{a} (F)$.\\ Now assume $\left|V(F_x) \cap X \right|\leq a$ and $\left|V(F_y) \cap X \right|\leq a$. If $\tilde{Q}_x$ or $\tilde{Q}_y$ avoids $X$, then $d_{G\setminus X}(x,y) \leq \ell(Q) + \mathcal{D}^{V}_{a} (F)\leq \mathcal{D}^{V}_{b} (B)+\mathcal{D}^{V}_{a} (F)$. Suppose that paths $\tilde{Q}_x$ and $\tilde{Q}_y$ do not avoid $X$. Then there are at most $a-1$ faulty vertices outside of paths $\tilde{Q}_x$ and $\tilde{Q}_y$ in $F \Box Q$. Let $X^{\prime} \subseteq V(F_y)$ be defined as $X^{\prime} =\{v\in V(F_y)\setminus\{x^{\prime}, y\} ; \left|\tilde{Q}_v \cap X \right|>0 \}$. Then $ \left|X^{\prime} \right| \leq a-1$. There is a path $P$ from $x^{\prime}$ to $y$ in $F_y\setminus X^{\prime}$ of length $\ell(P)\leq \mathcal{D}^V_{a-1}(F)\leq \mathcal{D}^V_{a}(F)$. Note that the path $P$ internally avoids $X$. If $x^{\prime}$ and $y$ are not adjacent, then $\ell(P)\geq 2$. For the neighbor $v^{\prime}$ of $x^{\prime}$ on the path $P$, $e^{\prime}=x^{\prime}v^{\prime}\in E(P)$, the lift $\tilde{Q}_{v^{\prime}}$ avoids $X$. Let $v\in V(F_x)$ be the endpoint of the lift $\tilde{Q}_{v^{\prime}}$. Then the path $x\rightarrow v\stackrel{\tilde{Q}}{\rightarrow}v^{\prime}\stackrel{P\setminus e^{\prime}}{\rightarrow}y$ avoids $X$, therefore $d_{G\setminus X}(x,y) \leq 1+\ell(Q) + \mathcal{D}^{V}_{a} (F)-1 \leq \mathcal{D}^{V}_{a} (F)+\mathcal{D}^{V}_{b} (B)$.\\ If $x^{\prime}$ and $y$ are adjacent, then remove from $F_y$ the set of vertices $X^{\prime}$ and the edge $e=x^{\prime}y$. There is a path $P^{\prime}$ from $x^{\prime}$ to $y$ in $F_y\setminus (X^{\prime} \cup \{e\})$ of length $2\leq \ell(P^{\prime})\leq \mathcal{D}_{(a-1,1)}(F)$, that internally avoids $X$. As before, for the neighbor $w^{\prime}$ of $x^{\prime}$ on the path $P^{\prime}$ the lift $\tilde{Q}_{w^{\prime}}$ avoids $X$. Therefore $d_{G\setminus X}(x,y) \leq 1+\ell(Q) + \mathcal{D}_{(a-1,1)}(F)-1 \leq \mathcal{D}^{V}_{a} (F)+\mathcal{D}^{V}_{b} (B)$. \end{enumerate} \item If $\left|X_B\right| < b$, then the subgraph $B \setminus X_B$ is (at least) $2$-connected, thus also $2$-edge connected. If the vertex $p(y)$ is not a neighbor of $p(x)$, then there is a path $Q$ from $p(x)$ to $p(y)$ in $B$ with $2\leq \ell(Q)\leq \mathcal{D}^{V}_{b-1} (B)\leq \mathcal{D}^{V}_{b} (B)$ that internally avoids $X_B$. Let $v\in V(Q)$ be a neighbor of $p(x)$, $e^{\prime}=p(x)v$.
Then there is a path $x\rightarrow x^{\prime} \stackrel{P}{\rightarrow}y^{\prime}\stackrel{\tilde{Q}\setminus e^{\prime}}{\rightarrow} y$, which avoids $X$, where $x^{\prime},y^{\prime} \in F(v)$ and $\ell(P)\leq \mathcal{D}(F)$. Thus $d_{G\setminus X}(x,y)\leq 1+\mathcal{D} (F)+\mathcal{D}^{V}_{b} (B)-1 \leq \mathcal{D}^{V}_{a}(F)+\mathcal{D}^{V}_{b} (B)$. \\ If $e=p(x)p(y) \in E(B)$, then $B \setminus (X_B \cup \{e\})$ is a connected graph and there is a path $Q^{\prime}$ from $p(x)$ to $p(y)$ with $2\leq \ell(Q^{\prime})\leq \mathcal{D}_{(b-1,1)} (B)$ that internally avoids $X_B$. Similarly as before we have $d_{G\setminus X}(x,y)\leq 1+\mathcal{D} (F)+\mathcal{D}_{(b-1,1)} (B)-1 \leq \mathcal{D}^{V}_{a}(F)+\mathcal{D}^{V}_{b} (B)$. \qed \end{enumerate} \end{itemize} \end{pf*} Theorem \ref{VFD} improves Theorem \ref{VPsv} on the class of Cartesian graph bundles for which both the fibre and the base graph are at least 2-connected. Theorem \ref{VFD} also improves the result of \cite{0} on Cartesian graph products with at least 2-connected factors. The next example shows that the bound of Theorem \ref{VFD} is tight. \begin{example} Let $F=B=K_4\setminus \{e\}$. Then the graph $F$ is $2$-connected and $\mathcal{D}^{E}_{1}(F)=\mathcal{D}^{V}_{1}(F) =2$. The vertex fault diameter of the Cartesian graph product $F\Box F$ in Fig. \ref{K4-exK4-e} is $\mathcal{D}^{V}_{3}(F\Box F)= \mathcal{D}^{V}_{1} (F)+ \mathcal{D}^{V}_{1} (F)=4.$ \begin{figure}[htb] \begin{center} \includegraphics[width=1.0in]{K4-exK4-e.eps} \caption{Cartesian graph product of two factors $K_4\setminus \{e\}$.} \label{K4-exK4-e} \end{center} \end{figure} \end{example} \begin{example} \label{V2} The cycle $C_{4}$ is a $2$-connected graph with $\mathcal{D}^{E}_{1}(C_{4})=\mathcal{D}^{V}_{1}(C_{4})+1 =3$. The vertex fault diameter of the Cartesian graph bundle $G$ with fibre $C_4$ over the base graph $C_4$ in Fig. \ref{iliac} is $\mathcal{D}^{V}_{3}(G)= \mathcal{D}^{V}_{1} (C_4)+ \mathcal{D}^{V}_{1} (C_4)+1=5.$ \begin{figure}[h!] \begin{center} \includegraphics[width=5.0in]{twisted1.eps} \caption{Twisted torus: Cartesian graph bundle with fibre $C_4$ over base $C_4$.} \label{iliac} \end{center} \end{figure} \end{example} It is less well known that graph bundles also appear as computer topologies. A well-known example is the twisted torus in Fig. \ref{iliac}. The Cartesian graph bundle with fibre $C_4$ over base $C_4$ is the ILLIAC IV architecture \cite{ILLIAC}, a famous supercomputer that inspired some modern multicomputer architectures. It may be interesting to note that the original design was a graph bundle with fibre $C_8$ over base $C_8$, but due to high cost a smaller version was built \cite{computermuseum}. \subsection{Edge fault diameter of Cartesian graph bundles} Let $G$ be a $k$-edge connected graph and $0\leq a < k$. Then $\mathcal{D}^E_{a}(G)\geq 2$ if $a>0$, or if $a=0$ and $G$ is not a complete graph; furthermore, $\mathcal{D}^E_{a}(G)=1$ if and only if $a=0$ and $G$ is a complete graph. \begin{theorem} \label{EFD} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, let the graphs $F$ and $B$ be $k_F$-edge connected and $k_B$-edge connected, respectively, and let $0\leq a < k_F$, $0\leq b < k_B$.
If the edge fault diameters of the graphs $F$ and $B$ satisfy $\mathcal{D}^E_{a}(F)\geq 2$ and $\mathcal{D}^E_{b}(B)\geq 2$, then $$\mathcal{D}^E_{a+b+1}(G)\leq \mathcal{D}^E_{a}(F)+\mathcal{D}^E_{b}(B).$$ \end{theorem} \begin{pf*}{Proof.} Let $G$ be a Cartesian graph bundle with fibre $F$ over the base graph $B$, let the graph $F$ be $(a+1)$-edge connected with $\mathcal{D}^{E}_{a} (F) \geq 2$, and let the graph $B$ be $(b+1)$-edge connected with $\mathcal{D}^{E}_{b} (B) \geq 2$. Then the Cartesian bundle $G$ is $(a+b+2)$-edge co\-nnec\-ted. Let $Y\subseteq E(G)$ be the set of faulty edges, $\left|Y\right|= a+b+1$. Denote the set of degenerate edges in $Y$ by $Y_D$ and the set of nondegenerate edges by $Y_N$, so that $Y=Y_N\cup Y_D$, $p(Y_D)\subseteq V(B)$, $p(Y_N)\subseteq E(B)$. Let $x,y\in V(G)$ be two arbitrary distinct vertices in $G$. We shall consider the distance $d_{G\setminus Y}(x,y)$. \begin{itemize} \item Suppose first that $x$ and $y$ are in the same fibre, i.e. $p(x) = p(y)$.\\ If $\left| Y_D \cap E(F_x)\right|\leq a$, then $d_{G\setminus Y}(x,y)\leq \mathcal{D}^{E}_{a} (F)$.\\ If $\left| Y_D \cap E(F_x)\right| > a$, then outside of the fibre $F_x$ there are at most $b$ faulty edges. As the graph $B$ is $(b+1)$-edge connected, there are at least $b+1$ neighbors of the vertex $p(x)$ in $B$. Therefore there exists a neighbor $v$ of the vertex $p(x)$ in $B$, $e=p(x)v \in E(B)$, such that $\left| Y_D \cap F(v)\right|=0$ and $p(e) \notin p(Y_N)$, and hence there is a path $x\rightarrow x^{\prime} \stackrel{P}{\rightarrow}y^{\prime}\rightarrow y$, which avoids $Y$, where $x^{\prime},y^{\prime} \in F(v)$ and $\ell(P)\leq \mathcal{D}(F)$. Thus $d_{G\setminus Y}(x,y)\leq 1+\mathcal{D} (F)+1\leq \mathcal{D}^{E}_{a} (F)+\mathcal{D}^{E}_{b} (B)$. \item Now assume that $x$ and $y$ are in distinct fibres, i.e. $p(x) \neq p(y)$. We distinguish two cases. \begin{enumerate} \item If $\left|Y_N\right| \geq b$, then let $Y_N^{\prime} \subseteq Y_N$ be an arbitrary subset of $Y_N$ with $\left| Y_N^{\prime} \right|=b$. The subgraph $B \setminus p(Y_N^{\prime})$ is a connected graph and there exists a path $Q$ in $B \setminus p(Y_N^{\prime})$ from $p(x)$ to $p(y)$ with $\ell(Q)\leq \mathcal{D}^{E}_{b} (B)$. In $p^{-1}(Q)=F\Box Q$ there are at most $a+1$ faulty edges. Let $x^{\prime} \in F_y$ be the endpoint of the path $\tilde{Q}_x$, the lift of $Q$. We distinguish two cases. \begin{enumerate} \item If $x^{\prime}=y$, then $\tilde{Q}_x$ is a path from $x$ to $y$ in $G$. If $\tilde{Q}_x$ avoids $Y$, then $d_{G\setminus Y} (x,y)\leq \ell(Q)\leq \mathcal{D}^{E}_{b} (B)$. If $\tilde{Q}_x$ does not avoid $Y$, then there are at most $a$ faulty edges outside of the path $\tilde{Q}_x$ in $F\Box Q$. As the graph $F$ is $(a+1)$-edge connected, there are at least $a+1$ neighbors of $x$ in $F_x$. Since there are more neighbors than faulty edges (outside of $\tilde{Q}_x$ in $F\Box Q$), there exists a neighbor $s\in V(F_x)$ of $x$ such that the path $x\rightarrow s \stackrel{\tilde{Q}}{\rightarrow}s^{\prime}\rightarrow y$ avoids $Y$, where $s^{\prime} \in V(F_y)$ is a neighbor of $y$; therefore $d_{G\setminus Y}(x,y) \leq 1+ \ell(Q) +1\leq \mathcal{D}^{E}_{a} (F)+ \mathcal{D}^{E}_{b} (B)$. \item Let $x^{\prime}\neq y$. If $\left|E(F_x) \cap Y \right|=a+1$ or $\left|E(F_y) \cap Y \right|=a+1$, then obviously $d_{G\setminus Y}(x,y) \leq \ell(Q) + \mathcal{D}(F)\leq \mathcal{D}^{E}_{b} (B)+ \mathcal{D}^{E}_{a} (F)$.\\ Now let $\left|E(F_x) \cap Y \right|\leq a$ and $\left|E(F_y) \cap Y \right|\leq a$.
If $\tilde{Q}_x$ or $\tilde{Q}_y$ avoids $Y$, then $d_{G\setminus Y}(x,y) \leq \ell(Q) + \mathcal{D}^{E}_{a} (F)\leq \mathcal{D}^{E}_{b} (B)+\mathcal{D}^{E}_{a} (F)$. Suppose that the paths $\tilde{Q}_x$ and $\tilde{Q}_y$ do not avoid $Y$. Then there are at most $a-1$ faulty edges outside of the paths $\tilde{Q}_x$ and $\tilde{Q}_y$ in $F \Box Q$. Let $Y_D^{\prime} \subseteq E(F_y)$ be the set of edges from $x^{\prime}$ to those neighbors $v^{\prime}\in V(F_y)$ for which the path $v^{\prime} \stackrel{\tilde{Q}}{\rightarrow}v\rightarrow x$ does not avoid faulty edges, $Y_D^{\prime} =\{e=x^{\prime}v^{\prime}\in E(F_y);\ \left|(\tilde{Q}_{v^{\prime}} \cup \{vx\}) \cap Y \right|>0,\ v= \tilde{Q}_{v^{\prime}} \cap F_x\}$. Note that if $x^{\prime}$ is a neighbor of $y$, then $x^{\prime}y \in Y_D^{\prime}$. The subgraph $F_y \setminus (Y_D^{\prime} \cup Y_D)$ is a connected graph, as there are at most $a+1$ faulty edges in $p^{-1}(Q)=F\Box Q$ and $\tilde{Q}_x$ does not avoid $Y$. Therefore there is a path $P$ from $x^{\prime}$ to $y$ in $F_y\setminus (Y_D^{\prime} \cup Y_D)$ of length $2\leq \ell(P)\leq \mathcal{D}^E_{a}(F)$ which avoids $Y$; moreover, for the neighbor $v^{\prime}$ of $x^{\prime}$ on the path $P$, $e^{\prime}=x^{\prime}v^{\prime}\in E(P)$, the lift $\tilde{Q}_{v^{\prime}}$ avoids $Y$. Let $v= \tilde{Q}_{v^{\prime}} \cap F_x$. Then $vx \notin Y$, and the path $x\rightarrow v\stackrel{\tilde{Q}}{\rightarrow}v^{\prime}\stackrel{P\setminus e^{\prime}}{\rightarrow}y$ avoids $Y$; thus $d_{G\setminus Y}(x,y) \leq 1+\ell(Q) + \ell(P)-1 \leq \mathcal{D}^{E}_{a} (F)+\mathcal{D}^{E}_{b} (B)$. \end{enumerate} \item If $\left|Y_N\right| < b$, then there is a path $Q$ from $p(x)$ to $p(y)$ in $B$ of length $\ell(Q)\leq \mathcal{D}^{E}_{b-1} (B)\leq \mathcal{D}^{E}_{b} (B)$ which avoids $p(Y_N)$. If $\left|E(F_x) \cap Y \right|\leq a$ or $\left|E(F_y) \cap Y \right|\leq a$, then obviously $d_{G\setminus Y}(x,y) \leq \ell(Q) + \mathcal{D}^{E}_{a} (F)\leq \mathcal{D}^{E}_{b} (B)+ \mathcal{D}^{E}_{a} (F)$.\\ Now let $\left|E(F_x) \cap Y \right|>a$ and $\left|E(F_y) \cap Y \right|>a$. Then outside of the fibres $F_x$ and $F_y$ there are at most $b-1$ faulty edges. Let $Y_N^{\prime} \subseteq E(B)$ be the set of edges from $p(x)$ to those neighbors $v\in V(B)$ whose fibre $F(v)$ contains faulty edges, $Y_N^{\prime} =\{e=p(x)v\in E(B);\ \left|E(F(v)) \cap Y_D \right|>0\}$. Note that if $p(x)$ is a neighbor of $p(y)$, then $p(x)p(y) \in Y_N^{\prime}$. The subgraph $B \setminus (Y_N^{\prime} \cup p(Y_N))$ is a connected graph, as there are at most $b-1$ faulty edges outside of the fibres $F_x$ and $F_y$. Therefore there is a path $Q^{\prime}$ from $p(x)$ to $p(y)$ with $2\leq \ell(Q^{\prime})\leq \mathcal{D}^{E}_{b} (B)$ that avoids $p(Y_N)$, and there are no faulty edges in the fibre $F(v)$, where $v$ is the neighbor of $p(x)$ on the path $Q^{\prime}$ and $e=p(x)v\in E(Q^{\prime})$. Hence there is a path $x\rightarrow x^{\prime} \stackrel{P}{\rightarrow}y^{\prime}\stackrel{\tilde{Q}\setminus e}{\rightarrow} y$, which avoids $Y$, where $x^{\prime},y^{\prime} \in F(v)$ and $\ell(P)\leq \mathcal{D}(F)$. Thus $d_{G\setminus Y}(x,y)\leq 1+\mathcal{D} (F)+\ell(Q^{\prime})-1 \leq \mathcal{D}^{E}_{a}(F)+\mathcal{D}^{E}_{b} (B)$. \qed \end{enumerate} \end{itemize} \end{pf*} Clearly, Theorem \ref{EFD} improves Theorem \ref{EPsv} in all cases except the following two: when $a=0$ and $F$ is a complete graph, and when $b=0$ and $B$ is a complete graph.
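For small graphs the quantities appearing in Theorems \ref{VFD} and \ref{EFD} can be checked directly. The following sketch is our illustration only, not part of the results above: it assumes the Python package \texttt{networkx}, the helper name \texttt{vertex\_fault\_diameter} is ours, and it takes the maximum of the diameter of $G\setminus X$ over all fault sets $X$ of size exactly $a$ (for a $(a+1)$-connected graph, smaller fault sets cannot yield larger distances). It reproduces $\mathcal{D}^{V}_{1}(K_4\setminus\{e\})=2$ and $\mathcal{D}^{V}_{1}(C_4)=2$ from the examples above.

```python
import itertools
import networkx as nx  # assumed available

def vertex_fault_diameter(G, a):
    """Brute-force D^V_a(G): the largest diameter of G - X over all
    vertex fault sets X with |X| = a.  G is assumed (a+1)-connected,
    so G - X stays connected.  Feasible only for tiny graphs."""
    worst = 0
    for X in itertools.combinations(G.nodes, a):
        H = G.copy()
        H.remove_nodes_from(X)
        worst = max(worst, nx.diameter(H))
    return worst

K4_minus_e = nx.complete_graph(4)
K4_minus_e.remove_edge(0, 1)
assert vertex_fault_diameter(K4_minus_e, 1) == 2  # D^V_1(K_4 - e) = 2
assert vertex_fault_diameter(nx.cycle_graph(4), 1) == 2  # D^V_1(C_4) = 2
```

A Cartesian product of two such factors can be checked the same way, although the $\binom{|V(G)|}{a}$ enumeration quickly becomes impractical as the graphs grow.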
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction and main results }\label{csw-sec1} Let $\mathbb{R}^{n} $ denote the $n$-dimensional Euclidean space, where $n\geq2$. For $a=(a_{1},\ldots,a_{n})$ and $x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$, we define the inner product $\langle \cdot ,\cdot \rangle$ by $\langle x,a\rangle=x_{1}a_{1}+\cdots+x_{n}a_{n}$, so that the Euclidean length of $x$ is $|x|=\langle x,x\rangle^{1/2}=(|x_{1}|^{2}+\cdots+|x_{n}|^{2})^{1/2}. $ Denote the ball in $\mathbb{R}^{n}$ with center $x'$ and radius $r$ by $\mathbb{B}^{n}(x',r)=\{x\in\mathbb{R}^{n}:\, |x-x'|<r\}. $ In particular, let $\mathbb{B}^{n}=\mathbb{B}^{n}(0,1)$ and $\mathbb{B}_{r}^{n}=\mathbb{B}^{n}(0,r)$. Set $\mathbb{D}=\mathbb{B}^2$, the open unit disk in the complex plane $\mathbb{C}$. Let $\Omega$ be a domain of $\mathbb{R}^{n}$ with non-empty boundary. We use $d_{\Omega}(x)$ to denote the Euclidean distance from $x\in \Omega$ to the boundary $\partial \Omega$ of $\Omega$. If $\Omega=\mathbb{B}^{n}$, we write $d(x)$ instead of $d_{\mathbb{B}^{n}}(x).$ We denote by $ \mathcal{C}^{m}(\Omega)$ the set of all $m$-times continuously differentiable functions from a domain $\Omega\subset\mathbb{R}^{n}$ into $\mathbb{R}$, where $m\in\{1,2,\ldots\}$. Furthermore, we use $C$ to denote various positive constants, whose value may change from one occurrence to another. Fix $\tau\geq1$, and let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation \begin{equation}\label{eq1c}\Delta u=\lambda(x)|u|^{\tau-1}u,\end{equation} where $\Delta$ is the Laplacian operator and $\lambda$ is a continuous function from $\mathbb{B}^{n}$ into $\mathbb{R}$. If $\lambda$ is a constant function in (\ref{eq1c}), then equations of this type have attracted the attention of many authors; the case $\tau=1$ and $\lambda<0$, i.e., the {\it Helmholtz} equation, is particularly important. We refer to \cite{Ba, Cl,E,Fa, Pas} and the references therein. If $\lambda>0$ is a constant and $\tau=1$, then (\ref{eq1c}) is the {\it Yukawa equation}, which arose from the attempt of the Japanese Nobel laureate in physics Hideki Yukawa to describe the nuclear potential of a point charge as $e^{-\sqrt{\lambda} r}/r$ (cf. \cite{A,BS,CPR,CRV,CRW,D-,D-1, SW,St,Ya}). It is well known that if $\lambda$ is a constant function and $\tau=1$, then each solution $u$ to (\ref{eq1c}) belongs to $\mathcal{C}^{\infty}(\mathbb{B}^{n}).$ Moreover, if $\lambda=0$ in (\ref{eq1c}), then $u$ is harmonic in $\mathbb{B}^{n}$. In fact, the equation (\ref{eq1c}) can be regarded as the equation induced by the elliptic partial differential operators $\mbox{div}\, p^{2}\nabla +q$, where $\nabla$ denotes the gradient and $p$, $q$ are real-valued functions with $p\in \mathcal{C}^{2}(\mathbb{B}^{n})$ and $p\neq0$ in $\mathbb{B}^{n}$. Precisely, the elliptic operators \begin{equation}\label{eq-1}E_{p,q}=\mbox{div}\, p^{2}\nabla +q\end{equation} can be decomposed into the following form (cf. \cite{N,V}) \begin{equation}\label{eq-1.0}E_{p,q}=p\big(\Delta-\varphi\big)p,\end{equation} where $\varphi=(\Delta p)/p-q/p^{2}.$ By (\ref{eq-1.0}), we see that the equation \begin{equation}\label{eq-2}E_{p,q}(u)=\big(\mbox{div}\, p^{2}\nabla +q\big)u=0~\mbox{in}~\mathbb{B}^{n}\end{equation} is equivalent to {\it the stationary Schr\"odinger type equation} (cf.
\cite{A, V}) \begin{equation}\label{eq-1.1}\Delta h=\varphi h,\end{equation} where $h=pu.$ If we can choose some $p$ and $q$ such that $\varphi=\lambda |h|^{\tau-1}$, then (\ref{eq-1.1}) is an equation of the same type as (\ref{eq1c}), where $\tau\geq1$. In particular, if $n=2$, the equation (\ref{eq-2}) is closely related to {\it the main Vekua equation} (cf. \cite{B1, B2, V,Ve}) \begin{equation}\label{eq-3} \partial_{\overline{z}}w=\frac{\partial_{\overline{z}}f}{f}\overline{w},\end{equation} where $z=x+iy$, $\partial_{z}=\frac{1}{2} (\partial/\partial x-i\partial/\partial y)$ and $\partial_{\overline{z}}=\frac{1}{2} (\partial/\partial x+i\partial/\partial y)$. In fact, if $f=pu_{0}$, where $u_{0}$ is a positive solution to the equation (\ref{eq-2}), then for any solution $u$ to the equation (\ref{eq-2}) there is a corresponding solution $w$ to the equation (\ref{eq-3}) such that $u=\mbox{Re}\,w/p$. \begin{prop}\label{prop} Suppose $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ is a solution to the equation {\rm(\ref{eq1c})}, where $\lambda$ is a nonnegative continuous function from $\mathbb{B}^{n}$ into $\mathbb{R}$ with $\sup_{x\in\mathbb{B}^{n}}\lambda(x)<+\infty$. Then there is a positive constant $C$ such that, for all $x\in\mathbb{B}^{n},$ $$|\nabla u(x)|^{\nu}\leq\frac{C}{R^{\nu+n}}\left(\int_{\mathbb{B}^{n}(x,R)}|u(y)|^{\nu}dy +\int_{\mathbb{B}^{n}(x,R)}|u(y)|^{\tau\nu}dy\right), $$ where $\nu\in[1,+\infty)$ and $R$ is a positive constant such that $\overline{\mathbb{B}^{n}(x,R)}\subset\mathbb{B}^{n}$. \end{prop} A continuous increasing function $\omega:\, [0,+\infty)\rightarrow [0,+\infty)$ with $\omega(0)=0$ is called a {\it majorant} if $\omega(t)/t$ is non-increasing for $t>0$ (cf. \cite{CP,CPR,CPR-2015,Dy1,Dy2,P}). Given a subset $\Omega$ of $\mathbb{R}^{n}$, a function $u:\, \Omega\rightarrow \mathbb{R}$ is said to belong to the {\it Lipschitz space $L_{\omega}(\Omega)$} if there is a positive constant $C$ such that $$|u(x_{1})-u(x_{2})|\leq C\omega(|x_{1}-x_{2}|) ~\mbox{ for all $x_{1},x_{2}\in\Omega.$}$$ For $\nu\in(0,+\infty]$, the {\it generalized Hardy space $\mathcal{H}^{\nu}_{g}(\mathbb{B}^{n})$} consists of all measurable functions $u:\mathbb{B}^{n}\rightarrow\mathbb{R}$ such that $M_{\nu}(u,r)$ exists for all $r\in(0,1)$ and $ \|u\|_{\nu}<+\infty$, where $$\|u\|_{\nu}= \begin{cases} \displaystyle\sup_{0<r<1}M_{\nu}(u,r) & \mbox{if } \nu\in(0,+\infty),\\ \displaystyle\sup_{x\in\mathbb{B}^{n}}|u(x)| &\mbox{if } \nu=+\infty, \end{cases} ~ M_{\nu}(u,r)=\left(\int_{\partial\mathbb{B}^{n}}|u(r\zeta)|^{\nu}\,d\sigma(\zeta)\right)^{\frac{1}{\nu}}, $$ and $d\sigma$ denotes the normalized surface measure on $\partial\mathbb{B}^{n}$. The classical {\it harmonic Hardy space $\mathcal{H}^{p}(\mathbb{D})$}, consisting of harmonic functions in $\mathbb{D}$, is a subspace of $\mathcal{H}^{p}_{g}(\mathbb{D})$.
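To make the integral means concrete, the following numerical sketch (ours, for orientation only; it assumes \texttt{numpy}, the planar case $n=2$, and the helper name \texttt{integral\_mean} is ours) approximates $M_{\nu}(u,r)$ by sampling the circle of radius $r$, and checks on the harmonic function $u(x,y)=x^{2}-y^{2}$ that $r\mapsto M_{\nu}(u,r)$ is nondecreasing, in line with the subharmonicity of $|u|^{\nu}$ for $\nu\geq1$.

```python
import numpy as np

def integral_mean(u, r, nu, m=4096):
    """Approximate M_nu(u, r): the nu-th integral mean of |u| over the
    circle of radius r, with respect to normalized arc-length measure."""
    theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    vals = np.abs(u(r * np.cos(theta), r * np.sin(theta))) ** nu
    return float(vals.mean() ** (1.0 / nu))

u = lambda x, y: x**2 - y**2  # harmonic in the plane
means = [integral_mean(u, r, nu=2.0) for r in (0.25, 0.5, 0.75)]
assert means == sorted(means)  # integral means grow with r
```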
\begin{defn}\label{def-1} For $\nu\in(0,+\infty]$, $\alpha>0$, $\beta\in\mathbb{R}$ and a majorant $\omega$, we use $\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})$ to denote the {\it generalized Bloch type space} of all functions $u\in \mathcal{C}^{1}(\mathbb{B}^{n})$ with $\|u\|_{\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})} <+\infty$, where $$\|u\|_{\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}= \begin{cases} \displaystyle|u(0)|+\sup_{x\in\mathbb{B}^{n}} \left\{ M_{\nu}(|\nabla u|,|x|)\omega\big(\phi(x)\big)\right\} & \mbox{if } \nu\in(0,+\infty),\\ \displaystyle|u(0)|+\sup_{x\in\mathbb{B}^{n}} \left\{ |\nabla u(x)|\omega\big(\phi(x)\big)\right\} &\mbox{if } \nu=+\infty, \end{cases} $$ and $\phi(x)=d^{\alpha}(x)(1-\log\ d(x))^{\beta}$. \end{defn} It is easy to see that $\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})$ is a Banach space for $\nu\geq1$. Moreover, we have the following: \begin{enumerate} \item[{\rm (1)}] If $\beta=0$, then $\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{\alpha}(\mathbb{D})$ is called the {\it $\omega$-$\alpha$-Bloch space} (cf. \cite{CPR}). \item[{\rm (2)}] If we take $\alpha=1$, then $\mathcal{L}_{+\infty,\omega}\mathcal{B}^{\beta}_{1}(\mathbb{D})$ is called the {\it logarithmic $\omega$-Bloch space}. \item[{\rm (3)}] If we take $\omega(t)=t$ and $\beta=0$, then $\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{\alpha}(\mathbb{D})$ is called the {\it generalized $\alpha$-Bloch space} (cf. \cite{CPR, RU,Z,Z1}). \item[{\rm (4)}] If we take $\omega(t)=t$ and $\alpha=1$, then $\mathcal{L}_{+\infty,\omega}\mathcal{B}^{\beta}_{1}(\mathbb{D})$ is called the {\it generalized logarithmic Bloch space} (cf. \cite{CPR, CPW1,Dyak,GPPJ,Pav1,Pe,Z1}). \end{enumerate} In \cite{Kr}, the author studied Lipschitz spaces of smooth functions. Dyakonov \cite{Dy1} discussed the relationship between the Lipschitz space and the bounded mean oscillation of analytic functions in $\mathbb{D}$, and obtained the following result. \begin{Thm}{\rm (\cite[Theorem 1]{Dy1})}\label{Thm-Day} Suppose that $f$ is an analytic function in $\mathbb{D}$ which is continuous up to the boundary of $\mathbb{D}$. If $\omega$ and $\omega^{2}$ are regular majorants, then $$f\in L_{\omega}(\mathbb{D})\Longleftrightarrow \left(\mathcal{P}_{|f|^{2}}(z)-|f(z)|^{2}\right)^{1/2}\leq C\omega(d(z)),$$ where \[ \mathcal{P}_{|f|^{2}}(z)=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1-|z|^{2}}{|z-e^{i\theta}|^{2}}|f(e^{i\theta})|^{2}d\theta, \] and $C$ is a positive constant. \end{Thm} In \cite{CPR,CRW}, the authors extended Theorem \Ref{Thm-Day} to complex-valued harmonic functions (see \cite[Theorem 4]{CPR} and \cite[Theorem 3]{CRW}). For the solutions to (\ref{eq1c}), we get the following result, which is a generalization of Theorem \Ref{Thm-Day}, \cite[Theorem 4]{CPR} and \cite[Theorem 3]{CRW}. \begin{thm}\label{01-thm} Let $\alpha\in[1,2)$ and $\omega$ be a majorant. Suppose that $u$ is a solution to {\rm (\ref{eq1c})} with $\tau=1$, where $\lambda$ is a nonnegative constant. Then $u\in\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{\alpha}(\mathbb{B}^{n})$ if and only if there is a positive constant $C$ such that, for all $x\in\mathbb{B}^{n}$ and $r\in(0,d(x)]$, \begin{equation}\label{eq-csw13}\frac{1}{|\mathbb{B}^{n}(x,r)|}\int_{\mathbb{B}^{n}(x,r)}|u(y)-u(x)|dy\leq\frac{Cr}{\omega(r^{\alpha})},\end{equation} where $|\mathbb{B}^{n}(x,r)|$ denotes the volume of $\mathbb{B}^{n}(x,r)$. \end{thm} Let $\Omega$ be a proper subdomain of $\mathbb{R}^{n}$.
For $x,y\in\Omega$, let $$r_{\Omega}(x,y)=\frac{|x-y|}{\min\{d_{\Omega}(x), d_{\Omega}(y)\}}.$$ The distance ratio metric (see e.g. \cite{Vu}) is defined for $x,y\in\Omega$ by setting $$j_{\Omega}(x,y)=\log(1+r_{\Omega}(x,y)).$$ We say that $f:~\Omega\rightarrow f(\Omega)\subset\mathbb{R}^{n}$ is {\it weakly uniformly bounded} in $\Omega$ (with respect to $r_{\Omega}$) if there is a constant $C>0$ such that $r_{\Omega}(x,y)\leq1/2$ implies $r_{f(\Omega)}(f(x), f(y))\leq C.$ For $x,y\in\Omega$, let $$k_{\Omega}(x, y)=\inf_{\gamma}\int_{\gamma}\frac{ds(z)}{d_{\Omega}(z)},$$ where the infimum is taken over all rectifiable arcs $\gamma\subset\Omega$ joining $x$ and $y$, and $ds$ stands for the arc length measure on $\gamma$ (cf. \cite{MM,Vu}). In \cite{MM}, Mateljevi\'c and Vuorinen proved the following result. \begin{Thm}{\rm (\cite[Theorem 2.8]{MM})} \label{Thm-MM} Suppose that $\Omega$ is a proper subdomain of $\mathbb{R}^{n}$ and $h:~\Omega\rightarrow\mathbb{R}^{n}$ is a harmonic mapping. Then the following conditions are equivalent. \begin{enumerate} \item[{\rm (a)}] $h$ is weakly uniformly bounded; \item[{\rm (b)}] There exists a constant $C$ such that, for all $x, y\in \Omega$, $$k_{h(\Omega)}(h(x),h(y))\leq Ck_{\Omega}(x,y). $$ \end{enumerate} \end{Thm} We extend Theorem \Ref{Thm-MM} to solutions of (\ref{eq1c}) with $\tau=1$ as follows. \begin{thm}\label{02-thm} Let $u=(u_{1},\ldots,u_{n})$ be a vector-valued function from $\mathbb{B}^{n}$ into the domain $u(\mathbb{B}^{n})\subset\mathbb{R}^{n}$ satisfying $\Delta u_{k}=\lambda_{k}u_{k},$ where $k\in\{1,\ldots, n\}$ and $\lambda_{k}$ is a nonnegative constant. Then the following conditions are equivalent. \begin{enumerate} \item[{\rm (1)}] $u$ is weakly uniformly bounded; \item[{\rm (2)}] There exists a constant $C$ such that, for all $x, y\in\mathbb{B}^{n}$, $$k_{u(\mathbb{B}^{n})}(u(x),u(y))\leq Ck_{\mathbb{B}^{n}}(x,y). $$ \end{enumerate} \end{thm} We remark that $\mathbb{B}^{n}$ can be replaced by suitable proper subdomains $\Omega\subset\mathbb{R}^{n}$ in Theorem \ref{02-thm}. Makarov \cite{Ma} proved that if $f$ is analytic in $\mathbb{D}$ with $\mbox{Re}f\in\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{1}(\mathbb{D})$, then there is a positive constant $C$ such that \begin{equation}\label{eq-csw1}\limsup_{r\rightarrow1^{-}}\frac{|f(r\zeta)|}{\sqrt{\log\frac{1}{1-r}\log\log\log\frac{1}{1-r}}}\leq C\|\mbox{Re}f\|_{\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{1}(\mathbb{D})}\end{equation} for almost every $\zeta\in\partial\mathbb{D}$, where $r\in[0,1)$ and $\omega(t)=t$. In particular, Korenblum \cite{Ko} showed that if $u$ is a real harmonic function in $\mathbb{D}$ with $u\in\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{1}(\mathbb{D}),$ then there is a positive constant $C$ such that \begin{equation}\label{eq-csw2}\limsup_{r\rightarrow1^{-}}\frac{|u(r\zeta)|}{\sqrt{\log\frac{1}{1-r}}\log\log\frac{1}{1-r}}\leq C\|u\|_{\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{1}(\mathbb{D})}\end{equation} for almost every $\zeta\in\partial\mathbb{D}$, where $\omega(t)=t$. For related investigations on the radial growth of Bloch type functions, we refer to \cite{CM,FM,GPP,GP,GK,Po,ST}. Analogously to (\ref{eq-csw1}) and (\ref{eq-csw2}), for $\nu\in(0,+\infty)$ and for functions $u\in\mathcal{C}^{2}(\mathbb{B}^{n})$ satisfying a Bloch type condition, we prove the following result. \begin{thm}\label{eq-y} Let $\omega$ be a majorant, $\nu\in[2,+\infty),$ $\alpha>0$ and $\beta\leq\alpha$.
Suppose that $u\in\mathcal{C}^{2}(\mathbb{B}^{n})$ satisfies $u\Delta u\geq0$ and $\big(|\nabla u|^{2}+u\Delta u\big)\in\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n}).$ Then, for $n\geq3$ and $r\in[0,1)$, $$M_{\nu}(u,r)\leq\left[|u(0)|^{2}+\frac{\nu(\nu-1)\||\nabla u|^{2}+u\Delta u\|_{\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}}{\omega(1)(n-2)}\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi(tr)}dt\right]^{\frac{1}{2}},$$ where $\phi$ is defined as in Definition {\rm\ref{def-1}}. Moreover, for $n=2$ and $r\in[0,1)$, $$M_{\nu}(u,r)\leq\left[|u(0)|^{2}+\frac{\nu(\nu-1)\||\nabla u|^{2}+u\Delta u\|_{\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}}{\omega(1)}\int_{0}^{1}\frac{t\log\frac{1}{t}}{\phi(tr)}dt\right]^{\frac{1}{2}}.$$ \end{thm} \begin{defn}\label{def-2} For $m\in\{2,3,\ldots\}$, we denote by $\mathcal{HZ}_{m}(\mathbb{B}^{n})$ the class of all functions $u\in\mathcal{C}^{m}(\mathbb{B}^{n})$ satisfying {\it Heinz's type nonlinear differential inequality} (cf. \cite{CPR,HZ}) \begin{equation}\label{eq-5} |\Delta u(x)|\leq a_{1}(x)|\nabla u(x)|^{b_{1}}+a_{2}(x)|u(x)|^{b_{2}}+a_{3}(x)~ \mbox{ {\rm for}}~x\in\mathbb{B}^{n}, \end{equation} where $a_{k}~(k\in\{1,2,3\})$ are real-valued nonnegative continuous functions in $\mathbb{B}^{n}$ and $b_{j}~(j\in\{1,2\})$ are nonnegative constants. \end{defn} \begin{thm}\label{thm-1} Let $\omega$ be a majorant, $\nu\in[2,+\infty),$ $\alpha>0$, $\beta\leq\alpha$, and let $u\in \mathcal{HZ}_{2}(\mathbb{B}^{n})\cap \mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})$ satisfy $\sup_{x\in\mathbb{B}^{n}}a_{1}(x)<+\infty$, $\sup_{x\in\mathbb{B}^{n}}a_{2}(x)<\frac{2n}{\nu},$ $\sup_{x\in\mathbb{B}^{n}}a_{3}(x)<+\infty$, $b_{1}\in[0,1]$ and $b_{2}\in[0,1]$. If $n\geq3$ and $u\Delta u\geq0$, then, for $r\in[0,1)$, \begin{eqnarray*} M_{\nu}( u, r)&\leq&\Bigg[|u(0)|^{2}+\frac{\nu(\nu-1)}{(n-2)\omega^{2}(1)}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi^{2}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{1}(x)}{(n-2)\omega^{b_{1}}(1)}\|u\|^{b_{1}}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}M_{\nu}(u,r)\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi^{b_{1}}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{2}(x)}{2n}r^{2}M_{\nu}^{1+b_{2}}( u, r)\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{3}(x)}{2n}r^{2}M_{\nu}( u, r)\Bigg]^{\frac{1}{2}}, \end{eqnarray*} where $\phi$ is defined as in Definition {\rm\ref{def-1}}. In particular, if $n=2$ and $u\Delta u\geq0$, then, for $r\in[0,1)$, \begin{eqnarray*} M_{\nu}( u, r)&\leq&\Bigg[|u(0)|^{2}+\frac{\nu(\nu-1)}{\omega^{2}(1)}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{D})}r^{2}\int_{0}^{1}\frac{t\log\frac{1}{t}}{\phi^{2}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{D}}a_{1}(x)}{\omega^{b_{1}}(1)}\|u\|^{b_{1}}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{D})}r^{2}M_{\nu}(u,r)\int_{0}^{1}\frac{t\log\frac{1}{t}}{\phi^{b_{1}}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{D}}a_{2}(x)}{4}r^{2}M_{\nu}^{1+b_{2}}( u, r)\\ &&+\frac{\nu \sup_{x\in\mathbb{D}}a_{3}(x)}{4}r^{2}M_{\nu}( u, r)\Bigg]^{\frac{1}{2}}. \end{eqnarray*} \end{thm} We remark that Theorem \ref{thm-1} is a generalization of \cite[Theorem 1]{CPR}. As an application of Theorem \ref{thm-1}, we obtain the following result. \begin{cor} Let $\omega$ be a majorant, $\nu\in[2,+\infty),$ $\alpha>0$ and $\beta\leq\alpha$.
Suppose that $u$ is a solution to {\rm (\ref{eq1c})} with $\tau=1$, where $\lambda$ is a nonnegative continuous function from $\mathbb{B}^{n}$ into $\mathbb{R}$ with $\sup_{x\in\mathbb{B}^{n}}\lambda(x)<\frac{2n}{\nu}.$ If $n\geq3$ and $u\in \mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})$, then, for $r\in[0,1)$, $$M_{\nu}( u, r)\leq\frac{1}{C^{\ast}} \left(|u(0)|^{2}+\frac{\nu(\nu-1)}{(n-2)\omega^{2}(1)}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi^{2}(rt)}dt\right)^{\frac{1}{2}},$$ where $C^{\ast}=\left(1-\frac{r^{2}\nu}{2n}\sup_{x\in\mathbb{B}^{n}}\lambda(x)\right)^{\frac{1}{2}}$ and $\phi$ is defined as in Definition {\rm\ref{def-1}}. In particular, if $n=2$ and $u\in \mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})$, then, for $r\in[0,1)$, $$M_{\nu}( u, r)\leq\frac{1}{C^{\ast}} \left(|u(0)|^{2}+\frac{\nu(\nu-1)}{\omega^{2}(1)}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{D})}r^{2}\int_{0}^{1}\frac{t\log\frac{1}{t}}{\phi^{2}(rt)}dt\right)^{\frac{1}{2}}.$$ \end{cor} For $\alpha,\gamma,\mu\in\mathbb{R}$, the quantity $$\mathcal{D}_{\nabla u}(\alpha,\gamma,\mu)=\int_{\mathbb{B}^{n}}(1-|x|^{2})^{\alpha}|\nabla u(x)|^{\gamma} \left(\sum_{1\leq j,k\leq n}u^{2}_{x_{j}x_{k}}(x)\right)^{\mu}dx$$ is called a {\it Dirichlet type energy integral} of $u$ defined in $\mathbb{B}^{n}$ (\cite{CPR, CRV, CRW,E, HHL, SH, ST,W, Ya}). In \cite{CRW}, the authors investigated certain properties of the above Dirichlet type energy integral. In the following, we extend \cite[Theorem 4]{CRW} to a higher order form and give an application. \begin{thm}\label{thm-3} Let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})} with $\tau=1$. For $\alpha>0$, $\mu\in[1,n/2]$ and $\nu\in[2,+\infty)$, if $\mathcal{D}_{\nabla u}(\alpha,0,\mu)<+\infty$, then $$\int_{\mathbb{B}^{n}}\big(d(x)\big)^{\beta\nu}\Delta \left(|\nabla u(x)|^{\nu}\right)dx<+\infty,$$ where $\beta=\frac{n+\alpha}{2\mu}-1.$ \end{thm} We recall that a real function $f$ is said to have a {\it harmonic majorant} if there is a positive harmonic function $F$ in $\mathbb{B}^{n}$ such that $|f(x)|\leq F(x)$ for all $x\in\mathbb{B}^{n}$ (cf. \cite{CP,Du1,Nu, St, Ya}). Concerning harmonic majorants, it is well known that a subharmonic function $u$ defined in $\mathbb{D}$ has a harmonic majorant if and only if $\sup_{0<r<1}M_{1}(u,r)<+\infty$ (see \cite[Theorem 3.37]{Ho}). For the solutions to (\ref{eq1c}), we have \begin{thm}\label{thm-4} Let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})} with $\tau=1$. Suppose that $\alpha>0$, $\mu\in[1,n/2]$ and $\nu\in[2,+\infty)$ satisfy $\frac{n+\alpha}{2\mu}-1=\frac{1}{\nu}$. If $\mathcal{D}_{\nabla u}(\alpha,0,\mu)<+\infty$, then $|\nabla u|\in\mathcal{H}^{\nu}_{g}(\mathbb{B}^{n})$ and $|\nabla u|^{\nu}$ has a harmonic majorant. \end{thm} The proofs of Proposition \ref{prop} and Theorems \ref{01-thm}, \ref{02-thm}, \ref{eq-y}, \ref{thm-1}, \ref{thm-3} and \ref{thm-4} will be presented in Section \ref{csw-sec2}. \section{Proofs of the main results }\label{csw-sec2} \begin{lem}\label{lem-1cw} Let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ with $u\Delta u\geq0$ in $\mathbb{B}^{n}$. Then, for $\nu\geq1$, $|u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{lem} \begin{pf} Let $\mathcal{Z}_{u}=\{x\in\mathbb{B}^{n}:~u(x)=0\}$.
Then $\mathcal{Z}_{u}$ is a closed set, and hence $\mathbb{B}^{n}\backslash\mathcal{Z}_{u}$ is an open set. By calculations, for $x\in\mathbb{B}^{n}\backslash\mathcal{Z}_{u}$, we get \begin{equation}\label{eq-2c} \Delta(|u(x)|^{\nu})=\nu(\nu-1)|u(x)|^{\nu-2}|\nabla u(x)|^{2}+\nu|u(x)|^{\nu-2}u(x)\Delta u(x) \geq0, \end{equation} so $|u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}\backslash\mathcal{Z}_{u}$. Since $|u|^{\nu}$ is continuous and vanishes on $\mathcal{Z}_{u}$, the sub-mean value inequality holds trivially at every point of $\mathcal{Z}_{u}$, and therefore $|u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{pf} \begin{cor}\label{cor-1} For some $\tau\geq1$, let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})}, where $\lambda$ is a nonnegative continuous function in $\mathbb{B}^{n}$. Then, for $\nu\geq1$, $|u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{cor} In \cite{P-1994}, Pavlovi\'c proved the following result. \begin{Lem}\label{lem-eqc-3} Suppose that $\Omega$ is a bounded domain of $\mathbb{R}^{n}$ and $u$ is a subharmonic function in $\Omega$. For any $x\in\Omega$, let $r$ be a positive constant such that $\overline{\mathbb{B}^{n}(x,r)}\subset\Omega$. Then, for $\nu>0$, there is a positive constant $C$ such that $$|u(x)|^{\nu}\leq\frac{C}{r^{n}}\int_{\mathbb{B}^{n}(x,r)}|u(y)|^{\nu}dy.$$ \end{Lem} The following result is well known. \begin{lem}\label{Lemx} Suppose that $a,b\in[0,\infty)$ and $\iota\in(0,\infty)$. Then $$(a+b)^{\iota}\leq2^{\max\{\iota-1,0\}}(a^{\iota}+b^{\iota}). $$ \end{lem} \subsection*{Proof of Proposition \ref{prop}} Let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation (\ref{eq1c}). Without loss of generality, we assume that $x=0$ and $n\geq3$. For $r\in(0,1)$ and all $w\in\mathbb{B}^{n}_{r}$, \begin{equation}\label{eq-016csw}u(w)=r^{n-2}\left[\int_{\partial\mathbb{B}^{n}}P_{r}(w,\zeta)u(r\zeta)d\sigma(\zeta)-\int_{\mathbb{B}^{n}}G_{r}(w,y)\lambda(ry)|u(ry)|^{\tau-1}u(ry)dy\right],\end{equation} where $V(\mathbb{B}^{n})$ denotes the volume of the unit ball, $$P_{r}(w,\zeta)=\frac{r^{2}-|w|^{2}}{|w-r\zeta|^{n}}$$ is the Poisson kernel and $$G_{r}(w,y)=\frac{1}{n(n-2)V(\mathbb{B}^{n})}\left[\frac{1}{|w-ry|^{n-2}}-\frac{1}{\big(r^{2}+|w|^{2}|y|^{2}-2r\langle w,y\rangle\big)^{\frac{n-2}{2}}}\right]$$ is the Green function (see \cite{GT, Ho}). By calculations, we have \begin{equation}\label{eq-1c-2}|\nabla P_{r}(0,\zeta)|=O\left(\frac{1}{r^{n-1}}\right)~\mbox{and}~|\nabla G_{r}(0,y)|=O\left(\frac{1}{|ry|^{n-1}}\right).\end{equation} Then, by (\ref{eq-016csw}) and (\ref{eq-1c-2}), there is a positive constant $C_{1}$ such that \begin{eqnarray}\label{eqcxs-1} |\nabla u(0)|&\leq&\frac{C_{1}}{r}\int_{\partial\mathbb{B}^{n}}|u(r\zeta)|d\sigma(\zeta)+\frac{C_{1}}{r}\sup_{\xi\in\mathbb{B}^{n}}\lambda(\xi) \int_{\mathbb{B}^{n}}|u(ry)|^{\tau}|y|^{1-n}dy\\ \nonumber &\leq&\frac{C_{1}Q_{r}(|u|)}{r}+\frac{nC_{1}V(\mathbb{B}^{n})\sup_{\xi\in\mathbb{B}^{n}}\lambda(\xi)Q_{r}(|u|^{\tau})}{r}, \end{eqnarray} which, together with Lemma \ref{Lemx}, yields \begin{equation}\label{eq-1c-3}|\nabla u(0)|^{\nu}\leq2^{\nu-1}C_{1}^{\nu}\left(\frac{Q_{r}(|u|^{\nu})}{r^{\nu}}+ \frac{n^{\nu}\sup_{\xi\in\mathbb{B}^{n}}\lambda^{\nu}(\xi)Q_{r}(|u|^{\nu\tau})V^{\nu}(\mathbb{B}^{n})}{r^{\nu}}\right),\end{equation} where $Q_{r}(|u|)=\max\big\{|u(\xi)|:~\xi\in\overline{\mathbb{B}^{n}_{r}}\big\}$.
By (\ref{eq-1c-3}), Corollary \ref{cor-1} and Lemma \Ref{lem-eqc-3}, there is a positive constant $C_{2}$ such that \begin{eqnarray*}|\nabla u(0)|^{\nu}&\leq&2^{\nu-1}C_{1}^{\nu}C_{2}\bigg(\frac{1}{r^{\nu+n}}\int_{\mathbb{B}^{n}_{2r}}|u(y)|^{\nu}dy\\ &&+ \frac{n^{\nu}V^{\nu}(\mathbb{B}^{n})\sup_{\xi\in\mathbb{B}^{n}}\lambda^{\nu}(\xi)}{r^{n+\nu}}\int_{\mathbb{B}^{n}_{2r}}|u(y)|^{\tau\nu}dy\bigg). \end{eqnarray*} The proof of the proposition is complete. \qed \begin{lem}\label{lem-03} For $\tau=1$, let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})}, where $\lambda$ is a nonnegative constant. For all $a\in\mathbb{B}^{n}$, there is a positive constant $C$ such that $$|\nabla u(a)|\leq \frac{C}{r}\int_{\partial\mathbb{B}^{n}}|u(a+r\zeta)-u(a)|d\sigma(\zeta),$$ where $\overline{\mathbb{B}^{n}(a,r)}\subset\mathbb{B}^{n}$. \end{lem} \begin{pf} For any fixed $a\in\mathbb{B}^{n}$, let $f(x)=u(x+a)-u(a),~\mbox{$x\in\mathbb{B}^{n}_{r}$},$ where $r\in(0, d(a)).$ By (\ref{eqcxs-1}) and Corollary \ref{cor-1}, there is a positive constant $C_{3}$ such that \begin{eqnarray*} |\nabla f(0)|&\leq&\frac{C_{3}}{r}\left(\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta)+ \int_{\mathbb{B}^{n}}|f(ry)||y|^{1-n}dy\right)\\ &=& \frac{C_{3}}{r}\left[\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta)+n \int_{0}^{1}\left(\int_{\partial\mathbb{B}^{n}}|f(r\rho\zeta)|d\sigma(\zeta)\right)d\rho\right]\\ &=&\frac{C_{3}}{r}\left[\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta)+\frac{n}{r} \int_{0}^{r}\Big(\int_{\partial\mathbb{B}^{n}}|f(t\zeta)|d\sigma(\zeta)\Big)dt\right]\\ &=&\frac{C_{3}}{r}\left[\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta)+\frac{n}{r} \int_{0}^{r}M_{1}(f,t)dt\right]\\ &\leq&\frac{C_{3}}{r}\left(\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta)+nM_{1}(f, r)\right)\\ &\leq&\frac{C_{3}(1+n)}{r}\int_{\partial\mathbb{B}^{n}}|f(r\zeta)|d\sigma(\zeta), \end{eqnarray*} which yields $$|\nabla u(a)|\leq \frac{C}{r}\int_{\partial\mathbb{B}^{n}}|u(a+r\zeta)-u(a)|d\sigma(\zeta),$$ where $C=C_{3}(1+n)$, completing the proof. \end{pf} \subsection*{Proof of Theorem \ref{01-thm}} First, we show the ``if" part. By Lemma \ref{lem-03}, there is a positive constant $C_{4}$ such that \begin{equation}\label{eq-1c4}|\nabla u(x)|\leq \frac{C_{4}}{\rho}\int_{\partial\mathbb{B}^{n}}|u(x+\rho\zeta)-u(x)|d\sigma(\zeta),\end{equation} where $\rho\in(0,d(x)].$ Let $r=d(x)$. Multiplying both sides of (\ref{eq-1c4}) by $n\rho^{n-1}$, integrating from $0$ to $r$, and using (\ref{eq-csw13}), we obtain \begin{eqnarray*} |\nabla u(x)|&\leq&\frac{(n+1)C_{4}}{nr^{n+1}}\int_{0}^{r}\left(n\rho^{n-1}\int_{\partial\mathbb{B}^{n}}|u(x+\rho\zeta)-u(x)|d\sigma(\zeta)\right)d\rho\\ &=&\frac{(n+1)C_{4}}{nr|\mathbb{B}^{n}(x,r)|}\int_{\mathbb{B}^{n}(x,r)}|u(y)-u(x)|dy\\ &\leq&\frac{(n+1)C_{4}C}{n}\frac{1}{\omega(r^{\alpha})}\\ &=&\frac{(n+1)C_{4}C}{n}\frac{1}{\omega\big(d^{\alpha}(x)\big)}, \end{eqnarray*} which implies that $u\in\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{\alpha}(\mathbb{B}^{n}).$ Next we prove the ``only if" part.
Since $u\in\mathcal{L}_{+\infty,\omega}\mathcal{B}^{0}_{\alpha}(\mathbb{B}^{n}),$ there is a positive constant $C_{5}$ such that, for all $x\in\mathbb{B}^{n}$, \begin{equation}\label{eq-1c-5}|\nabla u(x)|\leq\frac{C_{5}}{\omega\big(d^{\alpha}(x)\big)}.\end{equation} For $x, y\in\mathbb{B}^{n}$ with $|x-y|<d(x)$, so that $d(x+t(y-x))\geq d(x)-t|x-y|>0$ for all $t\in[0,1]$, by (\ref{eq-1c-5}) we get \begin{eqnarray*} |u(x)-u(y)|&\leq&|x-y|\int_{0}^{1}|\nabla u(x+t(y-x))|dt\\ &\leq&C_{5}|x-y|\int_{0}^{1}\frac{dt}{\omega\big(d^{\alpha}(x+t(y-x))\big)}\\ &\leq&C_{5}|x-y|\int_{0}^{1}\frac{dt}{\omega\Big(\big(d(x)-t|x-y|\big)^{\alpha}\Big)}\\ &=&C_{5}\int_{0}^{|x-y|}\frac{dt}{\omega\left(\big(d(x)-t\big)^{\alpha}\right)}, \end{eqnarray*} which yields that \begin{eqnarray*} I&\leq& \frac{C_{5}}{|\mathbb{B}^{n}(x,r)|}\int_{\mathbb{B}^{n}(x,r)}\left[\int_{0}^{|x-y|}\frac{dt}{\omega\Big(\big(d(x)-t\big)^{\alpha}\Big)} \right]dy\\ &=&\frac{C_{5}}{|\mathbb{B}^{n}_{r}|}\int_{\mathbb{B}^{n}_{r}}\left[\int_{0}^{|\xi|}\frac{dt}{\omega\Big(\big(d(x)-t\big)^{\alpha}\Big)}\right]d\xi\\ &=&\frac{C_{5}n}{r^{n}}\int_{0}^{r}\rho^{n-1}\left\{\int_{0}^{\rho}\frac{dt}{\omega\Big(\big(d(x)-t\big)^{\alpha}\Big)} \right\}d\rho\\ &\leq&\frac{C_{5}n}{r^{n}}\int_{0}^{r}\left(\int_{t}^{r}\rho^{n-1}d\rho\right)\frac{1}{\omega\Big(\big(r-t\big)^{\alpha}\Big)}dt\\ &=&\frac{C_{5}}{r^{n}}\int_{0}^{r}\frac{(r-t)(r^{n-1}+r^{n-2}t+\cdots+t^{n-1})}{\omega\big((r-t)^{\alpha}\big)}dt\\ &\leq&\frac{C_{5}n}{r}\int_{0}^{r}\frac{(r-t)}{\omega\big((r-t)^{\alpha}\big)}dt\\ &=&\frac{C_{5}n}{r}\int_{0}^{r}\frac{(r-t)^{\alpha}}{\omega\big((r-t)^{\alpha}\big)}(r-t)^{1-\alpha}dt\\ &\leq&\frac{C_{5}nr^{\alpha-1}}{\omega(r^{\alpha})}\int_{0}^{r}(r-t)^{1-\alpha}dt\\ &=&\frac{C_{5}n}{(2-\alpha)}\frac{r}{\omega(r^{\alpha})}, \end{eqnarray*} where $$I=\frac{1}{|\mathbb{B}^{n}(x,r)|}\int_{\mathbb{B}^{n}(x,r)}|u(y)-u(x)|dy.$$ The proof of this theorem is complete. \qed For an $n\times n$ real matrix $A$, we define the standard {\it operator norm} by $$\|A\|=\sup_{x\neq0}\frac{|Ax|}{|x|}=\max\big\{|A\theta|:\,\theta\in\partial\mathbb{B}^{n}\big\}. $$ \subsection*{Proof of Theorem \ref{02-thm}} We first prove $(2)\Rightarrow(1)$. Let $x,y\in\mathbb{B}^{n}$ with $r_{\mathbb{B}^{n}}(x,y)\leq1/2$. Then \begin{equation}\label{eq-01csw}|x-y|\leq d(x)/2.\end{equation} By (\ref{eq-01csw}) and \cite[Lemma 3.7]{Vu}, we obtain $$k_{\mathbb{B}^{n}}(x,y)\leq2j_{\mathbb{B}^{n}}(x,y)\leq2r_{\mathbb{B}^{n}}(x,y)\leq1,$$ which gives that \begin{equation}\label{eq-02csw}k_{u(\mathbb{B}^{n})}(u(x),u(y))\leq Ck_{\mathbb{B}^{n}}(x,y)\leq C.\end{equation} Applying (\ref{eq-02csw}), we get $$j_{u(\mathbb{B}^{n})}(u(x),u(y))=\log\left(1+r_{u(\mathbb{B}^{n})}\big(u(x),u(y)\big)\right)\leq k_{u(\mathbb{B}^{n})}(u(x),u(y))\leq C,$$ which implies that $r_{u(\mathbb{B}^{n})}\big(u(x),u(y)\big)\leq e^{C}-1$. Now we prove $(1)\Rightarrow(2)$.
Since $u$ is weakly uniformly bounded, there is a positive constant $C$ such that, for every $x\in\mathbb{B}^{n}$ and $y\in\overline{\mathbb{B}^{n}\left(x,d(x)/4\right)}$, \begin{equation}\label{eq-03csw}|u(y)-u(x)|\leq Cd_{u(\mathbb{B}^{n})}(u(x)).\end{equation} By (\ref{eq-03csw}) and Lemma \ref{lem-03}, there is a positive constant $C_{6}$ such that \begin{eqnarray}\label{eq-06csw} \|u'(x)\|&\leq&\left(\sum_{k=1}^{n}|\nabla u_{k}(x)|^{2}\right)^{\frac{1}{2}}\leq\sum_{k=1}^{n}|\nabla u_{k}(x)|\\ \nonumber &\leq&\frac{C_{6}}{r}\int_{\partial\mathbb{B}^{n}}\sum_{k=1}^{n}|u_{k}(x+r\zeta)-u_{k}(x)|d\sigma(\zeta)\\ \nonumber &\leq&\frac{C_{6}\sqrt{n}}{r}\int_{\partial\mathbb{B}^{n}}|u(x+r\zeta)-u(x)|d\sigma(\zeta)\\ \nonumber &\leq&\frac{C_{6}C\sqrt{n}}{r}d_{u(\mathbb{B}^{n})}(u(x)), \end{eqnarray} where $r=d(x)$ and $$u'(x)=\left(\begin{array}{c} \nabla u_{1}(x) \\ \vdots \\ \nabla u_{n}(x) \end{array}\right). $$ Hence $(1)\Rightarrow(2)$ follows from (\ref{eq-06csw}) and \cite[Lemma 2.6]{MM}.\qed \begin{Thm}\label{Thm-B} Let $g$ be a function of class $\mathcal{C}^{2}(\mathbb{B}^{n})$. If $n\geq3$, then for $r\in(0,1)$, $$\int_{\partial \mathbb{B}^{n}}g(r\zeta)\,d\sigma(\zeta)= g(0)+\int_{ \mathbb{B}^{n}(0,r)}\Delta g(x)G_{n}(x,r)\,dV_{N}(x), $$ where $G_{n}(x,r)=(|x|^{2-n}-r^{2-n})/[n(n-2)]$ and $dV_{N}$ is the normalized Lebesgue volume measure in $\mathbb{B}^{n}$. Moreover, if $n=2$, then for $r\in(0,1)$, $$\frac{1}{2\pi}\int_{0}^{2\pi}g(re^{i\theta})\,d\theta =g(0)+ \frac{1}{2}\int_{\mathbb{D}_{r}}\Delta g(z)\log\frac{r}{|z|}\,dA(z),$$ where $dA$ denotes the normalized area measure in $\mathbb{D}$ {\rm (cf. \cite{Pav,Z})}. \end{Thm} \begin{Lem}{\rm (\cite[Lemma 3]{CPR})}\label{Lem-5} Suppose that $\alpha>0$, $\beta\leq\alpha$ and $\omega$ is a majorant. Then $\phi(r)$ and $\phi(r)/\omega(\phi(r))$ are decreasing on $(0,1)$, where $\phi$ is the same as in Definition {\rm \ref{def-1}}. \end{Lem} \subsection*{Proof of Theorem \ref{eq-y}} Without loss of generality, we assume that $u$ is a nonzero function and $n\geq3$. By H\"older's inequality, for $\rho\in[0,1),$ we have $$M_{\frac{\nu(\nu-2)}{\nu-1}}^{\frac{\nu(\nu-2)}{\nu-1}}(u,\rho)\leq \left(\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu}d\sigma(\zeta)\right)^{\frac{\nu-2}{\nu-1}} \left(\int_{\partial\mathbb{B}^{n}}d\sigma(\zeta)\right)^{\frac{1}{\nu-1}},$$ which gives that \begin{eqnarray}\label{eq-y2}&&\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-2}\big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\big)d\sigma(\zeta)\\ \nonumber &\leq&M_{\frac{\nu(\nu-2)}{\nu-1}}^{\nu-2}(u,\rho)\left[\int_{\partial\mathbb{B}^{n}}\Big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\Big)^{\nu}d\sigma(\zeta)\right]^{\frac{1}{\nu}}\\ \nonumber &\leq&M_{\nu}^{\nu-2}(u,\rho)\left[\int_{\partial\mathbb{B}^{n}}\Big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\Big)^{\nu}d\sigma(\zeta)\right]^{\frac{1}{\nu}}.
\end{eqnarray} By (\ref{eq-y2}) and Theorem \Ref{Thm-B}, we obtain \begin{eqnarray*} M_{\nu}^{\nu}(u,r)&=&|u(0)|^{\nu}+\int_{\mathbb{B}^{n}_{r}}\Delta(|u(x)|^{\nu})G_{n}(x,r)dV_{N}(x)\\ &=&|u(0)|^{\nu}+\int_{\mathbb{B}^{n}_{r}}\big[\nu(\nu-1)|u(x)|^{\nu-2}|\nabla u(x)|^{2}\\ &&+\nu|u(x)|^{\nu-2}u(x)\Delta u(x)\big]G_{n}(x,r)dV_{N}(x)\\ &\leq&|u(0)|^{\nu}+\nu(\nu-1)\int_{0}^{r}\Big[n\rho^{n-1}G_{n}(\rho,r)\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-2}\big(|\nabla u(\rho\zeta)|^{2}\\ &&+u(\rho\zeta)\Delta u(\rho\zeta)\big)d\sigma(\zeta)\Big]d\rho\\ &\leq&|u(0)|^{\nu}+\nu(\nu-1)\int_{0}^{r}\bigg\{n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-2}(u,\rho)\\ &&\times\Big[\int_{\partial\mathbb{B}^{n}}\Big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\Big)^{\nu}d\sigma(\zeta)\Big]^{\frac{1}{\nu}}\bigg\}d\rho\\ &\leq&|u(0)|^{\nu}+\nu(\nu-1)M_{\nu}^{\nu-2}(u,r)\int_{0}^{r}\bigg\{n\rho^{n-1}G_{n}(\rho,r)\\ &&\times\Big[\int_{\partial\mathbb{B}^{n}}\Big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\Big)^{\nu}d\sigma(\zeta)\Big]^{\frac{1}{\nu}}\bigg\}d\rho, \end{eqnarray*} which, together with the subharmonicity of $|u|^{\nu}$ (Corollary \ref{cor-1}) and Lemma \Ref{Lem-5}, yields that \begin{eqnarray*} M_{\nu}^{2}(u,r)&\leq&|u(0)|^{2}+\nu(\nu-1)\int_{0}^{r}\bigg\{n\rho^{n-1}G_{n}(\rho,r)\\ &&\times\Big[\int_{\partial\mathbb{B}^{n}}\Big(|\nabla u(\rho\zeta)|^{2}+u(\rho\zeta)\Delta u(\rho\zeta)\Big)^{\nu}d\sigma(\zeta)\Big]^{\frac{1}{\nu}}\bigg\}d\rho\\ &\leq&|u(0)|^{2}+\nu(\nu-1)C_{7}\int_{0}^{r}\frac{n\rho^{n-1}G_{n}(\rho,r)}{\omega\big(\phi(\rho)\big)}d\rho\\ &=&|u(0)|^{2}+\nu(\nu-1)C_{7}\int_{0}^{r}\frac{n\rho^{n-1}G_{n}(\rho,r)}{\phi(\rho)}\frac{\phi(\rho)}{\omega\big(\phi(\rho)\big)}d\rho\\ &\leq&|u(0)|^{2}+\frac{\nu(\nu-1)C_{7}}{\omega(1)}\int_{0}^{r}\frac{n\rho^{n-1}G_{n}(\rho,r)}{\phi(\rho)}d\rho\\ &=&|u(0)|^{2}+\frac{\nu(\nu-1)C_{7}r^{2}}{\omega(1)(n-2)}\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi(tr)}dt, \end{eqnarray*} where $C_{7}=\||\nabla u|^{2}+u\Delta u\|_{\mathcal{L}_{\nu,\omega}\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}.$ The proof of the theorem is complete. \qed \subsection*{Proof of Theorem \ref{thm-1}} Without loss of generality, we assume that $u$ is a nonzero function and $n\geq3$. By H\"older's inequality, for $\rho\in[0,1),$ we have \begin{equation}\label{eq-06}\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-2}|\nabla u(\rho\zeta)|^{2}d\sigma(\zeta)\leq M_{\nu}^{\nu-2}(u,\rho)M_{\nu}^{2}(|\nabla u|, \rho), \end{equation} \begin{equation}\label{eq-07} \int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-1}|\nabla u(\rho\zeta)|^{b_{1}}d\sigma(\zeta)\leq M_{\nu}^{\nu-1}(u,\rho)M_{\nu b_{1}}^{b_{1}}(|\nabla u|, \rho),\end{equation} \begin{equation}\label{eq-08} M_{\nu-1+b_{2}}^{\nu-1+b_{2}}( u, \rho)\leq\left(\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu}d\sigma(\zeta)\right)^{\frac{\nu+b_{2}-1}{\nu}} \left(\int_{\partial\mathbb{B}^{n}}d\sigma(\zeta)\right)^{\frac{1-b_{2}}{\nu}}, \end{equation} \begin{equation}\label{eq-09} M_{\nu-1}^{\nu-1}( u, \rho)\leq\left(\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu}d\sigma(\zeta)\right)^{\frac{\nu-1}{\nu}} \left(\int_{\partial\mathbb{B}^{n}}d\sigma(\zeta)\right)^{\frac{1}{\nu}}, \end{equation} and \begin{equation}\label{eq-10} M_{\nu b_{1}}^{\nu b_{1}}( |\nabla u|, \rho)\leq\left(\int_{\partial\mathbb{B}^{n}}|\nabla u(\rho\zeta)|^{\nu}d\sigma(\zeta)\right)^{\frac{\nu b_{1}}{\nu}} \left(\int_{\partial\mathbb{B}^{n}}d\sigma(\zeta)\right)^{\frac{\nu-\nu b_{1}}{\nu}}.
\end{equation} Applying (\ref{eq-06}), (\ref{eq-07}), (\ref{eq-08}), (\ref{eq-09}), (\ref{eq-10}), \cite[Lemma 3]{CPR} and Theorem \Ref{Thm-B}, for $r\in[0,1)$, we get \begin{eqnarray*} M_{\nu}^{\nu}(u,r)&=&|u(0)|^{\nu}+\int_{\mathbb{B}^{n}_{r}}\Delta(|u(x)|^{\nu})G_{n}(x,r)dV_{N}(x)\\ &=&\nu(\nu-1)\int_{0}^{r}\left[n\rho^{n-1}G_{n}(\rho,r)\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-2}|\nabla u(\rho\zeta)|^{2}d\sigma(\zeta)\right]d\rho\\ &&+\nu\int_{0}^{r}\left[n\rho^{n-1}G_{n}(\rho,r)\int_{\partial\mathbb{B}^{n}}|u(\rho\zeta)|^{\nu-2}u(\rho\zeta)\Delta u(\rho\zeta)d\sigma(\zeta)\right]d\rho\\ &&+|u(0)|^{\nu}\\ &\leq&|u(0)|^{\nu}+\nu(\nu-1)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-2}(u,\rho)M_{\nu}^{2}(|\nabla u|, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{1}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-1}(u,\rho)M_{\nu b_{1}}^{b_{1}}(|\nabla u|, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{2}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu-1+b_{2}}^{\nu-1+b_{2}}( u, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{3}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu-1}^{\nu-1}( u, \rho)d\rho\\ &\leq&|u(0)|^{\nu}+\nu(\nu-1)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-2}(u,\rho)M_{\nu}^{2}(|\nabla u|, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{1}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-1}(u,\rho)M_{\nu b_{1}}^{b_{1}}(|\nabla u|, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{2}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-1+b_{2}}( u, \rho)d\rho\\ &&+\nu\sup_{x\in\mathbb{B}^{n}}a_{3}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{\nu-1}( u, \rho)d\rho, \end{eqnarray*} which gives that \begin{eqnarray*} M_{\nu}^{2}(u,r)&\leq&|u(0)|^{2}+\nu(\nu-1)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu}^{2}(|\nabla u|, \rho)d\rho\\ &&+\nu M_{\nu}(u,r)\sup_{x\in\mathbb{B}^{n}}a_{1}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)M_{\nu b_{1}}^{b_{1}}(|\nabla u|, \rho)d\rho\\ &&+\nu M_{\nu}^{1+b_{2}}( u, r)\sup_{x\in\mathbb{B}^{n}}a_{2}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)d\rho\\ &&+\nu M_{\nu}( u, r)\sup_{x\in\mathbb{B}^{n}}a_{3}(x)\int_{0}^{r}n\rho^{n-1}G_{n}(\rho,r)d\rho\\ \end{eqnarray*} \begin{eqnarray*} &=&|u(0)|^{2}+\frac{\nu(\nu-1)r^{2}}{n-2}\int_{0}^{1}t(1-t^{n-2})M_{\nu}^{2}(|\nabla u|, rt)dt\\ &&+\frac{\nu r^{2}\sup_{x\in\mathbb{B}^{n}}a_{1}(x)}{n-2}M_{\nu}(u,r)\int_{0}^{1}t(1-t^{n-2})M_{\nu }^{b_{1}}(|\nabla u|, rt)dt\\ &&+\frac{\nu r^{2}\sup_{x\in\mathbb{B}^{n}}a_{2}(x)}{2n}M_{\nu}^{1+b_{2}}( u, r)\\ &&+\frac{\nu r^{2}\sup_{x\in\mathbb{B}^{n}}a_{3}(x)}{2n}M_{\nu}( u, r)\\ &\leq&|u(0)|^{2}+\frac{\nu(\nu-1)}{n-2}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}\int_{0}^{1}\frac{\phi^{2}(rt)}{\omega^{2}\big(\phi(rt)\big)}\frac{t(1-t^{n-2})}{\phi^{2}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{1}(x)}{n-2}\|u\|^{b_{1}}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}M_{\nu}(u,r)\int_{0}^{1}\frac{\phi^{b_{1}}(rt)}{\omega^{b_{1}}\big(\phi(rt)\big)}\frac{t(1-t^{n-2})}{\phi^{b_{1}}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{2}(x)}{2n}r^{2}M_{\nu}^{1+b_{2}}( u, r)\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{3}(x)}{2n}r^{2}M_{\nu}( u, r)\\ &\leq&|u(0)|^{2}+\frac{\nu(\nu-1)}{(n-2)\omega^{2}(1)}\|u\|^{2}_{\mathcal{L}_{\nu,\omega} \mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi^{2}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{1}(x)}{(n-2)\omega^{b_{1}}(1)}\|u\|^{b_{1}}_{\mathcal{L}_{\nu,\omega} 
\mathcal{B}^{\beta}_{\alpha}(\mathbb{B}^{n})}r^{2}M_{\nu}(u,r)\int_{0}^{1}\frac{t(1-t^{n-2})}{\phi^{b_{1}}(rt)}dt\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{2}(x)}{2n}r^{2}M_{\nu}^{1+b_{2}}( u, r)\\ &&+\frac{\nu \sup_{x\in\mathbb{B}^{n}}a_{3}(x)}{2n}r^{2}M_{\nu}( u, r). \end{eqnarray*} The proof of the theorem is complete. \qed \begin{lem}\label{lem-cw4} Let $u\in \mathcal{C}^{2}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})} with $\tau=1$. Then, for $\nu\geq1$, $\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{lem} \begin{pf} We compute at points where $U=\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu}$ does not vanish; at the zeros of $U$ the sub-mean value inequality holds trivially by continuity, as in the proof of Lemma \ref{lem-1cw}. By computations, we get \begin{eqnarray*} \Delta U&=&4\nu(\nu-1)\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu-2} \left(\sum_{1\leq m,k,j\leq n}u_{x_{k}x_{j}}u_{x_{k}x_{j}x_{m}}\right)^{2}\\ &&+2\nu\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu-1} \sum_{1\leq k,j\leq n}\left[(\Delta u)_{x_{k}x_{j}}u_{x_{k}x_{j}}+\sum_{m=1}^{n}u^{2}_{x_{m}x_{k}x_{j}}\right]\\ &=&4\nu(\nu-1)\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu-2} \left(\sum_{1\leq m,k,j\leq n}u_{x_{k}x_{j}}u_{x_{k}x_{j}x_{m}}\right)^{2}\\ &&+2\nu\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\right)^{\nu-1} \sum_{1\leq k,j\leq n}\left[\lambda u^{2}_{x_{k}x_{j}}+\sum_{m=1}^{n}u^{2}_{x_{m}x_{k}x_{j}}\right]\geq0, \end{eqnarray*} which implies that $U$ is subharmonic in $\mathbb{B}^{n}$. \end{pf} \subsection*{Proof of Theorem \ref{thm-3}} By Lemma \Ref{lem-eqc-3} and Lemma \ref{lem-cw4}, for $\mu\in[1,n/2]$ and $x\in\mathbb{B}^{n}$, there is a positive constant $C_{8}$ such that $$\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(x)\right)^{\mu}\leq\frac{C_{8}}{(d(x))^{n+\alpha}}\int_{\mathbb{B}^{n}\left(x,\frac{d(x)}{2}\right)}(1-|y|)^{\alpha}\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(y)\right)^{\mu}dy,$$ which gives that \begin{equation}\label{eq-csw5} \sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(x)\leq\frac{\big(C_{8}\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{\mu}}}{(d(x))^{\frac{n+\alpha}{\mu}}}. \end{equation} Let $$H_{u}=\left(\begin{array}{cccc} \displaystyle \frac{\partial^{2} u}{\partial x^{2}_{1}} & \displaystyle\frac{\partial^{2} u}{\partial x_{1}\partial x_{2}} & \cdots & \displaystyle\frac{\partial^{2} u}{\partial x_{1}\partial x_{n}}\\[4mm] \displaystyle \frac{\partial^{2} u}{\partial x_{2}\partial x_{1}} & \displaystyle\frac{\partial^{2} u}{\partial x^{2}_{2}} & \cdots & \displaystyle\frac{\partial^{2} u}{\partial x_{2}\partial x_{n}}\\[2mm] \vdots & & & \vdots \\[2mm] \displaystyle \frac{\partial^{2} u}{\partial x_{n}\partial x_{1}} & \displaystyle\frac{\partial^{2} u}{\partial x_{n}\partial x_{2}} & \cdots & \displaystyle\frac{\partial^{2} u}{\partial x_{n}^{2}} \end{array}\right) $$ be the Hessian matrix of $u$. Then \begin{equation}\label{eq-csw0} \|H_{u}\|\leq\sqrt{\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}}.\end{equation} By (\ref{eq-csw5}) and (\ref{eq-csw0}), we get \begin{eqnarray}\label{eq-csw6} |\nabla u(x)|&\leq& |\nabla u(0)|+ \int_{[0,x]}\|H_{u}(y)\||dy|\\ \nonumber&\leq&|\nabla u(0)|+\int_{[0,x]}\frac{\big(C_{8}\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{2\mu}}}{(d(y))^{\frac{n+\alpha}{2\mu}}}|dy|\\ \nonumber &\leq&|\nabla u(0)|+\frac{C_{9}}{(d(x))^{\frac{n+\alpha}{2\mu}-1}}, \end{eqnarray} where \[ C_{9}=\frac{2\mu\big(C_{8}\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{2\mu}}}{n+\alpha-2\mu}, \] and $[0,x]$ is the line segment from $0$ to $x$.
Applying (\ref{eq-csw6}) and Lemma \ref{Lemx}, for $\nu\geq2$, we have \begin{eqnarray}\label{eq-csw7} |\nabla u(x)|^{\nu-2}&\leq&\left[|\nabla u(0)|+\frac{C_{9}}{(d(x))^{\beta}}\right]^{\nu-2}\\ \nonumber &\leq&2^{\nu-2}\left[|\nabla u(0)|^{\nu-2}+\frac{C_{9}^{\nu-2}}{(d(x))^{\beta(\nu-2)}}\right]\end{eqnarray} and \begin{eqnarray}\label{eq-csw8} |\nabla u(x)|^{\nu}&\leq&\left[|\nabla u(0)|+\frac{C_{9}}{(d(x))^{\beta}}\right]^{\nu}\\ \nonumber &\leq&2^{\nu}\left[|\nabla u(0)|^{\nu}+\frac{C_{9}^{\nu}}{(d(x))^{\beta\nu}}\right],\end{eqnarray} where $\beta=\frac{n+\alpha}{2\mu}-1.$ We divide the remaining part of the proof into two cases, namely $\nu\in[4,+\infty)$ and $\nu\in[2,4).$ $\mathbf{Case~ I:}$ Let $ \nu\in[4,+\infty).$ By direct computations, using the Cauchy--Schwarz inequality and the identity $(\Delta u)_{x_{k}}=\lambda u_{x_{k}}$, we see that \begin{eqnarray}\label{eq-csw9} \Delta \left(|\nabla u|^{\nu}\right) &=&\nu(\nu-2)|\nabla u|^{\nu-4}\sum_{j=1}^{n}\left(\sum_{k=1}^{n}u_{x_{k}x_{j}}u_{x_{k}}\right)^{2}\\ \nonumber &&+\nu|\nabla u|^{\nu-2}\sum_{k=1}^{n}u_{x_{k}}(\Delta u)_{x_{k}}+\nu|\nabla u|^{\nu-2}\sum_{j=1}^{n}\sum_{k=1}^{n}u^{2}_{x_{k}x_{j}}\\ \nonumber &\leq&\nu(\nu-2)|\nabla u|^{\nu-2}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}+\lambda \nu|\nabla u|^{\nu}\\ \nonumber &&+\nu|\nabla u|^{\nu-2}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\\ \nonumber&=&\nu(\nu-1)|\nabla u|^{\nu-2}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}+\lambda \nu|\nabla u|^{\nu}. \end{eqnarray} It follows from (\ref{eq-csw7}), (\ref{eq-csw8}) and (\ref{eq-csw9}) that \begin{eqnarray}\label{eq-csw10} \big(d(x)\big)^{\beta\nu}\Delta \left(|\nabla u|^{\nu}\right)&\leq&\nu(\nu-1)\big(d(x)\big)^{\beta\nu}|\nabla u|^{\nu-2}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\\ \nonumber &&+\lambda \nu\big(d(x)\big)^{\beta\nu}|\nabla u|^{\nu}\\ \nonumber&=&\nu(\nu-1)\big(d(x)\big)^{\beta\nu-\frac{\alpha}{\mu}}|\nabla u|^{\nu-2}\big(d(x)\big)^{\frac{\alpha}{\mu}}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}\\ \nonumber &&+\lambda \nu\big(d(x)\big)^{\beta\nu}|\nabla u|^{\nu}\\ \nonumber &\leq&C_{10}\big(d(x)\big)^{\frac{\alpha}{\mu}}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}+C_{11}, \end{eqnarray} where $C_{10}=2^{\nu-2}\nu(\nu-1)\big(|\nabla u(0)|^{\nu-2}+C_{9}^{\nu-2}\big)$ and $C_{11}=2^{\nu}\lambda \nu\big(|\nabla u(0)|^{\nu}+C_{9}^{\nu}\big).$ By H\"older's inequality, we obtain \begin{eqnarray}\label{eq-csw11} \int_{\mathbb{B}^{n}}\big(d(x)\big)^{\frac{\alpha}{\mu}}\left(\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(x)\right)dx&\leq& \big(\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{\mu}}\left(\int_{\mathbb{B}^{n}}dx\right)^{1-\frac{1}{\mu}}\\ \nonumber &=& \big(V(\mathbb{B}^{n})\big)^{1-\frac{1}{\mu}}\big(\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{\mu}}. \end{eqnarray} By (\ref{eq-csw10}) and (\ref{eq-csw11}), we conclude that \begin{eqnarray}\label{eq-csw12} \nonumber \int_{\mathbb{B}^{n}}\big(d(x)\big)^{\beta\nu}\Delta \left(|\nabla u(x)|^{\nu}\right)dx&\leq& \int_{\mathbb{B}^{n}}\left[ C_{10}\big(d(x)\big)^{\frac{\alpha}{\mu}}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(x)+C_{11}\right]dx\\ &\leq&C_{10}\big(V(\mathbb{B}^{n})\big)^{2-\frac{1}{\mu}}\big(\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{\mu}}+C_{11}V(\mathbb{B}^{n})\\ \nonumber &<&+\infty. \end{eqnarray} $\mathbf{Case~ II:}$ Let $ \nu\in[2,4).$ In this case, for $m\in\{1,2,\ldots\}$, we let $f_{m}^{\nu}=\left(|\nabla u|^{2}+\frac{1}{m}\right)^{\frac{\nu}{2}}$. It is not difficult to see that $\Delta(f_{m}^{\nu})$ is integrable in $\mathbb{B}^{n}_{r}$.
Then, by (\ref{eq-csw10}), (\ref{eq-csw12}) and Lebesgue's dominated convergence theorem, we have \begin{eqnarray*} \lim_{m\rightarrow+\infty}\int_{\mathbb{B}^{n}}\big(d(x)\big)^{\beta\nu}\Delta \left( f_{m}^{\nu}(x)\right)dx&=&\int_{\mathbb{B}^{n}}\big(d(x)\big)^{\beta\nu}\lim_{m\rightarrow+\infty}\Delta \left( f_{m}^{\nu}(x)\right)dx\\ &\leq&\int_{\mathbb{B}^{n}}\left[ C_{10}\big(d(x)\big)^{\frac{\alpha}{\mu}}\sum_{1\leq k,j\leq n}u^{2}_{x_{k}x_{j}}(x)+C_{11}\right]dx\\ &\leq&C_{10}\big(V(\mathbb{B}^{n})\big)^{2-\frac{1}{\mu}}\big(\mathcal{D}_{\nabla u}(\alpha,0,\mu)\big)^{\frac{1}{\mu}}+C_{11}V(\mathbb{B}^{n})\\ &<&+\infty. \end{eqnarray*} The proof of the theorem is complete. \qed \begin{lem}\label{lem-cw5} Let $u\in \mathcal{C}^{3}(\mathbb{B}^{n})$ with $\sum_{k=1}^{n}u_{x_{k}}(\Delta u)_{x_{k}}\geq0$ in $\mathbb{B}^{n}$. Then, for $\nu\geq1$, $|\nabla u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{lem} \begin{pf} Let $\mathcal{Z}_{\nabla u}=\{x\in\mathbb{B}^{n}:~|\nabla u(x)|=0\}$. Then $\mathbb{B}^{n}\backslash\mathcal{Z}_{\nabla u}$ is an open set. For $j\in\{1,\ldots,n\}$ and $x\in\mathbb{B}^{n}\backslash\mathcal{Z}_{\nabla u}$, we have \begin{eqnarray*} (|\nabla u(x)|^{\nu})_{x_{j}x_{j}}&=&\nu(\nu-2)|\nabla u(x)|^{\nu-4}\left(\sum_{k=1}^{n}u_{x_{k}x_{j}}(x)u_{x_{k}}(x)\right)^{2}\\ &&+\nu|\nabla u(x)|^{\nu-2}\sum_{k=1}^{n}\left(u_{x_{k}x_{j}x_{j}}(x)u_{x_{k}}(x)+u_{x_{k}x_{j}}^{2}(x)\right), \end{eqnarray*} which gives that \begin{eqnarray*} \Delta \left(|\nabla u(x)|^{\nu}\right) &=&\nu(\nu-2)|\nabla u(x)|^{\nu-4}\sum_{j=1}^{n}\left(\sum_{k=1}^{n}u_{x_{k}x_{j}}(x)u_{x_{k}}(x)\right)^{2}\\ &&+\nu|\nabla u(x)|^{\nu-2}\sum_{k=1}^{n}u_{x_{k}}(x)(\Delta u(x))_{x_{k}}+\nu|\nabla u(x)|^{\nu-2}\sum_{j=1}^{n}\sum_{k=1}^{n}u^{2}_{x_{k}x_{j}}(x)\\ &\geq&0. \end{eqnarray*} Hence $|\nabla u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}\backslash\mathcal{Z}_{\nabla u}$; since $|\nabla u|^{\nu}$ is continuous and vanishes on $\mathcal{Z}_{\nabla u}$, the sub-mean value inequality also holds at every point of $\mathcal{Z}_{\nabla u}$, and therefore, for $\nu\geq1$, $|\nabla u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{pf} The following result easily follows from Lemma \ref{lem-cw5}. \begin{cor}\label{cor-cxs2} Let $u\in \mathcal{C}^{3}(\mathbb{B}^{n})$ be a solution to the equation {\rm (\ref{eq1c})}, where $\lambda$ is a nonnegative constant. Then, for $\nu\geq1$, $|\nabla u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$. \end{cor} \subsection*{Proof of Theorem \ref{thm-4}} That $|\nabla u|\in\mathcal{H}^{\nu}_{g}(\mathbb{B}^{n})$ follows from \cite[Theorem 1]{CRV} and Theorem \ref{thm-3}. Next we prove that $|\nabla u|^{\nu}$ has a harmonic majorant. For $x\in\mathbb{B}^{n}$, let $$G_{r}(x)=\int_{\partial\mathbb{B}^{n}}\frac{1-|x|^{2}}{|x-\zeta|^{n}}|\nabla u(r\zeta)|^{\nu}d\sigma(\zeta),$$ where $r\in[0,1).$ By Corollary \ref{cor-cxs2}, $|\nabla u|^{\nu}$ is subharmonic in $\mathbb{B}^{n}$, which, together with $|\nabla u|\in\mathcal{H}^{\nu}_{g}(\mathbb{B}^{n})$, implies that $$|\nabla u(rx)|^{\nu}\leq\int_{\partial\mathbb{B}^{n}}\frac{1-|x|^{2}} {|x-\zeta|^{n}}|\nabla u(r\zeta)|^{\nu}d\sigma(\zeta)=G_{r}(x)<+\infty$$ and $G_{r}(0)=M_{\nu}^{\nu}(|\nabla u|,r)<+\infty$. For $x\in\mathbb{B}^{n}$, applying Harnack's theorem to the sequence $\{G_{1-1/m}(x)\}_{m=1}^{\infty}$, we see that $$G(x)=\lim_{m\rightarrow+\infty}G_{1-1/m}(x)$$ is also a harmonic function in $\mathbb{B}^{n}$. Hence $|\nabla u|^{\nu}$ has a harmonic majorant in $\mathbb{B}^{n}$. The proof of the theorem is complete. \qed
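The subharmonicity results above can also be probed numerically. The sketch below is ours and purely illustrative: it assumes \texttt{numpy}, the planar case, and the explicit solution $u(x,y)=e^{x}$ of $\Delta u=\lambda u$ with $\lambda=1$ and $\tau=1$, and it checks the sub-mean value inequality for $|u|^{\nu}$ at a sample point, as guaranteed by Corollary \ref{cor-1}.

```python
import numpy as np

# u(x, y) = exp(x) solves Delta u = u, i.e. (eq1c) with tau = 1, lambda = 1.
u = lambda x, y: np.exp(x)
nu = 3.0

def circle_mean(f, x0, y0, rho, m=2048):
    """Mean of f over the circle of radius rho centered at (x0, y0)."""
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return float(f(x0 + rho * np.cos(t), y0 + rho * np.sin(t)).mean())

x0, y0, rho = 0.2, -0.1, 0.05
value = abs(u(x0, y0)) ** nu
mean = circle_mean(lambda a, b: np.abs(u(a, b)) ** nu, x0, y0, rho)
assert value <= mean  # sub-mean value inequality for |u|^nu
```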
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} \label{sec:introduction} Visual Place Recognition (VPR) is a well-known problem in Robotics and Computer Vision~\cite{lowry-place-recognition-survey-2016} and represents a building block of several applications in Robotics. These range from Localization and Navigation to Simultaneous Localization and Mapping (SLAM). The task of a VPR system is to localize an image within a database of places represented by other images. VPR is commonly cast as a data association problem and used in loop closing modules of SLAM pipelines. A robust VPR system consists of one or more of the following components, which progressively improve the solution accuracy: \begin{itemize} \item Image Retrieval: is the process of retrieving one or more \emph{images} from a database that are similar to a query one. \item Descriptor Matching: consists of seeking corresponding \emph{points} between images that look locally similar. The local appearance of such points is captured by \emph{feature descriptors}. \item Geometric Verification: is a common pruning technique that removes point correspondences obtained from descriptor matching that are inconsistent with the \emph{epipolar geometry}. \end{itemize} In the domain of Image Retrieval, common approaches compress entire images into single \emph{global} descriptors to obtain high processing speed~\cite{2001-oliva-gist, dalal2005histograms}. Recently, convolutional neural network methods have demonstrated highly accurate results, especially in the field of long-term VPR~\cite{2015-sunderhauf-convnet, arandjelovic2016netvlad, 2017arXiv170405016B}. These methods, however, might suffer from a high ratio of false positives, and thus often require a further stage of \emph{local} feature Descriptor Matching to reject wrong candidates. Brute-force (BF) and k-d trees~\cite{kdtree} are two prominent methods for solving this task.
\begin{figure} \centering \begin{subfigure}[t]{0.6\columnwidth} \includegraphics[width=\columnwidth]{duration-kitti-00-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[t]{0.39\columnwidth} \includegraphics[width=\columnwidth]{motivation-trajectories.pdf} \end{subfigure} \begin{subfigure}{\columnwidth} \includegraphics[width=\columnwidth]{motivation.pdf} \end{subfigure} \caption{Matching performance of the proposed HBST approach on KITTI sequence 00. Top: Image processing times and image retrieval result of the compared approaches at 70\% Recall. Bottom: A single query and reference image with highlighted descriptor matches provided by HBST. The shown query image was acquired 4'500 frames after the reference image.} \label{fig:motivation} \end{figure} Since the introduction of the BRIEF descriptor~\cite{calonder2010brief}, the computer vision community has embraced the use of binary descriptors due to their low computation and matching cost. Many popular feature-based SLAM systems such as ORB-SLAM~\cite{2017-mur-orbslam} are built on these binary descriptors. Whereas standard multi-dimensional search structures like k-d trees are reported to perform well for incremental construction with floating point descriptors like SURF~\cite{bay2008speeded}, the same approaches suffer a significant performance loss when used with binary descriptors.
This is the reason why in this work we focus on constructing a specific search structure that is tailored to matching \emph{binary} descriptors for VPR. In this paper we propose an approach for binary descriptor matching \emph{and} image retrieval that approximates the BF search. Our system does not need to construct any kind of dictionary and relies purely on a dynamically built binary search tree (BST) that allows for logarithmic searches and insertions of binary descriptors. Our approach runs several orders of magnitude faster than widely used implementations of other state-of-the-art methods, while retaining high matching accuracy. We provide our approach to the community in the form of a compact C++ header-only library\footnote{{\scriptsize HBST is available at: \url{www.gitlab.com/srrg-software/srrg_hbst}} \\ } accompanied by several straightforward use cases. \section{Image Retrieval and Descriptor Matching} In this section we discuss in detail the two fundamental building blocks of VPR which we address in our approach: Image Retrieval and Descriptor Matching. We present related work directly in the context of these two problems. \subsection{Image Retrieval} A system for \emph{image retrieval} returns the image ${\mathbb{I}}_i^\star$ contained in a database ${\left\{\image_i\right\}}$ that is the most similar to a given query image ${\image_q}$ according to a similarity metric ${e_\image}$. The more similar two images ${\mathbb{I}}_i$ and ${\mathbb{I}}_q$ are, the lower the resulting distance becomes. More formally, image retrieval consists of solving the following problem: \begin{equation} {\mathbb{I}}_i^\star = \argmin_{{\mathbb{I}}_i}{e_\image}({\mathbb{I}}_q, {\mathbb{I}}_i) : {\mathbb{I}}_i \in {\left\{\image_i\right\}}. \label{eq:image-retrieval} \end{equation} Often one is interested in retrieving \emph{all} images in the database, whose distance to the query image ${e_\image}$ is within a certain threshold $\tau_{\mathbb{I}}$: \begin{equation} \left\{{\mathbb{I}}_i^\star\right\} = \left\{{\mathbb{I}}_i \in {\left\{\image_i\right\}} : {e_\image}({\mathbb{I}}_q, {\mathbb{I}}_i) < \tau_{\mathbb{I}} \right\}. \label{eq:image-retrieval-output-multiple} \end{equation} The distance metric itself depends on the target application. A straightforward example of a distance between two images is the Frobenius norm of the pixel-wise difference: \begin{equation} {e_\image}({\mathbb{I}}_q, {\mathbb{I}}_i)=\left\|{\image_q}-{\mathbb{I}}_i\right\|_F. \label{eq:distance-frobenius} \end{equation} This measure is not robust to viewpoint or illumination changes and its computational cost is proportional to the image sizes. \emph{Global image descriptors} address these issues by compressing an entire image into a small set of values. In the remainder we will refer to a global descriptor obtained from an image ${\mathbb{I}}$ as: ${\mathbf{d}}({\mathbb{I}})$. GIST of Oliva and Torralba~\cite{2001-oliva-gist} and Histogram of Oriented Gradients (HOG) by Dalal and Triggs~\cite{dalal2005histograms} are two prominent methods in this class. GIST computes a whole image descriptor as the distribution of different perceptual qualities and semantic classes detected in an image. Conversely, HOG computes the descriptor as the histogram of gradient orientations in portions of the image.
When using global descriptors, the distance between images is usually computed as the $L_2$ norm of the difference between the corresponding descriptors: \begin{equation} {e_\image}({\mathbb{I}}_q, {\mathbb{I}}_i)=\left\|{\mathbf{d}}({\mathbb{I}}_q)-{\mathbf{d}}({\mathbb{I}}_i)\right\|_2. \label{eq:distance-global} \end{equation} Milford and Wyeth considered \emph{image sequences} instead of single images for place recognition. With SeqSLAM~\cite{milford2012seqslam} they presented an impressive SLAM system that computes and processes contrast-enhancing image difference vectors between subsequent images. Using this technique, SeqSLAM manages to recognize places that underwent heavy changes in appearance (e.g. from summer to winter). In recent years, convolutional neural network approaches have been shown to be very effective in VPR. They are used to generate powerful descriptors that capture large portions of the scene at different resolutions. For one, there is the CNN feature boosted SeqSLAM system of Bai~\emph{et al.}~\cite{2017arXiv170405016B}, accompanied by other off-the-shelf systems such as ConvNet of S\"underhauf~\emph{et al.}~\cite{2015-sunderhauf-convnet} or NetVLAD by Arandjelovi\'c~\emph{et al.}~\cite{arandjelovic2016netvlad}. The large \emph{CNN descriptors} increase the description granularity and therefore they are more robust to viewpoint changes than global descriptors. CNN descriptors are additionally resistant to minor appearance changes, making them suitable for \emph{lifelong} place recognition applications. One can obtain up to a dozen CNN descriptors per image, which enables high-dimensional image distance metrics for ${e_\image}$. However, if one wants to determine the \emph{relative location} at which images have been acquired, which is often the case for SLAM approaches, additional effort needs to be spent. Furthermore, due to their holistic nature, global descriptors might disregard the geometry of the scene and thus are more likely to provide false positives. Both of these issues can be handled by \emph{descriptor matching} and a subsequent geometric verification. \subsection{Descriptor Matching} Given two images ${\image_q}$ and ${\mathbb{I}}_i$, we are interested in determining which pixel ${\mathbf{p}}_q \in {\image_q}$ and which pixel ${\mathbf{p}}_j \in {\mathbb{I}}_i$, if any, capture the same point in the world. Knowing a set of these point correspondences allows us to determine the relative position of the two images up to a scale using projective geometry~\cite{zisserman}. To this end it is common to detect a set of salient points $\{{\mathbf{p}}\}$ (keypoints) in each image. Among others, the Harris corner detector and the FAST detector are prominent approaches for detecting keypoints. Keypoints are usually characterized by a strong local intensity variation. The local appearance around a keypoint ${\mathbf{p}}$ is captured by a descriptor ${\mathbf{d}}({\mathbf{p}})$, which is usually represented as a vector of either floating point or boolean values. SURF~\cite{bay2008speeded} is a typical floating point descriptor, while BRIEF~\cite{calonder2010brief}, BRISK~\cite{leutenegger2011brisk} and ORB~\cite{rublee2011orb} are well-known boolean descriptors. The desired properties for local descriptors are the same as for global descriptors: illumination and viewpoint invariance. Descriptors are designed such that regions that appear locally similar in the image result in similar descriptors, according to a certain metric ${e_\descriptor}$.
For floating point descriptors, ${e_\descriptor}$ is usually chosen as the $L_2$-norm. In the case of binary descriptors, the Hamming distance is a common choice. The Hamming distance between two binary vectors is the number of bit changes needed to turn one vector into the other, and can be computed efficiently by current processors. Finding the point ${\mathbf{p}}_j^\star \in {\mathbb{I}}_i$ that is the most similar to a query ${\mathbf{p}}_q \in {\image_q}$ is resolved by seeking the descriptor ${\mathbf{d}}({\mathbf{p}}^\star_j)$ with the minimum distance to the query ${\mathbf{d}}({\mathbf{p}}_q)$: \begin{equation} {\mathbf{p}}_j^\star = \argmin_{{\mathbf{p}}_j}{e_\descriptor}({\mathbf{d}}({\mathbf{p}}_q), {\mathbf{d}}({\mathbf{p}}_j)) : {\mathbf{p}}_j\in {\mathbb{I}}_i. \label{eq:descriptor-matching} \end{equation} If a point ${\mathbf{p}}_q \in {\image_q}$ is not visible in ${\mathbb{I}}_i$, \eqref{eq:descriptor-matching} will still return a point ${\mathbf{p}}_j^\star \in {\mathbb{I}}_i$. Unfeasible matches, however, will have a high distance and can be rejected whenever their distance ${e_\descriptor}$ is greater than a certain \emph{matching threshold} $\tau$. The most straightforward way to compute~\eqref{eq:descriptor-matching} is the brute-force (BF) search. BF computes the distance ${e_\descriptor}$ between ${\mathbf{p}}_q$ and \emph{every} ${\mathbf{p}}_j\in {\mathbb{I}}_i$, and hence \emph{always} returns the \emph{closest} match for each query. This unbeatable accuracy comes with a computational cost proportional to the number of descriptors $N_{\mathbf{d}} = \left|\{{\mathbf{d}}({\mathbf{p}}_j)\}\right|$. Assuming $N_{\mathbf{d}}$ is the average number of descriptors extracted for each image, finding the best correspondence for each keypoint in the query image would require $\mathcal{O}(N_{\mathbf{d}}^2)$ operations. In current applications, $N_{\mathbf{d}}$ ranges from 100 to 10'000, hence using BF for descriptor matching quickly becomes computationally prohibitive. To carry out the correspondence search more efficiently it is common to organize the descriptors in a search structure, typically a tree. In the case of floating point descriptors, FLANN (Fast Approximate Nearest Neighbor Search Library) of Muja and Lowe~\cite{flann} with k-d tree indexing is a common choice. When working with binary descriptors, the (Multi-Probe) Locality-sensitive hashing (LSH)~\cite{lsh} by Lv~\emph{et al.} and hierarchical clustering trees (HCT) of Muja and Lowe~\cite{2012-muja-fast-matching} are popular methods to index the descriptors with FLANN. While LSH allows for \emph{database incrementation} at a decent computational cost, HCT quickly exceeds real-time constraints. Accordingly, we consider only LSH in our result evaluations. The increased speed of FLANN compared to BF comes at the price of a decreased accuracy of the matches. In our previous work~\cite{schlegel2016visual} we presented a binary search tree structure that resolves~\eqref{eq:descriptor-matching} in logarithmic time. However, a tree had to be built for every desired image candidate pair. \subsection{Image Retrieval based on Descriptor Matching} \label{sec:image-retrieval-descriptor-matching} Assuming we have an efficient method to perform descriptor matching as defined in~\eqref{eq:descriptor-matching}, one could design a simple, yet effective image retrieval system by using a voting scheme.
An image ${\mathbb{I}}_i$ will receive at most one vote $\left<{\mathbf{p}}_q, {\mathbf{p}}^\star_{i,j}\right>$ for each keypoint ${\mathbf{p}}_q \in {\image_q}$ that is successfully matched with a keypoint of another image ${\mathbf{p}}^\star_{i,j} \in {\mathbb{I}}_i$. The distance between two images ${\image_q}$ and ${\mathbb{I}}_i$ is then the number of votes $\left|\left\{\left<{\mathbf{p}}_q, {\mathbf{p}}^\star_{i,j}\right>\right\}\right|$ normalized by the number of descriptors in the query image $N_{\mathbf{d}}$: \begin{equation} {e_\image}({\mathbb{I}}_q, {\mathbb{I}}_i)=\frac{\left|\left\{\left<{\mathbf{p}}_q, {\mathbf{p}}^\star_{i,j}\right>\right\}\right|}{N_{\mathbf{d}}}. \label{eq:voting-scheme} \end{equation} The above procedure allows us to gather reasonably good matches at a cost proportional to both the number of descriptors in the query image $N_{\mathbf{d}}$ and the cost of retrieving the most likely descriptor as defined in~\eqref{eq:descriptor-matching}; a code sketch is given below. An alternative strategy to enhance the efficiency of image retrieval, when local descriptors are available, is to compute \emph{a single} image ``descriptor'' from \emph{multiple} feature descriptors. Bag-of-visual-Words (BoW) approaches follow this strategy by computing an image descriptor as the \emph{histogram of the distribution of words} appearing in the image. A \emph{word} represents a group of nearby descriptors, and is learned by a clustering algorithm such as k-means from a set of training descriptors. To compute the histogram, each keypoint's descriptor is converted into a set of weights computed as its distance from the centroid of each word in the dictionary. The histogram is then normalized by the sum of word weights in the scene. Images that present similar distributions of words are likely to be similar. Comparing a pair of images can be done in a time linear in the number of words in the dictionary. This procedure has been shown to be both robust and efficient; however, it does not provide the point correspondences that are required for geometric verification. Notably, the open-source library DBoW2 by Galvez-Lopez and Tardos~\cite{galvez2012bags} extends the data structures used in BoW to add point correspondences to the system. This is done by storing an \emph{Inverted Index} from words to descriptors that are close to a specific word. To retrieve the keypoints ${\mathbf{p}}^\star_j$ that are similar to a query ${\mathbf{p}}_q$, one can pick the words in the dictionary that are best represented by ${\mathbf{d}}({\mathbf{p}}_q)$ and from them retrieve the descriptors through the inverted index. In the current version of DBoW2, Galvez-Lopez and Tardos also provide a \emph{Direct Index} descriptor index for correspondence access. DBoW2 is integrated within the recently published ORB-SLAM2~\cite{2017-mur-orbslam} by Mur-Artal~\emph{et al.} and displays fast and robust performance for ORB descriptors~\cite{rublee2011orb}. Another famous BoW based approach is FAB-MAP~\cite{OpenFabMap} developed by Cummins~\emph{et al.} FAB-MAP can quickly retrieve similar images on very large datasets. FAB-MAP uses costly SURF descriptors~\cite{bay2008speeded} to maintain a certain level of distinctiveness across the massive number of images described. Typically, BoW is used to determine a preliminary set of good image candidates, on which BF, FLANN or BST descriptor matching is performed. This is a common practice for SLAM systems, which require high numbers of matches for few images.
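Returning to the voting scheme of~\eqref{eq:voting-scheme}: to make it concrete, the following is a minimal C++ sketch of brute-force Hamming matching combined with vote counting. It is only an illustration under our own naming conventions (\texttt{Descriptor}, \texttt{hamming} and \texttt{vote} are not part of any cited library), and it implements the variant in which each query descriptor votes for its single best matching image:
\begin{verbatim}
#include <array>
#include <bit>      // std::popcount (C++20)
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative 256-bit binary descriptor stored as four 64-bit words.
using Descriptor = std::array<uint64_t, 4>;

// Hamming distance e_d: number of differing bits, via hardware popcount.
int hamming(const Descriptor& a, const Descriptor& b) {
  int distance = 0;
  for (std::size_t w = 0; w < a.size(); ++w)
    distance += std::popcount(a[w] ^ b[w]);
  return distance;
}

// Brute-force voting: each query descriptor votes for the database image
// holding its best match, provided that match is closer than tau.
// Returns the votes per image, normalized by the query descriptor count.
std::vector<double> vote(const std::vector<Descriptor>& query,
                         const std::vector<std::vector<Descriptor>>& database,
                         int tau) {
  std::vector<double> score(database.size(), 0.0);
  for (const Descriptor& d_q : query) {
    int best_image = -1;
    int best_distance = tau;  // matches with e_d >= tau are rejected
    for (std::size_t i = 0; i < database.size(); ++i)
      for (const Descriptor& d_j : database[i])
        if (const int e = hamming(d_q, d_j); e < best_distance) {
          best_distance = e;
          best_image = static_cast<int>(i);
        }
    if (best_image >= 0)
      score[best_image] += 1.0 / static_cast<double>(query.size());
  }
  return score;  // the image with the highest score is retrieved
}
\end{verbatim}
The two nested inner loops make the cost of a single query proportional to the total number of stored descriptors, which is exactly the quadratic behavior that motivates the search structures discussed above.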
In this paper, we present a novel approach that: \begin{itemize} \item Allows us to perform image retrieval and descriptor matching \emph{with} correspondences faster than BoW approaches perform image retrieval \emph{without} correspondences. \item Yields levels of search correctness and completeness comparable to those achieved by state-of-the-art methods such as FLANN-LSH~\cite{lsh} and DBoW2~\cite{galvez2012bags}. \item Allows for incremental insertion of subsequent descriptor sets (i.e. images) in a time bounded by the dimension of the descriptors ${\dim(\descriptor)}$. \end{itemize} Furthermore, we provide our approach as a compact C++ header-only library that does not require a vocabulary or any other pretrained information. The library is accompanied by a set of simple use cases and includes an OpenCV wrapper. \section{Our Approach} We arrange the binary feature descriptors ${\left\{\inputdescriptor\right\}}$ extracted from each image ${\mathbb{I}}_i$ of an image sequence ${\left\{\image_i\right\}}$ in a binary tree. This tree allows us to efficiently perform descriptor matching. Additionally, we build a voting scheme on top of this method that enables fast and robust image retrieval. \subsection{Tree Construction} \label{sec:approach-tree} In our tree, each leaf ${\mathcal{L}}_i$ stores a subset ${\left\{\leafdescriptor\right\}}$ of the input descriptors ${\left\{\inputdescriptor\right\}}$. The leafs partition the input set such that each descriptor ${\descriptor_j}$ belongs to a single leaf. Every non-leaf node ${\mathcal{N}}_i$ has exactly two children and stores an index ${k_i} \in [0, .., {\dim(\descriptor)}-1]$, where ${\dim(\descriptor)}$ is the descriptor dimension, corresponding to the number of bits contained in each descriptor. We require that in each path from the root to a leaf a specific index value ${k_i}$ appears at most once. This limits the depth of the tree $h$ to the dimension of the descriptors. \figref{fig:hbst-construction} illustrates an example tree constructed from 8 binary input descriptors ${\left\{\inputdescriptor\right\}}$ according to these rules. \begin{figure}[ht!] \centering \vspace{-5pt} \includegraphics[width=\columnwidth]{hbst-construction.pdf} \vspace{-10pt} \caption{HBST tree construction for a scenario with 8 input descriptors of dimension ${\dim(\descriptor)} = 4$. The tree contains 4 nodes ${\left\{\node_i\right\}}$ (circles) with bit indices $\left\{{k_i}\right\}$, 5 leafs ${\left\{\leaf_i\right\}}$ (rectangles) and has maximum depth $h=3$.} \label{fig:hbst-construction} \vspace{-5pt} \end{figure} A descriptor ${\descriptor_j}$ is stored in the left or in the right subtree depending on ${\descriptor_j}[{k_i}]$, that is, the bit value of ${\descriptor_j}$ at index ${k_i}$. The structure of the tree is determined by the choice of the bit indices ${\left\{k_i\right\}}$ in the intermediate nodes and by the respective number of descriptors stored in the leafs ${\left\{\leaf_i\right\}}$. \subsection{Descriptor Search and Matching} \label{sec:approach-search} The most similar descriptor ${\mathbf{d}}^\star_{i,j}$ to a query ${\descriptor_q}$ is stored in a leaf ${\mathcal{L}}^\star_i$. This leaf is reached by traversing the tree, starting from the root. At each intermediate node ${\mathcal{N}}_i$ the search branches according to ${\descriptor_q}[{k_i}]$. Eventually, the search ends up in a leaf ${\mathcal{L}}^\star_i$.
At this point, all leaf descriptors ${\left\{\leafdescriptor\right\}}$ stored in ${\mathcal{L}}^\star_i$ are sequentially scanned (BF matching) to seek the best match according to~\eqref{eq:descriptor-matching}. \figref{fig:hbst-matching} illustrates two examples of the proposed search strategy. \begin{figure}[ht!] \centering \vspace{-0pt} \includegraphics[width=\columnwidth]{hbst-matching.pdf} \vspace{-10pt} \caption{Search scenarios a) and b) for a small tree of depth $h=1$. The only configuration change between the scenarios is the value of $k_1$. In this example only a single descriptor is contained in each leaf. For ${\descriptor_q}$ the best matching descriptor is ${\mathbf{d}}^\star_{1,1}$, which is found in a) but not in b).} \label{fig:hbst-matching} \vspace{-5pt} \end{figure} Organizing $N_j$ descriptors in a balanced tree of depth $h$ results in an average of $\frac{N_j}{2^h}$ descriptors per leaf. Consequently, the time complexity of a search is: \begin{equation} \mathcal{O}(h+\frac{N_j}{2^h}) \label{eq:search-complexity} \end{equation} since $h$ operations are needed to find the leaf and the descriptor matching in the leaf can be performed in $\frac{N_j}{2^h}$ steps. If a query descriptor ${\descriptor_q}$ is already contained in the tree, the search is guaranteed to correctly return the stored descriptor ${\mathbf{d}}^\star_{i,j}$. This, however, does not hold for nearest neighbor searches, when one is interested in finding the descriptor in the tree that is most similar to ${\descriptor_q}$. This is a consequence of the binary search procedure, which performs a greedy search based on the bit index ${k_i}$ at each node. Once a leaf is reached, only descriptors in that leaf are considered as potential results. Thus, in general, the nearest neighbor search in the tree is not guaranteed to be \emph{correct}. In practice, however, one is usually interested in finding a descriptor ${\mathbf{d}}_{i,j}$ that is \emph{similar enough} to ${\descriptor_q}$. Hence incorrect matches are tolerated as long as they are not too far off w.r.t. ${\descriptor_q}$, i.e. as long as ${e_\descriptor}({\descriptor_q}, {\mathbf{d}}_{i,j}) < \tau$. If we want to retrieve \emph{all} descriptors that lie within a certain distance, $\left\{{\mathbf{d}}_{i,j}:{e_\descriptor}({\descriptor_q}, {\mathbf{d}}_{i,j}) < \tau\right\}$, the search in the tree might not be \emph{complete}. Incompleteness occurs when only a subset of the feasible matches is returned from a search. If a search is complete, it is also correct. \subsection{Completeness Analysis} \label{sec:completeness-analysis} A bounded nearest neighbor search for a query descriptor ${\descriptor_q}$ and a threshold $\tau$ is said to be \emph{complete} if all possible matching descriptors $\{{\mathbf{d}}_j^{(\tau,q)} \}$ such that ${e_\descriptor}({\descriptor_q}, {\mathbf{d}}_j^{(\tau,q)}) < \tau$ are returned. Given ${\descriptor_q}$, our search procedure returns all descriptors $\{{\mathbf{d}}_{i,j}^{(\tau,q)} \}$ in the leaf ${\mathcal{L}}_i$ whose distance is below $\tau$. These matching descriptors are necessarily a subset of all feasible ones $\{{\mathbf{d}}_{i,j}^{(\tau,q)}\} \subset \{{\mathbf{d}}_j^{(\tau,q)} \}$. A straightforward measure of completeness for a \emph{single} descriptor search is: \begin{equation} {c}_\tau({\descriptor_q})=\frac{|\{{\mathbf{d}}_{i,j}^{(\tau,q)}\}|}{|\{{\mathbf{d}}_j^{(\tau,q)}\}|} \in [0,1].
\label{eq:completeness} \end{equation} Given a set of input descriptors ${\left\{\inputdescriptor\right\}}$, a set of query descriptors ${\left\{\descriptor_q\right\}}$, a search threshold $\tau$ and a search tree constructed from ${\left\{\inputdescriptor\right\}}$, we can evaluate the \emph{mean completeness} $\overline{{c}}_\tau({\left\{\descriptor_q\right\}})$ over all searches. This gives us a meaningful measure of the overall completeness of our search. Since the structure of the search tree is governed by the choice of bit indices ${\left\{k_i\right\}}$, we conducted an experiment to evaluate how the choice of ${k_i}$ influences the completeness under different thresholds $\tau$. To this end, we evaluated the resulting \emph{bitwise} mean completeness $\overline{{c}}_\tau({k_i})$ obtained by constructing trees for every possible bit index ${k_i}$. Without loss of generality we restricted our evaluation to $n={\dim(\descriptor)}$ trees $\{{\mathcal{T}}^{(n)}\}$. Each tree ${\mathcal{T}}^{(n)}$ consists of only a root node ${\mathcal{N}}^{(n)}_{1}$ with bit index $k^{(n)}_{1}=n$ and two single leafs that partition the descriptors (similar to the tree described in~\figref{fig:hbst-matching}). In~\figref{fig:completeness-split} we report the results of our bitwise completeness analysis on sequence 00 of the KITTI benchmark dataset~\cite{geiger2012we}. A broader analysis, featuring also FREAK and A-KAZE descriptors as well as many other datasets, is available on the project website. Based on the results shown in~\figref{fig:completeness-split}, two facts are evident: \begin{itemize} \item The choice of the bit index ${k_i}$ does not substantially influence the mean completeness $\overline{{c}}_\tau({k_i})$ of the nearest neighbor search. This behavior is similar for different types of binary descriptors. \item The greater the threshold $\tau$, the lower the mean completeness $\overline{{c}}_\tau({k_i})$ becomes.
\end{itemize}
\begin{figure*} \centering \begin{subfigure}{0.34\textwidth} \includegraphics[width=\columnwidth]{bit-statistics-depth-0-brief-1000-kitti-00-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}{0.34\textwidth} \includegraphics[width=\columnwidth]{bit-statistics-depth-0-orb-1000-kitti-00-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}{0.34\textwidth} \includegraphics[width=\columnwidth]{bit-statistics-depth-0-brisk-1000-kitti-00-eps-converted-to.pdf} \end{subfigure} \caption{Bitwise mean completeness $\overline{{c}}_\tau({k_i})$ for matching thresholds $\tau\in\left\{10, 25, 50, 75\right\}$ and 3 binary descriptor types. A number of $N_{\mathbf{d}}=1000$ descriptors has been extracted per image. The ground truth for this experiment consisted of 1542 image pairs, corresponding to over 1.5 million descriptors in the database. The colorization and legend based on $\tau$ is identical for all plots and can be inspected in the rightmost figure. Note that the BRISK-512 descriptor has ${\dim(\descriptor)}=512$ and therefore the considered matching threshold $\tau$ is much more restrictive with respect to BRIEF-256 and ORB-256.} \label{fig:completeness-split} \end{figure*}
Whereas the above experiment (\figref{fig:completeness-split}) only considers trees of depth $h=1$, its results can be used to predict the evolution of the completeness as the depth of the tree increases further. Let $\overline{c}_\tau$ be the average completeness over all bit indices ${\left\{k_i\right\}}$ at threshold $\tau$, for a tree having depth $1$. Performing a search on a tree of depth $h > 1$ results in applying the decision rule ${\mathbf{d}}[{k_i}]$ exactly $h$ times, and each decision results in a potential loss of completeness according to~\eqref{eq:completeness}. Assuming that $\overline{c}_\tau$ is evaluated on a representative set of query and input descriptors, we expect that $\overline{c}_\tau$ does not change significantly on other tree \emph{levels} either. A tree level is a single view of a node ${\mathcal{N}}_i$ and two leafs, which can be inspected at any depth in the tree by collapsing the left and right subtree of ${\mathcal{N}}_i$. Thus we predict the completeness at depth $h$ as: \begin{equation} \hat{c}_\tau(h)={\overline{c}_\tau}^h\in\left[0, 1\right]. \label{eq:completeness-prediction} \end{equation} For instance, if $\overline{c}_\tau=0.9$, a tree of depth $h=16$ is predicted to retain a completeness of only $0.9^{16}\approx0.19$. To confirm our conjecture, we conducted a second experiment, where we organize the input descriptor set in a sequence of balanced trees $\left\{{\mathcal{T}}^{(h)}\right\}$ with increasing depths $h=\left\{0, 1, .., 16\right\}$. We then repeated the evaluation conducted in the previous experiment (\figref{fig:completeness-split}), computing the average completeness of all queries for all depths. \figref{fig:completeness-evolution} reports the results of this evaluation.
\begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{completeness-evolution-brief-1000-kitti-00-eps-converted-to.pdf} \caption{Measured and predicted mean completeness: $\overline{c}_\tau(h)$ and $\hat{{c}}_\tau(h)$ for increasing depth $h=\left\{0, 1, .., 16\right\}$ at varying matching thresholds $\tau$. This experiment was conducted for 1542 matching images of the KITTI sequence 00 using BRIEF-256 descriptors. The values for $\overline{c}_\tau(1)$ correspond to the mean values of $\overline{c}_\tau(k)$ reported in \figref{fig:completeness-split}.} \label{fig:completeness-evolution} \end{figure}
From this analysis we further conclude that: \begin{itemize} \item The completeness $\overline{c}_\tau(h)$ decreases exponentially with increasing depth $h$ of the search tree. \item \eqref{eq:completeness-prediction} captures the relation between the depth of the tree and the completeness reasonably well. \end{itemize} Summarizing the results of both experiments, we found that the bit index ${k_i}$ does not significantly affect the mean completeness $\overline{c}_\tau$. Increasing the tree depth $h$, on the other hand, drastically reduces $\overline{c}_\tau$.
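Before turning to the construction of the tree, we summarize the data structure of \secref{sec:approach-tree} and the greedy search of \secref{sec:approach-search} in a short C++ sketch. It reuses the illustrative \texttt{Descriptor} type and \texttt{hamming} function from the earlier snippet and is, again, a simplified sketch rather than an excerpt from the released library:
\begin{verbatim}
#include <memory>
#include <vector>

struct Node {
  int bit_index = -1;                   // k_i; -1 marks a leaf
  std::unique_ptr<Node> left, right;    // children for bit value 0 / 1
  std::vector<Descriptor> descriptors;  // leaf payload {d_ij}
};

// Bit k of a descriptor, assuming the 64-bit word layout above.
inline bool bit(const Descriptor& d, int k) {
  return (d[k / 64] >> (k % 64)) & 1u;
}

// Greedy descent: h steps to reach a leaf, then brute force over its
// (on average N_j / 2^h) descriptors -- the complexity discussed above.
const Descriptor* search(const Node* node, const Descriptor& query, int tau) {
  while (node->bit_index >= 0)  // branch on d_q[k_i] at each inner node
    node = bit(query, node->bit_index) ? node->right.get() : node->left.get();
  const Descriptor* best = nullptr;
  int best_distance = tau;      // matches with e_d >= tau are rejected
  for (const Descriptor& d : node->descriptors)
    if (const int e = hamming(query, d); e < best_distance) {
      best_distance = e;
      best = &d;
    }
  return best;  // may be null: the greedy search is not guaranteed complete
}
\end{verbatim}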
\subsection{Balanced Tree Construction} \label{sec:balanced-tree-construction} In this section we describe how to organize a set of descriptors in a \emph{balanced} tree of depth $h$. Considering~\eqref{eq:search-complexity} and~\eqref{eq:completeness-prediction}, for a given threshold $\tau$, we have a trade-off between search time and completeness. Higher values of $h$ result in increased search speed at the cost of a reduced completeness. These results, however, hold only in the case of balanced trees, and both search speed and completeness will decrease when the tree becomes unbalanced. A straightforward strategy to build a balanced tree from a set of input descriptors ${\left\{\inputdescriptor\right\}}$ consists in recursively splitting the current input descriptors evenly. Since the structure of the tree is governed by the choice of ${\left\{k_i\right\}}$, to achieve an even partitioning of ${\left\{\inputdescriptor\right\}}$, for \emph{every} node ${\mathcal{N}}_i$ we choose the bit index for which the bit ${\descriptor_j}[{k_i}]$ is ``on'' for half of ${\left\{\inputdescriptor\right\}}$. The chosen bit index $k_i^\star$ will therefore be the one whose \emph{mean value} among all descriptors is \emph{closest} to 0.5: \begin{equation} k_i^\star = \argmin_{k_i} {\Big\vert} 0.5- \frac{1}{N_j} \sum_j {\descriptor_j}[{k_i}] {\Big\vert}. \label{eq:decision-rule} \end{equation} Note that when selecting ${k_i}$ we have to neglect all the indices that have already been used in the node's ancestors. If the minimized norm in~\eqref{eq:decision-rule} is below a certain threshold $\delta_{max}$, we say that the mean value is \emph{close enough} to 0.5 and pick $k_i^\star$ for splitting. If no such mean value is available, we do not split the descriptors and the recursion stops. Constructing a tree of depth $h$ for $N_j$ descriptors according to~\eqref{eq:decision-rule} has a complexity of $\mathcal{O}(N_j \cdot h)$. In typical applications such as SLAM, $N_j$ grows significantly for every new image, as new descriptors are added to the set. Therefore, constructing the tree from scratch for all descriptors of all images whenever a new image arrives quickly leads to runtimes not adequate for real-time applications. To overcome this computational limitation we propose an alternative strategy to \emph{insert} new images (i.e. descriptors) into an \emph{existing} tree. \subsection{Incremental Tree Construction} \label{sec:approach-insertion} In this section we describe an alternative strategy that allows us to augment an initial tree with additional descriptors while limiting its depth. The idea is to accumulate descriptors in a leaf until a number ${N_{max}}$ (maximum leaf size) is reached. Whereas hierarchical clustering trees~\cite{2012-muja-fast-matching} use the maximum leaf size as a termination criterion for the clustering process, we evaluate it to determine whether a clustering (i.e. splitting) is necessary. When the maximum leaf size is exceeded, we say that the leaf ${\mathcal{L}}_i$ becomes ``too large'', and we turn the leaf into an intermediate node ${\mathcal{N}}_i$. The bit index ${k_i}$ for ${\mathcal{N}}_i$ is selected according to the criterion in~\eqref{eq:decision-rule}, and the descriptors previously contained in ${\mathcal{L}}_i$ are organized in two new leafs spawning from ${\mathcal{N}}_i$. \figref{fig:hbst-insertion} illustrates the proposed procedure; a code sketch follows below.
\begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{hbst-insertion.pdf} \caption{Descriptor insertion procedure for a single input descriptor ${\descriptor_j}$. Only the affected part of the tree is shown. Step 1) The leaf ${\mathcal{L}}_i$ containing the most similar descriptor(s) to ${\descriptor_j}$ is found. Step 2) ${\descriptor_j}$ is integrated into the leaf descriptor set $\left\{{\mathbf{d}}_{4,j}\right\}$. Step 3) If the leaf becomes ``too big'': $N_{4,j}>{N_{max}}$, it breaks into two child leafs and becomes an intermediate node. In this example we set the maximum leaf size to ${N_{max}}=3$.} \label{fig:hbst-insertion} \end{figure} Notably, the tree traversal needed to find ${\mathcal{L}}_i$ is the same as for the search. This enables us to perform both search and insertion at the same time. Although this straightforward insertion technique does not guarantee a balanced tree, it succeeds in limiting the depth of the tree, as shown in~\figref{fig:leaf-evolution}. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{depths-lucia-eps-converted-to.pdf} \caption{Mean and standard deviation of the tree depth $h$ for increasing numbers of sequentially inserted images. $N_{\mathbf{d}}=1000$ BRIEF-256 descriptors were extracted for each of the 33'197 images in the dataset (resulting in over 30 million inserted descriptors). For this experiment we set $\tau=25, \delta_{max}=0.1$ and ${N_{max}}=100$.} \label{fig:leaf-evolution} \end{figure} Note that using re-balancing structures such as Red-Black trees to organize the descriptors is not straightforward in our case, since the constraint that a bit index appears at most once in a path from root to leaf would be violated by moving nodes. In our approach, no tree re-balancing is performed, as we are able to enforce a desired balance to a satisfiable degree using the parameter $\delta_{max}$ (\secref{sec:balanced-tree-construction}).
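Continuing the sketch above, a hedged C++ illustration of the insertion step, including the leaf split driven by~\eqref{eq:decision-rule}, could look as follows. For brevity it omits the $\delta_{max}$ admissibility check (every unused bit index is accepted) and splits only once per insertion, so it approximates the procedure rather than reproducing the library implementation; it additionally requires \texttt{<algorithm>} and \texttt{<cmath>}:
\begin{verbatim}
// Insert one descriptor; split a leaf once it exceeds max_size (N_max).
// 'used' collects the bit indices k_i already chosen on the path from
// the root, which may appear at most once per path.
void insert(Node* node, const Descriptor& d, std::size_t max_size,
            std::vector<int> used = {}) {
  while (node->bit_index >= 0) {  // descend exactly as in search()
    used.push_back(node->bit_index);
    node = bit(d, node->bit_index) ? node->right.get() : node->left.get();
  }
  node->descriptors.push_back(d);
  if (node->descriptors.size() <= max_size) return;

  // Leaf became "too large": pick the bit whose mean value over the
  // stored descriptors is closest to 0.5, skipping already used indices.
  int best_k = -1;
  double best_gap = 1.0;  // |0.5 - mean|, smaller is better
  for (int k = 0; k < 256; ++k) {  // 256 = dim(d) of our descriptor layout
    if (std::find(used.begin(), used.end(), k) != used.end()) continue;
    double mean = 0.0;
    for (const Descriptor& d_j : node->descriptors) mean += bit(d_j, k);
    mean /= static_cast<double>(node->descriptors.size());
    if (const double gap = std::abs(0.5 - mean); gap < best_gap) {
      best_gap = gap;
      best_k = k;
    }
  }
  if (best_k < 0) return;  // no admissible index: keep the leaf as it is

  // Turn the leaf into an intermediate node with two new leafs (a full
  // implementation may split again if one side remains too large).
  node->bit_index = best_k;
  node->left = std::make_unique<Node>();
  node->right = std::make_unique<Node>();
  for (const Descriptor& d_j : node->descriptors)
    (bit(d_j, best_k) ? node->right : node->left)->descriptors.push_back(d_j);
  node->descriptors.clear();
}
\end{verbatim}
Note that the descent is identical to the one performed by \texttt{search}, which is what allows search and insertion to be carried out in a single traversal.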
To enable image retrieval, we augment each stored descriptor with the index of the image from which it was extracted. This allows us to implement a voting scheme for image retrieval at no additional cost. \section{Experiments} \label{sec:experiments} In this section we report the results of a comparative evaluation of our approach with several state-of-the-art methods (\secref{sec:compared-approaches}). We measure the image retrieval accuracy and the runtime of each method on multiple publicly available datasets (\secref{sec:datasets}). To quantify the accuracy of image retrieval, we extract a VPR ground truth specifying which images should match for the analyzed datasets, using a brute-force offline procedure (\secref{sec:ground-truth-computation}). For space reasons and due to the high number of considered datasets, we report the achieved accuracy using the maximum $F_1$ score, which is a single number summarizing the well-known Precision-Recall curves (\secref{sec:precision-recall}). For each dataset and each approach we process the images sequentially. Every time a new image is acquired, the approaches are queried for image retrieval. Based on the reported image matches and on the provided ground truth we can then estimate precision, recall and $F_1$ score. The database is subsequently augmented by inserting the new image, so that it can be returned as a match in future queries. We gather runtime information by measuring the average time $\overline{t}$ spent for both of these operations for each image. \subsection{Compared Approaches} \label{sec:compared-approaches} Our comparison has been conducted on the following state-of-the-art image retrieval approaches: \begin{itemize} \item BF: The classical brute-force approach. We utilize the current OpenCV3 implementation. BF is expected to achieve the highest precision and recall, while requiring the highest processing time $\overline{t}$ per image. \item FLANN-LSH: We utilize the current OpenCV3 implementation of FLANN with LSH indexing. The LSH index is built using the parameters: $\mathrm{table\_number}=10$, $\mathrm{key\_size}=20$ and $\mathrm{multi\_probe\_level}=0$. \item DBoW2-DI: We used the DBoW2 approach \emph{with} Direct Indexing. Image matches are pruned based on the decreasing number of matches obtained through descriptor matching using the provided DBoW2 indices. DBoW2 was run with the parameters: $\mathrm{use\_di}=true$ and $\mathrm{di\_levels}=2$. \item DBoW2-SO: DBoW2 \emph{without} direct indexing. Accordingly the parameters are: $\mathrm{use\_di}=false$. This configuration does not report matching descriptors but only matching images (based on image Score Only). \item HBST-10: HBST is the approach proposed in this paper, with parameters: $\delta_{max}=0.1$, ${N_{max}}=10$. \item HBST-50: Same as above but with an extended maximum leaf size of ${N_{max}}=50$. HBST-50 is designed to provide a higher accuracy than HBST-10 at the price of a higher processing time $\overline{t}$. \end{itemize} For all approaches we considered a maximum descriptor matching distance of $\tau=25$ and we extracted for each image $N_{\mathbf{d}}=1000$ BRIEF-256 descriptors. All results were obtained on the same machine, running Ubuntu 16.04.3 with an Intel i7-7700K CPU@4.2GHz and 32GB of RAM@4.1GHz. A more extensive evaluation featuring various binary descriptor types (e.g. ORB, BRISK, FREAK and A-KAZE) is available on the project website.
\subsection{Datasets} \label{sec:datasets} We performed our result evaluation on 4 publicly available large-scale visual SLAM datasets: KITTI~\cite{geiger2012we}, M\'alaga~\cite{2013-blanco-malaga}, St. Lucia~\cite{warren2010} and Oxford~\cite{2017-oxford-dataset}. Each dataset contains multiple sequences with thousands of images. In \figref{fig:datasets} we show an aerial view of the robot trajectories in these sequences. For space reasons, we report in this paper only the results of KITTI and St. Lucia, which are in line with those of the other datasets. The results of M\'alaga and Oxford can be inspected on the project website. \begin{figure}[ht!] \centering \vspace{-10pt} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\columnwidth]{dataset-kitti.pdf} \caption{KITTI: 14.6 km, 15'756 images.} \vspace{-0pt} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\columnwidth]{dataset-lucia.pdf} \caption{St. Lucia: 9.5 km, 33'197 images.} \vspace{-0pt} \end{subfigure} \caption{Selected dataset sequences with acquisition trajectories in blue. Respective matching image segments (defined by the VPR ground truth of \secref{sec:ground-truth-computation}) are highlighted in green.} \label{fig:datasets} \vspace{-10pt} \end{figure} \subsection{Ground Truth Computation} \label{sec:ground-truth-computation} Obtaining the ground truth for image retrieval is a crucial aspect of our evaluation. To this end we employ a brute-force approach aided by the ground truth camera pose information available in the datasets. We report a \emph{match} (true positive) between a query ${\mathbb{I}}_q$ and an image ${\mathbb{I}}_i$ in the database, whenever \emph{all} of the following criteria are met: \begin{enumerate} \item The fields of view at which the images were acquired must overlap, and the camera positions have to be close. This occurs when two images are acquired at positions closer than 10 meters, and the optical axes of the cameras have an angular distance below 20 degrees. \item Since all approaches are designed to approximate the BF accuracy, we require that matching images are supported by a minimum number of matching descriptors. This test is passed when more than 10\% of the descriptors are within the matching threshold $\tau=25$. \item To confirm the usability of returned descriptor matches for image registration, we perform a geometric validation of the keypoint correspondences $\left<{\mathbf{p}}_q, {\mathbf{p}}_j\right>$. A correspondence $\left<{\mathbf{p}}_q, {\mathbf{p}}_j\right>$ is valid if the essential constraint ${\mathbf{p}}_q^\top\mathbf{E}{\mathbf{p}}_j=0$ approximately holds~\cite{zisserman}. \end{enumerate} The tool we used to generate such a ground truth for image matches is available online\footnote{\scriptsize Benchmark project: \url{www.gitlab.com/srrg-software/srrg_bench}}. The subset of matches that passes our criteria forms the set of \emph{ground truth matches}. \subsection{Precision, Recall and the F1 score} \label{sec:precision-recall} To determine the reliability of a place recognition approach one generally measures the resulting \emph{Precision} and \emph{Recall} statistics. The first statistic is: \begin{equation} Precision = \frac{\#~correctly~reported~associations}{\#~total~reported~associations} \in \left[0,1\right]. \notag \end{equation} Here $\#~correctly~reported~associations$ is the subset of reported matches that are also in the ground truth set, while $\#~total~reported~associations$ are all matches returned.
To evaluate the completeness we also consider: \begin{equation} Recall = \frac{\#~correctly~reported~associations}{\#~total~possible~associations} \in \left[0,1\right]. \notag \end{equation} Here $\#~total~possible~associations$ are all associations in the ground truth set. The $F_1$ score is a compact measure that combines Precision and Recall in a single value: \begin{equation} F_1 = 2\cdot\frac{Precision \cdot Recall}{Precision+Recall} \in \left[0,1\right]. \notag \end{equation} The \emph{maximum} $F_1$ score obtained by a method represents the \emph{best} tradeoff between Precision and Recall. The higher the $F_1$ score, the more accurate \emph{and} complete an approach is.
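For reference, the maximum $F_1$ score used in the remainder of this section can be obtained from a sweep over operating points (e.g. over vote thresholds), as in the following self-contained C++ sketch; the \texttt{Counts} aggregation is our own illustrative bookkeeping, not part of the benchmark tool:
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Association counts of one operating point of an approach.
struct Counts {
  std::size_t correct;   // correctly reported associations
  std::size_t reported;  // total reported associations
  std::size_t possible;  // total possible (ground truth) associations
};

// F1 score: harmonic mean of precision and recall.
double f1(const Counts& c) {
  if (c.reported == 0 || c.possible == 0) return 0.0;
  const double precision = static_cast<double>(c.correct) / c.reported;
  const double recall = static_cast<double>(c.correct) / c.possible;
  if (precision + recall == 0.0) return 0.0;
  return 2.0 * precision * recall / (precision + recall);
}

// Maximum F1 score over the precision-recall sweep.
double max_f1(const std::vector<Counts>& sweep) {
  double best = 0.0;
  for (const Counts& c : sweep) best = std::max(best, f1(c));
  return best;
}
\end{verbatim}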
\subsection{Results} \label{sec:results}
\begin{figure*} \centering \begin{subfigure}[]{0.44\textwidth} \includegraphics[width=\columnwidth]{duration-statistics-kitti-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[]{0.44\textwidth} \includegraphics[width=\columnwidth]{maximum-f1-scores-kitti-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}{0.1\textwidth} \includegraphics[width=\columnwidth]{key-benchmark-eps-converted-to.pdf} \end{subfigure} \caption{KITTI Visual Odometry/SLAM Evaluation 2012: Large-scale urban and rural environments in Germany. We acquired $N_{\mathbf{d}}=1000$ BRIEF-256 descriptors per image in every sequence. All approaches were evaluated on a single core. Note that the runtime axis is logarithmic.} \label{fig:benchmark-kitti} \end{figure*}
In \figref{fig:benchmark-kitti} we report the results of all approaches on KITTI. We observed the following for each compared approach: \begin{itemize} \item BF: Not surprisingly, BF is clearly the most accurate, at the cost of a computation time that grows linearly with the number of inserted images; this prohibits real-time execution after 10 to 20 images. \item FLANN-LSH: It generally achieves decent $F_1$ scores, between those of DBoW2-SO and HBST-10. Its high computational requirements are not adequate for a real-time application in our scenario. \item DBoW2-DI: The BoW approach achieves the best $F_1$ score after BF, at a computational cost that grows mildly with the number of images inserted. Yet it is two orders of magnitude slower than HBST. \item DBoW2-SO: The pure histogram comparison (Score Only) used with these settings leads to the poorest $F_1$ score. However, it is the fastest approach after HBST. \item HBST-10: Our approach achieves an accuracy between FLANN-LSH and DBoW2-DI, while being by far the fastest of the compared approaches. \item HBST-50: As expected, HBST-50 achieves a higher accuracy than HBST-10 while being slightly slower. \end{itemize} In \figref{fig:benchmark-lucia} we present a more detailed analysis performed on a \emph{single} sequence with 33'197 images. Here we show the Runtime and Precision-Recall curves of all approaches. FLANN-LSH and DBoW2-SO fail due to the large, incrementally built database: both report many false positives, drastically reducing accuracy. DBoW2-DI achieves acceptable accuracy, using descriptor matching to prune reported image matches. Our method (HBST-10, HBST-50) outperforms all other approaches considered in this scenario.
\begin{figure}[ht!]
\centering
% The generated gnuplot picture code of the original source is summarized here; only the embedded graphics and the recoverable panel information are kept.
% Left panel ``Runtime: St. Lucia'': processing time $t_i$ (seconds, logarithmic axis) over the image number $i$.
% Right panel ``Precision-Recall: St. Lucia'': Precision (correct/reported associations) over Recall (correct/possible associations), with curves for BF, FLANN-LSH, DBoW2-DI, DBoW2-SO, HBST-10 and HBST-50.
\includegraphics[width=\columnwidth]{time-precision-recall-lucia-eps-converted-to.pdf}
\caption{UQ St. Lucia Stereo Vehicular Dataset: Wide-ranging urban environment in Australia, with a total of 33'197 images. $N_{\mathbf{d}}=1000$ BRIEF-256 descriptors were computed for each image.}
\label{fig:benchmark-lucia}
\end{figure}
\section{Conclusions} \label{sec:conclusions} In this paper we presented a binary feature-based search tree approach for Visual Place Recognition. We conducted an analysis of the behavior of binary descriptors.
Based on this analysis, we provided an approach that addresses both descriptor matching and image retrieval. While retaining adequate accuracy, our approach significantly outperforms state-of-the-art methods in terms of computational speed. All of our results were obtained on publicly available datasets and can be reproduced with the released open-source library. \bibliographystyle{ieeetr}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} One of the joys of working in a metric space is that the closure of a set coincides with its \textit{sequential closure}. In particular, if $X$ is a metric space, $A$ is a subset of $X$, and $b$ is in the closure of $A$, then there exists a sequence of elements in $A$ which converges to $b$. In \cite{Invariant}, Simon showed that global types which are finitely satisfiable over a countable model of a countable NIP theory admit a similar property. Let $T$ be a complete, first-order theory, $\mathcal{U}$ a monster model of $T$, and $M$ a small submodel of $\mathcal{U}$. Simon proved the following (\cite[Lemma 2.8]{Invariant}): \begin{theorem}\label{sim:conv} Let $T$ be a countable NIP theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is finitely satisfiable over $M$ where $|M| = \aleph_0$. Then there exists a sequence of points $(a_{i})_{i \in \omega}$ in $M^{x}$ such that $\lim_{i\to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. \end{theorem} One of the goals of this paper is to \textit{morally} generalize the proof of the above theorem in two different directions. By mimicking Simon's proof, we are able to prove the following: \begin{enumerate}[(T$1$)] \item Let $T$ be any countable theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is generically stable over $M$. Then there exists a sequence of points $(a_i)_{i \in \omega}$ in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. \item Let $T$ be a countable NIP theory. Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$ where $|M| = \aleph_0$. Then there exists a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$. More explicitly, for any formula $\varphi(x)$ in $\mathcal{L}_{x}(\mathcal{U})$, we have that \begin{equation*} \lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i})(\varphi(x)) = \mu(\varphi(x)). \end{equation*} \end{enumerate} The proofs of both of these theorems are slightly more \textit{enjoyable} than one would anticipate. For example, we already know many diverse and useful approximation theorems for measures in NIP theories (and some for generically stable types in arbitrary theories) and so one might expect that our proofs rely on composing approximation techniques. However, stringing together different approximation methods can result in an array with some kind of \textit{modes-of-convergence} problem. As stated previously, the technique used to prove both these theorems mimics the argument used in \cite[Lemma 2.8]{Invariant}. In the generically stable case, the setup is identical: Suppose $p$ is in $S_{x}(\mathcal{U})$ where $p$ is generically stable over $M$ and $I$ is a Morley sequence in $p$ over $M$. As in Simon's proof, we use both $M$ and $I$ to find an eventually indiscernible sequence of points in $M^{x}$ which converges to $p|_{MI}$. The \textit{eventual EM-type} of this sequence over $M$ is precisely $p^{(\omega)}|_{M}$. Using generic stability and compactness, we conclude that this sequence must converge to $p$. Our proof of the Keisler measure case is slightly more exotic since there is no standard notion of a ``Morley sequence in a Keisler measure''. The proof we provide is \textit{essentially} done in first-order model theory (with an important exceptional lemma following from Ben Yaacov's work on randomizations \cite{Ben}).
We expect that there exist other proofs using other methods such as continuous model theory\footnote{In fact, after this paper was posted to arXiv, another proof was discovered by Khanaki using BFT on an infinite product space \cite{Khanaki3}.}. The proof we give here embraces the ideology first developed in \cite{HPS} and shows that the absence of such a sequence can be resolved by replacing the Morley sequence (in Simon's proof) by a \textit{smooth sequence in $\mu$ over $M$}. This provides more evidence for the intuition that smooth measures can play the role of realized types, at least in the NIP context. After constructing a countable model $N_{\omega}$ ``containing this sequence'', we find a sequence of points in $(M^{x})^{<\omega}$ such that the corresponding average measures on these tuples converge to $\mu|_{N_{\omega}}$. After constructing an eventually indiscernible subsequence in this context, we are able to readapt most of Simon's proof technique by making use of known approximation theorems, symmetry properties, and some basic integration techniques. It is interesting to note that one can give another equivalent characterization of generically stable measures in NIP theories using smooth sequences. This characterization highlights the connection between generically stable types and generically stable measures. Recall that a type $p$ is generically stable over a model $M$ if for every Morley sequence $(a_i)_{i \in \omega}$ in $p$ over $M$, $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. We show that in an NIP theory, a measure $\mu$ is generically stable over a model $M$ if and only if for every \textit{smooth sequence} in $\mu$ over $M$, the limit of this sequence is precisely $\mu$. In addition to proving these theorems, we also introduce the classes of \textit{sequentially approximated measures} and \textit{sequentially approximated types}. These definitions can be seen as the \textit{global analogue} of Khanaki's definition of \textit{Baire 1 definability} for local types (see \cite{Khanaki2}). Sequentially approximated measures should be thought of as a ``halfway point'' between finitely approximated measures and Keisler measures which are finitely satisfiable over a small model. For instance, we show that a Keisler measure is finitely approximated if and only if it is both definable and sequentially approximated (Proposition \ref{Mazur}) and that sequentially approximated measures commute with definable measures (Proposition \ref{prop:com}). Sequentially approximated types remain a little more mysterious. We show that there exists a type such that its corresponding Keisler measure is sequentially approximated (even finitely approximated), but the type itself is not sequentially approximated (Proposition \ref{Gabe}). In the last section, we consider connections to the local measure case and generalize the main result in \cite{GannNIP} (Theorem \ref{main:Gan}). Explicitly, the main result in \cite{GannNIP} demonstrates that if a formula $\varphi$ is NIP and $\mu$ is a $\varphi$-measure which is $\varphi$-definable and finitely satisfiable over a \textit{countable model}, then $\mu$ is $\varphi$-finitely approximated in said model. Here, we demonstrate that \textit{countable} can be replaced by \textit{small}. This paper is structured as follows: In section 2, we discuss preliminaries. In section 3, we describe sequentially approximated measures and sequentially approximated types.
In section 4, we show that if $p$ is generically stable over $M$, then $p$ is sequentially approximated over $M$. We also give some examples of types which are not sequentially approximated at the end of the section. In section 5, we show that if $T$ is a countable NIP theory, and $\mu$ is finitely satisfiable over a countable model $M$, then $\mu$ is sequentially approximated over $M$. We then give an equivalent characterization of generically stable measures in NIP theories using smooth sequences. In section 6, we generalize the main theorem in \cite{GannNIP}. \subsection*{Acknowledgements} We would like to thank Gabriel Conant, James Hanson, Karim Khanaki, Pierre Simon and our Ph.D. defense committee Daniel Hoffmann, Anand Pillay, Sergei Starchenko, and Minh Chieu Tran for helpful discussions and comments. Thanks also to the referee for many helpful comments. This paper was also partially supported by the NSF research grant DMS-1800806 as well as the NSF CAREER grant DMS-1651321. \section{Preliminaries} If $r$ and $s$ are real numbers and $\epsilon$ is a real number greater than $0$, then we write $r \approx_{\epsilon} s$ to mean $|r - s| < \epsilon$. Fix $\mathcal{L}$ a countable language. Throughout this paper, we always have a countable, complete, first-order theory $T$ and a monster model $\mathcal{U}$ of $T$ in the background. The letters $M$ and $N$ will be used to denote small elementary submodels of $\mathcal{U}$. The letters $x,y,z$ will denote tuples of variables. If $A \subseteq \mathcal{U}$, we let $\mathcal{L}(A)$ be the collection of formulas with parameters from $A$ (modulo logical equivalence). A formula in $\mathcal{L}(A)$ is called an ``$\mathcal{L}(A)$-formula''. If $x_0,...,x_k$ is a finite sequence of pairwise disjoint tuples of variables, we let $\mathcal{L}_{x_0,...,x_k}(A)$ be the collection of $\mathcal{L}(A)$-formulas with free variables in these tuples. We write $\mathcal{L}_{x_{0},...,x_{k}}(\emptyset)$ simply as $\mathcal{L}_{x_{0},...,x_{k}}$. If $(x_i)_{i \in \omega}$ is a countable sequence of pairwise distinct tuples of variables, we let $\mathcal{L}_{(x_i)_{i \in \omega}}(A) = \bigcup_{k \in \omega} \mathcal{L}_{x_0,...,x_k}(A)$. For a tuple $x$, let $A^{x}= \{(a_0,...,a_{|x|-1}): a_i \in A, i \leq |x|-1\}$. We let $(A^{x})^{<\omega}$ be the collection of all finite sequences of points in $A^{x}$. If we call $\varphi(x,y)$ a \textit{partitioned $\mathcal{L}_{x,y}(\mathcal{U})$-formula}, we treat $x$ as object variables and $y$ as parameter variables. The formula $\varphi^{*}(y,x)$ denotes the exact same formula as $\varphi(x,y)$, but with the roles exchanged for parameters and object tuples. Generally speaking, in any instance where we have multiple tuples of variables (e.g. $x$ and $y$, or $(x_1,x_2,x_3,...)$), we will always assume they are pairwise distinct without comment. \textbf{Unlike similar papers about Keisler measures, we do not identify a type and its corresponding Keisler measure}. We let $S_{x}(A)$ denote the usual type space over $A$ and $\mathfrak{M}_{x}(A)$ the space of Keisler measures over $A$. We let $\mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$ be the collection of finitely additive probability measures on $\mathcal{L}_{(x_{i})_{i \in \omega}}(\mathcal{U})$. For any (tuple of) variable(s) $x$, and any subset $A \subseteq \mathcal{U}$, we have a map $\delta: S_{x}(A) \to \mathfrak{M}_{x}(A)$ via $\delta(p) = \delta_{p}$ where $\delta_{p}$ is the \textit{Dirac measure at the type $p$}. We sometimes refer to $\delta_{p}$ as the \textit{corresponding Keisler measure} of $p$. If $\overline{a} = (a_1,...,a_n)$ is a sequence of points in $\mathcal{U}^{x}$, then we let $\operatorname{Av}(\overline{a})$ be the associated average measure in $\mathfrak{M}_{x}(\mathcal{U})$. Explicitly, for any $\psi(x) \in \mathcal{L}_{x}(\mathcal{U})$, we define \begin{equation*}\operatorname{Av}(\overline{a})(\psi(x)) = \frac{|\{1\leq i \leq n: \mathcal{U} \models \psi(a_i)\}|}{n}. \end{equation*}
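To illustrate the two constructions above: for any $\varphi(x) \in \mathcal{L}_{x}(A)$ we have $\delta_{p}(\varphi(x)) = 1$ if $\varphi(x) \in p$ and $\delta_{p}(\varphi(x)) = 0$ otherwise. Likewise, if $\overline{a} = (a_1,a_2,a_3)$ and exactly two of the $a_i$ satisfy $\psi(x)$, then $\operatorname{Av}(\overline{a})(\psi(x)) = \frac{2}{3}$.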
\subsection{Basics of convergence} Recall that if $A \subseteq \mathcal{U}$, then both $S_{x}(A)$ and $\mathfrak{M}_{x}(A)$ carry a natural compact Hausdorff topology. For $S_{x}(A)$, we have the usual Stone space topology. The topology on $\mathfrak{M}_{x}(A)$ can be described in two ways. First, it is the topology induced from the compact Hausdorff space $[0,1]^{\mathcal{L}_{x}(A)}$ where we identify each measure with the obvious map from $\mathcal{L}_{x}(A)$ to $[0,1]$. This topology on $\mathfrak{M}_{x}(A)$ can also be described as the coarsest topology such that for any continuous function $f: S_{x}(A) \to \mathbb{R}$, the map $\int f : \mathfrak{M}_{x}(A) \to \mathbb{R}$ is continuous. We will routinely need to keep track of which sets of parameters our types and measures are converging over. Hence, we establish the following conventions. \begin{definition} Fix $A \subseteq \mathcal{U}$, $p \in S_{x}(A)$ and $\mu \in \mathfrak{M}_{x}(A)$. \begin{enumerate}[$(i)$] \item We say that a sequence of types $(p_{i})_{i \in \omega}$, where each $p_i$ is in $S_{x}(A)$, \textbf{converges} to $p$ if it converges in the Stone space topology on $S_{x}(A)$, which we write as ``$\lim_{i \to \infty} p_i = p$ in $S_{x}(A)$'' or simply as ``$\lim_{i \to \infty} p_i = p$'' when the underlying space is obvious. We recall that $\lim_{i \to \infty} p_i = p$ if for every $\psi(x) \in p$, there exists some natural number $N_{\psi}$ such that for any $n > N_{\psi}$, $\psi(x) \in p_n$. \item We say that a sequence of measures $(\mu_i)_{i \in \omega}$, where each $\mu_{i}$ is in $\mathfrak{M}_{x}(A)$, \textbf{converges} to $\mu$ if this sequence converges in the compact Hausdorff topology on $\mathfrak{M}_{x}(A)$, which we write as ``$\lim_{i \to \infty} \mu_{i} = \mu$ in $\mathfrak{M}_{x}(A)$'' or simply as ``$\lim_{i \to \infty} \mu_i = \mu$'' when there is no possibility of confusion. Notice that $\lim_{i \to \infty} \mu_i = \mu$ if for every $\psi(x) \in \mathcal{L}_{x}(A)$ and $\epsilon >0$, there exists some natural number $N_{\psi,\epsilon}$ such that for any $n > N_{\psi,\epsilon}$, \begin{equation*} |\mu_{n}(\psi(x)) - \mu(\psi(x))| < \epsilon. \end{equation*} \end{enumerate} \end{definition} We now observe the relationship between finitely satisfiable types and measures and topological closure in their respective spaces. \begin{fact}\label{Avcls} Suppose $p \in S_{x}(\mathcal{U})$, $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. Assume that $p$ and $\mu$ are finitely satisfiable over $M$. Then the following are true. \begin{enumerate}[($i$)] \item The type $p$ is in the closure of $\{\operatorname{tp}(a/\mathcal{U}): a \in M^{x}\}$ in $S_{x}(\mathcal{U})$. \item The associated Keisler measure $\delta_{p}$ is in the closure of $\{\delta_{a}: a \in M^{x}\}$ in $\mathfrak{M}_{x}(\mathcal{U})$.
\item The measure $\mu$ is in the closure of \begin{equation*} \Big\{\sum_{i=1}^{n} r_i \delta_{a_i}: n \in \mathbb{N}, r_i > 0, \sum_{i=1}^{n} r_i =1, a_i \in M^{x}\Big\} \end{equation*} in $\mathfrak{M}_{x}(\mathcal{U})$. \item The measure $\mu$ is in the closure of $\{\operatorname{Av}(\overline{a}): \overline{a} \in (M^{x})^{<\omega}\}$ in $\mathfrak{M}_{x}(\mathcal{U})$. \end{enumerate} \end{fact} We remark that the proof of $(i)$ is a standard exercise and the proof of $(ii)$ follows directly from $(i)$. A proof of $(iii)$ can be found at \cite[Proposition 2.11]{ChGan} and $(iv)$ follows directly from $(iii)$. \subsection{Types} We recall some basic definitions and facts about special kinds of types (e.g. generically stable types). Our notion of an \textit{EM-type} is not defined in complete generality since we are only concerned with countable sequences in this paper. \begin{definition} Let $(a_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and let $B \subseteq \mathcal{U}$. Then the \textbf{Ehrenfeucht-Mostowski type} or \textbf{EM-type} of the sequence $(a_{i})_{i \in \omega}$ over $B$, denoted $\operatorname{EM}((a_{i})_{i \in \omega }/B)$, is the following partial type: \begin{equation*} \{\varphi(x_0,...,x_k) \in \mathcal{L}_{(x_i)_{i \in \omega}}(B): \mathcal{U} \models \varphi(a_{i_{0}},...,a_{i_{k}}) \text{ for any } i_0 <...<i_{k} \}. \end{equation*} We remark that this partial type corresponds to a subset of $S_{(x_{i})_{i \in \omega}}(B)$. \end{definition} \begin{observation} It is clear from the definition above that for any sequence of points $(a_{i})_{i \in \omega}$ in $\mathcal{U}^{x}$ and any $B \subseteq \mathcal{U}$, the type $\operatorname{EM}((a_{i})_{i \in \omega }/B)$ is complete if and only if the sequence $(a_{i})_{i \in \omega}$ is indiscernible over $B$. \end{observation} The general notion of a \textit{generically stable type} was introduced by Pillay and Tanovi\'{c} in \cite{PiTa}. The definition of a generically stable type provided below was proved to be equivalent in \cite{CoGan} (see Proposition 3.2). We also provide the definition of a $\operatorname{dfs}$ type which will be important throughout this paper. In general, the class of $\operatorname{dfs}$ types strictly contains the class of generically stable types. \begin{definition} Suppose that $p \in S_{x}(\mathcal{U})$. \begin{enumerate}[$(i)$] \item We say that $p$ is \textbf{dfs} if there exists a small model $M \prec \mathcal{U}$ such that $p$ is both definable and finitely satisfiable over $M$. In this case, we say that $p$ is \textbf{dfs over $M$}. \item We say that $p$ is \textbf{generically stable} if there exists a small model $M \prec \mathcal{U}$ such that $p$ is invariant over $M$ and for any Morley sequence $(a_i)_{i \in \omega}$ in $p$ over $M$, we have that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. In this case, we say that $p$ is \textbf{generically stable over $M$}. \end{enumerate} \end{definition} Finally, we provide a collection of standard facts about these classes of types. \begin{fact}\label{gfs:facts} Let $p$ be in $S_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $p$ is generically stable over $M$, then $p$ is $\operatorname{dfs}$ over $M$ $($\cite[Proposition 1]{PiTa}$)$. \item If $p$ is $\operatorname{dfs}$ over $M$, then any Morley sequence in $p$ over $M$ is totally indiscernible over $M$ $($\cite[Proposition 3.2]{HP}, proof does not use NIP$)$. 
\item If $p$ is generically stable/$\operatorname{dfs}$ over $M$ and $M_0$-invariant, then $p$ is respectively generically stable/$\operatorname{dfs}$ over $M_0$ $($generically stable case follows from $(i)$ of \cite[Proposition 1]{PiTa}; $\operatorname{dfs}$ case can be found in \cite[Lemma 2.8]{Sibook}$)$. \item $($T is countable$)$ If $p$ is generically stable/$\operatorname{dfs}$ over $M$, there exists an elementary submodel $M_0$ such that $|M_0| = \aleph_0$ and $p$ is generically stable/$\operatorname{dfs}$ over $M_0$ $($Easy to check from $(iii)$$)$. \item $($T is NIP$)$ If $p$ is $\operatorname{dfs}$ over $M$ then $p$ is generically stable over $M$ $($e.g. \cite[Theorem 2.29]{Sibook}$)$. \end{enumerate} \end{fact} \subsection{Keisler measures} In this subsection, we will briefly recall some important definitions and facts about these measures. As with any paper about Keisler measures, we provide the following \textit{standard atlas}. \begin{definition} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. \begin{enumerate}[($i$)] \item $\mu$ is \textbf{invariant} if there exists a model $M \prec \mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and $b,b' \in \mathcal{U}^{y}$ such that $b \equiv_{M} b'$, $\mu(\varphi(x,b)) = \mu(\varphi(x,b'))$. In this case, we say that $\mu$ is \textbf{$M$-invariant} or \textbf{invariant over $M$}. \item If $\mu$ is invariant over $M$, then for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$, we can define the map $F_{\mu,M}^{\varphi}:S_{y}(M) \to [0,1]$ via $F_{\mu,M}^{\varphi}(q) = \mu(\varphi(x,b))$ where $b \models q$. When $M$ is obvious we will simply write $F_{\mu,M}^{\varphi}$ as $F_{\mu}^{\varphi}$. \item $\mu$ is \textbf{Borel-definable} if there exists a model $M \prec \mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is Borel. In this case, we say that $\mu$ is \textbf{Borel-definable over $M$}. \item $\mu$ is \textbf{definable} if there exists a model $M \prec \mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is continuous. In this case, we say that $\mu$ is \textbf{$M$-definable} or \textbf{definable over $M$}. \item $\mu$ is \textbf{finitely satisfiable over a small model} if there exists $M \prec \mathcal{U}$ such that for every formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$, if $\mu(\varphi(x)) > 0$ then there exists $a \in M^{x}$ such that $\mathcal{U} \models \varphi(a)$. In this case, we say that $\mu$ is \textbf{finitely satisfiable over $M$}. \item $\mu$ is \textbf{finitely approximated} if there exists a model $M \prec \mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and every $\epsilon > 0$, there exists $\overline{a} \in (M^{x})^{<\omega}$ such that \begin{equation*} \sup_{b \in \mathcal{U}^{y}} |\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a})(\varphi(x,b))| < \epsilon. \end{equation*} In this case, we say that $\mu$ is \textbf{finitely approximated over $M$}. \item $\mu$ is \textbf{smooth} if there exists a model $M \prec \mathcal{U}$ such that for any $\lambda \in \mathfrak{M}_{x}(\mathcal{U})$ if $\lambda|_{M} = \mu|_{M}$, then $\lambda = \mu$. If this is the case, we say that $\mu$ is \textbf{smooth over $M$}. \end{enumerate} \end{definition} We now provide a collection of basic facts. 
Statements $(i)$, $(iii)$, $(iv)$, and $(v)$ in Fact \ref{KM:imp} are relatively straightforward to prove and so we leave them as exercises. \begin{fact}\label{KM:imp} Assume that $T$ is any theory and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ with $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $\mu = \operatorname{Av}(\overline{a})$ for some $\overline{a} \in (M^{x})^{<\omega}$, then $\mu$ is smooth over $M$. \item If $\mu$ is smooth over $M$, then $\mu$ is finitely approximated over $M$ $($e.g. \cite[Proposition 7.10]{Sibook}$)$. \item If $\mu$ is finitely approximated over $M$, then $\mu$ is both definable and finitely satisfiable over $M$. \item If $\mu$ is definable or finitely satisfiable over $M$, then $\mu$ is $M$-invariant. \item The measure $\mu$ is definable over $M$ if and only if for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$ and for every $\epsilon > 0$, there exist formulas $\psi_{1}(y),...,\psi_{n}(y) \in \mathcal{L}_{y}(M)$ and real numbers $r_1,...,r_n \in [0,1]$ such that \begin{equation*} \sup_{q \in S_{y}(M)} | F_{\mu,M}^{\varphi}(q) - \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_i(y)}(q)| < \epsilon, \end{equation*} where $\mathbf{1}_{\psi_{i}(y)}$ is the characteristic function of the clopen set $[\psi_{i}(y)]$. \end{enumerate} Moreover, if $T$ is NIP then the following also hold. \begin{enumerate}[$(vi)$] \item If $\mu$ is invariant over $M$, then $\mu$ is Borel-definable over $M$ $($e.g. \cite[Proposition 7.19]{Sibook}$)$. \item A measure $\mu$ is definable and finitely satisfiable over $M$ if and only if $\mu$ is finitely approximated over $M$ $($\cite[Proposition 3.2]{HPS}$)$. \item Every measure has a ``smooth extension''. In particular, for any given $M \prec \mathcal{U}$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, there exists some $N$ such that $M \prec N \prec \mathcal{U}$ and a measure $\lambda \in \mathfrak{M}_{x}(\mathcal{U})$ such that $\lambda$ is smooth over $N$ and $\lambda|_{M} = \mu|_{M}$ $($\cite[Lemma 2.2]{HPS}$)$. \end{enumerate} \end{fact} \begin{proposition}[T is countable]\label{m:countable} If $\mu$ is definable, finitely approximated, smooth or $\operatorname{dfs}$, then there exists a countable model $M_0$ such that $\mu$ is definable, finitely approximated, smooth or $\operatorname{dfs}$ over $M_0$ $($respectively$)$. \end{proposition} \begin{proof} We notice that the properties of definability and smoothness only require the existence of $\aleph_0$-many $\mathcal{L}(M)$-formulas (by \cite[Lemma 2.3]{HPS} and (v) of Fact \ref{KM:imp} respectively). If we choose an elementary submodel $M_0$ of $M$ containing the parameters from these formulas, then $\mu$ will have the desired property over $M_0$. Finitely approximated measures only require the existence of $\aleph_0$-many elements of $M$. Choosing an elementary submodel $M_0$ of $M$ with these elements demonstrates that $\mu$ is finitely approximated over $M_0$. Finally, if $\mu$ is $\operatorname{dfs}$ then $\mu$ is definable over a countable model $M_{0}$. In particular, $\mu$ is invariant over $M_0$ and so $\mu$ is also finitely satisfiable over $M_0$ by the same argument as in \cite[Proposition 4.13]{GannNIP}. \end{proof} \begin{remark} Assuming $T$ is countable, there are measures (even types) which are finitely satisfiable over a small submodel, but are not finitely satisfiable over any countable submodel. See Proposition \ref{omega} and Remark \ref{example:coheir1} for an explicit example.
\end{remark} \begin{definition} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, $\nu \in \mathfrak{M}_{y}(\mathcal{U})$ and assume that $\mu$ is Borel-definable over $M$. Then the \textbf{Morley product} of $\mu$ and $\nu$ (denoted $\mu \otimes \nu$) is the unique Keisler measure in $\mathfrak{M}_{x,y}(\mathcal{U})$ with the following property: for any formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$, \begin{equation*} \mu \otimes \nu (\varphi(x,y)) = \int_{S_{y}(N)} F_{\mu}^{\varphi} d(\nu|_{N}), \end{equation*} where $N$ is any small elementary submodel of $\mathcal{U}$ containing $M$ and any parameters from $\varphi$, and $\nu|_{N}$ is the associated regular Borel probability measure on the type space $S_{y}(N)$ of the restriction of $\nu$ to $N$. \end{definition} We remark that this product is well-defined and the computation does not depend on our choice of $N$ (assuming $N$ contains $M$ and all parameters in $\varphi(x,y)$) (see the discussion after \cite[Proposition 7.19]{Sibook}). This observation allows us to grow or shrink the space over which we are integrating, and we will make substantial use of this property in section 5. We end this section with a list of facts about measures and products. \begin{fact}\label{KM:imp2} Assume that $T$ is any theory and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, $\nu \in \mathfrak{M}_{y}(\mathcal{U})$, and $\lambda \in \mathfrak{M}_{z}(\mathcal{U})$. Assume that $\mu$ and $\nu$ are both $M$-invariant. \begin{enumerate}[$(i)$] \item If $\mu$ is smooth and $\nu$ is Borel definable, then $\mu \otimes \nu = \nu \otimes \mu$ $($see \cite[Corollary 2.5]{HPS}$)$. \item If $\mu$ and $\nu$ are definable (over $M$), then $\mu \otimes \nu$ is definable (over $M$) and $\mu \otimes (\nu \otimes \lambda) = (\mu \otimes \nu) \otimes \lambda$ $($see \cite[Proposition 2.6]{CoGan}$)$. \item If $\mu$ and $\nu$ are smooth (over $M$), then $\mu \otimes \nu$ is smooth (over $M$) $($e.g. \cite[Corollary 3.1]{CoGaNA}$)$. \item If $\mu$ is Borel definable (over $M$) and $\nu$ is invariant (over $M$), then $\mu \otimes \nu$ is invariant (over $M$) $($discussion before \cite[Exercise 7.20]{Sibook}$)$. \item If $\mu$ and $\nu$ are $\operatorname{dfs}$ (over $M$), then $\mu \otimes \nu$ is $\operatorname{dfs}$ (over $M$) $($e.g. \cite[Proposition 2.10]{CoGan}$)$. \end{enumerate} Moreover, if $T$ is NIP then the following also hold. \begin{enumerate}[$(a)$] \item If $\mu,\nu$ are invariant then $\mu \otimes (\nu \otimes \lambda) = (\mu \otimes \nu) \otimes \lambda$ $($see \cite{CoGaNA}$)$. \item If $\mu$ is $\operatorname{dfs}$ and $\nu$ is invariant, then $\mu \otimes \nu = \nu \otimes \mu$ $($see \cite[Theorem 3.2]{HPS}$)$. \end{enumerate} \end{fact} \begin{definition}[T is NIP]\label{prod:inf} Suppose that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is invariant. Then, we define the following measures: \begin{enumerate} \item $\mu^{(0)}(x_0) = \mu(x_0)$. \item $\mu^{(n)} = \mu(x_{n}) \otimes \mu^{(n-1)}(x_0,...,x_{n-1})$. \item $\mu^{(\omega)} = \bigcup_{n \in \omega} \mu^{(n)}$ (where $\mu^{(\omega)} \in \mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$). \end{enumerate} We note that $\mu^{(n)}$ and $\mu^{(\omega)}$ are well-defined by Fact \ref{KM:imp2}, and moreover we do not need to worry about the ordering of the parentheses in the product. \end{definition}
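As a small sanity check relating the Morley product of measures to the Morley product of types (recorded only as an illustration): if $p \in S_{x}(\mathcal{U})$ and $q \in S_{y}(\mathcal{U})$ are $M$-invariant and $p$ is moreover Borel-definable over $M$ (so that the left-hand side is defined), then $\delta_{p} \otimes \delta_{q} = \delta_{p \otimes q}$. Indeed, $F_{\delta_{p}}^{\varphi}$ is the characteristic function of the set of types $q' \in S_{y}(N)$ such that $\varphi(x,b) \in p$ for some (equivalently, any) $b \models q'$, and integrating this function against the Dirac measure $(\delta_{q})|_{N}$ simply evaluates it at $q|_{N}$.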
\section{Sequentially approximated types and measures} We begin this section by isolating the property of \textit{sequential approximability}. We again remark that these classes of objects are a global version of Khanaki's \textit{Baire 1 definability} \cite{Khanaki2}. We assume that $T$ is countable, but make no other global assumptions about $T$. As usual, $\mathcal{U}$ is a fixed sufficiently saturated model of $T$. We now define sequentially approximated types and measures. \begin{definition}\label{SA} Let $p \in S_{x}(\mathcal{U})$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. We say that: \begin{enumerate} \item $p$ is \textbf{sequentially approximated} if there exists $M \prec \mathcal{U}$ and a sequence of points $(a_i)_{i \in \omega}$ in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$ in $S_{x}(\mathcal{U})$. In this case, we say $p$ is \textbf{sequentially approximated over $M$}. \item $\mu$ is \textbf{sequentially approximated} if there exists $M \prec \mathcal{U}$ and a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. In this case, we say $\mu$ is \textbf{sequentially approximated over $M$}. \end{enumerate} \end{definition} We warn the reader that Definition \ref{SA} is only meaningful in the context of types and measures over large models. Indeed, if $M$ is a countable model and $T$ is a countable theory, then for every $p \in S_{x}(M)$, there exists a sequence of points $(a_{i})_{i \in \omega}$ in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_{i}/M) = p$ in $S_{x}(M)$. The analogous statement also holds for measures. We also emphasize to the reader that there is a real distinction between a type $p$ being sequentially approximated over a model $M$ and its associated Keisler measure $\delta_{p}$ being sequentially approximated over $M$. Proposition \ref{Gabe} gives an example of a type which is not sequentially approximated while its associated Keisler measure is sequentially approximated. However, the other implication holds almost trivially. \begin{observation}\label{forward:easy} If a type $p$ in $S_{x}(\mathcal{U})$ is sequentially approximated over a model $M$, then the associated Keisler measure $\delta_p$ is sequentially approximated over $M$. \end{observation} \begin{proof} If $\lim_{i \to \infty}\operatorname{tp}(a_i/\mathcal{U}) = p$ in $S_{x}(\mathcal{U})$, then $\lim_{i \to \infty} \delta_{a_{i}} = \delta_{p}$ in $\mathfrak{M}_{x}(\mathcal{U})$ since $\delta: S_{x}(\mathcal{U}) \to \mathfrak{M}_{x}(\mathcal{U})$ is a topological embedding. \end{proof} \subsection{Basic properties} We now connect sequentially approximated types and measures to standard model-theoretic properties. For the reader's intuition, sequential approximability (at least in the case of measures) should be thought of as a strong version of finite satisfiability over a small model or a weak version of finite approximability. Sequentially approximated types remain a little more mysterious. \begin{proposition}\label{finitesat} Assume that $p \in S_{x}(\mathcal{U})$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. \begin{enumerate}[($i$)] \item If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are finitely satisfiable over $M$. Even more, $p$ and $\mu$ are finitely satisfiable over a countable elementary submodel of $M$. \item If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are Borel-definable over $M$. \item If $\mu$ is finitely approximated over $M$, then $\mu$ is sequentially approximated over $M$.
$($Warning: In general, this fails for types.$)$ \item If $T$ is NIP, then $p$ is sequentially approximated over $M$ if and only if $\delta_{p}$ is sequentially approximated over $M$. \item Assume that $k \subseteq \{1,2,...,n\}$ and let $\pi_{k}:S_{n}(\mathcal{U}) \to S_{k}(\mathcal{U})$ and $\rho_{k}:\mathfrak{M}_{n}(\mathcal{U}) \to \mathfrak{M}_{k}(\mathcal{U})$ be the obvious projection maps. If $p \in S_{n}(\mathcal{U})$ and $p$ is sequentially approximated over $M$, then $\pi_{k}(p)$ is sequentially approximated over $M$. Similarly, if $\mu \in \mathfrak{M}_{n}(\mathcal{U})$ is sequentially approximated over $M$ then so is $\rho_{k}(\mu)$. \end{enumerate} \end{proposition} \begin{proof} We prove the claims. \begin{enumerate}[($i$)] \item The first part of $(i)$ is obvious. For the second part, we only need to choose a submodel containing a sequence which sequentially approximates our type or measure. Since $T$ is countable, we can choose a countable model. \item The proofs for both the type and measure cases are similar, so we prove the measure case. Assume that $(\overline{a}_{i})_{i \in \omega}$ is a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. By part $(i)$, $\mu$ is finitely satisfiable over $M$ and hence $M$-invariant. So, for any partitioned formula $\varphi(x,y)$ in $\mathcal{L}$, the map $F_{\mu}^{\varphi}:S_{y}(M) \to [0,1]$ is well-defined. By sequential approximability, the sequence of continuous functions $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Hence, $F_{\mu}^{\varphi}$ is Baire-1 (and therefore Borel). \item This follows from an encoding argument. Let $(\varphi_{n}(x,y_{n}))_{n \in \omega}$ be an enumeration of the partitioned $\mathcal{L}$-formulas. For each $n \in \mathbb{N}$, consider the partitioned formula $\theta_{n}(x;y_0,...,y_n,z_{*},z_0,...,z_n)$ where $|z_*| = |z_i| = 1$ and \begin{equation*} \theta_{n}(x;\bar{y},\bar{z}) : = \bigwedge_{i \leq n}\left( \left( z_* = z_i \wedge \bigwedge_{\substack{j \leq n \\ j \neq i }} z_{j} \neq z_{*} \right) \to \varphi_{i}(x,y_i) \right). \end{equation*} Since $\mu$ is finitely approximated over $M$, for $\epsilon = \frac{1}{n}$, there exists some $\overline{a}_n$ in $(M^{x})^{<\omega}$ such that for every $(\bar{b},\bar{c}) \in \mathcal{U}^{\bar{y}\bar{z}}$, \begin{equation*} |\operatorname{Av}(\overline{a}_n)(\theta_n(x,\bar{b},\bar{c})) - \mu(\theta_n(x,\bar{b},\bar{c}))| < \epsilon. \end{equation*} Notice that $\theta_{n}(x;\bar{y},\bar{z})$ encodes the definable sets which are obtained by the formulas $\varphi_{0}(x,y_0),...,\varphi_{n}(x,y_n)$. In particular, for every $b \in \mathcal{U}^{y_{j}}$ where $j \leq n$, consider the tuple $(\bar{d}_{b},\bar{c}_j) = (d_{0},...,d_{j-1},b,d_{j+1},...,d_n,c_*,c_0,...,c_n)$ where the $d_{i}$'s are arbitrary and $c_* = c_l$ if and only if $l = j$. Then \begin{equation*} |\operatorname{Av}(\overline{a}_{n})(\varphi_{j}(x,b)) - \mu(\varphi_{j}(x,b))| = |\operatorname{Av}(\overline{a}_{n})(\theta_n(x,\bar{d}_{b},\bar{c}_j)) - \mu(\theta_n(x,\bar{d}_{b},\bar{c}_j))|. \end{equation*} So for any $j \leq n$ and $b \in \mathcal{U}^{y_{j}}$, \begin{equation*} |\operatorname{Av}(\overline{a}_n)(\varphi_j(x,b)) - \mu(\varphi_j(x,b))| < \frac{1}{n}. \end{equation*} It is clear that $\lim_{n\to \infty} \operatorname{Av}(\overline{a}_{n}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$.
\item The forward direction is Observation \ref{forward:easy}. We consider the converse. If $\delta_{p}$ is sequentially approximated over $M$ then $\delta_{p}$ is finitely satisfiable over a countable submodel $M_0$ by $(i)$ above. Then $p$ is finitely satisfiable over $M_0$ and so by Theorem \ref{sim:conv}, $p$ is sequentially approximated over $M_0$ (and also over $M$). \item Simply consider the approximating sequence restricted to the appropriate coordinates. \qedhere \end{enumerate} \end{proof} \begin{proposition}\label{Mazur} A measure $\mu$ is sequentially approximated and definable over $M$ if and only if $\mu$ is finitely approximated over $M$. \end{proposition} \begin{proof} We first prove the forward direction. The proof is similar to the proof of \cite[Theorem 4.8]{GannNIP}. Fix $\epsilon > 0$. For any partitioned $\mathcal{L}$-formula $\varphi(x,y)$, consider the map $F_{\mu}^{\varphi}:S_{y}(M) \to [0,1]$. Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Observe that each map $F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}:S_{y}(M) \to [0,1]$ is continuous and the sequence $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Since $\mu$ is definable, the map $F_{\mu}^{\varphi}$ is continuous. By the Riesz representation theorem and the dominated convergence theorem, we have that $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges weakly to $F_{\mu}^{\varphi}$ in $C(S_{y}(M))$. By a standard application of Mazur's lemma, there exists a sequence of functions $(g_j)_{j \in \omega}$ such that each $g_j$ is a rational convex combination of $\{F_{\operatorname{Av}(\overline{a}_i)}^{\varphi}: i \leq n_{j}\}$ for some natural number $n_{j}$ and the sequence $(g_j)_{j \in \omega}$ converges uniformly to $F_{\mu}^{\varphi}$. Choose $m \in \mathbb{N}$ so that \begin{equation*} \sup_{p \in S_{y}(M)}|F_{\mu}^{\varphi}(p) - g_m(p)| < \epsilon. \end{equation*} By construction, $g_m = F_{\operatorname{Av}(\overline{c})}^{\varphi}$ for some $\overline{c} \in (M^{x})^{< \omega}$. Notice that \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{c})(\varphi(x,b))| < \epsilon. \end{equation*} For the converse, $\mu$ is definable over $M$ by $(iii)$ of Fact \ref{KM:imp}. Moreover, $\mu$ is sequentially approximated over $M$ by $(iii)$ of Proposition \ref{finitesat}. \end{proof} We now show that sequentially approximated measures commute with definable measures. It is well-known that in the context of NIP theories definable measures commute with measures which are finitely satisfiable over a small model (see \cite[Lemma 3.1]{HPS} or \cite[Proposition 7.22]{Sibook}). Recently, it was shown that in general, measures which are finitely satisfiable over a small model (even $\operatorname{dfs}$ measures) do not always commute with definable measures (see \cite[Proposition 7.14]{CGH}). We first present a topological proof (in NIP theories) which shows that measures which are finitely satisfiable over a small model commute with definable measures. We will then modify this proof (by replacing an instance of continuity by the dominated convergence theorem) to show that sequentially approximated measures commute with definable ones in any theory. Recall the following facts.
\begin{fact}\label{cont:meas} Let $\nu \in \mathfrak{M}_{y}(\mathcal{U})$, $N \prec \mathcal{U}$, and $\varphi(x,y)$ be an $\mathcal{L}_{x,y}(N)$ formula. Let $\mathfrak{M}_{x}(\mathcal{U},N)$ denote the collection of measures in $\mathfrak{M}_{x}(\mathcal{U})$ which are finitely satisfiable over $N$. \begin{enumerate}[($i$)] \item If $\nu$ is definable over $N$, then the map from $\mathfrak{M}_{x}(\mathcal{U})$ to $[0,1]$ defined via $\mu \to \nu \otimes \mu(\varphi(x,y))$ is continuous $($\cite[Lemma 5.4]{CGH}$)$. \item $($T is NIP$)$ If $\nu$ is any measure, then the map from $\mathfrak{M}_{x}(\mathcal{U},N)$ to $[0,1]$ defined via $\mu \to \mu \otimes \nu(\varphi(x,y))$ is well-defined and continuous $($\cite[Proposition 6.3]{ChGan}$)$. \end{enumerate} \end{fact} We remark that statement $(ii)$ of Fact \ref{cont:meas} requires NIP for two reasons. First, it is not true in general that measures which are finitely satisfiable over a small model are Borel definable. In NIP theories, this is true ($(vi)$ of Fact \ref{KM:imp}). Secondly, the proof that this map is continuous relies on the existence of a smooth extension of $\nu|_N$. Without NIP, this map need not be continuous. The first proof of the following proposition can be found in \cite{HPS}. \begin{proposition}[T is NIP] Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely satisfiable over a small model and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{proposition} \begin{proof} Fix a formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is finitely satisfiable over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Since $\mu$ is finitely satisfiable over $N$, there exists a net of measures $(\operatorname{Av}(\overline{a}_i))_{i \in I}$ such that each $\overline{a}_{i} \in (N^{x})^{< \omega}$ and $\lim_{i \in I} \operatorname{Av}(\overline{a}_i) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$ ($(iv)$ of Fact \ref{Avcls}). By Fact \ref{cont:meas} \begin{align*} \mu \otimes \nu(\varphi(x,y)) = \int_{S_{y}(N)}F_{\mu}^{\varphi} d(\nu|_N) &\overset{(a)}{=}\ \lim_{i \in I} \int_{S_{y}(N)}F_{\operatorname{Av}(\overline{a}_i)}^{\varphi} d(\nu|_N)\\ & \overset{(b)}{=}\ \lim_{i \in I} \int_{S_{x}(N)}F_{\nu}^{\varphi^*} d(\operatorname{Av}(\overline{a}_{i})|_{N})\\ & \overset{(c)}{=}\ \int_{S_{x}(N)}F_{\nu}^{\varphi^*} d(\mu|_N) = \nu \otimes \mu (\varphi(x,y)).\\ \end{align*} Where the equalities $(a)$ and $(c)$ follow from the fact that continuous functions commute with nets. The equality $(b)$ is simple to check and is also justified by statement $(i)$ of Fact \ref{KM:imp2}. \end{proof} \begin{proposition}\label{prop:com} Sequentially approximated and definable measures commute. Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is sequentially approximated and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{proposition} \begin{proof} Fix a formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is sequentially approximated over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of points in $(N^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Now we consider the following computation. 
\begin{align*} \mu \otimes \nu(\varphi(x,y)) = \int_{S_{y}(N)} F_{\mu}^{\varphi} d(\nu|_N) &\overset{(a)}{=}\ \lim_{i \to \infty}\int_{S_{y}(N)} F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi} d(\nu|_N)\\ & \overset{(b)}{=}\ \lim_{i \to \infty} \int_{S_{x}(N)} F_{\nu}^{\varphi^{*}} d(\operatorname{Av}(\overline{a}_{i})|_N)\\ & \overset{(c)}{=}\ \int_{S_{x}(N)} F_{\nu}^{\varphi^{*}} d(\mu|_N) = \nu \otimes \mu(\varphi(x,y)).\\ \end{align*} Here the equality $(a)$ now holds from the dominated convergence theorem, equality $(c)$ holds from $(i)$ of Fact \ref{cont:meas} and the observation that continuous functions commute with nets, and equality $(b)$ is easy to check (also $(i)$ of Fact \ref{KM:imp2}). \end{proof} \begin{corollary} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely approximated and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{corollary} \begin{proof} By (iii) of Proposition \ref{finitesat}, $\mu$ is sequentially approximated. Apply Proposition \ref{prop:com}. \end{proof} \subsection{Egorov's theorem} It is interesting to note that sequentially approximated measures are not too far away from finitely approximated measures. In particular, if we fix some measure on the parameter space, any sequentially approximated measure is \textit{almost} finitely approximated. This result is in a similar vein to Khanaki's \textit{almost definable} coheirs in the local setting (\cite{Khanaki1}). A direct application of Egorov's theorem gives our result. \begin{theorem}[Egorov's Theorem] Let $(X,B,\mu)$ be a finite measure space. Assume that $(f_i)_{i \in \omega}$ is a sequence of measurable functions from $X \to \mathbb{R}$ such that $(f_i)_{i \in \omega}$ converges to a function $f$ pointwise. Then for every $\epsilon > 0$ there exists a $Y_{\epsilon} \in B$ such that $f_i|_{Y_{\epsilon}}$ converges to $f|_{Y_{\epsilon}}$ uniformly on $Y_{\epsilon}$ and $\mu(X \backslash Y_{\epsilon}) < \epsilon$. \end{theorem} A proof of Egorov's theorem can be found in \cite[Theorem 3.2.4.1]{Acourse}. Restating this theorem in our context gives the following result. \begin{corollary} Assume that $p$ and $\mu$ are sequentially approximated over $M$. Let $\nu \in \mathfrak{M}_{y}(M)$. Then, for every $\epsilon > 0$, there exists a Borel set $Y_{\epsilon} \subset S_{y}(M)$ such that \begin{enumerate} \item $\nu(Y_{\epsilon}) > 1 - \epsilon$. \item For every $\delta > 0$ and every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $\overline{a}_{\delta}$ in $(M^{x})^{<\omega}$ such that for every $b \in \mathcal{U}^{y}$ with $\operatorname{tp}(b/M) \in Y_{\epsilon}$, we have \begin{equation*} |\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a}_{\delta})(\varphi(x,b))| < \delta. \end{equation*} \item For every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $a$ in $M^{x}$ such that for every $b \in \mathcal{U}^{y}$ with $\operatorname{tp}(b/M) \in Y_{\epsilon}$, we have \begin{equation*} \varphi(x,b) \in p \iff \mathcal{U} \models \varphi(a,b). \end{equation*} \end{enumerate} \end{corollary} \section{Generically stable types} Throughout this section, we let $T$ be a countable theory and $\mathcal{U}$ be a monster model of $T$. We show that if a type $p$ is generically stable over a small submodel $M$ of $\mathcal{U}$, then $p$ is sequentially approximated over $M$. Toward proving this result, we actually prove a slightly stronger lemma than what is necessary.
Namely, let $p$ be a $\operatorname{dfs}$ type and let $M$ be a countable model such that $p$ is $\operatorname{dfs}$ over $M$ (for any $\operatorname{dfs}$ type, these models always exist by (iv) of Fact \ref{gfs:facts}). We show that there exists a special sequence of points in $M$ such that the \textit{limiting behavior} of this sequence \textit{resembles} a Morley sequence in $p$ over $M$. In the case where $p$ is generically stable over $M$, we show that this special sequence converges to $p$. This is enough to show the result since every generically stable type is generically stable over some countable model. We now begin with a discussion on eventually indiscernible sequences, which were introduced in \cite{Invariant}. \begin{definition} Let $(c_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$. We say that $(c_i)_{i \in \omega}$ is an \textbf{eventually indiscernible sequence over $A$} if for any formula $\varphi(x_0,...,x_k)$ in $\mathcal{L}_{(x_i)_{i \in \omega}}(A)$, there exists some natural number $N_{\varphi}$ such that for any indices $n_{k} > ... > n_{0} > N_{\varphi}$ and $m_{k} > ... > m_{0} > N_{\varphi}$, we have that \begin{equation*} \mathcal{U} \models \varphi(c_{n_0},...,c_{n_{k}}) \leftrightarrow \varphi(c_{m_0},...,c_{m_k}). \end{equation*} \end{definition} \begin{fact}\label{eventual} Let $(b_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$ such that $|A| = \aleph_0$. Then there exists a subsequence $(c_i)_{i \in \omega}$ of $(b_i)_{i \in \omega}$ such that $(c_i)_{i \in \omega}$ is eventually indiscernible over $A$. \end{fact} The proof is a standard application of Ramsey's theorem and taking the diagonal (as mentioned in \cite{Invariant}). We prove a ``continuous'' version of this fact in the next section and the proof is analogous (see Proposition \ref{correct} for details). For any eventually indiscernible sequence $(c_{i})_{i \in \omega}$ over a set of parameters $A$, we can associate to this sequence a unique type in $S_{(x_i)_{i \in \omega}}(A)$. We call this the \textit{eventual Ehrenfeucht-Mostowski type} (or $\operatorname{EEM}$-type) of $(c_{i})_{i \in \omega}$ over $A$. We now give the formal definition. \begin{definition} Let $(b_{i})_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$. Then the \textbf{eventual Ehrenfeucht-Mostowski type} (or \textbf{EEM-type}) of $(b_i)_{i \in \omega}$ over $A$, which is written as $\operatorname{EEM}((b_i)_{i \in \omega}/A)$, is a subset of $\mathcal{L}_{(x_i)_{i \in \omega}}(A)$ defined as follows: Let $\varphi(x_{i_{0}},...,x_{i_{k}})$ be a formula in $\mathcal{L}_{(x_{i})_{i \in \omega}}(A)$ where the indices are ordered $i_{0} < ... < i_{k}$. Then $\varphi(x_{i_0},...,x_{i_{k}}) \in \operatorname{EEM}((b_{i})_{i \in \omega}/A)$ if and only if there exists an $N_{\varphi}$ such that for any $n_k > ... > n_0 > N_{\varphi}$, we have that $\mathcal{U} \models \varphi(b_{n_0},..., b_{n_k})$. \end{definition} Notice that an $\operatorname{EEM}$-type of a sequence is always indiscernible in the following sense: If we have indices $i_{0},...,i_{k}$ and $j_{0},...,j_{k}$ where $i_{0} < ... < i_{k}$ and $j_{0}<...<j_{k}$, then $\varphi(x_{i_{0}},...,x_{i_{k}})$ is in the $\operatorname{EEM}$-type of $(b_{i})_{i \in \omega}$ over $A$ if and only if $\varphi(x_{j_0},...,x_{j_k})$ is. This follows directly from the definition. We have some basic observations.
\begin{observation} Let $(c_i)_{i \in \omega}$ be an eventually indiscernible sequence over $A$. \begin{enumerate} \item Then $\operatorname{EEM}((c_{i})_{i \in \omega}/A)$ is a complete type in $S_{(x_i)_{i \in \omega}}(A)$. \item If $(c_i)_{i \in \omega}$ is $A$-indiscernible, then $\operatorname{EEM}((c_i)_{i \in \omega}/A) = \operatorname{EM}((c_i)_{i \in \omega}/A)$. \item If $\operatorname{tp}((b_i)_{i \in \omega}/A) = \operatorname{EEM}((c_i)_{i \in \omega}/A)$, then $(b_i)_{i \in \omega}$ is $A$-indiscernible. \end{enumerate} \end{observation} \begin{proof} Clear from the definitions and discussion above. \end{proof} We warn the reader that an eventually indiscernible sequence need not ``realize'' its own $\operatorname{EEM}$-type. Consider the following example: \begin{example} Let $T_{<}$ be the theory of $(\mathbb{R};<)$. Let $\mathcal{U}$ be a monster model of $T_{<}$ and $\mathbb{R} \prec \mathcal{U}$. Then the sequence $(a_i)_{i \in \omega}$ where $a_i = i$ is eventually indiscernible over $\mathbb{R}$ while the sequence $(b_i)_{i \in \omega}$ where $b_i = i(-1)^{i}$ is not. Clearly, $(a_i)_{i \in \omega}$ is not $\mathbb{R}$-indiscernible. Moreover, for each $r \in \mathbb{R}$, the formula $x_0 > r$ is in $\operatorname{EEM}((a_i)_{i \in \omega}/\mathbb{R})$ while, e.g., $a_{1} > 2$ clearly does not hold. So if $\operatorname{tp}((c_i)_{i \in \omega}/\mathbb{R}) = \operatorname{EEM}((a_i)_{i \in \omega}/\mathbb{R})$, then $c_i > \mathbb{R}$ for each $i \in \omega$. \end{example} The next two lemmas prove the bulk of this section's main theorem and their proofs are similar to the proof of Theorem \ref{sim:conv}. The proof strategy for this theorem is the following: If $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$, then we can find a countable model $M$ such that $p$ is $\operatorname{dfs}$ over $M$. Let $I$ be a Morley sequence in $p$ over $M$. Using the fact that $p$ is finitely satisfiable over $M$, we can find a sequence of points in $M^{x}$ which converges to $p|_{MI}$ in $S_{x}(MI)$. After moving to an eventually indiscernible subsequence, we show that the $\operatorname{EEM}$-type of this eventually indiscernible sequence is precisely $p^{(\omega)}|_{M}$. With the stronger assumption that our type $p$ is generically stable (instead of just $\operatorname{dfs}$), we show that this eventually indiscernible subsequence must converge to $p$ in $S_{x}(\mathcal{U})$. \begin{lemma}\label{dfs:lemma} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$ over $M$ where $|M| = \aleph_0$. Then there exists a sequence $(c_i)_{i \in \omega}$ in $M^{x}$ such that $\operatorname{EEM}((c_i)_{i \in \omega}/M) = p^{(\omega)}|_{M}$. \end{lemma} \begin{proof} Let $I = (a_i)_{i \in \omega}$ be a Morley sequence in $p$ over $M$. Since $T$, $M$, and $I$ are countable, $\mathcal{L}_{x}(MI)$ is countable. It follows that $p|_{MI}$ is countable and we may enumerate this collection of formulas as $(\varphi_{i}(x))_{i \in \omega}$. Since $p$ is $\operatorname{dfs}$ over $M$, in particular $p$ is finitely satisfiable over $M$. For each natural number $n$, we choose $b_{n}$ in $M^{x}$ such that $\mathcal{U} \models \bigwedge_{j \leq n} \varphi_{j}(b_n)$. By construction, we have that $\lim_{i \to \infty} \operatorname{tp}(b_i/MI) = p|_{MI}$ in $S_{x}(MI)$. By Fact \ref{eventual}, we may choose a subsequence $(c_{i})_{i \in \omega}$ of $(b_{i})_{i \in \omega}$ such that $(c_i)_{i \in \omega}$ is eventually indiscernible over $MI$.
For ease of notation, we write $(c_{i})_{i \in \omega}$ as $J$. We now show that \textit{$\operatorname{EEM}(J/M) = \operatorname{EM}(I/M) = p^{(\omega)}|_{M}$}. We remind the reader that $\operatorname{EM}(I/M) = p^{(\omega)}|_{M}$ follows directly from the definition of a Morley sequence. We prove the first equality by induction on the number of free variables occurring in a formula. We begin with the base case. It suffices to show that for every $\varphi(x_0) \in \mathcal{L}_{x_{0}}(M)$, if $\varphi(x_0) \in \operatorname{EM}(I/M)$, then $\varphi(x_0) \in \operatorname{EEM}(J/M)$. Notice that since $\lim_{n\to \infty} \operatorname{tp}(b_n/MI) = p|_{MI}$ and $(c_i)_{i \in \omega}$ is a subsequence of $(b_n)_{n \in \omega}$, we have $\lim_{i\to \infty} \operatorname{tp}(c_i/MI) = p|_{MI}$. This clearly implies the base case. Fix $k$ and suppose that for any formula $\theta(x_0,...,x_k)$ in $\mathcal{L}_{x_{0},...,x_{k}}(M)$, we have that $\theta(x_0,...,x_k) \in \operatorname{EM}(I/M)$ if and only if $\theta(x_0,...,x_k) \in \operatorname{EEM}(J/M)$. Towards a contradiction, we assume that $\neg \theta(x_0,...,x_{k+1}) \in \operatorname{EEM}(J/M)$ and $\theta(x_0,...,x_{k+1}) \in \operatorname{EM}(I/M)$. Since $\neg \theta(\overline{x}) \in \operatorname{EEM}(J/M)$, there exists some natural number $N_{\theta_{1}}$ such that for any $n_{k+1} > ... > n_{0} > N_{\theta_{1}}$, we have that $\mathcal{U}\models \neg \theta(c_{n_{0}},...,c_{n_{k+1}})$. Since $\theta(\overline{x}) \in \operatorname{EM}(I/M)$, we conclude that $\mathcal{U} \models \theta(a_0,...,a_{k+1})$. Since $p$ is $\operatorname{dfs}$ over $M$, $I$ is totally indiscernible over $M$ by Fact \ref{gfs:facts}. Therefore, $\mathcal{U} \models \theta(a_{k+1}, a_{0},...,a_{k})$ and so $\theta(x,a_{0},...,a_{k}) \in p|_{Ma_{0},...,a_{k}}$. Since $\lim_{i \to \infty}\operatorname{tp}(c_i/MI) = p|_{MI}$, there exists some $N_{\theta_2}$ such that for every $n > N_{\theta_{2}}$, we have that $\mathcal{U}\models \theta(c_{n},a_{0},...,a_{k})$. Choose $n_{*} > \max\{N_{\theta_{1}},N_{\theta_{2}}\}$. Then the formula $\theta(c_{n_*},x_{0},...,x_{k}) \in \operatorname{tp}(a_0,...,a_{k}/M)$. By our induction hypothesis, we have that $\theta(c_{n_{*}},\overline{x}) \in \operatorname{EEM}(J/M)$ and so there exists $N_{\theta_{3}}$ such that for any $m_{k}> ... > m_{0} > N_{\theta_{3}}$, we have that $\mathcal{U}\models \theta(c_{n_*}, c_{m_{0}},...,c_{m_{k}})$. Now consider what happens when $m_0 > \max\{N_{\theta_{3}}, n_{*}\}$. Then $m_k > ... > m_{0} > n_{*} > N_{\theta_1}$ and so $\mathcal{U} \models \neg \theta(c_{n_*},c_{m_0},...,c_{m_k})$ by our assumption. However, $m_k > ... > m_{0} > N_{\theta_{3}}$ and therefore $\mathcal{U} \models \theta(c_{n_*},c_{m_0},...,c_{m_{k}})$. This is a contradiction. \end{proof} \begin{lemma}\label{gs:lemma} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. Assume that $p$ is generically stable over $M$. If $(c_i)_{i \in \omega}$ is a sequence in $M^{x}$ such that $\operatorname{EEM}((c_i)_{i \in \omega}/M) = p^{(\omega)}|_{M}$, then $\lim_{i \to \infty} \operatorname{tp}(c_i/\mathcal{U}) = p$. \end{lemma} \begin{proof} Let $p$, $(c_{i})_{i \in \omega}$ and $M$ be as in the statement of the lemma. Let $J = (c_{i})_{i \in \omega}$. We first argue that the sequence of global types $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges and then argue that this sequence converges to $p$.
\textbf{Claim 1:} The sequence $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to some type in $S_{x}(\mathcal{U})$. It suffices to argue that for any formula $\psi(x) \in \mathcal{L}_{x}(\mathcal{U})$, $\lim_{i \to \infty} \mathbf{1}_{\psi}(c_i)$ exists (recall that $\mathbf{1}_{\psi(x)}$ is the characteristic function of the definable set $\psi(x)$). Assume not. Then we may choose a subsequence $(c_i')_{i \in \omega}$ of $(c_i)_{i \in \omega}$ such that $\mathcal{U} \models \psi(c_{i}') \leftrightarrow \neg \psi(c_{i+1}')$ for each $i \in \omega$. For notational purposes, we also denote $(c'_i)_{i \in \omega}$ as $J'$. It is clear that $(c_{i}')_{i \in \omega}$ is also eventually indiscernible over $M$ and $\operatorname{EEM}((c_i')_{i \in \omega}/M) = \operatorname{EEM}((c_i)_{i \in \omega}/M)$. By using $J'$, one can show that the following type is finitely consistent: \begin{equation*} \Theta_1 = \operatorname{EEM}(J'/M) \cup \bigcup_{\textit{$i$ is even}}\{\psi(x_i) \wedge \neg \psi(x_{i+1})\}. \end{equation*} Let $(d_i)_{i \in \omega}$ realize this type. Then $(d_i)_{i \in \omega}$ is a Morley sequence in $p$ over $M$ because \begin{equation*} \operatorname{EM}((d_i)_{i \in \omega}/M) = \operatorname{EEM}(J'/M) = \operatorname{EEM}(J/M) = p^{\omega}|_{M}. \end{equation*} Then $\mathcal{U} \models \psi(d_{i})$ if and only if $i$ is even. This contradicts generic stability, since $(\operatorname{tp}(d_i/\mathcal{U}))_{i \in \omega}$ does not converge. \textbf{Claim 2:} The sequence $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to $p$. Again, assume not. By Claim 1, $\lim_{i \to \infty} \operatorname{tp}(c_i/\mathcal{U}) = q$ for some $q \in S_{x}(\mathcal{U})$. By assumption, $q \neq p$ and so there exists a formula $\psi(x)$ such that $\psi(x) \in p$ and $\neg \psi(x) \in q$. Since $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to $q$, there is an $N$ such that for every $n > N$, we have that $\mathcal{U} \models \neg \psi(c_n)$. By a similar argument as in the previous claim, one can show that the following type is finitely consistent: \begin{equation*} \Theta_2 = \operatorname{EEM}(J/M) \cup \bigcup_{i \in \omega} \{\neg \psi(x_{i})\}. \end{equation*} Again, we let $(d_i)_{i \in \omega}$ realize this type. Then $(d_i)_{i \in \omega}$ is a Morley sequence in $p$ over $M$, and since every $d_i$ satisfies $\neg \psi(x)$ while $\psi(x) \in p$, the sequence $(\operatorname{tp}(d_i/\mathcal{U}))_{i \in \omega}$ cannot converge to $p$ in $S_{x}(\mathcal{U})$. This again contradicts the definition of generic stability. \end{proof} \begin{theorem}\label{gstheorem} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is generically stable (over $M$). Then $p$ is sequentially approximated (over $M$). \end{theorem} \begin{proof} If $p$ is generically stable, then $p$ is generically stable over a countable submodel $M_{0}$ contained in $M$ by Fact \ref{gfs:facts}. Then $p$ is $\operatorname{dfs}$ over $M_0$ and so by Lemma \ref{dfs:lemma}, one can choose $(c_i)_{i \in \omega}$ where each $c_i \in M_{0}^{x}$ and $\operatorname{EEM}((c_i)_{i \in \omega}/M_{0}) = p^{\omega}|_{M_0}$. By Lemma \ref{gs:lemma}, $\lim_{i \to \infty}\operatorname{tp}(c_i/\mathcal{U}) = p$. \end{proof} \begin{corollary} Let $T'$ be a theory (countable or not) in the language $\mathcal{L'}$, let $\mathcal{U}' \models T'$, and let $M'$ be a small submodel of $\mathcal{U}'$. Assume that $p \in S_{x}(\mathcal{U}')$ is generically stable over $M'$.
Then for any countable collection of formulas $\Delta = \{\psi_{i}(x,y_{i})\}_{i \in \omega}$ in $\mathcal{L'}$, there exists a sequence of points $(c_i)_{i \in \omega}$, each in $(M')^{x}$, such that $\lim_{i \to \infty} \operatorname{tp}_{\Delta}(c_i/\mathcal{U}') = p|_{\Delta}$. \end{corollary} \begin{proof} Let $\mathcal{L}$ be a countable sublanguage of $\mathcal{L'}$ containing all the formulas in $\Delta$. The corresponding type $p|_{\mathcal{L}}$ is generically stable over the model $M$ where $M = M'|_{\mathcal{L}}$ (see \cite[Remark 3.3]{CoGan}). Hence we may apply Theorem \ref{gstheorem}. \end{proof} \subsection{Examples and non-examples} We begin this subsection by collecting the known examples of sequentially approximated types. We then go on to give two examples of types which are not sequentially approximated (over any model). \begin{observation} Assume that $p \in S_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $p$ is sequentially approximated over $M$ if \begin{enumerate}[($i$)] \item $T$ is stable, and $p$ is invariant over $M$, \item $T$ is NIP, $|M| = \aleph_0$, and $p$ is finitely satisfiable over $M$, or \item $p$ is generically stable over $M$. \end{enumerate} \end{observation} We just proved $(iii)$. Clearly, $(i)$ follows from $(iii)$ (we remark that it also follows from $(ii)$). As noted previously, the proof of $(ii)$ is precisely \cite[Lemma 2.8]{Invariant}. We now exhibit some concrete examples of types which are not sequentially approximated. We begin by describing a type in an NIP theory which is finitely satisfiable over a small model but not sequentially approximated (and its associated Keisler measure is not sequentially approximated either). We then discuss a finitely approximated type which is not sequentially approximated. \begin{proposition}\label{omega} Let $\omega_{1}$ be the first uncountable ordinal, $M = (\omega_{1};<)$ with the usual ordering, and let $T_{<}$ be the theory of $M$ in the language $\{<\}$. Recall that $T_{<}$ is NIP. Let $p \in S_{x}(\omega_{1})$ be the complete type extending $\{\alpha < x: \alpha < \omega_1\}$. Let $\mathcal{U}$ be a monster model of $T_{<}$ such that $M \prec \mathcal{U}$ and let $p_* \in S_{x}(\mathcal{U})$ be the unique global coheir of $p$. Then, $p_{*}$ is not sequentially approximated over any model. \end{proposition} \begin{proof} Assume for the sake of contradiction that $p_{*}$ is sequentially approximated over some model $N$. Then there exists a sequence of points $(b_i)_{i \in \omega}$ in $N$ such that $\lim_{i \to \infty} \operatorname{tp}(b_i/\mathcal{U}) = p_{*}$ in $S_{x}(\mathcal{U})$. Since $p_{*}$ is not realized (it is finitely satisfiable in $\omega_1$ and contains $\{\alpha < x : \alpha \in \omega_1\}$), we may assume that the $b_i$ are pairwise distinct. Hence there is an infinite subsequence which is either strictly increasing or strictly decreasing, and so without loss of generality, $(b_i)_{i \in \omega}$ has one of these two properties. First assume that $(b_i)_{i \in \omega}$ is strictly increasing. Notice that $b_i < x \in p_{*}$ for each $i \in \omega$. Since $p_{*}$ is a coheir of $p$, $p_*$ is finitely satisfiable over $\omega_1$. So, for each $b_i$ there exists $\alpha$ in $\omega_{1}$ such that $b_{i} < \alpha$. Now, for each $b_i$, we define $\alpha_i := \min \{\alpha \in \omega_1: \mathcal{U} \models b_i<\alpha \}$. Since $\omega_1$ is well-ordered, $\alpha_i$ is well-defined. We let $\beta$ be the supremum (in $\omega_{1}$) of $\{\alpha_{i}: i \in \omega\}$. Then $\mathcal{U} \models b_i < \beta$ for each $i \in \omega$, and so $x < \beta \in p_{*}$, contradicting $\beta < x \in p_{*}$. Now we assume that $(b_i)_{i \in \omega}$ is strictly decreasing.
Notice that for each $i \in \omega$, $b_i>x \in p_{*}$. Let $\Theta(x) = \{\alpha < x: \alpha \in \omega_1\}\ \cup \{x < b_i: i \in \omega\}$. By compactness, choose $c_{\infty}$ in $\mathcal{U}$ satisfying $\Theta(x)$. Since $p_{*}$ is finitely satisfiable over $\omega_1$, we have $c_{\infty} > x \in p_{*}$. But since $\mathcal{U} \models b_i > c_{\infty}$ for each $i \in \omega$, we have that $x > c_{\infty} \in p_{*}$, a contradiction. \end{proof} \begin{remark}\label{example:coheir1} The type $p_{*}$ in Proposition \ref{omega} is finitely satisfiable over a small model, but not finitely satisfiable over any countable submodel by Theorem \ref{sim:conv}. \end{remark} \begin{proposition} Let $p_{*}$ be as in Proposition \ref{omega}. Then the associated Keisler measure $\delta_{p_{*}}$ is not sequentially approximated. \end{proposition} \begin{proof} Clear from $(iv)$ of Proposition \ref{finitesat}. \end{proof} \begin{proposition}\label{Gabe} Let $T^{2}_{s}$ be the theory of the random $K_{s}$-free graph in the language $\mathcal{L} = \{E(x,y)\}$. Let $p_{*}$ be the unique global complete type extending the formulas $\{ \neg E(x,b): b \in \mathcal{U}\}$. Then, $\delta_{p_{*}}$ is sequentially approximated (even finitely approximated over any submodel) but $p_{*}$ is not sequentially approximated. Moreover, $T^{2}_{s}$ admits no (non-realized) sequentially approximated types. \end{proposition} \begin{proof} The proof that $\delta_{p_*}$ is finitely approximated can be found in \cite[Theorem 5.8]{CoGan}. By $(iii)$ of Proposition \ref{finitesat}, $\delta_{p_*}$ is sequentially approximated. By $(v)$ of Proposition \ref{finitesat}, it suffices to show that there are no non-realized types in one variable which are sequentially approximated. Let $p$ be any non-realized type in $S_{1}(\mathcal{U})$ and assume that $(b_i)_{i \in \omega}$ is a sequence of points in $\mathcal{U}^{x}$ such that $\lim_{i\to \infty}\operatorname{tp}(b_i/\mathcal{U}) = p$. Since $p$ is non-realized, we may assume that the points in $(b_i)_{i \in \omega}$ are distinct. Then, by Ramsey's theorem, there is a subsequence which is either independent or complete. It cannot be complete, because that would violate $K_{s}$-freeness. Therefore, $(b_i)_{i \in \omega}$ contains an independent subsequence, call it $(c_i)_{i \in \omega}$. By compactness, there exists an $a$ in $\mathcal{U}$ such that $\mathcal{U} \models E(c_i,a)$ if and only if $i$ is even. Then, $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ does not converge in $S_{x}(\mathcal{U})$ and so $(\operatorname{tp}(b_i/\mathcal{U}))_{i \in \omega}$ does not converge in $S_{x}(\mathcal{U})$. \end{proof} \begin{question} We say a global type $p$ in $S_{x}(\mathcal{U})$ is \textbf{sad}\footnote{Credit to James Hanson for the terminology.} if it is both \textbf{s}equentially \textbf{a}pproximated and \textbf{d}efinable. Does there exist a global type $p$ which is sad over a model $M$ but is not generically stable over $M$? It is clear that if $T$ is NIP, then all sad types are generically stable. Therefore an example of such a type must come from \textit{the wild}. \end{question} \section{Sequential approximations of measures in NIP theories} Throughout this section, we assume that $T$ is a countable NIP theory and $\mathcal{U}$ is a monster model of $T$. We show that measures which are finitely satisfiable over a countable model of $T$ are sequentially approximated (Theorem T2). To do this, we introduce the notion of a \textit{smooth sequence}.
These are sequences of global measures which are intended to play the role of a Morley sequence for a measure. Unfortunately, these sequences only exist (a priori) in the NIP context and it is currently not known how to extend this idea to IP theories. At the end of this section, we give a characterization of generic stability using smooth sequences (again, only in the NIP context). To motivate the machinery introduced in this section, we explain why Theorem T2 does not follow directly from some approximation results currently in the literature. One might hope to prove Theorem T2 from Theorem \ref{sim:conv} in tandem with the following fact \cite[Proposition 7.11]{Sibook}. \begin{fact}[T is NIP]\label{type:approx} Suppose that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$. Then, for any formula $\varphi(x,y) \in \mathcal{L}$ and every $\epsilon > 0$, there exist types $p_1,...,p_n \in S_{x}(\mathcal{U})$, where for each $i \leq n$ the type $p_i$ is finitely satisfiable over $M$, such that \begin{equation*} \sup_{b \in \mathcal{U}^{y}}| \mu(\varphi(x,b)) - \operatorname{Av}(\overline{p})(\varphi(x,b))|< \epsilon, \end{equation*} where $\overline{p} = (p_{1},...,p_{n})$. \end{fact} If $\mu$ is in $\mathfrak{M}_{x}(\mathcal{U})$ and is finitely satisfiable over a countable model $M$, then one can use Theorem \ref{sim:conv} and Fact \ref{type:approx} together to produce: \begin{enumerate} \item a sequence of global measures $(\operatorname{Av}(\overline{p}_{i}))_{i \in \mathbb{N}}$ such that each $\overline{p}_{i} = (p_{i_1},...,p_{i_{k}})$, each $p_{i_{j}} \in S_{x}(\mathcal{U})$ is finitely satisfiable over $M$, and $\lim_{i \to \infty} \operatorname{Av}(\overline{p}_i) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$, \item for each $i \in \mathbb{N}$, a sequence of points $(\overline{a}_{i_j})_{j \in \mathbb{N}}$, each in $(M^{x})^{<\omega}$, so that $\lim_{j \to \infty} \operatorname{Av}(\overline{a}_{i_j})= \operatorname{Av}(\overline{p}_{i})$. \end{enumerate} This construction gives an \textit{array} of points $(\overline{a}_{i_j})_{(i,j) \in \mathbb{N} \times \mathbb{N}}$ in $(M^{x})^{< \omega}$ so that \begin{equation*}\lim_{i \to \infty} \lim_{j \to \infty} \Big(\operatorname{Av}(\overline{a}_{i_j})\Big) = \mu \text{ in $\mathfrak{M}_{x}(\mathcal{U})$}. \end{equation*} A priori, the convergence of an array \textit{does not imply} that there exists a subsequence of that array which converges to the array's limit\footnote{For example, any Baire-2 function which is not Baire-1 can be written as the limit of an array of continuous functions, but cannot be written as the sequential limit of continuous functions.}. A similar situation arises in trying to iterate Theorem \ref{Khanaki}. So, we must work slightly harder. As previously stated, our proof essentially mimics the proof of Theorem \ref{sim:conv}, but with Morley sequences replaced by \textit{smooth sequences}. Finally, we remark that if there were an \textit{elementary proof} using an array to show this result, then we would have a moderately simple proof that $\operatorname{dfs}$ measures are finitely approximated in NIP theories. In particular, such a proof would bypass the implicit use of randomizations (i.e. $(i)$ of Fact \ref{HPSFact}). We formally begin this section by discussing a ``continuous'' analogue of eventually indiscernible sequences. \subsection{Eventually indiscernible sequences revisited} We fix some notation. Fix distinct tuples of variables $x$ and $x_0,...,x_n$ such that $|x| = |x_i|$ for $i \leq n$.
If $\varphi(x_0,...,x_n)$ is a formula in $\mathcal{L}_{x_0,...,x_n}(\mathcal{U})$ and $\overline{a}_{0},...,\overline{a}_{n}$ is a finite sequence of elements where each $\overline{a}_i \in (\mathcal{U}^{x})^{<\omega}$ and $\overline{a}_{i} = (a_{i,0},...,a_{i,m_{i}})$ for $i \leq n$, then we write $\varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})$ to mean \begin{equation*} \bigotimes_{i=0}^{n}\operatorname{Av}(\overline{a}_i)_{x_{i}}(\varphi(x_0,...,x_n)). \end{equation*} Notice that $\varphi_{c}(\overline{a}_0,...,\overline{a}_n)$ is a real number. We observe that, by unpacking the definition of the product measure, our formula can be computed as follows: \begin{equation*} \varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})= \frac{1}{\prod_{i=0}^{n} (m_{i} + 1)} \sum_{j_{0} = 0}^{m_{0}}...\sum_{j_{n} = 0}^{m_{n}}\mathbf{1}_{\varphi}(a_{0,j_0},...,a_{n,j_{n}}). \end{equation*} \begin{definition}\label{convex} Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of elements in $(\mathcal{U}^{x})^{<\omega}$ and let $A \subset \mathcal{U}$ be a collection of parameters. Then we say that the sequence $(\overline{a}_i)_{i \in \omega}$ is \textbf{eventually indiscernible over $A$} if for any formula $\varphi(x_0,...,x_n)$ in $\mathcal{L}_{(x_i)_{i\in \omega}}(A)$ and any $\epsilon > 0$, there exists $N_{\epsilon,\varphi}$ such that for any $n_{k}>...>n_{0}>N_{\epsilon,\varphi}$ and $m_{k}>...>m_{0}>N_{\epsilon,\varphi}$, \begin{equation*} |\varphi_{c}(\overline{a}_{n_{0}},...,\overline{a}_{n_{k}})-\varphi_{c}(\overline{a}_{m_{0}},...,\overline{a}_{m_{k}})|<\epsilon. \end{equation*} \end{definition} \begin{proposition}\label{correct} Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of tuples in $(\mathcal{U}^{x})^{< \omega}$. If $A$ is a countable set of parameters, then there exists some subsequence $(\overline{c}_i)_{i \in \omega}$ of $(\overline{a}_i)_{i \in \omega}$ such that $(\overline{c}_{i})_{i \in \omega}$ is eventually indiscernible over $A$. \end{proposition} \begin{proof} This proof is a standard application of Ramsey's theorem in the ``continuous'' setting. Enumerate all pairs in $\mathcal{L}_{(x_i)_{i \in \omega}}(A) \times \mathbb{N}_{>0}$. Let $(\overline{a}_{i}^{0})_{i\in\omega} :=(\overline{a}_{i})_{i\in\omega}$ and set $B_{0} = \{\overline{a}^{0}_{i}: i \in \omega\}$. Now, assume we have constructed the subsequence $(\overline{a}_{i}^{l})_{i\in\omega}$ and $B_{l}$ (where $B_{l} = \{\overline{a}_{i}^{l}: i \in \omega\}$). We now construct $(\overline{a}_{i}^{l+1})_{i\in\omega}$ and $B_{l+1}$. Assume that $(\varphi(x_{0},...,x_{k}),n)$ is the $(l+1)$-st pair in our enumeration. Then we define the coloring $r_{l+1}:[B_{l}]^{k+1}\to\{0,...,n\}$ via \begin{equation*} r_{l+1}(\{\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}}\}) = \lfloor n \cdot \varphi_{c}(\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}}) \rfloor, \end{equation*} where $i_0 < i_1 < ...< i_k$. By Ramsey's theorem, there is an infinite monochromatic subset $B_{l}'$ of $B_{l}$. Let $(\overline{a}_{i}^{l+1})_{i\in\omega}$ be the obvious reindexed subsequence of $(\overline{a}_{i}^{l})_{i\in\omega}$ containing only elements from the monochromatic set $B_{l}^{'}$. We let $B_{l + 1} = \{\overline{a}^{l+1}_{i}: i \in \omega\}$. By construction, the diagonal sequence $(\overline{a}_{i}^{i})_{i\in\omega}$ is eventually indiscernible over $A$.
\end{proof} We now present a collection of facts which will help us prove that the associated average measures along eventually indiscernible sequences always converge to a measure in $\mathfrak{M}_{x}(\mathcal{U})$ when the underlying theory is NIP. The first fact is elementary and left to the reader as an exercise. \begin{fact} Assume that $(\mu_{i})_{i \in \omega}$ is a sequence of Keisler measures in $\mathfrak{M}_{x}(\mathcal{U})$. If for every formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$, $\lim_{i \to \infty} \mu_{i}(\varphi(x))$ converges, then $(\mu_{i})_{i \in \omega}$ converges to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. \end{fact} The next collection of facts can be found in \cite{HPS}. In particular, $(i)$ follows immediately from Lemma 2.10 while $(ii)$ and $(iii)$ are from Corollary 2.14. The proof of Lemma 2.10 is non-trivial and is an interpretation of results in \cite{Ben}. Implicitly, our proof uses the fact that the randomization of an NIP theory is NIP. \begin{fact}[T is NIP]\label{HPSFact} Suppose that $\lambda \in \mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$ where $|x_i| =|x_j|$ for each $i,j < \omega$. $\lambda$ is said to be \textbf{$M$-indiscernible} if for every increasing sequence of indices $i_0 < ... < i_n$ and any formula $\varphi(x_{i_0},...,x_{i_{n}})$ in $\mathcal{L}_{(x_i)_{i \in \omega}}(M)$, we have that \begin{equation*} \lambda(\varphi(x_{i_0},...,x_{i_n})) = \lambda(\varphi(x_{0},...,x_{n})). \end{equation*} Let $\mu, \nu \in \mathfrak{M}_{x}(\mathcal{U})$ be such that $\mu$ and $\nu$ are invariant over $M$. The following statements are true. \begin{enumerate}[($i$)] \item If $\lambda$ is $M$-indiscernible, then for any formula $\varphi(x,b) \in \mathcal{L}_{x}(\mathcal{U})$, we have that $\lim_{i \to \infty} \lambda(\varphi(x_i,b))$ exists. \item The measures $\mu^{(\omega)}$ and $\nu^{(\omega)}$ are $M$-indiscernible. \item If $\mu^{(\omega)}|_{M} = \nu^{(\omega)}|_{M}$, then $\mu = \nu$. \end{enumerate} \end{fact} We now establish a formal connection between eventually indiscernible sequences of tuples and indiscernible measures. We use this connection to show that the average measures along an eventually indiscernible sequence converge to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. \begin{proposition}\label{converge} Let $(\overline{c}_i)_{i \in \omega}$ be a sequence of points in $(\mathcal{U}^{x})^{<\omega}$. If $(\overline{c}_i)_{i \in \omega}$ is an eventually indiscernible sequence over some model $M$, then the sequence $(\operatorname{Av}(\overline{c}_i))_{i \in \omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$. \end{proposition} \begin{proof} Assume not. Then there exists some formula $\psi(x,b)$ in $\mathcal{L}_{x}(\mathcal{U})$, some $\epsilon_{0} >0$, and some subsequence $(\overline{c}_i')_{i \in \omega}$ of $(\overline{c}_{i})_{i \in \omega}$ such that for each natural number $i$, \begin{equation*} |\operatorname{Av}(\overline{c}_i')(\psi(x,b)) - \operatorname{Av}(\overline{c}_{i+1}')(\psi(x,b))| > \epsilon_0. \end{equation*} It is clear that $(\overline{c}_{i}')_{i \in \omega}$ is also eventually indiscernible over $M$. We now aim to contradict $(i)$ of Fact \ref{HPSFact} via (topological) compactness of the space $\mathfrak{M}_{\omega}(\mathcal{U}) : = \mathfrak{M}_{(x_{i})_{i \in \omega}}(\mathcal{U})$. For any formula $\varphi(x_{i_{0}},...,x_{i_{k}}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$, we let $r_{\varphi}$ be the unique real number such that for every $\epsilon > 0$, there exists an $N_{\epsilon,\varphi}$ so that for any $n_k > ... > n_0 > N_{\epsilon,\varphi}$ we have
\begin{equation*} | \varphi_{c}(\overline{c}'_{n_0},...,\overline{c}'_{n_k}) - r_{\varphi} | < \epsilon. \end{equation*} Since the sequence $(\overline{c}_{i}')_{i\in \omega}$ is eventually indiscernible over $M$, $r_{\varphi}$ exists for each $\varphi(\overline{x}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$. Now, for every $\varphi(\overline{x}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$ and $\epsilon >0$, we define the following family of closed subsets of $\mathfrak{M}_{\omega}(\mathcal{U})$: \begin{equation*} C_{\epsilon,\varphi} = \Big\{ \lambda \in \mathfrak{M}_{\omega}(\mathcal{U}): r_{\varphi} - \epsilon \leq \lambda(\varphi(\overline{x})) \leq r_{\varphi} + \epsilon \Big\}. \end{equation*} We also define another family of sets and argue that they are closed; let \begin{equation*} D_{i} = \Big\{\lambda \in \mathfrak{M}_{\omega}(\mathcal{U}) : |\lambda(\psi(x_i,b)) - \lambda(\psi(x_{i+1},b))| \geq \frac{\epsilon_{0}}{2}\Big\}. \end{equation*} Notice that $D_{i}$ is closed since for every natural number $i$, the evaluation map $E_{i}: \mathfrak{M}_{\omega}(\mathcal{U}) \to [0,1]$ given by $E_{i}(\lambda) = \lambda(\psi(x_i,b))$ is continuous. Indeed, define $F_{i} = E_{i} - E_{i+1}$ and $H_{i} = E_{i+1} - E_{i}$. Then we have $D_{i} = F_{i}^{-1}([\frac{\epsilon_{0}}{2},1]) \cup H_{i}^{-1}([\frac{\epsilon_{0}}{2},1])$ and so $D_{i}$ is a union of two closed sets and therefore closed. Using $(\overline{c}_{i}')_{i \in \omega}$, one checks that the collection $\Phi = \{C_{\epsilon,\varphi}: \epsilon > 0, \varphi(\overline{x}) \in \mathcal{L}_{\omega}(M)\}\cup\{D_{i}: i \in \omega\}$ has the finite intersection property. Therefore, there exists some $\lambda \in \mathfrak{M}_{\omega}(\mathcal{U})$ in the intersection of all the sets in $\Phi$. Moreover, $\lambda$ is $M$-indiscernible by construction. Since $\lambda$ is in $D_{i}$ for each $i$, its existence contradicts $(i)$ of Fact \ref{HPSFact}. \end{proof} \subsection{Smooth sequences} In this subsection, we define the notion of a smooth sequence and prove the main theorem. If $\mu$ is a global $M$-invariant measure, then a smooth sequence is a collection of models and measures meant to replicate a Morley sequence. The guiding analogy is the following: a Morley sequence in $p$ over $M$ is to the infinite type $p^{\omega}|_{M}$ as a smooth sequence in $\mu$ over $M$ is to the measure $\mu^{(\omega)}|_{M}$. We now provide the formal definition. \begin{definition}Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and assume that $\mu$ is invariant over some small model $M$. Then, a \textbf{smooth sequence in $\mu$ over $M$} is a sequence of pairs of measures and small models, $(\mu_i,N_i)_{i \in \omega}$, such that: \begin{enumerate}[$(i)$] \item $M \prec N_0$, $N_i \prec N_{i +1}$, and each $N_i$ is small. \item $\mu_{i}$ is smooth over $N_i$. \item $\mu_{0}|_M = \mu|_M$ and for $i > 0$, $\mu_{i}|_{N_{i-1}} = \mu|_{N_{i-1}}$. \end{enumerate} Furthermore, we define $\bigotimes_{i=0}^{\omega} \mu_{i} = \bigcup_{n \in \omega} \bigotimes_{i=0}^{n}\mu_i$, which is an element of $\mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$. We let $N_{\omega} = \bigcup_{i \in \omega} N_{i}$. Notice that for each $i \in \omega$, the measure $\mu_{i}$ is smooth over $N_{\omega}$.
\end{definition} \begin{proposition}\label{existence} If $T$ is a countable NIP theory, $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, and $\mu$ is invariant over $M$ where $|M|=\aleph_0$, then there exists a smooth sequence $(\mu_{i},N_i)_{i\in\omega}$ in $\mu$ over $M$ such that each $N_{i}$ is countable. \end{proposition} \begin{proof} This follows directly from Proposition \ref{m:countable}. \end{proof} \begin{proposition}[T is NIP]\label{smoothinv} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is $M$-invariant. Let $(\mu_i,N_i)_{i \in \omega}$ be a smooth sequence in $\mu$ over $M$. Then, $\bigotimes_{i=0}^{\omega} \mu_{i} |_{M} = \mu^{(\omega)}|_{M}$. Hence, $\bigotimes_{i=0}^{\omega} \mu_i$ is $M$-indiscernible. \end{proposition} \begin{proof} We prove this via induction on formulas in $\mathcal{L}_{(x_i)_{i \in \omega}}(M)$. For our base case, it is true by construction that $\mu_{0}|_{M} = \mu|_{M}$. For our induction hypothesis, we assume that $\mu^{(k-1)}|_{M} = \bigotimes_{i=0}^{k-1} \mu_{i}|_{M}$. For ease of notation, we set $\lambda = \bigotimes_{i=0}^{k-1} \mu_{i}$ and show the induction step: let $\varphi(x_0,...,x_k)$ be any formula in $\mathcal{L}_{x_0,...,x_k}(M)$. Since the product of smooth measures is smooth (by $(iii)$ of Fact \ref{KM:imp2}), we have that $\lambda$ is smooth over $N_{k-1}$. In particular, $\lambda$ is invariant over $N_{k-1}$. We let $\overline{x} = (x_0,...,x_{k-1})$ and $\theta(x_{k};\overline{x}) = \varphi(x_{0},...,x_{k})$. We consider the following computation followed by a list of justifications. \begin{equation*} \mu_{k} \otimes \lambda (\varphi(x_0,...,x_{k})) = \int_{S_{\overline{x}}(N_{k})} F_{\mu_{k}}^{\theta} d(\lambda|_{N_{k}}) \overset{(a)}{=} \int_{S_{x_{k}}(N_{k})} F_{\lambda}^{\theta^*}d(\mu_{k}|_{N_{k}}) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu_{k}|_{N_{k-1}}) \overset{(c)}{=} \int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu|_{N_{k-1}}) \overset{(a)}{=}\int_{S_{\overline{x}}(N_{k-1})}F_{\mu}^{\theta}d(\lambda|_{N_{k-1}}) \end{equation*} \begin{equation*} \overset{(d)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\lambda|_M) \overset{(e)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\mu^{(k-1)}|_M) = \mu \otimes \mu^{(k-1)} (\varphi(x_0,...,x_{k})). \end{equation*} We provide the following justifications: \begin{enumerate}[($a$)] \item Smooth measures commute with invariant measures. \item Changing the space of integration, since $\lambda$ is invariant over $N_{k-1}$. \item By construction of smooth sequences, we have that $\mu_{k}|_{N_{k - 1}} = \mu|_{N_{k -1}}$. \item Changing the space of integration, since $\mu$ is invariant over $M$. \item By our induction hypothesis. \qedhere \end{enumerate} \end{proof} We now begin the proof of our main theorem. Again, the proof is similar to both the generically stable case in the previous section and, even more so, to the proof of Lemma 2.8 in \cite{Invariant}. Here, the major difference is that we replace the Morley sequence in that proof with a countable model, $N_{\omega}$, which ``contains'' a smooth sequence in $\mu$ over $M$. Then we find a sequence of elements in $(M^{x})^{< \omega}$ such that the associated average measures converge to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. After choosing an eventually indiscernible subsequence, we know from our NIP assumption that this new sequence converges to a global measure $\nu$ in $\mathfrak{M}_{x}(\mathcal{U})$.
Finally, we demonstrate that $\nu^{(\omega)}|_{M} = \mu^{(\omega)}|_{M}$, which completes the proof. \begin{theorem}[$T$ is NIP] Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ be finitely satisfiable over a countable model $M$. Then there exists a sequence $(\overline{a}_{i})_{i \in \omega}$ of elements, each in $(M^{x})^{<\omega}$, such that for any $\theta(x) \in \mathcal{L}_{x}(\mathcal{U})$, we have that \begin{equation*} \lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i})(\theta(x)) = \mu(\theta(x)). \end{equation*} \end{theorem} \begin{proof} Choose a smooth sequence $(\mu_{i},N_i)_{i \in \omega}$ in $\mu$ over $M$. By Proposition \ref{existence}, we may choose this sequence so that for each $i \in \omega$, $N_i$ is countable. In particular, this implies that $N_{\omega}$ is a countable model. We begin by constructing a sequence of elements $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that $(\operatorname{Av}(\overline{a}_{i})|_{N_{\omega}})_{i \in \omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Since $N_{\omega}$ is countable, we let $(\theta_{i}(x))_{i \in \omega}$ be an enumeration of the formulas in $\mathcal{L}_{x}(N_{\omega})$. Since $\mu$ is finitely satisfiable over $M$, for each $k \in \omega$ we can find $\overline{a}_{k} \in (M^{x})^{<\omega}$ such that for any $j \leq k$, we have that \begin{equation*} |\mu(\theta_{j}(x)) - \operatorname{Av}(\overline{a}_{k})(\theta_{j}(x))| < \frac{1}{k}. \end{equation*} By construction, it is clear that the sequence $(\operatorname{Av}(\overline{a}_i)|_{N_{\omega}})_{i\in \omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Now, we let $(\overline{c}_i)_{i \in \omega}$ be a subsequence of $(\overline{a}_i)_{i \in \omega}$ so that $(\overline{c}_i)_{i \in \omega}$ is eventually indiscernible over $N_{\omega}$. Then the sequence $(\operatorname{Av}(\overline{c}_i))_{i \in \omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$ by Proposition \ref{converge}. Assume that $(\operatorname{Av}(\overline{c}_{i}))_{i \in \omega}$ converges to some measure $\nu \in \mathfrak{M}_{x}(\mathcal{U})$. Hence, $\nu$ is finitely satisfiable over $M$ by $(i)$ of Proposition \ref{finitesat} and therefore $\nu$ is invariant over $M$. We show that $\nu^{(\omega)}|_{M} = \mu^{(\omega)}|_{M}$. This will conclude the proof by $(iii)$ of Fact \ref{HPSFact}. Since $(\overline{c}_{i})_{i \in \omega}$ is a subsequence of $(\overline{a}_{i})_{i \in \omega}$, it follows that $\nu|_{N_{\omega}} = \mu|_{N_{\omega}}$ and therefore $\nu|_{M} = \mu|_{M}$. We now proceed by induction. Assume that $\nu^{(k-1)}|_{M} = \mu^{(k-1)}|_{M}$. Fix $\varphi(x_0,...,x_{k})$ in $\mathcal{L}_{x_0,...,x_k}(M)$. For ease of notation, set $\lambda = \bigotimes_{i=0}^{k-1} \mu_{i}$. We recall that $\lambda$ is smooth over $N_{\omega}$ (see Fact \ref{KM:imp2}). By Proposition \ref{smoothinv}, $\mu^{(k-1)}|_{M} = \lambda|_{M}$. We let $\overline{x} = (x_0,...,x_{k-1})$ and let $\theta(x_{k};\overline{x}) = \varphi(x_0,...,x_k)$. We now consider the critical computation followed by a small glossary of justifications.
\begin{equation*} \nu^{(k)}(\varphi(x_0,...,x_{k})) = \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta} d(\nu^{(k-1)}|_{M}) \overset{(a)}{=} \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta} d(\mu^{(k-1)}|_M) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta}d(\lambda|_{M}) \overset{(c)}{=} \int_{S_{\overline{x}}(N_{\omega})} F_{\nu}^{\theta}d(\lambda|_{N_{\omega}}) \overset{(d)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\lambda}^{\theta^*}d(\nu|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(e)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\lambda}^{\theta^*}d(\mu|_{N_\omega}) \overset{(d)}{=} \int_{S_{\overline{x}}(N_{\omega})} F_{\mu}^{\theta} d(\lambda|_{N_{\omega}}) \overset{(c)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\lambda|_{M}) \end{equation*} \begin{equation*} \overset{(b)}{=}\int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\mu^{(k-1)}|_{M}) = \mu^{(k)}(\varphi(x_0,...,x_{k})). \end{equation*} We provide the following justifications: \begin{enumerate}[(a)] \item Induction hypothesis. \item $\mu^{(k-1)}|_M = \lambda|_M$. \item Changing the space of integration. \item Smooth measures commute with invariant measures. \item $\nu|_{N_{\omega}} = \mu|_{N_{\omega}}$. \qedhere \end{enumerate} \end{proof} We now observe that we have another proof of the theorem that global measures in NIP theories which are definable and finitely satisfiable are also finitely approximated. \begin{corollary} If $T'$ is an NIP theory (countable or not) and $\mu$ is $\operatorname{dfs}$ over $M$, then $\mu$ is finitely approximated over $M$. \end{corollary} \begin{proof} After restricting to a countable language, we still have a $\operatorname{dfs}$ measure (by \cite[Proposition 2.9]{CoGan}). By Proposition \ref{m:countable}, $\mu$ restricted to this language is $\operatorname{dfs}$ over a countable model, $M_0$. By the previous result, $\mu$ is sequentially approximated over $M_0$. Since $\mu$ is also definable, an application of Proposition \ref{Mazur} yields the result. \end{proof} \begin{observation} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $\mu$ is sequentially approximated over $M$ if \begin{enumerate} \item $T$ is stable, and $\mu$ is invariant over $M$, \item $T$ is NIP, $|M| = \aleph_0$, and $\mu$ is finitely satisfiable over $M$, or \item $\mu$ is finitely approximated over $M$. \end{enumerate} \end{observation} Finally, one may ask what happens in the local context. We remark that there exist two proofs of a local version of Theorem T2, both of which rely on an important result of Bourgain, Fremlin, and Talagrand whose connection to model theory is (by now) well-known (e.g. \cite{IBFT,SimonBFT, Khanaki1,GannNIP}). Chronologically, the first proof of the following theorem is implicit in the work of Khanaki (see \cite[Remark 3.21, Theorem 3.26]{Khanaki1}), through the observation that measures are types over models of the randomization in continuous model theory, together with \cite[Proposition 1.1]{Ben2}. \begin{theorem}\label{Khanaki}Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is finitely satisfiable over $M$ where $|M| = \aleph_0$, and $\varphi(x,y)$ is an NIP formula. Then there exists a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that for each $b \in \mathcal{U}^{y}$, \begin{equation*}\lim_{i \to \infty} \operatorname{Av}(\overline{a}_i)(\varphi(x,b)) = \mu(\varphi(x,b)).
\end{equation*} \end{theorem} \noindent There is another, later proof for the case of Keisler measures via the VC theorem (see \cite[Lemma 4.7]{GannNIP}). \subsection{Smooth sequences and generically stable measures in NIP theories} We now give an equivalent characterization of generically stable measures in NIP theories. We invite the reader to review the definition of a generically stable type prior to reading this section. Recall the following theorem due to Hrushovski, Pillay, and Simon \cite[Theorem 3.2]{HPS}. \begin{theorem}[T is NIP]\label{genstab:equiv} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent. \begin{enumerate}[($i$)] \item $\mu$ is dfs. \item $\mu$ is finitely approximated. \item $\mu$ is fim (see \cite[Definition 2.7]{HPS}). \item $\mu$ is invariant and $\mu_{x} \otimes \mu_{y} = \mu_{y} \otimes \mu_{x}$. \end{enumerate} Moreover, a Keisler measure (in an NIP theory) is called \textbf{generically stable} if it satisfies any/all of $(i) - (iv)$. \end{theorem} We will now show that smooth sequences can also give a characterization of generically stable measures in NIP theories. \begin{lemma}[T is NIP] Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Suppose that $\mu$ is generically stable over $M$. Then for any smooth sequence $(\mu_i,N_i)_{i \in \omega}$ in $\mu$ over $M$, we have that $\lim_{i \to \infty} \mu_i = \mu$ in $\mathfrak{M}_x(\mathcal{U})$. \end{lemma} \begin{proof} Since $(\mu_i,N_i)_{i \in \omega}$ is a smooth sequence in $\mu$ over $M$, the measure $\bigotimes_{i=0}^{\omega} \mu_i$ is $M$-indiscernible by Proposition \ref{smoothinv}. By $(i)$ of Fact \ref{HPSFact}, we know that $\lim_{i \to \infty} \mu_{i} = \nu$ for some $\nu \in \mathfrak{M}_{x}(\mathcal{U})$. Since each $\mu_i$ is finitely satisfiable over $N_i$, it follows that $\nu$ is finitely satisfiable over $N_{\omega}$. By $(iii)$ of Fact \ref{HPSFact}, it is enough to show that $\nu^{(\omega)}|_{N_{\omega}} = \mu^{(\omega)}|_{N_{\omega}}$. The base case is trivial. Assume that $\nu^{(k-1)}|_{N_{\omega}} = \mu^{(k-1)}|_{N_{\omega}}$. Fix $\varphi(x_0,...,x_k) \in \mathcal{L}_{x_0,...,x_k}(N_{\omega})$ and $\epsilon > 0$. Let $\overline{x} = (x_0,...,x_{k-1})$ and $\theta(x_k;\overline{x}) = \varphi(x_0,...,x_k)$. Since $\mu$ is generically stable over $M$, $\mu^{(k-1)}$ is generically stable over $M$ ($(v)$ of Fact \ref{KM:imp2}) and so also definable over $N_{\omega}$. Therefore, by $(v)$ of Fact \ref{KM:imp}, there exist formulas $\psi_1(x_{k}),...,\psi_{n}(x_{k}) \in \mathcal{L}_{x_{k}}({N_{\omega}})$ and real numbers $r_1,...,r_n \in [0,1]$ so that \begin{equation*} \sup_{q \in S_{x_{k}}(N_{\omega})} | F_{\mu^{(k-1)}}^{\theta^{*}}(q) - \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_i(x_k)}(q)| < \epsilon. \end{equation*} Consider the following sequence of equations followed by a short list of justifications.
\begin{equation*} \nu^{(k)}(\varphi(x_0,...,x_{k})) = \int_{S_{\bar{x}}(N_{\omega})} F_{\nu}^{\theta} d( \nu^{(k-1)}|_{N_{\omega}}) \overset{(a)}{=} \int_{S_{\bar{x}}(N_{\omega})} F_{\nu}^{\theta} d(\mu^{(k-1)}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\mu^{(k-1)}}^{\theta^{*}} d(\nu|_{N_{\omega}}) \approx_{\epsilon} \int_{S_{x_{k}}(N_{\omega})} \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_{i}(x_{k})} d(\nu|_{N_{\omega}}) \end{equation*} \begin{equation*} = \sum_{i=1}^{n} r_{i} \nu(\psi_{i}(x_{k})) \overset{(c)}{=} \sum_{i=1}^{n} r_{i} \mu(\psi_{i}(x_{k})) = \int_{S_{x_{k}}(N_{\omega})} \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_{i}(x_{k})} d(\mu|_{N_{\omega}}) \end{equation*} \begin{equation*} \approx_{\epsilon} \int_{S_{x_{k}}(N_{\omega})} F_{\mu^{(k-1)}}^{\theta^{*}} d(\mu|_{N_{\omega}}) \overset{(b)}{=} \int_{S_{\bar{x}}(N_{\omega})} F_{\mu}^{\theta} d(\mu^{(k-1)}|_{N_{\omega}}) = \mu^{(k)}(\varphi(x_0,...,x_{k})). \end{equation*} \begin{enumerate}[(a)] \item Induction hypothesis. \item (T is NIP) Generically stable measures commute with invariant measures (see $(b)$ of Fact \ref{KM:imp2}). \item Base case. \end{enumerate} As $\epsilon$ was arbitrary, this proves the result. \end{proof} \begin{lemma}[T is NIP] Assume that $\mu$ is $M$-invariant. If for every smooth sequence $(\mu_{i},N_i)_{i \in \mathbb{N}}$ in $\mu$ over $M$ we have that $\lim_{i \to \infty} \mu_{i} = \mu$, then $\mu$ is generically stable over $M$. \end{lemma} \begin{proof} Since $T$ is NIP, all invariant measures are Borel definable. By Theorem \ref{genstab:equiv}, it suffices to show that $\mu$ commutes with itself, i.e. $\mu_x \otimes \mu_y = \mu_y \otimes \mu_x$. Fix $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Let $M_1$ be a small model such that $ M \prec M_1$ and $M_1$ contains all the parameters from $\varphi(x,y)$. We choose a smooth sequence $(\mu_{i,x}, N_i)_{i \in \omega}$ in $\mu_{x}$ over $M_1$ and let $N_\omega = \bigcup_{i \in \omega} N_i$. By construction, the sequence $(\mu_{i,x},N_i)_{i \in \omega}$ is also a smooth sequence in $\mu_{x}$ over $M$. Consider the following computation. \begin{equation*} \mu_{x}\otimes\mu_{y}(\varphi(x,y))= \int_{S_{y}(M_{1})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{M_{1}}) \overset{(a)}{=} \int_{S_{y}(N_{\omega})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(b)}{=}\lim_{i\to\infty}\int_{S_{y}(N_{\omega})}F_{\mu_{i,x}}^{\varphi}d(\mu_{y}|_{N_{\omega}}) \overset{(c)}{=} \lim_{i\to\infty}\int_{S_{x}(N_{\omega})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{i,x}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(d)}{=} \lim_{i \to \infty} \int_{S_{x}(M_{1})} F_{\mu_y}^{\varphi^{*}} d(\mu_{i,x}|_{M_1}) \overset{(e)}{=} \lim_{i \to \infty} \int_{S_{x}(M_1)} F_{\mu_{y}}^{\varphi^{*}} d(\mu_{x}|_{M_1}) \end{equation*} \begin{equation*} = \int_{S_{x}(M_1)} F_{\mu_{y}}^{\varphi^{*}} d(\mu_{x}|_{M_1}) = \mu_{y} \otimes \mu_{x} (\varphi(x,y)). \end{equation*} \noindent We provide the following justifications: \begin{enumerate}[$(a)$] \item Changing the space of integration. \item Dominated convergence theorem. \item Smooth measures commute with Borel definable measures. \item Since $\mu_{y}$ is $M_1$-invariant. \item Since $\mu_{i,x}|_{M_{1}} = \mu_{x}|_{M_1}$ for any $i \in \omega$. \qedhere \end{enumerate} \end{proof} \begin{theorem}[T is NIP] Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent: \begin{enumerate} \item $\mu$ is generically stable over $M$.
\item For any smooth sequence $(\mu_i,N_i)_{i \in \omega}$ in $\mu$ over $M$, \begin{equation*} \lim_{i \to \infty} \mu_i = \mu \text{ in $\mathfrak{M}_{x}(\mathcal{U})$.} \end{equation*} \end{enumerate} \end{theorem} \begin{proof} Follows directly from the previous two lemmas. \end{proof} \section{Local Measures revisited} We generalize the main theorem of \cite{GannNIP}. Fix a partitioned NIP formula $\varphi(x,y)$ and let $\mu$ be a $\varphi$-measure. In \cite{GannNIP}, we proved two main theorems. We showed that if $\varphi(x,y)$ is an NIP formula and $\mu$ is $\varphi$-definable and finitely satisfiable over a \textbf{countable} model $M$, then $\mu$ is $\varphi$-finitely approximated. We then proved that if $\mu$ is definable and finitely satisfiable over any small model $M$, then $\mu$ is finitely approximated in $M$, by reducing to the previous theorem. But this was somewhat unsatisfactory, and the following question was left open: if $\mu$ is $\varphi$-definable and finitely satisfiable over a \textbf{small} model, is $\mu$ then $\varphi$-finitely approximated? We give a positive answer to this question by modifying one of the important technical lemmas in the proof. Let us first recall some definitions. \begin{definition}\label{local} Fix $\mathcal{U}$ and a formula $\varphi(x,y)$ in $\mathcal{L}(\mathcal{U})$. \begin{enumerate} \item $\mathcal{L}_{\varphi}(\mathcal{U})$ denotes the Boolean algebra of definable sets of $\mathcal{U}^{x}$ generated by the collection $\{\varphi(x,b): b \in \mathcal{U}\}$. \item A $\varphi$-measure is a finitely additive measure on the Boolean algebra $\mathcal{L}_{\varphi}(\mathcal{U})$. \item The collection of all $\varphi$-measures is denoted $\mathfrak{M}_{\varphi}(\mathcal{U})$. \item Let $M \prec \mathcal{U}$ and assume that $M$ contains all the parameters from $\varphi(x,y)$. For any $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$, we say that $\mu$ is $(M,\varphi)$-invariant if for any $b,c \in \mathcal{U}^{y}$ such that $\operatorname{tp}(b/M) = \operatorname{tp}(c/M)$, we have that $\mu(\varphi(x,b)) = \mu(\varphi(x,c))$. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. If $\mu$ is $(M,\varphi)$-invariant, then we can define the fiber map $F_{\mu,M}^{\varphi}: S_{y}(M) \to [0,1]$ via $F_{\mu,M}^{\varphi}(q) = \mu(\varphi(x,b))$ where $b \models q$. When $M$ is clear from context, we write $F_{\mu,M}^{\varphi}$ simply as $F_{\mu}^{\varphi}$. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be $\varphi$-definable if the map $F_{\mu,M}^{\varphi}: S_{y}(M) \to [0,1]$ is continuous. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be definable if for any formula $\theta(x,\overline{y})$ in the algebra generated by $\{\varphi(x,y_i): i \in \mathbb{N}\}$, $\mu$ is $(M,\theta)$-invariant and the map $F_{\mu,M}^{\theta}:S_{\overline{y}}(M) \to [0,1]$ is continuous. \item For any $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is said to be finitely satisfiable in $M$ if for every $\theta(x) \in \mathcal{L}_{\varphi}(\mathcal{U})$ such that $\mu(\theta(x)) > 0$, there exists some $a \in M^{x}$ so that $\mathcal{U} \models \theta(a)$. \item For each $a \in M^{x}$ we let $F_{a}^{\varphi}: S_{y}(M) \to [0,1]$ via $F_{a}^{\varphi} = \mathbf{1}_{\varphi(a,y)}$. We denote the collection of such functions as $\mathbb{F}_{M}$. We let $\operatorname{conv}(\mathbb{F}_M)$ be the collection of convex combinations of elements in $\mathbb{F}_{M}$.
We let $F = [0,1]^{S_{y}(M)}$, endowed with the Tychonoff topology; if $A \subset F$, we let $\operatorname{cl}(A)$ denote its closure in this space, so that the set $\operatorname{cl}(\operatorname{conv}(A))$ is well-defined. \end{enumerate} \end{definition} \noindent Recall the following facts about $\varphi$-measures, which can be found in \cite{GannNIP}. \begin{fact}\label{local:facts} Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$ and $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $\mu$ is finitely satisfiable or $\varphi$-definable over $M$, then $\mu$ is $(M,\varphi)$-invariant. \item If $\mu$ is $\varphi$-definable over $M$, then $\mu$ is $(M_{0},\varphi)$-invariant for some $M_0 \prec M$ such that $|M_0| = \aleph_0$. \item If $\mu$ is finitely satisfiable over $M$, then $F_{\mu,M}^{\varphi}$ is in $\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M}))$. \item If $|M| = \aleph_0$ and $\varphi(x,y)$ is NIP, there exists a sequence of elements $(g_i)_{i \in \omega}$ with each $g_i \in \operatorname{conv}(\mathbb{F}_M)$ so that $\lim_{i \to \infty} g_i = F_{\mu,M}^{\varphi}$. \end{enumerate} \end{fact} \noindent The following lemma is essentially the \textit{missing lemma} from \cite{GannNIP}. The missed observation is that one can consider finitely many parameters at once (instead of a single parameter). \begin{lemma}\label{meas:lemma} Suppose that $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$ and $\mu$ is finitely satisfiable in a small submodel $N$ and $(M,\varphi)$-invariant. Then $F_{\mu,M}^{\varphi} \in \operatorname{cl}(\operatorname{conv}(\mathbb{F}_M))$. \end{lemma} \begin{proof} The proof is similar to the proof for types \cite[Lemma 2.18]{Sibook} as well as the proof for measures \cite[Proposition 4.13]{GannNIP} (which has both a stronger assumption and conclusion). It suffices to show that for any finite collection of types $p_1,...,p_n \in S_{y}(M)$ and $\epsilon > 0$ there exists $\overline{a} \in (M^{x})^{< \omega}$ such that $F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_i) \approx_{\epsilon} F_{\mu,M}^{\varphi}(p_i)$ for each $i \leq n$. Fix $p_1,...,p_n \in S_{y}(M)$ and $\epsilon >0$. Choose $b_i \models p_i$ for $i \leq n$. Let $q = \operatorname{tp}(N/M) \in S_{|N|}(M)$. Let $\hat{q} \in S_{|N|}(\mathcal{U})$ be such that $\hat{q} \supset q$ and $\hat{q}$ is finitely satisfiable in $M$, i.e. $\hat{q}$ is a global coheir of $q$. Let $N_{1} \models \hat{q}|_{Mb_1,...,b_n}$. By compactness, there exist elements $b_1',...,b_n' \in \mathcal{U}$ such that $\operatorname{tp}(N_1 b_1,...,b_n/M) = \operatorname{tp}(Nb_1',...,b_n'/M)$. Since $\mu$ is $(M,\varphi)$-invariant, we have that \begin{equation*} F_{\mu,M}^{\varphi}(p_i) = \mu(\varphi(x,b_i)) = \mu(\varphi(x,b'_i)), \end{equation*} for each $i \leq n$. Since $\mu$ is finitely satisfiable in $N$, there exist some $m$ and $\overline{c} \in (N^{x})^{m}$ such that $\operatorname{Av}(\overline{c})(\varphi(x,b'_i)) \approx_{\epsilon} \mu(\varphi(x,b_i'))$ for $i \leq n$. Let $B_i =\{j \leq m: \mathcal{U} \models \varphi(c_j,b'_i)\}$. Now consider the formula \begin{equation*} \theta(x_1,...,x_m,y_1,...,y_n) = \bigwedge_{i \leq n} \Big( \bigwedge_{j\in B_i} \varphi(x_{j},y_{i}) \wedge \bigwedge_{j \not \in B_i} \neg \varphi(x_{j},y_{i}) \Big). \end{equation*} By construction, $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(\overline{c},\overline{b'}/M)$ and so, for an appropriate choice of indices, $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(Nb_1',...,b_n'/M)$.
Hence $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(N_1b_1,...,b_n/M)$ and so $\theta(\overline{x},\overline{b}) \in \operatorname{tp}(N_1/Mb_1,...,b_n) \subset \hat{q}$. Since $\hat{q}$ is finitely satisfiable in $M$, there exists $\overline{a} \in (M^{x})^{m}$ such that $\mathcal{U} \models \theta(\overline{a},\overline{b})$. By construction, we have that for any $i \leq n$, \begin{equation*} F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_i) = \operatorname{Av}(\overline{a})(\varphi(x,b_i)) = \operatorname{Av}(\overline{c})(\varphi(x,b'_i)) \approx_{\epsilon} \mu(\varphi(x,b'_i)) = F_{\mu,M}^{\varphi}(p_i). \end{equation*} This concludes the proof. \end{proof} \begin{theorem}\label{main:Gan} Fix a formula $\varphi(x,y)$ and a small model $M$ containing all the parameters from $\varphi(x,y)$. Assume that $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. If \begin{enumerate} \item $\varphi(x;y)$ is NIP, \item $\mu$ is $\varphi$-definable over $M$, \item and $\mu$ is finitely satisfiable in $M$, \end{enumerate} then for every $\epsilon > 0$, there exists $\overline{a} = (a_1,...,a_n) \in (M^{x})^{<\omega}$ such that \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a})(\varphi(x,b))| < \epsilon. \end{equation*} \end{theorem} \begin{proof} We remark that the proof is similar to that of Proposition \ref{Mazur}. Since $\mu$ is $\varphi$-definable over $M$, $\mu$ is $(M_0,\varphi)$-invariant where $M_0$ is a countable submodel of $M$. By Lemma \ref{meas:lemma}, the map $F_{\mu,M_{0}}^{\varphi} \in \operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M_0}))$. By Fact \ref{local:facts}, there exists a sequence $(g_i)_{i \in \omega}$ so that $\lim_{i \to \infty} g_i = F_{\mu,M_0}^{\varphi}$. By Mazur's lemma, for every $\epsilon > 0$, there exists a finite set $I \subset \mathbb{N}$ and positive real numbers $\{r_i: i \in I\}$ such that $\sum_{i \in I} r_i = 1$ and \begin{equation*} \sup_{q \in S_{y}(M_{0})} |F_{\mu,M_0}^{\varphi}(q) - \sum_{i \in I} r_i g_i(q)| < \epsilon. \end{equation*} The map $\sum_{i \in I} r_i g_{i}$ can clearly be uniformly approximated by an average function. More explicitly, there exists $ \overline{d} \in (M^{x})^{<\omega}$ such that \begin{equation*} \sup_{q \in S_{y}(M)} |\sum_{i \in I} r_i g_i (q) - F^{\varphi}_{\operatorname{Av}(\overline{d}),M}(q)| <\epsilon. \end{equation*} Hence \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{d})(\varphi(x,b))| = \sup_{q \in S_{y}(M)} |F_{\mu,M}^{\varphi}(q) - F_{\operatorname{Av}(\overline{d}),M}^{\varphi}(q)| < 2\epsilon, \end{equation*} which completes the proof. \end{proof}
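\medskip \noindent \textbf{A toy illustration.} The following Python sketch is an informal numerical illustration of the kind of uniform approximation appearing in Theorem \ref{main:Gan}; it is not part of the formal development, and the particular formula and measure are our own choices. For the NIP formula $\varphi(x,y) := x \leq y$ on $[0,1]$, with $\mu$ playing the role of the Lebesgue measure in the variable $x$, one has $\mu(\varphi(x,b)) = b$, and the average over $n$ grid points already approximates $\mu$ uniformly in the parameter $b$, with error on the order of $1/(2n)$:

\begin{verbatim}
# Informal illustration only: phi(x, y) := (x <= y) on [0, 1],
# with mu the Lebesgue measure in the variable x, so that
# mu(phi(x, b)) = b for b in [0, 1].

def mu_of_phi(b):
    # mu(phi(x, b)) for mu = Lebesgue measure on [0, 1]
    return min(max(b, 0.0), 1.0)

def av(points, b):
    # Av(a_bar)(phi(x, b)): fraction of the chosen points a with a <= b
    return sum(1 for a in points if a <= b) / len(points)

n = 100
a_bar = [(i + 0.5) / n for i in range(n)]   # grid midpoints play the role of a_bar

# estimate sup_b |mu(phi(x, b)) - Av(a_bar)(phi(x, b))| over a fine parameter grid
sup_err = max(abs(mu_of_phi(b) - av(a_bar, b))
              for b in (j / 10000 for j in range(10001)))
print(sup_err)   # roughly 0.005, i.e. about 1/(2n)
\end{verbatim}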
\section{\bf Introduction} \label{s:intro} The investigation of strange particle production in neutral current, deep inelastic scattering (DIS) interactions could provide information about the $s$-quarks in the nucleon, about the boson-gluon fusion process and, above all, about the parton fragmentation process. Strange particle production has been measured previously by experiments where the $\gamma^* p$ centre-of-mass energy, $W$, is at least one order of magnitude lower than at HERA \cite{BAKER,AMMOSOV,WA59,WA21b}. The ratio of strange particle to light non-strange particle production of approximately 1:5 is ascribed to a reduced probability of strange quark creation in the parton fragmentation chain. In simulation programs based on the Lund scheme it is parametrised by the strange quark suppression factor $P_s/P_u$. Here $P_s$ and $P_u$ are the probabilities for creating $s$- or $u,d$-quarks from the vacuum during the fragmentation process. A detailed review of our knowledge on heavy quark suppression is given in \cite{WRO}. In hadron-hadron collisions an increasing $P_s/P_u$ is found with increasing centre-of-mass energy. Indications of a dependence of the strangeness suppression factor on the region of phase space under investigation have also been reported. The values found vary between about 0.15 and 0.55, with a mean value close to 0.3 (see for example \cite{E665,PREVIOUS}). The parameters for the hadronisation process in the present day electron-proton Monte Carlo event generators are obtained from fits to $e^+e^-$ data and are assumed to be the same in DIS experiments due to jet universality. The longitudinal phase space of the $\gamma^* p$ interactions at HERA can be divided into three regions where different processes are expected to dominate; these regions have counterparts in $e^+e^-$ or hadron-hadron scattering: a) the fragmentation region of the struck quark, which resembles that of one of the pair-produced quarks in $e^+e^-$ annihilation experiments; b) the fragmentation of the proton remnant, which resembles the fragmentation in hadron colliders; and c) the hadronic centre-of-mass central rapidity region, where the colour flow between the struck quark and the proton remnant evolves. The latter region exists in both $e^+e^-$ and hadron collider experiments. The acceptance of our central tracking detector allows us to study \k\ production in the fragmentation region of the struck quark and the central rapidity region. The part of the event which is well inside our detector acceptance is dominated by particles originating from the central rapidity region. In about 10\% of the DIS events no proton remnant is detected in the ZEUS detector, resulting in a large rapidity gap (LRG) between the acceptance limit in the proton direction and the first visible particle in the detector \cite{JETLRG,H1LRG}. The properties of these events are consistent with the assumption that the exchanged photon is scattered off a colourless object emitted by the proton. This object is generically called a pomeron. There exist indications that the pomeron has a partonic substructure \cite{UA8,JETLRG}, but the nature of its constituents is still under investigation. A natural assumption is that they are quarks and gluons or a combination of both. It is expected that the strange quark content of the pomeron could affect the strange particle multiplicity in the final state of these events.
The investigation of strange particle production allows us to connect results from $e^+e^-$ experiments and from hadron collider experiments. This paper is a first step of such a program. We compare the \k\ and \lam\ multiplicities\footnote{Throughout this paper, a reference to a particle includes a reference to its antiparticle.} and their momentum and angular distributions in the new kinematic region of HERA with extrapolations from Monte Carlo models based on the results of lower energy experiments. The $Q^2$ evolution of the \k\ multiplicity is studied. The production of \k 's in events with a large rapidity gap is compared to that in events without a large rapidity gap. All studies are performed in the HERA laboratory frame and are restricted to a kinematic range where the tracking acceptance is high and well understood. \section{\bf Experimental setup} \subsection*{\bf HERA machine conditions} The data were collected at the electron-proton collider HERA using the ZEUS detector during the 1993 running period. HERA collided 26.7~GeV electrons with 820~GeV protons. 84 bunches were filled for each beam and, in addition, 10 electron and 6 proton bunches were left unpaired for background studies. The typical electron and proton currents were 10~mA, leading to a typical instantaneous luminosity of $6 \cdot 10^{29}~{\rm cm}^{-2} {\rm s}^{-1}$. An integrated luminosity of 0.55~pb$^{-1}$ was collected in 1993. \subsection*{\bf The ZEUS detector} ZEUS is a multipurpose, magnetic detector which has been described elsewhere \cite{ZEUS1}. Here we give a brief description concentrating on those parts of the detector relevant for the present analysis. Charged particles are tracked by the inner tracking detectors, which operate in a magnetic field of 1.43~T provided by a thin superconducting solenoid surrounding them. Immediately surrounding the beampipe there is a cylindrical drift chamber, the vertex detector (VXD), which consists of 120 radial cells, each with 12 sense wires \cite{VXD}. The achieved resolution is 50~$\mu$m in the central region of a cell and 150~$\mu$m near the edges. Surrounding the VXD is the central tracking detector (CTD), which consists of 72 cylindrical drift chamber layers, organised into 9 ``superlayers'' \cite{CTD}. Each superlayer consists either of wires parallel (axial) to the beam axis or of wires inclined at a small angle to give a stereo view. With the present understanding of the chamber, a spatial resolution of 260~$\mu$m has been achieved. The hit efficiency of the chamber is greater than 95\%. In events with charged particle tracks, using the combined data from both chambers, reconstructed primary vertex position resolutions of 0.6~cm in the $Z$ direction and 0.1~cm in the $XY$ plane are measured\footnote{The ZEUS coordinate system is defined as right-handed with the $Z$ axis pointing in the proton beam direction and the $X$ axis horizontal, pointing towards the centre of HERA. The polar angle $\theta$ is defined with respect to the $Z$-direction.}. The resolution in transverse momentum for full length tracks is $\sigma(p_{\rm T}) / p_{\rm T} = \sqrt{ (0.005~ p_{\rm T})^2 + (0.016)^2} $ ($p_{\rm T}$ in GeV). The solenoid is surrounded by a high resolution uranium-scintillator calorimeter divided into three parts: forward (FCAL), barrel (BCAL) and rear (RCAL). Holes of 20 $\times$ 20~cm$^2$ in the centre of FCAL and RCAL are required to accommodate the HERA beam pipe.
Each of the calorimeter parts is subdivided into towers which in turn are segmented longitudinally into electromagnetic (EMC) and hadronic (HAC) sections. A section of a tower is called a cell and is read out by two photomultiplier tubes. A detailed description of the calorimeter is given in \cite{CALRes}. For measuring the luminosity via the Bethe-Heitler process $ e p \rightarrow e' p' \gamma$, as well as for tagging very small $Q^2$ processes, two lead-scintillator calorimeters are used \cite{LUMIhard}. Bremsstrahlung photons emerging from the electron-proton interaction region at angles $\theta'_{\gamma}\le$ 0.5~mrad with respect to the electron beam axis hit the photon calorimeter at 107~m from the interaction point (IP). Electrons emitted from the IP at scattering angles less than or equal to 6~mrad and with energies between 20\% and 90\% of the incident electron energy are deflected by beam magnets and hit the electron calorimeter placed 35~m from the IP. \section{\bf HERA kinematics} The kinematics of deep inelastic scattering processes at HERA, $e^- p \rightarrow e^- h$, where $h$\ is the hadronic final state, can be described by the Lorentz invariant variables $Q^2$, $x$ and $y$. Here $-Q^2$ is the square of the four-momentum transfer between the incoming electron and the scattered electron; $x$, in the na\"{\i}ve quark-parton model, is the fractional momentum of the struck quark in the proton, and $y$ is the relative energy transfer of the electron to the hadronic system. The variables are related by $ Q^2 = s x y $, where $s$\ is the squared invariant mass of the $e p$ \ system. $Q^2$, $x$ and $y$ can be calculated from the kinematic variables of the scattered electron, from the hadronic final state variables, or from a combination of both. The optimal reconstruction method depends on the event kinematics and the detector resolution. In this paper we use the double angle method \cite{Q2DA} to calculate the $Q^2$ and $x$ variables: \begin{eqnarray*} Q^2_{DA} & = &4 E_e^2 \cdot \ { \sin \gamma_h \ (1+ \cos \theta_e) \over \sin \gamma_h + \sin \theta_e - \sin (\gamma_h + \theta_e)}, \\ x_{DA} & = &{E_e \over E_p} \cdot { \sin \gamma_h + \sin \theta_e + \sin(\gamma_h + \theta_e) \over \sin \gamma_h + \sin \theta_e - \sin (\gamma_h + \theta_e)}. \end{eqnarray*} Here $E_e$ and $E_p$ are the initial electron and proton energies; $\theta_e$ is the electron scattering angle with respect to the incident proton direction and $\gamma_h$ is the polar angle of a massless object balancing the momentum vector of the scattered electron to satisfy four-momentum conservation. In the na\"{\i}ve quark-parton model $\gamma_h$ is the scattering angle of the struck quark. It is determined from the hadronic energy flow in the calorimeter: $$\cos \gamma_h = {(\sum p_X)_h^2 + (\sum p_Y)_h^2 - (\sum E-p_Z)_h^2 \over (\sum p_X)_h^2 + (\sum p_Y)_h^2 + (\sum E-p_Z)_h^2}.$$ Here the sums run over all calorimeter cells which are not assigned to the scattered electron and ($p_X,p_Y,p_Z$) is the momentum vector assigned to each cell of energy $E$. The cell angles are calculated from the geometric centres of the cells and the vertex position of the event. Using the hadronic energy flow of the final state, $y$\ can be calculated according to the Jacquet-Blondel method \cite{JacBlon}: $$y_{JB} = {1 \over 2 E_e } \ \sum_h \ (E-p_{Z})_h. 
$$ For background rejection we also calculate $y$ using the electron information: $$ y_e = 1 - {E^{\prime}_e \over 2 E_e} \ (1-\cos \theta_e),$$ where $E'_e$ is the energy of the scattered electron. The square of the centre-of-mass energy of the virtual photon-proton system, $\gamma^* p$, is calculated using: $$ W^2_{DA} = m_p^2 + Q^2_{DA} ({1 \over x_{DA}} - 1), $$ where $m_p$ is the proton mass. We use the described methods for calculating the kinematic variables and do not mention them explicitly below except when necessary. \section{\bf Event selection} \subsection{\bf Trigger conditions} The trigger is organised in three levels \cite{ZEUS1}. For DIS events, the first level trigger (FLT) requires at least one of three conditions on energy sums in the EMC calorimeter: the BCAL EMC energy exceeds 3.4~GeV; the RCAL EMC energy (excluding the innermost towers surrounding the beam pipe) exceeds 2.0~GeV; or the RCAL EMC energy (including those towers) exceeds 3.75~GeV. The second level trigger (SLT) rejects proton beam-gas events by using the times measured in the calorimeter cells. The DIS trigger rate of the SLT is about one-tenth of the FLT DIS trigger rate. The loss of DIS events at the SLT is negligible. The third level trigger (TLT) has the full event information available and uses physics-based filters. It applies tighter timing cuts to suppress beam-gas background further and also rejects beam halo muons and cosmic muons. The TLT selects DIS event candidates by calculating: $$\delta~ =~ \sum_i E_i\cdot (1-\cos\theta_i)~~~> ~~~ 20~{\rm GeV} ~~ - ~~2~E_{\gamma},$$ where $E_i$ and $\theta_i$ are the energy and the polar angle of the energy deposits in the central calorimeter, and $E_{\gamma}$ is the energy measured in the photon calorimeter of the luminosity monitor. The summation runs over all energy deposits in the calorimeter cells. For fully contained DIS events $\delta \approx 2 E_e = 53.4$~GeV. Photoproduction events have low values of $\delta$ compared to DIS events because the scattered electron escapes through the beam pipe hole of the rear calorimeter. For events with $Q^2$ \ less than $\sim$~4~GeV$^2$ the calorimeter cannot detect the scattered electron. For events with the scattered electron detected in the calorimeter, the trigger acceptance was essentially independent of the DIS hadronic final state. It was greater than 97\% and independent of $Q^2$ for $Q^2>10$~GeV$^2$. A total of $7 \cdot 10^6$ events passed the TLT and were written to tape during the 1993 running period. \subsection{Offline event selection} The offline selection of DIS events is similar to that described in our earlier publication \cite{F2}. The characteristic signature of a DIS event is the scattered electron detected in the uranium-scintillator calorimeter. The pattern of energy deposition in the calorimeter cells is used to identify an electron candidate. We use the following criteria to select a sample of DIS events: \begin{itemize} \item a scattered electron candidate has to be found with $E^{\prime}_e > 5 $~GeV and an impact point at the RCAL surface outside a square of 32 $\times$ 32~cm$^2$ centred on the beam line.
This requirement ensures that the electromagnetic shower is fully contained within the calorimeter and that its impact point can be reconstructed with sufficient accuracy; \item $ y_{e} < 0.95 $ \ \ to reduce the photoproduction background; \item 35~GeV $ < \delta < 60 $~GeV \ \ to remove photoproduction events and to suppress events with hard initial state radiation; \item --50~cm $< Z < 40$~cm, \ \ where $Z$ is the position of the event vertex reconstructed from the CTD. This requirement rejects beam-gas and cosmic ray events. \end{itemize} From Monte Carlo studies we find an average electron finding efficiency of 95\% in the kinematic range considered; it is above 98\% in most of this range and drops below 70\% for high $y$ events. The purity is better than 96\% for electron energies above 10~GeV and drops to about 60\% at high $y$. A total of 91000 events survive these criteria. The particle multiplicity and the kinematics of particle production depend on $Q^2$ and $x$. We have restricted our analysis to a kinematic range in $Q^2$, $x$ and $y$ in which migration effects are small \cite{F2} and have little influence on the momentum and angular distributions of the \k 's and \lam 's. We chose the following range: \begin{itemize} \item $ 10~{\rm GeV}^2 < Q^2 < 640 $ GeV$^2$; \item $ 0.0003 < x < 0.01$; \item $ y > 0.04.$ \end{itemize} The $Q^2$ and $x$ variables are calculated according to the double angle method and $y$ with the Jacquet-Blondel method. After applying these criteria to the previously selected sample we are left with 27500 events. \section{Monte Carlo simulation} Monte Carlo event simulation is used to determine the acceptance and resolution of the ZEUS detector. The simulation is based on the GEANT 3.13 \cite{GEANT} program and incorporates the knowledge of the detector and the trigger. \subsection*{Simulation of normal DIS events} Neutral current DIS events with $Q^2>4$~GeV$^2$ were generated using the HERACLES 4.4 program \cite{SPIES}, which incorporates first order electroweak corrections. The Monte Carlo program LEPTO 6.1 \cite{INGEL}, interfaced to HERACLES via the program DJANGO 6.0 \cite{DJANGO}, was used to simulate QCD cascades and fragmentation. The parton cascade was modelled in two different ways: \begin{itemize} \item the colour-dipole model including the boson-gluon fusion process (CDM), as implemented in the ARIADNE 4.03 \cite{ARI} program, was used. In this model coherence effects are implicitly included in the formalism of the parton cascade; \item matrix element calculations plus the parton shower option (MEPS), as implemented in LEPTO, were used, where coherence effects in the final state cascade are included by angular ordering of successive parton emissions. \end{itemize} These models use the Lund string fragmentation \cite{LUND} for the hadronisation phase, as implemented in JETSET 7.3 \cite{JET}. For the CDM event sample the MRSD$^{\prime}_{-}$ parton density parametrisation for the proton was used \cite{MRSD}. The GRV \cite{GRV} parametrisation was used for the MEPS data set. These parametrisations describe the HERA measurements of the proton structure function F$_2$ reasonably well \cite{Z93F2,H93F2}. The simulations predict that about 10\% of the \ks 's are produced in charm events and about 5\% originate from sea quarks in the proton. The remaining $\sim 85$\% of the \ks 's are created in the fragmentation chain, at a rate depending on the actual value of the strange-quark suppression factor $P_s/P_u$.
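Returning briefly to the reconstruction methods of section 3: the double angle and Jacquet-Blondel formulas translate directly into code. The sketch below is our illustration only (variable names are ours, not from the ZEUS reconstruction code); it takes energies in GeV and angles in radians.
\begin{verbatim}
import math

M_P = 0.938  # proton mass in GeV

def double_angle(e_e, e_p, theta_e, gamma_h):
    """Q2_DA and x_DA from the electron angle theta_e and the
    hadronic angle gamma_h; e_e, e_p are the beam energies."""
    s_g, s_t = math.sin(gamma_h), math.sin(theta_e)
    s_gt = math.sin(gamma_h + theta_e)
    denom = s_g + s_t - s_gt
    q2 = 4.0 * e_e ** 2 * s_g * (1.0 + math.cos(theta_e)) / denom
    x = (e_e / e_p) * (s_g + s_t + s_gt) / denom
    return q2, x

def w2_da(q2, x, m_p=M_P):
    """Squared centre-of-mass energy of the gamma*-p system."""
    return m_p ** 2 + q2 * (1.0 / x - 1.0)

def y_jb(sum_e_minus_pz, e_e):
    """Jacquet-Blondel y from the summed (E - p_Z) of the
    hadronic final state."""
    return sum_e_minus_pz / (2.0 * e_e)

def y_electron(e_e_prime, theta_e, e_e):
    """y from the scattered electron, used for background rejection."""
    return 1.0 - e_e_prime * (1.0 - math.cos(theta_e)) / (2.0 * e_e)

q2, x = double_angle(26.7, 820.0, theta_e=2.8, gamma_h=1.0)
print(q2, x, math.sqrt(w2_da(q2, x)))  # one illustrative phase-space point
\end{verbatim}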
The parameters of the Monte Carlo models are set to their default values ($P_s/P_u = 0.3$). We have also generated events with $P_s/P_u = 0.2$, as suggested in \cite{E665}, and we have compared the predictions of the simulations with the measured rates. Since the MEPS model and the CDM model behave similarly when the $P_s/P_u$ parameter is reduced, we only show the predictions with $P_s/P_u = 0.2$ for the CDM model. \subsection*{Simulation of large rapidity gap DIS events} Our previous study \cite{JETLRG} shows that diffractive models, specifically POMPYT \cite{POMPYT} and a model by Nikolaev and Zakharov \cite{NZ} as implemented in our Monte Carlo program NZ \cite{ADA}, give adequate descriptions of the properties of the LRG events. We have used POMPYT and NZ event samples for our study of \k\ multiplicities in events with a large rapidity gap. The POMPYT Monte Carlo program uses an implementation of the Ingelman and Schlein model \cite{IS}, which describes high energy diffractive processes. In this model the virtual photon interacts with the constituents of the pomeron emitted by the proton. Factorisation is assumed in the sense that the pomeron emission and the pomeron structure are independent. The current version of POMPYT contains no strange quark constituents for the pomeron. The NZ Monte Carlo model, on the other hand, is non-factorisable. Here the virtual photon fluctuates into a $q\overline {q}$ or a $q\overline {q} g$ state and interacts with a colourless two-gluon system emitted by the proton. The $q \overline q g$ states are fragmented as if they were $q \overline q$ states, and the flavours are generated in 90\% of the cases as ($u, d$) and in 10\% of the cases as $s$. \section{\bf Selection of \ks\ and \lam\ candidates} \ks\ particles are identified in the decay channel $\ks \rightarrow \pi^+ \pi^- $ and \lam\ particles are detected in the channel \lam\ $\rightarrow p \pi^-$. Due to their lifetimes of ${\cal O}(10^{-10}~{\rm s})$ and their typical momenta of about 1~GeV, they have an average decay length of a few centimetres, which results in secondary vertices well separated from the primary event vertex. Tracks are reconstructed using the CTD and the VXD. The track finding algorithm starts with hits in the outermost axial superlayers of the CTD. As the trajectory is followed inwards to the beam axis, more hits on the axial wires and from the VXD are assigned to the track. The resulting circle in the transverse plane is used for the pattern recognition in the stereo superlayers. The momentum is determined in a 5-parameter helix fit. Multiple Coulomb scattering in the beam pipe and in the outer walls of the VXD is taken into account in the evaluation of the covariance matrix. The primary event vertex is determined from a $\chi^2$ fit performed with the tracks, using the perigee parametrisation \cite{SIJIN} and assuming that the tracks come from a common point in space. A track is considered not to be associated with the primary vertex if the $\chi^2$ of the primary vertex fit increases significantly when the track is included in the fit. The systematic effects in the CTD are most serious for low $p_{\rm T}$ tracks and for tracks which traverse the inhomogeneous part of the magnetic field at the ends of the CTD. The reconstructed tracks used in this analysis were therefore required to have a transverse momentum \ptr\ $>$ 0.2~GeV and a polar angle in the range $25^\circ < \theta < 155^\circ$. In terms of pseudorapidity, $\eta = -\ln (\tan (\theta / 2))$, this corresponds to $\left| \eta \right| < 1.5 $.
This is the region where the CTD response and systematics are well understood. \subsection{ \bf \ks\ Identification} To search for \ks 's, we examine pairs of oppositely charged tracks to find secondary vertices. We refer to these tracks as daughter tracks. At least one of the daughter tracks must not be associated with the primary vertex, and track pairs which do not intersect when projected into the transverse plane are rejected. For each remaining track pair, we obtain the momentum of the \ks\ candidate by calculating the momenta of the individual tracks at their intersection point and adding them. \ks\ candidates with transverse momenta below 0.5~GeV or above 4~GeV, or with directions of flight too close to the beam pipe, $\left| \eta \right| > 1.3$, are removed. The background in the mass region of the \ks\ is reduced by applying the following criteria: \begin{itemize} \item $\cos(\alpha_{XY}) > 0.99$, \ where $\alpha_{XY}$ is the angle in the transverse plane between the direction of flight of the \ks\ candidate and its reconstructed momentum direction; \item the separation in $Z$ between the two tracks at their $XY$ intersection point has to satisfy $ \left| \Delta Z \right| < 2.5$~cm. The coordinates of the \ks\ decay vertex are set to the $XY$ coordinates of the intersection point of the track circles, and the $Z$ coordinate is chosen midway between the closest approaches in $Z$ of the two track circles; \item the proper lifetime of the candidates, $ c\tau = (L M c) / p$, has to be less than 10~cm. Here $L$ \ is the decay length, $p\ $ is the momentum and $M$ is the invariant mass of the candidate; \item to reduce the background arising from photon conversions into $e^+e^-$ pairs, track pairs must have an effective mass \mee$>50$~MeV when both tracks are assigned the electron mass (see Fig.~\ref{fig:k0_lam_scatter}); \item to eliminate \lam\ contamination of the \ks\ signal, candidates with a mass $ \mppi < 1.12$~GeV under the $p\pi$ mass hypothesis are rejected (see Fig.~\ref{fig:k0_lam_scatter}). \end{itemize} Using these criteria (summarised in Tab.~\ref{tab:reconstruction_cuts}), we obtain the \ks\ signal shown in Fig.~\ref{fig:k0_lam_signal}a. We fit the $\pi^+ \pi^-$ mass spectrum with a Gaussian and a linear background in the region 0.4 to 0.6~GeV. The fitted mass is 497.4~\pms~0.3~MeV and the standard deviation is 7.8~\pms~0.3~MeV. The mass value and width of the signal are well reproduced by the Monte Carlo simulations. In the signal region, which extends from 474 to 521~MeV, we find a total of 971 \ks\ mesons on top of a background of about 150 $\pi\pi$ combinations. The average lifetime of the \ks\ mesons was determined by fitting the exponential form $\exp(-c\tau / c\tau_{\ks})$ to the acceptance corrected $c\tau$ distribution. Here the $c\tau$ upper limit was relaxed to 20~cm and all other selection criteria were kept at their default values. The result, $c\tau_{\ks } = 2.66 \pm 0.11 \pm 0.06$~cm, is consistent with the world average of 2.676 $\pm$ 0.006~cm given in \cite{PDG}. The systematic uncertainty includes variations of the number of bins used in the fit and of the tightness of the selection criteria. \subsection{ \bf \lam\ Identification} The \lam\ identification closely resembles the \ks\ identification. The daughter track with the higher momentum is taken to be the proton. No daughter track is allowed to be associated with the primary vertex. The \ $c\tau\ $ upper limit is increased to 40~cm in order to account for the longer lifetime of the \lam.
Requiring $M_{\pi\pi} < 0.481 $~GeV removes the background from \ks\ mesons (see Fig.~\ref{fig:k0_lam_scatter}). Since no clear \lam\ signal is seen for candidates with \ptr\ above 3.5~GeV, this value is chosen as the upper limit of the investigated momentum range. Figure~\ref{fig:k0_lam_signal}b shows the \lam\ signal obtained. We fit the $p \pi$~mass spectrum from 1085 to 1185~MeV. The fit yields a mass of 1116.2~\pms~0.4~MeV with a standard deviation of 3.0~\pms~0.5~MeV. The Monte Carlo simulation reproduces well the \lam\ mass position and width. Within the signal region, which runs from 1107 to 1125~MeV, we find 80 \lam\ baryons and 18 background combinations. Of the 80 \lam\ baryons, (60 $\pm$ 5)\% are \lb\ and the remaining (40 $\pm$ 5)\% are \lam. The determination of the average lifetime of the \lam\ from the lifetime distribution gives $c\tau_{\lam } = 7.3 \pm 2.2 \pm 0.5$~cm, consistent with the value of 7.89~cm given in \cite{PDG}. \begin{table}[htb] \begin{center} \begin{tabular}{|l|c|c|} \hline Selection parameters for candidates & \ks & \lam \\ \hline $\cos(\angxy)$ & $>$~0.99 & $>$~0.99 \\ $\left| \delz \right|$ [cm] & $<$~2.5 & $<$~2.5 \\ \ctau\ [cm] & $<$~10 & $<$~40 \\ \mppi\ [GeV] & $>$~1.12 & - \\ \mpipi\ [GeV] & - & $<$~0.481 \\ \mee\ [GeV] & $>$~0.05 & $>$~0.05 \\ \ptr\ of daughter tracks [GeV] & $>$~0.2 & $>$~0.2 \\ $\theta$ of daughter tracks [$^\circ$] & [25,~155] & [25,~155] \\ No. of tracks from primary vertex & $\le$~1 & 0 \\ \hline $\eta$ range & [--1.3,~1.3] & [--1.3,~1.3]\\ \ptr\ [GeV] range & [0.5,~4.0] & [0.5,~3.5] \\ \hline \end{tabular} \caption{ Selection criteria for \ks\ and \lam\ identification.} \label{tab:reconstruction_cuts} \end{center} \end{table} \section{\bf Data correction} This analysis uses two types of selection criteria. The first kind is event-based and selects a reasonably pure sample of DIS events with minimal contamination from background (photoproduction, beam-gas and cosmic-ray events). The second kind is particle-based and selects a sample of \ks\ and \lam\ particles from the event sample defined above. We find a 90\% event selection efficiency, where the efficiency is defined as the ratio of the number of Monte Carlo events passing all the event selection criteria (including those that restrict the kinematic range in $Q^2$, $x$ and $y$) to the total number of generated events in the restricted kinematic region. We have restricted the \k\ and \lam\ kinematic ranges to regions where our systematic uncertainties are small: their pseudorapidity is limited to $-1.3 < \eta < 1.3$ and their transverse momentum is restricted to a \ptr\ between 0.5~GeV and 4.0~GeV (3.5~GeV) for \k 's (\lam 's). We do not extrapolate our results to the full \ptr\ and $\eta$ range in order not to be dominated by model predictions. The models are known to have uncertainties, especially in the low \ptr\ region, and have not yet been compared to particle properties in the proton fragmentation region of HERA events. The \ks\ and \lam\ reconstruction efficiencies were determined as functions of \ptr\ and $\eta$. For each particle type, the efficiency in a given (\ptr, $\eta$) bin was defined as the ratio of the number of reconstructed particles in the bin to the number of generated particles in the bin. The $\eta$ and \ptr\ resolutions are less than 5\% of the bin width chosen for the plots and show no systematic shifts. The DIS Monte Carlo events that passed all the selection criteria were used for these calculations.
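The binned efficiency just described is a simple ratio of reconstructed to generated Monte Carlo counts; a minimal sketch (our notation and container format; the statistical error treatment is omitted):
\begin{verbatim}
import numpy as np

def binned_efficiency(rec, gen, pt_edges, eta_edges):
    """Reconstruction efficiency per (pT, eta) bin, defined as the
    ratio of reconstructed to generated particles in the bin.
    rec and gen are (N, 2) arrays of (pT, eta) values."""
    n_rec, _, _ = np.histogram2d(rec[:, 0], rec[:, 1],
                                 bins=(pt_edges, eta_edges))
    n_gen, _, _ = np.histogram2d(gen[:, 0], gen[:, 1],
                                 bins=(pt_edges, eta_edges))
    # avoid division by zero in empty bins
    return np.where(n_gen > 0, n_rec / np.maximum(n_gen, 1.0), 0.0)
\end{verbatim}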
The \ks\ reconstruction efficiency in the kinematic region considered varies between 20\% at low \ptr\ and 55\% for \ptr\ above 1.5~GeV. The efficiency varies in $ \eta $ from 30\% around $\eta = \pm 1.3$ to 40\% for \ks 's moving transversely to the beam direction ($\eta = 0$). The \lam\ reconstruction efficiency varies between 5\% at low transverse momentum and approaches 20\% at high \ptr. The efficiency varies in $ \eta $ between 10\% and 15\%. The largest losses of true \ks 's and \lam 's result from the collinearity requirement ($\alpha_{XY}$) and from the requirement that daughter tracks are unassociated with the event vertex. Each requirement rejects about 25\% of the candidates if no other selection criterion is applied. The \k\ (\lam ) measurements are corrected for the above efficiencies as well as for the branching ratios of \k\ to \ks\ and of $\ks \rightarrow \pi^+ \pi^-$ ($\lam \rightarrow p \pi $) \cite{PDG}. No corrections were made to the measurements for migration or initial state radiation effects, since the changes predicted by Monte Carlo studies are small. Instead we include these effects in our systematic error analysis (see section 9). The analysis procedure was checked by treating the reconstructed CDM (MEPS) Monte Carlo events as if they were data events and correcting them with the efficiencies obtained from the MEPS (CDM) samples. The corrected Monte Carlo distributions agreed at the 5\% level with the generated distributions. For the comparison of \k\ production in events with and without a large rapidity gap, the two-dimensional (\ptr, $\eta$) efficiencies were determined from the standard DIS Monte Carlo events satisfying the additional requirement $W>140$~GeV (see section 8.2 for details). This corresponds to a restriction to $y > 0.22$. Both the non-LRG (NRG) and LRG data samples were corrected with the same efficiencies. It has been checked that the corrected and generated \ks\ distributions of the LRG Monte Carlo events agree well when the efficiencies of those Monte Carlo sets are used. The ratio of \k\ to charged particle multiplicity, N(\k )/N(tracks), is investigated below. The charged particle multiplicity, N(tracks), is determined for charged particles originating at the primary vertex and produced in the restricted kinematic range $\left| \eta \right| <1.3$ and $\ptr > 0.2$~GeV. The number of reconstructed tracks is corrected, using standard Monte Carlo techniques, for tracking inefficiencies and for decay products of long-lived particles and pair conversions wrongly assigned to the primary vertex by the vertex finding routine. The Monte Carlo corrections for particle-based selection criteria were below 10\%. \section{\bf Results} \subsection{\k\ and \lam \ multiplicity distributions} Figure~\ref{fig:k0_rate_pt_eta_lab} shows the differential \k\ multiplicity as a function of \ptr\ and $\eta$. The inner error bars are statistical errors and the outer ones show statistical and systematic errors added in quadrature. The distributions are normalised by the number of events $N_{ev}$. The predictions of the CDM and the MEPS models are overlaid. The two curves for the CDM sample were generated with different strange-quark suppression factors $P_s/P_u$. The predicted multiplicity for the default strange-quark suppression factor of 0.3 is higher than measured. Using the smaller suppression factor of 0.2 reduces the predicted multiplicity to a value closer to that observed in the data. Both parameter values give a reasonable description of the measured shapes.
For events with $10<Q^2<640$~GeV$^2$, $0.0003<x<0.01$ and $y>0.04$, the number of neutral kaons per event with $0.5<\ptr<4.0$~GeV and $| \eta | < 1.3$ is 0.289~\pms~0.015~\pms~0.014, where the first error is statistical and the second systematic. A function of the form ${C_1} / {p_{\rm T}} \cdot \exp{(C_2 p_{\rm T})}$, with $C_1$ and $C_2$ constants, fits the measured $1/N_{ev} \cdot dN(K^0)/dp^2_T$ distribution well over the \ptr\ range shown in Fig.~\ref{fig:k0_rate_pt_eta_lab}. The slope, $C_2$, of the \ptr\ distribution for the \k 's is $-$1.31~\pms~0.09~\pms~0.06~GeV$^{-1}$. These values, together with the predictions from the Monte Carlo models, are listed in Tab.~\ref{tab:kaon_results}. According to Monte Carlo studies, the fraction of $K^0$'s produced in the restricted \ptr\ and $\eta$ range is 23\% of the total number of $K^0$'s produced in the final state. \begin{table}[htbp] \begin{center} \begin{tabular}{|l|l|l|} \hline &N(\k ) / event & \ptr\ slope [GeV$^{-1}$] \\ \hline Data &0.289 \pms~0.015 \pms~0.014&--1.31 \pms~0.09 \pms~0.06 \\ CDM & & \\ \ \ \ with $P_s/P_u = 0.3$ & 0.342 \pms~0.005 & --1.40 \pms~0.05 \\ \ \ \ with $P_s/P_u = 0.2$ & 0.264 \pms~0.003 & --1.37 \pms~0.04 \\ MEPS & & \\ \ \ \ with $P_s/P_u = 0.3$ & 0.348 \pms~0.006 & --1.36 \pms~0.05 \\ \hline \end{tabular} \caption{ Results of the \k\ measurement for events with $10<Q^2<640$~GeV$^2$, $0.0003<x<0.01$, $y>0.04$ and for a \k\ with $0.5 < \ptr < 4.0$~GeV and $| \eta | <1.3$. The two CDM samples have been generated with different strange-quark suppression factors $P_s/P_u$. \label{tab:kaon_results} } \end{center} \end{table} Figures~\ref{fig:lam_rate_pt_eta_lab}a,~b show the differential \lam\ multiplicity as a function of the transverse momentum and the pseudorapidity. The predictions of the CDM and the MEPS Monte Carlo models are also displayed in Fig.~\ref{fig:lam_rate_pt_eta_lab}. The two CDM curves correspond to samples generated with different strange-quark suppression factors $P_s/P_u$. The number of \lam 's with $0.5 < \ptr < 3.5~$GeV and $| \eta | < 1.3 $ produced per event is $0.038 \pm 0.006 \pm 0.002$ for events with $10<Q^2<640$~GeV$^2$, \ $0.0003<x<0.01$, $y>0.04$. The measured slope of the \ptr\ distribution of the \lam\ is $-1.4~\pm~0.3~\pm~0.1$~GeV$^{-1}$, which, due to the large statistical uncertainty, is still in agreement with the model predictions. These values, together with the predictions of the models, are listed in Tab.~\ref{tab:lam_results}. Monte Carlo studies predict that 16\% (25\%) of the total number of \lam's (${\overline {\lam}}$'s) will be inside this restricted \ptr\ and $\eta$ region. The measured \k \ and \lam \ multiplicities seem to be better described by a model with a strangeness suppression factor of 0.2. \begin{table}[htbp] \begin{center} \begin{tabular}{|l|l|l|} \hline & N(\lam ) / event & \ptr\ slope [GeV$^{-1}$] \\ \hline Data & 0.038 \pms~0.006 \pms~0.002 & --1.4 \pms~0.3 \pms~0.1 \\ CDM & & \\ \ \ \ with $P_s/P_u = 0.3$ & 0.066 \pms~0.003 & --1.04 \pms~0.07 \\ \ \ \ with $P_s/P_u = 0.2$ & 0.050 \pms~0.002 & --1.00 \pms~0.06 \\ MEPS & & \\ \ \ \ with $P_s/P_u = 0.3$ & 0.068 \pms~0.003 & --0.98 \pms~0.06 \\ \hline \end{tabular} \caption{ Results of the \lam\ measurement for events with $10<Q^2<640$~GeV$^2$, $0.0003<x<0.01$, $y>0.04$ and for a \lam\ with $0.5 < \ptr < 3.5$~GeV, $ | \eta | < 1.3 $. The two CDM samples have been generated with different strange-quark suppression factors $P_s/P_u$.
\label{tab:lam_results} } \end{center} \end{table} We have studied the mean \k\ multiplicity and the ratio of \k\ to charged particle multiplicities, N(\k )/N(tracks), as a function of the $Q^2$ of the event. In order to stay in the region of uniform acceptance given by the inner tracking detector geometry and the analysis cuts, we restrict this study to events with $-1.5 < \eta_{\gamma_h} < 0$. Figure~\ref{fig:etagammah} shows the distribution of our event sample in the ($x,Q^2$) plane. The lines of constant $\gamma_h$ delimiting the accepted events and the $Q^2$ bins chosen for this study are shown. In those bins the variables $Q^2$ and $W$ are correlated: as $Q^2$ increases from 10~GeV$^2$ to 200~GeV$^2$, the mean value of $W$ increases from 110~GeV to 160~GeV. Figures~\ref{fig:k0_qq}a,~b show the mean \k\ multiplicity and the ratio N(\k )/N(tracks) in the selected bins, plotted versus the mean $Q^2$ of the bins. The number of charged particles does not include secondary particles from \k\ and \lam\ decays or from weakly decaying particles with a lifetime $> 10^{-8}$~s. A slight increase of the $K^0$ multiplicity with $Q^2$ and a constant behaviour of N(\k )/N(tracks) are observed. We have included the predictions from the CDM and MEPS Monte Carlo samples, which describe the data reasonably well. A study at the Monte Carlo generator level shows that the mean \k\ multiplicity is independent of $Q^2$ at fixed $W$. Since data and Monte Carlo agree over a wide range of $Q^2$, we conclude that the mean \k\ multiplicity of our data also shows no $Q^2$ dependence at fixed $W$ within the accuracy of these data. Furthermore, the ratio of \k\ to charged particle multiplicities is observed to be constant, and thus within our experimental errors this ratio does not depend on the kinematic variables in the region under study. We therefore attribute the observed increase of the \k\ multiplicity with $Q^2$ to the increase of the corresponding $W$ values. \subsection{\k\ production in events with a large rapidity gap } The DIS data sample is a mixture of non-diffractive and diffractive events. We have searched for differences in \k\ production between these event types. Following our earlier publications \cite{JETLRG,LUMI}, we separate a non-rapidity gap event sample (NRG) and an LRG event sample using \etamx . \etamx\ is the largest pseudorapidity of any calorimeter cluster in an event, where a cluster is defined as an isolated set of adjacent cells with summed energy above 400~MeV. The NRG sample is selected by \etamx\ $>$ 1.5 and is dominated by non-diffractive events. The requirement \etamx\ $<$ 1.5 selects an LRG sample which is dominated by diffractive events. The standard non-diffractive DIS models (CDM, MEPS) give a reasonable description of the \etamx\ distribution for values above 1.5 but cannot account for the excess of events at lower values (see Fig.~\ref{fig:etamx}a). Values of \etamx\ $>$ 4.3, which are outside the calorimeter acceptance, occur when energy is deposited in many contiguous cells around the beam pipe in the proton direction. An admixture of about 10\% -- 20\% of diffractive events, generated with the NZ or POMPYT Monte Carlo programs, added to the non-diffractive Monte Carlo sample gives a reasonable description of the \etamx\ distribution. The background of non-diffractive DIS events in the LRG sample is estimated to be 7\% \cite{JETLRG}. Less than 10\% of the NRG DIS event sample are diffractive events.
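The LRG/NRG separation just described reduces to a single cut on \etamx ; a minimal sketch (the cluster finding itself is taken as given, and the container format is ours):
\begin{verbatim}
def eta_max(clusters, e_min=0.4):
    """Largest pseudorapidity of any calorimeter cluster with
    summed energy above 400 MeV; clusters is a list of dicts
    with keys 'eta' and 'energy' (GeV)."""
    etas = [c["eta"] for c in clusters if c["energy"] > e_min]
    return max(etas) if etas else float("-inf")

def is_lrg(clusters):
    """LRG sample: eta_max < 1.5; NRG sample: eta_max > 1.5."""
    return eta_max(clusters) < 1.5
\end{verbatim}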
Figure~\ref{fig:etamx}b shows the \etamx\ distribution for those events which have a \ks\ candidate in the signal band. The \etamx\ distributions of events from one of the non-diffractive (CDM) and one of the diffractive (NZ) Monte Carlo samples are also shown. The excess of \ks\ candidates over the predictions of the CDM model for \etamx\ $<1.5$ represents the \ks\ production in diffractive events. As discussed elsewhere \cite{JETLRG,LUMI}, the acceptances for diffractive events selected by the LRG requirement (\etamx) and for NRG events are flat with respect to $W$ and $Q^2$ for $W > 140$~GeV. We have therefore restricted our comparison to events with $W > 140$~GeV. After this additional requirement, 11000 NRG events and 940 LRG events remain. In the LRG sample we find 18 \ks\ candidates in the signal region over a background of 2 candidates. Figure~\ref{fig:k0_rate_pt_eta_lrg} shows the differential \k\ multiplicity as a function of the transverse momentum and pseudorapidity for NRG and for LRG DIS events separately. The results in this subsection are not corrected for either the \etamx\ or the $W$ selection criteria. The predictions of the standard DIS Monte Carlo programs (CDM and MEPS) and of the diffractive DIS Monte Carlo programs (POMPYT and NZ) are shown. The \ptr\ distributions have similar shapes in both data subsamples, although the multiplicity is lower for the LRG DIS events. Within the limited statistics of the data, both diffractive models give a reasonable description of the \k\ multiplicities in LRG events. Since the invariant mass of the measured hadronic system in LRG events is smaller than in NRG events, a reduced \k\ rate is expected in the diffractive events. We have compared the \k\ multiplicity with the charged particle multiplicity for both subsamples. Table~\ref{tab:charged} lists the \k\ multiplicity and the ratio of the \k\ to charged particle multiplicity for NRG and LRG DIS events and for the Monte Carlo samples. If one subtracts the diffractive background which, as seen from Fig.~\ref{fig:etamx}, is still present in the NRG DIS sample, the quoted \k\ multiplicity in the non-diffractive DIS sample increases by 5\%. The ratios of \k 's to charged tracks for the two data samples are consistent with each other. Thus, within the limited statistics, these results give no indication of any additional strange quark enhancement or suppression in the production mechanism of the LRG final state. \begin{table}[hptb] \begin{center} \begin{tabular}{|c|l|l|l|} \hline & Data type & N(\k ) /event & N(\k ) / N(tracks) \\ \hline \etamx\ $>$ 1.5 & ZEUS data & $0.344\pm 0.023 \pm 0.025$ & $ 0.077 \pm 0.006 \pm 0.008$\\ NRG & CDM & & \\ & \ \ \ with $P_s/P_u = 0.3$ & $0.396\pm 0.009$ & $ 0.095 \pm 0.003 $ \\ & \ \ \ with $P_s/P_u = 0.2$ & $0.296\pm 0.011$ & $ 0.071 \pm 0.003 $ \\ & MEPS & & \\ & \ \ \ with $P_s/P_u = 0.3$ & $0.375\pm 0.009$ & $ 0.096 \pm 0.003 $ \\ \hline \etamx\ $<$1.5 & ZEUS data & $0.156\pm 0.047 \pm 0.007$ & $ 0.071 \pm 0.021 \pm 0.007$\\ LRG & POMPYT & $0.106\pm 0.010$ & $ 0.058 \pm 0.006 $ \\ & NZ & $0.173\pm 0.017$ & $ 0.073 \pm 0.007 $ \\ \hline \end{tabular} \caption{ The \k\ multiplicity and the ratio of the \k\ to charged particle multiplicities for NRG and LRG DIS events. The predictions of five Monte Carlo samples are listed. The diffractive samples are generated with a strangeness suppression factor $P_s/P_u = 0.3$.
\label{tab:charged} } \end{center} \end{table} \section{Study of systematic errors} We have investigated several sources of systematic error for our measurements of the \k\ and \lam\ production rates. 1) The sensitivity of the results to the track and primary vertex reconstruction methods was determined by repeating the analysis with a modified version of the reconstruction package. The differences seen are at the 5\% level for the multiplicity distributions. No systematic effect is apparent. The ratio of \k\ to charged particle multiplicity is similarly unaffected. 2) The sensitivity of the results to the choice of the \ks\ and \lam\ selection criteria was investigated by varying them by $\pm 25$\% of their nominal values. The uncertainty in the results from the DIS event selection was determined by repeating the analysis with different electron finding algorithms and by varying the event selection criteria within reasonable ranges. The systematic error from these sources is about 5\%, except for the highest $\eta$ and \ptr\ points in the multiplicity distributions and for the results of the LRG event analysis, where the error is up to 15\%. The mean particle multiplicities per event show lower systematic errors (3\%) than the bin-by-bin errors in the figures. 3) Uncertainties from events rejected by the DIS event selection criteria and from event migration effects were determined by detailed Monte Carlo studies of the \k\ and \lam\ production in the events migrating into and out of the selected $Q^2,x,y$ range. The \k\ and \lam\ rate of events migrating into this range is comparable to that of events migrating out. The uncertainty from these sources is at the 5\% level. The additional kinematic restriction $W>140$~GeV for the LRG comparison introduces a higher uncertainty (7\%) in the results. The mean particle multiplicities show a 2\% uncertainty for NRG DIS events and 5\% for LRG DIS events. 4) We determined a photoproduction contamination in the event sample of 2.5\%. The event sample kinematically restricted to $W > 140$~GeV contains a higher background of 3.5\%, as shown in \cite{F2,Z93F2}. We have estimated how these photoproduction events affect our analysis by studying the stability of the results when varying the scattered electron energy and the $\delta$ selection criterion. We quote an uncertainty from this source of 3\%. The influence on the results of initial state radiative events not removed by the $\delta$ requirement is below 3\%, except for the lowest $\eta$ point in Fig.~\ref{fig:k0_rate_pt_eta_lab}b, where it is 15\%. 5) The \k\ multiplicity versus $Q^2$ is rather sensitive to the background below the $M_{\pi\pi}$ signal. The combinatorial background increases with $Q^2$ due to the observed higher particle multiplicity in events with higher $Q^2$. Migration effects are also non-negligible. Both effects together may induce variations of the measured values between --11\% and +3\%, depending on the $Q^2$ bin and on the Monte Carlo simulations used to determine them. We assign an overall systematic error of 10\% to our results from these sources. 6) The results for the ratio of the \k\ multiplicity to the charged particle multiplicity are affected by uncertainties similar to those for the \k\ multiplicity alone. The variations resulting from different procedures for calculating the mean charged track multiplicity or from using different Monte Carlo samples for the correction are within a few percent.
The relative changes of the ratios of \k\ to charged particle multiplicities for the NRG and the LRG data samples are below 5\% when different \ptr\ ranges for the charged particles are considered. 7) The strange quark density of the proton does not affect our acceptance corrections. \section{Summary and discussion} We have measured the \k\ and \lam\ multiplicities for deep inelastic $ep$ scattering events at $\sqrt{s}=296$~GeV with $10~\GeV^2<Q^2<640~\GeV^2$, \ \ $0.0003<x<0.01$ and $y > 0.04$ \ in the ZEUS experiment at HERA. We have restricted the analysis to the \k\ and \lam\ kinematic region \ptr\ $> 0.5 $~GeV and $\left| \eta \right| < 1.3$. About 23\% (20\%) of the \k 's (\lam 's) are predicted to be produced within this kinematic range. In this kinematic range the mean number of \k 's (\lam 's) per event is 0.289~\pms~0.015~\pms~0.014 \ (0.038 \pms~0.006 \pms~0.002). The results on particle production from lower energy $e^+e^-$ data, which are incorporated in the current DIS Monte Carlo simulation programs (i.e., a strange quark suppression factor $P_s/P_u = 0.3$), predict higher \k\ and \lam\ multiplicities than those observed in the data. Using a smaller value of 0.2 reduces the predicted multiplicities and gives better agreement with the data, especially for \lam\ production. Nevertheless, with $P_s/P_u=0.2$ the prediction for the \lam\ multiplicity is still higher, while the prediction for the \k\ multiplicity is lower, than the measured values. The Monte Carlo models allow an adjustment of the production rates of the different particle types by changing other parameters, like the ratio of diquarks to single quarks created from the sea, $P_{qq}/P_q$, as well as the suppression factor for strange diquarks, $(P_{us}/P_{ud})/(P_s/P_d)$. Our results indicate the need for tuning these parameters, which requires a detailed measurement of the ratios of pions, kaons, lambdas and protons over a larger kinematic range. This is beyond the scope of this paper. The shapes of the distributions for \k 's and \lam 's are described by both models and do not depend on the chosen value of $P_s/P_u$. The mean $K^0$ multiplicity of our data shows no indication of a $Q^2$ dependence at fixed $W$. Also, the ratio of \k 's to charged particles is observed to be independent of the kinematic variables in the range studied. We observe \k\ production in DIS events with a large rapidity gap with respect to the proton direction. The \k\ multiplicity in LRG events is approximately a factor of two lower than in non-diffractive DIS events. The ratio of \k 's to charged particles is found to be the same in both samples. Thus we observe no additional enhancement or suppression of neutral kaon production in events with a large rapidity gap compared to events without a gap. \section*{Acknowledgements} The strong support and encouragement of the DESY Directorate have been invaluable. The experiment was made possible by the inventiveness and diligent efforts of the HERA machine group, who continued to run HERA most efficiently during 1993. The design, construction and installation of the ZEUS detector have been made possible by the ingenuity and dedicated efforts of many people from the home institutes, who are not listed here. Their contributions are acknowledged with great appreciation. We also gratefully acknowledge the support of the DESY computing and network services.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Starting from Gromov's uniqueness theorem on fillings of $(S^3,\xi_{\rm std})$ \cite{MR809718} and the Eliashberg-Floer-McDuff theorem \cite{MR1091622} on fillings of $(S^{2n-1},\xi_{\rm std})$, understanding the uniqueness of exact fillings of contact manifolds has been a fundamental and influential question. In dimension $3$, the intersection theory of holomorphic curves can be used to construct foliations of symplectic fillings. A landmark result is Wendl's theorem on planar contact $3$-folds \cite{MR2605865}, which translates the classification of symplectic fillings into factorizations in mapping class groups. In higher dimensions, only ``homological'' foliations by holomorphic curves can be obtained, just as the Eliashberg-Floer-McDuff theorem compares to Gromov's theorem. Based on ``homological'' foliations, various generalizations of the Eliashberg-Floer-McDuff theorem were obtained \cite{MR4031531,BGM,geiges2021diffeomorphism,MR2874896}. On the other hand, we studied the filling question from the perspective of Floer theories and obtained various uniqueness results \cite{MR4182808,filling,quotient,product,zhou2019symplectic,ring}. In this note, we show the uniqueness of the integral intersection form for exact fillings of some flexibly fillable contact manifolds, which yields uniqueness of diffeomorphism types in some cases. \begin{theorem}\label{thm:main} Let $(Y^{2n-1},\xi)$ be a flexibly fillable contact manifold with rational first Chern class $c^{{\mathbb Q}}_1(\xi)=0$. Then the integral intersection form of any exact filling $W$ with $c^{\mathbb Q}_1(W)=0$ and $H_1(Y;{\mathbb Q})\to H_1(W;{\mathbb Q})$ injective is isomorphic to the integral intersection form of the flexible filling $W_0$. In view of \Cref{prop:SS} below, when $n$ is even or $H^n(W_0;{\mathbb Q})\to H^n(Y;{\mathbb Q})$ is injective, the topological assumptions can be removed. \end{theorem} The proof is based on an ad hoc argument: we view the intersection number as the count of one boundary component of a $1$-dimensional moduli space, and show by neck-stretching that the count of the other boundary components is independent of the filling. It is important here that $Y$ is flexibly fillable rather than merely asymptotically dynamically convex (ADC). A more systematic approach was independently developed by Eliashberg, Ganatra and Lazarev \cite{EGL}, using the secondary coproduct defined by Ekholm and Oancea \cite{ekholm2017symplectic}. In particular, their results apply to general ADC contact manifolds. A variant of the topological condition in \Cref{thm:main} was called the topologically simple condition in \cite{MR4182808,filling}. Such a condition is used to ensure that the grading of the symplectic cohomology of the filling is consistent with the grading on the contact boundary, so that we can carry out dimension computations after neck-stretching. The results of \cite{MR4182808,filling} hold under the topological condition in \Cref{thm:main} if one considers the symplectic cohomology generated by orbits that are trivial in $H_1(W;{\mathbb Q})$ instead of only those contractible in $W$. Following a somewhat different perspective, as in \cite{quotient}, we can drop the topological condition in some cases. \begin{proposition}[\Cref{cor:SS}]\label{prop:SS} Let $(Y^{2n-1},\xi)$ be a flexibly fillable contact manifold with $c^{{\mathbb Q}}_1(\xi)=0$, and suppose $n$ is even or $H^n(W_0;{\mathbb Q})\to H^n(Y;{\mathbb Q})$ is injective for the flexible filling $W_0$.
Then for any exact filling $W$ of $Y$, the restriction map $H^*(W;{\mathbb Z})\to H^*(Y;{\mathbb Z})$ is isomorphic to $H^*(W_0;{\mathbb Z})\to H^*(Y;{\mathbb Z})$. \end{proposition} We have the following applications of \Cref{thm:main}. \begin{corollary}\label{cor:diff} Let $Y$ be the boundary of a flexible Weinstein domain $W^{2n}$ of one of the following types. Then any exact filling of $Y$ is diffeomorphic to $W^{2n}$. \begin{enumerate} \item $n=3$ or $7$, and $W^{2n}$ is the flexible version of $T^*S^n$. \item $n \equiv 6 \mod 8$, and $W^{2n}$ has only flexible $n$-handles except for the $0$-handle, such that the intersection form is even. \end{enumerate} \end{corollary} \begin{proof} In the first case, $H^*(W;{\mathbb Q})\to H^*(\partial W;{\mathbb Q})$ is injective. Then by \Cref{prop:SS} and \cite[Theorem E]{filling}, any exact filling $V$ is simply connected. Since $H^*(V;{\mathbb Z})=H^*(W;{\mathbb Z})$ by \cite[Corollary B]{filling} is freely generated and supported in degrees $0$ and $n$, and $\dim V\ge 6$, the filling $V$ has a handle decomposition with one $0$-handle and several $n$-handles. By \cite[Corollary 4.6 and the remark afterwards]{smale1962structure}, the diffeomorphism type of such a manifold is uniquely determined by the intersection form under the conditions listed. Therefore $V$ is diffeomorphic to $W$ by Theorem \ref{thm:main}. \end{proof} It is worth noting that the uniqueness of the diffeomorphism type above follows from a completely different topological argument compared to the results in \cite{MR4031531,BGM,geiges2021diffeomorphism,MR1091622,product}, where the uniqueness hinges on the injectivity of $H^*(W;{\mathbb Z})\to H^*(Y;{\mathbb Z})$ and the $h$-cobordism theorem; such injectivity rarely holds in \Cref{cor:diff}. Let $V$ be an exact domain. We call a closed hypersurface $Y\subset V$ an exact contact hypersurface if there exists a Liouville vector field on $V$ that is transverse to $Y$, and $Y$ bounds a compact domain (which is automatic if $H_{2n-1}(V;{\mathbb Z})=0$). Obstructions to exact contact hypersurfaces were studied by Cieliebak and Frauenfelder \cite{MR2461235} using Rabinowitz-Floer homology. Using \Cref{thm:main}, we obtain the following obstructions to exact contact hypersurfaces, in a setting where the obstructions from Rabinowitz-Floer homology typically vanish. \begin{corollary} Let $V$ be an exact domain with a trivial intersection form and $c^{{\mathbb Q}}_1(V)=0$, and let $W$ be a simply connected flexible Weinstein domain with a non-trivial intersection form. Then $\partial W$ cannot be embedded into $V$ as an exact contact hypersurface. \end{corollary} \begin{proof} If $\partial W$ embeds as an exact contact hypersurface, then it bounds an exact subdomain $U\subset V$ with $c^{{\mathbb Q}}_1(U)=0$. By Theorem \ref{thm:main}, the intersection form of $U$ is non-trivial, which contradicts the triviality of the intersection form on $V$. \end{proof} \begin{remark} In general, we expect that the requirement that the hypersurface be exact can be dropped. The bounded domain $U$ may then fail to be exact, but it is symplectically aspherical if we assume $V$ is symplectically aspherical. However, in this case the independence argument in \cite{filling} does not work in general, due to a technical issue of uncontrolled shrinking in \cite[\S 8]{filling}.
\end{remark} \section{Proof of \Cref{thm:main}} The idea of the proof is as follows. Take two closed cycles $A,B$ representing classes in $H_n(W;{\mathbb Z})$; under the transversality assumption, the intersection number is $A\cdot B = \# (A\pitchfork B)$, where the latter is a finite set of oriented points. By \cite{filling}, we have $SH^*(W;{\mathbb Z})=0$ for any topologically simple exact filling. Geometrically, this means that, counted algebraically, there is one curve passing through a fixed point. If we choose the point constraints to be $A\pitchfork B$, this allows us to view $A\cdot B$ as the count of one boundary component of a $1$-dimensional moduli space. We can then establish the independence result by looking at the other boundary components and applying neck-stretching. Let $(Y,\xi)$ be a contact manifold such that the rational first Chern class $c_1^{{\mathbb Q}}(\xi)\in H^2(Y;{\mathbb Q})$ is zero. After choosing a trivialization of $\det_{{\mathbb C}} \oplus^N\xi$ for some $N\in {\mathbb N}_+$, we can assign a rational Conley-Zehnder index to each non-degenerate Reeb orbit \cite{gironella2021exact,Reeb}. For orbits with torsion homology classes, the Conley-Zehnder index is independent of $N$ and of the trivialization. We say $W$ is a \emph{topologically simple} exact filling of $Y$ if $H_1(Y;{\mathbb Q})\to H_1(W;{\mathbb Q})$ is injective and $c_1^{{\mathbb Q}}(W)=0$. In this case, the symplectic cohomology of $W$ can be graded by ${\mathbb Q}$, and this grading is consistent with the Conley-Zehnder index on the boundary if the trivialization of $\det_{{\mathbb C}} \oplus^N\xi$ extends to a trivialization of $\det_{{\mathbb C}} \oplus^N TW$. In particular, if we only consider orbits whose homology classes are torsion in $W$, then this class of orbits, as well as their ${\mathbb Q}$-gradings, is independent of $W$. \begin{proposition}\label{prop:reeb} Let $(Y^{2n-1},\xi)$ be a flexibly fillable contact manifold with $c^{{\mathbb Q}}_1(\xi)=0$ and a fixed contact form $\alpha_0$. Then for any $D \gg 0$, there exists a contact form $\alpha<\alpha_0$ such that all Reeb orbits of $\alpha$ with period smaller than $D$ are non-degenerate and have Conley-Zehnder index $\ge 1$ (for any fixed trivialization of $\det_{{\mathbb C}} \oplus^N\xi$), and those with Conley-Zehnder index $1$ are simple. \end{proposition} \begin{proof} This follows from the proof of \cite[Theorems 3.15, 3.17, 3.18]{lazarev2016contact}. For $D \gg 0$ and a suitable $\alpha<\alpha_0$, the Reeb orbits of period smaller than $D$ fall into the following two classes: (1) each subcritical handle of index $k$ creates a simple contractible Reeb orbit with Conley-Zehnder index $n+1-k$, and all multiple covers of it have higher Conley-Zehnder indices; (2) every loose handle attachment creates (several) contractible simple Reeb orbits of Conley-Zehnder index $1$ and many other orbits with Conley-Zehnder index strictly greater than $1$. \end{proof} Let $\gamma_1,\ldots,\gamma_N$ denote all the Reeb orbits of period smaller than $D$ with Conley-Zehnder index $1$, ordered by increasing period. In the definition of the filtered positive symplectic cohomology $SH^{*,<D}_+(W)$ with slope $D$, we will use the following special Hamiltonian $H$; here $W$ is a topologically simple exact filling of $(Y,\alpha)$. \begin{enumerate} \item $H=0$ on $W$ and $H'(r)=D$ for $r>1+w$, for some $w>0$.
\item $H$ on $Y\times [1,1+w]$ is a small perturbation of $H=f(r)$ with $f''(r)>0$, such that the periodic orbits of $X_H$ are non-degenerate and in two-to-one correspondence with the Reeb orbits of period smaller than $D$. \end{enumerate} More precisely, every non-degenerate Reeb orbit $\gamma$ splits into two Hamiltonian orbits $\hat{\gamma}$ and $\check{\gamma}$ with $\mu_{\rm CZ}(\hat{\gamma}) = \mu_{\rm CZ}(\gamma)+1$ and $\mu_{\rm CZ}(\check{\gamma})=\mu_{\rm CZ}(\gamma)$. Then for degree reasons, $[\check{\gamma}_1], \ldots, [\check{\gamma}_N]$ represent classes in $SH_+^{n-1,<D}(W;{\mathbb Z})$. Our grading follows the cohomological convention and is given by $n-\mu_{\rm CZ}$. Without loss of generality, we can assume that for every $D > 0$ there exists a contact form $\alpha_D<\frac{1}{2}\alpha_0$ such that Proposition \ref{prop:reeb} holds. Let $M_D$ denote the cobordism from $\alpha_D$ to $\alpha_0$ in the symplectization of $(Y,\alpha_0)$. Then we have a transfer map $$SH_+^{*,<\frac{D}{2}}(W\cup M_D;{\mathbb Z}) \to SH_+^{*,<D}(W;{\mathbb Z}),$$ which is compatible with the connecting map to $H^*(W;{\mathbb Z})$. We will stretch along the contact boundary of $W$; the following propositions hold if we stretch the almost complex structure sufficiently. \begin{proposition}\label{prop:n} For $D\gg 0$ and a sufficiently stretched almost complex structure, the map $$\delta:\langle\, \check{\gamma}_1, \ldots, \check{\gamma}_N \,\rangle \to SH^{n-1, <D}_+(W;{\mathbb Z}) \to H^n(W;{\mathbb Z})$$ is surjective, and its kernel is independent of the topologically simple exact filling $W$. \end{proposition} \begin{proof} By \cite{filling}, $SH_+^{n-1}(W\cup M_D;{\mathbb Z}) \to H^n(W\cup M_D;{\mathbb Z})=H^n(W;{\mathbb Z})$ is an isomorphism for any filling with the listed topological conditions. In particular, for $D$ big enough, the map $SH_+^{n-1,<\frac{D}{2}}(W\cup M_D;{\mathbb Z}) \to H^n(W;{\mathbb Z})$ is a surjection ($W$ varies with $D$, while $W\cup M_D$ does not). We can assume that the threshold on $D$ for this to hold works for the flexible filling $W_0$ as well. By the Viterbo transfer, $ SH^{n-1, <D}_+(W;{\mathbb Z}) \to H^n(W;{\mathbb Z})$ is then surjective. Moreover, $SH_+^{n-1,<D}(W;{\mathbb Z})$ must be spanned by the $[\check{\gamma}_i]$ for degree reasons. The remaining part of the proposition follows from the fact that, for a sufficiently stretched almost complex structure, each $\check{\gamma}_i$ is matched in the identification $SH_+^{n-1,<D}(W;{\mathbb Z})\to SH^{n-1}_+(W;{\mathbb Z})\simeq H^n(W;{\mathbb Z})$ with that of the flexible filling \cite{lazarev2016contact,filling}. \end{proof} In the following, we will use $\alpha,\beta,\gamma$ to stand for Reeb orbits and $\hat{\alpha},\check{\alpha},\overline{\alpha}$ to stand for Hamiltonian orbits, where $\overline{\alpha}$ means that we do not specify whether it is a check or a hat orbit. \begin{proposition}\label{prop:id} For $D\gg 0$ and a sufficiently stretched almost complex structure, there is a linear combination of Hamiltonian orbits $\sum a_i\overline{\alpha}_i$ of Conley-Zehnder index $n+1$ such that $[\sum a_i\overline{\alpha}_i] \in SH^{-1,<D}_+(W;{\mathbb Z})$ is sent to $1$ in $H^0(W;{\mathbb Z})$, and this class is independent of the filling. \end{proposition} \begin{proof} This element represents the class hitting $1$ under the map $SH_+^{-1}(W;{\mathbb Z}) \to H^0(W;{\mathbb Z}) \to H^0(Y;{\mathbb Z})$. The proposition follows from the fact that this map is independent of the filling by \cite[Corollary B]{filling}.
\end{proof} In the following, for simplicity of notation, we will assume that $\sum a_i \overline{\alpha}_i$ is represented by a single Hamiltonian orbit $\overline{\alpha}$. The argument below works for linear combinations as long as they represent a closed class; the key point is that the effect of such a class does not depend on the filling. Fix two closed chains $A,B$ representing classes in $H_n(W;{\mathbb Z})$ with transverse intersection, and a periodic orbit $\overline{\alpha}$. We consider the compactification of the following moduli space: $${\mathcal M}_{\overline{\alpha},A,B}:=\left\{ u:{\mathbb C} \to \widehat{W} \left| ({\rm d} u - v)^{0,1}=0, u(\infty) = \overline{\alpha}, u(0) \in A, u(1) \in B\right.\right\}$$ where $v = X_{H} \otimes \beta$, with $H$ a Hamiltonian as before and $\beta$ a one-form such that $\beta = {\rm d} t$ near the ends and ${\rm d} \beta\le 0$. Since $H=0$ near $A$ and $B$, removal of singularities implies that $u$ can be viewed as a map on ${\mathbb C}$. Similarly, for another orbit $\overline{\beta}$, we can define ${\mathcal M}_{\overline{\alpha},\overline{\beta}, B}$ and ${\mathcal M}_{\overline{\alpha},A, \overline{\beta}}$. We also define ${\mathcal M}_{\overline{\alpha}, A}$ to be the compactification of the following moduli space: $$\left\{u:{\mathbb C} \to \widehat{W}\left| ({\rm d} u- X_{H}{\rm d} t)^{0,1}=0, u(\infty) = \overline{\alpha}, u(0)\in A\right.\right\}/\mathbb{R}$$ \begin{proposition}\label{prop:int} Let $\overline{\alpha}$ be the class in Proposition \ref{prop:id}. Then the intersection number is $$A\cdot B = \sum_{i=1}^N\left(\#\left({\mathcal M}_{\overline{\alpha},\check{\gamma}_i,B}\times {\mathcal M}_{\check{\gamma}_i,A}\right)+\# \left({\mathcal M}_{\overline{\alpha}, A,\check{\gamma}_i}\times {\mathcal M}_{\check{\gamma}_i,B}\right) \right).$$ \end{proposition} \begin{proof} This follows from the boundary configuration of ${\mathcal M}_{\overline{\alpha},A,B}$, which has dimension $1$. We have $\dim {\mathcal M}_{\overline{\beta}, A} = \mu_{\rm CZ}(\overline{\beta})-1$, and all periodic orbits have Conley-Zehnder index greater than $1$ unless they are one of the $\check{\gamma}_i$; hence, for degree reasons, the Floer-type breakings near $0,1$ give rise to the right hand side. Since $\overline{\alpha}$ is closed in positive symplectic cohomology, the Floer-type breakings near $\infty$ at a non-constant orbit sum up to $0$. Next consider the Floer-type breakings near $\infty$ at an interior point of $W$. By the integrated maximum principle, the broken-off curve is contained in $W$, where the equation is the Cauchy-Riemann equation; by the exactness of $W$, such a curve must be constant. Therefore this type of degeneration can be identified with curves $u:{\mathbb C} \to \widehat{W}$ solving $({\rm d} u-X_{H}{\rm d} t)^{0,1}=0$ with $u(\infty)=\overline{\alpha}$ and $u(0)\in A\cap B$, modulo the $\mathbb{R}$-translation. Since $\overline{\alpha}$ is mapped to $1 \in H^0(W;{\mathbb Z})$, the count of curves $u$ with a point constraint at $u(0)$ and $u(\infty)=\overline{\alpha}$, modulo $\mathbb{R}$, is $1$ when transversality holds. Therefore this type of degeneration contributes exactly $A\cdot B$. \end{proof} \begin{proposition}\label{prop:pair} For a sufficiently stretched almost complex structure, we have $$ \#{\mathcal M}_{\check{\gamma}_j,B}=\langle\, \delta([\check{\gamma}_j]), B \,\rangle,$$ where the pairing on the right is the natural map $H^n(W;{\mathbb Z})\otimes H_n(W;{\mathbb Z}) \to {\mathbb Z}$.
\end{proposition} \begin{proof} Given a Morse function $f$ on $W$ such that $\partial_r f >0$ on $\partial W$, we can represent a cochain complex computing $H^*(W;{\mathbb Z})$ by the critical points of $f$; the pairing of a critical point $x$ with a closed cycle $B$ is then the intersection number of the ascending manifold of $x$ with $B$. Following \cite{filling}, the map $\delta$ can be represented by counting the moduli space of pairs $(u,l)$, where $u$ solves the Floer equation with $u(0)\in W$ and $l$ is a gradient trajectory from $u(0)$ to a critical point $x$. Therefore $\langle\, \delta([\check{\gamma}_j]), B \,\rangle$ counts the moduli space of triples $(u,l_1,l_2)$, with $l_1,l_2$ two half-infinite gradient trajectories connected at an index $n$ critical point of $f$. Shrinking the time of the gradient flow lines from $\infty$ to $0$ as in \cite[\S 3.1]{filling}, and noting that $[\check{\gamma}_j]$ is closed in positive symplectic cohomology and $B$ is closed, the count equals the length-$0$ count, which is $\#{\mathcal M}_{\check{\gamma}_j,B}$. \end{proof} To compute ${\mathcal M}_{\overline{\alpha},\check{\gamma}_j,B}$, we perform a full neck-stretching on the boundary. Let $\widehat{Y}$ denote the symplectization $Y\times (0,\infty)$, equipped with a Hamiltonian $H$ such that $H=0$ on $Y\times (0,1)$ and $H$ agrees with the Hamiltonian on $\widehat{W}$ afterwards. Then we define ${\mathcal N}_{\overline{\alpha},\check{\gamma}_i,\gamma_j}$ to be the compactification of the following moduli space: $$\left\{ u:\mathbb{C}\mathbb{P}^1\backslash \{\infty,0,1\} \to \widehat{Y}\left| ({\rm d} u-X_H\otimes \beta)^{0,1}=0, u(\infty) = \overline{\alpha}, u(0)=\check{\gamma}_i, u(1) = (0,\gamma_j)\right.\right\}$$ i.e.\ $u(\infty),u(0)$ are asymptotic to Hamiltonian orbits and $u(1)$ is asymptotic to a Reeb orbit at a negative puncture. We define ${\mathcal N}_{\gamma_j,B}$ to be the compactification of the following moduli space: $$\left\{u:{\mathbb C} \to \widehat{W}\left| ({\rm d} u)^{0,1}=0, u(\infty) = (+\infty, \gamma_j), u(0)\in B\right.\right\}/ \mathbb{R}\times S^1$$ \begin{proposition}\label{prop:SFT1} For a sufficiently stretched almost complex structure, we have $$\# {\mathcal M}_{\overline{\alpha},\check{\gamma}_j,B} = \sum_{k=1}^N \#\left({\mathcal N}_{\overline{\alpha},\check{\gamma}_j, \gamma_k} \times {\mathcal N}_{\gamma_k, B}\right).$$ \end{proposition} \begin{proof} We perform a full neck-stretching along the boundary; then any curve in ${\mathcal M}_{\overline{\alpha},\check{\gamma}_j, B}$ converges to an SFT-building-type curve, since $B \subset W$. The top level curve is necessarily connected by \cite[Proposition 9.17]{cieliebak2018symplectic}, with one fixed negative puncture at $1$, which connects to the component that eventually intersects $B$. There might be other freely moving punctures, which are eventually closed off by holomorphic planes. Let $\gamma$ denote the Reeb orbit at the puncture $1$, and let $\beta_i$ be the Reeb orbits at the free punctures. Then the virtual dimension of this moduli space is $\mu_{\rm CZ}(\overline{\alpha})-\mu_{\rm CZ}(\check{\gamma}_j)-(\mu_{\rm CZ}(\gamma)+n-1)-\sum_i (\mu_{\rm CZ}(\beta_i)+n-3)$. Since we chose $D\gg 0$ in the first place, all Reeb orbits that can potentially appear have $\mu_{\rm CZ} \ge 1$, and we can assume transversality for the top level curve. Therefore the only possibility is that $\gamma$ is one of the $\gamma_i$ and that there are no $\beta_i$, in which case the expected dimension is $0$.
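To spell out the index count: by Proposition \ref{prop:id} we have $\mu_{\rm CZ}(\overline{\alpha})=n+1$, and $\mu_{\rm CZ}(\check{\gamma}_j)=1$ since $[\check{\gamma}_j]$ has grading $n-1$, so the virtual dimension equals $$(n+1)-1-\left(\mu_{\rm CZ}(\gamma)+n-1\right)-\sum_i \left(\mu_{\rm CZ}(\beta_i)+n-3\right) = 1-\mu_{\rm CZ}(\gamma)-\sum_i \left(\mu_{\rm CZ}(\beta_i)+n-3\right),$$ which is non-negative precisely when $\mu_{\rm CZ}(\gamma)=1$, i.e.\ $\gamma$ is one of the $\gamma_i$, and the sum over free punctures is empty.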
After the top level, we might have several levels of curves in the symplectization, with the highest level having only one positive puncture, asymptotic to $\gamma_i$. Since $\gamma_i$ is simple, the highest curve is necessarily somewhere injective, hence we can assume transversality for this curve. Since the curve must connect to some component that eventually intersects $B$, it must have at least one negative end $\gamma'$, and the expected dimension of the moduli space of this curve is $\mu_{\rm CZ}(\gamma_i)-\mu_{\rm CZ}(\gamma')-\sum_j (\mu_{\rm CZ}(\beta_j)+n-3)-1$. Since $\mu_{\rm CZ}(\gamma_i)$ is the lowest index and all SFT gradings $\mu_{\rm CZ}(\beta_j)+n-3$ are positive, this dimension is negative. As a result, there is no curve in the symplectization. The last part is the curve in the completion $\widehat{W}$, which is exactly ${\mathcal N}_{\gamma_i,B}$ with expected dimension $0$. Since $\gamma_i$ is simple, transversality is not an issue. Therefore the right hand side is the count for the fully stretched almost complex structure. If we start with an almost complex structure that is stretched enough, we may assume that in the process of stretching there is no curve in ${\mathcal M}_{\overline{\alpha},\overline{\beta}}$ or ${\mathcal M}_{\overline{\beta},\check{\gamma}_i}$ with expected dimension $-1$. Moreover, we may also assume that there is no curve in ${\mathcal M}_{\overline{\beta},\overline{\gamma},B}$ with expected dimension $-1$, for otherwise we would have a curve in a moduli space of negative dimension after fully stretching. Since $B$ is closed, in the process of neck-stretching we only have ${\mathcal M}_{\overline{\alpha},\check{\gamma}_j,B}$ and $\sum_{k=1}^N \#\left({\mathcal N}_{\overline{\alpha},\check{\gamma}_j, \gamma_k} \times {\mathcal N}_{\gamma_k, B}\right)$ as the boundaries corresponding to the two ends of the neck-stretching parameter. \end{proof} We define ${\mathcal N}_{\check{\gamma}_i,\gamma_j}$ to be the compactified moduli space of $$\left\{ u: \mathbb{R} \times S^1 \to \widehat{Y}\left| ({\rm d} u-X_H{\rm d} t)^{0,1}=0, u(\infty)=\check{\gamma}_i, u(-\infty)=(0,\gamma_j)\right.\right\}/\mathbb{R}$$ \begin{proposition} \label{prop:SFT2} For a sufficiently stretched almost complex structure and $H$ sufficiently close to the autonomous Hamiltonian depending only on $r$, we have $$\sum_{i=1}^j\#\left({\mathcal N}_{\check{\gamma}_j, \gamma_i}\times {\mathcal N}_{\gamma_i, B}\right) = \langle\, \delta([\check{\gamma}_j]), B \,\rangle,$$ and $\#{\mathcal N}_{\check{\gamma}_j,\gamma_j}=1$. \end{proposition} \begin{proof} The proof is similar to that of Proposition \ref{prop:SFT1}, fully stretching the moduli space ${\mathcal M}_{\check{\gamma}_j, B}$. By a similar dimension argument, the moduli space must break into ${\mathcal N}_{\check{\gamma}_j,\gamma_i}\times {\mathcal N}_{\gamma_i, B}$, so it suffices to prove that only terms with $i\le j$ appear. When $H$ is autonomous and depends only on $r$, $X_H$ is parallel to the Reeb vector field. Therefore, for any solution $u \in {\mathcal N}_{\check{\gamma}_j,\gamma_i}$, the $\alpha$-energy satisfies $\int u^*\alpha \ge 0$, which implies that the period of $\gamma_j$ must be greater than that of $\gamma_i$ unless $\gamma_i=\gamma_j$. Then for $H$ sufficiently close to the autonomous one, ${\mathcal N}_{\check{\gamma}_j,\gamma_i} \ne \emptyset$ implies that $i\le j$. Moreover, for the autonomous Hamiltonian, curves in ${\mathcal N}_{\check{\gamma}_j,\gamma_j}$ are necessarily reparametrizations of the trivial cylinder, since the $\alpha$-energy is $0$.
The moduli space is diffeomorphic to $S^1$ and is cut out transversely in the Morse-Bott sense. Then by the same analysis as in \cite{bourgeois2009symplectic}, and since $\gamma_j$ is simple, we have $\#{\mathcal N}_{\check{\gamma}_j,\gamma_j}=1$. One can avoid such a perturbation by using a cascades setup with an autonomous Hamiltonian as in \cite{bourgeois2009symplectic}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] Given $\langle\, \delta([\check{\gamma}_j]), B\,\rangle$ and $\#{\mathcal N}_{\check{\gamma}_i, \gamma_j}$, we can solve uniquely for $\#{\mathcal N}_{\gamma_i, B}$ by Proposition \ref{prop:SFT2}, since the coefficient matrix is triangular with ones on the diagonal. Then by Propositions \ref{prop:int}, \ref{prop:pair} and \ref{prop:SFT1}, we can express the intersection number $A\cdot B$ in terms of ${\mathcal N}_{\overline{\alpha}, \check{\gamma}_i,\gamma_j}$, ${\mathcal N}_{\overline{\alpha}, \gamma_i,\check{\gamma}_j}$, ${\mathcal N}_{\check{\gamma}_i, \gamma_j}$ and the pairings $\langle\, \delta([\check{\gamma}_j]), A \,\rangle$ and $\langle\, \delta([\check{\gamma}_j]), B \,\rangle$. The first three moduli spaces are independent of the filling, as they are contained in the symplectization. Note that $H^n(W;{\mathbb Z})$ is independent of the filling, and a basis can be represented by combinations of the $[\check{\gamma}_i]$ by Proposition \ref{prop:n}. By the universal coefficient theorem, $H_n(W;{\mathbb Z})$ is isomorphic to the free part of $H^n(W;{\mathbb Z})$, since $H^*(W;{\mathbb Z})$ is supported in degrees $\le n$. Fixing a basis of a fixed free part of $H^n(W;{\mathbb Z})$ induces a dual basis on $H_n(W;{\mathbb Z})$. We use this dual basis to identify the homology of two fillings. Under this identification, we also identify the pairings $\langle\, \delta([\check{\gamma}_j]), A \,\rangle$ and $\langle\, \delta([\check{\gamma}_j]), B \,\rangle$ for both fillings; hence the intersection form can be identified. \end{proof}
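To make the first step of the proof concrete: writing $M_{ji}=\#{\mathcal N}_{\check{\gamma}_j,\gamma_i}$ and $b_j=\langle\, \delta([\check{\gamma}_j]), B\,\rangle$, Proposition \ref{prop:SFT2} yields the unitriangular system $Mx=b$ for $x_i=\#{\mathcal N}_{\gamma_i,B}$, which is solved by forward substitution. A toy numerical check (with made-up counts, purely illustrative and no part of the geometric argument) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical counts for N = 3 orbits: M[j, i] = #N_{check-gamma_j, gamma_i}
# is lower triangular with ones on the diagonal by Proposition SFT2.
M = np.array([[ 1.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-1.0, 3.0, 1.0]])
b = np.array([4.0, 7.0, 2.0])    # b[j] = <delta([check-gamma_j]), B>

# det(M) = 1, so x[i] = #N_{gamma_i, B} is determined uniquely.
x = solve_triangular(M, b, lower=True, unit_diagonal=True)
print(x)    # -> [ 4. -1.  9.]
\end{verbatim}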
\section{Removing the topologically simple assumption} Exploiting the independence of augmentations using grading constraints dates back to the index-positive contact manifolds in \cite{MR2471597,cieliebak2018symplectic}. This notion was generalized by Lazarev \cite{lazarev2016contact} to the notion of asymptotically dynamically convex (ADC) manifolds, in order to include examples like flexibly fillable contact manifolds with vanishing first Chern class. Several structural maps on ($S^1$-equivariant) symplectic cohomology of exact fillings of ADC manifolds are independent of \emph{topologically simple} fillings \cite{filling,zhou2019symplectic}. Those topological conditions are used to get a ${\mathbb Z}$ grading for the symplectic cohomology generated by \emph{contractible orbits}, as the ADC condition only requires that $\mu_{\rm CZ}(\gamma)+n-3>0$ for contractible Reeb orbits $\gamma$ (which have a canonical ${\mathbb Z}$-valued Conley-Zehnder index, as $c_1(\xi)=0$). Because of this, $(S^{2n-1}/G,\xi_{\rm std})$ is ADC for any finite $G\subset U(n)$ acting freely on $S^{2n-1}$ and $n\ge 2$, as contractible orbits of $(S^{2n-1}/G,\xi_{\rm std})$ are the same as those on $(S^{2n-1},\xi_{\rm std})$. However, the non-contractible orbits on $(S^{2n-1}/G,\xi_{\rm std})$ play an important role in \cite{quotient}, and by \cite[Theorem A]{quotient}, there are no topologically simple fillings (even strong fillings) of $(S^{2n-1}/G,\xi_{\rm std})$. Therefore, it is natural to generalize the notion of ADC manifolds as follows, imposing conditions on non-contractible orbits as well. \begin{definition}\label{def:ADC} Let $(Y,\xi)$ be a contact manifold such that $c_1^{{\mathbb Q}}(\xi)=0$. Let $\Psi$ be a trivialization of $\det_{{\mathbb C}}\oplus^N\xi$ for some $N\in {\mathbb N}_+$. We say $(Y,\xi,\Psi)$ is \emph{generalized ADC} if there exist contact forms $\alpha_1>\alpha_2>\ldots$, positive real numbers $D_1<D_2<\ldots$ converging to infinity, and a positive number $\epsilon$, such that \emph{all} Reeb orbits of $\alpha_i$ of period up to $D_i$ are non-degenerate and have rational SFT grading \cite{Reeb,quotient} $\mu_{\rm CZ}(\gamma)+n-3\ge \epsilon$. We say $(Y,\xi,\Psi)$ is \emph{generalized TADC} if, in addition, there is a contact form $\alpha$ such that $\alpha_i>\alpha$ for all $i$. \end{definition} Since the Conley-Zehnder index of a contractible orbit is an integer and independent of the trivialization $\Psi$, it is clear that generalized ADC implies ADC. Moreover, if $c_1^{{\mathbb Q}}(\xi)=0$ and $H_1(Y;{\mathbb Q})=0$, then rational Conley-Zehnder indices are independent of $\Psi$, which is the case for $(S^{2n-1}/G,\xi_{\rm std})$ in \cite{quotient}. But in general, if $H^1(Y;{\mathbb Q})\ne 0$, then the notion of generalized ADC depends on the trivialization $\Psi$. Let $W$ be a strong filling such that $c_1^{{\mathbb Q}}(W)=0$. After fixing a trivialization $\Psi$ of $\det_{{\mathbb C}}\oplus^N TW$ for some $N\in {\mathbb N}$, using the induced rational Conley-Zehnder indices for Reeb/Hamiltonian orbits, we have a ${\mathbb Q}$-graded symplectic cohomology $SH^*(W;R;\Psi)$ for a ring $R$, as well as the ${\mathbb Q}$-graded positive symplectic cohomology $SH^*_+(W;R;\Psi)$. \begin{example}\label{ex:generalized_ADC} We have the following examples of generalized ADC contact manifolds. \begin{enumerate} \item Let $G\subset U(n)$ be such that the quotient ${\mathbb C}^n/G$ has an isolated singularity at $0$. Then the contact link $(S^{2n-1}/G,\xi_{\rm std})$ is generalized ADC if and only if ${\mathbb C}^n/G$ is a terminal singularity, by the work of McLean \cite{Reeb}. \item The contact boundary of a flexible Weinstein domain with vanishing rational first Chern class is generalized ADC for any trivialization, by the arguments in \cite{lazarev2016contact}. More precisely, the contact boundary of a subcritical Weinstein domain is generalized ADC, as all of the relevant orbits can be assumed to be contractible (they wind around cores of handles). When we attach a flexible handle, non-contractible orbits could appear; however, the argument of lifting the Conley-Zehnder indices by adding zig-zags in \cite[Theorem 3.18]{lazarev2016contact} works for non-contractible orbits and any fixed trivialization. \item For a closed manifold $Q$, the bundle $\det_{{\mathbb C}} \oplus^2 TT^*Q$ is trivialized using the trivial real bundle $\det_{\mathbb{R}}\oplus^2TQ$; we use $\Psi$ to denote this trivialization. Then $(S^*Q,\Psi)$ is generalized ADC if $\dim Q\ge 4$, as the Conley-Zehnder index in such a trivialization is the Morse index when the contact form is induced from a metric. This is an example where the notion of generalized ADC depends on $\Psi$, as changing $\Psi$ will increase the Conley-Zehnder indices of some orbits with nontrivial homotopy classes and decrease those of orbits with the opposite homotopy classes by the same amount, e.g.\ for $T^*T^n$. The same holds for any closed orbifold $Q$ with only isolated singularities (in which case $S^*Q$ is a contact manifold). \item Let $V$ be a Liouville domain such that $c_1^{{\mathbb Q}}(V)=0$. Then $\partial (V\times {\mathbb D})$ is generalized ADC for any trivialization by (the proof of) \cite[Theorem K]{filling}.
\end{enumerate} \end{example} \begin{proposition}\label{prop:spectral_sequence} Assume $(Y,\xi,\Psi)$ is \emph{generalized ADC} and there is an exact orbifold filling $W$ such that $\Psi$ extends to a trivialization $\widetilde{\Psi}$ of $\det_{{\mathbb C}} \oplus^NTW$. Then for any exact orbifold filling $V$ of $Y$, there is a spectral sequence converging to $SH^*_+(V;R)$ (not graded), such that \begin{enumerate} \item The $(N+1)$th page of the spectral sequence is isomorphic to $SH^*_+(W;R;\widetilde{\Psi})$ (filtered by the ${\mathbb Q}$-grading using $\widetilde{\Psi}$) for any coefficient ring $R$. \item The cochain map $\delta_{\partial}$ from the positive cochain complex to the Morse cochain complex of $Y$ is compatible with the spectral sequence. On the $(N+1)$th page, the induced map is isomorphic to $SH^*_+(W;R;\widetilde{\Psi})\to H^{*+1}(Y;R)$. \end{enumerate} If $(Y,\xi,\Psi)$ is \emph{generalized TADC}, the same holds for (semi-positive) strong fillings $V,W$ and $R$ the Novikov field. If $V,W$ are strong orbifold fillings, the same holds if we assume that the transversality issues in the setup of symplectic cohomology are resolved. \end{proposition} \begin{proof} First note that $\mu_{\rm CZ}(x)$ computed using the trivialization $\Psi$ is always a multiple of $\frac{1}{N}$. The proof follows from applying the arguments in \cite[\S 3]{quotient} to the spectral sequence associated to the filtration $$F^kC_+(H):=\left\langle x\left| |x|^{\partial}\ge \frac{k}{N} \right.\right\rangle, \quad k\in {\mathbb Z}, $$ where $|x|^{\partial}=n-\mu_{\rm CZ}(x)$. Strictly speaking, we need to apply the arguments in \cite[\S 3]{quotient} to the infinite telescope construction of the positive cochain complexes of the $\alpha_i$, filtered by $D_i$, in the definition of generalized ADC. \end{proof} As a corollary, we can exploit the degeneracy of the spectral sequence in some special cases, which improves some of the results in \cite{filling}. \begin{corollary}\label{cor:SS} Let $(Y^{2n-1},\xi)$ be the contact boundary of a flexible Weinstein domain $W_0$ with $c^{{\mathbb Q}}_1(W_0)=0$. Then for any exact orbifold filling $W$ of $Y$, we have the following. \begin{enumerate} \item\label{c1} We have $\dim \oplus SH^*_+(W;{\mathbb Q})\le \dim \oplus H^*(W_0;{\mathbb Q})$. \item\label{c2} If $H^n(W_0;{\mathbb Q})\to H^n(Y;{\mathbb Q})$ is injective, then $SH^*(W;{\mathbb Z})=0$, $W$ is a manifold, and $H^*(W;{\mathbb Z})\to H^*(Y;{\mathbb Z})$ is isomorphic to $H^*(W_0;{\mathbb Z})\to H^*(Y;{\mathbb Z})$. \item\label{c3} If $n$ is even, then $SH^*(W;{\mathbb Z})=0$, $W$ is a manifold, and $H^*(W;{\mathbb Z})\to H^*(Y;{\mathbb Z})$ is isomorphic to $H^*(W_0;{\mathbb Z})\to H^*(Y;{\mathbb Z})$. \end{enumerate} \end{corollary} \begin{proof} By \Cref{ex:generalized_ADC}, there is a trivialization $\Psi$ such that $(Y,\xi,\Psi)$ is generalized ADC. Since the trivialization $\Psi$ is the restriction of a trivialization of $\det_{{\mathbb C}}\oplus^N TW_0$, by \cite{BEE,Subflexible} we have $SH_+^*(W_0;{\mathbb Q})=H^{*+1}(W_0;{\mathbb Q})$. Then by \Cref{prop:spectral_sequence}, we have $\dim \oplus SH^*_+(W;{\mathbb Q})\le \dim \oplus H^*(W_0;{\mathbb Q})$ for any exact orbifold filling $W$. This proves \eqref{c1}. For \eqref{c2}, by assumption, the composition $SH^{*-1}_+(W_0;{\mathbb Q})\simeq H^*(W_0;{\mathbb Q})\to H^*(Y;{\mathbb Q})$ is injective.
Then by \Cref{prop:spectral_sequence}, there is a spectral sequence map from the spectral sequence of $SH^*_+(W;{\mathbb Q})$ to that of $H^*(Y;{\mathbb Q})$, which on the $(N+1)$th page is isomorphic to the injective map $SH^*_+(W_0;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$. Since the spectral sequence on $H^*(Y;{\mathbb Q})$ with index gap $1/N$ degenerates from the $(N+1)$th page, the injectivity implies that the spectral sequence on $SH^*_+(W;{\mathbb Q})$ also degenerates at the $(N+1)$th page, and the map $SH^*(W;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$ on the associated graded is the same as $SH^*_+(W_0;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$. Since $1$ is in the image of $SH^*_+(W_0;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$, we know that $1+a$ is in the image of $SH^*_+(W;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$ for some $a$ with $|a|>0$. Therefore, we have $SH^*(W;{\mathbb Q})=0$ and $SH^*_+(W;{\mathbb Q})\simeq H^{*+1}(W;{\mathbb Q})$. The injectivity of $SH^*_+(W_0;{\mathbb Q})\to H^{*+1}(Y;{\mathbb Q})$ implies that $SH^*_+(W;{\mathbb Q})\simeq H^{*+1}(W;{\mathbb Q}) \to H^{*+1}(Y;{\mathbb Q})$ is also injective. As a consequence, we have $c_1^{{\mathbb Q}}(W)=0$, and we can use a new trivialization $\Psi'$ of $\det_{{\mathbb C}} \oplus^N \xi$ induced from a trivialization of $\det_{{\mathbb C}} \oplus^N TW$, which clearly extends to a trivialization of $\det_{{\mathbb C}} \oplus^N TW_0$, as $W_0$ is Weinstein of dimension at least $6$. We can then run the argument again using $\Psi'$ instead. Since the situation on $W_0$ is independent of such a trivialization, we get an identification of the cochain complexes and cochain maps on the nose, just like the situation in \cite{filling} but with ${\mathbb Z}$ coefficients, and the claim follows. That $W$ is a manifold follows from the same argument as in \cite[Theorem C]{gironella2021exact}. For \eqref{c3}, when $n$ is even, $H^*(W_0;{\mathbb Q})\to H^{*}(Y;{\mathbb Q})$ is injective in odd degrees. Note that symplectic cohomology is canonically graded by ${\mathbb Z}/2$, and the differentials on the spectral sequence are compatible with the ${\mathbb Z}/2$ grading. Looking at the $(N+1)$th page of the spectral sequence map as before, which is injective on even (${\mathbb Z}/2$) degrees of $SH^*_+(W;{\mathbb Q})$, the differential on the $(N+1)$th page of the spectral sequence for $SH^*_+(W;{\mathbb Q})$ must vanish on odd degrees. By induction, the $(N+k)$th page of the spectral sequence map is injective on even degrees, and the differential on the $(N+k)$th page of the spectral sequence for $SH^*_+(W;{\mathbb Q})$ must vanish on odd degrees for all $k>0$. As a consequence, we again have $SH^*(W;{\mathbb Q})=0$ and $SH^*_+(W;{\mathbb Q})\simeq H^{*+1}(W;{\mathbb Q})$. We can also conclude that $H^2(W;{\mathbb Q})\to H^2(Y;{\mathbb Q})$ is injective, hence $c_1^{{\mathbb Q}}(W)=0$, and we can argue as before. \end{proof}
\section{Introduction} Recently, the problem of superconductivity near the boundaries of a Bardeen–Cooper–Schrieffer (BCS) superconductor was revisited. The original calculations in BCS theory \cite{de1964boundary,de1966superconductivity,CdGM_Coherence,CdGM_french,abrikosov1965concerning} came to the conclusion that the superconducting gap approaches the surface of a BCS superconductor with zero normal derivative. It was shown in \cite{samoilenka2020boundary,samoilenka2020pair,benfenati2021boundary,barkman2022elevated,samoilenka2021microscopic,hainzl2022boundary} that instead surfaces, corners and edges of a BCS superconductor in general have a higher critical temperature than the bulk. The effect is closely connected with the oscillation of the density of states near boundaries, which allows one to construct highly inhomogeneous solutions of the gap equation that have a higher critical temperature than nearly-uniform solutions. Although the theoretical results also indicated that the effect is strongly dependent on surface quality and hence can be modified by oxidation or a different chemical composition of the surface \cite{samoilenka2020boundary,barkman2022elevated}, there are nonetheless experimental reports on boundary superconductivity \cite{fink1969surface,lortz2006origin, janod1993split,khlyustikov2011critical,khlyustikov2016surface,kozhevnikov2007observation,khlyustikov2021surface,mangel2020stiffnessometer,tsindlekht2004tunneling,belogolovskii2010zirconium,khasanov2005anomalous}. The previous theoretical studies were primarily focused on the simplest square or rectangular lattices, or on continuum theories. That raises the question of the interplay between these effects and the existence of nontrivial localized single-electron states on different lattices. One of the simplest examples one can consider is the honeycomb lattice, which has nontrivial boundary states \cite{nakada1996edge, fujita1996peculiar, wakabayashi1999electronic, Wakabayashi_2010, PhysRevB.71.193406, PhysRevB.73.045124,shtanko2018robustness}. To study the interplay between these effects, we consider the problem of boundary and bulk critical temperatures on a honeycomb lattice. While realizations of various unconventional superconducting pairing symmetries have been proposed for such lattices (for a review see \cite{pangburn2022superconductivity}), our goal is to compare the effects of the different lattice symmetries on the boundary effects of \cite{samoilenka2020boundary,samoilenka2020pair,benfenati2021boundary,barkman2022elevated,samoilenka2021microscopic,hainzl2022boundary}; to that end, we consider the case of the simplest s-wave pairing interaction within the mean-field approximation. Although the considerations would also apply to other systems with similar lattice effects, for brevity we refer to the honeycomb system below as graphene. \section{Infinite structure} \label{section: infinite} Let us first look at the infinite honeycomb structure made from identical atoms. We divide these atoms into two groups $(A, B)$ to form two sublattices.
The effective Hubbard Hamiltonian for the system reads \begin{equation} \label{eq:effective_Hamiltonian} \begin{split} H_\text{eff} = &-t\sum_{\langle \textbf{i},\textbf{j} \rangle} \sum_{\sigma = \uparrow, \downarrow} {\left( a_{\textbf{i},\sigma}^\dagger b_{\textbf{j},\sigma} + b_{\textbf{j},\sigma}^\dagger a_{\textbf{i},\sigma} \right)} \\ &- \mu \sum_{\textbf{i}} \sum_{\sigma = \uparrow, \downarrow} {\left( a_{\textbf{i},\sigma}^\dagger a_{\textbf{i},\sigma} + b_{\textbf{i},\sigma}^\dagger b_{\textbf{i},\sigma} \right)} \\ &-V \sum_{\textbf{i}} {\left( a_{\textbf{i},\uparrow}^\dagger a_{\textbf{i},\uparrow} a_{\textbf{i},\downarrow}^\dagger a_{\textbf{i},\downarrow} + b_{\textbf{i},\uparrow}^\dagger b_{\textbf{i},\uparrow} b_{\textbf{i},\downarrow}^\dagger b_{\textbf{i},\downarrow} \right)}. \end{split} \end{equation} Here $a_{\textbf{i},\sigma}^\dagger$ ($a_{\textbf{i},\sigma}$) is the creation (annihilation) operator for an electron with spin $\sigma$ on the $A$ site of the cell whose position is described by the vector $\textbf{i}=(n, m)$, where $n$ ($m$) specifies the horizontal (vertical) position. The same applies to the operators $b_{\textbf{i},\sigma}^\dagger$ and $b_{\textbf{i},\sigma}$, which correspond to the $B$ sites. In Eq. (\ref{eq:effective_Hamiltonian}), the first term describes the kinetic energy (hopping between nearest-neighbour sites $\langle \textbf{i},\textbf{j} \rangle$ without spin flip), parameterized by the hopping integral $t$ ($t>0$). The second term, associated with the chemical potential $\mu$, controls the filling. The last term describes the on-site attraction between electrons, with potential strength $V$ ($V>0$). All parameters $t$, $\mu$, $V$ are assumed to be constant in space. The main focus of this work is on the physics of boundaries and the boundary superconducting states that were recently discussed on square lattices and in the continuum \cite{samoilenka2020boundary,samoilenka2021microscopic,barkman2022elevated,barkman2019surface,benfenati2021boundary,samoilenka2020pair}. In order to compare with the previously considered cases, here we focus on s-wave pairing. Further, all energies, $\mu$, $V$, and the temperature $T$ are measured in units of $t$ for simplicity. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{infinite_system_and_1st_BZ.pdf} \caption{(a) Honeycomb lattice in real space, where the red (blue) circles mark the $A$ ($B$)-sublattice sites. (b) 1st Brillouin zone for the Bloch's theorem expansion, Eq. (\ref{eq:Bloch_theorem}).} \label{fig: honeycomb infinite structure} \end{figure} We apply the Hartree–Fock–Bogoliubov mean-field approximation.
The transformed one-particle mean-field Hamiltonian reads \begin{equation} \label{eq:mean-field_Hamiltonian} \begin{split} H_\text{MF} = &-\sum_{\langle \textbf{i},\textbf{j} \rangle} \sum_{\sigma} \left( a_{\textbf{i},\sigma}^\dagger b_{\textbf{j},\sigma} + b_{\textbf{j},\sigma}^\dagger a_{\textbf{i},\sigma} \right) \\ & - \mu \sum_{\textbf{i}} \sum_{\sigma} {\left( a_{\textbf{i},\sigma}^\dagger a_{\textbf{i},\sigma} + b_{\textbf{i},\sigma}^\dagger b_{\textbf{i},\sigma} \right)} \\ &+ \sum_{\textbf{i}} \Bigl( \Delta_{\textbf{i},A} a_{\textbf{i},\uparrow}^\dagger a_{\textbf{i},\downarrow}^\dagger + \Delta_{\textbf{i},B} b_{\textbf{i},\uparrow}^\dagger b_{\textbf{i},\downarrow}^\dagger \\ & + \Delta^{*}_{\textbf{i},A} a_{\textbf{i},\downarrow} a_{\textbf{i},\uparrow} + \Delta^{*}_{\textbf{i},B} b_{\textbf{i},\downarrow} b_{\textbf{i},\uparrow} \Bigr) + \text{const}, \end{split} \end{equation} where we introduced the superconducting mean-field order parameter $\Delta_{\textbf{i},\text{type}}$ (here "type" refers to the $A$ or $B$ sublattice), \begin{equation} \label{eq:delta_definition} \Delta_{\textbf{i},A} = -V \langle a_{\textbf{i},\downarrow} a_{\textbf{i},\uparrow} \rangle, \qquad \Delta_{\textbf{i},B} = -V \langle b_{\textbf{i},\downarrow} b_{\textbf{i},\uparrow} \rangle. \end{equation} This parameter is constant in space in the case of an infinite system. The mean-field Hamiltonian is quadratic, and it can be diagonalized with the following Bogoliubov transformation for a unit cell consisting of two atoms: \begin{equation} \label{eq:Bogoliubov_transformation} \mqty(a_{\textbf{i},\sigma} \\ b_{\textbf{i},\sigma}) = \sum_\nu^{'} \mqty(u_{\textbf{i}}^\nu \\ y_{\textbf{i}}^\nu) \gamma_{\nu, \sigma} - \sigma \sum_\nu^{'} \mqty(v_{\textbf{i}}^{\nu *} \\ z_{\textbf{i}}^{\nu *}) \gamma_{\nu, -\sigma}^\dagger. \end{equation} Here, the operator $\gamma_{\nu, \sigma}^\dagger$ ($\gamma_{\nu, \sigma}$) creates (annihilates) a quasiparticle in the state $\nu$ with spin $\sigma$ ($\sigma = \uparrow = 1$, $\sigma = \downarrow = -1$), and the prime denotes summation over states with positive excitation energy. These operators satisfy the standard anticommutation relations $\{ \gamma_{\nu, \sigma},\gamma^\dagger_{\nu^{'}, \sigma^{'}} \} = \delta_{\nu, \nu^{'}}\delta_{\sigma, \sigma^{'}}$, $\{ \gamma_{\nu, \sigma},\gamma_{\nu^{'}, \sigma^{'}} \} = \{ \gamma^\dagger_{\nu, \sigma},\gamma^\dagger_{\nu^{'}, \sigma^{'}} \} = 0$. The diagonalized Hamiltonian reads \begin{equation} H_\text{MF} = E_g + \sum_\nu^{'} \sum_\sigma E^\nu \gamma_{\nu, \sigma}^\dagger \gamma_{\nu, \sigma}, \end{equation} where $E_g$ is the ground state energy. The excitation energies $E^\nu$ (we are looking for $E^\nu > 0$) can be obtained from the following system of Bogoliubov–de Gennes equations with self-consistency conditions: \begin{equation} \label{eq:BdG_eq} \sum_\textbf{j} \begin{pmatrix} H_0 (\textbf{i},\textbf{j}) && \Delta (\textbf{i},\textbf{j}) \\ \Delta^\dagger (\textbf{i},\textbf{j}) && -H_0^* (\textbf{i},\textbf{j}) \end{pmatrix} \mqty(u^{\nu}_\textbf{j} \\ y^{\nu}_\textbf{j} \\ v^{\nu}_\textbf{j} \\ z^{\nu}_\textbf{j}) = E^\nu \mqty(u^{\nu}_\textbf{i} \\ y^{\nu}_\textbf{i} \\ v^{\nu}_\textbf{i} \\ z^{\nu}_\textbf{i}), \end{equation} \begin{equation}\label{eq:delta_in_vectors} \begin{gathered} \Delta_{\textbf{i},A} = V \sum_\nu^{'} u_{\textbf{i}}^\nu v_{\textbf{i}}^{\nu *} \tanh \frac{E^\nu}{2 T}, \\ \Delta_{\textbf{i},B} = V \sum_\nu^{'} y_{\textbf{i}}^\nu z_{\textbf{i}}^{\nu *} \tanh \frac{E^\nu}{2 T},
\end{gathered} \end{equation} where $H_0 (\textbf{i},\textbf{j})$ and $\Delta (\textbf{i},\textbf{j})$ are $2 \times 2$ matrices. Explicit forms of the matrices and the derivation of the self-consistency conditions are given in Appendix \ref{app:ap1}. The eigenvalue problem, Eq. (\ref{eq:BdG_eq}), can be significantly simplified in the limit of infinite system size. Due to the translational and rotational symmetries, $\Delta_{\textbf{i},A} = \Delta_{\textbf{i},B} = \Delta$ in this case. We use the translational symmetry in both the $x$ and $y$ directions because the order parameter is constant in the above-mentioned limit. Applying Bloch's theorem, one can expand the eigenvectors in plane waves for $\textbf{i} = (n,m)$: \begin{equation} \label{eq:Bloch_theorem} \mqty(u^{\nu}_\textbf{i} \\ y^{\nu}_\textbf{i} \\ v^{\nu}_\textbf{i} \\ z^{\nu}_\textbf{i}) = \frac{1}{\sqrt{N_x N_y / 2}} \sum_{k_x, k_y} e^{i (k_x n + k_y m)} \mqty(\mathcal{U}_\textbf{k} \\ \mathcal{Y}_\textbf{k} \\ \mathcal{V}_\textbf{k} \\ \mathcal{Z}_\textbf{k}), \end{equation} where $N_x$ ($N_y$) is the number of atoms in the $x$ ($y$) direction, and $k_x$ and $k_y$ are wavenumbers located in the first Brillouin zone (1st BZ). This Brillouin zone (Fig. \ref{fig: honeycomb infinite structure}b) is halved in the $k_x$ direction and compressed by a factor $2 / \sqrt{3}$ in the $k_y$ direction in comparison to the conventional choice of Brillouin zone for the honeycomb lattice (which has the shape of a regular hexagon of radius $4 \pi / 3$ for unit length between nearest sites). Its area is $S_\text{1st BZ} = 2 \pi^2$. Here $k_y$ takes $N_y$ different values, while $k_x$ takes $N_x /2$ values, because the unit cell we chose consists of 2 atoms in the $x$ direction. Substituting Eq. (\ref{eq:Bloch_theorem}) into Eq. (\ref{eq:BdG_eq}) and solving the matrix equation, one obtains the eigenvalues $E_s$: \begin{equation} \label{eq:energies with delta} E_s = \pm \sqrt{\epsilon^{2}_s + \Delta\Delta^*}, \end{equation} \begin{equation} \label{eq:energies} \begin{gathered} \epsilon_s = -\mu + s \cdot \epsilon_0 (k_x, k_y), \\ \epsilon_0 (k_x, k_y) = \sqrt{3 + 4 \cos{k_x} \cos{k_y} + 2 \cos{2 k_y}}, \end{gathered} \end{equation} where we introduced the auxiliary functions $\epsilon_s$ and the parameter $s = \pm 1$. One can obtain the well-known self-consistency condition with an integration over the first Brillouin zone by switching from summation to integration (for the detailed derivation see Appendix \ref{app:ap2}): \begin{equation} \frac{1}{V} = \frac{1}{4 S_{\text{1st BZ}}} \sum_{s = \pm 1} \iint_{\text{1st BZ}} dk_x dk_y \frac{\tanh{\frac{E_s}{2 T}}}{E_s}. \end{equation} This equation contains an implicit temperature dependence of the energy gap. It can be further simplified ($\Delta \rightarrow 0$) to find the critical temperature $T_{c1}$: \begin{equation} \label{eq:infinite system integral} \medmath{\frac{1}{V} = \frac{1}{4 S_{\text{1st BZ}}} \iint_{\text{1st BZ}} dk_x dk_y} \left( \frac{\tanh{\frac{\epsilon_+}{2 T_{c1}}}}{\epsilon_+} + \frac{\tanh{\frac{\epsilon_-}{2 T_{c1}}}}{\epsilon_-} \right). \end{equation} This equation allows us to calculate the superconductivity phase diagram (in the $V$--$\mu$ plane), i.e.\ to find the transition between the superconducting ($\Delta \neq 0$) and normal ($\Delta = 0$) states. The phase diagram is shown in Fig. \ref{fig: phase diagram}, where superconductivity exists above the chosen transition line. Cooling the system leads to a decreasing critical pairing strength in the region $|\mu| \in [0; 3)$, though, as Fig. \ref{fig: phase diagram} shows, the dependence is markedly nonlinear.
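As a minimal numerical sketch (our illustration, not the production code behind Fig. \ref{fig: phase diagram}): by the periodicity of $\epsilon_0$, the rectangle $[0,2\pi)\times[0,\pi)$ has the same area $2\pi^2$ as the 1st BZ and contains one representative of every $\textbf{k}$, so a grid average over it reproduces the integral in Eq. (\ref{eq:infinite system integral}):
\begin{verbatim}
import numpy as np

def inv_V(mu, T, nk=600):
    """Right-hand side of the linearized gap equation on an nk x nk grid;
    returns 1/V at the critical temperature T = T_c1 (energies in units of t)."""
    kx = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    ky = np.linspace(0, np.pi, nk, endpoint=False)
    KX, KY = np.meshgrid(kx, ky)
    eps0 = np.sqrt(3 + 4 * np.cos(KX) * np.cos(KY) + 2 * np.cos(2 * KY))
    total = 0.0
    for s in (+1, -1):
        eps = -mu + s * eps0
        # tanh(x/2T)/x -> 1/(2T) as x -> 0; nudge exact zeros to avoid 0/0
        eps = np.where(np.abs(eps) < 1e-12, 1e-12, eps)
        total += np.mean(np.tanh(eps / (2 * T)) / eps)
    return total / 4

# critical attraction V at half filling for T_c1 = 0.1:
print(1.0 / inv_V(mu=0.0, T=0.1))
\end{verbatim}
Scanning $\mu$ and $T$ with such a routine produces transition lines of the type shown in Fig. \ref{fig: phase diagram}.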
One can ask two basic questions: \begin{itemize} \item Is there a lower boundary for the curve (what does it look like at $T=0$)? \item How does this curve approach the zero-temperature configuration? \end{itemize} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{honeycomb_infinite_lattice_phase_diagram.pdf} \caption{Infinite honeycomb lattice superconductivity phase diagram in chemical potential--on-site attraction coordinates for different critical temperatures. For a given temperature, the gap is nonzero above the transition line and zero below it.} \label{fig: phase diagram} \end{figure} The integral in Eq. (\ref{eq:infinite system integral}) was calculated numerically to obtain the results in Fig. \ref{fig: phase diagram}. Decreasing the temperature leads to increasing numerical errors, due to the narrowing of the region of energies ($|\epsilon_s| \lesssim T$) with the biggest contribution to the integral, so these questions cannot be answered using a numerical approach. One can show analytically (see Appendix \ref{app:ap3}) that the dominant contribution to the integral in Eq. (\ref{eq:infinite system integral}) close to zero temperature is \begin{equation} \frac{1}{V} \propto - \ln{T} \end{equation} for $\mu \in (0; 3)$. This tendency is also verified numerically, with the result that it holds for $|\mu| \in (0; 1) \cup (1; 3)$, $T < 0.01$, and for higher temperatures when $|\mu|$ is far from the exceptional points 0, 1, 3. It also answers the first question by showing that we have the $V=0$ boundary at $T=0$ in the region of chemical potential where the Fermi surface has nonzero length ($|\mu| \in (0; 3)$). \section{Finite systems} \subsection{Linearized gap equation approach} The method we used to find the critical temperature in the previous section works only for an infinite structure, where $\Delta$ is constant. Consider now the problem of calculating $T_c$ for finite structures without the assumption of constant $\Delta$. When the superconducting transition is second order at the mean-field level (all $\Delta_\textbf{i} \rightarrow 0$ when $T \rightarrow T_c$), one can write the Bogoliubov–de Gennes equations (\ref{eq:BdG_eq}) up to the leading order in $\Delta$: \begin{equation} \label{eq:Tc_discrete} \frac{1}{V} \Delta_{\textbf{i},\text{type}} = \sum_{\textbf{i}',\text{type}'} K_{\textbf{i},\text{type},\textbf{i}',\text{type}'} \Delta_{\textbf{i}',\text{type}'}, \end{equation} \begin{equation} \begin{gathered} \label{eq:K_matrix} K_{\textbf{i},\text{type},\textbf{i}',\text{type}'} = \sum_{s,s'} \sum_{\textbf{k},\textbf{k}'} \frac{1 - f(\epsilon_s (\textbf{k}))- f(\epsilon_{s'} (\textbf{k}'))}{\epsilon_s (\textbf{k}) + \epsilon_{s'} (\textbf{k}')} \\ \cdot w_{s,\textbf{k}}^* (\textbf{i},\text{type}) w_{s',\textbf{k}'}^* (\textbf{i},\text{type}) w_{s,\textbf{k}} (\textbf{i}',\text{type}') w_{s',\textbf{k}'} (\textbf{i}',\text{type}'), \end{gathered} \end{equation} where $f(E)$ is the Fermi distribution function, $f(E) = (1+e^{E/{T}})^{-1}$, and $w_n$ are the one-electron wave functions in the normal state (when $\Delta = 0$) corresponding to the eigenenergies $\epsilon_n$; they can be found in many papers \cite{Wakabayashi_2010,Saroka2017Optics,talkachov2022wave}. Here, summation over $\textbf{i}',\text{type}'$ means summation over all sites of the system, summation over $\textbf{k}$ means summation over all allowed $k_x$ and $k_y$, and $\epsilon_s$ are the normal-state eigenenergies defined in Eq. (\ref{eq:energies}). If the system (Fig. \ref{fig: lattice transformation}\textit{a}) has $N_x$ atoms in the horizontal direction (along the armchair edge) and $N_y$ atoms in the vertical direction (along the zigzag edge), the matrix $K_{\textbf{i},\text{type},\textbf{i}',\text{type}'}$ has dimensions $N_x N_y \cross N_x N_y$. Equation (\ref{eq:Tc_discrete}) is an eigenvalue problem: the largest eigenvalue of the $K$ matrix gives $V^{-1}$, and the corresponding eigenvector is the energy gap distribution close to the superconducting transition.
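This eigenvalue problem can be prototyped directly from Eqs. (\ref{eq:Tc_discrete})--(\ref{eq:K_matrix}). The following is a minimal sketch, assuming the normal-state eigenenergies and real wave functions are supplied as arrays; a small open chain replaces the honeycomb spectrum purely to keep the snippet self-contained:
\begin{verbatim}
import numpy as np

def pairing_matrix(eps, w, T):
    """K of Eqs. (Tc_discrete)-(K_matrix): eps[n] are normal-state
    eigenenergies measured from mu, w[n, i] the real wave functions."""
    th = np.tanh(eps / (2 * T))
    Es = eps[:, None] + eps[None, :]
    # (1 - f(E) - f(E'))/(E + E') = (tanh(E/2T) + tanh(E'/2T))/(2(E + E'));
    # the kernel stays finite, -f'(E), in the limit E' -> -E
    safe = np.where(np.abs(Es) > 1e-10, Es, 1.0)
    L = np.where(np.abs(Es) > 1e-10,
                 (th[:, None] + th[None, :]) / (2 * safe),
                 1.0 / (4 * T * np.cosh(eps[:, None] / (2 * T)) ** 2))
    # K[i, j] = sum_{n, n'} L[n, n'] w_n(i) w_n'(i) w_n(j) w_n'(j)
    return np.einsum('nm,ni,mi,nj,mj->ij', L, w, w, w, w)

# toy normal-state problem: 20-site open chain at mu = 0.5, T = 0.1
N, mu, T = 20, 0.5, 0.1
H0 = -np.eye(N, k=1) - np.eye(N, k=-1) - mu * np.eye(N)
eps, U = np.linalg.eigh(H0)
vals, vecs = np.linalg.eigh(pairing_matrix(eps, U.T, T))
print("critical V  =", 1 / vals[-1])   # largest eigenvalue = 1/V
print("gap profile =", vecs[:, -1])    # Delta_i near T_c (up to scale)
\end{verbatim}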
\begin{figure} \centering \includegraphics[width=0.99\linewidth]{lattice_change.pdf} \caption{Honeycomb lattice transformation to a rectangular shape.} \label{fig: lattice transformation} \end{figure} Let us apply the approach first to graphene nanotubes and then to finite rectangular graphene systems. \subsection{Graphene nanotubes} \label{section: Graphene nanotubes} Let us consider nanotubes with open armchair (periodic in the $x$ direction, Fig. \ref{fig: lattice transformation}) and zigzag (periodic in the $y$ direction) edges; we call them armchair and zigzag nanotubes, respectively. Free-electron wave functions in the first case are extended states described by sine functions \cite{Wakabayashi_2010, wakabayashi2012nanoscale, Onipko2018Revisit, zheng2007analytical, talkachov2022wave}. A zigzag nanotube, however, has both extended and localized wave functions \cite{wakabayashi2012nanoscale,talkachov2022wave}. The localized ones are called edge states and are described by exponentials characterizing the localization of the states near the boundaries. The number of edge states equals $N_y/3$ in the limit of a wide ($N_x \gg 1$) zigzag nanotube \cite{nakada1996edge,talkachov2022wave}; hence, the relative amount of edge states is $(3 N_x)^{-1}$ of the total number of states. We employ the linearized gap equation approach (Eqs. (\ref{eq:Tc_discrete}), (\ref{eq:K_matrix})) to examine the superconducting phase transition in the two types of nanotubes. The wave functions and eigenenergies are taken from Ref. \cite{talkachov2022wave}. The system sizes we used varied from $40 \cross 40$ ($8.4 \cross 4.8$ nm) to $70 \cross 70$ ($14.8 \cross 8.5$ nm). The calculation of the $K$ matrix (Eq. (\ref{eq:K_matrix})) is computationally expensive, because it scales as $\mathcal{O} [(N_x N_y)^4]$. Other system sizes ($N_x \neq N_y$) were also studied, with results identical to the case $N_x = N_y$. The system size effect for the above-mentioned range of systems, for all $\mu$ and $T > 0.1$, is less than $0.1 \%$ (in $V$). The effect is more significant for smaller systems: for temperatures below $0.05$, it is noticeable even for $40 \cross 40$ systems. It manifests itself in the form of oscillations superimposed on the overall trend of the $V(\mu)$ function, which can be seen at the bottom of Figs. \ref{fig: x periodic}\textit{a}, \ref{fig: y periodic}\textit{a}. The main reason is the following: the 'weight function' (the fraction in Eq. (\ref{eq:K_matrix})) is localized in the region $|\epsilon_s (\textbf{k})|, |\epsilon_{s'} (\textbf{k}')| \lesssim T$, and the density of states is discrete for a finite lattice; therefore, a smooth shift of the chemical potential leads to a step-like change in the number of non-negligible entries of the $K$ matrix and, consequently, to a significant change of the eigenvalue, which is proportional to $V^{-1}$. The number of non-vanishing entries of the $K$ matrix is large at high $T$, so a discrete change in this number does not have a significant effect there.
The lowest investigated temperatures are set to 0.03 and 0.04 for armchair and zigzag nanotubes, respectively; they are chosen as the temperatures at which the above-mentioned oscillations become visibly detectable, and they differ between armchair and zigzag nanotubes due to the different density of states. Calculations of the density of states for infinite nanoribbons (the same as for infinite-radius nanotubes) show a peaky structure \cite{Wakabayashi_2010}; therefore, increasing the system size will not solve the problem at low temperatures. We have not discussed the influence of the wave functions in Eq. (\ref{eq:K_matrix}) here, because they are temperature-independent. \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{x_periodic.png} \caption{(\textit{a}) Phase diagram for an armchair nanotube (the system is periodic in the $x$ direction and free in the $y$ direction). Solid lines correspond to constant critical temperature curves, with $T_c$ written close to each line. The big numbers 1 and 2 enumerate the regions with different order parameter distributions, illustrated in part (\textit{b}). The dashed line is the 'transition line' between the two regions ($\Delta$ in the bulk and on the boundary are equal). (\textit{c}) Relative change in $T_c$ for an armchair nanotube in comparison to the infinite graphene sheet. Solid lines are constant-level curves. The dashed line is the same as in part (\textit{a}).} \label{fig: x periodic} \end{figure*} As a check of our results, we employed a self-consistent approach using a spectral decomposition of the Bogoliubov–de Gennes equations (\ref{eq:BdG_eq}) with Chebyshev polynomials \cite{weisse2006kernel, covaci2010efficient, nagai2012efficient} up to order 2000. It allows us to calculate the order parameter distribution for a given set of $\mu$, $V$, $T$. We used it in the following way: using the half-division method, we look for the value of $V$ which gives the largest $\Delta \in [10^{-5}; 10^{-4}]$ in the sample after 1000 iterations of the self-consistency equation (\ref{eq:delta_in_vectors}) for given $\mu$ and $T$ (see the sketch below). The method only allows us to estimate the transition $V$ for given $\mu$ and $T$, because we do not achieve full convergence. It always gives a lower boundary for $V$, which is a few percent lower than the $V$ values found from the linearized gap equation for $T>0.1$; increasing the temperature decreases the difference. However, the spectral Chebyshev polynomial decomposition approach also fails at low temperatures, due to the influence of Gibbs oscillations \cite{gibbs1899fourier}.
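The half-division driver can be sketched as follows (a toy version: for brevity, the site-resolved Chebyshev iteration is replaced by the uniform-gap self-consistency map of Section \ref{section: infinite}; the bracket, grid size and sweep count here are illustrative choices):
\begin{verbatim}
import numpy as np

# k-grid and normal-state dispersion, Eq. (energies), precomputed once
nk = 120
KX, KY = np.meshgrid(np.linspace(0, 2 * np.pi, nk, endpoint=False),
                     np.linspace(0, np.pi, nk, endpoint=False))
EPS0 = np.sqrt(3 + 4 * np.cos(KX) * np.cos(KY) + 2 * np.cos(2 * KY))

def sweep(D, V, T, mu):
    """One iteration of the uniform self-consistency map Delta -> V*Delta*<...>."""
    r = 0.0
    for s in (+1, -1):
        E = np.sqrt((-mu + s * EPS0) ** 2 + D ** 2)
        r += np.mean(np.tanh(E / (2 * T)) / E)
    return V * D * r / 4

def transition_V(T, mu, lo=0.5, hi=6.0, n_sweeps=300, seed=1e-6):
    """Half-division in V: the seed gap decays below 1e-5 on the normal
    side and grows past 1e-4 on the superconducting side."""
    for _ in range(60):
        V, D = 0.5 * (lo + hi), seed
        for _ in range(n_sweeps):
            D = min(sweep(D, V, T, mu), 1.0)   # cap runaway growth
        if D > 1e-4:
            hi = V
        elif D < 1e-5:
            lo = V
        else:
            return V
    return 0.5 * (lo + hi)

print(transition_V(T=0.1, mu=0.0))
\end{verbatim}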
\subsubsection{Nanotubes with armchair boundary} Figure \ref{fig: x periodic}\textit{a} shows the phase diagram of the superconducting phase transition for an armchair nanotube. Here the critical temperature is called $T_{c2}$ because, in general, it differs from $T_{c1}$ for an infinite sample. We separated the diagram into two regions (1 and 2) with different distributions of the order parameter: in the first region, $\Delta$ on the boundaries (the top and bottom of the sample, since the system is periodic in the \textit{x} direction and open in the \textit{y} direction) is higher than $\Delta$ in the center of the sample, while the second region obeys the opposite criterion. This does not mean that on the dashed line the distribution of the order parameter is uniform (on this line, the sites with maximal $\Delta$ are located close to the boundary). Figure \ref{fig: x periodic}\textit{b} shows the typical order parameter distributions (normalized to unity) in the two regions. Here we used the square lattice representation obtained by the lattice transformation (Fig. \ref{fig: lattice transformation}). One can see that in the first region the largest gap lies on the boundary, while in the second region it lies in the center. One can describe the boundary gap enhancement in region 1 by an exponentially decaying function $\Delta(i_y) \propto (e^{-y / \xi} + e^{(y - L_y) / \xi})$, where $\xi (\mu, T)$ is a coherence length and $L_y$ is the nanotube length. This function works poorly at the boundaries, due to the presence of short-range oscillations (the Wilbraham-Gibbs phenomenon \cite{wilbraham1848cambridge, gibbs1899fourier}, also called Friedel oscillations), but it can describe the tails that overlap in the bulk. Fitting this function to the obtained gap distributions (like those in Fig. \ref{fig: x periodic}\textit{b}), one comes to the following conclusion: $\xi (\mu, T)$ is an increasing function of $\mu$ and a decreasing function of $T$ in region 1. In region 2, the boundaries lead to a suppression of the gap, which can be described by a similar function; here the coherence length $\xi (\mu, T)$ is a decreasing function of both parameters. The relative change in the critical temperature in comparison to the infinite system (Eq. (\ref{eq:infinite system integral})) is shown in Fig. \ref{fig: x periodic}\textit{c}, where we restricted the maximal value to 1 ($100 \%$). Almost the whole investigated region has $T_{c2} > T_{c1}$, which means that superconductivity in the armchair nanotube is enhanced in comparison to the infinite graphene sheet. Combining this result with the gap distributions (Fig. \ref{fig: x periodic}\textit{b}), we can say that superconductivity survives on the boundaries. However, one comes to the opposite result (the boundaries suppress superconductivity) for large values of $\mu$ (an almost filled band): an increase of the chemical potential leads to a monotonic decrease of the relative change in $T_c$ and finally to negative values. \subsubsection{Nanotubes with zigzag boundary} We now switch to the discussion of a zigzag nanotube, i.e.\ a sample that is periodic in the \textit{y} direction and open in the \textit{x} direction (Fig. \ref{fig: y periodic}). Here one can note a significant change of behaviour in the region $|\mu| \lesssim 0.5$ (Fig. \ref{fig: y periodic}\textit{a}), where $V$ is an increasing function of $\mu$ and lies lower than for the infinite graphene sheet (Fig. \ref{fig: phase diagram}) and the armchair nanotube (Fig. \ref{fig: x periodic}\textit{a}). In this case, the gap is not uniform even in the periodic $y$ direction, because of the zigzag boundary, where only half of the 'boundary' atoms have two neighbours (Fig. \ref{fig: y periodic}\textit{c}, where a zoom of a $6 \times 6$ boundary region is shown). Again we divide the whole phase diagram into a few regions. In regions 1 and 1', the average gap on the boundary is bigger than in the center (Fig. \ref{fig: y periodic}\textit{c}); in region 1, the order parameter in the center is less than 0.001 (after gap normalization). Regions 2 and 3 have the biggest gap in the center (Fig. \ref{fig: y periodic}\textit{c}); we decided to label them differently because of their wide separation in parameter space. If one defines the boundary as the atoms that have a missing neighbour (like the red atoms on the right side in Fig. \ref{fig: y periodic}\textit{c}), regions 2 and 3 slightly change their size and shape, without qualitative differences.
Another reason to distinguish regions 2 and 3 is the quantitative gap suppression on the boundaries (Fig. \ref{fig: y periodic}\textit{c}): in region 2 the gap drops only to half of the bulk $\Delta$, whereas in region 3 the suppression is an order of magnitude stronger. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{y_periodic_2.png} \caption{(\textit{a}) Phase diagram for a zigzag nanotube (the system is periodic in the $y$ direction and free in the $x$ direction). Solid lines correspond to constant critical temperature curves, with $T_c$ written close to each line. The big numbers enumerate the regions with different order parameter distributions, illustrated in part (\textit{c}). Dashed lines are the 'transition lines' between the regions ($\Delta$ at the corresponding locations are equal). (\textit{b}) Relative change in $T_c$ for a zigzag nanotube in comparison to the infinite graphene sheet. Solid lines are constant-level curves. Dashed lines are the same as in part (\textit{a}).} \label{fig: y periodic} \end{figure*} Analyzing the typical gap distributions for a zigzag nanotube (Fig. \ref{fig: y periodic}\textit{c}), one can note that the enhancement (in regions 1, 1') or suppression (in regions 2, 3) originates from the boundary atoms which have two neighbours. The exceptionality of these atoms is also underlined in the wave functions \cite{talkachov2022wave}, where there are two zero-energy states with non-zero wave function only at these sites. There are also approximately $N_y / 3 - 2$ edge states with almost zero energy; they are the main reason for the significant difference between the zigzag nanotube phase diagram and the previously considered systems. The relative change in the critical temperature in comparison to an infinite graphene sheet is shown in Fig. \ref{fig: y periodic}\textit{b}, where we again restricted the maximal value to 1. In this case, in the region $|\mu| \lesssim 0.4$ there is a great increase in $T_c$, which can reach a factor of hundreds and corresponds to the edge-localized nonzero gap states. In region 1', the typical increase in critical temperature is of the order of $1\%$. Note the regions with a decrease of $T_c$ (in comparison to the infinite sample), which almost fully overlap with regions 2 and 3. This is in contrast to the armchair nanotube, where the dashed line (which corresponds to equal $\Delta$ on the boundaries and in the center) correlates with the line $T_{c1} = T_{c2}$ only in a small region of $V$ (Fig. \ref{fig: x periodic}\textit{c}). Note also that a cross-section at constant $V$ shows non-monotonic behaviour of the relative change in $T_c$ (Fig. \ref{fig: y periodic}\textit{b}). \subsubsection{LDOS argument for nanotubes} Boundaries modify the edge LDOS, and this causes a change of $T_c$ in that region. In this subsection, we investigate the interplay between the LDOS and superconductivity. The thermally broadened LDOS at energy $E$ for the non-interacting model can be calculated as \cite{zhu2016bogoliubov} \begin{equation} \text{LDOS}_i (E) = - \sum_{s, \textbf{k}} |w_{s, \textbf{k}}(i)|^2 f' \left(E - \epsilon_s(\textbf{k}) \right), \end{equation} where the energies $\epsilon_s(\textbf{k})$ are defined in Eq. (\ref{eq:energies}). In BCS theory \cite{de1966superconductivity}, the bulk critical temperature of an infinite sample is proportional to $\exp (- (V \cdot \text{DOS})^{-1})$. Here we consider a local critical temperature and the substitution DOS $\rightarrow$ LDOS for the non-interacting system.
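This diagnostic is easy to prototype (our illustration: eigenstates of a small open chain stand in for the nanotube wave functions, and $-f'(x)=1/(4T\cosh^2(x/2T))$ supplies the thermal broadening):
\begin{verbatim}
import numpy as np

def ldos(E, eps, w, T):
    """LDOS_i(E) = -sum_n |w_n(i)|^2 f'(E - eps_n) for eigenpairs (eps, w);
    eps has shape (M,), w has shape (M, Nsites)."""
    kernel = 1.0 / (4 * T * np.cosh((E - eps) / (2 * T)) ** 2)   # -f'
    return kernel @ (w ** 2)

# toy demo: open chain at half filling (mu = 0), energies in units of t
N, T, V = 40, 0.1, 1.5
H0 = -np.eye(N, k=1) - np.eye(N, k=-1)
eps, U = np.linalg.eigh(H0)
rho = ldos(0.0, eps, U.T, T)                # LDOS at the Fermi level
tc_local = np.exp(-1.0 / (V * rho))         # local BCS-like T_c estimate
print("LDOS edge/bulk:", rho[0] / rho[N // 2])
print("T_c  edge/bulk:", tc_local[0] / tc_local[N // 2])
\end{verbatim}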
We note that boundary superconductivity is a complex phenomenon with many contributing factors, and a direct substitution of the LDOS is not necessarily sufficient to assess the situation, because the LDOS can oscillate at length scales much smaller than the superconducting coherence length, leading to nontrivial solutions \cite{samoilenka2020boundary,barkman2022elevated}. For an armchair nanotube, there is only one inequivalent direction parallel to the nanotube axis (Fig. \ref{fig: xperiodic LDOS}). The LDOS along this direction at half filling ($\mu = 0$) and $T_c = 0.1$ is shown in Fig. \ref{fig: xperiodic LDOS}, normalized by the maximal LDOS value in the sample. One can see significant deviations from the bulk DOS (which can be seen at large $i_y$ values in Fig. \ref{fig: xperiodic LDOS}) in the ten sites adjacent to the boundary. Taking the chemical potential into account simply shifts the picture along the $E$ axis by $\mu$. Note that the LDOS on the boundary sites varies when moving away from the boundary; let us therefore consider the LDOS averaged over $i_y \in [0; 14]$ (Fig. \ref{fig: xperiodic LDOS}). Figure \ref{fig: xperiodic LDOS - bulk} shows the difference between the averaged boundary LDOS and the bulk LDOS as a function of the chemical potential for different temperatures. One can see that the point where the difference is zero (the LDOSes are equal) moves to smaller $\mu$ values as $T$ increases. The line with equal critical temperatures for an armchair nanoribbon and the infinite sample is marked by '0' in Fig. \ref{fig: x periodic}\textit{c}; increasing the temperature (moving upwards along the '0' line) leads to a decrease of the chemical potential. The LDOS model (Fig. \ref{fig: xperiodic LDOS - bulk}) thus captures the qualitative behaviour; however, the values of $\mu$ differ by 8--15 $\%$ from those in Fig. \ref{fig: x periodic}\textit{c}. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{x_periodic_LDOS.png} \caption{LDOS for an armchair nanotube without interaction, for $\mu = 0$, $T_c = 0.1$. The orange plane corresponds to the Fermi level.} \label{fig: xperiodic LDOS} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{x_periodic_LDOS_minus_bulk_LDOS.pdf} \caption{Difference between the LDOS averaged over fifteen boundary sites of an armchair nanotube and the LDOS in the bulk of the system, as a function of the chemical potential.} \label{fig: xperiodic LDOS - bulk} \end{figure} Now we apply the method to a zigzag nanotube, for which there are two inequivalent directions parallel to the nanotube axis (Fig. \ref{fig: yperiodic LDOS}). In one of the directions, the atom at site 0 is not a true boundary atom, because it retains all three bonds; this atom has an LDOS similar to the bulk (Fig. \ref{fig: yperiodic LDOS}, normalized by the maximal LDOS value in the sample). In the other direction, the boundary atom LDOS is completely different from the bulk LDOS (Fig. \ref{fig: yperiodic LDOS}). The reason is the existence of edge states with close-to-zero energy, localized near the boundaries \cite{wakabayashi1999electronic,wakabayashi2012nanoscale,Saroka2017Optics,talkachov2022wave}. In the case of a zigzag nanotube, approximately five sites adjacent to the boundary have an LDOS different from the bulk LDOS, a region half the size of that in an armchair nanotube. Figure \ref{fig: yperiodic LDOS - bulk} shows the difference between the boundary LDOS (averaged over the fifteen sites adjacent to the boundary in each of the two directions) and the bulk LDOS as a function of $\mu$.
We recall that if this value is greater than zero, zigzag edge states dominate locally; otherwise bulk states dominate. In Fig. \ref{fig: yperiodic LDOS - bulk} one can see a qualitative similarity with Fig. \ref{fig: y periodic}\textit{c}: the region $\mu \in (0.3; 1.2)$ with negative LDOS-difference values for $T < 0.36$ corresponds to region 2 in Fig. \ref{fig: y periodic}\textit{a} (where the maximal $T_c = 0.19$). The second similarity is the quantitative agreement between the boundary between regions 1' and 3 in Fig. \ref{fig: y periodic}\textit{a} and the points for $\mu \in (2; 2.4)$ (Fig. \ref{fig: yperiodic LDOS - bulk}) where the LDOS difference equals zero; the relative difference is less than 2 $\%$. \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{yperiodicLDOS.png} \caption{LDOS for a zigzag nanotube without interaction, for $\mu = 0$, $T_c = 0.1$. The orange plane corresponds to the Fermi level.} \label{fig: yperiodic LDOS} \end{figure*} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{y_periodic_LDOS_minus_bulk_LDOS.pdf} \caption{Difference between the LDOS averaged over fifteen boundary sites (in the two directions of Fig. \ref{fig: yperiodic LDOS}) of a zigzag nanotube and the LDOS in the bulk of the system, as a function of the chemical potential.} \label{fig: yperiodic LDOS - bulk} \end{figure} \subsection{Graphene rectangular finite samples} There are four possible finite rectangular graphene geometries (Fig. \ref{fig: finite structures}). One of them (even $N_x$ and odd $N_y$) has a 'closed structure', which means that each atom has at least two neighbours. The three other geometries contain two atoms which have only one neighbour. We carried out an investigation of the four structures similar to that of the previous section. The result is that the latter three geometries have qualitatively and quantitatively similar phase diagrams, which differ from the results for the 'closed structure'. Therefore, we first consider the case of the even $N_x$ and odd $N_y$ geometry, and then switch to the three other cases, which will be discussed using the example of the geometry with both $N_x$ and $N_y$ even. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{finite.pdf} \caption{Possible rectangular geometries of the finite-size honeycomb lattice.} \label{fig: finite structures} \end{figure} \subsubsection{The 'closed structure' case} On the phase diagram (Fig. \ref{fig: finite 60x61}\textit{a}), one can distinguish five regions with different order parameter distributions. There are four locations where we monitor the gap: in the center, in the corners, and on the vertical and horizontal boundaries. The gap is the same in all corners due to the symmetry of the system. We use the average gap value for the boundaries, because the order parameter oscillates (without sign change) on the vertical boundaries and is also not uniform on the horizontal ones (it changes close to the corners). We define regions 1 and 4 as the regions where the gap on the vertical boundary is bigger than the gap at the three other locations. In the same way, we define regions 2 (the biggest gap on the horizontal boundaries), 3 (in the corners), and 5 (in the center). In the first region, the order parameter is enhanced on the zigzag edges, and the normalized $\Delta$ is smaller than 0.001 in the bulk of the sample (Fig. \ref{fig: finite 60x61}\textit{c}). In the second region, the horizontal (armchair) boundaries give rise to the gap enhancement (Fig. \ref{fig: finite 60x61}\textit{c}).
Here the gap in the bulk is still small, and the gap on the boundaries is only slightly suppressed near the corners. The largest region in the $V$, $\mu$ parameter space is the third one, where the gap is localized in the corners (Fig. \ref{fig: finite 60x61}\textit{c}). This is a new gap distribution state which was not observed in nanotubes (Sec. \ref{section: Graphene nanotubes}), since nanotubes have no corners. Region 4 shows an increase of the gap on the zigzag edges; however, the gap in the bulk is also significant (Fig. \ref{fig: finite 60x61}\textit{c}). In region 5, the gap is suppressed near all boundaries (Fig. \ref{fig: finite 60x61}\textit{c}). \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{open_stable_geometry.png} \caption{(\textit{a}) Finite rectangular graphene nanoflake ($N_x$ even, $N_y$ odd) phase diagram. Solid lines correspond to constant critical temperature curves, with $T_c$ written close to the line. Big numbers enumerate the regions with different order parameter distributions illustrated in part (\textit{c}). Dashed lines are the 'transition lines' between the regions ($\Delta$ values at the corresponding locations are equal). (\textit{b}) Relative change in $T_c$ for the rectangular nanoflake in comparison to the infinite graphene sheet. Solid lines are constant-level curves. Dashed lines are the same as in part (\textit{a}).} \label{fig: finite 60x61} \end{figure*} The relative change in the critical temperature in comparison to the infinite graphene sheet is shown in Fig. \ref{fig: finite 60x61}\textit{b} (we still restrict the maximal value to 1). Note the monotonic decrease of the relative change with increasing band filling ($\mu$). The biggest increase is still located at small $\mu$ and $V < 2.5$, where the gap is localized on the zigzag edges. In region 2, where the gap is localized on the armchair edges, the increase in $T_c$ is of order $10 \%$. In region 3, it varies from no gain to a $30 \%$ increase. In region 4, where the bulk comes into play, the increase is a few percent. Almost the whole of region 5 shows a reduction of $T_c$ due to the suppression on the boundaries. \subsubsection{The 'non-closed structure' case} In this subsection we deal with the three structures illustrated in Fig. \ref{fig: finite structures} (all except the top left one). They have the following features in common: two corners are of the usual type (as in the previous subsection), while the remaining two contain an atom with only one bond. The three structures have different arrangements of the corners; nevertheless, their phase diagrams almost coincide. That is why we discuss only one geometry: the case with both $N_x$ and $N_y$ even. We again define five regions on the phase diagram (Fig. \ref{fig: finite 60x60}\textit{a}). The definitions of regions 2, 4, and 5 remain the same: the largest gap is on the armchair (horizontal) boundaries, on the zigzag (vertical) boundaries, and in the center, respectively. However, now there are two different types of corner states: usual corners and corners with a single-bond atom. In region 1 the gap in the latter type of corner is the largest in the system. In region 3 of the phase diagram in Fig. \ref{fig: finite 60x60}\textit{a} the largest gap in the system is located in the usual corners. Region 2 in parameter space is smaller than in the 'closed structure' case (Fig. \ref{fig: finite 60x61}\textit{a}) due to the expansion of region 1. Regions 4 and 5 remain approximately the same.
Note that the maximal critical temperature increases from 0.5 (for nanoribbons and the 'closed structure' finite sample) to 0.6 in the same considered range of $\mu$ and $V$. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{open_3_other.png} \caption{(\textit{a}) Finite rectangular graphene nanoflake ($N_x$ and $N_y$ even) phase diagram. Solid lines correspond to constant critical temperature curves, with $T_c$ written close to the line. Big numbers enumerate the regions with different order parameter distributions illustrated in part (\textit{c}). Dashed lines are the 'transition lines' between the regions ($\Delta$ values at the corresponding locations are equal). (\textit{b}) Relative change in $T_c$ for the rectangular nanoflake in comparison to the infinite graphene sheet. Solid lines are constant-level curves. Dashed lines are the same as in part (\textit{a}).} \label{fig: finite 60x60} \end{figure*} Gap distributions 2--5 for this nanoflake (Fig. \ref{fig: finite 60x60}\textit{c}) are similar to those described in the previous subsection (Fig. \ref{fig: finite 60x61}\textit{c}). The gap distribution in region 1 is similar to the corner-state distribution in region 3 (cf. Fig. \ref{fig: finite 60x61}\textit{c}); however, it is localized in an even smaller part of the sample. The main reason is that an atom with only one bond can have a very high $T_c$, and due to the proximity effect it opens a gap on a few neighbouring sites. This can be viewed as analogous to a single-impurity effect. The relative change in $T_c$ for this structure is shown in Fig. \ref{fig: finite 60x60}\textit{b}. The region $|\mu| > 1$ is similar to that in Fig. \ref{fig: finite 60x61}\textit{b}, so we discuss only $|\mu| \leq 1$. The range of $\mu$, $V$ parameters with a relative increase higher than 1 is the largest among all considered structures. For $V$ in the range [1.5; 2], increasing the chemical potential leads to a rapid decrease of the relative change in $T_c$ in region 1. One can explain the transitions between regions with different gap distributions in the finite sample from an energetic point of view. From sections \ref{section: infinite} and \ref{section: Graphene nanotubes} we know the critical temperatures of the bulk state and of the boundary (armchair and zigzag) states, respectively. Consequently, one can calculate the boundaries between the three regions (bulk and two boundary states) on the $V(\mu)$ phase diagram. The method is described in Appendix \ref{app:comparison}, with results quantitatively similar to the phase diagrams in Figs. \ref{fig: finite 60x61}\textit{a} and \ref{fig: finite 60x60}\textit{a}. \section{Conclusions} In conclusion, the problem of superconductivity near the boundaries of a BCS superconductor was recently revisited, showing that scattering from the surface is very important and that one cannot apply simple approximations averaging over the Friedel oscillations of the density of states \cite{samoilenka2020boundary,samoilenka2020pair,benfenati2021boundary,barkman2022elevated,samoilenka2021microscopic}. These references studied the problem in the continuum and on a square lattice. Here we studied the interplay of this physics with the physics of nontrivial single-electron boundary states. To that end, we considered one of the simplest examples: the problem of superconductivity on a honeycomb lattice with s-wave pairing interaction. We found that boundary superconductivity in that case allows a great diversity of patterns.
The gap patterns include surface superconductivity, including the case of a normal bulk, and corner superconductivity, but also suppression of the superconducting gap at various surfaces. For armchair and zigzag nanotubes, there are two possible gap states: an enhanced or a suppressed gap at the boundary. The latter state is usually observed for an almost filled (or almost empty) band. However, for a zigzag nanotube such a state also exists for a filling close to the M point of the Brillouin zone ($\mu = 1$) and pairing potential $V < 2$. In the case of an armchair nanotube the gap does not depend on the azimuthal coordinate; a zigzag nanotube, by contrast, has a nonuniform gap distribution in the azimuthal direction due to the alternation of atoms with two and three bonds on the edges. A zigzag nanotube has a superconductivity phase diagram (in the $V$ and $\mu$ axes) drastically different from that of an infinite sample: in the region of small doping ($|\mu| < 0.4$) the pairing potential required for a given critical temperature is much smaller than $V_{\infty}$. At fixed $V$, this means that one can obtain hundreds of times higher $T_c$ at zigzag nanotube boundaries (because of the logarithmic dependence of $V$ on $T_c$ for the infinite sample). A finite rectangular honeycomb sample has at least four different gap states. The first two of them are the boundary states with gap enhancement on the boundaries that were found in nanotubes: the zigzag edge state and the armchair edge state. The third one is a corner state with gap enhancement. For one of the four rectangular geometries this state is unique, because all corners are identical. However, the three other rectangular geometries have two types of corners: corners where the boundary atoms have two bonds, and corners where one atom has only one bond. In the latter state the nonzero gap is localized in a smaller part of the sample than in the corner state of the first type. The corner state with a single-bond atom is more energetically favourable than the zigzag boundary state for small values of doping ($|\mu|< 0.7$), since it requires a lower $V$ for a given critical temperature. Consequently, this state has an even higher $T_c$. The fourth gap state is the state in which boundaries and corners lead to a suppression of $T_c$; it emerges for an almost filled band. If one follows the superconducting transitions of a half-filled rectangular honeycomb lattice sheet during cooling, one sees the following picture. First, local superconductivity emerges in the corners with a single-bond atom (if such corners exist in the sample). Then a nonzero gap appears on the zigzag boundaries, and later on the armchair boundaries. The emergence of the bulk state depends on the pairing potential: if $V<2t$, the critical temperature for graphene should be less than 0.01 K, which is difficult to achieve. Therefore, one can observe the corner and boundary gap states without any bulk superconductivity. We note that the calculations are based on the mean-field approximation, and in practice these critical temperatures will be suppressed by fluctuations; however, there are many reported observations of superconductivity even in zero-dimensional systems. The broader implication of our findings is that a system with a normal bulk and nontrivial single-electron surface states can exhibit a strong dependence of the critical temperature and of the gap value on the type of surface. Some of these features should also persist in multi-layer or twisted bilayer graphene, which may also, under certain conditions, exhibit superconductivity only on the boundary layers.
\begin{acknowledgments} This work was supported by the Knut and Alice Wallenberg Foundation via the Wallenberg Center for Quantum Technology (WACQT) and Swedish Research Council Grants 2016-06122, 2018-03659. We thank Mats Barkman for useful discussions. \end{acknowledgments} \clearpage
\section{Introduction}\label{intro} The Oxford English Dictionary records three different meanings of the noun {\em wake}: \begin{itemize} \item a trail of disturbed water or air left by the passage of a ship or aircraft; \item a watch or vigil held beside the body of someone who has died; \item (especially in Ireland) a party held after a funeral. \end{itemize} Since the Greek word $\sigma\upsilon\mu\pi o\sigma\iota o\nu$ describes a gathering of friends to eat, drink, and converse, the {\em Symposium} in memory of our deceased friend and colleague Gerhard Soff certainly fits the third definition. My lecture will be mainly concerned with the first definition, in a slightly generalized sense. To define the appropriate context, we first need to review some of the salient results from the experiments conducted at the Relativistic Heavy Ion Collider (RHIC). The quenching of QCD jets in relativistic heavy ion collisions due to the energy loss suffered by hard partons as they traverse dense matter was proposed more than two decades ago as an important indicator for the creation of a quark-gluon plasma \cite{Bjorken:1982tu,Gyulassy:1990ye,Wang:1991xy}. Over the past five years, this phenomenon has been extensively studied experimentally at RHIC. One looks for hadrons with a high transverse momentum $p_T$, which are produced when an energetic, hard scattered parton fragments into hadrons far outside the remnants of the nuclear collision. Since both the scattering probability and the fragmentation probability are known or calculable in QCD, the sole unknown is the amount of energy lost by the parton before it fragments into hadrons. This makes it possible to deduce the energy loss from the measured hadron yield. The RHIC data clearly demonstrate a strong suppression of the emission of high-$p_T$ hadrons in Au + Au collisions. The suppression is found to grow in severity with increasing centrality. The suppression factor $R_{AA}$, defined as the ratio of the hadron yield in Au + Au collisions compared to the yield in $p+p$ collisions scaled by the appropriate number of binary nucleon-nucleon interactions in the nuclear collision, reaches about 1/5 in the most central events \cite{Adcox:2001jp}. Most of the observed hadrons originate near the surface of the interaction region oriented toward the detector. Hadrons from the companion jet emitted in the opposite direction, which has a much longer path through the medium, are even more strongly suppressed \cite{Adler:2002tq}. The main emphasis in theoretical studies of jet quenching has been on the description of the energy loss which the leading parton suffers due to the emission of a secondary partonic shower when traversing the medium. Reviews of the theory of medium-induced energy loss and its associated phenomenology are found in \cite{Baier:2000mf,Kovner:2003zj,Accardi:2003gp,Jacobs:2004qv}. The main result is that the energy loss of a hard parton in dense QCD matter is dominated by radiative processes involving gluon emission after elastic collisions of the parton with color charges (mostly gluons) contained in the medium. Coherence effects lead to a quadratic dependence of the total energy loss on the traversed path length $L$. The stopping power of the medium is encoded in the quantity \bel{eq01} \hat q = \rho \int q^2 \frac{d\sigma}{dq^2} dq^2 , \end{equation} where $\rho$ denotes the gluon density of the medium and $d\sigma/dq^2$ is the differential cross section for elastic scattering.
An interesting question is: What happens to the radiated energy? Or more generally: How does the medium respond to the penetration by the hard parton? The RHIC experiments are just beginning to address this intriguing question. First indications of a medium response have been seen in three sets of data: \begin{enumerate} \item Although energetic hadrons in the direction opposite to a high-$p_T$ trigger hadron are almost completely suppressed in central collisions \cite{Adler:2002tq}, one finds an increased yield of soft, low-$p_T$ hadrons \cite{Adams:2004gp,Wang:2004kf}. \item The angular distribution of the soft hadrons emitted in the direction opposite to an energetic hadron does not seem to peak at $180^{\circ}$ but at a smaller angle \cite{Rak:2005st,Adler:2005ee}. \item Correlated two-hadron emission in the same direction in the momentum range 0.15 GeV/c $< p_T <$ 2 GeV/c is enhanced \cite{Adams:2004pa}. \end{enumerate} Since the quark-gluon plasma is a plasma, after all, it is natural to begin a theoretical investigation of the medium response by applying the tools which have been successfully used to describe the response of a metallic electron plasma to the penetration by a fast ion. Such a charged projectile induces a wake of charge and current density in the target, accompanied by induced electric and magnetic fields. The wake, which has the shape of a Mach cone, reflects significant aspects of the response of the medium. After a brief review of the evidence for this phenomenon in condensed matter physics, we apply the same methods of linear response theory to the system of a relativistic color charge traveling through a QCD plasma. We calculate the plasma response to an external point source traveling at a velocity close to the speed of light. In this framework, quantum effects are included implicitly via the dielectric functions, $\epsilon_L$ and $\epsilon_T$. For simplicity, we shall assume that the medium is homogeneous and isotropic, and we disregard finite size effects. We then consider two models for the color response of the QCD plasma: ($i$) the response predicted by QCD perturbation theory, and ($ii$) a response of the kind expected for a strongly coupled, ``liquid'' plasma. We present the results of our calculations of the wake structure behind a fast color charge for both scenarios and discuss the general conditions which must be met for the wake to have a Mach cone-like structure \cite{Ruppert:2005uz,Ruppert:2005sj}. \section{Plasma linear response theory} The linear response of the plasma to an external electromagnetic field has been extensively studied in plasma physics (see e.~g.~\cite{Ichimaru}). In this formalism, a dielectric medium is characterized by the components of the dielectric tensor $\epsilon_{ij}(\omega,k)$. For an isotropic, homogeneous medium the dielectric tensor can be decomposed into its longitudinal and transverse components, characterized by the dielectric functions $\epsilon_L(\omega,k)$ and $\epsilon_T(\omega,k)$: \bel{eq02} \epsilon_{ij}=\epsilon_L {\cal P}_{L,ij} +\epsilon_T {\cal P}_{T,ij}. \end{equation} Here ${\cal P}_{L,ij}=k_i k_j/ k^2$ and ${\cal P}_T=1-{\cal P}_L$ are the longitudinal and transverse orthonormal projectors with respect to the momentum vector $\vec{k}$.
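For orientation (an elementary check, not part of the original derivation), one may choose coordinates such that $\vec{k}=k\hat{z}$. Then ${\cal P}_L=\mathrm{diag}(0,0,1)$ and ${\cal P}_T=\mathrm{diag}(1,1,0)$, so that (\ref{eq02}) takes the explicit matrix form \[ \epsilon = \left(\begin{array}{ccc} \epsilon_T & 0 & 0 \\ 0 & \epsilon_T & 0 \\ 0 & 0 & \epsilon_L \end{array}\right), \] i.e.\ the two field components transverse to $\vec{k}$ are screened by $\epsilon_T$ and the component along $\vec{k}$ by $\epsilon_L$.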
One can relate the dielectric functions $\epsilon_L$ and $\epsilon_T$ to the self-energies $\Pi_L$ and $\Pi_T$ of the in-medium photon via \cite{LeBellac}: \bel{eq03} \epsilon_L(\omega,k)=1-\frac{\Pi_L(\omega,k)}{\omega^2-k^2}, \qquad\qquad \epsilon_T(\omega,k)=1-\frac{\Pi_T(\omega,k)}{\omega^2} . \end{equation} Using Maxwell's equations and the continuity equation in momentum space, the total electric field $\vec{E}_{\rm tot}$ in the plasma is related to the external current $\vec{j}_{\rm ext}$ via: \beal{EtotTOJ} \left[\epsilon_L {\cal P}_{L} + \left(\epsilon_T -\frac{k^2}{\omega^2}\right) {\cal P}_{T}\right] \vec{E}_{\rm tot}(\omega,k) =\frac{4 \pi}{i \omega} \vec{j}_{\rm ext} (\omega,k) . \end{eqnarray} Equation (\ref{EtotTOJ}) has propagating solutions when the determinant constructed from the elements of the tensor vanishes: \bel{eq05} {\rm det}\left|\epsilon_L {\cal P}_{L} + \left(\epsilon_T -\frac{k^2}{\omega^2}\right) {\cal P}_{T}\right|=0 . \end{equation} This equation governs the dispersion relation for the waves in the medium. It can be diagonalized into purely longitudinal and transverse parts, yielding dispersion relations for the longitudinal and transverse dielectric functions \cite{Ichimaru}: \bel{Dispersion} \epsilon_L(\omega,k) = 0, \qquad\qquad \epsilon_T(\omega,k) = (k/\omega)^2. \end{equation} These equations determine the longitudinal and transverse plasma modes. The longitudinal equation is the dispersion relation for density fluctuations in the plasma, namely space-charge fields which can propagate through the plasma far away from an external perturbation. The charge density induced in the wake by the external charge distribution is: \bel{charge} \rho_{\rm ind}=\left(\frac{1}{\epsilon_L}-1\right)\rho_{\rm ext}. \end{equation} In the transverse gauge, the induced charge density is related to the induced Coulomb potential via the Poisson equation: $k^2\phi_{\rm ind}= 4\pi\rho_{\rm ind}$. Since one can relate the total electric field to the induced current in linear response theory by \bel{eq09} \vec{j}_{\rm ind}=\frac{i\omega}{4\pi} (1-\epsilon)\vec{E}_{\rm tot} , \end{equation} a direct relation between the external and the induced current can be derived using (\ref{EtotTOJ}): \bel{current} \vec{j}_{\rm ind}=\left[\left(\frac{1}{\epsilon_L}-1\right){\cal P}_L + \frac{\omega^2(1-\epsilon_T)}{\omega^2\epsilon_T-k^2} {\cal P}_T\right]\vec{j}_{\rm ext}. \end{equation} The induced charge and the induced current obey the continuity equation: \begin{equation} \label{continuity} i\vec{k}\cdot\vec{j}_{\rm ind}-i\omega \rho_{\rm ind}=0. \end{equation} Finally, we need to specify the form of the external current. For a fully stripped ion (or a single energetic parton) it is appropriate to assume the current and charge density of a point-like charge moving along a straight line trajectory with constant velocity $\vec v$, whose Fourier transform is given by \cite{Neufeld}: \beal{eq11} \vec{j}_{\rm ext} &=& 2\pi q \vec{v} \delta(\omega-\vec{v} \cdot \vec{k}), \nonumber \\ \rho_{\rm ext} &=& 2\pi q \delta(\omega-\vec{v} \cdot \vec{k}) . \end{eqnarray} All equations given above, from (\ref{eq02}) to (\ref{eq11}), immediately generalize to QCD by the simple addition of a color index $a=1,\ldots,8$ to the charge, the current, and the field strength, which are all in the adjoint representation of color SU(3): \begin{equation} q\to q^a, \qquad (\rho,\vec j) \to (\rho^a,\vec j^a), \qquad \vec E \to \vec E^a .
\end{equation} Since we are limiting the treatment of the medium response to effects that are linear in the perturbation, all nonlinear terms arising from the nonabelian nature of the color field are discarded. The strength of the color charge of the projectile is defined by $q^a q^a = C_2 \alpha_s$ with the strong coupling constant $\alpha_s=g^2/4\pi$ and the quadratic Casimir invariant $C_2$ ($C_F=4/3$ for a quark or antiquark and $C_A=3$ for a gluon). In this linearized treatment one disregards changes of the color charge while the particle is propagating through the medium. The nonabelian character of the QCD plasma only enters indirectly via the chromo-dielectric functions $\epsilon_{L/T}(\omega,k)$, which are scalars in color space \cite{Weldon:1982aq,Klimov:1982bv,Thoma:1990fm}. Due to the self-interacting nature of the gluon in SU(3), gluons contribute to the polarization of the medium; in fact, they make the largest contribution. The non-radiative part of the energy loss of the incident (color) charge is given by the back reaction of the induced (chromo-)electric field onto the incident particle. The energy loss per unit length is given by \cite{Ichimaru}: \beal{energy1} \frac{dE}{dx} =q \frac{\vec{v}}{v}\cdot{\rm Re}\,\vec{E}_{\rm ind}(\vec{x}=\vec{v}t,t), \end{eqnarray} where the induced electric field $\vec{E}_{\rm ind}$ is the total electric field minus the vacuum contribution. Using the inverse of (\ref{EtotTOJ}), the induced field is given by: \bel{loss} \vec{E}_{\rm ind} = \left[\left(\frac{1}{\epsilon_L}-1\right){\cal P}_L + \left(\frac{\omega^2}{\omega^2\epsilon_T-k^2} - \frac{\omega^2}{\omega^2-k^2} \right){\cal P}_T\right] \frac{4\pi}{i\omega}\vec{j}_{\rm ext} . \end{equation} From (\ref{energy1}) and (\ref{loss}) the non-radiative energy loss per unit length is given by \cite{Thoma:1990fm}: \bel{energy2} \frac{dE}{dx} = -\frac{C \alpha_s}{2 \pi^2 v} \int d^3k \frac{\omega_k}{k^2}\,\left[{\rm Im}\,\epsilon_L^{-1} + (v^2 k^2 - \omega_k^2)\,{\rm Im}\,(\omega_k^2\epsilon_T- k^2)^{-1} \right] , \end{equation} where $\omega_k=\vec{v}\cdot\vec{k}$. \section{Quick flashback: Electron wakes in metal foils} The formation of electromagnetic wakes induced by a fast ion in the electron plasma of a thin metal foil was investigated starting around 1980 by Groeneveld and collaborators at Frankfurt \cite{Groeneveld1,Groeneveld2}. Their study was motivated, in part, by calculations \cite{Schafer:1978jq,Schafer:1980bg} done by a graduate student, Wolfgang Sch\"afer,\footnote{Before doing this work for his doctoral thesis, Wolfgang had worked under Gerhard Soff's supervision calculating the influence of the vacuum polarization potential on the motion of two colliding heavy nuclei.} who had solved the equations of the previous section for a dielectric function of the Bloch type \cite{Bloch} \bel{Bloch} \epsilon_L = 1+\frac{\omega_p^2}{u^2 k^2-\omega^2} \quad (k\le k_c) , \end{equation} where $\omega_p$ is the plasma frequency and $u$ denotes the sound velocity in the plasma. The calculations showed that the wake in the electron plasma has the form of a Mach cone, similar to the Mach shock phenomena predicted for relativistic nucleus-nucleus collisions \cite{Glassgold,Scheid:1973}.
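To make explicit why a dielectric function of this type yields a Mach cone (a one-line check added for the reader's convenience), set $\epsilon_L=0$ in (\ref{Bloch}): \[ u^2k^2-\omega^2+\omega_p^2=0 \qquad\Longrightarrow\qquad \omega(k)=\sqrt{u^2k^2+\omega_p^2}\,. \] The phase velocity $\omega/k=\sqrt{u^2+\omega_p^2/k^2}$ of this plasmon branch drops below any projectile velocity $v>u$ once $k>\omega_p/\sqrt{v^2-u^2}$, so the resonance condition $\omega=\vec{v}\cdot\vec{k}$ can be satisfied and the coherently excited density waves interfere to form the conical wake.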
In our publications, we not only showed the detailed shape of the spatial distribution of induced plasma charge and current (see Fig.~\ref{figureMach}, left), but we also predicted the angular distribution of the electrons which would be emitted when the current wave hits the surface of the metal foil (see Fig.~\ref{figureMach}, right). The peak of this distribution roughly coincides with the ``Mach angle'', given by \bel{Mach} \varphi_{\rm M}={\rm arccos}(u/v) . \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Wakefig1a.eps} \hspace{0.05\linewidth} \includegraphics[width=0.45\textwidth]{Wakefig1b.eps} \caption{Left: Spatial distribution of the induced charge density behind an ion traveling through a metal. The plot represents a radial cut through the Mach cone. Right: Predicted angular distribution of the electrons contained in the wake current. \label{figureMach}} \end{center} \end{figure} The first published data confirmed the presence of directed electron emission, and the dependence of the peak angle on the beam energy qualitatively agreed with our predictions \cite{Groeneveld1}. However, it turned out that the results were not well reproducible unless the surface of the foil was extremely clean. Groeneveld's team therefore later repeated the measurement with foils whose surface had been sputter-cleaned. The new data (see Fig.~\ref{figGroene}) showed much more pronounced and reproducible peaks \cite{Groeneveld2}. Their positions as a function of beam energy and Fermi energy of the target nicely followed the Mach relation (\ref{Mach}), in agreement with the predictions \cite{Schafer:1980bg}. The experimentalists continued to study many aspects of this phenomenon in great detail, including the energy spectra and angular distributions of the ejected electrons, the refraction of the Mach wave at the planar surface of the foil, and the response of high $T_c$ superconductors \cite{Groeneveld3}. In recent years, the collective plasma waves excited by fast ions in soft biological tissue have been considered as a mechanism that contributes to the damage to living cells by fast C$^{6+}$ ions \cite{Rothard}, which is an important factor in cancer therapy with heavy ion beams \cite{Cancer}. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Wakefig2a.eps} \hspace{0.05\linewidth} \includegraphics[width=0.5\textwidth]{Wakefig2b.eps} \caption{Left: Angular distribution of electrons emitted after bombardment of thin metal foils by fully stripped ions. Right: Dependence of the peak position in the angular distribution for various target materials (C, Al, Cu) and different beam energies, in comparison with the predicted Mach angle for a carbon target (from \protect\cite{Groeneveld2}). \label{figGroene}} \end{center} \end{figure} \section{Color wake in the high temperature approximation} We now return to the problem of interest to us: the medium response to an energetic parton. We shall discuss two qualitatively different scenarios. We first assume that the plasma is in the weakly coupled high temperature regime, where the gluon self-energy can be described by the leading order of the high temperature expansion, $T\gg \omega,k$, commonly called the hard-thermal loop (HTL) approximation \cite{Pisarski:1989cs,Braaten:1989mz}. This regime can be expected to be realized far above the deconfinement temperature $T_c$.
The dielectric functions read \cite{Klimov:1982bv,Weldon:1982aq}: \beal{eps} \epsilon_L &=& 1+\frac{2m_g^2}{k^2} \left[ 1-\frac{1}{2}x \left( {\rm ln}\left|\frac{x+1}{x-1}\right|-i\pi \Theta \left(1-x^2\right) \right) \right], \\ \epsilon_T &=& 1-\frac{m_g^2}{\omega^2} \left[ x^2+ \frac{x(1-x^2)}{2} \left( {\rm ln}\left|\frac{x+1}{x-1}\right| -i\pi \Theta \left( 1-x^2 \right) \right) \right], \end{eqnarray} where $x=\omega/k$. We first discuss the induced charge and current densities $\rho_{\rm ind}$ and $\vec{j}_{\rm ind}$. The Fourier transform of (\ref{charge}) in cylindrical coordinates is given by: \beal{charge2} \rho^a_{\rm ind}(\rho, z, t) &=& \frac{m_g^3 q^a}{(2 \pi)^2 v} \int_0^\infty d\kappa' \kappa' J_0(\kappa\rho) \times \\ && \int^{\infty}_{-\infty} d\omega' \, {\rm exp} \left[i\omega\left(\frac{z}{v}-t\right)\right] \left(\frac{1}{\epsilon_{L}}-1 \right), \nonumber \end{eqnarray} where $k=\sqrt{\kappa^2+\omega^2/v^2}$ and $\omega=m_g\omega'$, $\kappa=m_g\kappa'$. This shows that the induced charge density $\rho^a_{\rm ind}$ is proportional to $m_g^3$. The cylindrical symmetry around the jet axis restricts the form of the current density vector $\vec{j}^a_{\rm ind}$. It only has non-vanishing components parallel to the beam axis: \beal{current2a} j^a_{\rm v, ind}(\rho, z,t) &=& \frac{m_g^3 q^a}{(2 \pi)^2 v^2} \int_0^\infty d\kappa' \kappa' J_0(\kappa\rho) \\ && \int^{\infty}_{-\infty} d\omega' {\rm exp} \left[i\omega\left(\frac{z}{v}-t\right)\right] \left[\left(\frac{1}{\epsilon_{L}}-1\right) \frac{\omega^2}{k^2} + \frac{1-\epsilon_{T}}{\epsilon_{T}-\frac{k^2}{\omega^2}} \left(v^2-\frac{\omega^2}{k^2}\right) \right] , \nonumber \end{eqnarray} and radially perpendicular to the beam axis: \beal{current2b} j^a_{\rho,\rm ind}(\rho, z ,t) &=& \frac{i m_g^3 q^a}{(2 \pi)^2 v} \int_0^\infty d\kappa' \kappa' J_1(\kappa\rho) \\ && \int^{\infty}_{-\infty} d\omega' {\rm exp} \left[i\omega\left(\frac{z}{v}-t\right)\right] \frac{\omega \kappa}{k^2}\left[\left(\frac{1}{\epsilon_L}-1\right) -\left(\frac{1-\epsilon_T}{\epsilon_T-\frac{k^2}{\omega^2}}\right)\right] . \nonumber \end{eqnarray} Again, the components of the current density are proportional to $m_g^3$. For the dielectric functions (\ref{eps}), longitudinal and transverse plasma modes can only appear in the time-like sector of the $\omega,k$ plane \cite{LeBellac,Weldon:1982aq}. Therefore, collective excitations do not contribute to the charge and current density profile of the wake. Emission analogous to Cherenkov radiation is absent and Mach cones do not form; the charge simply carries a screening color cloud along with it. Fig.~\ref{figure1}, which shows the charge density of a colored parton traveling with $v=0.99 c$, illustrates this physically intuitive result. \begin{figure} \centerline{\includegraphics[width=1\linewidth]{Wakefig3.eps}} \caption{Spatial distribution of the induced charge density for a color charge traveling with velocity $v/c=0.99$ in a high temperature QCD plasma where the HTL approximation applies. The left plot shows equi-charge density lines in the rest frame of the charge. \label{figure1}} \end{figure} In spite of the absence of a Mach cone, the particle loses energy due to elastic collisions in the medium, which can be described by formula (\ref{energy2}). This mechanism of energy dissipation has been studied in \cite{Thoma:1990fm}.
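The statement that the HTL plasmon branch never becomes space-like can be checked numerically: for $\omega<k$ the imaginary (Landau damping) part of $\epsilon_L$ in (\ref{eps}) prevents any real solution of $\epsilon_L=0$, so it suffices to search for roots at $\omega>k$, where $\epsilon_L$ is real. A minimal sketch (our illustration in units with $m_g=1$, not the code behind the figures):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

m_g = 1.0  # gluon thermal mass; sets the scale (illustration only)

def eps_L(w, k):
    # HTL longitudinal dielectric function for w > k, where the
    # Landau-damping term proportional to Theta(1 - x^2) vanishes
    x = w / k
    return 1.0 + (2.0 * m_g**2 / k**2) * (
        1.0 - 0.5 * x * np.log((x + 1.0) / (x - 1.0)))

for k in [0.1, 0.5, 1.0, 2.0]:
    # eps_L -> -infinity as w -> k+ and eps_L -> 1 as w -> infinity,
    # so a root is bracketed in (k, k + 10)
    w = brentq(lambda w: eps_L(w, k), k * (1.0 + 1e-9), k + 10.0)
    print(f"k = {k:3.1f}   omega_L = {w:.4f}   omega_L/k = {w/k:.3f}")
\end{verbatim}

One finds $\omega_L(k)>k$ for all $k$, with $\omega_L(k\to 0)\to\sqrt{2/3}\,m_g\approx 0.82\,m_g$ in these conventions, confirming that the longitudinal mode stays time-like.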
The integrand in (\ref{energy2}) contributes to the integral only in the space-like region, where $|x|<1$, and hence does not receive contributions from frequencies where collective plasma modes exist. This is consistent with the fact that such modes are not excited by the moving color charge. \section{Charge wake induced in a strongly coupled QGP} In the second scenario we investigate what happens if the QCD plasma is a strongly coupled quark-gluon plasma (sQGP) that can be described as a quantum liquid. Our consideration of the second scenario is motivated by the RHIC experimental results on collective flow, which have led to the conclusion that the QCD plasma behaves like a nearly ideal fluid with very low viscosity (see e.~g.\ \cite{Teaney:2003pb}). This implies that long wavelength collective modes are almost undamped, while the short-distance dynamics are strongly dissipative due to large transport cross sections. In this respect, the sQGP resembles the electron plasma in metallic targets described in the previous section. The idea that a fast parton could excite Mach waves in such a QCD plasma, and that the sound velocity of the expanding plasma could be determined from the emission pattern of the secondary particles traveling at an angle with respect to the jet axis, was first suggested by St\"ocker \cite{Stoecker:2004qu}. The recent work by Casalderrey-Solana and Shuryak \cite{Casalderrey-Solana:2004qm} explores the formation of a conical flow by a hard parton in a quark-gluon plasma within a hydrodynamical framework, but does not specify how the energy of the quenched jet is deposited into the medium. There are presently no theoretical methods available for first-principles calculations of the color response functions in a strongly coupled QGP. We therefore confine our investigation to a simple model, which encodes the essential differences between a quantum liquid scenario and the weak coupling scenario of the last section. The most prominent difference is the possibility that a plasmon mode may extend into the space-like region of the $\omega-k$ plane above some threshold value $k_s$. As we already emphasized, the sQGP paradigm suggests very low dissipation at small $k$, but large dissipation at high $k$. Our assumption is that a critical momentum $k_c$ separates the regimes of collective and single particle excitation modes in the quantum liquid, where the dominant colored modes below $k_c$ are plasmon excitations with negligible dissipative single-particle coupling. Since we are here predominantly interested in collective effects in the plasma, we restrict our study to the region $k<k_c$ and simply cut off all Fourier integrals at $k_c$. \begin{figure} \centerline{\includegraphics[width=0.4\linewidth]{Wakefig4.eps}} \caption{Dispersion relation for the plasmon mode for a weakly coupled plasma described by the HTL formalism (solid red line) and for a strongly coupled plasma described by the modified Bloch formula (\ref{NonBloch}). The latter extends into the space-like region ($k>\omega$) of the $\omega-k$ plane. The dispersion curve of transverse HTL-plasma modes is shown in blue and the light cone ($\omega=k$) is represented in black. The black dot indicates the point $k_s$ where the dispersion curve intersects the light cone.
\label{figureDisp}} \end{figure} To be specific, we assume that the dielectric function of the strongly coupled plasma in the $k<k_c$ regime leads to a longitudinal dispersion relation of the form: \begin{eqnarray} \label{Dispersion2} \omega_{\rm L}=\sqrt{u^2 k^2 + \omega_p^2}\,\,, \end{eqnarray} where $\omega_p$ denotes the plasma frequency and $u<c$ is the speed of plasmon propagation, here assumed to be constant. In accordance with (\ref{Dispersion2}) we posit the following dielectric function \bel{NonBloch} \epsilon_L = 1+\frac{\omega_p^2/2}{u^2 k^2 - \omega^2 + \omega_p^2/2} \qquad (k\le k_c) , \end{equation} which differs from the classical, hydrodynamical dielectric function of Bloch \cite{Bloch} by remaining regular in the limit $k,\omega \to 0$. The Bloch function is singular at small $k,\omega$ due to the mixing of the plasmon mode with the phonon mode. Such a mixing cannot occur in the QCD plasma, because the plasmon and phonon belong to different irreducible representations (octet versus singlet) of color SU(3) and because the medium is charge symmetric. \begin{figure} \centerline{\includegraphics[width=1\linewidth]{Wakefig5.eps}} \caption{Spatial distribution of the induced charge density around a color charge traveling with velocity $v=0.55c<u$. The left plot shows equi-charge density lines. The density profile is similar to that of the cloud surrounding a color charge at rest. \label{figure2}} \end{figure} The dielectric function (\ref{NonBloch}) is constructed in such a way that it allows us to study one specific aspect of a quantum liquid scenario: that the plasmon mode may extend into the space-like region of the $\omega-k$ plane. This behavior is illustrated in Fig.~\ref{figureDisp} by the dashed red line, which cuts through the light cone at $k_s$ (at the black dot in Fig.~\ref{figureDisp}) and continues into the space-like region. This contrasts with the dispersion relation of the longitudinal plasmon mode in the HTL formalism (solid red line), which always remains in the time-like region $\omega(k)>k$. The wake induced by a supersonically (in the sense $v>u$) traveling color source in such a quantum liquid scenario is, quite generally, a conical Mach wave. The principal findings of our study can be expected to hold for any quantum liquid with a plasmon branch similar to (\ref{Dispersion2}), independent of the exact form of the dielectric function. We here assume a speed of plasmon propagation $u/c= 1/\sqrt{3}$. This differs from the speed of plasmon propagation in the small $k$ limit of the HTL approximation, $u/c=\sqrt{3/5}$ \cite{Weldon:1982aq}, and is also different from the sound velocity in a hadronic resonance gas, $u/c \approx \sqrt{0.2}$ \cite{Shuryak:1972zq,Venugopalan:1992hy}. Determining the plasmon mode via Eq.~(\ref{Dispersion2}) reveals that the mode lies in the space-like region of the $\omega-k$ plane for $k>k_s=\omega_p/\sqrt{c^2-u^2}$. Recall that this is different from the high-temperature plasma, where longitudinal and transverse plasma modes only appear in the time-like region, $|x|=|\omega/k|>1$. In the quantum liquid scenario one can expect that the modes with low phase velocity $|x|<u/c$ suffer severe Landau damping, because they accelerate the slower moving charges and decelerate those moving faster than the wave.
A charge moving with a velocity lower than the speed of plasmon propagation can only excite those strongly damped modes, and not the modes with intermediate phase velocities $u/c<|x|<1$, which are undamped \cite{Ichimaru,Weldon:1982aq}. In this case, the qualitative properties of the color wake are analogous to those of the high temperature plasma: the charge carries a localized screening color cloud with it, and Cherenkov emission and Mach cones are absent. Using (\ref{charge2}) and restricting the integration area to the region $k<k_c=2\omega_p$, one can illustrate this for a colored particle traveling with $v=0.55c<u$. Figure \ref{figure2} shows the charge density cloud traveling with the colored parton. If the colored parton travels with a velocity $v$ that is higher than the speed of plasmon propagation $u$, modes with an intermediate phase velocity $u/c<|x|<1$ can be excited. The emission of these plasma oscillations induced by supersonically traveling particles is analogous to Cherenkov radiation, but different in that the density waves are longitudinal, not transverse, excitations of the color field. Figure \ref{figure31}, for a color charge traveling with $v/c=0.99$, clearly exhibits the emergence of Mach cones in the induced charge density, with an opening angle given by the Mach relation (\ref{Mach}). \begin{figure} \centerline{\includegraphics[width=1\linewidth]{Wakefig6.eps}} \caption{(a) Spatial distribution of the induced charge density from a jet with high momentum and fixed color charge $q^a$ that is traveling with $v=0.99c>u=\sqrt{1/3}c$. (b) Plot showing equi-charge lines in the density distribution for the situation in (a). \label{figure31}} \end{figure} We emphasize that the existence of Mach cones is expected in a plasma quite generally if the particle moves faster than the speed of sound in the plasma and if the dispersion relation of the collective mode extends into the space-like region. The wake induced by a colored jet in such a setting leads to regions of enhanced and depleted charge density, which have the shape of Mach waves trailing the projectile. \section{Summary and Outlook} We have calculated the properties of the color charge density wake of a hard parton traveling through a quark-gluon plasma in linear response theory for two different scenarios: a weakly coupled QGP at $T \gg T_c$ described in the HTL approximation, and a strongly coupled QGP with the properties of a quantum liquid. We found that the wake in the weakly coupled plasma always takes the form of a screening cloud traveling with the particle, while the wake in the strongly coupled plasma assumes the form of a Mach cone if the parton's velocity exceeds the speed of plasmon propagation and the collective plasma mode has a dispersion relation extending into the space-like region. In general, secondary particle distributions can be used to probe the collective excitations of the QCD plasma. If these take the form of Mach cones, they should reveal themselves by the directed emission of secondary particles from the plasma, similar to what was observed with metallic targets under bombardment by swift ions. If the scenario of a strongly coupled QCD plasma is realized in relativistic heavy ion collisions, one can expect to observe these cones in the angular distribution of secondary particles associated with jets \cite{Stoecker:2004qu,Casalderrey-Solana:2004qm}.
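As a rough numerical orientation (an illustrative estimate inserted here, not a prediction fitted to data): for the plasmon speed $u/c=1/\sqrt{3}\approx 0.58$ assumed above and a jet with $v\simeq c$, the Mach relation (\ref{Mach}) gives \[ \varphi_{\rm M}={\rm arccos}(u/v)\approx {\rm arccos}(0.58)\approx 0.95~{\rm rad}\approx 55^\circ, \] so that the associated soft particles would be expected to peak near $\Delta\phi\approx\pi\pm 1$ on the away side.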
As we already mentioned in the introduction, preliminary data from the RHIC experiments (see e.~g.\ Fig.~1 in \cite{Rak:2005st,Adler:2005ee}) suggest that the angular distribution of secondary hadrons in the direction opposite to an energetic hadron is peaked at an azimuthal angle $\Delta \phi \neq \pi$. This is different from what is observed in $p+p$ collisions, where the maximum associated with the away-side jet is clearly located at $\Delta \phi=\pi$. In contrast, two maxima are located at $\Delta \phi \approx \pi \pm 1.1$ in Au + Au collisions. This phenomenon suggests the presence of a Mach shock front traveling with the quenched away-side jet at an angle $\Delta\phi = \pi \pm \varphi_M = \pi \pm {\rm arccos}(u/v)$. If these speculations are confirmed, it will be an interesting theoretical problem to determine the mechanism that excites the shock front. Possible candidates are the collective plasma excitations discussed here \cite{Ruppert:2005uz,Stoecker:2004qu}, the ``thunder'' effect following localized directed energy deposition by the quenched jet \cite{Casalderrey-Solana:2004qm}, or simply the knock-on process due to elastic collisions of plasma particles with the away-side hard parton \cite{Lokhtin:1998ya}. \bigskip \section*{Acknowledgements} This work was supported in part by U.~S.~Department of Energy under grant DE-FG02-05ER41367. JR thanks the Alexander von Humboldt Foundation for support as a Feodor Lynen Fellow. BM gratefully acknowledges partial support from GSI to attend this Symposium. \input{paper.bbl} \end{document}
\section*{Introduction} In this paper we consider the notion of the Weyl filtration dimension and good filtration dimension of modules for a linear algebraic group. These concepts were first introduced by Friedlander and Parshall~\cite{fripar} and may be considered variations of the notions of projective dimension and injective dimension, respectively. (The precise definition is given in~\ref{defn:gfd}.) The Weyl filtration dimension of a module is always at most its projective dimension. In fact, it is often much less. In the situation of algebraic groups the Weyl and good filtration dimensions are always finite for a finite dimensional module (unlike the projective and injective dimensions, which are usually infinite). Thus knowing these dimensions gives us another tool for calculating the cohomology of an algebraic group. Indeed we use knowledge of these dimensions to calculate various $\Ext$ groups for $G$. We had previously calculated the good filtration dimension of the irreducible modules for $S(n,r)$, the Schur algebra corresponding to $\mathrm{GL}_n(k)$, when $n=2$ and $n=3$ in~\cite{parker1}. We were then able to determine the global dimension of $S(n,r)$. The proof in~\cite{parker1} relies heavily on the use of filtrations of the induced modules $\nabla(\lambda)$, $\lambda$ a dominant weight, by modules of the form $\nabla(\mu)^{\mathrm F} \otimes L(\nu)$. In this paper we instead use the translation functors introduced by Jantzen to calculate properties of the induced modules and the Weyl modules (denoted $\Delta(\lambda)$) for an algebraic group. We first calculate the Weyl filtration dimension (abbreviated wfd) of the induced modules for regular weights (theorem~\ref{thm:wfdlamb}). We then prove $\Ext^i\bigl(\nabla(\lambda),\Delta(\mu)\bigr)\cong k$ when $i=\wfd\bigl(\nabla(\lambda)\bigr)+\wfd\bigl(\nabla(\mu)\bigr)$ and $\lambda, \mu$ regular (theorem~\ref{thm:extiind}). We can then deduce that $\Ext^i\bigl(L(\lambda),L(\mu)\bigr)\cong k$ for $i=\wfd\bigl(L(\lambda)\bigr)+\wfd\bigl(L(\mu)\bigr)$ (corollary~\ref{cor:wfdirr}). These results then enable us to write down the injective and projective dimensions of $L(\lambda)$, $\nabla(\lambda)$ and $\Delta(\lambda)$ for $\lambda$ a regular weight in the associated generalised Schur algebras (theorem~\ref{thm:injproj}). We can deduce the value of the global dimension of $S(n,r)$ when $p>n$ and of $S(p,mp)$ with $m \in \mathbb{N}$ (theorems~\ref{thm:glob1} and \ref{thm:glob2}). This gives us an alternative proof for $S(2,r)$ (all $p$) and for $S(3,r)$ with $p\ge 5$. Some of this work also appears in the author's PhD thesis~\cite{mythesis}, chapter 6. In general the global dimension of $S(n,r)$ is still not known. Previous values were calculated for $r \le n$ by Totaro~\cite{totaro} (for the classical case) and Donkin~\cite{donkbk}, section 4.8 (for the quantum case). The semi-simple Schur algebras (that is, the Schur algebras with zero global dimension) have been determined in~\cite{dotynak} for the classical case and in~\cite{erdnak}, theorem (A), for the quantum case. Conjectured values for the remaining cases are presented in \cite{mythesis}, section 6.5. We conclude by showing that analogous results hold for the Dipper--Donkin quantum group and hence for the $q$-Schur algebra. The extent to which similar methods may be applied to category $\mathcal{O}$ is also discussed.
The author thanks her PhD supervisor, Stephen Donkin, for his great help and encouragement, as well as Anton Cox and Karin Erdmann for various comments on preliminary versions of this paper. \section{Preliminaries}\label{sect:prelim} We first review the basic concepts and most of the notation that we will be using. The reader is referred to \cite{humph} and \cite{springer} for further information. This material is also in~\cite{jantz}, where it is presented in the language of group schemes. Throughout this paper $k$ will be an algebraically closed field of characteristic $p$. Let $G$ be a linear algebraic group which is connected and reductive. We fix a maximal torus $T$ of $G$ of dimension $n$, the rank of $G$. We also fix $B$, a Borel subgroup of $G$ with $B \supseteq T$, and let $W$ be the Weyl group of $G$. We will write $\mathrm{mod}(G)$ for the category of finite dimensional rational $G$-modules. Most $G$-modules considered in this paper will belong to this category. Let $X(T)=X$ be the weight lattice for $G$ and $Y(T)=Y$ the dual weights. The natural pairing $\langle -,- \rangle :X \times Y \rightarrow \mathbb{Z}$ is bilinear and induces an isomorphism $Y\cong \Hom_\mathbb{Z}( X,\mathbb{Z})$. We take $R$ to be the roots of $G$. For each $\alpha \in R$ we take $\alpha\check{\ }\, \in Y$ to be the coroot of $\alpha$. Let $R^+$ be the positive roots, chosen so that $B$ is the negative Borel, and let $S$ be the set of simple roots. Set $\rho = \frac{1}{2} \sum_{\alpha \in R^+} \alpha \in X\otimes_{\mathbb{Z}}\mathbb{Q}$. We have a partial order on $X$ defined by $\mu \le \lambda \Leftrightarrow \lambda -\mu \in \mathbb{N} S$. A weight $\lambda$ is \emph{dominant} if $\langle \lambda, \alpha\check{\ }\, \rangle \ge 0$ for all $\alpha \in S$, and we let $X^+$ be the set of dominant weights. Take $\lambda \in X^+$ and let $k_\lambda$ be the one-dimensional module for $B$ which has weight $\lambda$. We define the induced module $\nabla(\lambda)= \Ind_B^G(k_\lambda)$. This module has formal character given by Weyl's character formula and has simple socle $L(\lambda)$, the irreducible $G$-module of highest weight $\lambda$. Any finite dimensional, rational irreducible $G$-module is isomorphic to $L(\lambda)$ for a unique $\lambda \in X^+$. Since $G$ is split, connected and reductive, we have an antiautomorphism $\tau$ which acts as the identity on $T$ (\cite{jantz}, II, corollary 1.16). From this morphism we may define $^\circ$, a contravariant dual. It does not change a module's character; hence it fixes the irreducible modules. We define the Weyl module to be $\Delta(\lambda)=\nabla(\lambda)^\circ$. Thus $\Delta(\lambda)$ has simple head $L(\lambda)$. We return to considering the weight lattice $X$ for $G$. There are also the affine reflections $s_{\alpha,mp}$, for $\alpha$ a positive root and $m\in \mathbb{Z}$, which act on $X$ as $s_{\alpha,mp}(\lambda)=\lambda -(\langle\lambda,\alpha\check{\ }\,\rangle -mp )\alpha$. These generate the affine Weyl group $W_p$. We mostly use the dot action of $W_p$ on $X$, which is the usual action of $W_p$ with the origin shifted to $-\rho$. So we have $w \cdot \lambda = w(\lambda+\rho)-\rho$. If $F$ is an alcove for $W_p$ then $\bar{F}\cap X$, the set of weights in its closure, is a fundamental domain for $W_p$ operating on $X$. The group $W_p$ permutes the alcoves simply transitively. We set $C= \{ \lambda \in X \otimes _{\mathbb{Z}} \mathbb{R} \ \mid\ 0< \langle \lambda +\rho, \alpha\check{\ }\, \rangle < p \quad \forall\, \alpha \in R^+ \}$ and call $C$ the \emph{fundamental alcove}.
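To fix ideas, consider the simplest (standard) example, included here for orientation: let $G=\mathrm{SL}_2(k)$ and identify $X\cong\mathbb{Z}$, so that the unique simple root is $\alpha=2$, we have $\langle\lambda,\alpha\check{\ }\,\rangle=\lambda$, and $\rho=1$. The affine reflections then act via the dot action by \[ s_{\alpha,mp}\cdot\lambda \;=\; \lambda-2\bigl((\lambda+1)-mp\bigr) \;=\; 2mp-\lambda-2, \] and the fundamental alcove is $C=\{\lambda \mid 0<\lambda+1<p\}$, so that $C\cap X=\{0,1,\dots,p-2\}$.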
We also set $h = \max \{ \langle \rho, \beta\check{\ }\, \rangle +1 \ \mid\ \beta \in R^+\}$. When $R$ is irreducible, $h$ is the Coxeter number of $R$. In general, it is the maximum of the Coxeter numbers of the irreducible components of $R$. We have $C \cap X \ne \emptyset \ \Leftrightarrow \ \langle \rho, \beta\check{\ }\, \rangle < p \quad \forall\, \beta \in R^+ \ \Leftrightarrow \ p \ge h$. A facet $F$ is a \emph{wall} if there exists a unique $\beta\in R^+$ with $\langle \lambda +\rho, \beta\check{\ }\, \rangle =mp$ for some $m \in \mathbb{Z}$ and for all $\lambda \in F$. Let $s_F= s_{\beta, mp}$. This is the unique reflection in $W_p$ which acts as the identity on $F$, and we call $s_F$ the reflection with respect to $F$. Let $\Stab_{W_p}(\lambda)$ be the set of all elements of $W_p$ which stabilise $\lambda\in X$. We take $\Sigma$ to be the set of all reflections $s_F$ where $F$ is a wall (for $W_p$) with $F \subset \bar{C}$. Thus the set $\Sigma$ consists of the reflections $s_{\alpha ,0}$ with $\alpha \in S$, together with $s_{\beta,p}$ with $\beta$ the longest short root of each irreducible component of the root system $R$. Let $\Sigma^0(\mu)$ be the subset of $\Sigma$ consisting of those elements which fix $\mu$. The affine Weyl group $W_p$ is generated by $\Sigma$. These generators form a presentation for $W_p$ as a Coxeter group, so we may define a length function $l(w)$ for $w \in W_p$, which is the length of a reduced expression for $w$ in terms of elements of $\Sigma$. We say that $\lambda$ and $\mu$ are \emph{linked} if they belong to the same $W_p$ orbit on $X$ (under the dot action). If two irreducible modules $L(\lambda)$ and $L(\mu)$ are in the same $G$-block then $\lambda$ and $\mu$ are linked. The category of rational $G$-modules has enough injectives and so we may define $\Ext_G^*(-,-)$ as usual by using injective resolutions (see \cite{benson}, sections 2.4 and 2.5). We will usually just write $\Ext$ for $\Ext_G$. \section{Quasi-Hereditary Algebras}\label{sect:qha} In this section we prove some lemmas about the module category $\mathrm{mod}(A)$ for a quasi-hereditary algebra $A$ with poset $(\Lambda, \le)$, standard modules $\Delta(\lambda)$ and costandard modules $\nabla(\lambda)$. We will later lift these results to $\mathrm{mod}(G)$. We say $X \in \mathrm{mod}(A)$ has a \emph{good filtration} if it has a filtration $0=X_0 \subset X_1 \subset \cdots \subset X_i=X$ with quotients $X_j/X_{j-1}$ isomorphic to $\nabla(\mu_j)$ for some $\mu_j \in \Lambda$. The class of $A$-modules with a good filtration is denoted $\good$, and dually the class of modules filtered by $\Delta(\mu)$'s is denoted $\doog$. We say that $X \in \doog$ has a \emph{Weyl filtration}. The multiplicity of $\nabla(\mu)$ in a filtration of $X \in \good$ is independent of the filtration chosen and is denoted by $\bigl(X:\nabla(\mu)\bigr)$. The composition multiplicity of $L(\mu)$ in $X\in \mathrm{mod}(A)$ is denoted by $\big[X:L(\mu)\big]$. We point out that even when $\nabla(\mu)=L(\mu)$ it is still not necessarily true that $\bigl(X:\nabla(\mu)\bigr)=\big[X:L(\mu)\big]$. Some of the important properties of $\good$ and $\doog$ are stated below. \begin{propn} \noindent \begin{enumerate} \item[(i)]\ Let $X \in \mathrm{mod}(A)$ and $\lambda \in \Lambda$. If $\Ext_A^1\bigl(X, \nabla(\lambda)\bigr)\ne 0$ then $X$ has a composition factor $L(\mu)$ with $\mu > \lambda$. \item[(ii)]\label{propn:good}\ For $X \in \doog$, $Y\in \good$ and $i>0$, we have $\Ext_A^i(X,Y)=0$.
\item[(iii)]\ Suppose $\Ext_A^1\bigl(\Delta(\mu),M\bigr)=0$ for all $\mu \in \Lambda$; then $M\in \good$. \item[(iv)]\ Let $X \in \good$ (resp. $X \in \doog$) and let $Y$ be a direct summand of $X$; then $Y\in \good$ (resp. $Y \in \doog$). \end{enumerate} \end{propn} \begin{proof} See~\cite{donkbk}, A2.2. \end{proof} Suppose $X \in \mathrm{mod} (A)$. We can resolve $X$ by modules $M_i\in \good$ as follows: $$0\rightarrow X \rightarrow M_0 \rightarrow M_1 \rightarrow \dots \rightarrow M_d \rightarrow 0.$$ Such a resolution is called a \emph{good resolution} for $X$. Good resolutions exist for all $A$-modules, as $A$ has enough injectives and an injective resolution is also a good resolution. The following definition may be found in~\cite{fripar}, where a proof of the equivalence of properties (i) and (ii) is also given (see~\cite{fripar}, proposition 3.4). \begin{defn}\label{defn:gfd} Let $X \in \mathrm{mod} (A)$. We say $X$ has \emph{good filtration dimension} $d$, denoted $\gfd(X)=d$, if the following two equivalent conditions hold: \begin{enumerate} \item[(i)] $0\rightarrow X \rightarrow M_0 \rightarrow M_1 \rightarrow \dots \rightarrow M_d \rightarrow 0$ is a resolution for $X$ with $M_i \in \good$, of shortest possible length. \item[(ii)] $\Ext_A^i(\Delta(\lambda),X)=0$ for all $i>d$ and all $\lambda \in \Lambda$, but there exists $\lambda \in \Lambda$ such that $\Ext_A^d(\Delta(\lambda),X)\ne 0$. \end{enumerate} \end{defn} Similarly we have the dual notion of the \emph{Weyl filtration dimension} of $M$, which we will denote $\wfd(M)$. \begin{lem}\label{lem:wandg} Given $A$-modules $M$ and $N$, we have $$\Ext_A^i(N,M)=0\mbox{ for }i > \wfd(N)+\gfd(M).$$ \end{lem} \begin{proof} See~\cite{parker1}, lemma 2.2. \end{proof} \begin{defn} Let $g=\sup\{ \gfd(X) \mid X \in \mathrm{mod}(A)\}$. We say $A$ has good filtration dimension $g$ and denote this by $\gfd(A)=g$. Let $w=\sup\{ \wfd(X) \mid X \in \mathrm{mod}(A)\}$. We say $A$ has Weyl filtration dimension $w$ and denote this by $\wfd(A)=w$. \end{defn} \begin{rem} In general $\gfd(A)$ is not the good filtration dimension of $A$ when considered as its own left (or right) module. Similar remarks apply to $\wfd(A)$. We will only use $\gfd(A)$ and $\wfd(A)$ in the sense that they are defined above. \end{rem} For a finite dimensional $k$-algebra $A$, the \emph{injective dimension} of an $A$-module $M$ is the length of a shortest possible injective resolution and is denoted by $\inj(M)$. Equivalently we have $\inj(M)=\sup\{ d \mid \Ext_A^d(N,M)\not\cong 0 \mbox{ for } N \in \mathrm{mod}(A)\}.$ The \emph{global dimension} of $A$ is the supremum of all the injective dimensions of $A$-modules, and is denoted by $\glob(A)$. Equivalently, $\glob(A)=\sup\{ d \mid \Ext_A^d(N,M) \not\cong 0 \mbox{ for some } N,M \in \mathrm{mod}(A)\}.$ We will also denote the \emph{projective dimension} of an $A$-module $M$ by $\proj(M)$. \begin{cor}\label{cor:glob} The global dimension of $A$ has an upper bound of $\wfd(A)+\gfd(A)$. \end{cor} \begin{defn} We say a module $T$ is a \emph{tilting module} if $T$ has both a good filtration and a Weyl filtration, that is, $T \in \good \cap \doog$. \end{defn} For each $\lambda \in \Lambda$ there is a unique indecomposable tilting module, $T(\lambda)$, of highest weight $\lambda$ with $[T(\lambda):L(\lambda)]=1$. Every tilting module $T$ can be written as a direct sum of indecomposable tilting modules $T(\mu)$ with $\mu \in \Lambda$~(\cite{donkbk}, theorem A4.2). \begin{defn} Take $\lambda\in \Lambda$.
We take a chain $\mu_0 < \mu_1 < \cdots < \mu_{l-1}< \mu_l=\lambda$ with $l$ maximal and $\mu_i \in \Lambda$. We define the length of $\lambda$, denoted $l(\lambda)$, to be $l$. We also define $l(\Lambda)=\max\{l(\lambda)\mid \lambda \in \Lambda\}$. \end{defn} \begin{lem}\label{lem:wfdlen} We have $\wfd\bigl(\nabla(\lambda)\bigr) \le l(\lambda)$. \end{lem} \begin{proof} If $l(\lambda)=0$ then $\lambda$ is minimal, so $\nabla(\lambda)=\Delta(\lambda)$ and $\wfd\bigl(\nabla(\lambda)\bigr)=0=l(\lambda)$. Now suppose the lemma is true for $\mu < \lambda$. We have a short exact sequence $$ 0\rightarrow N \rightarrow T(\lambda)\rightarrow \nabla(\lambda) \rightarrow 0$$ where $T(\lambda)$ is the indecomposable tilting module of highest weight $\lambda$. Applying $\Ext_A^*\bigl(-,\nabla(\nu)\bigr)$ for $\nu \in \Lambda$ gives us $$ \Ext_A^{i-1}\bigl(N,\nabla(\nu)\bigr) \rightarrow \Ext_A^i\bigl(\nabla(\lambda),\nabla(\nu)\bigr) \rightarrow \Ext_A^i\bigl(T(\lambda),\nabla(\nu)\bigr). $$ We now take $i > l(\lambda)$. All the $\nabla(\mu)$ appearing in a good filtration of $N$ have $\mu < \lambda$. Hence $l(\mu) < l(\lambda)$ and so $i-1> l(\mu)$. We now use the induction hypothesis to get $\Ext_A^{i-1}\bigl(N, \nabla(\nu)\bigr) =0$. We also have $\Ext_A^i\bigl(T(\lambda),\nabla(\nu)\bigr)=0$, as $T(\lambda)$ is tilting, using proposition~\ref{propn:good} (ii). Hence $\Ext_A^i\bigl(\nabla(\lambda),\nabla(\nu)\bigr)=0$ for $i > l(\lambda)$ and $\wfd\bigl(\nabla(\lambda)\bigr)\le l(\lambda)$. \end{proof} We may similarly prove that $\gfd\bigl(\Delta(\lambda)\bigr)\le l(\lambda)$. \begin{lem}\label{lem:wfdS} $$\wfd(A)=\max\{\wfd\bigl(\nabla(\lambda)\bigr)\mid \lambda\in \Lambda\}.$$ \end{lem} \begin{proof} We certainly have $$\wfd(A)\ge\max\{\wfd\bigl(\nabla(\lambda)\bigr)\mid \lambda\in \Lambda\}.$$ Take $\lambda\in \Lambda$ with $\wfd\bigl(L(\lambda)\bigr)=d=\wfd(A)$. Let $Q$ be the quotient $\nabla(\lambda)/L(\lambda)$. Since $\wfd\bigl(L(\lambda)\bigr)$ was maximal, we must have $\wfd(Q) \le d$. Let $\mu\in\Lambda$; the long exact sequence associated to the short exact sequence for $Q$ gives us $$ \Ext_A^d\bigl(\nabla(\lambda),\nabla(\mu)\bigr) \rightarrow \Ext_A^d\bigl(L(\lambda),\nabla(\mu)\bigr) \rightarrow \Ext_A^{d+1}\bigl(Q,\nabla(\mu)\bigr). $$ Now $\Ext_A^{d+1}\bigl(Q,\nabla(\mu)\bigr)=0$ for all $\mu\in\Lambda$ by lemma~\ref{lem:wandg}. But there exists $\mu\in \Lambda$ with $\Ext_A^d\bigl(L(\lambda),\nabla(\mu)\bigr)\ne 0$. Hence $\Ext_A^d\bigl( \nabla(\lambda),\nabla(\mu)\bigr)\ne 0$. Thus there exists $\lambda\in\Lambda$ with $\wfd\bigl(\nabla(\lambda)\bigr)=d =\wfd(A)$. \end{proof} \begin{rem}\label{rem:wfdS} We may replace the set of $\nabla(\lambda)$ with any set of $A$-modules $\mathcal{X}$ with the property that for all $\lambda \in \Lambda$ there exists $X \in \mathcal{X}$ with $L(\lambda)$ contained in the socle of $X$. We can then repeat the argument above to get $\wfd(A)=\max\{\wfd(X)\mid X \in \mathcal{X}\}$. \end{rem} Now suppose $A$ is a quasi-hereditary algebra with contravariant duality preserving simples. That is, there exists an involutory, contravariant functor $^\circ: \mathrm{mod} (A)\rightarrow \mathrm{mod} (A)$ such that $\Delta(\lambda)^\circ \cong \nabla(\lambda)$ (and $\Ext_A^i(M,N) \cong \Ext_A^i(N^\circ ,M^\circ)$). We will usually shorten this and say $A$ has a simple preserving duality.
\begin{rem}It is clear (given the equivalences in the definition of the good filtration dimension) that for $A$ with a simple preserving duality and $M$ an $A$-module we have $\wfd(M) = \gfd(M^\circ)$. We will use this without further comment. \end{rem} Thus lemma~\ref{lem:wfdlen}, together with lemma~\ref{lem:wfdS}, gives an upper bound for $\wfd(A)$ of $l(\Lambda)$. Corollary~\ref{cor:glob} gives, for $A$ with a simple preserving duality, that $$\glob(A)\le 2\gfd(A)=2\wfd(A)\le 2l(\Lambda).$$ We say a subset $\Pi$ of a poset $(\Lambda, \le)$ (not necessarily finite) is \emph{saturated} if, for all $\lambda \in \Pi$, $\mu \le \lambda$ implies that $\mu \in \Pi$. Take $G$ to be a split, connected reductive algebraic group with weight lattice $X$. Suppose $\Pi$ is a finite saturated subset of $X^+$ with respect to the dominance ordering. We may consider $G$-modules all of whose composition factors have highest weights lying in $\Pi$. These modules form a subcategory of $\mathrm{mod}(G)$ which is a highest weight category corresponding to a quasi-hereditary algebra which we denote $S(\Pi)$, the generalised Schur algebra (see~\cite{donk1} for more information). We have a natural isomorphism $$\Ext_{S(\Pi)}^i(M,N) \cong \Ext_G^i(M,N)$$ for $S(\Pi)$-modules $M$ and $N$ (\cite{donk1}, 2.2d). The costandard and standard modules for $S(\Pi)$ are exactly the induced and Weyl modules for $G$ respectively. Thus as long as we restrict our attention to finite dimensional $G$-modules we can lift the results from quasi-hereditary algebras to $G$. Generally speaking, a finite dimensional $G$-module does not have a finite injective or projective resolution. It will, however, have a finite good (and Weyl) resolution. Thus we can lift the definitions of good (and Weyl) filtration dimension to $\mathrm{mod}(G)$. If we take $G=\mathrm{GL}_n(k)$ and $\Pi=\partn(n,r)$ then $S(\Pi)$ is isomorphic to $S(n,r)$, the usual Schur algebra. Thus Schur algebras are quasi-hereditary with poset $\partn(n,r)$ ordered by dominance. \section{Properties of Translation Functors}\label{sect:trans} For any $G$-module $V$ and any $\mu\in X$, set $\pr_\mu V$ equal to the sum of the submodules of $V$ all of whose composition factors have highest weight in $W_p \cdot \mu$. Then $\pr_\mu V$ is the largest submodule of $V$ with this property. The following definition is due to Jantzen~\cite{jantz}, II, 7.6. \begin{defn} Suppose $ \lambda$, $\mu\in \bar{C}$. There is a unique $\nu_1 \in X^+\cap W(\mu-\lambda)$. We define the \emph{translation functor} $T_\lambda ^\mu$ from $\lambda$ to $\mu$ via $$T_\lambda ^\mu V = \pr _\mu (L(\nu_1)\otimes \pr_\lambda V)$$ for any $G$-module $V$. It is a functor from $\mathrm{mod}(G)$ to itself. \end{defn} \begin{lem}\label{lem:adjoint} Let $\lambda$ and $\mu \in \bar{C}$; then the functors $T_{\lambda}^\mu$ and $T_{\mu}^\lambda$ are adjoint to each other. For $M,N \in \mathrm{mod}(G)$ we have $\Ext^i(T_\lambda^\mu M,N) \cong \Ext^i(M,T_\mu^\lambda N)$. \end{lem} \begin{proof} See~\cite{jantz}, II, lemma 7.6 (b) and remark 7.6 (2). \end{proof} \begin{propn}\label{propn:transgood} Let $\mu$, $\lambda\in \bar{C}$ and $w \in W_p$ with $w\cdot \mu \in X^+$; then $T_\mu ^\lambda \nabla(w \cdot \mu) $ has a good filtration. Moreover the factors are $\nabla(ww_1 \cdot \lambda)$ with $w_1 \in \Stab_{W_p}(\mu)$ and $ww_1\cdot\lambda \in X^+$. Each different $ww_1 \cdot \lambda$ occurs exactly once. \end{propn} \begin{proof} See~\cite{jantz}, proposition 7.13.
\end{proof} \begin{cor}\label{cor:transses} Let $\lambda\in C$ and $\mu \in \bar{C}$. Suppose there is $s \in \Sigma$ with $\Sigma^0(\mu)=\{s\}$. Let $w \in W_p$ with $w\cdot \lambda\in X^+$ and $w\cdot \lambda < ws\cdot \lambda$. Then we have a short exact sequence $$ 0\rightarrow \nabla(w\cdot \lambda) \rightarrow T_\mu^\lambda\nabla(w\cdot \mu) \rightarrow \nabla(ws\cdot \lambda)\rightarrow 0.$$ \end{cor} \begin{proof} See~\cite{jantz}, lemma 7.19 (a). \end{proof} We would like to know when the situation of the above corollary occurs. Firstly we need a $\lambda \in C$, and this happens when $p \ge h$. We also need a weight $\mu$ lying on the wall between $\lambda$ and $s\cdot \lambda$. This happens when the derived group of $G$ is simply connected and $p \ge h$. See~\cite{jantz}, II, 6.3 (1), for details. We will henceforth assume that $p \ge h$ and that the derived group of $G$ is simply connected. We will also assume that the root system $R$ of $G$ is irreducible, although we believe that theorem~\ref{thm:wfdlamb} is also true in the more general case. We have another partial order on $X$, denoted $\uparrow$. If $\alpha$ is a positive root and $m\in \mathbb{Z}$ then we declare $$s_{\alpha,mp} \cdot \lambda \uparrow \lambda \quad\mbox{if and only if}\quad \langle\lambda+\rho , \alpha\check{\ }\,\rangle \ge mp.$$ This then generates an order relation on $X$. So $\mu \uparrow \lambda$ if there are reflections $s_i \in W_p$ with $$\mu=s_m s_{m-1} \cdots s_1 \cdot \lambda \uparrow s_{m-1} \cdots s_1 \cdot \lambda \uparrow \cdots \uparrow s_1 \cdot \lambda \uparrow \lambda .$$ We define $l(\lambda)$ for $\lambda\in X^+$ to be the length of a maximal chain $\mu_0 \uparrow \mu_1 \uparrow \cdots \uparrow \mu_{l-1} \uparrow \mu_l=\lambda$ with $\mu_0 \in \bar{C}$, each $\mu_i \ne \mu_{i+1}$ and $\mu_i \in X$. We will also define $\bar{l}(\lambda)$ for $\lambda \in X^+$ to be the length of a maximal chain $\mu_0 \uparrow \mu_1 \uparrow \cdots \uparrow \mu_{l-1} \uparrow \mu_l=\lambda$ with all $\mu_i \in X^+$. We define $d(\lambda)$ to be the number of hyperplanes separating $\lambda$ and a weight lying in $C$ (we do not count any hyperplanes that $\lambda$ may lie on). Take $n_\alpha$, $d_\alpha \in \mathbb{Z}$ with $\langle \lambda+\rho, \alpha\check{\ }\, \rangle = n_\alpha p +d_\alpha$ and $0< d_\alpha \le p$ for all positive roots $\alpha$. If $\lambda$ is dominant then $d(\lambda) = \sum_{\alpha>0} n_\alpha$. \begin{lem}\label{lem:alclen} If $\lambda \in C$ and $w \in W_p$ with $w\cdot\lambda \in X^+$ then $\bar{l}(w\cdot\lambda)=l(w\cdot\lambda)= l(w)=d(w\cdot\lambda)$. \end{lem} \begin{proof} Since $w\cdot\lambda$ lies inside an alcove we have that $d(w\cdot\lambda)=l(w)$. (This is true as the alcoves in $X$ can be identified with chambers in the Coxeter complex associated to $W_p$.) It is clear that $l(w\cdot\lambda) \ge \bar{l}(w\cdot\lambda)$. Using~\cite{jantz}, proposition 6.8, we have that $\bar{l}(w\cdot\lambda) \ge d(w\cdot \lambda)$. Now take a maximal chain for $w\cdot\lambda$, $\mu_0 \uparrow \mu_1 \uparrow \cdots \uparrow \mu_l=w\cdot\lambda$ with $\mu_0 \in C$ and $\mu_i \in X$. We know that in this chain for $w\cdot\lambda$ we have $d(\mu_i) < d(\mu_{i+1})$, by applying~\cite{jantz}, lemma 6.6. Thus $d(w\cdot\lambda) \ge l(w\cdot\lambda)$. Hence we have the equalities as claimed. \end{proof} \begin{rem}\label{rem:bruhat} If $\lambda \in C$ then the $\uparrow$-ordering on $X^+\cap W_p \cdot \lambda$ is equivalent to the Bruhat ordering on $W_p$.
That is, for $\lambda \in C$ and $w, v \in W_p$ with $w \cdot \lambda$ and $v \cdot \lambda \in X^+$, we have $$ w \cdot \lambda \uparrow v \cdot \lambda\quad\mbox{if and only if}\quad w \le v .$$ This can be seen from the definition of the Bruhat order in \cite{humph2}, section 5.9, and using the previous lemma. See also~\cite{verma}, section 1.6. \end{rem} We have that $[\nabla(\lambda):L(\mu)] \ne 0$ implies $\mu \uparrow \lambda$ (\cite{ander1}, corollary 3), known as the strong linkage principle. Thus when we take $\Pi$, a finite saturated subset of $X^+$ with respect to the $\uparrow$-ordering, the corresponding algebra $S(\Pi)$ is quasi-hereditary, so we may apply lemma~\ref{lem:wfdlen} to deduce that $\wfd\bigl(\nabla(\lambda)\bigr)\le \bar{l}(\lambda)$. \section{The Weyl Filtration Dimension of the Induced Modules}\label{sect:wfd} \begin{lem}\label{lem:wfdmu} Suppose we have the situation of corollary~\ref{cor:transses}. So we have $\lambda\in C$, $\mu \in \bar{C}\backslash C$, $w\cdot \lambda< ws\cdot \lambda$, $w\cdot \lambda \in X^+$ and $\Sigma^0(\mu)=\{s\}$. If $l(w) \ge 1$ then $\wfd\bigl(\nabla(w \cdot \mu)\bigr)< l(w)$. \end{lem} \begin{proof} It is clear that any non-repeating chain for $w\cdot \mu$, $w_1\cdot\mu \uparrow \cdots \uparrow w_i\cdot \mu \uparrow \cdots \uparrow w_m\cdot \mu=w\cdot\mu$ with $w_i\cdot \mu \in X^+$, gives a non-repeating chain $w_1\cdot \lambda \uparrow \cdots\uparrow w_i \cdot \lambda\uparrow \cdots \uparrow w_m \cdot \lambda=w\cdot \lambda$ with $w_i\cdot \lambda \in X^+$. So $\bar{l}(w\cdot\mu) \le \bar{l}(w\cdot\lambda)=l(w\cdot\lambda)$. If $m=l(w)$ then the chain for $\lambda$ is maximal. So we would have $w_1= 1$ and $w_2= s_{\beta,p}$ for $\beta$ the longest short root of $R$ (as $R$ is irreducible). But then $w_1\cdot \mu = \mu \in X^+$. We also assumed $\mu \in \bar{C} \backslash C$, so $\mu$ must be fixed by $s_{\beta,p}$. So we have $\mu = w_1 \cdot \mu = w_2 \cdot \mu$. But this means the chain for $\mu$ repeats -- a contradiction. Thus $\bar{l}(w\cdot\mu) < l(w\cdot\lambda)=l(w)$ by lemma~\ref{lem:alclen}. Now lemma~\ref{lem:wfdlen} gives us the result. \end{proof} \begin{thm}\label{thm:wfdlamb} Suppose the root system $R$ of $G$ is irreducible and $\lambda \in C$. Then, for $w \in W_p$ with $w\cdot\lambda \in X^+$, $$\wfd\bigl(\nabla(w\cdot\lambda)\bigr)= l(w).$$ \end{thm} \begin{proof} We proceed by induction on $l(w)$. If $l(w)=0$ then $\nabla(\lambda)=\Delta(\lambda)=L(\lambda)$, so $\wfd\bigl(\nabla(\lambda)\bigr)=0$. Now let $w=s$, $s \in \Sigma$ with $s\cdot \lambda \in X^+$. Take $\mu$ to be a dominant weight on the wall separating $\lambda$ and $s\cdot \lambda$. Such a $\mu$ has $\wfd\bigl(\nabla(\mu)\bigr)=0$. Thus $T_\mu^\lambda\bigl(\nabla(\mu)\bigr)$ is a tilting module of highest weight $s \cdot \lambda$. So the short exact sequence of corollary~\ref{cor:transses} is a Weyl resolution of $\nabla(s \cdot \lambda)$ and so $\wfd\bigl(\nabla(s\cdot \lambda)\bigr)\le 1$. But $\Ext^1\bigl(\nabla(s \cdot \lambda), \nabla(\lambda)\bigr)=k$ by~\cite{jantz}, II, proposition 7.21, and so $\wfd\bigl(\nabla(s\cdot \lambda)\bigr)= 1=l(s)$. Now suppose the theorem is true for all $w \in W_p$ with $l(w)\le l$, $l\ge 1$. We will show the result holds for $ws$ with $s \in \Sigma$. We take $\mu \in \bar{C}\backslash C$ with $\Sigma^0(\mu)=\{s\}$.
We have for all $i$, $v \in W_p$ and $v \cdot \lambda \in X^+$ $$\Ext^i\bigl(T_\mu^\lambda\bigl(\nabla(w\cdot\mu)\bigr), \nabla(v \cdot \lambda )\bigr) \cong \Ext^i\bigl(\nabla(w\cdot\mu), T_\lambda^\mu\bigl(\nabla(v \cdot \lambda)\bigr)\bigr) \cong \Ext^i\bigl(\nabla(w\cdot\mu), \nabla(v \cdot \mu)\bigr)$$ by lemma~\ref{lem:adjoint} and proposition~\ref{propn:transgood}. So we have \begin{equation}\label{wfdlen} \wfd\bigl(T_\mu^\lambda\bigl(\nabla(w\cdot\mu)\bigr)\bigr)= \wfd\bigl(\nabla(w\cdot\mu)\bigr) < l(w) \end{equation} by lemma~\ref{lem:wfdmu}. Applying $\Ext^*\bigl(-,\nabla(\nu)\bigr)$ with $\nu \in X^+$ to the short exact sequence of corollary~\ref{cor:transses} gives us \begin{multline*} \Ext^{i}\bigl(T_\mu^\lambda\bigl(\nabla(w\cdot\mu)\bigr),\nabla(\nu)\bigr) \rightarrow \Ext^i\bigl(\nabla(w\cdot\lambda),\nabla(\nu)\bigr) \\ \rightarrow \Ext^{i+1}\bigl(\nabla(ws\cdot\lambda),\nabla(\nu)\bigr) \rightarrow \Ext^{i+1}\bigl(T_\mu^\lambda\bigl(\nabla(w\cdot\mu)\bigr),\nabla(\nu)\bigr). \end{multline*} Thus for $i \ge l(w)$ we have $$ \Ext^i\bigl(\nabla(w\cdot\lambda),\nabla(\nu)\bigr) \cong \Ext^{i+1}\bigl(\nabla(ws\cdot\lambda),\nabla(\nu)\bigr) $$ using \eqref{wfdlen} and lemma~\ref{lem:wandg}. Hence $\wfd\bigl(\nabla(ws\cdot\lambda)\bigr)= \wfd\bigl(\nabla(w\cdot\lambda)\bigr)+1=l(w)+1=l(ws)$, as required. \end{proof} We may use the $^\circ$-duality to get that $\gfd(\Delta(w\cdot\lambda))=l(w)$. The previous theorem and lemma~\ref{lem:wandg} give us that for $v \in W_p$ with $v\cdot\lambda \in X^+$ we have $\Ext^{i}\bigl(\nabla(w\cdot\lambda),\Delta(v\cdot\lambda)\bigr)=0$ for $i > l(w)+l(v)$. The following theorem tells us that this bound is attained. \begin{thm}\label{thm:extiind} Suppose $\lambda \in C$, and $w$, $v \in W_p$ with $w\cdot\lambda$, $v\cdot\lambda \in X^+$. Then $$\Ext^{l(w)+l(v)}\bigl(\nabla(w\cdot\lambda),\Delta(v\cdot\lambda)\bigr)\cong k.$$ \end{thm} \begin{proof} We proceed by induction on $l(w)+l(v)$. If $l(w)+l(v)=0$ then $w\cdot\lambda=v\cdot\lambda=\lambda$, so $\Hom\bigl(\nabla(w\cdot\lambda),\Delta(v\cdot\lambda)\bigr)\cong \Hom\bigl(\nabla(\lambda),\Delta(\lambda)\bigr)\cong k$. If $l(w)+l(v)=1$ then either $w\cdot\lambda= \lambda$ or $v\cdot\lambda = \lambda$. Without loss of generality (using the $^\circ$-duality), take $w\cdot\lambda \ne \lambda$. Thus $\Delta(v\cdot\lambda)=\Delta(\lambda)=\nabla(\lambda)$. Also $l(w)=1$, so $w=s \in \Sigma$. By~\cite{jantz}, II, proposition 7.21 (c), we have $\Ext^1\bigl(\nabla(s \cdot \lambda), \nabla(\lambda) \bigr) \cong k$. Thus the theorem is true for $l(w)+l(v)=1$. Now take $l(w)=l(v)=1$. Applying $\Ext^*\bigl(\nabla(s \cdot \lambda),-\bigr)$ to the $^\circ$-dual of the short exact sequence of corollary~\ref{cor:transses} gives us \begin{equation*} \Ext^1\bigl(\nabla(s \cdot \lambda),T_\mu^{\lambda}\bigl(\Delta(\mu)\bigr)\bigr) \rightarrow \Ext^1\bigl(\nabla(s \cdot \lambda),\Delta(\lambda)\bigr) \rightarrow \Ext^2\bigl(\nabla(s \cdot \lambda),\Delta(s \cdot \lambda)\bigr) \rightarrow 0. \end{equation*} The last zero follows by lemma~\ref{lem:wandg}. Also $$\Ext^1\bigl(\nabla(s \cdot \lambda),T_\mu^{\lambda}\bigl(\Delta(\mu)\bigr)\bigr) \cong \Ext^1\bigl(T^\mu_{\lambda}\bigl(\nabla(s \cdot \lambda)\bigr),\nabla(\mu)\bigr) \cong \Ext^1\bigl(\nabla(\mu),\nabla(\mu)\bigr) \cong 0. $$ Hence $$ \Ext^2\bigl(\nabla(s \cdot \lambda),\Delta(s \cdot \lambda)\bigr) \cong \Ext^1\bigl(\nabla(s \cdot \lambda),\Delta(\lambda)\bigr) \cong k.
$$ Now suppose the theorem is true for all $w, v \in W_p$ with $w\cdot\lambda, v\cdot\lambda \in X^+$ and $l(w)+l(v)\le m$, for some $m\ge 1$. We need to show the result holds for $l(w^\prime)+l(v^\prime)=m+1$, $w^\prime, v^\prime \in W_p$ and $w^\prime \cdot \lambda, v^\prime \cdot \lambda \in X^+$. Without loss of generality we may take $v^\prime = v$ and $w^\prime =ws$ with $s \in \Sigma$. We may also assume that $l(v^\prime)$ or $l(w^\prime)$ is at least 2, so that we can assume $w \ne 1$ (as we have already covered the case $l(w)=l(v)=1$). Apply $\Ext^*\bigl(-,\Delta(v\cdot\lambda)\bigr)$ to the short exact sequence of corollary~\ref{cor:transses} to get \begin{multline*} \Ext^{m}\bigl(T_\mu^{\lambda}\bigl(\nabla(w\cdot\mu)\bigr),\Delta(v\cdot\lambda)\bigr) \rightarrow \Ext^{m}\bigl(\nabla(w\cdot\lambda),\Delta(v\cdot\lambda)\bigr) \\ \rightarrow \Ext^{m+1}\bigl(\nabla(ws\cdot\lambda),\Delta(v\cdot\lambda)\bigr) \rightarrow \Ext^{m+1}\bigl(T_\mu^{\lambda}\bigl(\nabla(w\cdot\mu)\bigr),\Delta(v\cdot\lambda)\bigr). \end{multline*} But $\wfd\bigl(T_\mu^\lambda\bigl(\nabla(w\cdot\mu)\bigr)\bigr)< l(w)$ by \eqref{wfdlen} (provided $w \ne 1$). Now we may apply lemma~\ref{lem:wandg} to get that the first and last $\Ext$ groups above are zero. Thus the middle two groups are isomorphic. So by induction we have \begin{equation*} \Ext^{l(w^\prime) +l(v)}\bigl(\nabla(w^\prime \cdot \lambda),\Delta(v\cdot\lambda)\bigr) \cong \Ext^{l(w)+l(v)}\bigl(\nabla(w\cdot\lambda),\Delta(v\cdot\lambda)\bigr) \cong k.\qedhere \end{equation*} \end{proof} \begin{cor}\label{cor:wfdnabd} For $\lambda, \mu \in X^+$ lying inside an alcove and in the same $W_p$-orbit we have $$\wfd\bigl(\nabla(\lambda)\bigr)= d(\lambda),\quad \gfd\bigl(\Delta(\mu)\bigr)= d(\mu) \quad\mbox{and}\quad \Ext^{d(\lambda)+d(\mu)}\bigl(\nabla(\lambda),\Delta(\mu)\bigr) \cong k.$$ \end{cor} \begin{proof} We have that $\lambda= w \cdot \lambda_0$ and $\mu=v \cdot \lambda_0$ for some $\lambda_0 \in C$. Lemma~\ref{lem:alclen}, theorem~\ref{thm:wfdlamb} and theorem~\ref{thm:extiind} then give us the result. \end{proof} \begin{cor}\label{cor:wfdirr} For $\lambda, \mu \in X^+$ lying inside an alcove and in the same $W_p$-orbit we have $$\wfd\bigl(L(\lambda)\bigr)= d(\lambda) \quad\mbox{and}\quad \Ext^{d(\lambda)+d(\mu)}\bigl(L(\lambda),L(\mu)\bigr) \cong k.$$ \end{cor} \begin{proof} Let $Q$ be the quotient $\nabla(\lambda)/L(\lambda)$. If $L(\nu)$ is a composition factor of $Q$ then $\nu \uparrow \lambda$ and $\nu \ne \lambda$. Thus $l(\nu) < l(\lambda)$. Hence $\wfd(Q) < l(\lambda)=d(\lambda)=l$. Now apply $\Ext^*\bigl(-,\nabla(\nu)\bigr)$ to the short exact sequence $$0 \rightarrow L(\lambda) \rightarrow \nabla(\lambda) \rightarrow Q \rightarrow 0$$ to get $$ \cdots \rightarrow \Ext^l\bigl(Q,\nabla(\nu)\bigr) \rightarrow \Ext^l\bigl(\nabla(\lambda),\nabla(\nu)\bigr) \rightarrow \Ext^l\bigl(L(\lambda),\nabla(\nu)\bigr)\rightarrow 0 $$ where the last zero follows by lemma~\ref{lem:wandg}. We also have that $\Ext^l\bigl(Q,\nabla(\nu)\bigr)=0$ by lemma~\ref{lem:wandg}. Thus $\wfd\bigl(L(\lambda)\bigr)= d(\lambda)=l$ as required.
A similar argument yields that \begin{equation*} \Ext^{d(\lambda)+d(\mu)}\bigl(L(\lambda),L(\mu)\bigr) \cong \Ext^{d(\lambda)+d(\mu)}\bigl(\nabla(\lambda),\Delta(\mu)\bigr)\cong k.\qedhere \end{equation*} \end{proof} Ryom-Hansen's result in the appendix, theorem 2.4, states that for $\lambda \in C$ and $w,v \in W_p$ with $v \le w$ and $w\cdot\lambda, v\cdot\lambda \in X^+$ $$ \Ext^{l(w)-l(v)}\bigl(L(w \cdot \lambda), \nabla(v\cdot \lambda)\bigr) \cong k. $$ We also know that if $i >l(w) -l(v)$ then $\Ext^i\bigl(L(w \cdot \lambda), \nabla(v\cdot\lambda)\bigr) \cong 0$ by the appendix, lemma 2.1 (see also \cite{jantz}, proposition 6.20). Using this result, together with remark~\ref{rem:bruhat} and lemma~\ref{lem:alclen}, we may now prove \begin{propn}\label{propn:nabnab} Let $\lambda \in C$ and $w,v \in W_p$ with $w\cdot\lambda, v\cdot\lambda \in X^+$ and $v\cdot\lambda \uparrow w\cdot\lambda$. Then $$\Ext^{l(w)-l(v)}\bigl(\nabla(w\cdot\lambda),\nabla(v\cdot\lambda)\bigr) \cong k.$$ \end{propn} \begin{proof} We may argue along similar lines to the proof of corollary~\ref{cor:wfdirr}. \end{proof} We are now in a position to deduce the projective and injective dimensions of several modules for the generalised Schur algebras. We define $\Pi(\lambda)$ to be the (finite) saturated subset of $X^+$, with respect to the $\uparrow$-ordering, whose highest weight is $\lambda$. \begin{thm}\label{thm:injproj} Suppose $\lambda\in X^+$ is regular (lies inside an alcove). Then in $\mathrm{mod}(S(\Pi(\lambda)))$, for $\mu \in \Pi(\lambda)$, we have $$ \inj(L(\mu))=\proj(L(\mu))=\proj(\nabla(\mu))=\inj(\Delta(\mu))=d(\mu)+d(\lambda) $$ and $$ \inj(\nabla(\mu))=\proj(\Delta(\mu))=d(\lambda)-d(\mu). $$ \end{thm} In particular this gives us information for the blocks of the Schur algebra whose weights are regular. \section{The Global Dimension of $S(n,r)$ when $p>n$}\label{sect:glob} We will now focus on the classical Schur algebra. So $G=\mathrm{GL}_n(k)$; the root system of $\mathrm{GL}_n$ is irreducible and its derived subgroup $\mathrm{SL}_n$ is simply connected. We wish to determine the good filtration dimension and global dimension for $S(n,r)$ (that is, for the \emph{whole} Schur algebra, not just for the regular blocks). So we need to know what $d(\lambda)$ is for $\lambda$ a partition, and a condition for $\lambda$ to lie inside an alcove. The next two lemmas do this. \begin{lem}\label{lem:dlambA} Suppose $G=\mathrm{GL}_n(k)$ and $\lambda =(\lambda_1,\lambda_2,\ldots, \lambda_n)\in X^+$. Then we have $$d(\lambda)= \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \left\lfloor \frac{\lambda_i -\lambda_j -i+j-1}{p}\right\rfloor.$$ \end{lem} \begin{proof} Let $e_i=(0,\ldots,0,1,0,\ldots,0)\in X$ with a one in the $i$th position. The $e_i$ form the usual basis of $X$, so $\lambda = (\lambda_1,\lambda_2,\ldots , \lambda_n)=\sum_{i=1}^n \lambda_i e_i$. We take $\omega_i=\sum_{j=1}^i e_j$. We can write $\lambda= \sum_{i=1}^{n-1} (\lambda_i-\lambda_{i+1})\omega_i +\lambda_n\omega_n$. Thus for $\alpha = e_i-e_j\in R^+$, we have $$\langle \lambda+\rho, \alpha\check{\ }\, \rangle =\lambda_i-\lambda_j+j-i.$$ The definition of $d(\lambda)$ then gives us the result. \end{proof} \begin{lem}\label{lem:weightalc} A weight $\lambda \in X^+$ lies inside an alcove if there exist no integers $i$ and $j$ such that $\lambda_i -\lambda_j \equiv i-j \pmod p$. \end{lem} \begin{proof} A weight $\lambda \in X^+$ lies on a wall if there exists $\alpha \in R^+$ such that $\langle \lambda +\rho, \alpha\check{\ }\, \rangle = mp$ for some $m \in \mathbb{Z}$. So a weight $\lambda$ does not lie on a wall if for all $\alpha=e_i-e_j$ we have $\lambda_i-\lambda_j +j-i \not\equiv 0 \pmod p$. \end{proof}
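As a sanity check, the two lemmas above are easy to evaluate numerically. The following short Python sketch is ours, not part of the paper; it simply transcribes the formula of lemma~\ref{lem:dlambA} and the congruence condition of lemma~\ref{lem:weightalc} (with $0$-based indexing). For example, for $p=5$ the dominant weight $(11,1,1)$ lies inside an alcove and has $d(\lambda)=4$.
\begin{verbatim}
def d_weight(lam, p):
    # lemma: d(lambda) = sum over i < j of
    # floor((lam_i - lam_j + j - i - 1) / p)
    n = len(lam)
    return sum((lam[i] - lam[j] + (j - i) - 1) // p
               for i in range(n) for j in range(i + 1, n))

def inside_alcove(lam, p):
    # lemma: lambda lies inside an alcove iff lam_i - lam_j is
    # never congruent to i - j modulo p for i < j
    n = len(lam)
    return all((lam[i] - lam[j] - (i - j)) % p != 0
               for i in range(n) for j in range(i + 1, n))

print(d_weight((11, 1, 1), 5), inside_alcove((11, 1, 1), 5))
# prints: 4 True
\end{verbatim}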
We first calculate an upper bound for $\wfd\bigl(S(n,r)\bigr)$. Let $E=L(1,0,\ldots,0)$ be the natural module for $\mathrm{GL}_n$. We take $S^r E$ to be the $r$th symmetric power of $E$ and $\mbox{$\bigwedge$}^r E$ to be the $r$th exterior power. For $\lambda = (\lambda_1,\lambda_2, \ldots, \lambda_n)\in \partn(n,r)$ we take $S^\lambda E= S^{\lambda_1} E \otimes S^{\lambda_2} E \otimes \cdots \otimes S^{\lambda_n} E$ with $S^1 E = E$ and $S^0 E = k$. \begin{lem}\label{lem:wfdS2} $$\wfd\bigl(S(n,r)\bigr)=\max\{\wfd(S^\lambda E)\mid \lambda\in \partn(n,r)\}.$$ \end{lem} \begin{proof} We have that $L(\lambda)$ embeds in $S^\lambda E$ by~\cite{donkbk}, section 2.1 (15)(i)(b). So the set $\mathcal{X}=\{S^\lambda E \mid \lambda \in \partn(n,r)\}$ satisfies the requirements of remark~\ref{rem:wfdS}. \end{proof} For all $\lambda$ and $\mu \in X^+$ the module $\nabla(\lambda) \otimes \nabla(\mu)$ has a good filtration. A proof of this property, for type $A_n$, is given in~\cite{wang1}. It is proved for most other cases in~\cite{donkrat}. The general proof is given in~\cite{mathieu}. The $\nabla(\nu)$ which appear as quotients in this filtration are given by Brauer's character formula (\cite{jantz}, II, lemma 5.8). We can generalise this property to good and Weyl filtration dimensions as below. \begin{lem}\label{lem:wfdten} Let $X$, $Y$ be $G$-modules; then we have $$\wfd(X\otimes Y)\le \wfd(X)+ \wfd(Y).$$ \end{lem} \begin{proof} See~\cite{fripar}, proposition 3.4 (c), where the corresponding result for good filtration dimensions is proved. \end{proof} \begin{lem}\label{lem:ses} We have a short exact sequence \begin{multline*} 0 \rightarrow \nabla(mp-j,1^{j},0^{n-j-1} ) \rightarrow S^{mp-j}E \otimes \mbox{$\bigwedge$}^{j} E \rightarrow \nabla(mp-j+1,1^{j-1},0^{n-j}) \rightarrow 0. \end{multline*} \end{lem} \begin{proof} Since $S^{mp-j}E \otimes \mbox{$\bigwedge$}^{j} E$ has a good filtration by the dual version of lemma~\ref{lem:wfdten}, this follows using characters. \end{proof} \begin{propn}\label{propn:wfdSless} $$\wfd(S^rE) \le (n-1)\left\lfloor\frac{r}{p}\right\rfloor.$$ \end{propn} \begin{proof} We first reduce to the case $S^{mp}E$. Write $r=r_0+pm$. If $0<r_0<p$ then the multiplication $S^{r_0}E\otimes S^{pm}E\to S^rE$ splits (\cite{donkbk}, section 4.8, proposition (12)), so that $\wfd(S^rE)\le \wfd(S^{r_0}E\otimes S^{pm}E)\le \wfd(S^{r_0}E)+\wfd(S^{pm}E)= \wfd(S^{pm}E)$, as $S^{r_0}E \in \doog$. So suppose $r=mp$. We prove this proposition by induction on $m$. The proposition is clearly true for $m=0$. We note that the modules $\mbox{$\bigwedge$}^jE$ are tilting modules for $S(n,r)$ (\cite{donktilt}, lemma 3.4 (ii)), and hence have Weyl filtration dimension $0$. Dimension shifting using the induction hypothesis, lemma~\ref{lem:wfdten}, lemma~\ref{lem:ses} and lemma~\ref{lem:wandg} gives us \begin{align*} \Ext^i\bigl(S^rE,\nabla(\mu)\bigr) &\cong \Ext^{i-1}\bigl(\nabla(r-1,1,0^{n-2}),\nabla(\mu)\bigr) \\ &\cong \cdots\cong \Ext^{i-j}\bigl(\nabla(r-j,1^j,0^{n-j-1}),\nabla(\mu)\bigr) \end{align*} for $\mu\in \partn$ and $i-j>(n-1)(m-1)\ge\wfd(S^{mp-j}E\otimes \mbox{$\bigwedge$}^jE)$.
So for $i>(n-1)m$ we have \begin{align*} \Ext^i\bigl(S^rE,\nabla(\mu)\bigr)& \cong \Ext^{i-n+1}\bigl(\nabla(r-n+1,1^{n-1}),\nabla(\mu)\bigr) \\ &\cong\Ext^{i-n+1}\bigl(S^{r-n}E\otimes \mbox{$\bigwedge$}^nE,\nabla(\mu)\bigr) \cong 0 \end{align*} by induction and lemma~\ref{lem:wfdten}. Hence $\wfd(S^rE) \le (n-1)m$ as required. \end{proof} Let $\lambda \in \partn$ with $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)$. We define $$\left\lfloor\frac{\lambda}{p}\right\rfloor = \sum_{i=1}^n \left\lfloor\frac{\lambda_i}{p}\right\rfloor.$$ \begin{cor} $$\wfd\bigl(S(n,r)\bigr) \le (n-1)\left\lfloor\frac{r}{p}\right\rfloor.$$ \end{cor} \begin{proof} We have $\wfd(S^\lambda E)\le (n-1)\bigl\lfloor\frac{\lambda}{p}\bigr\rfloor$ using lemma~\ref{lem:wfdten} and proposition \ref{propn:wfdSless}. The result now follows using lemma~\ref{lem:wfdS2} and noting that $\bigl\lfloor\frac{\lambda}{p}\bigr\rfloor\le \bigl\lfloor\frac{r}{p}\bigr\rfloor$ for all $\lambda \in \partn$. \end{proof} \begin{thm}\label{thm:glob1} If $p>n$ then the Weyl (and the good) filtration dimension of the Schur algebra $S(n,r)$ is $$\wfd\bigl(S(n,r)\bigr) = (n-1)\left\lfloor\frac{r}{p}\right\rfloor.$$ The global dimension of $S(n,r)$ is twice this value. \end{thm} \begin{proof} The previous corollary tells us that this value for $\wfd(S)$ is an upper bound for all $p$. So for $p>n$ we just need to give a weight in $\partn(n,r)$ whose Weyl filtration dimension attains this bound. We write $r=r_1p +r_0$ for $r_1$, $r_0 \in \mathbb{N}$ and $0\le r_0 \le p-1$. Since $p> n$ we can write $r_0=bn+a$ where $a,b\in \mathbb{N}$ and $0\le a\le n-1$. Consider the weight $\mu=(r_1p +1, 1^{a-1},0^{n-a})+b(1^n)\in \partn(n,r)$. If $a=0$ then we take $\mu=(r_1p,0^{n-1})+b(1^n)$. The weight $\mu$ lies inside an alcove by lemma~\ref{lem:weightalc}. Also $d(\mu)= (n-1)r_1$. Hence $\wfd\bigl(\nabla(\mu)\bigr)=(n-1)r_1$, and so the bound is attained. Corollary~\ref{cor:wfdnabd} also tells us that there is a non-zero $\Ext$ group in degree $2(n-1)r_1$. Hence the global dimension of $S(n,r)$ is twice the Weyl filtration dimension by corollary~\ref{cor:glob}. \end{proof} \begin{thm}\label{thm:glob2} Let $m \in \mathbb{N}$; then the Weyl (and the good) filtration dimension of the Schur algebra $S(p,mp)$ is $$\wfd\bigl(S(p,mp)\bigr) = (p-1)m.$$ The global dimension of $S(p,mp)$ is twice this value. \end{thm} \begin{proof} The weight $(mp,0,\ldots,0)\in\partn$ lies inside an alcove by lemma \ref{lem:weightalc}. The same argument as in the previous proof then gives us the result. \end{proof} The values calculated for $\wfd\bigl(L(\lambda)\bigr)$ with $\lambda$ inside an alcove for $n=2$ and $n=3$ agree with our previous results for $\mathrm{SL}_2$ and $\mathrm{SL}_3$ calculated in~\cite[sections 3 and 5]{parker1}. This gives a new proof of~\cite{parker1}, theorem 3.7 (for all $p$), and of \cite{parker1}, theorem 5.12, in the cases where $p \ge 5$, and where $p=3$ and $3 \mid r$. It is still an open problem to determine what happens for weights which are not regular. Many of the results above give upper bounds, but most of the time these bounds are not sharp. Various conjectures for the value of $\wfd(S(n,r))$ are presented in~\cite{mythesis}, section 6.5. \section{The quantum case}\label{sect:quant} We now show that the arguments in sections \ref{sect:wfd} and \ref{sect:glob} generalise to the quantum case. To do this we need the appropriate quantum versions of the results used. We will be using the Dipper-Donkin quantum group $q$-$\mathrm{GL}_n$ defined in~\cite{dipdonk}.
Our field $k$ remains algebraically closed, but $k$ may now have characteristic zero as well as positive characteristic. Background information can be found in~\cite{donkbk}. The cohomological theory of quantum groups and their $q$-Schur algebras appears in~\cite{donkquant}. When $q=1$ the module category for $q$-$\mathrm{GL}_n$ is the same as for $\mathrm{GL}_n$. If $q$ is not a root of unity then $\mathrm{mod}(q$-$\mathrm{GL}_n)$ is semi-simple. We will consider the case where $q$ is a primitive $l$th root of unity with $l \ge 2$. All of the structures defined in section~\ref{sect:prelim} have their quantum analogues, which are essentially the same. The most significant difference for us will be that $p$-alcoves and $p$-hyperplanes will be replaced by $l$-alcoves and $l$-hyperplanes. We need the quantum version of translation functors. These are defined in~\cite{andpolwen}, section 8, together with the quantum version of proposition~\ref{propn:transgood} (\cite{andpolwen}, theorem 8.3). All of our proofs in section \ref{sect:wfd} now carry through in the quantum case with $p$ replaced by $l$. So the statements of theorems~\ref{thm:wfdlamb} and \ref{thm:extiind} and their corollaries \ref{cor:wfdnabd} and \ref{cor:wfdirr} are equally valid in the quantum case when $l \ge h$ (even if $k$ has characteristic 0). We also expect that the result in the appendix carries through in the quantum case, so that we would also have the quantum versions of proposition \ref{propn:nabnab} and theorem \ref{thm:injproj}. We now consider the quantised Schur algebra, $S_q(n,r)$. This can be constructed in the same way as in the classical case. Take the saturated subset of dominant weights $\Pi=\partn(n,r)$; then the quasi-hereditary algebra $S(\Pi)$ is isomorphic to $S_q(n,r)$, the quantised Schur algebra. Moreover we have the same ordering -- namely the $\uparrow$-ordering defined using the action of the affine Weyl group. The proofs in section \ref{sect:glob} work equally well in the quantum case with $p$ replaced by $l$, where $q$ is an $l$th root of unity with $l \ge 2$. So we get an upper bound for $\wfd\bigl(S_q(n,r)\bigr)$ of $(n-1)\bigl\lfloor \frac{r}{l} \bigr\rfloor$. Together with the quantum version of the results of section \ref{sect:wfd} we may now deduce the following theorem. \begin{thm} If $q$ is a primitive $l$th root of unity with $l>n$ then the Weyl (and the good) filtration dimension of the quantised Schur algebra $S_q(n,r)$ is $$\wfd\bigl(S_q(n,r)\bigr) = (n-1)\left\lfloor\frac{r}{l}\right\rfloor.$$ Suppose $l \ge 2$ and let $m \in \mathbb{N}$. Then we have $$\wfd\bigl(S_q(l,ml)\bigr) = (l-1)m.$$ In both these cases the global dimensions of $S_q(n,r)$ and $S_q(l,ml)$ are twice their Weyl filtration dimensions. \end{thm} Again, this result depends only on $l$ and not on the characteristic of the field $k$. \section{Category $\mathcal{O}$} There are analogous situations in Category $\mathcal{O}$, defined by Bern\v ste\u\i n, Gel'fand and Gel'fand~\cite{BGG2}. Category $\mathcal{O}$ is known to be a highest weight category (see \cite{klukoe}, section 4.1, for a basic introduction), so we can apply the general theory of section~\ref{sect:qha}. We use the setup of~\cite{carlin}, although note that~\cite{carlin} uses the terminology `$p$-filtration' for what we have defined to be a Weyl filtration. There $\mathfrak{g}$ is a complex, semi-simple Lie algebra with Cartan subalgebra $\mathfrak{h}$ and Weyl group $W$. We denote the longest element of $W$ by $w_0$.
The standard modules for $\mathcal{O}$ are the well-known Verma modules, denoted $M(\lambda)$ for $\lambda \in \mathfrak{h}^*$. We also have that $[M(\mu):L(\lambda)]\ne 0$ if and only if there are positive roots $\gamma_1,\ldots, \gamma_m$ such that there is a chain of inequalities $\mu \ge s_{\gamma_1}(\mu) \ge \cdots \ge s_{\gamma_m}\cdots s_{\gamma_1}(\mu) = \lambda$ (\cite{BGG1}). We may use \cite{carlin}, proposition 3.7, theorem 3.8 and theorem 4.6 to deduce that $\gfd\bigl(M(w \cdot \lambda)\bigr)=\gfd\bigl(L(w\cdot \lambda)\bigr)=l(w_0)-l(w)$ and $\proj\bigl(M(w\cdot \lambda)\bigr) = l(w)$ for $\lambda$ an integral weight inside the dominant Weyl chamber. We may deduce that $\proj\bigl(L(w \cdot \lambda)\bigr) \le 2l(w_0)- l(w)$. These last two statements are consistent with \cite{BGG2}, statements 1 and 2. We also have translation functors and the analogue of proposition~\ref{propn:transgood}, and hence of corollary~\ref{cor:transses}. Unfortunately the analogue of lemma~\ref{lem:wfdmu} may no longer be true. Our argument does not work in this situation, and indeed already fails for type $A_2$. However, in~\cite{BGG2}, remark in \S7, it is stated that $\Ext^{2l(w_0)}\bigl(L(\lambda), L(\lambda)\bigr) \cong \mathbb{C}$. So there is strong evidence to suggest that $\Ext^{2l(w_0)-l(w)-l(v)}\bigl(L(w\cdot\lambda),L(v\cdot\lambda)\bigr) \cong \mathbb{C}$ for $v$, $w \in W$. The results of~\cite{BGG2}, \S7, are already enough to deduce that the global dimension of $\mathcal{O}$ is $2l(w_0)$.
\section{Introduction} RNA takes part in a variety of important cellular activities, including protein synthesis, intron splicing, gene silencing, and genome rearrangement (Lee et al., 2002; Mochizuki et al., 2002; Gratias \& Betermier, 2003; Yang et al., 2003). Considering the extensive functional repertoire of RNA molecules, it is of interest to determine their (functional) native structures and to understand the (kinetic) process by which they fold into such structures. The native structures of RNA molecules can be computed efficiently, at the functionally relevant secondary structure level, using free-energy minimization methods (Hofacker et al., 1994; Mathews et al., 2004) or, in cases where sufficient homologous sequences are available, by phylogenetic comparisons (Woese \& Pace, 1993; Cannone et al., 2002). Several algorithms have been developed for studying the kinetic process by which RNA molecules fold into their native secondary (Mironov \& Kister, 1985; Morgan \& Higgs, 1996; Flamm et al., 2000; Zhang \& Chen, 2000; Wolfinger et al., 2003; Tang et al., 2004; Ndifon \& Nkwanta, 2005) and tertiary (Abrahams et al., 1990; Gultyaev et al., 1990; Isambert, 2000; Xayaphoumine et al., 2003) structures. The majority of these algorithms (e.g., see Isambert, 2000; Xayaphoumine et al., 2003; Ndifon \& Nkwanta, 2005) operate on a helix-based move-set, involving the formation and dissociation of entire RNA helices. On the other hand, a few of the algorithms (e.g., see Zhang \& Chen, 2000; Wolfinger et al., 2003) operate on a pair-based move-set; they model the RNA folding process as a time-series of structural transitions, involving the formation and dissociation of individual RNA base pairs. Flamm and colleagues (Flamm, 1998; Flamm et al., 2000) have extended the aforementioned pair-based move-set by introducing the concept of base pair \textit{shifting}, which makes possible the description of the biological process of defect-diffusion, believed to be an important feature of the \textit{in vivo} folding kinetics of RNA (Poerschke, 1974a). The folding model developed in this paper implements this extended pair-based move-set and is inspired by the theory of complex adaptive systems (see Section 3). The applicability of the model is illustrated through several examples based on selected natural and synthetic RNAs (see Section 4). In particular, the folding kinetics of the yeast tRNA$^{Phe}$ is shown to be strongly influenced by modifications to specific hairpin loops. In addition, a characteristic optimal folding temperature $T_{opt}$ $\left( \approx 313K\right) $ of tRNA$^{Phe}$, at which the native state exhibits maximal accessibility, is identified. Furthermore, estimates are obtained for the population dynamics of two alternative stable states of SV11, an RNA species that is replicated by $Q\beta $ replicase (Zamora et al., 1995). The remainder of this paper is organized as follows. In Section 2, we present some concepts related to RNA secondary structures and folding kinetics. We introduce the theory of complex adaptive systems and discuss details of the new folding model in Section \ref{themodel}. In Section \ref{applics}, we apply the model to some example problems, and we discuss other possible applications in Section \ref{concl}. \section{Background information} \subsection{RNA secondary structure} Let $X$ be an arbitrary RNA sequence of length $n$. We think of $X$ as a string $X=x_{1}x_{2}\cdots x_{n}$ defined over the nucleotide alphabet $\left\{ A,C,G,U\right\} $.
The nucleotides or bases of $X$ have a propensity to \textit{pair} (or form canonical and non-canonical bonds) with each other. A pair formed by the bases $x_{i}$ and $x_{j}$, $i<j$, is denoted by $\left( i,j\right) $. Two base pairs $\left( i,j\right) $ and $\left( i',j'\right) $ are said to be compatible if either $i<i'<j'<j$ or $i<j<i'<j'$. If we let $H$ be the set of possible pairs that can be formed by the bases of $X$, then a secondary structure $S$ of $X$ can be thought of as a set of mutually compatible base pairs drawn from $H$. The collection of all subsets of $H$, including the empty set (i.e., the open chain), forms the conformation space of $X$, denoted here by $\zeta (X)$. Note that incompatible base pairs form pseudoknots, which are prohibited from occurring in the folding model developed in this paper. The model can, however, be readily extended to allow the formation of pseudoknots once reliable thermodynamic parameters for such tertiary structural elements become available. \subsection{Kinetic folding} The kinetic folding of an RNA sequence $X$ at the coarse-grained secondary structure level can be thought of as a time-series of structural transitions, mediated by a set of operations called the move-set (Flamm, 1998). Each operation or \textit{move} converts one secondary structure $S_{i}\in \zeta \left( X\right) $ into another $S_{j}\in \zeta \left( X\right) $. For each structure $S_{i}\in \zeta \left( X\right) $, the move-set defines a neighborhood $N\left( S_{i}\right) $ such that $S_{j}\in \zeta \left( X\right) $ belongs to $N\left( S_{i}\right) $ if and only if $d\left( S_{i},S_{j}\right) \leq d_{0}\in \mathbb{R}^{+}$, where $d\left( S_{i},S_{j}\right) $ is the (Hamming) distance between structures $S_{i}$ and $S_{j}$ and $d_{0}$ is the "move distance". Only \textit{moves} that convert $S_{i}$ into some $S_{j}\in N\left( S_{i}\right) $ are legal. The probability of a legal move is given by a rate equation, an example of which is the Metropolis rule (Metropolis et al., 1953): \begin{equation} P\left( S_{i}\rightarrow S_{j}\right) =\left\{ \begin{array}{ll} e^{\frac{-\left( G_{j}-G_{i}\right) }{RT}}, & \text{if }G_{j}>G_{i}, \\ 1, & \text{if }G_{j}\leq G_{i}, \end{array}\right. \end{equation} where $P\left( S_{i}\rightarrow S_{j}\right) $ is the probability of converting $S_{i}$ into $S_{j}\in N\left( S_{i}\right) $ by a single \textit{move}; $G_{i}$ and $G_{j}$ are, respectively, the free energies of $S_{i}$ and $S_{j}$, computed using a suitable choice of free-energy parameters (e.g., Mathews et al., 1999). Conventional Monte Carlo RNA folding algorithms execute in each time step $t+\delta t$ a \textit{move} that converts the nascent RNA structure $S_{t}$ into some structure $S_{t+\delta t}$, where $S_{t}$ denotes the secondary structure of $X$ at time $t$. The time-ordered series of structures $\left\{ S_{t}\right\} _{t\geq 0}$, with $S_{t}\in \zeta \left( X\right) $ and $S_{t=0}$ the open chain, is called a folding trajectory of $X$. The folding time $\tau _{f}$ associated with a given folding trajectory is the minimum value of $t$ for which $S_{t}$ is the native structure of $X$. A folding trajectory satisfying the following condition is called a folding path (Flamm et al., 2000): $S_{t_{1}}=S_{t_{2}}$ if and only if $t_{1}=t_{2}$.
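To make the move-set concrete, the following minimal Python sketch (ours, not taken from any of the cited programs; the names and units are assumptions, with energies in kcal/mol) implements the compatibility test that excludes pseudoknots and the Metropolis acceptance rule of equation (1).
\begin{verbatim}
import math
import random

R = 1.987e-3  # gas constant in kcal/(mol K)

def compatible(p1, p2):
    # (i,j) and (i',j') are compatible if nested or disjoint;
    # crossing pairs would form a pseudoknot.  Pairs sharing a
    # base cannot coexist in a structure and are not handled here.
    (i, j), (k, l) = sorted([p1, p2])
    return k > j or l < j

def metropolis_accept(G_i, G_j, T):
    # Equation (1): downhill moves are always accepted; uphill
    # moves are accepted with probability exp(-(G_j - G_i)/RT).
    if G_j <= G_i:
        return True
    return random.random() < math.exp(-(G_j - G_i) / (R * T))
\end{verbatim}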
\section{The RNA folding model\label{themodel}} \subsection{Complex adaptive systems} The RNA folding model presented below is inspired by basic ideas from the theory of complex adaptive systems. Specifically, a complex adaptive system (CAS) is characterized by the presence of a diverse ensemble of components that engage in local interactions and an autonomous process that selects a subset of those components for enhancement based on the results of the local interactions (Levin, 1998). From these component-level dynamics emerge important global (i.e., system-level) properties such as self-organization and nonlinearity. Self-organization refers to the emergence of order from local interactions between the components of a CAS. Self-organization tends to drive a CAS towards stable configurations or states. In addition, a CAS exhibits nonlinearity; the rules that govern local interactions between the components of a CAS change as the CAS evolves (Levin, 1998). Consequently, a CAS may evolve along any one of a multitude of trajectories and may attain alternative stable states (Levin, 1998), depending on its particular evolutionary trajectory. See Levin (1998) and the references therein for further information on CASs. \subsection{Details of the model} The kinetic folding of RNA sequences into secondary structures is viewed here as a time-series of structural rearrangements (SRs), involving the formation, dissociation, and \textit{shifting} of individual RNA base pairs. It is modeled as a hierarchically-structured CAS; at the lowest level of the hierarchy are RNA bases and base pairs that engage in local stacking interactions. The results of these stacking interactions determine the probabilities (or fitnesses) of possible SRs. These probabilities are given by \begin{equation} P_{f}^{\left( i,j\right) }=e^{-\frac{\Delta G^{ij}}{2RT}}, \label{pf} \end{equation} \begin{equation} P_{d}^{\left( i,j\right) }=\frac{1}{P_{f}^{\left( i,j\right) }},\text{ and} \label{pd} \end{equation} \begin{equation} P_{s}^{\left( i,j\right) \rightarrow \left( i,k\right) }=P_{d}^{\left( i,j\right) }P_{f}^{\left( i,k\right) }, \label{ps} \end{equation} where $P_{f}^{\left( i,j\right) }$, $P_{d}^{\left( i,j\right) }$, and $P_{s}^{\left( i,j\right) \rightarrow \left( i,k\right) }$ are, respectively, the probabilities of formation and dissociation of $\left( i,j\right) $, and of \textit{shifting} $\left( i,j\right) $ into $\left( i,k\right) $; $R$ is the gas constant, $T$ is the absolute temperature, and $\Delta G^{ij}$ is the stacking (including single-base stacking) energy associated with $\left( i,j\right) $. For an isolated base pair $\left( i,j\right) $, the stacking energy is augmented by a loop-entropy term: \begin{equation} \Delta G^{ij}\longmapsto \Delta G^{ij}+c\ln \left( j-i\right), \qquad c\geq 0. \label{isol} \end{equation} Equation $\left( \text{\ref{isol}}\right) $ takes into account the entropy of a loop of size $\left( j-i\right) $. A suitable value for the parameter $c$ is $1.75$ (Fisher, 1966). Stacking energy calculations are based on the Turner $3.1$ energy rules (Mathews et al., 1999) at temperature $T=310.5K$, and on the Turner $2.3$ energy rules (Freier et al., 1986) at other temperatures. Equations $\left( \text{\ref{pf}}\right) $, $\left( \text{\ref{pd}}\right) $ and $\left( \text{\ref{ps}}\right) $ are based on Kawasaki dynamics (Kawasaki, 1966); here, the dynamics involve transitions between states associated with specific local contexts of an RNA secondary structure.
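For concreteness, here is a small sketch (again ours; the function names are assumptions, and no normalization over the set of possible SRs is attempted) that evaluates equations (\ref{pf})--(\ref{isol}) for given stacking energies.
\begin{verbatim}
import math

R = 1.987e-3  # kcal/(mol K); stacking energies dG in kcal/mol

def p_form(dG, T):
    # equation (2): fitness of forming (i,j) with stacking energy dG
    return math.exp(-dG / (2 * R * T))

def p_dissociate(dG, T):
    # equation (3): dissociation is the reciprocal of formation
    return 1.0 / p_form(dG, T)

def p_shift(dG_ij, dG_ik, T):
    # equation (4): shift the pair (i,j) into (i,k)
    return p_dissociate(dG_ij, T) * p_form(dG_ik, T)

def dG_isolated(dG, i, j, c=1.75):
    # equation (5): loop-entropy penalty for an isolated pair (i,j)
    return dG + c * math.log(j - i)
\end{verbatim}
Note that these quantities are unnormalized fitnesses rather than probabilities summing to one; it is the stochastic sampling step described next that turns them into selection probabilities.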
The next level of the hierarchical structure is occupied by SRs. It is at this level that selection operates. An autonomous stochastic sampling process (Baker, 1987) periodically (i.e., from one time step to another) selects a subset of possible SRs for realization based on the fitnesses of the SRs. The ensemble of possible SRs changes from one time step to another, thereby assuring its diversity (see below). Note that due to the inter-dependence of local stacking interactions, on which the fitnesses of SRs depend, it is necessary to ensure that the SRs that are selected for realization in the same time step be mutually independent. Therefore, if there is an SR involving the base $x_{i}$, then no other SR that involves the nearest-neighbor bases and base pairs of $x_{i}$ can occur in the same time step. Furthermore, in order to prevent the formation of pseudoknots, SRs may only involve accessible bases; two bases $x_{i}$ and $x_{j}$, $i<j$, are accessible if for $k=i+1,\ldots ,j-1$ there is no base pair $\left( k,l\right) $ $\left( \text{resp}.\left( l,k\right) \right) $ such that $l>j$ (resp. $l<i$). To understand the model just described, consider the kinetic folding of an RNA sequence $X$. Denote by $\Re $ the set of possible SRs that are available for realization in a given time step. $\Re $ will contain as many elements as there are structures in $N\left( S_{t}\right) $, where $S_{t}$ is the nascent structure of $X$. Each element of $\Re $ is associated with specific bases and base pairs that belong to that element's "local context". For instance, an SR that involves the formation of the base pair $\left( i,j\right) $ is associated with $\left( i,j\right) $ and all nearest-neighbor bases and base pairs of $\left( i,j\right) $. Stacking interactions between the bases and base pairs associated with a given SR determine that SR's probability or fitness. This fitness is used periodically by an autonomous stochastic process to select a subset of SRs from $\Re $ for realization. As SRs are realized (and removed from $\Re $), existing SRs may become impractical while new SRs may become possible. For instance, the formation of $\left( i,j\right) $ in a given time step may make possible the \textit{shifting} of $\left( i,j\right) $ into some $\left( i,k\right) $ in the next time step. Conversely, the dissociation of $\left( i,j\right) $ in a given time step will render impractical the \textit{shifting} of $\left( i,j\right) $ into some other base pair in the next time step. New SRs that become possible are added to $\Re $ while those that become impractical are removed from $\Re $. This assures the diversity of possible SRs. Note that the idea that the selective enhancement of components (here, SRs) leads to their own removal, as well as to the elimination of other components from the system, appears to be at odds with what takes place in most known CASs. In the present case, the goal of selection is, indirectly, to enhance the thermal stabilities of the local contexts associated with SRs. We note here that the inter-dependence of local stacking interactions, on which the fitnesses of SRs depend, implies that a given SR may "interact" with many other SRs. For instance, an SR that involves the base pair $\left( i,j\right) $ will "interact" with all SRs that involve either of the bases $x_{i}$ and $x_{j}$. This implies a relatively high average degree of "epistatic" interactions between SRs.
Results from studies based on random Boolean networks predict that such a high degree of epistasis leads to rugged fitness landscapes with numerous attractors (Kauffman and Levin, 1987; Kauffman, 1989). This prediction is consistent with the well-known rugged nature of RNA folding energy landscapes. We further note that SRs, as defined in the model, operate at much more local scales of space than is the case with most existing folding methods (e.g., see Flamm et al., 2000; Isambert et al., 2000; Tang et al., 2004). The fitnesses of SRs depend exclusively on the stacking energies associated with specific local contexts of the nascent RNA structure $S_{t}$, and not on the free energies of structures found in $N\left( S_{t}\right) $. Therefore, there is no need for explicit computation of the free energies of RNA structures in the present model, in contrast to, e.g., the folding method of Flamm et al. (2000). We expect the model to reproduce global characteristics of CASs such as self-organization and nonlinearity. In particular, RNA molecules are "self-organizing" since, through their own internal dynamics, they tend to fold into thermodynamically favorable or stable states. Folding RNA molecules also exhibit nonlinear dynamics, as evinced by their attainment of alternative stable states. In Section 4, we will illustrate such nonlinear dynamics using an example based on SV11. \subsection{Computer implementation of the model} The above model has been implemented in the computer program \textit{kfold}, which is available from the author upon request. For an input RNA sequence of length $n$, the program selects $m=1$ SR, if $n\leq 30$, and $m=7$ SRs, if $n>30$, for realization in each time step. The folding time is incremented in each time step by the reciprocal of the product of $m$ and the sum of the fitnesses of all possible SRs. The number of selected SRs $m$ can be adjusted by the user. Note that the choice of $m$ influences the computer time required to fold an input RNA sequence but has minimal effect on the qualitative folding kinetics of the sequence (see example in Table 1). Also note that in order to speed up folding simulations, the program currently only allows the formation of base pairs that can be stacked. Specifically, a base pair $\left( i,j\right) $ can be stacked if there exist complementary bases $x_{l}$ and $x_{k}$, $l<k$, such that either $i-k=l-j=1$ or $k-i=j-l=1$. \begin{equation*} \begin{tabular}{|l|l|l|l|} \hline $m$ & {\small Time steps} & {\small Fraction of A} & {\small Fraction of B} \\ \hline $1$ & $249$ & $0.40$ & $0.60$ \\ \hline $3$ & $89$ & $0.41$ & $0.59$ \\ \hline $5$ & $134$ & $0.38$ & $0.62$ \\ \hline $7$ & $169$ & $0.42$ & $0.58$ \\ \hline $9$ & $241$ & $0.45$ & $0.55$ \\ \hline $11$ & $>30000$ & $0.40$ & $0.60$ \\ \hline \end{tabular} \end{equation*} \begin{quotation} Table 1. Influence of $m$ on folding kinetics for the sequence \begin{equation*} GUCCUUGCGUGAGGACAGCCCUUAUGUGAGGGC, \end{equation*} with $n=33$. It was folded with ((((((((((((((.....)))))))))))))) (A) and ((((((....)))))).((((((....)))))) (B) serving as target structures. The fraction of simulations that found either structure within the allowed time scale (i.e., $4\times 10^{4}$ time steps) is similar for different values of $m$.
On the other hand, the number of time steps, which reflects the amount of computer time required for folding, decreases as $m$ increases from $1$ to $3$, and subsequently increases with $m$. $50\%$ of simulations failed to find either target structure for $m=11$. The number of time steps given for $m=11$ thus represents a lower bound of its actual value. Note that each data point was averaged from just $500$ folding simulations run at $T=310.5K$. Therefore, there may be errors in the data resulting from limited sampling of possible folding trajectories. \end{quotation} \section{Example applications\label{applics}} We now use the new folding model, as implemented in \textit{kfold}, to study the effects of base modifications and temperature on the folding kinetics of the yeast tRNA$^{Phe}$. We also estimate the population dynamics of two alternative stable states of the synthetic SV11. These examples will demonstrate that the folding model qualitatively reproduces characteristic RNA folding dynamics. Note that in the following examples, the folding times (i.e., mean first passage times) were scaled using experimentally measured folding times (in $\mu s$) of the hairpin $AAAAAACCCCCCUUUUUU$ (Poerschke, 1974b). This was done in order to allow direct comparisons with folding times reported in Flamm (1998). Unless otherwise noted, all folding simulations were run at $T=310.5K$. \subsection{Influence of base modifications on folding kinetics} A number of tRNA sequences are known to contain base modifications. Such modifications, believed to be the consequence of evolutionary optimization, have been shown to improve the foldabilities of some tRNAs (Flamm, 1998; Flamm et al., 2000). We have introduced several base modifications to the individual hairpins of the yeast tRNA$^{Phe}$ sequence and studied the folding kinetics of the modified sequences. All modified bases were prohibited from engaging in bond formation and stacking interactions. The modified sequences are shown in Table 2. \begin{quotation} \begin{equation*} \begin{tabular}{|l|l|l|} \hline {\small Sequence} & {\small Modified Hairpins} & {\small Modified Sequence Positions} \\ \hline ${\small seq1}$ & $1,$ $2$ $\&$ $3$ & $15,17,19,37,38,55,56$ $\&$ $59$ \\ \hline ${\small seq2}$ & $1$ $\&$ $2$ & $15,17,19,37$ $\&$ $38$ \\ \hline ${\small seq3}$ & $1$ $\&$ $3$ & $15,17,19,55,56$ $\&$ $59$ \\ \hline ${\small seq4}$ & $2$ $\&$ $3$ & $37,38,55,56$ $\&$ $59$ \\ \hline ${\small seq5}$ & $None$ & $None$ \\ \hline \end{tabular} \end{equation*} Table 2. Modified tRNA sequences used in this example. Note that hairpins are labeled from left to right, with "1" representing the left-most hairpin. \end{quotation} We found base modifications to elicit substantial improvements in tRNA foldabilities, in the form of drastic decreases in folding times (see Figure 1). For the unmodified sequence, $seq5$, the fraction of folded sequences (i.e., sequences that have found the native cloverleaf structure) increased relatively slowly, reaching about $45\%$ after the first $600\mu s$, and $100\%$ within $4500\mu s$. For all modified sequences, on the other hand, the fraction of folded sequences increased rapidly, reaching $100\%$ within $1500\mu s$. Among these sequences, $seq1$ was the fastest folder with an estimated folding time of $300\mu s$, while $seq4$ was the slowest folder with a folding time of $1500\mu s$. The folding times of $seq2$ and $seq3$ were approximately equal (i.e., about $800\mu s$).
These observed effects of base modifications on tRNA folding kinetics are consistent with experimental data, as well as with predictions made by \textit{kinfold} (Flamm, 1998). Note that we were able to simulate much longer folding times for tRNA$^{Phe}$, up to $4500\mu s$, than was done in Flamm (1998). \begin{figure}[tbp] \begin{center} \includegraphics[width=4.99in]{Doc31.eps} \end{center} \caption{Folding kinetics of modified tRNA sequences (see Table 2). For each sequence, $1000$ \textit{kfold} simulations were run for $1\times 10^{4}\protect\mu s$ or until the native cloverleaf was found. $p\left( t\right) $ denotes the fraction of simulations that found the cloverleaf within $t\protect\mu s$. Note that the displayed folding times may not be biologically realistic as they were scaled using the folding kinetics of a hairpin. Further calibration with experimental data is necessary in order to ensure biological relevance of the displayed time scales.} \end{figure} \subsection{Temperature dependence of folding kinetics} We have used the folding model to study the temperature dependence of the folding time $\tau _{f}$ of the yeast tRNA$^{Phe}$. We found a $V$-shaped temperature dependence of $\tau _{f}$ (see Figure 2), suggesting the existence of an optimal folding temperature $T_{opt}\approx 313K$. A possible explanation for the existence of $T_{opt}$ is as follows: At temperatures $T>T_{opt}$, there are numerous structures with free energies similar to that of the native cloverleaf. The native cloverleaf is therefore relatively unstable at temperatures above $T_{opt}$ and may be associated with a much smaller basin of attraction in the energy landscape. It therefore takes the folding tRNA longer to "find" the cloverleaf among the ensemble of nonnative states. On the other hand, at temperatures $T<T_{opt}$, the stability of nonnative structures increases, leading to a growth in the number and, perhaps, sizes of nonnative basins of attraction in the energy landscape. These nonnative basins of attraction may decrease the folding tRNA's chances of finding the native state. Both scenarios (i.e., $T>T_{opt}$ and $T<T_{opt}$) lead to suboptimal native state accessibilities. Meanwhile, the maximal accessibility of the native state that is evident at $T=T_{opt}$ suggests the existence at $T_{opt}$ of an optimal balance between the thermal stabilities of native and nonnative states. Note that $T_{opt}$ is close to the optimal growth temperature range for many yeast species. Detailed analysis of the temperature dependence of folding kinetics for tRNAs and other functional RNAs from various organisms will allow us to determine whether this observation is a consequence of evolutionary optimization or simply a chance occurrence. \begin{figure}[tbp] \begin{center} \includegraphics[width=4.31in]{Doc71.eps} \end{center} \caption{Temperature dependence of the folding time for tRNA obtained from folding simulations run at $T=273,$ $293,$ $313,$ $333,$ $353,$ and $373K$. None of the simulations found the ground state at $T=353$ and $373K$ within $2\times 10^{4}\protect\mu s$. The folding times shown for these temperatures therefore represent lower bounds of their actual values. Energy calculations were based on the Turner $2.3$ energy parameters (Freier et al., 1986), for which extrapolations to temperatures other than $310.5K$ are readily available. Each data point was averaged from just 500 folding simulations.
Therefore, there may be errors in the data resulting from limited sampling of possible folding trajectories.} \end{figure} \subsection{(Meta)stable states\label{meta}} During kinetic folding, some RNA molecules may get trapped in long-lived, nonnative states called metastable states/conformations. Examples of such metastable RNA molecules include riboswitches that regulate gene expression in bacteria by switching between alternative stable conformations (Vitreschak et al., 2004). By adopting a repressing conformation, a riboswitch can elicit the premature termination of transcription or the inhibition of protein translation. Detailed \textit{in silico} analysis of the folding kinetics of a metastable RNA molecule can provide insight into its functionality. For instance, Nagel et al. (1999) used computer simulations to identify a metastable structure of the \textit{hok} mRNA that mediates the killing of plasmid R1-free cells. The predictions made by their simulations were in good agreement with experimental data (Nagel et al., 1999). See Higgs (2000) for other examples of how computer simulations have yielded important insight into the folding kinetics of metastable RNA molecules. SV11 is a 115nt synthetic RNA species that exists in two alternative stable states: a rod-like stable conformation and a multi-component metastable conformation (see Figure 3). While the metastable conformation is a template for Q$\beta $ replicase, the stable conformation is not (Zamora et al., 1995). Several authors have previously performed \textit{in silico} analyses of the folding kinetics of SV11. Some of the authors (Flamm, 1998; Flamm et al., 2000) successfully predicted the existence of the metastable conformation, while others (e.g., see Gultyaev et al., 1990; Morgan \& Higgs, 1996) could only do so if folding was constrained to occur in conjunction with transcription. However, none of the authors obtained detailed theoretical estimates of the population dynamics of the molecule's two alternative stable states. \begin{figure}[tbp] \begin{center} \includegraphics[width=4.51in]{Doc61.eps} \end{center} \caption{The stable (A) and metastable (B) structures of SV11. Note that a 33GC79 base pair (see arrow) was removed from the stable structure since \textit{kfold} currently only allows base pairs that can be stacked; there are no complementary bases $x_{l}$ and $x_{k}$, $l<k$, in the SV11 sequence such that either $33-k=l-79=1$ or $k-33=79-l=1$.} \end{figure} Here we report detailed estimates of the population dynamics of the stable and metastable conformations of SV11. As shown in Figure 4, the population of the metastable state increases steadily, reaching a maximum after about $2000\mu s$. In experiments performed using \textit{kinfold}, Flamm (1998) reported that the fraction of the metastable state reached about $16\%$ after $500\mu s$. This estimate is consistent with the results shown in Figure 4. However, we were able to fold the molecule for much longer, up to $1.5\times 10^{4}\mu s$, than was done in Flamm (1998). This allowed us to estimate the population dynamics not just of the metastable conformation but also of the stable native conformation. The ratio of the fraction of simulations that found the native conformation to the fraction that found the metastable conformation on the time scale of the simulation was approximately $3$ to $1$. Note that the accuracy of these results can be tested directly in the laboratory.
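As a sketch of how the population curves in Figure 4 can be tabulated from simulation output, the snippet below classifies each run by which stable state it reached first and accumulates time-resolved fractions; the outcome labels and times are hypothetical placeholders for \textit{kfold} output, not the published code.
\begin{verbatim}
import numpy as np

def state_fractions(outcomes, times, horizon, grid_size=300):
    """outcomes: per-run label, 'native' or 'metastable';
    times: first-passage time of each run (np.inf if neither found).
    Returns t and the fraction of all runs absorbed in each state."""
    t = np.linspace(0.0, horizon, grid_size)
    times = np.asarray(times)
    fractions = {}
    for label in ("native", "metastable"):
        mask = np.asarray(outcomes) == label
        # fraction among this class absorbed by t, rescaled to all runs
        fractions[label] = (times[mask][None, :] <= t[:, None]).mean(axis=1) * mask.mean()
    return t, fractions

# Hypothetical demo mimicking the roughly 3:1 native-to-metastable ratio.
rng = np.random.default_rng(1)
n = 1000
outcomes = rng.choice(["native", "metastable"], size=n, p=[0.75, 0.25])
times = rng.exponential(3000.0, size=n)
t, frac = state_fractions(outcomes, times, horizon=15000.0)
print(frac["native"][-1] / frac["metastable"][-1])
\end{verbatim}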
\begin{figure}[tbp] \begin{center} \includegraphics[width=4.97in]{Doc51.eps} \end{center} \caption{Folding kinetics of SV11, obtained from 1000 \textit{kfold} simulations. Each simulation was stopped after it found either of the molecule's two alternative stable states. $p\left( t\right)$ denotes the fraction of simulations that found either state within $t\,\mu s$. Note that the displayed folding times may not be biologically realistic as they were scaled using the folding kinetics of a hairpin. Further calibration with experimental data is necessary in order to ensure biological relevance of the displayed time scales.} \end{figure} \section{Discussion\label{concl}} In this paper, we introduced a new model for the kinetic folding of RNA sequences into secondary structures that was inspired by the theory of complex adaptive systems. In the folding model, RNA bases and base pairs engage in local stacking interactions that determine the probabilities (or fitnesses) of possible RNA structural rearrangements (SRs). Meanwhile, selection operates at the level of SRs; an autonomous stochastic sampling process periodically selects a subset of possible SRs for realization based on the fitnesses of the SRs. Several examples were used to illustrate the applicability of the model. In particular, certain base modifications were shown to substantially improve the foldability of tRNA$^{Phe}$. In addition, a characteristic optimal folding temperature $T_{opt}\left( \approx 313K\right) $ of tRNA$^{Phe}$ was identified. Furthermore, the model was used to confirm previous experimental results (Zamora et al., 1995) regarding the existence of two alternative stable states of the Q$\beta $ variant SV11, and to obtain (experimentally verifiable) estimates of the population dynamics of those states. The above examples demonstrated, among other things, the emergence of nonlinear RNA folding dynamics (i.e., the realization of alternative stable states) from (local) SRs. Other possible applications of the model are discussed below. The analysis of properties of fitness landscapes is currently of interest to researchers in a wide range of fields including the life, computer, and social sciences (e.g., see Hadany \& Beker, 2003; McCarthy, 2004; Skellett et al., 2005). A number of interesting general features of these landscapes, such as positive correlations between the degree of epistasis, the number of local optima, and the expected value of the global optimum, have thus far been elucidated in the context of adaptive walks on landscapes generated by random Boolean networks (RBNs) (Kauffman \& Levin, 1987; Kauffman, 1989; Skellett et al., 2005). The simplicity of RBNs and their accessibility to some degree of mathematical analysis make them convenient for use as generators of fitness landscapes. However, some of the general properties of such RBN-generated landscapes may differ from those of landscapes found in Nature. RNA folding kinetics, as modeled in this paper, could serve as a generator of, and therefore assist the analysis of properties of, "natural" fitness landscapes (i.e., RNA folding energy landscapes). Note that this proposed use of RNA in the investigation of fitness landscapes differs from the previous related use of the RNA sequence to structure mapping (Schuster \& Stadler, 1994), which was not based on folding kinetics. The model could also be used to study the RNA sequence to structure (or genotype to phenotype) mapping, from a kinetics perspective.
Previous investigations, based on the thermodynamics of RNA folding, have made several important findings about the RNA sequence to structure mapping, such as the existence of (1) extended neutral networks of sequences that fold into the same secondary structures, (2) few common or "typical" structures, realized with relatively high frequencies, and (3) many rare structures that have little or no evolutionary significance (Schuster et al., 1994; Schuster et al., 1998). These findings, together with results from thermodynamics-based RNA optimization experiments (Fontana \& Schuster, 1998), have confirmed previous hypotheses about, and shed light on, several important features of the process of molecular evolution, including the role of neutrality in adaptation and the existence of continuous/discontinuous transitions or punctuated equilibria in evolutionary trajectories. It would be interesting to determine how the nature of the RNA sequence to structure mapping, as obtained from thermodynamic folding experiments, changes when RNA folding kinetics is taken into account. It is also possible that an investigation of the genotype to phenotype mapping based on the kinetics of RNA folding will yield further insight into the process of molecular evolution. \section{Acknowledgements} This work was funded in part by DOE-ER63580 and by RCMI Grant no. RR017581. The author thanks Dr. Asamoah Nkwanta and anonymous reviewers for their useful comments on an earlier version of the manuscript. \section{References} \begin{description} \item Abrahams, J.P., van den Berg, M., van Batenburg, E., Pleij, C., 1990. Prediction of RNA secondary structure, including pseudoknotting, by computer simulation. Nucl. Acids Res. 18, 3035-3044. \item Baker, J.E., 1987. Reducing bias and inefficiency in the selection algorithm. In: Grefenstette, E. (Ed.), Proceedings of the Second International Conference on Genetic Algorithms and their Application. Lawrence Erlbaum Associates, Hillsdale, pp. 14-21. \item Cannone, J.J., Subramanian, S., Schnare, M.N., Collett, J.R., D'Souza, L.M., Du, Y., Feng, B., Lin, N., Madabusi, L.V., Muller, K.M., Pande, N., Shang, Z., Yu, N., Gutell, R.R., 2002. The comparative RNA web (CRW) site: an online database of comparative sequence and structure information for ribosomal, intron, and other RNAs. BMC Bioinformatics 3, 2. \item Fisher, M.E., 1966. Effect of excluded volume on phase transitions in biopolymers. J. Chem. Phys. 45, 1469--1473. \item Flamm, C., 1998. Kinetic folding of RNA. Doctoral dissertation. University of Vienna, Austria. \item Flamm, C., Fontana, W., Hofacker, I.L., Schuster, P., 2000. RNA folding at elementary step resolution. RNA 6, 325-338. \item Fontana, W., Schuster, P., 1998. Continuity in evolution: On the nature of transitions. Science 280, 1451-1455. \item Freier, S.M., Kierzek, R., Jaeger, J.A., Sugimoto, N., Caruthers, M.H., Neilson, T., Turner, D.H., 1986. Improved parameters for predictions of RNA-RNA duplex stability. Proc. Natl. Acad. Sci. 83, 9373-9377. \item Gratias, A., Betermier, M., 2003. Processing of double-strand breaks is involved in the precise excision of paramecium internal eliminated sequences. Mol. Cell. Biol. 7152--7162. \item Gultyaev, A.P., van Batenburg, F.H.D., Pleij, C.W.A., 1995. The influence of a metastable structure in plasmid primer RNA on antisense RNA binding kinetics. Nucl. Acids Res. 23, 3718-3725. \item Gultyaev, A.P., van Batenburg, F.H.D., Pleij, C.W.A., 1990.
The computer simulation of RNA folding pathways using a genetic algorithm. J. Mol. Biol. 250, 37--51. \item Hadany, L., Beker, T., 2003. Fitness-associated recombination on rugged adaptive landscapes. J. Evol. Biol. 16, 862-870. \item Higgs, P.G., 2000. RNA secondary structure: physical and computational aspects. Q. Rev. Biophys. 33, 199-253. \item Hofacker, I.L., Fontana, W., Stadler, P.F., Bonhoeffer, S., Tacker, M., Schuster, P., 1994. Fast folding and comparison of RNA secondary structures. Monatsh. Chem. 125, 167-188. \item Isambert, H., Siggia, E.D., 2000. Modeling RNA folding paths with pseudoknots: application to hepatitis delta virus ribozyme. Proc. Natl. Acad. Sci. 97, 6515-6520. \item Kawasaki, K., 1966. Diffusion constants near the critical point for time-dependent Ising models. Phys. Rev. 145, 224-230. \item Kauffman, S.A., 1989. Adaptation on rugged landscapes. In: Stein, D. (Ed.), Lectures in the Sciences of Complexity, lecture volume 1. Addison-Wesley, Redwood City, pp. 527-618. \item Kauffman, S.A., Levin, S.A., 1987. Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 128, 11-45. \item Lee, N.S., Dohjima, T., Bauer, G., Li, H., Li, M.J., Ehsani, A., Salvaterra, P., Rossi, J., 2002. Expression of small interfering RNAs targeted against HIV-1 rev transcripts in human cells. Nat. Biotechnol. 20, 500-505. \item Levin, S.A., 1998. Ecosystems and the biosphere as complex adaptive systems. Ecosystems 1, 431-436. \item Mathews, D.H., Sabina, J., Zuker, M., Turner, D.H., 1999. Expanded sequence dependence of thermodynamic parameters provides robust prediction of RNA secondary structure. J. Mol. Biol. 288, 911-940. \item Mathews, D.H., Disney, M.D., Childs, J.L., Schroeder, S.J., Zuker, M., Turner, D.H., 2004. Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure. Proc. Natl. Acad. Sci. 101, 7287-7292. \item McCarthy, I.P., 2004. Manufacturing strategy: understanding the fitness landscape. Intl. J. Op. Prod. Mgt. 24, 124-150. \item Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., 1953. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087--1092. \item Mironov, A., Kister, A., 1985. A kinetic approach to the prediction of RNA secondary structures. J. Biomol. Struct. Dyn. 2, 953--962. \item Mochizuki, K., Fine, N.A., Fujisawa, T., Gorovsky, M.A., 2002. Analysis of a piwi-related gene implicates small RNAs in genome rearrangement in Tetrahymena. Cell 110, 689-699. \item Morgan, S.R., Higgs, P.G., 1996. Evidence for kinetic effects in the folding of large RNA molecules. J. Chem. Phys. 105, 7152--7157. \item Nagel, J.H.A., Gultyaev, A.P., Gerdes, K., Pleij, C.W.A., 1999. Metastable structures and refolding kinetics in hok mRNA of plasmid R1. RNA 5, 1408--1419. \item Ndifon, W., Nkwanta, A., 2005. An agent-oriented simulation of RNA folding and its application to the analysis of RNA conformational spaces. In: Yilmaz, L. (Ed.), Proceedings of the Agent-Directed Simulation Symposium of the 2005 Spring Simulation Multiconference, SCS Press, San Diego, pp. 198-204. \item Poerschke, D., 1974a. Model calculations on the kinetics of oligonucleotide double helix coil transitions. Evidence for a fast chain sliding reaction. Biophys. Chem. 2, 83--96. \item Poerschke, D., 1974b. Thermodynamic and kinetic parameters of an oligonucleotide hairpin helix. Biophys. Chem. 1, 381--386.
\item Schuster, P., Fontana, W., Stadler, P., Hofacker, I.L., 1994. From sequences to shapes and back: A case study in RNA secondary structures. Proc. Roy. Soc. (London) B 255, 279-284. \item Schuster, P., Fontana, W., 1998. Chance and necessity: Lessons from RNA. Physica D 133, 427-452. \item Schuster, P., Stadler, P., 1994. Landscapes: complex optimization problems and biopolymer structure. Comput. Chem. 18, 295-314. \item Skellett, B., Cairns, B., Geard, N., Tonkes, B., Wiles, J., 2005. Maximally rugged NK landscapes contain the highest peaks. In: Beyer, H.G., O'Reilly, U.M., Arnold, D.V., et al. (Eds.), Proceedings of the 2005 Genetic and Evolutionary Computation Conference, ACM Press, New York, pp. 579-584. \item Tang, X., Kirkpatrick, B., Thomas, S., Song, G., Amato, N., 2004. Using motion planning to study RNA folding kinetics. In: Proceedings of the International Conference on Computational Molecular Biology (RECOMB). ACM Press, San Diego, pp. 252-261. \item Vitreschak, A.G., Rodionov, D.A., Mironov, A.A., Gelfand, M.S., 2004. Riboswitches: the oldest mechanism for the regulation of gene expression? Trends Genet. 20, 44-50. \item Woese, C.R., Pace, N.R., 1993. Probing RNA structure, function, and history by comparative analysis. In: Gesteland, R.F., Atkins, J.F. (Eds.), The RNA World. Cold Spring Harbor Laboratory Press, New York, pp. 91-117. \item Wolfinger, M.T., Svrcek-Seiler, W.A., Flamm, C., Hofacker, I.L., Stadler, P.F., 2003. Efficient computation of RNA folding dynamics. J. Phys. A 37, 4731-4741. \item Xayaphoummine, A., Bucher, T., Thalmann, F., Isambert, H., 2003. Prediction and statistics of pseudoknots in RNA structures using exactly clustered stochastic simulations. Proc. Natl. Acad. Sci. 100, 15310--15315. \item Yang, G., Thompson, J.A., Fang, B., Liu, J., 2003. Silencing of H-ras gene expression by retrovirus-mediated siRNA decreases transformation efficiency and tumor growth in a model of human ovarian cancer. Oncogene 22, 5694-5701. \item Zamora, H., Luce, R., Biebricher, C.K., 1995. Design of artificial short-chained RNA species that are replicated by Q$\beta$ replicase. Biochemistry 34, 1261--1266. \item Zhang, W., Chen, S.J., 2002. RNA hairpin-folding kinetics. Proc. Natl. Acad. Sci. 99, 1931--1936. \end{description} \end{document}
\section{Introduction} Synchronization of chaotic systems has been explored extensively in recent years~\cite{Piko01}. In addition to complete synchronization between two identical chaotic systems~\cite{Fuji83}, various notions of chaotic synchronization have evolved~\cite{Piko01}. Among them, the concept of {\it generalized synchronization} (GS), which refers to a situation in which the states of two systems are connected to each other via a continuous mapping, has been introduced in order to study coherent behavior between two systems with different dynamics~\cite{Rulk95}. Experimental detection of GS from data is a challenging problem. Because the synchronization manifold of GS has a highly nonlinear structure, conventional statistical tools such as the correlation coefficient do not work. Recently, interest in {\it kernel methods} has been stimulated in the machine learning community for analyzing data with nonlinearity in a unified manner~\cite{Shaw04}. Since the great success of the Support Vector Machine, a considerable effort has been devoted to deriving kernelized versions of various multivariate analysis methods. Therefore, it is meaningful to explore the applicability of kernel-based methods for analyzing nonlinear dynamics. In this paper, we particularly employ {\it Kernel Canonical Correlation Analysis (Kernel CCA)}~\cite{Akah01} for characterizing GS. We present an example for which Kernel CCA works successfully and also discuss how an optimal value of the parameter of Kernel CCA can be chosen. \section{Kernel CCA} Let us start with a formulation of Kernel CCA. For a pair of variables $x\in {\mathbb R}^p$ and $y\in {\mathbb R}^q$, Kernel CCA seeks a pair of nonlinear scalar functions $f: {\mathbb R}^p \to {\mathbb R}$ and $g: {\mathbb R}^q \to {\mathbb R}$ such that the correlation coefficient \begin{eqnarray} \label{rho_F} \rho_{\cal F} = \frac{{\sf cov} (f(x), g(y))}{\sqrt{{\sf var}(f(x))}\sqrt{{\sf var}(g(y))}} \end{eqnarray} between transformed variables is maximized. When a data set $\{(x_n, y_n)\}_{n=1}^N$ is given, the maximal value of $\rho_{\cal F}$ is estimated by the following procedure. Suppose that the nonlinear functions $f$ and $g$ are well approximated by linear combinations of {\it kernels} on data points $(x_i, y_i)$ as $f(x) = \sum_{i=1}^N \alpha_i k(x_i, x)$ and $g(y) = \sum_{i=1}^N \beta_i k(y_i, y)$. For example, a Gaussian kernel $k(x, x') = \exp(-\Vert x - x'\Vert^2/2\sigma^2)$ is used as the kernel~\cite{Shaw04}. By substituting the above expressions for $f$ and $g$ into Eq.~(\ref{rho_F}) and replacing the covariance ${\sf cov}(\cdot,\cdot)$ and variance ${\sf var}(\cdot)$ with the empirical averages over the data set $\{(x_n, y_n)\}_{n=1}^N$, the maximization problem of Eq.~(\ref{rho_F}) leads to the following generalized eigenvalue problem~\cite{Akah01}: \begin{eqnarray} \label{KCCA} && \left [ \begin{array}{cc} 0 & K_X K_Y \\ K_Y K_X & 0 \end{array} \right ] \left[ \begin{array}{c} \alpha \\ \beta \end{array} \right] = \rho \left [ \begin{array}{cc} K_X(K_X + \kappa I) & 0 \\ 0 & K_Y (K_Y + \kappa I) \end{array} \right ] \left[ \begin{array}{c} \alpha \\ \beta \end{array} \right], \end{eqnarray} where $K_X, K_Y$ are the {\it Gram matrices} $(K_X)_{i, j} = k(x_i, x_j)$ and $(K_Y)_{i, j} = k(y_i, y_j)$ determined from the given data set, and the term $\kappa I$ is introduced in order to avoid over-fitting. The first eigenvalue of Eq.~(\ref{KCCA}) gives the maximal value $\rho_{\cal F}^{\max}$ of $\rho_{\cal F}$ in Eq.~(\ref{rho_F}).
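For concreteness, a minimal Python sketch of Eq.~(\ref{KCCA}) is given below: it assembles the two block matrices from the (centered) Gram matrices and solves the generalized eigenvalue problem with SciPy. It is an illustrative implementation under the notation above (Gaussian kernel width $\sigma$, ridge term $\kappa$), not the authors' code.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def gram(x, sigma):
    """Gaussian-kernel Gram matrix K_ij = exp(-||x_i-x_j||^2/(2 sigma^2))."""
    x = np.atleast_2d(np.asarray(x).T).T      # shape (N, dim)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def center(K):
    """Centered Gram matrix (used when f and g have nonzero means)."""
    N = K.shape[0]
    J = np.ones((N, N)) / N
    return K - J @ K - K @ J + J @ K @ J

def kcca(Kx, Ky, kappa):
    """First canonical correlation from the generalized eigenproblem."""
    N = Kx.shape[0]
    zero = np.zeros((N, N))
    A = np.block([[zero, Kx @ Ky], [Ky @ Kx, zero]])
    B = np.block([[Kx @ (Kx + kappa * np.eye(N)), zero],
                  [zero, Ky @ (Ky + kappa * np.eye(N))]])
    vals, vecs = eig(A, B)                    # generalized eigenproblem
    i = np.argmax(np.real(vals))
    return np.real(vals[i]), vecs[:N, i].real, vecs[N:, i].real
\end{verbatim}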
$\rho_{\cal F}^{\max}$ is called the {\it canonical correlation coefficient}, and the variables $u=f(x), v=g(y)$ transformed by $f$ and $g$ are called the {\it canonical variates} of Kernel CCA. When the averages of $\{f(x_n)\}_{n=1}^N$ and $\{g(y_n)\}_{n=1}^N$ are not equal to zero, each Gram matrix $K$ should be replaced with the following one: $\tilde{K} = K - (1/N) ({\bf j} \> {}^t{\bf j}) K - (1/N) K ({\bf j} \> {}^t{\bf j}) + (1/N^2) ({\bf j} \> {}^t{\bf j}) K ({\bf j} \> {}^t{\bf j})$, where ${\bf j} = {}^t (1,1,...,1)$~\cite{Shaw04}. \section{Results} As an illustration, let us consider the following one-dimensional linear map driven by the two-dimensional baker's map: \begin{eqnarray} \label{Baker} && [x_1(t+1), x_2 (t+1)] = \left \{ \begin{array}{lll} [a x_1 (t), x_2 (t)/b] \\ \quad \quad {\rm if} \ x_2(t)<b, \\ [a + (1-a)x_1 (t), (x_2 (t) - b)/(1-b)] \\ \quad \quad {\rm if} \ x_2(t) \geq b, \end{array} \right. \\ \label{Linear} && y (t+1) = \gamma y (t) + \cos(2\pi x_1 (t)). \end{eqnarray} The parameters of the baker's map Eq.~(\ref{Baker}) are taken as $a=0.3, b=0.5$, and $\gamma$ in Eq.~(\ref{Linear}) is varied as the control parameter. For $|\gamma | < 1$, the response system Eq.~(\ref{Linear}) is asymptotically stable for all $(x_1, x_2)$ in the unit square $0\le x_1, x_2\le 1$, i.e., the system is in a state of GS. Since the natural measure of the baker's map is uniform in the $x_2$ direction, the driver-response relation in the system of Eqs.~(\ref{Baker}) and (\ref{Linear}) is visualized in the $(x_1, y)$ plane as shown in Fig.~\ref{fig.1}. We observe a transition from smooth to very complicated curves with the increase of $\gamma$~\cite{Hunt97}. We apply Kernel CCA to the system of Eqs.~(\ref{Baker}) and (\ref{Linear}). Here, we employ a Gaussian kernel with $\sigma=0.1$, and $\kappa$ is set to $0.1$. We prepare an orbit of length $N=2\times10^3$ as training data. Figure~\ref{fig.2} shows scatter plots of the canonical variates $(u_n, v_n)$ of Kernel CCA for four different values of $\gamma$ associated with Fig.~\ref{fig.1}. In Fig.~\ref{fig.2}, GS is clearly identified as a cloud of points along the diagonal in the plane of the canonical variates. Comparing the graphs in Fig.~\ref{fig.2} with those in Fig.~\ref{fig.1}, smaller values of the correlation coefficient between canonical variates correspond to more complicated structures of the synchronization manifold. Figure~\ref{fig.3} shows the canonical correlation coefficient $\rho_{\cal F}$ and the Lyapunov dimension $D_L$ of the system of Eqs.~(\ref{Baker}) and (\ref{Linear}) as functions of the control parameter $\gamma$. There is a monotonic relationship between $\rho_{\cal F}^{\max}$ and $D_L$, which allows us to conclude that $\rho_{\cal F}^{\max}$ is a good index for the characterization of GS. \begin{figure}[!t] \psfrag{x1}{$x_1$}\psfrag{y}{$y$} \psfrag{u}{$u$}\psfrag{v}{$v$} \parbox{\halftext}{ \includegraphics[width=6.8cm]{PTP_Figure1.eps} \caption{Projections of the strange attractors onto the $(x_1, y)$ plane. $\gamma=0.2$ (a), $0.4$ (b), $0.7$ (c), $0.9$ (d).} \label{fig.1}} \ \ \parbox{\halftext}{ \includegraphics[width=6.4cm]{PTP_Figure2.eps} \caption{Scatter plots of the first canonical variates of Kernel CCA. $\gamma=0.2$ (a), $0.4$ (b), $0.7$ (c), $0.9$ (d).} \label{fig.2}} \end{figure}
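As a reproducibility aid, a minimal sketch that iterates Eqs.~(\ref{Baker}) and (\ref{Linear}) to generate training data and feeds it to the \texttt{kcca} routine sketched above; the parameter values follow the text, while the transient length, random seed, and the shortened orbit length in the demo are arbitrary choices.
\begin{verbatim}
import numpy as np

def baker_driven_orbit(gamma, N, a=0.3, b=0.5, seed=0):
    """Iterate the baker's map and the driven linear map."""
    rng = np.random.default_rng(seed)
    x1, x2, y = rng.random(), rng.random(), 0.0
    xs, ys = [], []
    for _ in range(N + 100):                  # discard a short transient
        y = gamma * y + np.cos(2.0 * np.pi * x1)   # response map, uses x1(t)
        if x2 < b:                                 # baker's map
            x1, x2 = a * x1, x2 / b
        else:
            x1, x2 = a + (1.0 - a) * x1, (x2 - b) / (1.0 - b)
        xs.append(x1); ys.append(y)
    return np.array(xs[100:]), np.array(ys[100:])

# N = 2*10**3 in the paper; a smaller orbit keeps this demo fast.
x, y = baker_driven_orbit(gamma=0.4, N=200)
rho, alpha, beta = kcca(center(gram(x, 0.1)), center(gram(y, 0.1)), kappa=0.1)
print(rho)
\end{verbatim}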
Next, we mention how the parameters of Kernel CCA can be chosen from data. In order to represent complicated nonlinear structures of the synchronization manifold via linear combinations of Gaussian kernels, the value of $\sigma$ should be chosen adequately. A naive way of choosing $\sigma$ is to maximize $\rho_{\cal F}^{\max}$ estimated by the proposed method. The estimator of $\rho_{\cal F}^{\max}$ as a function of $\sigma$ is shown with the solid line in Fig.~\ref{fig.4}. The index $\rho_{\cal F}^{\max}$ increases monotonically with the decrease of $\sigma$, and $\rho_{\cal F}^{\max} \sim 1$ is attained in the limit of $\sigma\to 0$. Thus the maximization of $\rho_{\cal F}^{\max}$ evaluated from the training data leads to the choice of the smallest value of $\sigma$, which results in over-fitting to the training data and spurious detection of GS. An adequate value of $\sigma$ is not determined in this way. A way to overcome this difficulty is to prepare another set of data (``testing data'') separately from the training data, and evaluate $\rho_{\cal F}^{\max}$ from the empirical average over the testing data, while $f$ and $g$ are estimated from the training data. This strategy for testing the performance of the estimated model with new data is called {\it cross validation (CV)}~\cite{Ston74}. The dotted line in Fig.~\ref{fig.4} shows the result of CV. The value of $\rho_{\cal F}^{\max}$ shown with the dotted line takes its maximum at a value of $\sigma\neq 0$. We expect that this value of $\sigma$ gives an optimal description of the system behind the data. \begin{figure}[!t] \psfrag{r}{$\gamma$}\psfrag{rho}{$\rho_{\cal F}^{\max}$} \psfrag{DL}{$D_L$}\psfrag{sigma}{$\sigma$} \parbox{\halftext}{ \includegraphics[width=6.2cm]{PTP_Figure3.eps} \caption{The canonical correlation coefficient $\rho_{\cal F}^{\max}$ and the Lyapunov dimension $D_L$ as functions of $\gamma$. $N=2\times 10^3$, $\sigma=0.1$ and $\kappa=0.1$.} \label{fig.3}} \ \ \parbox{\halftext}{ \includegraphics[width=6.6cm]{PTP_Figure4.eps} \caption{$\rho_{\cal F}^{\max}$ as a function of $\sigma$ for $\gamma=0.6$. Orbits of length $N=2\times 10^2$ and $N'=10^4$ are used as training and testing data, respectively. $\kappa = 0.01$.} \label{fig.4}} \end{figure} \section{Conclusions} In summary, we have proposed a new approach for analyzing GS based on Kernel CCA, with a successful application to a simple example. It is interesting to apply other kernel-based methods for analyzing various complex phenomena arising from nonlinear dynamical systems. \section*{Acknowledgements} We thank S.~Akaho and K.~Fukumizu for stimulating discussions on the kernel methods.
\section{Introduction} The accelerated expansion of the universe has captured much attention in modern cosmology and astrophysics \cite{1, 2}. Recent developments in this area of cosmology have introduced new concepts to accommodate the theoretical and observational evidence for this accelerated expansion. Some observations provide direct evidence of the acceleration, as in the high red-shift supernova experiments \cite{3}, while large scale structures \cite{Teg} and cosmic microwave background fluctuations \cite{Spergel} yield indirect evidence. This accelerating expansion of the universe is attributed to a mysterious force named dark energy, which carries a strong negative pressure. Moreover, it is considered that this mysterious dark energy comprises almost 68\% of the total energy of the universe. Thus, in order to analyze the phenomenon of accelerated expansion, we need some modifications of the classical theory. Such problems lead to the search for modified or extended theories of gravity that might succeed in describing the situations in which the general theory of relativity (GR) gives unsatisfactory results. As alternatives to GR, various modified gravitational theories have been proposed in recent years. Among these are the $f(R)$, $f(R,T)$, $f(\mathcal{G})$, $f(R,\mathcal{G})$ and $f(\mathcal{G}, T)$ theories of gravity, which have been developed from combinations of curvature scalars and topological invariants, along with their derivatives. To describe the late time acceleration and the dark energy issue, such modifications of GR seem attractive. Furthermore, the different cosmological approaches and ideas provided by these theories are helpful in revealing the secrets behind the phenomenon of the accelerated expansion of the universe \cite{Capozziello}. In the past few years, Einstein's theory of relativity has been amended by many scientists. Among these valuable modifications, one of the simplest and best known \cite{Buch} is $f(R)$ gravity, obtained by supplanting the Ricci scalar $R$ with an arbitrary function $f(R)$. These alternative theories of gravity play a vital role in understanding the mysterious nature of the universe, which is responsible for its accelerated expansion \cite{Odintsov1, Odintsov2}. Another remarkable theory which has attained eminence in the last few years is modified Gauss-Bonnet gravity, also known as $f(\mathcal{G})$ gravity \cite{Noj1}-\cite{Cog2}. This modified theory of gravity was established by modifying the Einstein-Hilbert action, replacing the term $R$ with $f(R, \mathcal{G})$. In the scenario of the large-scale expansion of the universe, the additional Gauss-Bonnet term has resolved the deficiencies of the $f(R)$ theory of gravity \cite{Noj1}-\cite{Felice2}. It is considered that $f(\mathcal{G})$ gravity is the simplest form of the $f(R, \mathcal{G})$ theory of gravity; it has been extensively addressed and is assumed to be very useful for reconstructing any form of cosmological solution. Here $f(\mathcal{G})$ is a generic function of the Gauss-Bonnet invariant term. An interesting feature of $f(\mathcal{G})$ modified gravity \cite{Chiba} is that it may avoid ghost contributions and helps to regularize the gravitational action, owing to the Gauss-Bonnet invariant quantity. Further, in exploring various cosmic issues as an alternative to dark energy \cite{Santos1}, modified $f(\mathcal{G})$ gravity provides an influential platform for this purpose.
Mak and Harko \cite{Mak} investigated exact solutions of the Einstein field equations for some standard models with an anisotropic background. An analytical formulation of the solutions of the field equations with an anisotropic matter source was constructed by Chaisi and Maharaj \cite{Chaisi}. Rahaman et al. \cite{Raha} extended the technique of the Krori and Barua solution to a system of strange stars with the MIT bag model. Kalm et al. \cite{Kalm1, Kalm2} also studied compact stellar objects by assuming an anisotropic matter source under the Krori and Barua metric. The possibility of the existence of higher dimensional compact stars was explored by Bhar et al. \cite{Bhar1}. The stability analysis and fundamental formation of anisotropic compact stars were analyzed by Zubair et al. \cite{Zubair1} in $f(R, T)$ theory of gravity. Further, investigations of the charged anisotropic solutions for compact objects were formulated by Maurya et al. \cite{S.K}. Recently, Ilyas \cite{56} discussed compact structures in modified Gauss-Bonnet gravity in the presence of charge. \\The relativistic massive objects known as compact stars can be described by GR as well as by the extended theories of gravity \cite{Abbas1}-\cite{Cam}. These compact stars have a strong gravitational force due to their very small size and immensely massive structure. In astrophysics, the study of compact stars has gained much attention, and in recent decades it has been an active subject of research due to their fascinating features and structures. Some exact solutions for a collapsing star with anisotropic stress and heat flux were explored by Goswami et al. in $f(R)$ gravity \cite{Goswami}. The equilibrium condition of compact stars was found by Abbas et al. \cite{Abbas4}, who further analyzed their physical features in the framework of $f(\mathcal{G})$ gravity. The modified theories of gravity have made a great contribution to studying and examining the nature of compact stellar structures and matter at high densities \cite{Ast2}-\cite{Ast4}. \\The aim of this paper is to analyze the role of $f(\mathcal{G})$ gravity in the modeling of realistic configurations of compact stellar objects in the presence of the Tolman-Kuchowicz spacetime. In particular, we extend the idea of Jasim et al. \cite{Jasim} to modified Gauss-Bonnet gravity and examine the stability and physical features of the compact stars $Cen~ X-3,~ EXO~ 1785-248$ and $LMC~ X-4$. We investigate various structural properties by choosing specific $f(\mathcal{G})$ gravity models in the background of an anisotropic matter source: the evolution of the effective energy density and of the radial and tangential pressure components, the Tolman-Oppenheimer-Volkoff (TOV) equation, the mass-radius relation, the compactness parameter, the surface redshift, and the stability, as well as the different energy bounds, for different experimental data of compact stellar structures. \\The layout of this paper is organized as follows: Section $\textbf{2}$ consists of the mathematical formulation of $f(\mathcal{G})$ gravity in the context of anisotropic matter distributions. Some viable $f(\mathcal{G})$ gravity models and the boundary conditions are demonstrated in Section $\textbf{3}$. In Section $\textbf{4}$, we calculate the values of the unknown constants for the chosen values of our model parameters by matching the interior metric to the Schwarzschild exterior metric.
Section $\textbf{5}$ is devoted to scrutinizing some physical attributes and to checking the viability of different familiar compact stars via graphical analysis. The last section contains our concluding remarks. \section{Equation of Motion for Relativistic Sphere in $f(\mathcal{G})$ Gravity} To study the stellar configurations of compact stars in modified Gauss-Bonnet gravity, we consider the most general action for $f(\mathcal{G})$ gravity \cite{Noj1}: \begin{equation}\label{5} S = \int d^4x \sqrt{-g} \Bigg[\frac{R}{2\kappa^2}+f(\mathcal{G})\Bigg] + S_m, \end{equation} where $R$ is the Ricci scalar, $\kappa^2 = {8\pi G}$ represents the coupling constant and $S_m$ is the matter Lagrangian. The Gauss-Bonnet invariant term $\mathcal{G}$ is defined as \begin{equation}\label{6} \mathcal{G} = R^2 - 4R_{\mu\nu}R^{\mu\nu} +R_{\mu\nu\sigma\rho}R^{\mu\nu\sigma\rho}, \end{equation} where $R_{\mu\nu}$ and $R_{\mu\nu\rho\sigma}$ indicate the Ricci and Riemann tensors, respectively. Varying the action (\ref{5}) with respect to the metric tensor $g_{\mu\nu}$, the modified field equations turn out to be \begin{eqnarray}\nonumber G_{\mu\nu}+ 8\big[R_{\mu\rho\nu\sigma} + R_{\rho\nu}g_{\sigma\mu} - R_{\rho\sigma}g_{\nu\mu} - R_{\mu\nu}g_{\sigma\rho} + R_{\mu\sigma}g_{\nu\rho} + \frac{R}{2}(g_{\mu\nu}g_{\sigma\rho}-g_{\mu\sigma}g_{\nu\rho})\big]\nabla^{\rho}\nabla^{\sigma}f_\mathcal{G}+(\mathcal{G}f_\mathcal{G}- f)g_{\mu\nu} =\kappa^2T_{\mu\nu},\\\label{fe} \end{eqnarray} where the subscript $\mathcal{G}$ in $f_\mathcal{G}$ denotes the derivative with respect to $\mathcal{G}$ and $T_{\mu\nu}$ is the stress-energy tensor, defined as \begin{equation}\label{8} T_{\mu\nu}=(\rho+p_t)u_\mu u_\nu-p_tg_{\mu\nu}+(p_r-p_t)v_\mu v_\nu, \end{equation} where $u_\mu=e^{\nu/2} \delta^0_\mu$ is the four-velocity and $v_\mu=e^{\lambda/2}\delta^1_\mu$ is the unit radial vector. We can re-write the modified field equations (\ref{fe}) in an alternative form familiar from GR as \begin{eqnarray}\label{teff} G_{\mu\nu} =\kappa^2T_{\mu\nu}^{eff}, \end{eqnarray} where the effective stress-energy tensor $T_{\mu\nu}^{eff}$ is given by \begin{eqnarray}\label{70} T_{\mu\nu}^{eff}=T_{\mu\nu}- \frac{8}{\kappa^2}\big[R_{\mu\rho\nu\sigma} + R_{\rho\nu}g_{\sigma\mu} - R_{\rho\sigma}g_{\nu\mu} - R_{\mu\nu}g_{\sigma\rho} + R_{\mu\sigma}g_{\nu\rho} + \frac{R}{2}(g_{\mu\nu}g_{\sigma\rho}-g_{\mu\sigma}g_{\nu\rho})\big]\nabla^{\rho}\nabla^{\sigma}f_\mathcal{G} -(\mathcal{G}f_\mathcal{G}- f)g_{\mu\nu}. \end{eqnarray} It is interesting to notice that the effective energy-momentum tensor consists of the usual matter content together with contributions of geometric origin. This approach is appealing, as it may provide all the matter components that are essential to unveil the phenomena of dark energy and accelerated expansion. Here we adopt the convention in which the signature of the Riemannian metric is $(+,-,-,-)$, the Riemann tensor is $R^{\sigma}_{\mu\nu\rho}=\partial_{\nu}\Gamma^{\sigma}_{\mu\rho}-\partial_{\rho}\Gamma^{\sigma}_{\mu\nu}+ \Gamma^{\omega}_{\mu\rho}\Gamma^{\sigma}_{\omega\nu}-\Gamma^{\omega}_{\mu\nu}\Gamma^{\sigma}_{\omega\rho}$, and the covariant derivative of a vector field is $\nabla_{\mu}V_{\nu}=\partial_{\mu}V_{\nu}-\Gamma^{\lambda}_{\mu\nu}V_{\lambda}$. Further, to examine and investigate the configurations of compact stars, we choose a spacetime which is static, non-rotating and spherically symmetric everywhere \cite{Krori}.
\begin{equation}\label{9} ds^{2}= e^{\nu(r)}dt^2-e^{\lambda(r)}dr^2-r^2(d\theta^2+\sin^2 \theta d\phi^2). \end{equation} Using equations (\ref{70}) and (\ref{9}), and after some manipulation, we acquire the following set of modified field equations for the anisotropic stellar system: \begin{eqnarray}\label{10} \rho^{eff}&&=~~\rho-8e^{-2\lambda}(f_\mathcal{GGG}\mathcal{G}'^{2}+f_\mathcal{GG}\mathcal{G}'')(\frac{e^{\lambda}-1}{r^2}) +4e^{-2\lambda}\lambda' \mathcal{G}'f_\mathcal{GG}(\frac{e^{\lambda}-3}{r^2})-(\mathcal{G}f_\mathcal{G}-f), \end{eqnarray} \begin{eqnarray}\label{11} p_{r}^{eff}&&=~~ ~p_{r}-4e^{-2\lambda}\nu'\mathcal{G}'f_\mathcal{GG}(\frac{e^{\lambda}-3}{r^2})+ (\mathcal{G}f_\mathcal{G}-f),\quad\quad\quad\quad\quad\\\nonumber \end{eqnarray} \begin{eqnarray}\label{12} p_{t}^{eff}&&=~~p_{t}-\frac{4e^{-2\lambda}\nu'}{r}(f_\mathcal{GGG}\mathcal{G}'^{2}+f_\mathcal{GG}\mathcal{G}'')-\frac{2e^{-2\lambda}{\nu'}^{2}f_\mathcal{GG}\mathcal{G}'}{r}-\frac{2e^{-2\lambda}f_\mathcal{GG}\mathcal{G}'}{r}(2\nu''-3\nu'\lambda')+ (\mathcal{G}f_\mathcal{G}-f). \end{eqnarray} Here $\rho$, $p_r$ and $p_t$ are the usual energy density, radial pressure and transverse pressure, respectively. This system of three equations has five unknown functions, namely $\rho^{eff}$, $p^{eff}_{r}$, $p^{eff}_{t}$, $\lambda$ and $\nu$. In the above equations of motion, the expressions for $\rho^{eff}$, $p^{eff}_{r}$ and $p^{eff}_{t}$ are equal to the components of the Einstein tensor. \\\\\textbf{Theorem:} Given a solution of Eqs. (\ref{10})-(\ref{12}), defined by the functions $T_1=\big\{\nu(r), \lambda(r), f(\mathcal{G})\big\}$, if we have a solution in GR defined by $T_2=\big\{\nu(r), \lambda(r)\big\}$, then all the physical attributes are identical for $T_1$ and $T_2$, since $T^{eff}_{\mu\nu}$ in (\ref{teff}) plays the role of the stress-energy tensor in GR \cite{M.V}. \\\\Eqs. (\ref{10})-(\ref{12}) are highly intricate and non-linear because of the variable function $f(\mathcal{G})$ involved. Here we consider $\nu=Br^2+2\ln C$ and $\lambda=\ln(1 + ar^2+br^4)$ with constant parameters $a$, $b$, $C$ and $B$. With these metric potentials, the spacetime (\ref{9}) is known as the Tolman-Kuchowicz spacetime \cite{Jasim}. In order to examine the structure and stability of compact stars, we consider viable $f(\mathcal{G})$ gravity models which enable us to compute the effective energy density $\rho^{eff}$, radial pressure $p^{eff}_{r}$ and transverse pressure $p^{eff}_{t}$. \section{The Realistic Viable $f(\mathcal{G})$ Gravity Models} In this section we give the analysis of compact stars using two realistic $f(\mathcal{G})$ gravity models. \subsection{Model $\mathbf{1}$} First, we consider a power-law model with an additional logarithmic correction term \cite{ Schmidt}: \begin{eqnarray}\label{13} f_{1}=\alpha_1\mathcal{G}^{n_1}+\beta_1\mathcal{G}ln(\mathcal{G}), \end{eqnarray} where $\alpha_1$, $\beta_1$ and $n_1$ are arbitrary constants to be estimated on the basis of several physical requirements. This model yields cosmic results that are well consistent with observations, owing to the extra degrees of freedom allowed in the dynamics \cite{Setare}. To probe the possible existence of compact stars for the model (\ref{13}), the values of the constants are picked in such a way that the effective energy density, the effective pressures and all energy conditions remain positive for the given model under investigation.
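Because Eqs. (\ref{10})-(\ref{12}) involve the derivatives $f_\mathcal{G}$, $f_\mathcal{GG}$ and $f_\mathcal{GGG}$ of the chosen model, a short SymPy sketch can generate these derivatives symbolically for model (\ref{13}) and help avoid hand-differentiation errors; this is only an illustration of the model definition, not a solver for the field equations.
\begin{verbatim}
import sympy as sp

G, a1, b1, n1 = sp.symbols('G alpha_1 beta_1 n_1', positive=True)

# Model 1: f(G) = alpha_1 * G**n_1 + beta_1 * G * ln(G)
f1 = a1 * G**n1 + b1 * G * sp.log(G)

f_G   = sp.diff(f1, G)       # enters the radial-pressure equation
f_GG  = sp.diff(f1, G, 2)    # multiplies the G'' and lambda' G' terms
f_GGG = sp.diff(f1, G, 3)    # multiplies G'^2 in the density equation

for name, expr in [("f_G", f_G), ("f_GG", f_GG), ("f_GGG", f_GGG)]:
    print(name, "=", sp.simplify(expr))
\end{verbatim}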
By making use of Eq.(\ref{13}), the explicit relation for the effective energy density, effective radial pressure and transverse pressure have been found to be: \begin{eqnarray}\nonumber &&\rho^{eff}=\rho+\frac{8}{(1 +ar^2 + br^4)^3 \mathcal{G}^{3}}\bigg[(a+2br^2)(ar^2+br^4-2)\mathcal{G}(\beta_1\mathcal{G} +( n_1-1 )n_1 \alpha_1 \mathcal{G}^{n_1})\mathcal{G}'-(a\\\nonumber &&+br^2)(1+ar^2+br^4)(\beta_1\mathcal{G} (-\mathcal{G}'^2+ \mathcal{GG''})+( n_1-1) n_1 \alpha_1\mathcal{G}^{n_1}(( n_1-2 )\mathcal{G}'^2+ \mathcal{GG''}))\bigg]\\\label{14} &&+\alpha_1\mathcal{G}^{n_1}-n_1\alpha_1\mathcal{G}^{n_1}-\beta_1\mathcal{G}(1+ln(\mathcal{G})) +\beta_1 \mathcal{G}ln(\mathcal{G}), \end{eqnarray} \begin{eqnarray}\nonumber &&p^{eff}_{r}=p_{r}+\frac{8B(a r^2 +br^4-2)}{r(1+ar^2+br^4)^2\mathcal{G}^2}\bigg[(\beta_1\mathcal{G}+(n_1-1)n_1\alpha_1\mathcal{G}^{n_1})\mathcal{G}'\bigg]-\alpha_1\mathcal{G}^{n_1} -\beta_1\mathcal{G} ln(\mathcal{G})+ n_1\alpha_1\mathcal{G}^{n_1}\\\nonumber &&-\beta_1\mathcal{G}(1+ln(\mathcal{G})),~~~~~~~~~~~~~~~\\\label{15} \end{eqnarray} \begin{eqnarray}\nonumber &&p^{eff}_{t}=p_{t}-\frac{1}{r (1 + a r^2 + b r^4)^3}r\mathcal{G}^{3}\bigg[-8B^2r^2(1 + a r^2 + b r^4)\mathcal{G}(\beta_1\mathcal{G}+( n_1-1)\alpha_1\mathcal{G}^{n_1})\mathcal{G}' \\\nonumber &&+ 8B (2a r^2 + 5b r^4-1)\mathcal{G}(\beta_1\mathcal{G}+(n_1-1)n_1\alpha_1\mathcal{G}^{n_1}\mathcal{G})\mathcal{G'}+r(1+ar^2 + br^4)(8 B\beta_1\mathcal{G}(\mathcal{G}'^2+ \mathcal{GG''})\\\nonumber &&-8B(n_1-1)n_1\alpha_1\mathcal{G}^{n_1}((n_1-2)\mathcal{G}'^2+ \mathcal{GG''}))\bigg]-\alpha_1\mathcal{G}^{n_1}+ n_1\alpha_1\mathcal{G}^{n_1}-\beta_1\mathcal{G}ln(\mathcal{G})+\beta_1\mathcal{G}(1+ln(\mathcal{G})). \\\label{16} \end{eqnarray} \subsection{Model $2$} Next, we consider the realistic $f(\mathcal{G})$ gravity model \cite{Bamba}, which reproduce the current cosmic acceleration, namely \begin{eqnarray}\label{17} f_{2}= \alpha_2\mathcal{G}^{n_2}(\beta_2 \mathcal{G}^{m}+1), \end{eqnarray} where $\alpha_2$, $\beta_2$, $m$ and $n_2$ are arbitrary constants, and $n_2>0$. The model regard to $f_{2}$ is considered worthwhile for the treatment of the finite time future singularities \cite{Noj3}. The physical features of the compact stars for the model (\ref{17}) by using Eqs. 
(\ref{10})-(\ref{12}), are given by the following relations: \begin{eqnarray}\nonumber &&\rho^{eff}=\rho + \frac{8\mathcal{G}^{(n_2-3)}}{ r(1 + a r^2 + b r^4)^3}\bigg[(a+ 2br^2)( a r^2 + b r^4-2 )\alpha_2\mathcal{G} (( n_2-1 ) n_2 + ( m + n_2-1 ) (m + n_2)\beta_2\mathcal{G}^m) \mathcal{G}' \\\nonumber && -r^3(a+br^2)(1 + a r^2 + b r^4)(((n_2-2)(n_2-1)n_2\alpha_2+((n_2-2)(n_2-1)n_2\alpha_2+(m-2)(m-1)m\alpha_2\\\nonumber &&+3mn_2(m+n_2-2)\alpha_2)\beta_2\mathcal{G}^m)\mathcal{G}'^2+ \alpha_2 \mathcal{G}((n_2-1)n_2+(m+n_2-1)(m+n_2)\beta_2 \mathcal{G}^m) \mathcal{G}''\bigg]+\alpha_2 \mathcal{G}^{n_2} (1 + \beta_2 \mathcal{G}^{m})\\\nonumber && - \alpha_2\mathcal{G}^ {n_2} (n_2 + (m + n_2)\beta_2 \mathcal{G}^m),\\\label{18} \end{eqnarray} \begin{eqnarray}\nonumber &&p^{eff}_{r}=p_{r}+\frac{8B(ar^2+br^4-2)\alpha_2\mathcal{G}^{(n_2-2)}}{r (1 + a r^2 + b r^4)^2}\bigg[(n_2-1)n_2 + (m^2+(n_2-1)n_2+m(2n_2-1))\beta_2 \mathcal{G}^m\bigg]\mathcal{G}'-\alpha_2 \mathcal{G}^{n_2} (1\\\label{l9} && + \beta_2 \mathcal{G}^{m}) + \alpha_2\mathcal{G}^ {n_2} (n_2 + (m + n_2)\beta_2 \mathcal{G}^m),\\\nonumber \end{eqnarray} \begin{eqnarray}\nonumber &&p^{eff}_{t}=p_{t}+\frac{8B\mathcal{G}^{(n_2-3)}}{ r(1 + a r^2 + b r^4)^3}\bigg[Br^2(1 + a r^2 + b r^4)\alpha_2\mathcal{G}((n_2-1)n_2+(m+n_2-1)(m +n_2)\beta_2 \mathcal{G}^m)\mathcal{G}' \\\nonumber &&+(2ar^2+5br^4-1)\alpha_2\mathcal{G}((n_2-1)n_2+(m+n_2-1)(m+n_2)\beta_2 \mathcal{G}^m)\mathcal{G}')-r(1 + a r^2 + b r^4)(((n_2\\\nonumber &&-2)(n_2-1)n_2\alpha_2+((n_2 -2)(n_2-1)n_2\alpha_2+(m-2)(m-1)m\alpha_2 +3mn_2(m+n_2-2)\alpha_2)\beta_2\mathcal{G}^m)\mathcal{G}'^2\\\label{20} &&+\alpha_2 \mathcal{G}((n_2-1)n_2+(m+n_2-1)(m+n_2)\beta_2 \mathcal{G}^m) \mathcal{G}''\bigg] -\alpha_2 \mathcal{G}^{n_2} (1 + \beta_2 \mathcal{G}^{m})+ \alpha_2\mathcal{G}^{n_2} (n_2 + (m + n_2)\beta_2 \mathcal{G}^m).\\\nonumber \end{eqnarray} \subsection{Boundary Conditions} The absence of physical and geometric singularities within the star is among the most important requirements in the study of compact stellar objects. To check for singularities, we analyze the behavior of both metric potentials $e^{\nu(r)}$ and $e^{\lambda(r)}$ at the center of the structure, $r=0$. For physical viability and stability of the model, the metric potentials should be singularity-free, positive, monotonically increasing and regular inside the compact stellar structure. The variation of the metric potentials at the center of the star, i.e. $e^{\lambda(r=0)}=1$ and $e^{\nu(r=0)}=C^2$, is shown in Fig. $\ref{Fig:1}$. It is observed that the chosen metric potentials are consistent with the above-mentioned conditions. The graphical behavior shows that the value of both metric potentials is minimum at the center, increases nonlinearly, and becomes maximum at the boundary surface. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=enu.eps,width=0.38\linewidth} & \epsfig{file=elambda.eps,width=0.38\linewidth} & \end{tabular} \caption{Behavior of metric potentials for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$.} \label{Fig:1} \end{figure} \FloatBarrier Further, for well-behaved compact stellar objects the following conditions should be satisfied: \begin{itemize} \item $c^2\rho^{eff}$ should always be greater than $p^{eff}$ within the range $0\leq r\leq R.$ \item At the surface boundary $r=R$, the radial pressure must be zero, i.e. $p^{eff}_r(r=R)=0$.
\item The effective density gradient $\frac{d\rho^{eff}}{dr}$ must be negative for the range $0\leq r\leq R$, i.e. $(\frac{d\rho^{eff}}{dr})_{r=0}=0$ and $(\frac{d^2\rho^{eff}}{dr^2})_{r=0}<0$. \item The effective pressure gradient $\frac{dp^{eff}_{r}}{dr}$ must be negative for the range $0\leq r\leq R$, i.e. $(\frac{dp^{eff}_{r}}{dr})_{r=0}=0$ and $(\frac{d^2p^{eff}_{r}}{dr^2})_{r=0}<0$. These two conditions imply that the effective energy density and the effective radial pressure should decrease towards the surface boundary of the structure. \item The speed of sound must not exceed the speed of light, i.e. $\frac{dp^{eff}}{c^2d\rho^{eff}}<1$. \item The adiabatic index $\Gamma=\frac{\rho^{eff}+p^{eff}_{r}}{p^{eff}_r}(\frac{dp^{eff}_{r}}{d\rho^{eff}})= \frac{\rho^{eff}+p^{eff}_{r}}{p^{eff}_r}v^2_{sr}>4/3$ is a necessary condition for stability. \item The surface redshift $z_{s}$ must be finite and positive. \end{itemize} Here, we assume $c=1$. These physical attributes, such as the effective energy density, pressure, sound speed, mass and surface redshift, are the most significant features describing the structure of a compact star. To check that these features behave appropriately and are capable of characterizing realistic stars, we plot their graphs. \section{Exterior Metric and Matching Conditions} The intrinsic boundary metric remains the same whether it is approached from the interior or the exterior geometry of the star. This ensures that the metric components remain continuous across the boundary surface, irrespective of the coordinate system. In GR, the Schwarzschild solution is considered the appropriate choice among the diverse possibilities for the matching conditions when exploring compact stellar objects. According to the Jebsen-Birkhoff theorem, every spherically symmetric vacuum solution of the field equations must be static and asymptotically flat. Furthermore, in modified $f(\mathcal{G})$ gravity the Schwarzschild solution may be accommodated with a proper choice of viable $f(\mathcal{G})$ gravity models for nonzero density and pressure. Perhaps this fact leads to the violation of Birkhoff's theorem in modified theories of gravity \cite{Faraoni}. A lot of work on matching conditions has been done by many authors (\cite{Abbas2}-\cite{Abbas3},\cite{Bhar}). Goswami et al. \cite{Goswami} proved that the junction conditions appearing in the extended theories of gravity impose restrictions on the stellar objects. For this purpose many authors \cite{Cooney}-\cite{Ast1} have considered the Schwarzschild solution, obtaining some fascinating results. To solve the field equations under the constraint that the radial pressure vanishes at the boundary, $p_r(r=R)=0$, we match the intrinsic metric (\ref{9}) to the vacuum Schwarzschild exterior metric, given by \begin{equation}\label{18a} ds^2=(1-\frac{2M}{r}) dt^2 -(1-\frac{2M}{r})^{-1} dr^2 -r^2 d\theta^2 -r^2\sin^2\theta d\phi^2, \end{equation} where ``$M$'' stands for the total mass within the boundary of the compact star. At the boundary surface $r = R$, the continuity of the metric potentials yields the following expressions: \begin{equation}\label{19a} g^{-}_{tt} = g^{+}_{tt}, ~~~~ g^{-}_{rr} = g^{+}_{rr},~~~~\frac{\partial g^{-}_{tt}}{\partial r} = \frac{\partial g^{+}_{tt}}{\partial r}, \end{equation} where the interior and exterior solutions are symbolized by $(-)$ and $(+)$, respectively.
The values of the constants $a,~b,~B$ and $C$ are obtained by matching the interior and exterior metrics as \begin{equation}\label{20a} a=\frac{1}{R(R-2M)}+\frac{MR}{2(R-2M)^4}-\frac{1}{R^2},~~~~~~~~~~~b=\frac{-M}{2R(R-2M)^4}, \end{equation} \begin{equation}\label{21a} B=\frac{M}{R^3}(1-\frac{2M}{R})^{-1},~~~~~~~~~~~~~~~~~~~~~~ C=e^{\frac{1}{2}{[\ln(1-\frac{2M}{R})-\frac{M}{R}(1-\frac{2M}{R})^{-1}]}}. \end{equation} The approximate masses and radii of the compact stars $Cen~ X-3$, $EXO ~1785-248$ and $LMC~ X-4$ are used to determine the constants $a$, $b$, $B$ and $C$ given in Table $\ref{tab1}$. \begin{table}[ht] \caption{The approximated values of the constants $a$, $b$, $B$ and $C$ for the compact star candidates $Cen~ X-3$, $EXO~ 1785-248$ and $LMC~ X-4$.} \centering \begin{tabular}{|p{2.8cm}|p{3.3cm}| p{3.3cm}| p{3.3cm}|} \hline \hline Star Model & ~~~ $Cen~ X-3$ &~~~ $EXO~ 1785-248$ &~~~ $LMC ~X-4$ \\ \hline ~~~~ $M$ &~~~ 1.49 $\pm$ 0.08 ~~\cite{Gango} &~~~ 1.3 $\pm$ 0.2~~ \cite{Ozel}& ~~~ 1.29 $\pm$ 0.05~~ \cite{Gango} \\ \hline ~~~~ $R$ &~~~ 9.508 $\pm$ 0.115 &~~~ 9.189 $\pm$ 0.396 &~~~ 9.170 $\pm$ 0.098 \\ \hline ~~~~ $\mu=M/R$ &~~~ 0.2226 & ~~~ 0.2166 & ~~~ 0.2160 \\ \hline ~~~~ $a$ &~~~ 0.0224207 &~~~ 0.0197478 & ~~~ 0.0212739 \\ \hline ~~~~ $b$ &~~~ -0.000151008 &~~~ -0.000124379 &~~~ -0.00014514 \\ \hline ~~~~ $B$ & ~~~ 0.00454877 &~~~ 0.0041604 &~~~ 0.00449728\\ \hline ~~~~ $C$ &~~~ 0.609389 & ~~~ 0.621865 &~~~ 0.623022 \\ \hline \end{tabular} \label{tab1} \end{table} \FloatBarrier
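As a numerical cross-check of Table \ref{tab1}, the following sketch evaluates the closed forms (\ref{20a}) and (\ref{21a}) in geometrized units ($G=c=1$, lengths in km), converting the mass from solar masses with the standard factor $GM_\odot/c^2\approx1.4766$ km. Since the unit conventions behind Table \ref{tab1} are not spelled out, the printed values need not match the tabulated ones exactly.
\begin{verbatim}
import numpy as np

def tk_constants(M_sun, R_km):
    """Constants of the Tolman-Kuchowicz matching, Eqs. (20a)-(21a),
    in geometrized units (G = c = 1, lengths in km)."""
    M = 1.4766 * M_sun                      # solar masses -> km
    R = R_km
    a = 1.0/(R*(R - 2*M)) + M*R/(2*(R - 2*M)**4) - 1.0/R**2
    b = -M / (2*R*(R - 2*M)**4)
    B = (M / R**3) / (1.0 - 2*M/R)
    C = np.exp(0.5*(np.log(1.0 - 2*M/R) - (M/R)/(1.0 - 2*M/R)))
    return a, b, B, C

# Cen X-3: M = 1.49 solar masses, R = 9.508 km (Table 1)
print(tk_constants(1.49, 9.508))
\end{verbatim}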
\section{Physical Aspects of $f(\mathcal{G})$ Gravity Model} In this section, we study various physical attributes of the anisotropic dense stellar objects, such as the effective energy density, the radial and transverse pressure, the energy bounds, the anisotropy factor, the mass function, the compactification parameter, and the surface redshift and adiabatic index of our proposed models, for specific values of the model parameters. \subsection{Energy Density and Pressure Evolutions} The effective energy density and pressure components inside the stellar system attain maximal values due to the dense nature of compact relativistic objects. The graphical analysis of the effective energy density and of the radial and transverse pressure for the considered compact star candidates, with respect to the fractional radial coordinate $r/R$, is shown in Figs. $\ref{Fig:2}$ and $\ref{Fig:3}$. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=densitym1.eps,width=0.27\linewidth} & \epsfig{file=pressurem1.eps,width=0.27\linewidth} & \epsfig{file=pressureptm1.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of effective energy density (left panel), effective radial pressure (middle panel) and effective transverse pressure (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig:2} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=densitym2.eps,width=0.27\linewidth} & \epsfig{file=pressurem2.eps,width=0.27\linewidth} & \epsfig{file=pressureptm2.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of effective energy density (left panel), effective radial pressure (middle panel) and effective transverse pressure (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig:3} \end{figure} \FloatBarrier From this graphical behavior it is clear that at the center of the compact stars the effective energy density and the pressure components attain their maximum values, and they approach zero towards the surface boundary, which indicates that our stellar objects are highly compact. These plots clearly demonstrate the existence of an anisotropic configuration of compact stars for our suggested models in $f(\mathcal{G})$ gravity. The numerical values of the effective central density and radial pressure for the three compact stars are shown in Tables $\ref{tab2}$ and $\ref{tab3}$. These physical features are positive and finite at the center, which confirms that our present system is free from physical and geometrical singularities. \begin{table}[H] \caption{The numerical values of central density and pressure for the parameters $\rho=0.2$, $p_{r}=0.8$, $p_{t}=0.0011$, $n_1=2$, $\alpha_1=-3.64534\times10^6$, $\beta_1=10$ under viable $f(\mathcal{G})$ gravity model $1$.} \centering \begin{tabular}{|p{2.7cm}|p{2.5cm}| p{2.5cm}| p{2.8cm}| p{2.8cm}|} \hline\hline $Star Model$ &~~~ $M$ & ~~~ $R$ &~~~ $\rho_c ~(g/cm^3)$ &~~~ $p_r~~(dyne/cm^2)$ \\ \hline $Cen~ X-3$ & ~~1.49 $\pm$ 0.08 &~~9.508 $\pm$ 0.115 &~~ $2.66509\times10^{15}$ & ~~ $2.86296\times10^{35}$ \\ \hline $EXO~ 1785-248$ & ~~1.3 $\pm$ 0.2 & ~~9.189 $\pm$ 0.396 &~~ $1.68087\times10^{15}$ &~~ $1.88015\times10^{35}$ \\ \hline $LMC ~X-4$ &~~1.29 $\pm$ 0.05 & ~~9.170 $\pm$ 0.098 &~~~$2.27270\times10^{15}$ & ~~ $2.54677\times10^{35}$ \\ \hline \end{tabular} \label{tab2} \end{table} \begin{table}[H] \caption{The numerical values of central density and pressure for the parameters $\rho=0.2$, $p_{r}=1.5$, $p_{t}=0.5$, $n_2=2$, $m=1$, $\alpha_2=-6.74308\times10^{-6}$, $\beta_2=2$ under viable $f(\mathcal{G})$ gravity model $2$.} \centering \begin{tabular}{|p{2.7cm}|p{2.5cm}| p{2.5cm}| p{2.8cm}| p{2.8cm}| } \hline\hline $Star Model$ & ~~~$M$ &~~~ $R$ & ~~~$\rho_c~(g/cm^3)$ & ~~~$p_r~~(dyne/cm^2)$ \\ \hline $Cen~ X-3$ &~~ 1.49 $\pm$ 0.08 & ~~9.508 $\pm$ 0.115 &~~$4.85481\times10^{15}$ & ~~~$5.2131\times10^{35}$ \\ \hline $EXO~ 1785-248$ & ~~~1.3 $\pm$ 0.2 &~~9.189 $\pm$ 0.396 &~~ $3.07022\times10^{15}$ &~~ $3.43362\times10^{35}$ \\ \hline $LMC ~X-4$ &~~ 1.29 $\pm$ 0.05 & ~~9.170 $\pm$ 0.098 &~~ $4.14364\times10^{15}$ &~~ $4.64173\times10^{35}$ \\ \hline \end{tabular} \label{tab3} \end{table} The radial derivatives of the effective energy density, radial pressure and transverse pressure are denoted by $\frac{d\rho^{eff}}{dr}$, $\frac{dp^{eff}_{r}}{dr}$ and $\frac{dp^{eff}_{t}}{dr}$, respectively. The graphical representation of these derivatives is shown in Figs. $\ref{Fig4}$ and $\ref{Fig5}$. We observe that the first-order derivatives show a decreasing evolution, expressed as \begin{equation}\label{a} \frac{d\rho^{eff}}{dr}<0,~~~~~~~~~~~~~~~~~~~~\frac{dp^{eff}_{r}}{dr}<0. \end{equation} \begin{figure}[h!]
\begin{tabular}{cccc} \epsfig{file=derivativerowm1.eps,width=0.27\linewidth} & \epsfig{file=derivativeprm1.eps,width=0.27\linewidth} & \epsfig{file=derivativeptm1.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of $\frac{d\rho^{eff}}{dr}$ (left panel), $\frac{dp^{eff}_r}{dr}$ (middle panel) and $\frac{dp^{eff}_t}{dr}$ (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig4} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=derivativerowm2.eps,width=0.27\linewidth} & \epsfig{file=derivativeprm2.eps,width=0.27\linewidth} & \epsfig{file=derivativeptm2.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of $\frac{d\rho^{eff}}{dr}$ (left panel), $\frac{dp^{eff}_r}{dr}$ (middle panel) and $\frac{dp^{eff}_t}{dr}$ (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig5} \end{figure} \FloatBarrier It is also noticed that the density and radial pressure attain their maximum values at the center $r=0$, since \begin{equation}\label{b} \frac{d\rho^{eff}}{dr}=0=\frac{dp^{eff}_r}{dr}, ~~~~~~~~~~\frac{d^2\rho^{eff}}{dr^2}<0,~~~~~~~~~~\frac{d^2p^{eff}_{r}}{dr^2}<0. \end{equation} The graphical analysis is shown in Figs. $\ref{Fig6}$ and $\ref{Fig7}$, which clearly depict the compactness of the stars. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=doublederivativerowm1.eps,width=0.27\linewidth} & \epsfig{file=doublederivativeprm1.eps,width=0.27\linewidth} & \epsfig{file=doublederivativeptm1.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of $\frac{d^2\rho^{eff}}{dr^2}$ (left panel), $\frac{d^2p^{eff}_r}{dr^2}$ (middle panel) and $\frac{d^2p^{eff}_t}{dr^2}$ (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig6} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=doublederivativerowm2.eps,width=0.27\linewidth} & \epsfig{file=doublederivativeprm2.eps,width=0.27\linewidth} & \epsfig{file=doublederivativeptm2.eps,width=0.27\linewidth} & \end{tabular} \caption{Evolution of $\frac{d^2\rho^{eff}}{dr^2}$ (left panel), $\frac{d^2p^{eff}_r}{dr^2}$ (middle panel) and $\frac{d^2p^{eff}_t}{dr^2}$ (right panel) for stars, $Cen X-3$, $EXO 1785-248$ and $LMC X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig7} \end{figure} \FloatBarrier \subsection{Energy Conditions} Some physical properties known as energy conditions are very helpful in investigating the presence of a realistic matter distribution. Furthermore, these conditions play a vital role in identifying the normal or exotic nature of matter inside the stellar structure model. These energy conditions have captured much attention in the discussion of several cosmological issues. By using the energy conditions one can easily examine the validity of the second law of black hole thermodynamics and of the Hawking-Penrose singularity theorems \cite{Hawking}. In cosmology many riveting results have been derived by the use of energy conditions \cite{Santos}-\cite{Bertolami}. These energy conditions are classified into null, weak, strong and dominant energy bounds, symbolized by NEC, WEC, SEC and DEC, respectively.
These conditions in the presence of the anisotropic fluid (\ref{9}) for curvature-matter coupled gravity \cite{Gasperini} are defined as \begin{eqnarray}\nonumber &&NEC:~~~~~~\rho^{eff}+p^{eff}_r\geq0,~~~\rho^{eff}+p^{eff}_t\geq0\nonumber, \\&&WEC:~~~~~~\rho^{eff}\geq0,~~~\rho^{eff}+p^{eff}_r\geq0,~~~\rho^{eff}+p^{eff}_t\geq0\nonumber, \\&&SEC:~~~~~~\rho^{eff}+p^{eff}_r\geq0,~~~\rho^{eff}+p^{eff}_t\geq0,~~~\rho^{eff}+p^{eff}_r+2p^{eff}_t\geq0\nonumber, \\&&DEC:~~~~~~\rho^{eff}-p^{eff}_r\geq0,~~~\rho^{eff}-p^{eff}_t\geq0. \end{eqnarray} All energy conditions are satisfied for our chosen $f(\mathcal{G})$ models, as represented graphically in Figs. \ref{Fig:8} and \ref{Fig:9}. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=densitym1.eps,width=0.27\linewidth} & \epsfig{file=row+prm1.eps,width=0.27\linewidth} & \epsfig{file=row+ptm1.eps,width=0.27\linewidth} & \end{tabular} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=row-prm1.eps,width=0.27\linewidth} & \epsfig{file=row-ptm1.eps,width=0.27\linewidth} & \epsfig{file=rowprptm1.eps,width=0.27\linewidth} & \end{tabular} \caption{Plot of the energy conditions for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig:8} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=densitym2.eps,width=0.27\linewidth} & \epsfig{file=row+prm2.eps,width=0.27\linewidth} & \epsfig{file=row+pt.eps,width=0.27\linewidth} & \end{tabular} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=row-prm2.eps,width=0.27\linewidth} & \epsfig{file=row-ptm2.eps,width=0.27\linewidth} & \epsfig{file=rowprptm2.eps,width=0.27\linewidth} & \end{tabular} \caption{Plot of the energy conditions for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig:9} \end{figure} \FloatBarrier \subsection{The Modified TOV Equation for $f(\mathcal{G})$ Gravity} The energy conservation equation for our system is
\begin{equation}\label{23} \nabla^\mu T^{eff}_{\mu\nu}=0. \end{equation}
The modified form of the generalized TOV equation for $f(\mathcal{G})$ gravity can be constructed as
\begin{equation}\label{24} \frac{dp_{r}}{dr}+\frac{\nu'}{2}(\rho +p_r)+\frac{2}{r}(p_{r}-p_{t})=0. \end{equation}
As proposed by Tolman \cite{Tolman} and later by Oppenheimer and Volkoff \cite{Oppen}, a physically acceptable model must be stable under three forces, viz. the gravitational force ($F_g$), the hydrostatic force ($F_h$) and the anisotropic force ($F_a$), in such a way that their sum vanishes for the system to be in equilibrium,
\begin{equation}\label{25} F_g +F_h+F_a=0, \end{equation}
where $F_g= \frac{\nu'}{2}(\rho +p_r)$, $F_h= \frac{dp_{r}}{dr}$ and $F_a= \frac{2}{r}(p_{r}-p_{t})$. From Fig. \ref{Fig:10} it is clear that the mutual effect of the forces $F_g$, $F_h$ and $F_a$ justifies the equilibrium condition for our system.
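Numerically, the three forces are evaluated on the radial grid and their sum is checked against zero. A hedged Python sketch follows; the profiles and the value of the metric parameter $B$ are placeholders for the actual model expressions, so only the structure of the check is meaningful here.
\begin{verbatim}
import numpy as np

B, R = 0.002, 9.5                  # illustrative metric parameter and radius
r = np.linspace(1e-3, R, 400)
rho = 1.0 - 0.05 * r**2            # placeholder profiles (arbitrary units)
pr  = 0.30 * (1.0 - (r / R)**2)
pt  = 0.25 * (1.0 - (r / R)**2)

nu_prime = 2.0 * B * r             # nu = B r^2 + 2 ln C  =>  nu' = 2 B r
F_g = nu_prime / 2.0 * (rho + pr)  # gravitational force
F_h = np.gradient(pr, r)           # hydrostatic force
F_a = 2.0 * (pr - pt) / r          # anisotropic force
# Eq. (25): the sum vanishes for a genuine solution of the field equations;
# with these toy profiles the residual merely illustrates the test.
print(np.max(np.abs(F_g + F_h + F_a)))
\end{verbatim}
\begin{figure}[h!]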
\begin{tabular}{ccccc} \epsfig{file=forcesm1.eps,width=0.32\linewidth} & \epsfig{file=forcesm2.eps,width=0.32\linewidth} & \end{tabular} \caption{Behavior of the hydrostatic force ($F_h$), gravitational force ($F_g$) and anisotropic force ($F_a$) for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $1$ (left panel) and model $2$ (right panel).} \label{Fig:10} \end{figure} \FloatBarrier \subsection{Stability analysis} The stability of the stellar structure plays an important role in examining the physical consistency of the models, and in studies of the evolution of stellar configurations it is regarded as a critical issue; a great deal of work on stability analysis has been carried out by many researchers. Here we adopt Herrera's cracking concept \cite{Her} to probe the stability of our compact star candidates through the radial and transverse sound speeds, symbolized by $v^2_{sr}$ and $v^2_{st}$ and defined as
\begin{equation}\label{27} v^2_{sr}=\frac{dp^{eff}_{r}}{d\rho^{eff}},~~~~~\text{and}~~~~~~ v^2_{st}=\frac{dp^{eff}_{t}}{d\rho^{eff}}. \end{equation}
To preserve causality, the radial and transverse sound speeds must lie in the interval $[0, 1]$, i.e. $0\leq v^2_{sr}\leq1$ and $0\leq v^2_{st}\leq1$ everywhere inside the star, for a physically stable stellar structure. Herrera and collaborators \cite{Her}-\cite{Prisco} introduced the cracking concept to explore the potentially stable/unstable configurations of stellar structures, in which the potentially stable/unstable regions within a matter configuration are determined by the difference of the sound speeds. Specifically, the region in which the radial sound speed is greater than the transverse sound speed, so that $0\leq |v^2_{st}-v^2_{sr}|\leq1$, is known as potentially stable, while in an unstable region this inequality does not hold. The evolution of the radial and transverse sound speeds for the compact star candidates $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$ can easily be seen from Figs. \ref{Fig:11} and \ref{Fig:12}, and it is noted that the matter configuration is stable, as discussed. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=rsoundm1.eps,width=0.27\linewidth} & \epsfig{file=tsoundm1.eps,width=0.27\linewidth} & \epsfig{file=sounddiffm1.eps,width=0.27\linewidth} & \end{tabular} \caption{Variation of $v^2_{sr}$, $v^2_{st}$ and $|v^2_{st}-v^2_{sr}|$ for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig:11} \end{figure} \FloatBarrier \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=rsoundm2.eps,width=0.28\linewidth} & \epsfig{file=tsoundm2.eps,width=0.28\linewidth} & \epsfig{file=sounddiffm2.eps,width=0.28\linewidth} & \end{tabular} \caption{Variation of $v^2_{sr}$, $v^2_{st}$ and $|v^2_{st}-v^2_{sr}|$ for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig:12} \end{figure} \FloatBarrier \subsection{Mass-Radius Relationship} In this section we examine the mass of the compact stars as a function of the radial coordinate $r$, given by
\begin{equation}\label{28} M^{eff}= \int_{0} ^{R} 4\pi \rho^{eff} r^2 dr. \end{equation}
The behavior of the mass function in Fig. \ref{Fig:13} clearly shows that the mass of a compact star is directly proportional to its radius, and that the mass is regular at the core, i.e. $M^{eff}\rightarrow0$ as $r\rightarrow0$.
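The integral of Eq.~(\ref{28}) is evaluated numerically, e.g.\ with a cumulative trapezoidal rule. A brief sketch follows, with a toy density profile in arbitrary units standing in for the model density:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

R = 9.5
r = np.linspace(1e-4, R, 500)
rho_eff = 1.0 - 0.6 * (r / R)**2            # placeholder for the model density

M_eff = cumulative_trapezoid(4.0 * np.pi * rho_eff * r**2, r, initial=0.0)
mu = M_eff / r                              # compactification factor M^eff / r
print(M_eff[0], M_eff[-1])                  # M^eff -> 0 at the centre, maximal at r = R
\end{verbatim}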
From the graph we can also see that the maximum mass is obtained at $r=R$. Furthermore, the mass-radius relation obtained in the framework of $f(\mathcal{G})$ gravity is compatible with earlier studies of neutron stars \cite{Ast2}. Moreover, Buchdahl \cite{Buchdahl} found that the mass-to-radius ratio of a static spherically symmetric fluid model should be bounded as $\frac{2M}{R}<\frac{8}{9}$. \subsection{Compactification Factor and Redshift Analysis} The compactification factor $\mu(r)$ is the mass-to-radius ratio, defined as
\begin{equation}\label{29} \mu(r)=\frac{M^{eff}}{r} =\frac{1}{r}\int_{0} ^{R} 4\pi \rho^{eff} r^2 dr. \end{equation}
\\Furthermore, the surface redshift is determined by
\begin{equation}\label{30} z_{s}=\frac{1}{\sqrt{1-2\mu}}-1. \end{equation}
\begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=mass.eps,width=0.29\linewidth} & \epsfig{file=compact.eps,width=0.29\linewidth} & \epsfig{file=redshift.eps,width=0.29\linewidth} & \end{tabular} \caption{Variation of the mass (left panel), compactness factor (middle panel) and redshift (right panel) for the compact stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$.} \label{Fig:13} \end{figure} \FloatBarrier The strong physical interaction between the particles inside the star, and its equation of state, can be probed through the surface redshift. The variation of the surface redshift and of the compactness factor for the suggested compact stars with respect to the fractional radial coordinate is shown in Fig. \ref{Fig:13}: both vanish at the center and increase towards the boundary surface of the compact stars. In our case, all compact stars satisfy the Buchdahl condition, and the allowed maximum value of the surface redshift is $z_{s}\leq5.211$ \cite{B.V}. \subsection{Adiabatic Index Analysis} The adiabatic index describes the stiffness of the equation of state for a given energy density and also characterizes the stability of both relativistic and non-relativistic compact stars. The concept of dynamical stability against infinitesimal radial adiabatic perturbations of a stellar system was pioneered by Chandrasekhar \cite{Chand} and later tested successfully by many authors \cite{Heint}-\cite{Bombaci} for both isotropic and anisotropic stellar objects. In these works it is estimated that for a dynamically stable stellar object the adiabatic index must be greater than $\frac{4}{3}$ at all interior points. The adiabatic indices corresponding to the radial and transverse pressures of an anisotropic fluid are defined as
\begin{equation}\label{31} \Gamma_r=\frac{\rho^{eff}+p^{eff}_{r}}{p^{eff}_r}\left(\frac{dp^{eff}_{r}}{d\rho^{eff}}\right)= \frac{\rho^{eff}+p^{eff}_{r}}{p^{eff}_r}v^2_{sr}, \end{equation}
\begin{equation}\label{31b} \Gamma_t=\frac{\rho^{eff}+p^{eff}_{t}}{p^{eff}_t}\left(\frac{dp^{eff}_{t}}{d\rho^{eff}}\right)= \frac{\rho^{eff}+p^{eff}_{t}}{p^{eff}_t}v^2_{st}. \end{equation}
\begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=adibaticrm1.eps,width=0.33\linewidth} & \epsfig{file=adibatictm1.eps,width=0.33\linewidth} & \end{tabular} \caption{Variation of the adiabatic index for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $1$.} \label{Fig:14} \end{figure} \FloatBarrier
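In practice $v^2_{sr}$, $v^2_{st}$ and the indices $\Gamma_r$, $\Gamma_t$ are obtained from the tabulated profiles by numerical differentiation. A compact sketch with toy profiles, chosen only so that the script runs (the model profiles replace them):
\begin{verbatim}
import numpy as np

r = np.linspace(1e-3, 0.99, 300)
rho = 1.0 - 0.5 * r**2                  # placeholders for the effective quantities
pr  = 0.30 * (1.0 - r**2)
pt  = 0.25 * (1.0 - r**2)

v2_sr = np.gradient(pr, r) / np.gradient(rho, r)   # dp_r/drho
v2_st = np.gradient(pt, r) / np.gradient(rho, r)   # dp_t/drho
cracking_ok = np.all(np.abs(v2_st - v2_sr) <= 1)   # |v_st^2 - v_sr^2| in [0, 1]

Gamma_r = (rho + pr) / pr * v2_sr
Gamma_t = (rho + pt) / pt * v2_st
print(cracking_ok, Gamma_r.min() > 4/3, Gamma_t.min() > 4/3)
\end{verbatim}
\begin{figure}[h!]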
\begin{tabular}{cccc} \epsfig{file=adiabaticrm2.eps,width=0.33\linewidth} & \epsfig{file=adiabatictm2.eps,width=0.33\linewidth} & \end{tabular} \caption{Variation of the adiabatic index for the stars $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, under viable $f(\mathcal{G})$ gravity model $2$.} \label{Fig:15} \end{figure} \FloatBarrier The behavior of the adiabatic index is shown in Figs. \ref{Fig:14} and \ref{Fig:15}. From the graphs it is clear that the values of the adiabatic indices are greater than $\frac{4}{3}$, which confirms the stability of our proposed models. \subsection{The Measurement of Anisotropy} In compact star modeling, the interior structure of relativistic stellar objects can be characterized by the anisotropy, expressed as
\begin{equation}\label{32} \triangle=\frac{2}{r}(p^{eff}_{t}-p^{eff}_{r}), \end{equation}
which carries the information about the anisotropic behavior of the model. We examine the anisotropy graphically with the help of the observational data of the considered stars presented in Table $1$. If $p^{eff}_{t}>p^{eff}_{r}$, the anisotropic pressure is directed outward and $\triangle>0$, while if $p^{eff}_t<p^{eff}_r$ the anisotropy is negative, $\triangle<0$, indicating that the anisotropic force is directed inward. The graphical analysis of the anisotropy as a function of the fractional radial coordinate $r/R$ shows decreasing behavior for the considered stars, suggesting that $p^{eff}_t < p^{eff}_r$, as shown in Fig. \ref{Fig:16}. \begin{figure}[h!] \begin{tabular}{cccc} \epsfig{file=anistropym1.eps,width=0.3\linewidth} & \epsfig{file=Anistropym2.eps,width=0.3\linewidth} & \end{tabular} \caption{Variation of the anisotropy factor under viable $f(\mathcal{G})$ gravity model $1$ (left panel) and model $2$ (right panel).} \label{Fig:16} \end{figure} \FloatBarrier \section{Concluding Remarks} Identifying a fair model for the realistic geometry of the interior of compact stellar structures, not only in GR but also in modified $f(\mathcal{G})$ gravity, is regarded as an attractive challenge. Our motivation has been to examine the composition of the internal cores of these compact stars under the consideration of two different viable $f(\mathcal{G})$ gravity models. For this goal, we have tested these models against three observed compact stars, namely $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, with an anisotropic matter source, considering the Tolman--Kuchowicz spacetime \cite{Jasim} with metric potentials $\nu=Br^2+2\ln C$ and $\lambda=\ln(1 + ar^2+br^4)$, where $a$, $b$, $C$ and $B$ are constant parameters. The values of these arbitrary constants are fixed by matching the interior metric with the exterior Schwarzschild metric. This matching is very valuable for observing the physical behavior of the compact stars, since it expresses their radii and masses in terms of the arbitrary constants. \\The main aim of this study has been to formulate analytical models of compact stars with anisotropic static source configurations in the context of modified $f(\mathcal{G})$ gravity. The graphical analysis and interpretation of the results exhibit some conspicuous properties of these anisotropic compact stars, as follows: \\ \begin{itemize} \item The geometry of the spacetime is described by the metric potentials. The evolution of the metric potentials $e^{\nu}$ and $e^{\lambda}$ with respect to the fractional radial coordinate $r/R$ is shown in Fig.~\ref{Fig:1}; the potentials
satisfy the conditions $e^{\lambda(r=0)}=1$ and $e^{\nu(r=0)}=C^2$. Both metric potentials attain their minimum values at the core of the star and then increase monotonically away from the center towards the surface. For the physical viability and stability of the suggested models, the metric potentials should be positive, finite and free from geometrical singularities. The graphical behavior in Fig.~\ref{Fig:1} clearly shows that our metric potentials are consistent and satisfy all of the above requirements. \item The variation of the effective energy density and of the radial and tangential pressures with respect to the fractional radial coordinate $r/R$ is regular at the center for both models. The graphical behavior shows that the effective energy density and both pressures are free from central singularities. It is clear from Figs.~\ref{Fig:2} and \ref{Fig:3} that these quantities attain their maximum values at the center and decrease continuously away from the center towards the boundary of the star. The numerical values of the effective energy density and radial pressure at the center of the three compact objects, namely $Cen~X-3$, $EXO~1785-248$ and $LMC~X-4$, are given in Tables \ref{tab2} and \ref{tab3}. These values clearly show that the central density is higher than the surface density, which assures the high compactness of these most dense stellar objects. \item It has been observed from Figs.~\ref{Fig4}--\ref{Fig7} that the radial derivatives of the effective energy density and of the anisotropic pressures are negative and vanish at the center. This fact confirms the high compactness at the core of the star. \item It can be noted from Figs.~\ref{Fig:8} and \ref{Fig:9} that all energy bounds are well satisfied for our proposed models, which exhibits realistic matter content. \item To check whether all the forces, namely the gravitational force ($F_g$), the hydrostatic force ($F_h$) and the anisotropic force ($F_a$), are in equilibrium for our models, we studied the TOV equation in the modified $f(\mathcal{G})$ gravity framework. Fig.~\ref{Fig:10} shows that all the forces are in equilibrium, which endorses the stability of our system. \item The radial and tangential sound speeds for the compact stars are denoted by $v^2_{sr}$ and $v^2_{st}$. The values of the squared sound speeds lie between $0$ and $1$, and from Figs.~\ref{Fig:11} and \ref{Fig:12} it can easily be seen that our models are consistent with the causality condition. Further, the present system satisfies the Herrera cracking condition, $0\leq |v^2_{st}-v^2_{sr}|\leq1$, which confirms its stability. \item Fig.~\ref{Fig:13} shows that the calculated masses for our suggested models are very close to the standard observational data, and that the mass function is regular at the center of the core. Further, the evolution of the compactification factor and the behavior of the surface redshift with respect to the fractional radial coordinate favor our models, as the values of the compactness and redshift factor satisfy the required limits. \item The radial ($\Gamma_r$) and tangential ($\Gamma_t$) adiabatic indices are presented in Figs.~\ref{Fig:14} and \ref{Fig:15}. For a dynamically stable stellar object, the adiabatic index must be greater than $\frac{4}{3}$ at all interior points.
It is clear from the graphical representation that the values of both adiabatic indices are greater than $\frac{4}{3}$ throughout the star, which establishes the stable nature of our proposed models. \\ \end{itemize} In the study of compact stellar structures, the role of modified $f(\mathcal{G})$ gravity is very appealing. The possible existence of compact stars, whose extremely dense cores probe particle physics, has driven researchers to seek more reliable solutions of the modified field equations. As a final comment, in the present study we have demonstrated a singularity-free and entirely stable stellar system, suitable for describing the anisotropic nature of compact stars, by employing the Tolman--Kuchowicz metric. We have observed that our proposed models in the framework of modified $f(\mathcal{G})$ gravity are consistent and stable, as all the physical attributes of the compact stars follow physically accepted patterns.
\section{Introduction} Entanglement is a unique feature of a quantum system and entanglement entropy, defined through the von Neumann entropy (vNE) measure, is one of the most widely used quantitative measures of entanglement. \cite{Osborne2002,Calabrese2004,Vidal2004,Kopp2007} Consider a composite system that can be partitioned into two subsystems $A$ and $B$. The vNE of either of the subsystems is $s_{A}=-\mathrm{Tr}_{A}\rho_{A}\ln \rho_{A}=s_{B}= -\mathrm{Tr}_{B}\rho_{B}\ln \rho_{B}$. Here, the reduced density matrix $\rho_{A}$ is obtained by tracing over the degrees of freedom in $B$: $\rho_{A}=\mathrm{Tr}_{B}|\psi_{A B}\rangle\langle\psi_{AB}|$ and similarly for $\rho_{B}$. In general, for a pure state $|\psi_{A B}\rangle$ of a composite system, the reduced density matrix is a mixture, and the corresponding entropy is a good measure of entanglement. The scaling behavior of entanglement entropy is a particularly useful characterization near a quantum phase transition \cite{Vidal2004}. The entanglement entropy can show nonanalyticity at the phase transition even when the ground state energy (the quantum analog of the classical free energy) is analytic. While these ideas have been studied in a number of translation-invariant models,\cite{Vidal2004,Wu2004,Calabrese2004} there have been far fewer investigations of random quantum critical points (for notable exceptions, see Ref.~\onlinecite{Refael2004}). In particular, noninteracting electrons moving in a disordered potential can undergo continuous quantum phase transitions between an extended metallic and a localized insulating state as the Fermi energy is varied across a critical energy $E_C$. Well known examples are the Anderson transition in three dimensions and the integer quantum Hall (IQH) plateau transition in two dimensions where the ground state energy does not exhibit any nonanalyticity. In contrast, vNE will be shown to exhibit nonanalyticity at these transitions and a scaling behavior. At the outset, it should be emphasized that because of the single particle and disorder-dominated nature of these quantum phase transitions, entanglement as characterized by vNE and its critical scaling behavior are fundamentally different from those calculated for interacting systems. This statement will be made more precise later. In a noninteracting electronic system close to a disordered critical point, the wave function intensity at energy $E$, $|\psi_E(r)|^2$, fluctuates strongly at each spatial point $r$ and, consequently, has a broad (non-Gaussian) distribution even in the thermodynamic limit.\cite{Castellani1986} This non-self-averaging nature of the wave function intensity is characterized through the scaling of its moments. In particular, moments of normalized wave function intensity, $P_q$ (called the generalized inverse participation ratios), obey the finite-size scaling ansatz, \begin{align} \label{definition of Pq} P_q(E) \equiv \sum_{r} \overline{ \left| \psi_E(r)\right|^{2q}} \sim L^{-\tau_q} \, \mathcal{F}_q\big[(E-E_C)L^{1/ \nu}\big]. \end{align} Here, $L$ is the system size, $\nu$ is the exponent characterizing the divergence of correlation length, $\xi_E \sim |E-E_C|^{-\nu}$. $\tau_q$ is called the multifractal spectrum, and the overbar denotes averaging over different disorder realizations. $\mathcal{F}_q(x)$ is a scaling function with $\mathcal{F}_q(x\rightarrow 0) = 1$ close to the critical point $E=E_C$. 
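In numerical practice the exponents $\tau_q$ are extracted by evaluating $P_q$ for eigenstates computed at several system sizes and fitting $\ln \overline{P_q}$ against $\ln L$. A minimal Python sketch of this step is given below; the eigenstates themselves must come from diagonalizing the disordered Hamiltonian, and the function names are our own.
\begin{verbatim}
import numpy as np

def P_q(psi, q):
    """Generalized inverse participation ratio of a normalized state."""
    w = np.abs(psi)**2
    return np.sum(w**q)

def tau_q(states_by_L, q):
    """Fit P_q(L) ~ L^(-tau_q); states_by_L maps size L to a list of states."""
    Ls  = sorted(states_by_L)
    lnP = [np.log(np.mean([P_q(s, q) for s in states_by_L[L]])) for L in Ls]
    slope, _ = np.polyfit(np.log(Ls), lnP, 1)
    return -slope
\end{verbatim}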
When $E$ is tuned away from $E_C$, the system either tends towards an ideal metallic state with $P_q(E) \sim L^{-D(q-1)}$ ($D$ being the number of spatial dimensions) or becomes localized with $P_q(E)$ independent of $L$. Below, we first show that the disorder-averaged vNE can be expressed as a derivative of $P_q$; thus, its scaling behavior follows from multifractal analysis. After that, we apply our formalism to understand the numerical results on vNE at the three-dimensional Anderson localization and IQH plateau transitions. vNE in the Anderson localization problem was studied previously,\cite{Kopp2007,Varga2007} but the connection with multifractality and the unique features of vNE at these quantum phase transitions have not been clearly elucidated. \section{Entanglement Entropy in Disordered Noninteracting Electronic Systems} Even though the disorder-induced localization problem can be studied in the language of single-particle quantum mechanics, there is no obvious way to define entanglement entropy in this picture. However (see Ref.~\onlinecite{Zanardi2002}), entanglement can be defined using the site occupation number basis in the second-quantized Fock space. Let us divide the lattice of linear size $L$ into two regions, $A$ and $B$. A single particle eigenstate of a lattice Hamiltonian at energy $E$ is represented in the site occupation number basis as \begin{align} | \psi_E \rangle &= \sum _{r \in A \cup B} \psi_E(r) \, |1 \rangle_r \bigotimes_{r^{\prime} \ne r } \, |0 \rangle_{r^{\prime}}. \end{align} Here $\psi_E(r)$ is the normalized single particle wave function at site $r$ and $|n\rangle_r$ denotes a state having $n$ particles at site $r$. We decompose the above sum over lattice sites $r$ into the mutually orthogonal terms \begin{align}\label{decomposition of state} | \psi_E \rangle = |1 \rangle_A \otimes |0 \rangle_{B} + |0 \rangle_A \otimes |1 \rangle_{B}, \end{align} where \begin{align} |1 \rangle_A &= \sum _{r \in A} \psi_E(r) |1 \rangle_r \bigotimes_{r^{\prime} \ne r } |0 \rangle_{r^{\prime}}, \,|0 \rangle_A = \bigotimes_{r \in A } |0 \rangle_{r}, \end{align} with analogous expressions for the $|1 \rangle_B$ and $|0 \rangle_B$ states. Notice that these states have the normalization \begin{align} \langle 0|0 \rangle_A = \langle 0|0 \rangle_B = 1, \, \langle 1|1 \rangle_A = p_A, \, \langle 1|1 \rangle_B = p_B, \end{align} where \begin{align} p_{A}= \sum_{r \in A} |\psi_E(r)|^2, \end{align} and similarly for $p_B$, with $p_A + p_B =1$. To obtain the reduced density matrix $\rho_A$, we trace out the Hilbert space over $B$ in the density matrix $\rho = | \psi_E \rangle \langle \psi_E |$. This gives \begin{align} \rho_A & = |1 \rangle_A \langle 1| + p_B |0 \rangle_A \langle 0|. \end{align} The corresponding vNE is given by \begin{align}\label{bipartite entanglement} s_A = - p_A \ln p_A - p_B \ln p_B. \end{align} In the above equation, we see that manifestly $s_A = s_B$. More importantly, $s_A$ is bounded between $0$ and $\ln 2$ for any eigenstate. This is in sharp contrast to the entanglement entropy in interacting quantum systems, where it can be arbitrarily large near the critical point. The reason for this is also clear: even though we used a second-quantized language, we are dealing with a single particle state rather than a many-body correlated state. Consequently, the entanglement entropy does not grow arbitrarily large as a function of the size of $A$.
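The entropy of Eq.~(\ref{bipartite entanglement}) is elementary to evaluate for any normalized single-particle state; a minimal sketch:
\begin{verbatim}
import numpy as np

def vn_entropy(psi, region_A):
    """s_A = -p_A ln p_A - p_B ln p_B for a single-particle state."""
    w = np.abs(psi)**2
    p_A = w[region_A].sum()
    p_B = 1.0 - p_A
    return sum(-p * np.log(p) for p in (p_A, p_B) if p > 0)

psi = np.random.randn(64)
psi /= np.linalg.norm(psi)
print(vn_entropy(psi, np.arange(32)))   # bounded by ln 2 ~ 0.693
\end{verbatim}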
We also observe that at criticality, if the whole system size becomes very large in comparison with the subsystem $A$, we can restrict the subsystem to be a single lattice site and study the scaling dependence with respect to the overall system size $L$. Then, using the ansatz of scale invariance, we can always find the scaling of the entanglement as a function of the subsystem size $l$ since near criticality, only the dimensionless ratio $L/l$ can enter any physical quantity. To extract scaling, we find the bipartite entanglement of a single site $r$ with the rest of the system and sum this over all lattice sites in the system. Using Eq. \eqref{bipartite entanglement}, we write this as \begin{align}\label{single site entropy sum} S(E) &= - \sum_{r \in L^d} \Bigl\{ |\psi_E(r)|^2 \ln |\psi_E(r)|^2 \nonumber \\ & \quad + \left[1- |\psi_E(r)|^2 \right] \ln \left[ 1- |\psi_E(r)|^2 \right]\Bigr\}. \end{align} To leading order, the second term inside the square bracket in Eq. \eqref{single site entropy sum} can be dropped since $\left| \psi_{E}(r)\right|^2 \ll 1$ at all points $r$ when the states are close to the critical energy. We can readily relate the disorder average (denoted by overbar) of this entropy to the multifractal scaling in Eq. \eqref{definition of Pq} and get the $L$ scaling as \begin{align}\label{EE summed over all sites} \overline{S}(E)\approx -\frac{dP_{q}}{dq}\bigg|_{q=1} \approx \frac{d\tau_q}{dq}\bigg|_{q=1} \ln L - \frac{\partial \mathcal{F}_q}{\partial q} \bigg|_{q=1}. \end{align} We do not know the general form of the scaling function $\mathcal{F}_q$, but we can get the approximate $L$ dependence of the entropy in various limiting cases. For the exactly critical case when $\mathcal{F}_q \equiv 1$ for all values of $q$, we get \begin{align}\label{EE critical scaling} \overline{S}(E) \sim \alpha_1 \ln L, \end{align} where the constant $\alpha_1 ={d\tau_{q}/dq}|_{q=1}$ is unique for each universality class. From the discussion following Eq. \eqref{definition of Pq}, the leading scaling behavior of $\overline{S}(E)$ in the ideal metallic and localized states is given by $D \ln L$ and $\alpha_1 \ln \xi_E$, respectively. From the limiting cases, we see that, in general, $\overline{S}(E)$ has the approximate form \begin{align}\label{approximate form for EE of single energy state} \overline{S}(E) \sim \mathcal{K}[(E-E_C)L^{1/\nu}] \ln L, \end{align} where the coefficient function $\mathcal{K}(x)$ decreases from $D$ in the metallic state to $\alpha_1$ at criticality and then drops to zero for the localized state. We will see that this scaling form is verified in our numerical simulations. \section{Entanglement in the three dimensional Anderson Model} The scaling form for the entanglement entropy averaged over all eigenstates of the single particle Hamiltonian is also of interest since this scaling can change as a function of disorder strength. To be specific, let us consider the 3D Anderson model on a cubic lattice. The Hamiltonian is \begin{equation}\label{Hamiltonian_Anderson} H=\sum_i V_i c_i^\dag c_i-t\sum_{\langle i,j\rangle}(c_i^\dag c_j+H.c.), \end{equation} where $c_i^{\dag}$($c_i$) is the fermionic creation (annihilation) operator at site $i$ of the lattice, and the second sum is over all nearest neighbors. We set $t=1$, and the $V_i$ are random variables uniformly distributed in the range $[-W/2,W/2]$. 
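This Hamiltonian is straightforward to assemble as a sparse matrix; a brief sketch with periodic boundary conditions (as used in our simulations) is:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def anderson_3d(L, W, rng):
    """H = sum_i V_i n_i - t sum_<ij> (c_i^+ c_j + h.c.), t = 1, periodic BCs."""
    N = L**3
    H = sp.diags(rng.uniform(-W / 2, W / 2, size=N)).tolil()
    idx = np.arange(N).reshape(L, L, L)
    for axis in range(3):
        nbr = np.roll(idx, -1, axis=axis)       # nearest neighbor along this axis
        H[idx.ravel(), nbr.ravel()] = -1.0
        H[nbr.ravel(), idx.ravel()] = -1.0
    return H.tocsr()

H = anderson_3d(8, 16.3, np.random.default_rng(0))   # W at the critical value
\end{verbatim}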
It is known \cite{MacKinnon1981} that as $W$ is decreased from a very high value, extended states appear at the band center below the critical disorder strength $W_c=16.3$, and a recent work \cite{Slevin2001} reported the localization length exponent $\nu=1.57\pm 0.03$. The analysis leading to Eq. \eqref{approximate form for EE of single energy state} also holds when we study wave functions at a single energy, say $E=0$, and increase the disorder strength in the Anderson model across the critical value $W_c$. In this case, the states at $E=0$ evolve continuously from fully metallic to critical and then finally localized, resulting in the approximate form for the entanglement entropy \begin{align}\label{generalscaling2} \overline{S}(E=0,w,L) \sim \mathcal{C}(wL^{1/\nu})\ln L, \end{align} where $w=(W-W_c)/W_c$ is the normalized relative disorder strength and $\mathcal{C}(x)$ is a scaling function. In particular, as mentioned before, $\mathcal{C}(x) \to D$ as $w \to -1$, $\mathcal{C}(x) \to 0$ as $w \to \infty$, and $\mathcal{C}(x) =\alpha_1$ when $w=0$. Next, we look at the energy-averaged entropy. We average Eq. \eqref{EE summed over all sites} over the entire band of energy eigenvalues and construct the vNE, \begin{align} \overline{S}(w,L) = \frac{1}{L^3}\sum_{E} \overline{S}(E,w,L), \end{align} where $L^3$ is also the total number of states in the band. Then, using Eqs. \eqref{approximate form for EE of single energy state} and \eqref{generalscaling2}, one can show that close to $w=0$, \begin{align}\label{generalscaling1} \overline{S}(w,L) \sim C +L^{-1/\nu}f_{\pm}\big(wL^{1/\nu}\big)\ln L, \end{align} where $C$ is an \emph{$L$-independent} constant, and $f_\pm(x)$ are two universal functions corresponding to the two regimes $w>0$ and $w<0$. \begin{figure} \centering \includegraphics[width=\columnwidth]{scalinganderson1.eps}\\ \caption{(Color online) Scaling curve in the 3D Anderson model. With the choice of $\nu=1.57$ and $C=12.96$, all data collapse onto the universal functions $f_\pm(x)$. The two branches correspond to $w<0$ and $w>0$.} \label{Scaling_Anderson1} \end{figure} We numerically diagonalize the Hamiltonian [Eq. \ref{Hamiltonian_Anderson}] in a finite $L\times L\times L$ system with periodic boundary conditions. The maximum system size is $L=13$, and the results are averaged over 20 disorder realizations. The scaling form of $\overline{S}(w,L)$ is given by Eq. \eqref{generalscaling1}. Figure~\ref{Scaling_Anderson1} shows the results of the data collapse with the choice $\nu=1.57$; the nonuniversal constant $C=12.96$ is determined by a powerful algorithm described in Ref.~\onlinecite{Goswami2007}. The successful data collapse reflects the nonanalyticity of the von Neumann entropy and the accuracy of the multifractal analysis. We also use the transfer matrix method \cite{Kramer1996} to study the energy dependence of $\overline{S}(E,W,L)$ by considering a quasi-one-dimensional (quasi-1D) system with a size of $(mL)\times L\times L$, $m\gg 1$. We use $L$ up to $18$, and $m=2000 \gg 1$ is found to be sufficient. To compute the vNE, we divide the quasi-1D system into $m$ cubes labeled by $I=1,2,\ldots,m$, each containing $L^3$ sites. We normalize the wave function within each cube and compute the vNE, $\overline{S^I}(E,W,L)$, in the $I^{\text{th}}$ cube, and finally $\overline{S}(E,W,L)$ is obtained by averaging over all cubes.
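Schematically, this cube-averaging step takes the following form; we keep only the leading term of Eq.~(\ref{single site entropy sum}), as discussed above, and assume the quasi-1D wave function is stored cube by cube.
\begin{verbatim}
import numpy as np

def cube_averaged_entropy(psi, m, L):
    """Average single-site vNE over m cubes of L^3 sites each,
    renormalizing the wave function within every cube."""
    S = []
    for chunk in np.abs(psi.reshape(m, L**3))**2:
        w = chunk / chunk.sum()            # normalize within the cube
        w = w[w > 0]
        S.append(-np.sum(w * np.log(w)))   # leading term of the site-summed vNE
    return np.mean(S)
\end{verbatim}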
\begin{figure} \centering \includegraphics[width=\columnwidth]{sew.eps}\\ \caption{(Color online) $\overline{S}(E,W,L)$ as a function of $E$ and $W$ computed in a system with $L=10$. The square shows the mobility edge reported in Ref.~\onlinecite{Bulka1987}. Because of the finiteness of the system, the transition from the localized to the delocalized region is smooth.} \label{S_EW} \end{figure} A typical $\overline{S}(E,W,L)$ with $L=10$ is shown in Fig.~\ref{S_EW}. The value of $\overline{S}(E,W,L)$ is normalized by $\ln(L^3)$ such that $\overline{S} \to 1$ in a fully extended state. The energy $E$ is normalized by $(W/2+6)$, which is the energy range of nonzero density of states.\cite{Wegner1981} The mobility edge computed in Ref.~\onlinecite{Bulka1987} is also plotted in Fig.~\ref{S_EW}. The validity of the scaling form in Eq.~(\ref{generalscaling2}) is seen in Fig.~\ref{scaling_Anderson2}. In particular, the function $\mathcal C(x)$ shows the expected behavior. \section{Entanglement in the integer quantum Hall system} Consider now the second example, the integer quantum Hall system in a magnetic field $B$. The Hamiltonian can be expressed \cite{Huckestein1995} in terms of the matrix elements of the states $|n,k\rangle$, where $n$ is the Landau level index and $k$ is the wave vector in the $y$ direction. Focussing on the lowest Landau level $n=0$, with the impurity distribution $\overline{V(\mathbf{r})V(\mathbf{r'})}=V_0^2\delta(\mathbf{r}-\mathbf{r'})$, the matrix element $\langle 0,k|V|0,k'\rangle$ can be generated as in Ref.~\onlinecite{Huckestein1995}. \begin{figure} \centering \includegraphics[width=\columnwidth]{scalinganderson2.eps}\\ \caption{(Color online) The quantity ${\cal C}$ in Eq.~(\ref{generalscaling2}). The range of the system sizes is too small to observe the weak $L$ dependence. Inset: $\overline{S}(E=0,W,L)$ as a function of $\ln L$ for three different $W$.} \label{scaling_Anderson2} \end{figure} Now, consider a two dimensional square with a linear dimension $L=\sqrt{2\pi} Ml_B$, where $l_B=(\hbar/eB)^{1/2}$ is the magnetic length and $M$ is an integer, with periodic boundary conditions imposed in both directions. We discretize the system with a mesh of size $\sqrt{\pi} l_B/\sqrt{2}M$. The Hamiltonian matrix is diagonalized and a set of eigenstates $\{|\psi_a\rangle=\sum_k \alpha_{k,a}|0,k\rangle\}_{a=1}^{M^2}$ is obtained with corresponding eigenvalues $\{E_a\}_{a=1}^{M^2}$. The energies are measured relative to the center of the lowest Landau band \cite{Ando1974} in units of $\Gamma=2V_0/\sqrt{2\pi}l_B$. Finally, for each eigenstate the wave function in real space can be constructed as \begin{equation} \psi_a(x,y)=\langle x,y|\psi_a\rangle=\sum_k \alpha_{k,a}\psi_{0,k}(x,y), \label{wave function_IQHE} \end{equation} where $\psi_{0,k}(x,y)$ is the lowest Landau level wave function with a momentum quantum number $k$. The dimension of the Hamiltonian matrix increases as $N_k\sim M^2$, making it difficult to diagonalize fully. Instead, we compute only those states $|\psi_a\rangle$ whose energies lie in a small window $\Delta$ around a preset value $E$, i.e. $E_a\in[E-\Delta/2,E+\Delta/2]$. We ensure that $\Delta$ is sufficiently small ($0.01$) while at the same time, there are enough states in the interval $\Delta$ (at least 100 eigenstates). We now uniformly break up the $L\times L$ square into nonoverlapping squares $\mathcal{A}_i$ of size $l\times l$, where $l=l_B\sqrt{\pi/2}$, independent of the system size $L$. 
For each of the states, we compute the coarse grained quantity $\int_{(x,y)\in\mathcal{A}_i}|\psi_a(x,y)|^2\mathrm{d}x\mathrm{d}y$. The computation of the vNE for a given eigenstate follows the same procedure described for the Anderson localization. Finally, by averaging over states in the interval $\Delta$, the vNE $\overline{S}(E, L)$ is obtained at the preset energy $E$. The scaling form of $\overline{S}(E,L)$ is given by Eq. \eqref{approximate form for EE of single energy state} with $E_C = 0$ and is $\overline{S}(E,L)=\mathcal{K}(|E|L^{1/\nu})\ln L$. A good agreement with the numerical simulations is seen in Fig.~\ref{scaling_IQHE}. \begin{figure} \vskip 5 mm \centering \includegraphics[width=\columnwidth]{scalingiqhe.eps} \caption{(Color online) Scaling of the von Neumann entropy $\overline{S}(E)$ for the IQHE. $M$ instead of $L$ is used in the data collapse with the accepted value of $\nu=2.33$.} \label{scaling_IQHE} \end{figure} \section{Conclusions} We have clearly established the formalism for computing the entanglement entropy near quantum critical points in noninteracting disordered electronic systems. We have also identified its relation with the well-studied notion of multifractality and illustrated our concepts through numerical simulations of two important models, the 3D Anderson transition and the IQH plateau transition. This work represents a starting point to study entanglement in electronic systems with both disorder and interactions. \section{Acknowledgements} This work was supported by NSF Grant No. DMR-0705092 (S.C. and X.J.), NSF MRSEC Program under Grant No. DMR-0213745, the NSF Career grant DMR-0448820 and the Research Corporation (I.A.G. and A.R.S.). A.R.S. and I.A.G. acknowledge hospitality at the Institute for Pure and Applied Mathematics, UCLA where this work was started. S.C. would also like to thank the Aspen Center for Physics.
\section{The overlap operator and the sign function at nonzero quark chemical potential} \label{sec:sign} The overlap Dirac operator \cite{Narayanan:1994gw,Neuberger:1997fp} provides an exact solution of the Ginsparg-Wilson relation and hence implements chiral symmetry in lattice QCD even at finite lattice spacing. At zero quark chemical potential the overlap operator requires the computation of the sign function of the Hermitian Wilson-Dirac operator, for which efficient methods have been developed \cite{Neuberger:1998my,vandenEshof:2002ms}. To describe QCD at nonzero baryon density (see Ref.~\cite{Stephanov:2007fk} for a review), a quark chemical potential $\mu$ is introduced in the QCD Lagrangian. The massless overlap Dirac operator at nonzero $\mu$ was defined in Ref.\ \cite{Bloch:2006cd} as \begin{equation} D_{\text{ov}}(\mu) = 1 + \gamma_5\sign(H_\text{w}(\mu)) \label{Dov} \end{equation} with $H_\text{w}(\mu)=\gamma_5 D_\text{w}(\mu)$. $D_\text{w}(\mu)$ is the Wilson-Dirac operator at nonzero chemical potential \cite{Hasenfratz:1983ba} \begin{align} \label{Dw} [D_\text{w}(\mu)]_{nm} = \; & \delta_{n,m} - \kappa \sum_{j=1}^3 (1+\gamma_j) U_{n,j} \delta_{n+\hat j,m} - \kappa \sum_{j=1}^3 (1-\gamma_j) U^\dagger_{n-\hat j,j} \delta_{n-\hat j,m} \\ & \phantom{\delta_{n,m}} - \kappa (1+\gamma_4) e^\mu U_{n,4} \delta_{n+\hat 4,m} - \kappa (1-\gamma_4) e^{-\mu} U^\dagger_{n-\hat 4,4} \delta_{n-\hat 4,m} \:, \notag \end{align} where $\kappa = 1/(8+2m_\text{w})$ with negative Wilson mass $m_\text{w} \in (-2,0)$, $\gamma_\nu$ with $\nu=1,\ldots,4$ are the Dirac gamma matrices in Euclidean space, $\gamma_5=\gamma_1\gamma_2\gamma_3\gamma_4$, and $U_{n,\nu}$ are $SU(3)$ matrices. The exponential factors $e^{\pm\mu}$ implement the quark chemical potential on the lattice. For $\mu \ne 0$ the argument $H_\text{w}(\mu)$ of the sign function in Eq.\eqref{Dov} becomes non-Hermitian, and one is faced with the problem of defining and computing the sign function of a non-Hermitian matrix. Consider a given matrix $A$ of dimension $N$ and a generic function $f$. Let $\Gamma$ be a collection of closed contours in $\mathbb{C}$ such that $f$ is analytic inside and on $\Gamma$ and such that $\Gamma$ encloses the spectrum of $A$. Then the function $f( A )$ of the matrix $A$ can be defined by \cite{Dunford} \begin{equation} f(A) = \frac{1}{2\pi i} \oint_\Gamma f(z) (z I - A)^{-1} dz \:, \label{fcontour} \end{equation} where the integral is defined component-wise and $I$ denotes the identity matrix. From this definition it is easy to derive a spectral function definition. If the matrix $A$ is diagonalizable, i.e., $A=U \Lambda U^{-1}$ with a diagonal eigenvalue matrix $\Lambda=\diag(\lambda_i)$ and $U\in\text{Gl}(N,\mathbb C)$, then \begin{align} \label{fA} f(A) = U \diag(f(\lambda_i)) U^{-1} \:. \end{align} If $A$ cannot be diagonalized, a more general spectral definition can be derived from Eq.~\eqref{fcontour} using the Jordan decomposition \cite{Golub, Bloch:2007aw}. Non-Hermitian matrices typically have complex eigenvalues, and applying Eq.~\eqref{fA} to the sign function in Eq.~\eqref{Dov} requires the evaluation of the sign of a complex number. The sign function needs to satisfy $[\sign(z)]^2=1$ and, for real $x$, $\sign(x) = \pm 1$ if $x \gtrless 0$. To satisfy these properties, it has become standard to define \begin{equation} \sign(z) \equiv \frac{z}{\sqrt{z^2}} = \sign(\re(z)) \:, \label{sgnz} \end{equation} where in the last equality the cut of the square root is chosen along the negative real axis. 
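For small test matrices, the spectral definition \eqref{fA} combined with \eqref{sgnz} can be applied directly via an eigendecomposition; a brief sketch:
\begin{verbatim}
import numpy as np

def sign_matrix(A):
    """sign(A) = U diag(sign(Re lambda_i)) U^{-1} for diagonalizable A."""
    lam, U = np.linalg.eig(A)
    return U @ np.diag(np.sign(lam.real)) @ np.linalg.inv(U)

A = np.array([[1.0, 2.0], [0.5, -1.0 + 0.3j]])
S = sign_matrix(A)
print(np.allclose(S @ S, np.eye(2)))   # sign(A)^2 = 1
\end{verbatim}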
Using the definition \eqref{sgnz} in the spectral definition \eqref{fA} and reordering the eigenvalues according to the sign of their real part allows one to write the matrix sign function as \begin{equation} \label{signA} \sign(A) = U \left( \begin{array}{cc} +I & \\ & -I \end{array} \right) U^{-1} \:. \end{equation} The sign function satisfies $\sign(A)^2 = I$, and a short calculation \cite{Bloch:2006cd} shows that for this reason the overlap operator $D_{\text{ov}}(\mu)$ as defined in Eq.~(\ref{Dov}) satisfies the Ginsparg-Wilson relation. Moreover, this definition agrees with the result obtained when deriving Eq.~\eqref{Dov} from the domain-wall fermion formalism at $\mu\ne0$ \cite{Bloch:2007xi}. \section{Arnoldi method and function approximation for a non-Hermitian matrix} \label{NumApp} A numerical implementation of the sign function using the spectral definition \eqref{fA} is only possible for small matrices, as a full diagonalization becomes too expensive as the matrix grows. Alternatively, matrix-based iterative algorithms for the computation of the matrix sign function have been around for many years, see Ref.~\cite{Higham97} and references therein. These are efficient for medium-sized problems, but are still unaffordable for the very large matrices occurring in typical lattice QCD simulations. Therefore, another iterative method is required which approximates the vector $y=\sign(A) x$, rather than the full sign matrix itself. Such iterative methods are already extensively used for Hermitian matrices \cite{Vor88,Drus98}. Most of these methods are derived from the Lanczos method, which uses short recurrences to build an orthonormal basis in a Krylov subspace. Krylov subspace methods have also been introduced for non-Hermitian matrices \cite{gallopoulos89parallel,hochbruck}. The two most widely used methods to compute a basis for the Krylov subspace are the Arnoldi method and the two-sided Lanczos method. In contrast to the Hermitian case, the Arnoldi method requires long recurrences to construct an orthonormal basis for the Krylov subspace, while the two-sided Lanczos method uses two short recurrence relations at the cost of losing orthogonality. Here we describe a Krylov subspace approximation based on the Arnoldi method to evaluate $f(A)x$ for a generic function of a non-Hermitian matrix. We aim to construct an approximation to $f(A)x$ using a polynomial of degree $k-1$ with $k \ll N$. For any $k$ there exists a \emph{best} polynomial approximation $\hat y = P_{k-1}(A) x$ of degree at most $k-1$, which is the orthogonal projection of $f(A)x$ on the Krylov subspace ${\cal K}_{k}(A,x) = \myspan(x, Ax, \ldots, A^{k-1} x)$. An orthonormal basis $V_k=(v_1,\ldots,v_k)$ for the Krylov subspace ${\cal K}_k(A,x)$ is constructed using the Arnoldi recurrence \begin{equation} A V_k = V_k H_k + \beta_k v_{k+1} e_k^T \:, \label{Arnoldi} \end{equation} where $v_1=x/\beta$, $\beta=|x|$, $H_k$ is an upper Hessenberg matrix, $\beta_k=H_{k+1,k}$, and $e_k$ is the $k$-th basis vector in $\mathbb{C}^{k}$. Then $V_k V_k^\dagger$ is a projector on the Krylov subspace, and the projection $\hat y$ of $f(A) x$ on ${\cal K}_k(A,x)$ can formally be written as \begin{equation} \hat y = V_k V_k^\dagger f(A) x \:. \label{yproj2} \end{equation} However, to compute the projection \eqref{yproj2} one would already have to know the exact result $f(A) x$. Therefore, a method is needed to approximate the projected vector $\hat y$. 
From Eq.~\eqref{Arnoldi} it follows that \begin{equation} H_k = V_k^\dagger A V_k \:, \label{HVAV} \end{equation} which suggests the approximation \cite{gallopoulos89parallel} \begin{equation} f(H_k) \approx V_k^\dagger f(A) V_k \:. \label{fAapprox} \end{equation} As $x = \beta V_k e_1$, Eq.\ \eqref{fAapprox} can be substituted into Eq.\ \eqref{yproj2}, finally yielding the approximation \begin{equation} \hat y \approx \beta V_k f(H_k) e_1 \:. \label{yproj3} \end{equation} In this approximation the computation of $f(A)$ is replaced by that of $f(H_k)$, where $H_k$ is of much smaller size than $A$. $f(H_k) e_1$ should be evaluated by some suitable numerical method. The computation of the matrix sign function using Eq.\ \eqref{yproj3} converges to the exact solution (see the $m=0$ curve in the left panel of Fig.~\ref{DeflArn}). Unfortunately, in the case of the sign function, the convergence as a function of the size of the Krylov subspace is very slow if some of the eigenvalues are close to the function discontinuity along the imaginary axis. This problem can be resolved by deflation of these critical eigenvalues. For Hermitian matrices, it is well known that the computation of the sign function can be improved by deflating the eigenvalues smallest in absolute value \cite{vandenEshof:2002ms}. Assume that $m$ critical eigenvalues $\lambda_i$ of $A$ with orthonormal eigenvectors $u_i$ have been computed. Then \begin{equation} f(A) x = \sum_{i=1}^m f(\lambda_i) ( u_{i}^{\dagger} x ) u_i + f(A) x_\perp \:, \label{fAxdefl} \end{equation} where $x=x_\parallel + x_\perp$ with $x_\parallel=\sum_{i=1}^m ( u_{i}^{\dagger} x ) u_i$ and $x_\perp = x - x_\parallel$. The first term on the right-hand side of Eq.~\eqref{fAxdefl} can be computed exactly, while the second term can be approximated using a Krylov subspace method for $f(A) x_\perp$. Deflation will allow for a much smaller-sized Krylov subspace. For non-Hermitian matrices the eigenvectors are no longer orthogonal, and the simple decomposition into orthogonal subspaces, leading to Eq.\ \eqref{fAxdefl}, no longer holds. In the next two sections we will develop two alternative deflation schemes for the non-Hermitian case. \section{Schur deflation} \label{Schurdefl} We construct the subspace $\Omega_m + {\cal K}_k(A,x)$, which is the sum of the subspace $\Omega_m$ spanned by the right eigenvectors corresponding to $m$ critical eigenvalues of $A$ and the Krylov subspace ${\cal K}_k(A,x)$. Assume that $m$ critical eigenvalues and right eigenvectors of $A$ have been computed. From this, one can construct $m$ Schur vectors $s_i$, which form an orthonormal basis of $\Omega_m$, satisfying \begin{equation} A S_m = S_m T_m\:, \label{pSd} \end{equation} where $S_m=(s_1,\ldots, s_m)$ and $T_m$ is an $m \times m$ upper triangular matrix whose diagonal elements are the eigenvalues corresponding to the Schur vectors. We propose a modified Arnoldi method to construct an orthogonal basis of the composite subspace $\Omega_{m} + {\cal K}_k(A,x)$. That is, each Arnoldi vector is orthogonalized not only against the previous ones, but also against the Schur vectors $s_{i}$.
In analogy to \eqref{Arnoldi}, this process can be summarized as \begin{equation} A \begin{pmatrix} S_{m} & V_{k} \end{pmatrix} = \begin{pmatrix} S_{m} & V_{k} \end{pmatrix} \begin{pmatrix} T_{m} & S_{m}^{\dagger} A V_{k} \\ 0 & H_{k} \end{pmatrix} + \beta_k v_{k+1} e_{m+k}^T \label{eq:modArnoldi} \end{equation} with $v_1=x_\perp/\beta$, where $x_\perp=( 1 - S_m S_m^\dagger ) x$ is the projection of $x$ onto the orthogonal complement $\Omega^\perp$ of $\Omega_{m}$ and $\beta=|x_\perp|$. The Hessenberg matrix \begin{equation} H = \begin{pmatrix} T_m & S_m^\dagger A V_k \\ 0 & H_k \end{pmatrix} \label{eq:newH} \end{equation} satisfies a relation similar to Eq.~\eqref{HVAV}, namely $H = Q^\dagger A Q$, where the columns of $Q=(S_m \;\, V_k)$ form an orthonormal basis of $\Omega_{m} + {\cal K}_k(A,x)$. In analogy to Sec.~\ref{NumApp} we construct the approximation \begin{equation} f(A) x \approx Q f(H) Q^\dagger x \:. \label{yproj4} \end{equation} Because of the block structure \eqref{eq:newH} of $H$, the matrix $f(H)$ can be written as \begin{equation} f(H) = \begin{pmatrix} f(T_m) & Y \\ 0 & f(H_k) \end{pmatrix} \:, \label{eq:f_H} \end{equation} where $Y$ reflects the coupling between both subspaces and satisfies the Sylvester equation \begin{equation} T_m Y - Y H_k = f(T_m) X - X f(H_k) \label{SylvEq} \end{equation} with $X=S_m^\dagger A V_k$. Combining \eqref{yproj4} and \eqref{eq:f_H}, we obtain \begin{eqnarray} f( A ) x & \approx & S_{m} f( T_{m} ) S_{m}^{\dagger} \: x + \begin{pmatrix} S_{m} & V_{k} \end{pmatrix} \begin{pmatrix} Y \\ f( H_{k} ) \end{pmatrix} \beta e_{1} \: . \label{eq:formula} \end{eqnarray} $f(T_m)$ and $f(H_k)e_1$ are computed with some suitable numerical method, and the mixed triangular/Hessenberg Sylvester equation \eqref{SylvEq} for $Y$ can be solved efficiently with the method of Ref.~\cite{Bloch:2007aw}. \section{LR-deflation} \label{LRdefl} An alternative deflation in the same composite subspace $\Omega_m +{\cal K}_k(A,x)$ can be constructed using both the left and right eigenvectors corresponding to the critical eigenvalues. Assume that $m$ critical eigenvalues of $A$ and their corresponding left and right eigenvectors have been computed, \begin{align} A R_m = R_m \Lambda_m \: , \qquad L_m^\dagger A = \Lambda_m L_m^\dagger\:, \label{LRev} \end{align} where $\Lambda_m$ is the diagonal matrix of critical eigenvalues, and $R_m=(r_1,\ldots,r_m)$ and $L_m=(\ell_1,\ldots,\ell_m)$ are the matrices with the corresponding right and left eigenvectors, respectively. The left and right eigenvectors corresponding to different eigenvalues are orthogonal. If the eigenvectors are normalized such that $\ell_i^\dagger r_i=1$, then $L_m^\dagger R_m = I_m$, and $R_m L_m^\dagger$ is an oblique projector on the subspace $\Omega_m$ spanned by the right eigenvectors. Let us now decompose $x$ as $x = x_{\parallel} + x_{\ominus}$, where $x_{\parallel} = R_m L_m^\dagger x$ and $x_{\ominus} = x-x_{\parallel}$. Then \begin{equation} f(A) x = R_m f(\Lambda_m) L_m^\dagger x + f(A) x_{\ominus} \:. \label{fAxLR} \end{equation} The first term on the right-hand side, which follows from Eq.~\eqref{LRev}, can be evaluated exactly, while the second term can be approximated by applying the Arnoldi method described in Sec.~\ref{NumApp} to $x_{\ominus}$. An orthonormal basis $V_k$ is constructed in the Krylov subspace ${\cal K}_k(A,x_{\ominus})$ using the Arnoldi recurrence \eqref{Arnoldi}, with $v_1=x_{\ominus}/\beta$ and $\beta=|x_{\ominus}|$. 
Successive operations of $A$ on $x_{\ominus}$ will yield no contributions along the $m$ critical eigendirections, hence ${\cal K}_k(A,x_\ominus)$ does not mix with $\Omega_m$. Applying the Arnoldi approximation \eqref{yproj3} to Eq.~\eqref{fAxLR} yields the final approximation \begin{equation} f(A) x \approx R_m f(\Lambda_m) L_m^\dagger x + \beta V_k f(H_k) e_1 \:. \label{fAxLRArn} \end{equation} Again, the first column of $f(H_k)$ will be computed with some suitable numerical method. \section{Results} \label{Results} \begin{figure} \centering \includegraphics[width=75mm]{arnoldi_4444} \includegraphics[width=75mm]{arnoldi_6666} \caption{\label{DeflArn}Accuracy of approximation \protect\eqref{fAxLRArn} for $y=\sign(H_\text{w}(\mu)) x$ with $x=(1,1,\ldots,1)$ at $\mu=0.3$, for a $4^4$ lattice (left) and a $6^4$ lattice (right). The relative error $\epsilon=\| \tilde y - y \| / \| y \| $ is shown as a function of the Krylov subspace size $k$ for various numbers of deflated eigenvalues $m$ using the LR-deflation. } \end{figure} We used the methods described above to compute the sign function occurring in the overlap Dirac operator \eqref{Dov} for a $4^4$ and a $6^4$ lattice gauge configuration. Deflation of critical eigenvalues is essential because $H_\text{w}(\mu)$ has eigenvalues close to the imaginary axis. In practice, these eigenvectors need to be computed to high accuracy, as this will limit the overall accuracy of the function approximation. This was done with ARPACK \cite{arpack}. The modified Arnoldi method was implemented in C++ using the optimized ATLAS BLAS library \cite{atlas-hp}. The convergence of the method is illustrated in Fig.~\ref{DeflArn}, where the accuracy of the approximation is shown as a function of the Krylov subspace size. The various curves correspond to different numbers of deflated eigenvalues, using the LR-deflation scheme. Without deflation ($m=0$) the need for large Krylov subspaces would make the method unusable. Clearly, deflation highly improves the efficiency of the numerical method: as more eigenvalues are deflated, smaller Krylov subspaces are sufficient to achieve a given accuracy. Furthermore, the deflation efficiency seems to grow with increasing lattice volume. Indeed, although the matrix size $N$ for the $6^4$ lattice is more than 5 times larger than in the $4^4$ case, the Krylov subspace only has to be expanded by a factor of 1.2 to achieve a given accuracy of $10^{-8}$ (for $m \approx 0.008 N$). It is also interesting to note that the modified Arnoldi approximation \eqref{fAxLRArn} for $f(A) x$ is very close to the \emph{best} approximation in the composite subspace, which is given by the orthogonal projection of $f(A) x$ on $\Omega_m + {\cal K}_k(A,x)$, as was checked numerically. The results for the Schur deflation are not shown here, but are very similar to those for the LR-deflation. The Schur deflation is slightly less accurate, and requires more CPU time per evaluation, mainly because of the additional orthogonalization of the Arnoldi vectors with respect to the Schur vectors. However, the time taken by its initialization phase is halved, as it only requires the computation of the right eigenvectors, and the best choice of deflation scheme will depend on the number of vectors $x$ for which $\sign(H_\text{w}) x$ needs to be computed. If one needs to apply both $\sign(H_\text{w})$ and its adjoint, then, obviously, the LR-deflation will be the better choice. 
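For completeness, a schematic dense-matrix implementation of the LR-deflated approximation \eqref{fAxLRArn} is sketched below. Here the small matrix sign function is evaluated with SciPy's signm, whereas in production runs $A$ is applied as a sparse operator; the variable names are ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import signm

def arnoldi(A, v0, k):
    """k steps of Arnoldi with modified Gram-Schmidt; returns V_k, H_k."""
    n = v0.size
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

def sign_times_vector(A, x, lam, Rm, Lm, k):
    """Deflated part exactly, remainder via the Arnoldi approximation."""
    y_defl = Rm @ (np.sign(lam.real) * (Lm.conj().T @ x))
    x_om = x - Rm @ (Lm.conj().T @ x)
    beta = np.linalg.norm(x_om)
    V, Hk = arnoldi(A, x_om, k)
    e1 = np.zeros(k); e1[0] = 1.0
    return y_defl + beta * (V @ (signm(Hk) @ e1))
\end{verbatim}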
A more detailed discussion of both deflation schemes can be found in Ref.~\cite{Bloch:2007aw}. \section{Conclusion} In this talk we presented an algorithm to approximate the action of a function of a non-Hermitian matrix on an arbitrary vector, when some of the eigenvalues of the matrix lie in a region of the complex plane close to a discontinuity of the function. The method approximates the solution vector in a composite subspace consisting of a Krylov subspace augmented by the eigenvectors corresponding to a small number of critical eigenvalues. Two deflation variants were presented, based on different subspace decompositions: the Schur deflation uses two coupled orthogonal subspaces, while the LR-deflation uses two decoupled but non-orthogonal subspaces. Deflation explicitly takes into account the contribution of the critical eigenvalues. This allows for smaller-sized Krylov subspaces, which is crucial for the efficiency of the method. The method was applied to the overlap Dirac operator of lattice QCD at nonzero chemical potential, where the importance of deflation was clearly demonstrated. \section*{Acknowledgments} This work was supported in part by DFG grants FOR465-WE2332/4-2 and Fr755/15-1. \bibliographystyle{h-elsevier3}
\section{Introduction} \label{sec:introduction} The general properties of the 90 cm sky are not very well known and even less is known at VLBI resolution. Previous snapshot surveys at these wavelengths have only targeted the brightest sources and were plagued by poor sensitivity, radio interference and limited coherence times. Furthermore, the field of view that could be imaged was typically limited by the poor spectral and temporal resolution of early-generation hardware correlators and the available data storage and computing performance at the time. As a result, although several hundred 90 cm VLBI observations have been made over the past two decades, images of only a few tens of sources have been published, e.g. \citet{alt95}; \citet{laz98}; \citet{chu99}; \citet{cai02}. With such a small sample it is difficult to quantify the total population and nature of these sources. In particular, the sub-arcsecond and sub-Jansky population of 90 cm sources is largely unexplored. Recent improvements to the EVN hardware correlator at JIVE \citep{van04} have enabled significantly finer temporal and spectral resolution. Combined with vast improvements in storage and computing facilities, it is now possible to image fields as wide as, or even wider than, the FWHM of the primary beam of the observing instrument. To complement the hardware improvements, new approaches to calibration and imaging have been developed to better utilise the available data and processing platforms. For example, \cite{gar05} performed a deep VLBI survey at 20 cm of a $36\arcmin$ wide field by using a central bright source as an in-beam calibrator. The approach was ideal for survey work as it permitted the imaging of many potential target sources simultaneously by taking advantage of the full sensitivity of the observation across the entire field of view. We have applied a similar technique at 90 cm by piggybacking on an existing VLBI observation of the gravitational lens B0218$+$357 and the nearby quasar J0226$+$3421, with the aim of surveying a 28 deg$^{2}$ field around each of the sources. The results provide an important indication of what may be seen by future low-frequency instruments such as the Low Frequency Array (LOFAR), European LOFAR (E-LOFAR) and the Square Kilometre Array (SKA). In this paper, we present the results of a 90 cm wide-field VLBI survey that covers two partially overlapping regions of 28 deg$^{2}$ each, surveying 618 radio source targets at angular resolutions ranging between 30 and 300 mas. For sources located at a redshift of $z=1$, the linear resolution corresponding to 30 mas is 230 pc. A \emph{WMAP} cosmology \citep{spe03} with a flat Universe, $H_{0}=72$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{m}=0.29$, is assumed throughout this paper. \section{Observations and Correlation} A VLBI observation of the gravitational lens B0218$+$357 was made on 11 November 2005 using all ten NRAO Very Long Baseline Array (VLBA) antennas, the Westerbork Synthesis Radio Telescope (WSRT) as a phased array and the Jodrell Bank 76$-$m Lovell Telescope (JB). The primary aim of this observation was to investigate, in detail, propagation effects in the lensing galaxy and the substructure in the lens. The secondary aim, as investigated in this paper, was to serve as a wide-field test observation to study the faint source population at 90 cm over a good fraction of the primary beam. The observation spanned 14 hours with approximately 6 hours of data recorded at JB and WSRT and 13 hours at the VLBA stations.
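As a consistency check on the linear scale quoted in \S\ref{sec:introduction}, the angular diameter distance in the adopted cosmology can be evaluated directly; a short numerical sketch (values rounded):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0, Om, c = 72.0, 0.29, 299792.458       # km/s/Mpc, flat Universe, c in km/s
E = lambda z: np.sqrt(Om * (1 + z)**3 + (1 - Om))
z = 1.0
D_C = c / H0 * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving distance (Mpc)
D_A = D_C / (1 + z)                                     # angular diameter distance
theta = 30e-3 / 206265.0                                # 30 mas in radians
print(D_A * theta * 1e6)                                # ~230 pc
\end{verbatim}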
\section{Observations and Correlation} A VLBI observation of the gravitational lens B0218$+$357 was made on 11 November 2005 using all ten NRAO Very Long Baseline Array (VLBA) antennas, the Westerbork Synthesis Radio Telescope (WSRT) as a phased array and the Jodrell Bank 76-m Lovell Telescope (JB). The primary aim of this observation was to investigate, in detail, propagation effects in the lensing galaxy and the substructure in the lens. The secondary aim, as investigated in this paper, was to serve as a wide-field test observation to study the faint source population at 90 cm over a good fraction of the primary beam. The observation spanned 14 hours with approximately 6 hours of data recorded at JB and WSRT and 13 hours at the VLBA stations. Ten-minute scans of the target source B0218$+$357 ($\alpha=02\rah21\ram05\fs4733$ and $\delta=35\arcdeg56\arcmin13\farcs791$) were interleaved with three-minute scans of the nearby quasar J0226$+$3421 ($\alpha=02\rah26\ram10\fs3332$ and $\delta=34\arcdeg21\arcmin30\farcs286$). Five-minute scans of the fringe finder 3C84 were made approximately every four hours. Dual circular and cross polarisation data were recorded across four 4 MHz IFs centred on 322.49, 326.49, 330.49 and 610.99 MHz, respectively, resulting in a total data recording rate of 128 Mbit s$^{-1}$. The 610.99 MHz data were only recorded at the VLBA antennas. The data were correlated at the European VLBI Network (EVN) correlator at the Joint Institute for VLBI in Europe (JIVE, Dwingeloo, the Netherlands) in multiple passes to create a single-IF, single polarisation, wide-field data set and a multi-IF, dual circular polarisation, narrow-field data set. The narrow-field data set concentrated on B0218$+$357 with greater sensitivity and the results of this observation will be presented elsewhere (Wucknitz et al., in preparation). The wide-field data consisted of a single polarisation (LL), single IF with a 4 MHz band centred on 322.49 MHz. A third correlator pass centred on another source was used to create a second wide-field data set with a single IF and RR polarisation but was not used in our data reduction process. To reduce the effects of bandwidth smearing and time-averaging smearing, and thus image the largest possible field, the EVN correlator generated data with 512 spectral points per baseline and an integration time of 0.25 s. The spectral and temporal resolution exercised the physical limits of the JIVE hardware correlator at the time and resulted in a final data set size of 77.5 Gbytes. The wide-field data set has a one-sigma theoretical thermal noise of $\sim$1.2 and $\sim$0.7 mJy beam$^{-1}$ for the quasar and gravitational lens, respectively.
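Both the recording rate and the correlator output volume follow from simple bookkeeping. The sketch below is our own back-of-envelope illustration; the 2-bit sampling depth, the 12-station count and the 8 bytes per visibility channel are assumptions, so only the orders of magnitude are meaningful:
\begin{verbatim}
# Recording rate: 4 IFs x 4 MHz, two polarisations, 2-bit sampling at
# the Nyquist rate (the sampling depth is our assumption).
n_if, bw_hz, n_pol, bits = 4, 4e6, 2, 2
print(n_if * n_pol * (2 * bw_hz) * bits / 1e6, "Mbit/s")   # 128.0

# Correlator output: ~13 h at 12 stations (10 VLBA + WSRT + JB),
# 0.25 s integrations, 512 channels, ~8 bytes per visibility channel.
n_ant, hours, t_int, n_chan = 12, 13, 0.25, 512
n_bas = n_ant * (n_ant + 1) // 2      # baselines incl. autocorrelations
n_vis = (hours * 3600 / t_int) * n_bas * n_chan
print(n_vis * 8 / 1e9, "Gbytes")      # ~60, cf. the 77.5 Gbytes quoted
\end{verbatim}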
\section{VLBI Calibration and Imaging} \label{sec:vlbical} The data from the narrow-field data set were used to perform the initial editing and calibration of the phase reference to take advantage of the increased sensitivity available with the additional bands and polarisations. Nominal corrections to counter the effects of the total electron content (TEC) of the ionosphere were applied with the AIPS\footnote{The Astronomical Image Processing System (AIPS) was developed and is maintained by the National Radio Astronomy Observatory, which is operated by Associated Universities, Inc., under co-operative agreement with the National Science Foundation} task TECOR. An amplitude calibration table was derived from measurements of the system temperature of each antenna throughout the observation using the AIPS task APCAL, and applied to the data set. Delays across the IF bands, which were assumed to be constant throughout the observation, were calibrated by fringe fitting on 3C84. A multi-band fringe fit was then performed on the quasar. The flagging and calibration tables of the narrow-field data set were transferred to the wide-field data set using a ParselTongue\footnote{A Python scripting tool for AIPS. ParselTongue was developed in the context of the ALBUS project, which has benefited from research funding from the European Community's sixth Framework Programme under RadioNet R113CT 2003 5058187. ParselTongue is available for download at \url{http://www.radionet-eu.org/rnwiki/ParselTongue}} script. Further editing was applied to the wide-field data set to remove the frequency band edges and frequency channels adversely affected by RFI. The bandpass for the observation was calibrated against observations of 3C84. As both the phase reference and the gravitational lens were to be used as in-beam calibrators for their respective fields, accurate calibration of the amplitudes and phases of both fields was essential. The calibration was complicated by the complex structure of both sources. To account for this structure a new DIFMAP \citep{she94} task, \emph{cordump}\footnote{The \emph{cordump} patch is available for DIFMAP at \url{http://astronomy.swin.edu.au/$\sim$elenc/DifmapPatches/}} \citep{len06}, was developed to enable the transfer of all phase and amplitude corrections made in DIFMAP during the imaging process to an AIPS-compatible SN table. The \emph{cordump} task greatly simplified the calibration of both of the fields. First, the phase reference data were averaged in frequency and exported to DIFMAP where several iterations of modelling and self-calibration of both phases and amplitudes were performed. \emph{cordump} was then used to transfer the resulting phase and amplitude corrections back to the unaveraged AIPS data set. After application of these corrections, the DIFMAP model of the phase reference source was subtracted from the AIPS (u-v) data set. The quasar self-calibration solutions were then applied to the lens field as an initial calibration for that field. The calibration for the lens field was further refined using the same approach as for the quasar and upon completion the DIFMAP model of the lens was subtracted from the calibrated field. The images of the phase reference and the lens had a measured RMS noise of 1.8 and 1.0 mJy beam$^{-1}$, respectively. The higher than theoretical noise is attributed to substantial levels of RFI observed on some baselines and the shorter observing time available at WSRT and JB. In the first phase of the imaging process, the AIPS task IMAGR was used to make naturally weighted dirty images and beams of regions selected from WENSS data \citep{ren97} of the two fields being surveyed; the source selection criteria are described in detail in \S~\ref{sec:selection}. Targets falling within a certain annulus around each field were imaged simultaneously using the multi-field option within IMAGR, and the DO3D option was used to reduce non-coplanar array distortion. The data from both fields were kept in an unaveraged form to prevent smearing effects during imaging. For each target, the dirty image subtended a square of approximately $51\arcsec$ on each side, an area that covers approximately half that of the WENSS beam at the observation declination ($54\arcsec\times92\arcsec$). Since the dirty image of each target source contains $\sim2\times10^5$ synthesized-beam areas, a conservative $6\sigma$ detection threshold was imposed to avoid spurious detections. Furthermore, only the inner 75\% of each dirty image was searched for candidate detections to avoid erroneous detections as a result of map edge effects. For each positive detection the position of the VLBI peak flux density was recorded. This first imaging step was used to determine whether the target source had been detected with the VLBI observation. Based on our detection criteria, we estimate a false detection rate of approximately one in every 3300 images.
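The choice of a $6\sigma$ threshold can be motivated with a simple Gaussian estimate: the expected number of noise peaks above threshold is the number of independent beam areas searched times the one-sided tail probability. The sketch below is our own rough illustration; it assumes purely Gaussian noise, whereas the quoted one-in-3300 figure reflects the actual image statistics, so the two agree only to order of magnitude:
\begin{verbatim}
# Expected false detections per dirty image for a 6-sigma threshold,
# assuming Gaussian noise, ~2e5 beam areas per image and that only the
# inner ~75% of the area is searched (both figures from the text).
from scipy.stats import norm

beams_searched = 2e5 * 0.75
p_tail = norm.sf(6.0)              # one-sided P(> 6 sigma) ~ 1e-9
rate = beams_searched * p_tail
print(f"~1 false detection per {1 / rate:.0f} images")
\end{verbatim}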
While we expect the majority of unresolved WENSS sources to have peaks that fall within the imaged areas, approximately 9.5\% of the WENSS sources exhibit resolved structure. For these sources, we would not detect bright compact components that may exist outside of the central region that was imaged. During the calibration process, it was noted that the lens had significantly weaker signal on the longer baselines compared to that of the quasar. To test the effectiveness of the refined lens field self-calibration solutions, we re-imaged one of the B0218$+$357 field sources, B0215.1$+$3710, with only the phase reference self-calibration solutions applied. B0215.1$+$3710 is located on the side of the lens field that is furthest from the phase reference ($\sim3.45\arcdeg$) and so is most sensitive to changes from the nominal conditions that were corrected for. The target-calibrated dirty image for this source has an rms noise of 4.7 mJy beam$^{-1}$ and a peak of 42 mJy beam$^{-1}$. With only the phase reference self-calibration solutions applied, the rms noise is 6.6 mJy beam$^{-1}$ and the peak 21 mJy beam$^{-1}$. It is clear that without the refined calibration, B0215.1$+$3710 would not have been detected above the $6\sigma$ threshold. The second phase of the imaging process involved creating a (u,v) shifted data set for each of the positive detections, using the AIPS task UVFIX, such that the new image centre coincided with the coordinate of the image peak recorded in the first phase. The shifted data sets were averaged in frequency, effectively reducing the field of view of each of the targeted sources to approximately $0.5\arcmin$, and then exported to DIFMAP. In DIFMAP, the visibilities were averaged over 10 second intervals to reduce the size of the data set and to speed up the imaging process. Each target was imaged in DIFMAP, with natural weighting applied, using several iterations of model fitting. Phase self-calibration was performed between iterations to adjust for the varying effects of the ionosphere across the field. During the imaging process it was noted that the self-calibration phase corrections varied significantly between fields and even among sources within each field. It is believed that these were due to ionospheric variations that occurred across the survey field. Observations at 90 cm will invariably suffer degradation as a result of ionospheric variations, and the nominal TEC corrections made in the initial calibration stages assumed that these corrections would be valid across the entire field. This is not a valid assumption when imaging extremely large fields. To provide position dependent corrections within a field, ParselTongue scripts were developed to implement two alternative methods that could be tested against the data. The first method calculated ionospheric corrections based on TEC measurements towards each source of interest in the field of view and applied differential corrections, relative to the TEC correction already made at the phase centre, prior to imaging. This allowed a differential correction to be applied after self-calibration on the bright central source in each field of view. The corrections did not result in any significant improvement in the resulting images. We suspect that the currently available TEC solutions may be too coarse, both spatially and temporally, to account for the ionospheric variations across the field.
The second method used a parameterized ionospheric model to determine the corrections to each source of interest in the field of view. As with the TEC corrections, these were applied after the central source in each field had been self calibrated. Preliminary tests of these corrections indicated that they performed better than the differential TEC solutions, with improvements of the order of a few percent in flux density observed in approximately 70\% of sources tested. The testing of these libraries is not yet complete and only the 11 wide-field sources of \citet{len06} were used in our initial tests. Further tests will be required to more robustly analyse the performance of the two approaches to ionospheric calibration. Following our first attempt to survey the inner $0\arcdeg-1\arcdeg$ region of each field \citep{len06} we discovered that the positional accuracy of the detected sources degraded significantly with radial distance from the phase centre when compared to the positions derived from observations with other instruments. While most sources observed with other instruments only had a positional accuracy of $\sim1\arcsec$, it was still clear that our fields were being scaled by a factor of $0.99871\pm7\times10^{-5}$, a factor that corresponds to an offset of $53\pm3$ frequency channels in our data set. Interestingly, this appeared to correspond to the 50 lower-band channels that were flagged during editing in the 31DEC05 version of AIPS that was used for the processing of the data. When the processing was repeated in the 31DEC06 version of AIPS, the positional discrepancies disappeared, so we suspected that there may have been a software issue with the earlier version of AIPS. However, as wider fields were imaged we discovered four sources that were common to both of the fields. Each of these sources should have been well aligned between the two fields; however, discrepancies of $0.41\arcsec-0.65\arcsec$ were observed at approximately the same position angle. This was also indicative of a radial scaling but to a lesser degree, $0.999927\pm1.4\times10^{-5}$, corresponding to an offset of $3\pm0.6$ channels. The source of this error has not yet been identified, however all of the source positions and images in this paper have been corrected to account for this effect.
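One simple way to see the correspondence quoted above is to note that image coordinates scale inversely with the assumed observing frequency, so a radial scale factor $s$ is equivalent to a frequency offset of $(1/s-1)\nu$ divided by the channel width. A minimal sketch (our own check of the numbers, not part of the original reduction):
\begin{verbatim}
# Convert an apparent radial scale factor s into an equivalent frequency
# offset in channels, assuming positions scale as nu_true / nu_assumed.
nu = 322.49e6            # Hz, centre of the 4 MHz wide-field band
ch_width = 4e6 / 512     # Hz per spectral channel

for s in (0.99871, 0.999927):    # the two measured scale factors
    n_chan = (1.0 / s - 1.0) * nu / ch_width
    print(f"scale {s} -> offset of {n_chan:.1f} channels")
# -> ~53.3 and ~3.0 channels, matching the offsets quoted above
\end{verbatim}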
\section{Survey annuli, survey depths, and source selection} \label{sec:selection} We split the survey of each field into six annuli based on the radial distance from the correlation phase centre of that field. These are referred to as the $0\arcdeg-0.25\arcdeg$, $0.25\arcdeg-0.5\arcdeg$, $0.5\arcdeg-1\arcdeg$, $1\arcdeg-1.5\arcdeg$, $1.5\arcdeg-2\arcdeg$, and $2\arcdeg-3\arcdeg$ annuli in field 1 (centred on J0226$+$3421) and field 2 (centred on B0218$+$357). Our survey attempts to detect sources at large radial distances from the antenna pointing position and the correlation phase centre. To reduce the effect of bandwidth and time-averaging smearing, increasingly restrictive (u,v) ranges are employed in the outer annuli. Furthermore, the fall-off of the response of the primary beam is an effect that significantly limits the sensitivity within each annulus. In particular, WSRT and JB have significantly narrower primary beams ($\sim1\arcmin$ and $\sim0.5\arcdeg$ HWHM respectively), owing to their larger effective apertures, compared to the VLBA ($\sim1.3\arcdeg$ HWHM). As such, the WSRT data were only used to image the source directly at the phase centre, whilst the JB data were only used in the $0\arcdeg-0.25\arcdeg$ annulus (restrictions in (u,v) range effectively exclude the JB data from the $0.25\arcdeg-0.5\arcdeg$ annulus even though it has a significant response within this annulus). Since only the VLBA antennas were used outside the $0\arcdeg-0.25\arcdeg$ annulus, the reduced response in the other annuli is composed of only three independent components: the VLBA primary beam response ($R_{VLBA}$) and the reduced responses due to bandwidth and time-averaging smearing ($R_{bw}$, $R_{t}$). The combined reduced response, $R$, is given by $R=R_{bw}R_{t}R_{VLBA}$. We have estimated $R_{bw}$ and $R_{t}$ following \citet{bri99} and have adopted a fitted function used to model the VLA antennas, as documented in the AIPS task PBCOR, to model the primary beam response of the VLBA under the assumption that the 25 m antennas in these arrays have a similar response \citep{gar05}. In Table \ref{tab:tabfields}, we calculate the total response and estimated $1\sigma$ rms noise at the outer edge of each annulus of the two surveyed fields. The (u,v) range has been restricted to limit the effects of bandwidth and time-averaging smearing to at most a few percent. The WENSS catalogue was used as a guide for potential targets in the survey. Table \ref{tab:tabfields} lists the total number of WENSS sources that exist within each annulus of each field. For a WENSS source to be detected by our survey it must have a peak flux density, $S_{P}($WENSS$)$, that satisfies the constraint $S_{P}($WENSS$)>6\sigma R^{-1}$. Estimates of this limit at the edge of each annulus, $S_{P}$, and the number of WENSS sources that meet this constraint, $\langle N_{VLBI}\rangle$, are listed in Table \ref{tab:tabfields}. Even though it was estimated that many of the WENSS sources would fall below our detection limits, for completeness, we targeted all WENSS sources in the $0\arcdeg-2\arcdeg$ annuli of each field given the possibility that some sources might exhibit strong variability. Between $2\arcdeg-3\arcdeg$ only candidate sources that were within our sensitivity limits were targeted.
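To illustrate how the detection limit degrades with radius, the sketch below evaluates the primary-beam part of $R$ with a VLA-style polynomial and applies the $6\sigma R^{-1}$ criterion. The polynomial coefficients are the generic AIPS PBCOR values and, like the use of the field 1 phase-centre noise, are our own assumptions here; the sketch holds the smearing terms at unity and ignores the noise increase caused by the progressive (u,v) cuts, so it underestimates the true limits in the outer annuli:
\begin{verbatim}
# Rough radial detection limit: R ~ R_VLBA, with R_bw and R_t held at ~1
# since the (u,v) cuts keep smearing losses to a few percent (see text).
# The VLA-style beam polynomial uses generic AIPS PBCOR coefficients
# (our assumption), valid only down to a few percent response.
def pbcor_vla(r_arcmin, freq_ghz):
    x = (r_arcmin * freq_ghz) ** 2
    return 1.0 - 1.343e-3 * x + 6.579e-7 * x**2 - 1.186e-10 * x**3

sigma = 1.2e-3                       # Jy/beam at the phase centre (field 1)
for r_deg in (0.25, 0.5, 1.0, 1.5):  # annulus outer edges
    R = pbcor_vla(r_deg * 60.0, 0.32249)
    print(f"{r_deg:4.2f} deg: R = {R:4.2f}, "
          f"6-sigma limit ~ {6 * sigma / R * 1e3:5.1f} mJy/beam")
\end{verbatim}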
The VLBI peak and integrated flux density have been corrected for the primary beam response but not for bandwidth and time-averaging smearing losses. Based on these losses and uncertainties in the amplitude calibration, we estimate the absolute flux density scales of the VLBI observations to be better than $10\%$ in the inner $0.5\arcdeg$ of each field and better than $20\%$ elsewhere. A total of 27 sources were detected and imaged by the survey: eight of these sources were detected in the J0226$+$3421 field (field 1) and the remaining 19 in the more sensitive B0218$+$357 field (field 2). Nine of the sources were detected outside of the half-power point of the VLBA primary beam (HWHM $\sim1.3\arcdeg$). Four sources, B0223.1$+$3408, B0221.9$+$3417, B0219.2$+$3339 and B0223.5$+$3542, were detected in both fields in the region of overlap. B0219.2+3339, in particular, was detected at $2.06\arcdeg$ from the phase centre of the second field and is well past the quarter power point of the VLBA primary beam; it is also the only source detected in the outer annulus of the survey. Table \ref{tab:tabastrometry} lists all of the detected sources, the detection field, the distance from the phase centre of that field, the restoring beam size and the one sigma residual RMS noise. A positional comparison of our corrected source positions is also made with respect to the best known radio positions, where $d_{E}$ and $\theta_{E}$ are the observed offset and position angle from this position, and $d_{\sigma}$ is the offset in terms of the combined one sigma position error of the two compared positions. The VLBI positions in the $0\arcdeg-0.25\arcdeg$ and $0.25\arcdeg-0.5\arcdeg$ annuli are limited by errors introduced by the ionosphere. We estimate a one sigma error of $\sim3-12$ mas in each coordinate. The positional accuracy of the outer, heavily tapered, annuli is further limited by RMS noise errors. We estimate the one sigma error in each coordinate of these outer fields to be better than 15 mas, 20 mas and 30 mas for the $0.5\arcdeg-1\arcdeg$, $1\arcdeg-1.5\arcdeg$ and $1.5\arcdeg-2\arcdeg$ annuli, respectively. The positional accuracy of B0219.2+3339 is also expected to be $\sim30$ mas as it is located quite close to the $2\arcdeg$ boundary. After a correction was applied for an apparent scaling effect in the image (see \S~\ref{sec:vlbical}), residual errors of 40 mas, 100 mas, 200 mas and 180 mas were measured for the cross-field detections of sources B0223.1$+$3408, B0221.9$+$3417, B0219.2$+$3339 and B0223.5$+$3542, respectively. These extremely wide-field sources exhibited significant ionospheric phase fluctuations that distorted their initial dirty images. While the fluctuations were partially corrected for with phase self-calibration, it is believed that this may have adversely affected the position accuracy. Contour maps for each of the sources detected in field 1 are shown in Figure \ref{fig:figf1} and those detected in field 2 are shown in Figure \ref{fig:figf2}. While not formally part of the wide-field survey, the fringe finder 3C84 has also been imaged and is shown in Figure \ref{fig:fig3c84}. The total survey required nearly six weeks of processing on a single 2 GHz computer, required $\sim200$ gigabytes of workspace, and generated a total of $\sim20$ gigabytes of image data. We estimate that six years of processing would be required to completely image the FWHM beam of the VLBA at 320 MHz using similar techniques.
Fortunately, the problem can be easily broken down to run efficiently in the parallel environment of a supercomputing cluster. With a basic 100-node cluster, the entire FWHM beam of the VLBA could be imaged within three weeks using a simple brute force method that would image targets on individual processors. For a single field, the cluster would generate a mosaic of the primary beam comprising $\sim1$ terabyte of image data. More elaborate algorithms may be employed to improve the efficiency of this processing further, for example, by using a recursive approach that creates successively smaller sub-fields by performing a combination of (u-v) shifting and data averaging. \subsection{Comments on individual sources} \subsubsection{3C84} For 3C84, we measure a VLBI peak flux density of 2.33 Jy beam$^{-1}$ and an integrated flux of 6.07 Jy, whereas the WENSS peak flux density and integrated flux are 19.396 Jy beam$^{-1}$ and 42.8 Jy, respectively. This is the only source in our sample of imaged sources with extended structure in WENSS, where it has an estimated size of $115\arcsec\times84\arcsec$ at a position angle of $115\arcdeg$. In our VLBI image we have recovered $\sim14\%$ of the WENSS flux. Similar VLBI observations at 327 MHz, with a larger synthesised beam, measure a slightly greater integrated flux of 7.47 Jy \citep{ana89}, suggesting that the missing flux is most likely related to the larger-scale structure that is resolved out by VLBI observations. We measure a Largest Angular Size (LAS) of 150 mas which corresponds to a Largest Linear Size (LLS) of $\sim50$ pc at its measured redshift of $z=0.017559\pm0.000037$ \citep{str92}. A 15 GHz VLBA contour map \citep{lis05} is shown overlaid with our 90 cm image of 3C84 in Figure \ref{fig:fig3c84}. The smaller scale structures within this image appear to align with the jet-like feature that appears in our 90 cm image and extends 100 mas to the south of the core. \subsubsection{B0223.1+3408 (J0226$+$3421, 4C$+$34.07)} The quasar, J0226$+$3421, was imaged with the full (u,v) range and recovered approximately 80\% of the WENSS flux. The contour map of this source is shown in Figure \ref{fig:figf1}(a) and is overlaid with a naturally weighted contour map from a 2 cm VLA A-configuration (plus Pie Town) observation (Wucknitz et al., in preparation). The source is dominated by a bright core ($\sim0.85$ Jy) and an extended lobe to the west ($\sim1.6$ Jy). A weaker lobe appears to the north ($\sim0.28$ Jy) and a partially resolved hot spot ($\sim0.14$ Jy) approximately mid-way between the core and the western lobe. The source has a LAS of $1.15\arcsec$ which corresponds to a LLS of $\sim9$ kpc at its measured redshift of $z=2.91\pm0.002$ \citep{wil98}. All of the large-scale structures observed at 90 cm with VLBI are also clearly detected at 2 cm with the VLA and Pie Town. MERLIN$+$VLBI images at 18 cm \citep{dal95} detect the core and western lobe with large-scale structure and positions that are consistent with our image; however, their observations do not detect the northern lobe or hot spot. The source was also detected in the second field and is shown in Figure \ref{fig:figf2}(q) with an image of the field 1 source restored using the same beam. An offset of 40 mas is observed between the field 1 and field 2 detections of the source after correcting for the larger-scale offset described in \S~\ref{sec:vlbical}.
\subsubsection{B0218.0+3542 (B0218$+$357)} B0218.0$+$3542 is a gravitational lens that has been mapped at higher frequencies \citep[e.g.][]{big01,wuc04}, with VLBI \citep[e.g.][]{big03}, and at various wavelengths by \citet{mit06}. The source is the smallest known Einstein radio ring \citep{pat93}. As this source was the main target of the original observation, it is placed in the most sensitive field and annulus of the survey and has been imaged with the full (u,v) range. Our image of the source, Figure \ref{fig:figf2}(a), has been restored with a beam that is $\sim4$ times larger than normal to highlight the large-scale structure within the source. The source has a LAS of 690 mas and is dominated by the A and B lensed images to the west and east, respectively. The two images are separated by $\sim340$ mas and appear to have weaker components that are mirrored on either side of the lens. These weaker components may be a small portion of a lensed jet that is tangentially stretched. Our observations recover $\sim54\%$ of the WENSS flux, suggesting the presence of structures that are fully resolved out even with our shortest baselines. L-Band VLA images of the source seem to suggest that there is indeed larger-scale emission surrounding the source \citep{ode92}. The measured redshift of the lensing galaxy is $z=0.68466\pm0.00004$ \citep{bro93} and that of the lensed object is $z=0.944\pm0.002$ \citep{coh03}. B0218.0+3542 will be studied in greater detail with the high-sensitivity, narrow-field observations of this source at 327 MHz and at 610 MHz by Wucknitz et al. (in preparation). \subsubsection{B0221.9$+$3417} The VLBI source we detect within the B0221.9$+$3417 field, Figure \ref{fig:figf1}(b), is offset by $9.94\arcsec$ at a position angle of $146\arcdeg$ relative to the WENSS position. The source is also detected in field 2, Figure \ref{fig:figf2}(o). The separation between both detections is within 100 mas, after correcting for the larger-scale offset described in \S~\ref{sec:vlbical}, and both have a similar flux density, confirming that the detected source is indeed at this position. The position offset also exists when compared against the same source in NVSS, the 365 MHz Texas survey \citep{dou96} and VLSS \citep{coh07}. As only 11\% of the WENSS flux was recovered by the VLBI observation, the position offset hints at a larger component $\sim10\arcsec$ to the north-west of the VLBI source. Furthermore, NVSS lists a fitted source size with a major axis of $16\arcsec$ and the Texas survey categorises the source as a symmetric double with a component separation of $13\pm2\arcsec$ at a position angle of $155\pm11\arcdeg$. These observations are consistent with the VLBI observation if we assume a compact south-eastern component has been detected. The VLBI source has a weaker component 360 mas to the north-west that is directly in line with the WENSS source. We measure a LAS of 360 mas for the VLBI source and an LLS of 2.8 kpc at its measured redshift of $z=0.852\pm0.002$ \citep{wil02}. \subsubsection{B0221.6$+$3406B} B0221.6$+$3406B, as shown in Figure \ref{fig:figf1}(c), has a complex morphology. The source has an LAS of 830 mas which corresponds to a LLS of 6.7 kpc at a measured redshift of $z=2.195\pm0.003$ \citep{wil98}. Our VLBI observations have recovered $\sim90\%$ of the WENSS flux, suggesting that there is little or no extended structure above what has already been imaged. Based on the integrated flux densities in WENSS and NVSS, the source has a spectral index of $-0.93$.
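The spectral indices quoted here follow from the two-point definition $\alpha = \log(S_{1}/S_{2})/\log(\nu_{1}/\nu_{2})$ between the WENSS and NVSS frequencies. The sketch below (our own, with an illustrative flux ratio chosen to reproduce the $-0.93$ above) makes the arithmetic explicit:
\begin{verbatim}
# Two-point spectral index alpha (S_nu ~ nu^alpha) between WENSS
# (325 MHz) and NVSS (1.4 GHz). The flux ratio used here is purely
# illustrative, chosen to give the alpha = -0.93 quoted in the text.
import numpy as np

def spectral_index(s_325, s_1400, nu1=325e6, nu2=1.4e9):
    return np.log(s_1400 / s_325) / np.log(nu2 / nu1)

print(round(spectral_index(1.0, 0.257), 2))   # -> -0.93
\end{verbatim}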
\subsubsection{B0223.9$+$3351} B0223.9$+$3351, as shown in Figure \ref{fig:figf1}(d), appears to be an AGN with a 180 mas jet extension to the north. The source has a LAS of 420 mas which corresponds to a LLS of 3.5 kpc at a measured redshift of $z=1.245\pm0.004$ \citep{wil02}. Our VLBI observations have recovered $\sim50\%$ of the WENSS flux. \subsubsection{B0215.4$+$3536} B0215.4$+$3536 is an ultra-steep spectrum source, a characteristic that is an excellent tracer of galaxies at redshifts $z\geq2$ \citep[e.g.][and references therein]{rot94}. The source has a LAS of $7.1\arcsec$ which corresponds to a LLS of $\geq60$ kpc for a redshift of $z\geq2$. Figure \ref{fig:figf2}(e) shows our 90 cm VLBI image overlaid with an L-Band VLA image \citep{rot94}. The core and peaks within the two lobes align very closely, to within $0.1\sigma$, with the VLA image. A compact component of emission, possibly a jet interaction region, is detected by our observations approximately mid-way between the core and the south-west lobe and appears to align with the extended edge of that lobe. Approximately 50\% of the WENSS flux is recovered by our observation; the remaining flux is likely associated with the extended lobes and is resolved out by our observation. \subsubsection{B0219.2$+$3339} B0219.2$+$3339 is detected in both field 1 and field 2 and is shown in Figures \ref{fig:figf1}(g) and \ref{fig:figf2}(s), respectively. A significant residual offset of 200 mas exists between these two independent detections even after correcting for the larger-scale offset described in \S~\ref{sec:vlbical}. The phases of this source in the J0226$+$3421 field appeared to be more heavily affected by the ionosphere than those in the B0218$+$357 field and it is believed that the offset may have been introduced by the phase self-calibration process. The source has a LAS of 900 mas which corresponds to a LLS of 6.6 kpc for a measured redshift of $z=0.752\pm0.002$ \citep{wil02}. \subsubsection{B0214.5$+$3503} B0214.5$+$3503 has been previously imaged by \citet{rot94} at L-Band with the VLA and is classified as an ultra-steep spectrum source. Contours of the L-Band VLA image are shown overlaid with the 90 cm VLBI observation in Figure \ref{fig:figf2}(h). The source has a LAS of $2.8\arcsec$ which corresponds to a LLS of $\geq24$ kpc if a redshift of $z\geq2$ is assumed. The two components of the VLBI source align very closely, to within $0.2\sigma$, with the VLA image. Approximately 75\% of the WENSS flux is recovered by our observation; the remaining flux is likely associated with the extended structure observed in the VLA image. \subsubsection{B0223.5$+$3542} B0223.5$+$3542 is detected in both field 1 and field 2 and is shown in Figures \ref{fig:figf1}(h) and \ref{fig:figf2}(j), respectively. A significant residual offset of 180 mas exists between these two independent detections even after correcting for the larger-scale offset described in \S~\ref{sec:vlbical}. The phases of the source in the J0226$+$3421 field appeared to be more heavily affected by the ionosphere than those in the B0218$+$357 field and it is believed that the offset may have been introduced by the phase self-calibration process. This may have also adversely affected the measured integrated flux density, which is approximately 60\% greater in the J0226$+$3421 field than in the B0218$+$357 field. The two components of the source are separated by 830 mas. Approximately 90\% of the WENSS flux is recovered in the B0218$+$357 field observation of this source.
\subsubsection{B0223.8$+$3533} Observations of this field revealed an unresolved compact source, Figure \ref{fig:figf2}(k), approximately $17\arcsec$ from the position reported by WENSS and over $25\arcsec$ from the more accurate position reported by NVSS; these are by far the greatest offsets observed for any of our detected sources. NVSS also places a limit on the fitted major axis of this source at less than $15\arcsec$, a size that does not encompass our observed source. Furthermore, our measured integrated flux of 90 mJy is more than twice that measured by WENSS. The size, position and integrated flux density of our VLBI source suggest that it may be an unrelated transient or highly variable source that has been serendipitously detected within the target field. The existence of this source was tested by splitting the VLBI data into four equal length periods and independently imaging each of these. The source was detected in all four data sets with an integrated flux of $92\pm11$ mJy. Furthermore, as would be expected for an unresolved source, scalar averaging of the visibility amplitudes over 30 minute intervals revealed amplitudes that were approximately equal on all baselines. \subsubsection{B0211.9$+$3452} We observe an almost $7\arcsec$ offset at a position angle of $13\arcdeg$ between our VLBI detection of this source, Figure \ref{fig:figf2}(l), and the best known position \citep{con98}. NVSS places a limit on the fitted HWHM model of this source at less than $28.6\arcsec$ and also notes the presence of large residual errors which are indicative of a complex source. As only 30\% of the WENSS flux was recovered by the VLBI observation there is a suggestion that we have detected a compact component of a larger source. This is also supported by the 365 MHz Texas survey which classifies the source as an asymmetric double with a separation of $27\pm1\arcsec$ at a position angle of $25\pm2\arcdeg$. \section{Discussion} Our survey results indicate that at least $10\%$ of moderately faint (S$\sim100$ mJy) sources found at 90 cm contain compact components smaller than $\sim0.1$ to $0.3$ arcsec and stronger than $10\%$ of their total flux densities. This is a strict lower limit as the sensitivity of our observation was limited by the primary beam at the edge of the survey fields. None of the surveyed sources that were even slightly resolved by WENSS were detected. Similarly, none of the WENSS sources that were below the sensitivity limits of the VLBI observation were detected either, suggesting that none of these sources had significantly increased in brightness since the WENSS observations were carried out. The apparent lack of sources varying above our detection threshold must at least in part be due to resolution effects. As 90\% of the WENSS sources above the VLBI detection threshold are not detected, they must be at least partially resolved at the VLBI resolution and the compact component of the radio emission would only be a fraction of the WENSS flux density. For the compact component of these sources to vary enough to be detected with VLBI, they must increase in strength by factors of perhaps at least a few (the reciprocal of the ratio of compact flux to WENSS flux); that is, the compact component of the flux needs to increase above the VLBI sensitivity limit. Resolution effects are therefore masking variability in these sources. As discussed below, the detection of one apparently highly variable source in the VLBI data is rather remarkable.
The interpretation of our detection statistics is complicated, in that the survey has a non-uniform sensitivity over both fields, due to the primary beam response of the VLBA antennas and the fact that we are imaging objects well beyond the half-power points of the primary beam. In addition, due to time and bandwidth smearing effects, as one images objects further from the phase centre, data on the long baselines are discarded, since the smearing effects make imaging difficult. A consequence of this is that the angular resolution is also non-uniform across the surveyed fields, with lower resolution far from the phase centre. Not only is the flux limit variable across the field, but the brightness temperature sensitivity also varies. It is possible to estimate the detection statistics of our survey for a uniform flux density and brightness temperature limit by considering sources not too far from the phase centre and for a flux sensitivity between the extremes at the phase centre and field edge. For example, if a sensitivity limit of 30 mJy beam$^{-1}$ is considered (achieved in the $0.25 - 0.5$ degree annulus of field 1 and in the $0.5 - 1$ degree annulus of field 2, and exceeded in the lower radius annuli in each field), 11 out of 55 possible sources are detected, a detection rate of 20\%, higher than the strict lower limit of 10\% estimated above for all sources at all annuli. \citet{gar05} performed a similar survey of the NOAO Bootes field at 1.4 GHz, using the NRAO VLBA and 100 m Green Bank Telescope. The survey covered a total of 0.28 deg$^{2}$, one hundredth of the area covered by our survey, and detected a total of 9 sources. The survey achieved sensitivities of $0.074-1.2$ mJy beam$^{-1}$ that enabled the detection of both weak and extended sources, whereas our 90 cm observations detected mainly compact sources or slightly resolved bright sources. Nonetheless, we can estimate the number of detections in this region that could be achieved using the 90 cm survey techniques described in this paper. The 0.28 deg$^2$ NOAO Bootes field contains a total of 13 WENSS sources, 6 of which have integrated flux densities $>30$ mJy. Based on our detection rate of 20\% for such sources we would expect to detect one WENSS source at 90 cm. Assuming a median spectral index of $-0.77$, only two of the \citet{gar05} sources have integrated flux densities above our 30 mJy beam$^{-1}$ limit at 90 cm; however, one of these is extended and would have a VLBI peak flux density that falls below our limit. Thus the observations of \citet{gar05} are consistent with our 90 cm VLBI results for sources with a peak flux density above 30 mJy beam$^{-1}$.
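The 20 cm to 90 cm extrapolation above is the usual power-law scaling; as a minimal sketch (our own, with an illustrative input flux density):
\begin{verbatim}
# Extrapolate a 1.4 GHz flux density to 325 MHz with S_nu ~ nu^alpha,
# using the median spectral index alpha = -0.77 adopted in the text.
def s_90cm(s_20cm_mjy, alpha=-0.77, nu_20=1.4e9, nu_90=325e6):
    return s_20cm_mjy * (nu_90 / nu_20) ** alpha

# e.g. a 10 mJy source at 1.4 GHz corresponds to ~31 mJy at 325 MHz,
# i.e. right at the 30 mJy/beam limit considered above.
print(s_90cm(10.0))
\end{verbatim}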
Estimates of the percentage of sources detected with VLBI give an estimate of the relative contribution of AGN (that contain compact radio emission and are detectable with VLBI) and starburst galaxies (which contain low brightness temperature radio emission not detectable with VLBI). Analysis of the ratio of starburst galaxies to AGN as a function of redshift (at high redshifts) can help to determine the initial sources of ionising radiation early in the Universe. As very little redshift data for our surveyed sources are available, such an analysis is not currently possible with this dataset. In practice, VLBI data at an additional frequency are also required, to confirm that the compact radio emission attributed to AGN has plausible spectral indices. The distribution of morphologies in the detected survey sources is typical of AGN. 10/27 sources are unresolved point sources, consistent with core-dominated AGN. A further 8/27 are clearly resolved into double component sources, consistent with being core-jet AGN or double-lobed radio galaxies. 7/27 sources have complex or extended structures, not obviously clear double components. Again, these sources may be core-jet AGN or radio galaxies. The remaining 2/27 sources are the gravitational lens and the quasar at the phase centres of the two fields. The serendipitous detection of a likely highly variable, very compact source near the target WENSS source B0223.8$+$3533 is intriguing. The total area imaged by this survey represents $\sim0.5\%$ of the area within the $0\arcdeg-2\arcdeg$ annulus and is equivalent to $\sim2.2\%$ of the FWHM of the VLBA primary beam. While it is difficult to place any limits on the real population of variable sources based on this one observation, it does highlight the importance of imaging wide fields completely, in order to improve our understanding of such sources. \subsection{Future Prospects} The observations presented here demonstrate that extremely wide-field surveys can now be piggybacked on current and future VLBI observations at 90 cm. While this survey has mainly concentrated on detecting and imaging sources already detected by other surveys, we find tantalising evidence of a transient or highly variable source. We were fortunate to have found one that appeared in close proximity to one of our target sources, but this may not always be the case. This provides motivation to take on a more ambitious survey of the entire field. Such a survey is not beyond the reach of current technology; it would require at most $\sim45$ times more processing than the project presented here, in order to image the entire primary beam of the VLBA using a similar faceted approach. While this is not the most efficient means of detecting transients, it will help progress the development of algorithms and techniques needed for next generation, survey-class instruments that operate at wavelengths or sensitivities not matched by current instruments. The observations presented in this paper were limited by the spectral and temporal resolution of the EVN correlator at the time of the observation. To minimise the effects of bandwidth and time-averaging smearing it was necessary to compromise between resolution and image noise. Future technical developments in the capabilities of correlators will allow wide-field, global VLBI studies to be conducted without such restrictions. In particular, software correlators can provide extremely high temporal and spectral resolution, limited only by the time it takes to process the data \citep{del07}. They also allow for some pre-processing to be applied during the correlation process to, for example, mitigate the effects of radio interference or to correlate against multiple phase centres simultaneously. \subsection{Implications for LOFAR and SKA} The results of these observations provide important information on the nature and incidence of compact, low-frequency radio sources, with consequences for next generation, low-frequency instruments such as LOFAR and the SKA. LOFAR is currently being deployed across The Netherlands but remote stations are already under construction in neighbouring countries, in particular Germany. Other countries (e.g. UK, France, Sweden, Italy and Poland) are also expected to join this European expansion of LOFAR (E-LOFAR), extending the longest baseline from a few hundred to a few thousand km.
This development will provide LOFAR with sub-arcsecond resolution at its highest observing frequency (the $120-240$ MHz high-band). One concern associated with extending LOFAR to much longer baselines is whether enough cosmic sources will remain unresolved; this characteristic is required to ensure that there are enough calibrator sources in the sky to calibrate the instrument across its full, very wide, field of view. The observations presented here suggest that at least one tenth of all radio sources (at the several tens of mJy level) are likely to exhibit compact VLBI radio structure in the LOFAR high-band. In all likelihood, an even larger fraction of the E-LOFAR source population will therefore be bright and compact enough to form a grid of calibrator sources across the sky. From our results, we estimate the spatial number density of relatively bright (S$>10$ mJy) and compact (LAS$<200$ mas) sources at 240 MHz to be $\sim3$ deg$^{-2}$. The aggregate total of these compact sources within a beam should serve as a good calibrator for E-LOFAR and enable most of the low-frequency radio sky to be imaged with excellent sub-arcsecond resolution and high dynamic range. Extrapolation to LOFAR's low-band (10$-$80 MHz) is probably very dangerous, but there is every reason to believe that a large number of these sources will remain compact. In order to assess the relative numbers of starburst galaxies and AGN as a function of redshift, large redshift surveys of these radio continuum objects will obviously need to take place. Such a survey could be conducted using the redshifted HI signal from these galaxies, using the SKA. \acknowledgements E.L. acknowledges support from a Swinburne University of Technology Chancellor's Research Scholarship, a CSIRO Postgraduate Student Research Scholarship, ATNF co-supervision, and the hospitality of JIVE where part of this work was carried out. This work was supported by the European Community's Sixth Framework Marie Curie Research Training Network Programme, Contract No. MRTN-CT-2004-505183 ``ANGLES''. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The European VLBI Network is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} The AdS/CFT correspondence \cite{MGW} has more and more revealed the deep relations between the $\mathcal{N}=4$ super Yang-Mills (SYM) theory and the string theory in $AdS_5\times S^5$, where classical string solutions play an important role \cite{GKP,AT,JP}. The energies of classical strings have been shown to match the anomalous dimensions of the gauge invariant operators, while an open string ending on a curve at the boundary of $AdS_5$ has been analyzed to study the strong coupling behavior of the Wilson loop in the gauge theory \cite{RY,JM,DGO}. Alday and Maldacena have used the AdS/CFT correspondence to compute the planar 4-gluon scattering amplitude at strong coupling in the $\mathcal{N}=4$ SYM theory \cite{AM} and found agreement with a conjectured form for the all-loop iterative structure and the IR divergence of the perturbative gluon amplitude \cite{ABD}. The 4-gluon scattering amplitude has been evaluated as the string theory computation of the 4-cusp Wilson loop composed of 4 lightlike segments in the T-dual coordinates, where a certain open string solution in $AdS_5$ space is found to minimize the area of the string surface whose boundary conditions are determined by the massless gluon momenta, and a dimensional regularization is used to regularize the IR divergence. Before the IR regularization the worldsheet surface of this particular solution \cite{AM} is related by a certain conformal SO(2,4) transformation to the 1-cusp Wilson loop surface found in \cite{MK} (see also \cite{YM}). The non-leading prefactor of the gluon amplitude has been studied \cite{AFK} and the IR structure of $n$-gluon amplitudes has been fully extracted from a local consideration near each cusp \cite{EB}, where the 1-cusp Wilson loop solution is constructed even in the presence of the IR regularization. By computing the 1-loop string correction to the 1-cusp Wilson loop solution, the 1-loop coefficient in the cusp anomaly function $f(\lambda)$ of the gauge coupling $\lambda$ has been derived as consistent with the energy of a closed string with large spin $S$ in $AdS_5$ \cite{KRT}. Moreover, the 2-loop coefficient in $f(\lambda)$ has been presented \cite{RT} to agree with the results of \cite{BBK,BKK} for the strong coupling solution of the BES equation \cite{BES} in the gauge theory side. Based on the string sigma-model action a whole class of string solutions for the 4-gluon amplitude has been constructed \cite{MMT} under the constraint that the Lagrangian evaluated on the string solution takes a constant value. Applying the dressing method \cite{SV} used for the study of the giant magnons and their bound or scattering states \cite{HM,ND} to the elementary 1-cusp Wilson loop solution of \cite{MK}, new classical solutions for Euclidean worldsheets in $AdS_5$ \cite{JKS} have been constructed, where the surfaces end on complicated, timelike curves at the boundary of $AdS_5$. Several investigations associated with planar gluon amplitudes have been presented \cite{BHT,DKS,KR,LAM,DHK,NSV,JMS}. In ref. \cite{AM} the planar 4-gluon amplitude at strong coupling has been constructed by deriving the classical string sigma-model action evaluated on the 4-cusp Wilson loop surface whose edge traces out a rhombus on the projected two-dimensional plane at the boundary of $AdS_5$. Based on the Nambu-Goto string action we will apply various conformal SO(2,4) transformations to the elementary 1-cusp Wilson loop solution of \cite{MK}.
We will show how the obtained string surface configurations satisfy the string equations of motion derived from the Nambu-Goto string action. We will observe that there appear various kinds of Wilson loop solutions, which are separated into the 2-cusp Wilson loop solutions and the 4-cusp ones. \section{SO(2)$\times$SO(4) transformations of the 1-cusp Wilson loop solution} We consider the 1-cusp Wilson loop solution of \cite{MK}, where the open string world surface ends on two semi-infinite lightlike lines and is given by \begin{equation} r = \sqrt{2}\sqrt{y_0^2 - y_1^2} \label{els}\end{equation} embedded in an $AdS_3$ subspace of $AdS_5$ with the metric written in the T-dual coordinates by \cite{AM} \begin{equation} ds^2 = \frac{-dy_0^2 + dy_1^2 + dr^2}{r^2}. \end{equation} Here we take the static gauge where $(y_0,y_1)$ are regarded as worldsheet directions to write the Nambu-Goto action \begin{equation} S = \frac{R^2}{2\pi}\int dy_0dy_1 \frac{1}{r^2} \sqrt{D}, \hspace{1cm} D = 1 + \left(\frac{\partial r}{\partial y_1}\right)^2 - \left( \frac{\partial r}{\partial y_0}\right)^2, \end{equation} from which the equation of motion for $r$ is given by \begin{equation} \frac{2\sqrt{D}}{r^3} = \partial_0\left(\frac{\partial_0r}{r^2\sqrt{D}}\right) - \partial_1\left(\frac{\partial_1r}{r^2\sqrt{D}}\right). \label{ste}\end{equation} The solution (\ref{els}) is confirmed to satisfy eq. (\ref{ste}) with $\sqrt{D}=i$, which implies that the Lagrangian is purely imaginary when it is evaluated on the solution (\ref{els}). Then the amplitude $\mathcal{A}\sim e^{iS}$ has an exponential suppression factor. The Poincare coordinates in $AdS_5$ with the boundary $r=0$, \begin{equation} ds^2 = \frac{-dy_0^2 + dy_1^2 + dy_2^2 + dy_3^2 + dr^2}{r^2} \end{equation} are related to the embedding coordinates $Y_M \; (M= -1,0,\cdots,4)$ on which the conformal SO(2,4) transformation acts linearly by the following relations \begin{eqnarray} Y^{\mu} = \frac{y^{\mu}}{r}, \hspace{1cm} \mu = 0, \cdots, 3, \nonumber \\ Y_{-1} + Y_4 = \frac{1}{r}, \hspace{1cm} Y_{-1} - Y_4 = \frac{r^2 + y_{\mu}y^{\mu} }{r}, \\ -Y_{-1}^2 - Y_0^2 + Y_1^2 + Y_2^2 + Y_3^2 + Y_4^2 = -1. \nonumber \end{eqnarray} The elementary 1-cusp solution (\ref{els}) is expressed in terms of $Y_M$ as \begin{equation} Y_0^2 - Y_{-1}^2 = Y_1^2 - Y_4^2, \hspace{1cm} Y_2 = Y_3 = 0. \label{emy}\end{equation} Let us make an SO(2,4) transformation defined by \begin{equation} \left(\begin{array}{c}Y_0 \\ Y_{-1} \end{array} \right) = P \left(\begin{array}{c}Y_0' \\ Y_{-1}' \end{array} \right), \hspace{1cm} \left(\begin{array}{c}Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \end{array} \right) = Q \left(\begin{array}{c}Y_1' \\ Y_2' \\ Y_3' \\ Y_4' \end{array} \right) \end{equation} with \begin{equation} P = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & -1 \\ 1 & 1 \end{array} \right) \equiv P_1, \hspace{1cm} Q = \left( \begin{array}{cccc} \frac{1}{\sqrt{2}} & 0 & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} \end{array} \right) \equiv Q_1. \end{equation} This SO(2)$\times$SO(4) rotation of the elementary 1-cusp solution (\ref{emy}) generates a configuration \begin{equation} Y_0' Y_{-1}' = Y_1' Y_4', \end{equation} which is equivalently expressed in terms of the Poincare coordinates as \begin{equation} r = \sqrt{y_0^2 - y_1^2 - \frac{y_0 - y_1}{y_0 + y_1} } \equiv \sqrt{A}, \label{yrs}\end{equation} where the prime has been suppressed for convenience.
When the string configuration (\ref{yrs}) is substituted into (\ref{ste}), $\sqrt{D}$ is so compactly given by $i/|y_0 + y_1|$ that the right hand side (RHS) of (\ref{ste}) separates into two parts for the region $y_0 + y_1>0$ \begin{eqnarray} \frac{1}{iA^{3/2}} \left[ \left(y_1 + 2y_0 + \frac{y_1}{(y_0+y_1)^2} \right) + \left( y_0 + 2y_1 + \frac{y_0}{(y_0+y_1)^2} \right)\right] \nonumber \\ + \frac{3}{2iA^{5/2}} \left[ \left( -y_0(y_0+y_1) + \frac{y_1}{y_0+y_1} \right) \partial_0A + \left( -y_1(y_0+y_1) + \frac{y_0}{y_0+y_1} \right) \partial_1A \right], \end{eqnarray} whose second $1/A^{5/2}$ part becomes proportional to $1/A^{3/2}$ and then the equation of motion (\ref{ste}) is satisfied. For the region $y_0 + y_1<0$ the string equation is similarly satisfied. The solution (\ref{yrs}) shows that the surface ends on the lines specified by $y_0 = y_1, y_0 = -y_1 \pm 1$ where two cusps are located at $(y_0,y_1) = (\pm 1/2, \pm 1/2)$. The SO(2,4) transformations given by $P=1_2$, the $2\times 2$ unit matrix, $Q =Q_1$ and by $P = -i\sigma_2$, which interchanges $Y_0$ and $Y_{-1}$, $Q=Q_1$ produce the following configurations \begin{equation} {Y_0'}^2 - {Y_{-1}'}^2 = - 2Y_1' Y_4', \hspace{1cm} {Y_0'}^2 - {Y_{-1}'}^2 = 2Y_1' Y_4' \end{equation} respectively, which turn out to be \begin{eqnarray} r^2 &=& y_0^2 - ( y_1 + 1 )^2 \pm 2\sqrt{y_0^2 + (y_1 + 1)^2 - 1}, \label{rty} \\ r^2 &=& y_0^2 - ( y_1 - 1 )^2 \pm 2\sqrt{y_0^2 + (y_1 - 1)^2 - 1}. \label{try}\end{eqnarray} In order to show that the latter surface equation obeys the string equation (\ref{ste}) we parametrize $r$ as $r=\sqrt{y_0^2 - (y_1 -1)^2 + 2\sqrt{B} } \equiv \sqrt{A}$ for the plus sign, with $B \equiv y_0^2 + (y_1 - 1)^2 - 1$. In this case $\sqrt{D}$ is again purely imaginary, $\sqrt{D} = i/\sqrt{B}$. The RHS of (\ref{ste}) also has two parts \begin{equation} \frac{1}{iA^{3/2}\sqrt{B}}[2B + (y_1 -1)^2 + y_0^2 ] + \frac{3}{iA^{5/2}\sqrt{B}}[(y_1 -1)^2(1 - \sqrt{B})^2 - y_0^2(1 + \sqrt{B})^2 ], \end{equation} whose second part again becomes proportional to $1/A^{3/2}$ and combines with the first part to yield $2i/A^{3/2}\sqrt{B}$, which is just the left hand side (LHS) of (\ref{ste}). For the plus sign of (\ref{try}) at the boundary of $AdS_3$, $r = 0$, the surface ends on two lines $y_0 = -y_1 + \sqrt{2} + 1, y_0 = y_1 - (\sqrt{2} + 1)$ in $y_1 \ge 1+ 1/\sqrt{2}$ and two lines $y_0 = -y_1 - (\sqrt{2} - 1), y_0 = y_1 + \sqrt{2} - 1$ in $y_1 \le 1 - 1/\sqrt{2}$. The region outside the circle defined by $y_0^2 + (y_1 -1)^2 = 1$ is allowed and there are two cusps located at $(y_0,y_1) = (0,\sqrt{2} +1), (0,-\sqrt{2} +1)$, where one semi-infinite lightlike line and one finite lightlike line meet at each cusp for $y_0 \ge 0$, and the allowed region specified by $r^2 \ge 0$ is separated into a $y_0 \ge 0$ part and a $y_0 \le 0$ part. For the minus sign of (\ref{try}) the surface ends on two lines $y_0 = -y_1 + \sqrt{2} + 1, y_0 = y_1 + \sqrt{2} - 1$ in $y_0 \ge 1/\sqrt{2}$ and two lines $y_0 = -y_1 - (\sqrt{2} - 1), y_0 = y_1 - (\sqrt{2} + 1)$ in $y_0 \le -1/\sqrt{2}$. There are two cusps located at $(y_0,y_1) = (\pm\sqrt{2}, 1)$. The string surface ends on the two semi-infinite lightlike lines which emerge from the cusp $(\sqrt{2},1)$ for the region $y_0 \ge \sqrt{2}$. The former surface (\ref{rty}) is similarly shown to be a two-cusp Wilson loop solution.
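These verifications can also be automated. The following sketch (our own illustration, not part of the original analysis) uses sympy to confirm symbolically that the elementary solution (\ref{els}) gives $D=-1$, so that $\sqrt{D}=i$, and satisfies (\ref{ste}), and to spot-check the two-cusp surface (\ref{try}) with the plus sign numerically, since its nested radicals make full symbolic simplification slow:
\begin{verbatim}
import sympy as sp

y0, y1 = sp.symbols('y0 y1')

# Elementary 1-cusp solution: D = -1 and the equation of motion
# 2 sqrt(D)/r^3 = d0(d0 r/(r^2 sqrt(D))) - d1(d1 r/(r^2 sqrt(D))).
r = sp.sqrt(2) * sp.sqrt(y0**2 - y1**2)
print(sp.simplify(1 + sp.diff(r, y1)**2 - sp.diff(r, y0)**2))  # -> -1
sqrtD = sp.I
eom = (2 * sqrtD / r**3
       - sp.diff(sp.diff(r, y0) / (r**2 * sqrtD), y0)
       + sp.diff(sp.diff(r, y1) / (r**2 * sqrtD), y1))
print(sp.simplify(eom))                                        # -> 0

# Two-cusp surface (plus sign), with sqrt(D) = i/sqrt(B): evaluate the
# equation-of-motion residual at sample points with r^2 > 0 and B > 0.
B = y0**2 + (y1 - 1)**2 - 1
r2 = sp.sqrt(y0**2 - (y1 - 1)**2 + 2 * sp.sqrt(B))
sqrtD2 = sp.I / sp.sqrt(B)
res = (2 * sqrtD2 / r2**3
       - sp.diff(sp.diff(r2, y0) / (r2**2 * sqrtD2), y0)
       + sp.diff(sp.diff(r2, y1) / (r2**2 * sqrtD2), y1))
for p in [(3, 1), (sp.Rational(5, 2), sp.Rational(1, 2)), (4, 2)]:
    print(res.subs({y0: p[0], y1: p[1]}).evalf(6, chop=True))  # -> 0
\end{verbatim}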
If the other conformal transformations are performed by $P = P_1$, $Q = 1_4$ (the $4\times 4$ unit matrix) and by $P = P_1$, \begin{equation} Q = \left( \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right) \equiv Q_2, \end{equation} which interchanges $Y_1$ and $Y_4$, we have two curves \begin{equation} -2Y_0' Y_{-1}' = {Y_1'}^2 - {Y_4'}^2, \hspace{1cm} 2Y_0' Y_{-1}' = {Y_1'}^2 - {Y_4'}^2, \end{equation} which are expressed in terms of the Poincare coordinates as \begin{eqnarray} r^2 &=& (y_0 + 1)^2 - y_1^2 \pm 2\sqrt{(y_0 + 1)^2 + y_1^2 - 1}, \label{rhy} \\ r^2 &=& (y_0 - 1)^2 - y_1^2 \pm 2\sqrt{(y_0 - 1)^2 + y_1^2 - 1}, \label{hry}\end{eqnarray} respectively. These expressions are compared with (\ref{rty}) and (\ref{try}) under the exchange of $y_0$ and $y_1$. The two curves (\ref{rhy}) and (\ref{hry}) also obey the string eq. (\ref{ste}) in the same way as (\ref{try}). For the plus sign of (\ref{rhy}) the surface ends on two lines $y_0 = -y_1 + \sqrt{2} - 1, y_0 = y_1 - (\sqrt{2} + 1)$ in $y_1 \ge 1/\sqrt{2}$ and two lines $y_0 = -y_1 - (\sqrt{2} + 1), y_0 = y_1 + \sqrt{2} - 1$ in $y_1 \le -1/\sqrt{2}$ which meet at two cusps $(-1,\pm\sqrt{2})$ respectively. For the minus sign of (\ref{rhy}) the surface ends on two lines $y_0 = -y_1 + \sqrt{2} - 1, y_0 = y_1 + \sqrt{2} - 1$ in $y_0 \ge -1 +1/\sqrt{2}$ and two lines $y_0 = -y_1 - (\sqrt{2} + 1), y_0 = y_1 - (\sqrt{2} + 1)$ in $y_0 \le -1 -1/\sqrt{2}$ which meet at two cusps $(\sqrt{2}-1,0)$ and $(-\sqrt{2}-1,0)$ respectively. Similarly for the plus sign of (\ref{hry}) two cusps are located at $(1, \pm\sqrt{2})$ and for the minus sign there are two cusps $(\pm\sqrt{2}+1,0)$. The remaining SO(2,4) isometry generated by $P = 1_2$ and $Q = Q_2$ yields a configuration ${Y_0'}^2 - {Y_{-1}'}^2 = {Y_4'}^2 - {Y_1'}^2$ with a slight sign difference from the starting solution (\ref{emy}). In the Poincare coordinates it is given by \begin{equation} r^2 = y_0^2 - y_1^2 \pm \sqrt{2( y_0^2 + y_1^2 ) - 1 }, \label{fry}\end{equation} whose surface ends on four lines $y_0 =y_1 \pm 1, y_0 = -y_1 \pm 1$ which meet at two cusps $(0, \pm1)$ for the plus sign and at two cusps $(\pm1,0)$ for the minus sign. The string surface (\ref{fry}) can also be confirmed to obey the string equation (\ref{ste}) in a similar way to the solution (\ref{try}). Now we consider the conformal SO(2,4) transformations that generate a non-zero value of $y_2$. First we set $P = P_1$ and \begin{equation} Q = \left( \begin{array}{cccc} \cos\alpha & -\sin\alpha & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \sin\alpha & \cos\alpha & 0 & 0 \end{array} \right) \equiv Q_3(\alpha) \end{equation} to obtain \begin{equation} Y_0' Y_{-1}' = - \frac{\cos 2\alpha}{2}({Y_1'}^2 - {Y_2'}^2) + \sin 2\alpha Y_1' Y_2', \hspace{1cm} Y_4' = 0, \end{equation} which gives a surface \begin{eqnarray} r &=& \sqrt{1 + y_0^2 - y_1^2 - y_2^2} \equiv \sqrt{A}, \nonumber \\ y_0 &=& - \frac{\cos 2\alpha}{2}(y_1^2 - y_2^2) + \sin 2\alpha y_1y_2. \label{cs}\end{eqnarray} At $\alpha = \pi/4$ this surface reduces to \begin{equation} r = \sqrt{(1 - y_1^2)(1 - y_2^2)}, \hspace{1cm} y_0 = y_1y_2, \label{fcu}\end{equation} which shows that the Wilson loop at the boundary is composed of four cusps and four lightlike lines and takes a square form for its projection on the $(y_1, y_2)$ plane \cite{AM}.
We choose a static gauge in which $(y_1, y_2)$ are the worldsheet coordinates of the Euclidean open string surface, so that the Nambu-Goto action is expressed as \begin{eqnarray} S &=& \frac{R^2}{2\pi}\int dy_1dy_2 \frac{\sqrt{D}}{r^2}, \nonumber \\ D &=& 1 + (\partial_ir)^2 - (\partial_iy_0)^2 - (\partial_1r \partial_2y_0 - \partial_2r\partial_1y_0)^2. \label{dr}\end{eqnarray} The equation of motion for $y_0$ is given by \begin{equation} \partial_i\left(\frac{\partial_iy_0}{r^2\sqrt{D}}\right) = \partial_1\left( \frac{\partial_2rC}{r^2\sqrt{D}}\right) - \partial_2\left(\frac{\partial_1rC} {r^2\sqrt{D}}\right), \hspace{1cm} C \equiv \partial_1r\partial_2y_0 - \partial_2r\partial_1y_0 \label{rdy}\end{equation} and the equation of motion for $r$ takes the form \begin{equation} \frac{2}{r^3}\sqrt{D} = -\partial_i\left(\frac{\partial_ir}{r^2\sqrt{D}}\right) + \partial_1\left(\frac{\partial_2y_0C}{r^2\sqrt{D}}\right) - \partial_2\left( \frac{\partial_1y_0C}{r^2\sqrt{D}}\right). \label{cdr}\end{equation} The insertion of the expression (\ref{cs}) into $C$ in (\ref{rdy}) and $D$ in (\ref{dr}) yields $C= -[2\cos2\alpha y_1y_2 + \sin2\alpha (y_1^2 - y_2^2)]/\sqrt{A}$ and $D = 1$. When the surface (\ref{cs}) is substituted into the string equation (\ref{rdy}), its RHS can be rewritten as $(\partial_2r/r)\partial_1(C/r\sqrt{D}) - (\partial_1r/r)\partial_2(C/r\sqrt{D})$, which is evaluated as $-2y_0(y_1^2 + y_2^2 - 2)/A^2$ and equals the LHS of (\ref{rdy}). The RHS of (\ref{cdr}) is expressed as a sum of two parts \begin{eqnarray} \frac{1}{A^{3/2}} [2- 3(y_1^2 + y_2^2)] + \frac{3}{A^{5/2}}[ \left( y_0( -\cos2\alpha y_1 + \sin2\alpha y_2 ) - y_1 \right)^2 \nonumber \\ + \left( y_0( \cos2\alpha y_2 + \sin2\alpha y_1 ) - y_2 \right)^2 - (2\cos2\alpha y_1y_2 + \sin2\alpha (y_1^2 - y_2^2) )^2 ], \end{eqnarray} whose second part becomes proportional to $1/A^{3/2}$ and combines with the first part into $2/A^{3/2}$, which is the LHS of (\ref{cdr}). For $\alpha = \pi/2$ (or $\alpha = 0$) the string surface (\ref{cs}) is given by \begin{eqnarray} y_0 &=& \frac{y_1^2 - y_2^2}{2}, \hspace{1cm} \left( y_0 = -\frac{y_1^2 - y_2^2}{2} \right), \nonumber \\ r &=& \sqrt{\left(1 + \frac{y_1 + y_2}{\sqrt{2}}\right)\left(1 - \frac{y_1 + y_2}{\sqrt{2}}\right)\left(1 + \frac{y_1 - y_2}{\sqrt{2}} \right)\left(1 - \frac{y_1 - y_2}{\sqrt{2}}\right) }, \end{eqnarray} from which we see that the Wilson loop at the boundary $r = 0$ has a square-form projection on the $(y_1,y_2)$ plane characterized by the four lines $y_2 = \pm y_1 + \sqrt{2}, y_2 = \pm y_1 - \sqrt{2}$, and contains four cusps located at $(y_0,y_1,y_2) = (1,\pm\sqrt{2},0), (-1,0,\pm\sqrt{2})$. This square in the $(y_1,y_2)$ plane is produced by a $\pi/4$ rotation of the square of the 4-cusp solution (\ref{fcu}). Let us perform the SO(2,4) transformations specified by $P = 1_2, Q = Q_3(\pi/4)$ and $P = -i\sigma_2, Q = Q_3(\pi/4)$ to derive two string configurations \begin{equation} {Y_0'}^2 - {Y_{-1}'}^2 = -2Y_1' Y_2', \hspace{1cm} {Y_0'}^2 - {Y_{-1}'}^2 = 2Y_1' Y_2' \end{equation} with $Y_4'=0$, which are further represented by \begin{eqnarray} y_0 &=& \sqrt{1 - 2y_1y_2}, \hspace{1cm} r = \sqrt{2 - (y_1 + y_2)^2}, \label{bay} \\ y_0 &=& \sqrt{1 + 2y_1y_2}\equiv \sqrt{B}, \hspace{1cm} r = \sqrt{2 - (y_1 - y_2)^2} \equiv \sqrt{A}. \label{ba}\end{eqnarray} In order to see how the configuration (\ref{ba}), for instance, satisfies the string equations we calculate $C$ and $\sqrt{D}$, which take the compact forms $C = -(y_1^2 - y_2^2)/\sqrt{AB}, \sqrt{D} =1/\sqrt{B}$.
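Verifications of this kind are mechanical, so they lend themselves to an automated symbolic check. The following sympy sketch is purely illustrative (the helper name \texttt{check\_surface} is ours, and the final simplifications may take a moment): it encodes the static-gauge data (\ref{dr}) and the two string equations (\ref{rdy}) and (\ref{cdr}), and tests them on the 4-cusp surface (\ref{fcu}) and the 2-cusp surface (\ref{ba}). \begin{verbatim}
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)

def check_surface(y0, r):
    # static-gauge data of (dr) and the string equations (rdy), (cdr);
    # returns a pair that simplifies to (0, 0) when both equations hold
    dy0 = [sp.diff(y0, w) for w in (y1, y2)]
    dr  = [sp.diff(r,  w) for w in (y1, y2)]
    C  = dr[0]*dy0[1] - dr[1]*dy0[0]
    D  = 1 + dr[0]**2 + dr[1]**2 - dy0[0]**2 - dy0[1]**2 - C**2
    sD = sp.sqrt(sp.simplify(D))
    eq_y0 = (sum(sp.diff(dy0[i]/(r**2*sD), w) for i, w in enumerate((y1, y2)))
             - sp.diff(dr[1]*C/(r**2*sD), y1) + sp.diff(dr[0]*C/(r**2*sD), y2))
    eq_r  = (2*sD/r**3
             + sum(sp.diff(dr[i]/(r**2*sD), w) for i, w in enumerate((y1, y2)))
             - sp.diff(dy0[1]*C/(r**2*sD), y1) + sp.diff(dy0[0]*C/(r**2*sD), y2))
    return sp.simplify(eq_y0), sp.simplify(eq_r)

# 4-cusp surface (fcu), for which D = 1
print(check_surface(y1*y2, sp.sqrt((1 - y1**2)*(1 - y2**2))))          # (0, 0)
# 2-cusp surface (ba), for which sqrt(D) = 1/sqrt(B)
print(check_surface(sp.sqrt(1 + 2*y1*y2), sp.sqrt(2 - (y1 - y2)**2)))  # (0, 0)
\end{verbatim} With $C$ and $\sqrt{D}$ at hand, the manual verification of (\ref{ba}) proceeds as follows.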
The RHS of (\ref{rdy}) is evaluated as \begin{equation} -\partial_1\frac{(y_1 - y_2)(y_1^2 - y_2^2)}{A^2} - \partial_2 \frac{(y_1 - y_2)(y_1^2 - y_2^2)}{A^2} = - \frac{2(y_1 - y_2)^2}{A^2}, \end{equation} which agrees with the LHS of (\ref{rdy}). The RHS of (\ref{cdr}) can be separated into a $1/A^{3/2}$ part and a $1/A^{5/2}$ part as \begin{equation} \frac{1}{A^{3/2}\sqrt{B}}[2B- (y_1 - y_2)^2 - 2(y_1^2 + y_2^2)] + \frac{1}{A^{5/2}\sqrt{B}}[ 6(y_1 - y_2)^2B - 3(y_1^2 - y_2^2)^2], \label{ab}\end{equation} whose second part becomes $3(y_1 - y_2)^2/A^{3/2}\sqrt{B}$, which makes (\ref{ab}) equal to $2/A^{3/2}\sqrt{B}$, the LHS of (\ref{cdr}). The projection of the surface (\ref{ba}) at the boundary of $AdS_5$ on the $(y_1,y_2)$ plane is composed of two separated parallel lines, $y_2 = y_1 + \sqrt{2}$ on which $y_0 = |\sqrt{2}y_1 + 1|$, and $y_2 = y_1 - \sqrt{2}$ on which $y_0 = |\sqrt{2}y_1 - 1|$. In the region defined by $A>0$, that is, the region between the two parallel lines, $B$ also takes a positive value. From one cusp located at $(y_0,y_1,y_2) = (0,-1/\sqrt{2},1/\sqrt{2})$ two semi-infinite lightlike lines emerge on the plane specified by $y_2 = y_1 + \sqrt{2}$, while on the plane $y_2 = y_1 - \sqrt{2}$ two semi-infinite lightlike lines intersect transversely at the other cusp located at $(0,1/\sqrt{2},-1/\sqrt{2})$. Thus the Wilson loop is composed of two separated parts, each of which is represented by two semi-infinite lightlike lines with a cusp. Therefore the solution (\ref{ba}), as well as (\ref{bay}), is regarded as a two-cusp Wilson loop solution. There remain two conformal transformations defined by $P = 1_2, Q = Q_3(\pi/2)$ and $P = 1_2, Q = Q_3(0)$, which produce the following configurations \begin{eqnarray} {Y_0'}^2 - {Y_{-1}'}^2 &=& {Y_2'}^2 - {Y_1'}^2, \\ {Y_0'}^2 - {Y_{-1}'}^2 &=& {Y_1'}^2 - {Y_2'}^2 \label{yyo}\end{eqnarray} with $Y_4'=0$, which are respectively written as \begin{eqnarray} y_0 &=& \sqrt{1 - y_1^2 + y_2^2}, \hspace{1cm} r = \sqrt{2 - 2y_1^2}, \label{yrb} \\ y_0 &=& \sqrt{1 + y_1^2 - y_2^2}\equiv \sqrt{B}, \hspace{1cm} r = \sqrt{2 - 2y_2^2} \equiv \sqrt{A}. \label{yra}\end{eqnarray} For the surface (\ref{yra}), $C$ and $\sqrt{D}$ are evaluated as $C = 2y_1y_2/\sqrt{AB}, \sqrt{D} =1/\sqrt{B}$. In this case the demonstration of (\ref{rdy}) is simpler than in the above cases owing to $\partial_1r =0$. The eq. (\ref{cdr}) is also confirmed to hold in a way similar to (\ref{ab}). In ref. \cite{KRT} the solution (\ref{yyo}) has been analyzed in the string sigma-model action in the conformal gauge, and the leading 1-loop correction to it has been computed together with the 1-loop correction to the starting 1-cusp solution (\ref{els}); the 2-loop correction to the latter solution has been further studied in \cite{RT}. In ref. \cite{KRT} the solution of (\ref{yyo}) was presented as \begin{eqnarray} Y_0 &=& \frac{1}{\sqrt{2}} \cosh (\alpha \sigma + \beta \tau), \hspace{1cm} Y_{-1} = \frac{1}{\sqrt{2}} \cosh (\alpha \tau - \beta\sigma), \nonumber \\ Y_1 &=& \frac{1}{\sqrt{2}} \sinh (\alpha \sigma + \beta \tau), \hspace{1cm} Y_2 = \frac{1}{\sqrt{2}} \sinh (\alpha \tau - \beta\sigma), \hspace{1cm} Y_3 = Y_4 =0, \label{coh}\end{eqnarray} where $\alpha^2 + \beta^2 =2$ and the Euclidean worldsheet coordinates $(\tau,\sigma)$ take values in the infinite interval.
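That the parametrization (\ref{coh}) lies on the quadric (\ref{yyo}) and reproduces the Poincare form (\ref{yra}) can be confirmed directly, using the standard Poincare-coordinate relations $1/r = Y_{-1} + Y_4$ and $y_\mu = r Y_\mu$ employed throughout; a short sympy sketch (illustrative only): \begin{verbatim}
import sympy as sp

tau, sig, al, be = sp.symbols('tau sigma alpha beta', real=True)

Y0  = sp.cosh(al*sig + be*tau)/sp.sqrt(2)
Ym1 = sp.cosh(al*tau - be*sig)/sp.sqrt(2)
Y1  = sp.sinh(al*sig + be*tau)/sp.sqrt(2)
Y2  = sp.sinh(al*tau - be*sig)/sp.sqrt(2)

# (coh) lies on the quadric (yyo): Y0^2 - Y_{-1}^2 = Y1^2 - Y2^2
print(sp.simplify(Y0**2 - Ym1**2 - (Y1**2 - Y2**2)))   # -> 0

r = 1/Ym1                        # 1/r = Y_{-1} + Y_4 with Y_4 = 0
y0, y1, y2 = r*Y0, r*Y1, r*Y2
print(sp.simplify(y0**2 - (1 + y1**2 - y2**2)))        # -> 0, eq. (yra)
print(sp.simplify(r**2 - (2 - 2*y2**2)))               # -> 0
\end{verbatim}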
The parametrization (\ref{coh}) is equivalently transformed in terms of the Poincare coordinates into \begin{eqnarray} r &=& \frac{\sqrt{2}}{\cosh(\alpha \tau - \beta\sigma)}, \hspace{1cm} y_0 = \frac{\cosh (\alpha\sigma + \beta \tau)}{\cosh(\alpha\tau - \beta\sigma)}, \nonumber \\ y_1 &=& \frac{\sinh(\alpha\sigma + \beta\tau)}{\cosh(\alpha\tau - \beta\sigma)}, \hspace{1cm} y_2 = \tanh (\alpha\tau - \beta\sigma), \end{eqnarray} which indeed satisfies eq. (\ref{yra}). The projection of the surface (\ref{yra}) at the boundary of $AdS_5$, $r=0$, on the $(y_1,y_2)$ plane is composed of two separated parallel lines, $y_2 = 1$ and $y_2 = -1$, on which $y_0$ is specified by the same equation $y_0 = |y_1|$. From a different viewpoint, the projection of the surface (\ref{yra}) on the $(y_0,y_1)$ plane at a fixed value of $y_2$ in $-1 < y_2 < 1$ is described by a hyperbolic curve $y_0 = \sqrt{y_1^2 +(1 -y_2^2)}$, while that projection at the boundary value $y_2 =1$ or $y_2 =-1$ becomes two semi-infinite lightlike lines intersecting transversely at a cusp located at $(y_0,y_1,y_2) = (0,0,1)$ or $(0,0,-1)$. Thus the solution (\ref{yra}), as well as (\ref{yrb}), has two cusps in the same way as (\ref{ba}) and (\ref{bay}). \section{Conformal boosts of the 4-cusp Wilson loop solution} We turn to the conformal boost transformations of the 4-cusp solution $Y_0Y_{-1} = Y_1Y_2, Y_3=Y_4=0$ (\ref{fcu}) and see whether the transformed configurations are solutions of the string equations for the Nambu-Goto action. The boost in the (-1,4) plane given by \begin{equation} Y_4 = \gamma(Y_4' - vY_{-1}'), \hspace{1cm} Y_{-1} = \gamma(Y_{-1}' - vY_4') \label{vb}\end{equation} with $\gamma = 1/\sqrt{1-v^2}$ is performed to yield $Y_4' - vY_{-1}'=0$ and $\gamma Y_0'(Y_{-1}' - vY_4')=Y_1'Y_2'$, which are represented in terms of the Poincare coordinates as \begin{equation} y_0' = \gamma(1+v)y_1'y_2', \hspace{1cm} r' = \sqrt{ \frac{1-v}{1+v} + {y_0'}^2 - {y_1'}^2 - {y_2'}^2 } \equiv \sqrt{A}. \label{ypr}\end{equation} Alternatively the boost (\ref{vb}) can be described by \begin{equation} r' = \sqrt{\frac{1-v}{1+v}}r, \hspace{1cm} y_1' = \sqrt{\frac{1-v}{1+v}} y_1, \hspace{1cm} y_2' = \sqrt{\frac{1-v}{1+v}}y_2 \label{rp}\end{equation} owing to the relation $1/r' = Y_{-1}' + Y_4' = \gamma (1+v)/r$. Below the prime will be omitted. For the eq. (\ref{rdy}), $C$ is given by $C=-\gamma(1+v)(y_1^2 -y_2^2)/\sqrt{A}$, while $D$ takes the simple value $D=1$. The RHS of (\ref{rdy}) is computed as \begin{eqnarray} -\frac{2\gamma(1+v)}{A^3}\Biggl[A\left( \gamma^2(1+v)^2y_1y_2 (y_1^2 + y_2^2) - 2y_1y_2 \right) \nonumber \\ + (y_1^2 - y_2^2)\left( -(\gamma^2(1+v)^2y_1^2 -1 )y_2\partial_1A + (\gamma^2(1+v)^2y_2^2 -1 )y_1\partial_2A \right) \Biggr], \end{eqnarray} whose $1/A^3$ part vanishes owing to the symmetry of $A$ under the exchange of $y_1$ and $y_2$. The $1/A^2$ part equals the LHS of (\ref{rdy}). The RHS of the other string equation (\ref{cdr}) is calculated as \begin{equation} \frac{1}{A^{3/2}}[2- 3\gamma^2(1+v)^2(y_1^2 + y_2^2)] + \frac{3(y_1^2 + y_2^2)}{A^{5/2}}[\gamma^4(1+v)^4y_1^2y_2^2 + 1 - \gamma^2(1+v)^2(y_1^2 + y_2^2)], \label{var}\end{equation} whose second $1/A^{5/2}$ part simplifies to $3(y_1^2 + y_2^2)(1+v)/(A^{3/2}(1-v))$, so that (\ref{var}) becomes $2/A^{3/2}$, which is just the LHS of (\ref{cdr}).
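The symbolic checker sketched earlier applies here as well: with $k \equiv \gamma(1+v) = \sqrt{(1+v)/(1-v)}$ the boosted surface (\ref{ypr}) reads $y_0 = k y_1 y_2$, $r = \sqrt{1/k^2 + y_0^2 - y_1^2 - y_2^2}$, and, assuming the \texttt{check\_surface} sketch above is in scope, \begin{verbatim}
k = sp.Symbol('k', positive=True)    # k = gamma*(1 + v)
print(check_surface(k*y1*y2,
                    sp.sqrt(1/k**2 + k**2*y1**2*y2**2 - y1**2 - y2**2)))
# -> (0, 0), with D = 1 on this surface
\end{verbatim}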
Since $r'$ of (\ref{ypr}) can be represented in the primed coordinates as \begin{equation} r' = \sqrt{\frac{1 + v}{1 - v}} \sqrt{\left( \frac{1-v}{1+v} - {y_1'}^2 \right)\left( \frac{1-v}{1+v} - {y_2'}^2\right) }, \label{rvy}\end{equation} the projection of the string surface on the $(y_1', y_2')$ plane at the $AdS_5$ boundary is a square with side length $2\sqrt{(1-v)/(1+v)} \equiv 2a$, which should be compared with the square with side length 2 for the starting basic solution (\ref{fcu}). Following the IR dimensional regularization of ref. \cite{AM} we define the regularized Nambu-Goto action \begin{equation} S = \frac{\sqrt{\lambda_d c_d}}{2\pi} \int dy_1' dy_2' \frac{\sqrt{D}}{{r'}^{2+ \epsilon}} \label{acd}\end{equation} with $d = 4-2\epsilon, c_d= 2^{4\epsilon}\pi^{3\epsilon}\Gamma(2+\epsilon), \lambda_d = \lambda\mu^{2\epsilon}/(4\pi e^{-\gamma})^{\epsilon}, \gamma = -\Gamma'(1)$, where $\lambda_d$ is expressed through the IR cutoff scale $\mu$ and the dimensionless four-dimensional coupling $\lambda$ so as to match the field theory side. Substituting the solution (\ref{rvy}) into the action (\ref{acd}) and making the variable transformation (\ref{rp}) to carry out the integral over the inside of the square, we have \begin{equation} -iS = B_{\epsilon} \int_{-1}^1 dy_1dy_2 \frac{1}{[(1-y_1^2)(1-y_2^2)]^{1 + \frac{\epsilon}{2} }} = B_{\epsilon} \frac{\pi\Gamma\left(-\frac{\epsilon}{2}\right)^2} {\Gamma\left(\frac{1-\epsilon}{2}\right)^2 } \label{sep}\end{equation} with $B_{\epsilon} = \sqrt{\lambda_d c_d}/2\pi a^{\epsilon}$, where the factor $i$ is due to the Euclidean worldsheet and a double pole appears as $\epsilon \, (<0) \rightarrow 0$. From the structure of the action (\ref{dr}) it follows that if $r$ and $y_{\mu}$ are solutions of the string equations, then the rescaled configuration specified by $ar$ and $ay_{\mu}$ with an arbitrary constant $a$ also satisfies the string equations. The solution (\ref{rp}) is an example of such a rescaling solution. Let us perform the following boost in the (0,4) plane for the 4-cusp solution (\ref{fcu}) \begin{equation} Y_4 = \gamma(Y_4' - vY_0'), \hspace{1cm} Y_0 = \gamma(Y_0' - vY_4'). \label{ybt}\end{equation} We obtain a string surface described by \begin{equation} r' = \sqrt{ 1 - \frac{2b}{\gamma}y_0' + {y_0'}^2 - {y_1'}^2 - {y_2'}^2 } \equiv \sqrt{A}, \hspace{1cm} b = \gamma v \label{gab}\end{equation} and $v{y_0'}^2 - y_0' + \gamma y_1'y_2' =0$, for which we choose \begin{equation} y_0' = \frac{\gamma}{2b} ( 1- \sqrt{1- 4by_1'y_2'} ) \equiv \frac{\gamma}{2b} ( 1- \sqrt{B} ), \label{bga}\end{equation} which reduces to the starting solution (\ref{fcu}) in the limit $v \rightarrow 0$. We consider the $b < 1$ case. Below the prime will be suppressed. For the eq. (\ref{rdy}), $C$ is evaluated as $C = -\gamma(y_1^2 - y_2^2)/\sqrt{AB}$ from the involved expressions (\ref{gab}) and (\ref{bga}), while $D$ in (\ref{dr}) has many complicated terms but can be cast into the simple form $\sqrt{D} = 1/\sqrt{B}$ by means of $\gamma^2 = 1+ b^2$.
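These compact forms of $C$ and $\sqrt{D}$ can again be confirmed symbolically; a sketch continuing the sympy session above (the positivity assumption on $b$ serves only to help the simplification): \begin{verbatim}
b = sp.Symbol('b', positive=True)
gam = sp.sqrt(1 + b**2)                  # gamma^2 = 1 + b^2
B  = 1 - 4*b*y1*y2
y0 = gam/(2*b)*(1 - sp.sqrt(B))                          # (bga)
r  = sp.sqrt(1 - 2*b/gam*y0 + y0**2 - y1**2 - y2**2)     # (gab)
dy0 = [sp.diff(y0, w) for w in (y1, y2)]
dr  = [sp.diff(r,  w) for w in (y1, y2)]
C = dr[0]*dy0[1] - dr[1]*dy0[0]
D = 1 + dr[0]**2 + dr[1]**2 - dy0[0]**2 - dy0[1]**2 - C**2
print(sp.simplify(D*B - 1))                               # -> 0
print(sp.simplify(C*r*sp.sqrt(B) + gam*(y1**2 - y2**2)))  # -> 0
\end{verbatim}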
The substitution of these expressions into the RHS of (\ref{rdy}) leads to \begin{eqnarray} &\frac{\gamma}{A^2}\left[ 2b\left( 1- \frac{\gamma^2}{2b^2} \right) \frac{y_1^2 + y_2^2}{\sqrt{B}} + 4y_1y_2 + \frac{\gamma^2}{b} (y_1^2 + y_2^2) \right]& \\ &+ \frac{2\gamma}{A^3}(y_1^2 - y_2^2) \left[ \left( \left(1- \frac{\gamma^2}{2b^2} \right) \frac{by_2}{\sqrt{B}} + y_1 + \frac{\gamma^2}{2b}y_2 \right)\partial_2A - \left( \left(1- \frac{\gamma^2}{2b^2} \right) \frac{by_1}{\sqrt{B}} + y_2 + \frac{\gamma^2}{2b}y_1 \right)\partial_1A \right],& \nonumber \end{eqnarray} whose second $1/A^3$ part vanishes owing to the symmetry of $A$ under the exchange of $y_1$ and $y_2$, and the remaining first part coincides with the LHS of (\ref{rdy}). In order to prove the eq. (\ref{cdr}) we focus on the $1/A^{5/2}$ part of its RHS \begin{eqnarray} \frac{3}{A^{5/2}\sqrt{B}}\Biggl[ \left( 1- \frac{\gamma^2}{2b^2} \right)^2 b^2(y_1^2 + y_2^2) +2b\sqrt{B}\left( 1- \frac{\gamma^2}{2b^2} \right) \left(2y_1y_2 + \frac{\gamma^2}{2b}(y_1^2 + y_2^2) \right) \nonumber \\ + (1 - 4by_1y_2 )\left( \frac{2\gamma^2}{b}y_1y_2 + \left( 1 + \frac{\gamma^4}{4b^2} \right)(y_1^2 + y_2^2) \right) - \gamma^2(y_1^2 - y_2^2)^2 \Biggr], \end{eqnarray} which turns out to be $6b(2y_1y_2 + \gamma^2(y_1^2 + y_2^2)/2b )/A^{3/2} \sqrt{B}$; this cancels against the corresponding terms in the remaining $1/A^{3/2}$ part so as to leave $2/A^{3/2}\sqrt{B}$, that is, the LHS of (\ref{cdr}). The insertion of (\ref{bga}) into (\ref{gab}) with $r'=0$ leads to the projection of the Wilson loop on the $(y_1',y_2')$ plane, which is expressed as \begin{equation} ({y_1'}^2 + {y_2'}^2)^2 - \frac{\gamma^2}{b^2}({y_1'}^2 + {y_2'}^2) (1 - 2by_1'y_2' ) + \frac{\gamma^4}{b^2}{y_1'}^2{y_2'}^2 - \frac{4}{b}y_1'y_2' + \frac{1}{b^2} = 0. \end{equation} This equation can be factorized into \begin{equation} \left[ \left( y_2' + \frac{y_1'}{b} \right)^2 - \frac{1}{b^2} \right] \left[ ( y_2' + by_1')^2 - 1 \right] = 0, \label{fac}\end{equation} which gives the four lines $y_2' + y_1'/b = \pm 1/b, y_2' + by_1' = \pm1$ that form a rhombus on the $(y_1',y_2')$ plane. The boost (\ref{ybt}) gives $1/r' = Y_{-1}' + Y_4' = Y_{-1} + \gamma(Y_4 + vY_0)$ and $y_0'/r' = \gamma(Y_0 + vY_4)$, which become $1/r' = (1 + by_0)/r$ and $y_0' = \gamma y_0r'/r$ through the starting solution (\ref{fcu}). Thus the boost is represented in terms of the Poincare coordinates as \begin{equation} y_0' = \frac{\gamma y_1y_2}{1 + by_1y_2}, \hspace{1cm} y_1' = \frac{y_1}{1 + by_1y_2}, \hspace{1cm} y_2' = \frac{y_2}{1 + by_1y_2}. \label{gap}\end{equation} The eq. (\ref{bga}) is changed into the first equation in (\ref{gap}) when the second and third equations in (\ref{gap}) are substituted into (\ref{bga}), while the eq. (\ref{gab}) becomes \begin{equation} r' = \frac{\sqrt{(1-y_1^2)(1-y_2^2)} }{1 + by_1y_2}, \label{rsq}\end{equation} which vanishes at $y_1 = \pm 1$ and $y_2 = \pm1$. The four cusps of the Wilson loop are determined from (\ref{gap}) with $y_1 = \pm 1, y_2 = \pm1$ to be located, in the coordinates $(y_0',y_1',y_2')$, at \begin{eqnarray} \left( -\frac{\sqrt{1 + b^2}}{1- b}, -\frac{1}{1- b}, \frac{1}{1- b} \right), \hspace{1cm} \left( \frac{\sqrt{1 + b^2}}{1+ b}, \frac{1}{1+ b}, \frac{1}{1 + b} \right), \nonumber \\ \left( \frac{\sqrt{1 + b^2}}{1 + b}, -\frac{1}{1 + b}, -\frac{1}{1 + b} \right), \hspace{1cm} \left(-\frac{\sqrt{1 + b^2}}{1- b}, \frac{1}{1- b}, -\frac{1}{1 - b} \right).
\label{pos}\end{eqnarray} We again see that the projection of the Wilson loop on the $(y_1',y_2')$ plane is a rhombus. Alternatively the cusp positions (\ref{pos}) are determined from the intersections of the four lines defined by (\ref{fac}). The four lightlike segments between the adjacent cusps characterize the four massless gluon momenta, so that the parameter $b$ is related to the ratio of the Mandelstam variables $s, t$ as $s/t = (1 + b)^2/(1 - b)^2$. The classical Nambu-Goto action (\ref{acd}) evaluated on this 4-cusp Wilson loop solution is represented through (\ref{gap}) and (\ref{rsq}) as \begin{eqnarray} -iS &=& \frac{\sqrt{\lambda_d c_d}}{2\pi} \int_{-1}^1 dy_1dy_2 \frac{1 - by_1y_2}{(1 + by_1y_2)^3} \frac{1}{r'^{2+\epsilon} \sqrt{B(y_i')}} \nonumber \\ &=& \frac{\sqrt{\lambda_d c_d}}{2\pi} \int_{-1}^1 dy_1dy_2 \frac{(1 + by_1y_2)^{\epsilon}}{[(1 - y_1^2)(1 - y_2^2)]^{1+\frac{\epsilon}{2}} }. \end{eqnarray} By expanding the integrand as a power series in $b$ we evaluate the integral over $y_1$ and $y_2$ as \begin{equation} \frac{\sqrt{\lambda_d c_d}}{2\pi} \frac{\pi \Gamma\left( -\frac{\epsilon}{2}\right)^2}{\Gamma\left(\frac{1-\epsilon}{2}\right)^2} F\left( \frac{1}{2}, -\frac{\epsilon}{2}, \frac{1-\epsilon}{2}; b^2 \right). \label{fgp}\end{equation} Here, in view of (\ref{gab}) and (\ref{bga}), we present a string configuration containing an arbitrary constant $a$ \begin{eqnarray} r' &=& \sqrt{ a^2 - \frac{2ab}{\gamma}y_0' + {y_0'}^2 - {y_1'}^2 - {y_2'}^2 } \equiv \sqrt{A}, \hspace{1cm} b = \gamma v, \label{ary} \\ y_0' &=& \frac{\gamma}{2b} ( a - \sqrt{a^2- 4by_1'y_2'} ) \equiv \frac{\gamma}{2b} ( a- \sqrt{B} ), \label{abb}\end{eqnarray} whose $v \rightarrow 0$ limit reduces to (\ref{ypr}) with $\sqrt{(1-v)/(1+v)}$ replaced by $a$. This string surface is confirmed to satisfy the string equations with $C=-\gamma(y_1^2 - y_2^2)/\sqrt{AB}$ and $\sqrt{D} = a/\sqrt{B}$. This solution is just a rescaling solution for (\ref{gap}) and (\ref{rsq}), as expressed by \begin{eqnarray} y_0' &=& \frac{a\gamma y_1y_2}{1 + by_1y_2}, \hspace{1cm} y_1' = \frac{ay_1}{1 + by_1y_2}, \hspace{1cm} y_2' = \frac{ay_2}{1 + by_1y_2}, \label{ayb} \\ r' &=& \frac{a\sqrt{(1-y_1^2)(1-y_2^2)} }{1 + by_1y_2}. \label{ray}\end{eqnarray} The insertion of (\ref{ayb}) into (\ref{ary}) leads to (\ref{ray}), and the eq. (\ref{abb}) becomes the first equation in (\ref{ayb}) when the second and third equations are substituted. The classical action of this rescaling solution is similarly evaluated as \begin{equation} -iS = \frac{\sqrt{\lambda_d c_d}}{2\pi} \int_{-1}^1 dy_1dy_2 \frac{a^2(1 - by_1y_2)}{(1 + by_1y_2)^3} \frac{a}{r'^{2+\epsilon} \sqrt{B(y_i')}}, \end{equation} which gives (\ref{fgp}) with the $\sqrt{\lambda_d c_d}/2\pi$ factor replaced by $B_{\epsilon}=\sqrt{\lambda_d c_d}/2\pi a^{\epsilon}$, whose $\epsilon$ expansion leads to the exponential form of the planar 4-gluon scattering amplitude at strong coupling of \cite{AM}. In ref. \cite{AM} the string sigma-model action was used for the 4-cusp Wilson loop solution, while here, based on the Nambu-Goto action, we have demonstrated that the 4-cusp Wilson loop configuration indeed solves the string equations and have evaluated the classical action on this 4-cusp solution.
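Both the factorization (\ref{fac}) and the hypergeometric expression (\ref{fgp}) are easy to test independently. A sketch combining a symbolic check with a numerical one (the sample values $\epsilon = -0.4$, $b = 0.3$ are ours, chosen only for illustration): \begin{verbatim}
import sympy as sp
import mpmath as mp

# (i) the factorization (fac), using gamma^2 = 1 + b^2
y1, y2, b = sp.symbols('y1 y2 b', positive=True)
g2 = 1 + b**2
quartic = ((y1**2 + y2**2)**2
           - (g2/b**2)*(y1**2 + y2**2)*(1 - 2*b*y1*y2)
           + (g2**2/b**2)*y1**2*y2**2 - 4*y1*y2/b + 1/b**2)
factored = ((y2 + y1/b)**2 - 1/b**2)*((y2 + b*y1)**2 - 1)
print(sp.simplify(quartic - factored))    # -> 0

# (ii) numerical check of (fgp) at sample values (eps < 0 regularizes the cusps)
eps, bv = mp.mpf('-0.4'), mp.mpf('0.3')
I = mp.quad(lambda x, y: (1 + bv*x*y)**eps
            / ((1 - x**2)*(1 - y**2))**(1 + eps/2), [-1, 1], [-1, 1])
F = (mp.pi*mp.gamma(-eps/2)**2/mp.gamma((1 - eps)/2)**2
     * mp.hyp2f1(0.5, -eps/2, (1 - eps)/2, bv**2))
print(I, F)    # the two numbers should agree
\end{verbatim}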
There is another boost, in the (-1,1) plane, specified by \begin{equation} Y_1 = \gamma(Y_1' - vY_{-1}'), \hspace{1cm} Y_{-1} = \gamma(Y_{-1}' - vY_1'), \label{gvy}\end{equation} which produces a string surface \begin{equation} y_0' = \frac{y_2'(y_1' - v)}{1 - vy_1'} \equiv \frac{y_2'(y_1' - v)}{B}, \hspace{1cm} r' = \sqrt{ 1 + {y_0'}^2 - {y_1'}^2 - {y_2'}^2 } \equiv \sqrt{A}. \label{yby}\end{equation} This configuration is not symmetric under the interchange of $y_1'$ and $y_2'$. Although $C$ takes the involved form $C = [-y_1(y_1-v)/B + (1-v^2)y_2^2/B^2]/\sqrt{A}$, written here in the unprimed expressions, $\sqrt{D}$ is calculated to be of the simple form $\sqrt{D}=\sqrt{1-v^2}/B$. The string equation (\ref{rdy}) is verified as follows. The RHS of (\ref{rdy}) is evaluated as \begin{eqnarray} &\frac{y_2}{A^2} \left[ \sqrt{1-v^2}\left( 2y_1X - (1-y_1^2)\partial_1X \right) - \frac{2B^2}{\sqrt{1-v^2}} \left( \frac{(1-v^2)(y_1 - v)}{B^3}X + \left( \frac{(1-v^2)y_2^2(y_1 - v)}{B^3} - y_1 \right) \frac{1-v^2}{B^3} \right) \right]& \nonumber \\ &+\frac{2X}{A^3} \left[ \sqrt{1-v^2}y_2(1-y_1^2)\partial_1A + \frac{B^2}{\sqrt{1-v^2}}\left( \frac{(1-v^2)y_2^2(y_1 - v)}{B^3} - y_1 \right)\partial_2A \right], & \end{eqnarray} where $X = -y_1(y_1-v)/B^2 + (1-v^2)y_2^2/B^3$, and the $1/A^3$ part vanishes through $A = (1-y_1^2)(1 - y_2^2(1-v^2)/B^2)$ in spite of the asymmetric expression. The remaining $1/A^2$ part agrees with the LHS of (\ref{rdy}). For the string equation (\ref{cdr}), the $1/A^{5/2}$ terms of the RHS are gathered together into a sum of terms with odd powers of $1/B$ \begin{equation} \frac{3\sqrt{1-v^2}(1-y_1^2)}{A^{5/2}B} \left[ y_1^2 + \frac{(1-v^2)y_2^2(1 - y_1^2)}{B^2} - \frac{(1-v^2)^2y_2^4}{B^4} \right], \end{equation} which turns out to be a $1/A^{3/2}$ term, $3\sqrt{1-v^2}( y_1^2 + (1-v^2)y_2^2/B^2)/BA^{3/2}$. It combines with the remaining $1/A^{3/2}$ terms to give $2\sqrt{1-v^2}/BA^{3/2}$, that is, the LHS of (\ref{cdr}); here the $1/B^3$ terms cancel out and the $1/B^2$ terms can be converted into a $1/B$ term. The string solution (\ref{yby}) can be expressed as \begin{equation} r' = \frac{1}{1 - vy_1'}\sqrt{ (1- {y_1'}^2) \left( (1-vy_1')^2 - (1-v^2){y_2'}^2 \right) } \end{equation} so that the projection of the Wilson loop on the $(y_1', y_2')$ plane is a trapezium formed by the four lines $y_1' = \pm1$ and $y_2' = \pm(vy_1' - 1)/\sqrt{1-v^2}$. In $(y_0',y_1',y_2')$ the four cusps are located at \begin{eqnarray} \left(-\sqrt{ \frac{1+v}{1-v}}, -1, \sqrt{\frac{1+v}{1-v}} \right), \hspace{1cm} \left( \sqrt{ \frac{1-v}{1+v}}, 1, \sqrt{\frac{1-v}{1+v}} \right), \nonumber \\ \left(\sqrt{ \frac{1+v}{1-v}}, -1, -\sqrt{\frac{1+v}{1-v}} \right), \hspace{1cm} \left( -\sqrt{ \frac{1-v}{1+v}}, 1, -\sqrt{\frac{1-v}{1+v}} \right), \label{fov}\end{eqnarray} which imply that the Wilson loop consists of four lightlike segments. The boost (\ref{gvy}) is alternatively expressed as \begin{equation} y_0' = \frac{y_1y_2}{\gamma(1 + vy_1)}, \hspace{1cm} y_1' = \frac{v + y_1} {1 + vy_1}, \hspace{1cm} y_2' = \frac{y_2}{\gamma(1 + vy_1)}, \end{equation} which lead to (\ref{fov}) again; substituting them into the second eq. of (\ref{yby}) we obtain \begin{equation} r' = \frac{\sqrt{(1-y_1^2)(1-y_2^2)}}{\gamma(1 + vy_1)}.
\end{equation} Then the action (\ref{acd}) can be evaluated on this classical solution as \begin{eqnarray} -iS &=& \frac{\sqrt{\lambda_d c_d}}{2\pi} \int_{-1}^1 dy_1dy_2 \frac{1 - v^2}{\gamma(1 + vy_1)^3} \frac{\sqrt{1-v^2}}{r'^{2+\epsilon} B(y_1')} \nonumber \\ &=& \frac{\sqrt{\lambda_d c_d}}{2\pi} \int_{-1}^1 dy_1dy_2 \frac{(\sqrt{1 + b^2} + by_1)^{\epsilon}} {[(1 - y_1^2)(1 - y_2^2)]^{1+\frac{\epsilon}{2}} }. \end{eqnarray} The integral is calculated by expanding the integrand as a power series in $b$ as \begin{equation} \sqrt{\pi} \frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left( -\frac{\epsilon}{2}\right)}{\Gamma\left(\frac{1- \epsilon}{2}\right)^2} (1 + b^2 )^{\epsilon/2} \sum_{n=0}^{\infty}\left( \frac{b^2}{1 + b^2} \right)^n \frac{\Gamma\left(n- \frac{\epsilon}{2} \right) }{n!}, \end{equation} whose summation factor is evaluated as $\Gamma(-\epsilon/2) (1- b^2/(1+b^2))^{\epsilon/2}$ by means of $(1-z)^{\alpha} = F(-\alpha,\beta,\beta;z)$, so that we obtain $\pi\Gamma(-\epsilon/2)^2/\Gamma((1-\epsilon)/2)^2$. This result coincides with (\ref{sep}) at $a=1$, or with (\ref{fgp}) at $b=0$, that is, for equal Mandelstam variables $s=t$, which is consistent with the observation that the locations (\ref{fov}) of the four cusps indicate $s=t$. Here we discuss the remaining boost, in the (0,1) plane, defined by \begin{equation} Y_1 = \gamma(Y_1' - vY_0'), \hspace{1cm} Y_0 = \gamma(Y_0' - vY_1'), \end{equation} which yields a string configuration \begin{equation} y_0' = \frac{y_1'(y_2' + v)}{1 + vy_2'}, \hspace{1cm} r' = \sqrt{ 1 + {y_0'}^2 - {y_1'}^2 - {y_2'}^2 }. \label{zrv}\end{equation} For (\ref{zrv}) the projection of the Wilson loop on the $(y_1',y_2')$ plane is a trapezium in the same way as for (\ref{yby}). Since this configuration is transformed into (\ref{yby}) under the interchanges $y_1' \leftrightarrow y_2'$ and $v \leftrightarrow -v$, we can show that it satisfies the string equations in the same way as the solution (\ref{yby}). \section{Conclusions} Starting with the elementary 1-cusp Wilson loop solution of \cite{MK}, we have constructed various kinds of string configurations by performing the conformal SO(2,4) transformations in the embedding coordinates of $AdS_5$ and then rewriting the results back in the Poincare coordinates. Analyzing the obtained string surfaces to see where they end at the $AdS_5$ boundary, we have read off the shapes of the Wilson loops. In order to see whether the conformally transformed string configurations are extrema of the worldsheet area, we have demonstrated that they indeed solve the involved string equations of motion for the Nambu-Goto string action. In these demonstrations the string Lagrangian $\sqrt{D}/r^2$ does not take a constant value in our worldsheet coordinates, but it is important that $\sqrt{D}$ takes a simple, manageable form. We have made two types of SO(2)$\times$SO(4) transformations, characterized by two kinds of SO(4) rotations such that one does not change $Y_2$ and the other interchanges $Y_2$ and $Y_4$. We have observed that the former type of transformations generates a variety of 2-cusp Wilson loop solutions, while the latter type produces not only 4-cusp Wilson loop solutions but also 2-cusp Wilson loop solutions. The projection of the latter 2-cusp Wilson loop surfaces on the $(y_1,y_2)$ plane consists of two separated parallel lines, in contrast to the square-form projection of the 4-cusp Wilson loop surface.
Applying the SO(2,4) boost transformations to the basic 4-cusp solution with the square-form projection, we have constructed three kinds of 4-cusp solutions whose projections are a rescaled square, a rhombus and a trapezium. By combining the conformal boost in the (0,4) plane and the rescaling we have derived a 4-cusp Wilson loop surface whose projection is a rescaled rhombus. Based on the Nambu-Goto action with the IR dimensional regularization, we have obtained the classical Euclidean action evaluated on this 4-cusp solution and reproduced the same exponential expression of the 4-gluon amplitude as derived in \cite{AM} from the string sigma-model action.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} \label{sec:intro} The theory of group actions on $\Lambda$-trees goes back to the early 1960s. Lyndon introduced abstract length functions on groups \cite{Lyndon:1963}, axiomatizing the Nielsen cancellation method; he initiated the study of groups with real-valued length functions. Chiswell related such length functions to group actions on $\mathbb{Z}$- and $\mathbb{R}$-trees, providing a construction of the tree on which the group acts. Tits gave the first formal definition of an $\mathbb{R}$-tree \cite{Tits:1977}. In his seminal book \cite{Serre:1980} Serre laid down the fundamentals of the theory of groups acting freely on simplicial trees. In the following decade Serre's novel approach unified several geometric, algebraic, and combinatorial methods of group theory into a unique powerful tool, known today as Bass-Serre theory. In their very influential paper \cite{MoSh} Morgan and Shalen linked group actions on $\mathbb{R}$-trees with topology and generalized parts of Thurston's Geometrization Theorem; they introduced $\Lambda$-trees for an arbitrary ordered abelian group $\Lambda$ and the general form of Chiswell's construction. Thus, it became clear that abstract length functions with values in $\Lambda$ and group actions on $\Lambda$-trees are just two equivalent approaches to the same realm of group theory questions. The unified theory was further developed in an important paper by Alperin and Bass \cite{AB}, where the authors state a fundamental problem in the theory of group actions on $\Lambda$-trees: find the group theoretic information carried by a $\Lambda$-tree action (analogous to Bass-Serre theory), in particular, describe finitely generated groups acting freely on $\Lambda$-trees ($\Lambda$-free groups). One of the main breakthroughs in this direction is Rips' Theorem, which describes finitely generated $\mathbb{R}$-free groups (see \cite{GLP,BF}). The structure of finitely generated $\mathbb{Z}^n$-free and $\mathbb{R}^n$-free groups was clarified in \cite{Bass:1991,Guirardel:2004}. The introduction of infinite $\Lambda$-words was one of the major recent developments in the theory of group actions. In \cite{Myasnikov_Remeslennikov_Serbin:2005} Myasnikov, Remeslennikov and Serbin showed that groups admitting faithful representations by $\Lambda$-words act freely on some $\Lambda$-trees, while Chiswell proved the converse \cite{Chiswell:2005}. This gives another equivalent approach to group actions. Now one can bypass the axiomatic viewpoint on length functions and work instead with $\Lambda$-words in the same manner as with ordinary words in standard free groups. This allows one to bring into play, naturally and in one package, such powerful techniques as Nielsen's method, Stallings' graph approach to subgroups, and Makanin-Razborov type elimination processes (see papers \cite{Myasnikov_Remeslennikov_Serbin:2005, Myasnikov_Remeslennikov_Serbin:2006, Kharlampovich_Myasnikov:2005(2), Kharlampovich_Myasnikov:2006, Kharlampovich_Myasnikov:2010, KMRS2, DM, KMS1, KMS2, Nikolaev_Serbin:2011, Nikolaev_Serbin:2011(2), Serbin_Ushakov:2011(1)}). In the case when $\Lambda$ is equal to either $\mathbb{Z}^n$ or $\mathbb{Z}^\infty$ all these techniques are effective, so many algorithmic problems for $\mathbb{Z}^n$-free groups become decidable, in particular, the subgroup membership problem. In this paper, for an arbitrary group $G$ of infinite words over an ordered abelian group $\Lambda$, we construct a $\Lambda$-tree $\Gamma_G$ equipped with a free action of $G$.
Moreover, we show that $\Gamma_G$ is a universal tree for $G$ in the sense that it isometrically embeds in every $\Lambda$-tree equipped with a free $G$-action compatible with the original length function on $G$. The construction is extremely simple and natural: one just folds every pair of infinite words in $G$ along their common initial segments to get the tree $\Gamma_G$. Furthermore, in the case $\Lambda = \mathbb{Z}^n$ the construction is effective. Besides, it sheds some light on the nature of Chiswell's original argument: why it worked and where it came from. \section{Preliminaries} \label{sec:prelim} In this section we introduce basic notions in the theory of $\Lambda$-trees. \subsection{$\Lambda$-trees} \label{subs:lambda_trees} A set $\Lambda$ equipped with addition $+$ and a partial order $\leqslant$ is called a {\em partially ordered} abelian group if \begin{enumerate} \item[(1)] $\langle \Lambda, + \rangle$ is an abelian group, \item[(2)] $\langle \Lambda, \leqslant \rangle$ is a partially ordered set, \item[(3)] for all $a,b,c \in \Lambda$, $a \leqslant b$ implies $a + c \leqslant b + c$. \end{enumerate} An abelian group $\Lambda$ is called {\em orderable} if there exists a linear order $\leqslant$ on $\Lambda$ satisfying condition (3) above. In general, the ordering on $\Lambda$ is not unique. An ordered abelian group $\Lambda$ is called {\it discretely ordered} if $\Lambda^+$ has a minimal non-trivial element (we denote it $1_\Lambda$). In this event, for any $a \in \Lambda$ we have $$a + 1_\Lambda = \min\{b \mid b > a\},\ \ \ a - 1_\Lambda = \max\{b \mid b < a\}.$$ For elements $a,b$ of an ordered group $\Lambda$ the {\it closed segment} $[a,b]$ is defined by $$[a,b] = \{c \in \Lambda \mid a \leqslant c \leqslant b \}.$$ Let $X$ be a non-empty set and $\Lambda$ an ordered abelian group. A {\em $\Lambda$-metric on $X$} is a mapping $p: X \times X \longrightarrow \Lambda$ such that for all $x,y,z \in X$ \begin{enumerate} \item[(M1)] $p(x,y) \geqslant 0$, \item[(M2)] $p(x,y) = 0$ if and only if $x = y$, \item[(M3)] $p(x,y) = p(y,x)$, \item[(M4)] $p(x,y) \leqslant p(x,z) + p(y,z)$. \end{enumerate} A {\em $\Lambda$-metric space} is a pair $(X,p)$, where $X$ is a non-empty set and $p$ is a $\Lambda$-metric on $X$. If $(X,p)$ and $(X',p')$ are $\Lambda$-metric spaces, an {\it isometry} from $(X,p)$ to $(X',p')$ is a mapping $f: X \rightarrow X'$ such that $p(x,y) = p'(f(x),f(y))$ for all $x,y \in X$. A {\em segment} in a $\Lambda$-metric space is the image of an isometry $\alpha: [a,b]_\Lambda \rightarrow X$ for some $a,b \in \Lambda$, where $[a,b]_\Lambda$ is a segment in $\Lambda$. The endpoints of the segment are $\alpha(a), \alpha(b)$. We call a $\Lambda$-metric space $(X,p)$ {\em geodesic} if for all $x,y \in X$ there is a segment in $X$ with endpoints $x,y$, and $(X,p)$ is {\em geodesically linear} if for all $x,y \in X$ there is a unique segment in $X$ whose set of endpoints is $\{x,y\}$. A {\em $\Lambda$-tree} is a $\Lambda$-metric space $(X,p)$ such that \begin{enumerate} \item[(T1)] $(X,p)$ is geodesic, \item[(T2)] if two segments of $(X,p)$ intersect in a single point, which is an endpoint of both, then their union is a segment, \item[(T3)] the intersection of two segments with a common endpoint is also a segment. \end{enumerate} Let $X$ be a $\Lambda$-tree. We call $e \in X$ an {\em end point} of $X$ if, whenever $e \in [x,y] \subset X$, either $e = x$ or $e = y$. A {\em linear subtree from $x \in X$} is any linear subtree $L$ of $X$ having $x$ as an end point.
$L$ carries a natural linear ordering with $x$ as the least element. If $y \in L$ then $L_y = \{ z \in L \mid y \leqslant z\}$ is a linear subtree from $y$. A maximal linear subtree from $x$ is an {\em $X$-ray from $x$}. Observe (Proposition 2.22 \cite{AB}) that if $L, L'$ are $X$-rays from $x, x'$ respectively such that $L \cap L' \neq \emptyset$, then $L \cap L'$ is either a closed segment or $L \cap L' = L_v$ for some $v \in X$. We call $X$-rays $L$ and $L'$ {\em equivalent} if $L \cap L' = L_v$ for some $v \in X$. The equivalence classes of $X$-rays for this relation are called {\em ends} of $X$. We say that a group $G$ acts on a $\Lambda$-tree $X$ if every element $g \in G$ defines an isometry $g : X \rightarrow X$. Note that every group has a trivial action on a $\Lambda$-tree, that is, one in which all its elements act as the identity. Let a group $G$ act as isometries on a $\Lambda$-tree $X$. An element $g \in G$ is called {\em elliptic} if it has a fixed point; $g \in G$ is called an {\em inversion} if it does not have a fixed point, but $g^2$ does. If $g$ is not elliptic and not an inversion then it is called {\em hyperbolic}. For a hyperbolic element $g \in G$ define a characteristic set $$Axis(g) = \{ p \in X \mid [g^{-1} \cdot p, p] \cap [p, g \cdot p] = \{p\} \},$$ which is called the {\em axis of $g$}. $Axis(g)$ meets every $\langle g \rangle$-invariant subtree of $X$. A group $G$ acts {\it freely} and {\it without inversions} on a $\Lambda$-tree $X$ if for all $1 \neq g \in G$, $g$ acts as a hyperbolic isometry. In this case we also say that $G$ is $\Lambda$-free. \subsection{Length functions} \label{subs:length} Let $G$ be a group and $\Lambda$ an ordered abelian group. Then a function $l: G \rightarrow \Lambda$ is called a {\it (Lyndon) length function} on $G$ if the following conditions hold: \begin{enumerate} \item [(L1)] $\forall\ x \in G:\ l(x) \geqslant 0$ and $l(1) = 0$, \item [(L2)] $\forall\ x \in G:\ l(x) = l(x^{-1})$, \item [(L3)] $\forall\ x, y, z \in G:\ c(x,y) > c(x,z) \rightarrow c(x,z) = c(y,z)$, \noindent where $c(x,y) = \frac{1}{2}(l(x)+l(y)-l(x^{-1}y))$. \end{enumerate} It is not difficult to derive the following two properties of Lyndon length functions from the axioms (L1)--(L3): \begin{itemize} \item $\forall\ x, y \in G:\ l(xy) \leqslant l(x) + l(y)$, \item $\forall\ x, y \in G:\ 0 \leqslant c(x,y) \leqslant min\{l(x),l(y)\}$. \end{itemize} The axiom below helps to describe the connection between $\Lambda$-valued Lyndon length functions and actions on $\Lambda$-trees. \begin{enumerate} \item [(L4)] $\forall\ x, y \in G:\ c(x,y) \in \Lambda.$ \end{enumerate} \begin{theorem} \cite{Chiswell:1976} Let $G$ be a group and $l: G \to \Lambda$ a Lyndon length function satisfying (L4). Then there exist a $\Lambda$-tree $(X,p)$, an action of $G$ on $X$, and a point $x \in X$ such that $l = l_x$. \end{theorem} \subsection{Infinite words} \label{subs:inf_words} Let $\Lambda$ be a discretely ordered abelian group with the minimal positive element $1$. It will be clear from the context whether we are using $1$ as an element of $\Lambda$ or as an integer. Let $X = \{x_i \mid i \in I\}$ be a set. Put $X^{-1} = \{x_i^{-1} \mid i \in I\}$ and $X^\pm = X \cup X^{-1}$. A {\em $\Lambda$-word} is a function of the type $$w: [1,\alpha_w] \to X^\pm,$$ where $\alpha_w \in \Lambda,\ \alpha_w \geqslant 0$. The element $\alpha_w$ is called the {\em length} $|w|$ of $w$. \smallskip By $W(\Lambda,X)$ we denote the set of all $\Lambda$-words.
Observe that $W(\Lambda,X)$ contains the empty $\Lambda$-word, which we denote by $\varepsilon$. The concatenation $uv$ of two $\Lambda$-words $u,v \in W(\Lambda,X)$ is the $\Lambda$-word of length $|u| + |v|$ such that: \[ (uv)(a) = \left\{ \begin{array}{ll} \mbox{$u(a)$} & \mbox{if $1 \leqslant a \leqslant |u|$} \\ \mbox{$v(a - |u|)$ } & \mbox{if $|u| < a \leqslant |u| + |v|$} \end{array} \right. \] Next, for any $\Lambda$-word $w$ we define its {\it inverse} $w^{-1}$ as the $\Lambda$-word of length $|w|$ such that $$w^{-1}(\beta) = w(|w| + 1 - \beta)^{-1} \ \ (\beta \in [1,|w|]).$$ A $\Lambda$-word $w$ is {\it reduced} if $w(\beta + 1) \neq w(\beta)^{-1}$ for each $1 \leqslant \beta < |w|$. We denote by $R(\Lambda,X)$ the set of all reduced $\Lambda$-words. Clearly, $\varepsilon \in R(\Lambda,X)$. If the concatenation $uv$ of two reduced $\Lambda$-words $u$ and $v$ is also reduced then we write $uv = u \circ v$. \smallskip For $u \in W(\Lambda,X)$ and $\beta \in [1, \alpha_u]$, by $u_\beta$ we denote the restriction of $u$ to $[1,\beta]$. If $u \in R(\Lambda,X)$ and $\beta \in [1, \alpha_u]$ then $$u = u_\beta \circ {\tilde u}_\beta,$$ for some uniquely defined ${\tilde u}_\beta$. An element ${\rm com}(u,v) \in R(\Lambda,X)$ is called the (\emph{longest}) {\it common initial segment} of $\Lambda$-words $u$ and $v$ if $$u = {\rm com}(u,v) \circ \tilde{u}, \ \ v = {\rm com}(u,v) \circ \tilde{v}$$ for some (uniquely defined) $\Lambda$-words $\tilde{u}, \tilde{v}$ such that $\tilde{u}(1) \neq \tilde{v}(1)$. Now we can define the product of two $\Lambda$-words. Let $u,v \in R(\Lambda,X)$. If ${\rm com} (u^{-1}, v)$ is defined then $$u^{-1} = {\rm com}(u^{-1},v) \circ {\tilde u}, \ \ v = {\rm com} (u^{-1},v) \circ {\tilde v},$$ for some uniquely defined ${\tilde u}$ and ${\tilde v}$. In this event put $$u \ast v = {\tilde u}^{-1} \circ {\tilde v}.$$ The product ${\ast}$ is a partial binary operation on $R(\Lambda,X)$. \smallskip An element $v \in R(\Lambda,X)$ is termed {\it cyclically reduced} if $v(1)^{-1} \neq v(|v|)$. We say that an element $v \in R(\Lambda,X)$ admits a {\it cyclic decomposition} if $v = c^{-1} \circ u \circ c$, where $c, u \in R(\Lambda,X)$ and $u$ is cyclically reduced. Observe that a cyclic decomposition is unique (whenever it exists). We denote by $CR(\Lambda,X)$ the set of all cyclically reduced words in $R(\Lambda,X)$ and by $CDR(\Lambda,X)$ the set of all words from $R(\Lambda,X)$ which admit a cyclic decomposition. \smallskip Below we refer to $\Lambda$-words as {\it infinite words}, usually omitting $\Lambda$ whenever this does not produce any ambiguity.
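For the simplest choice $\Lambda = \mathbb{Z}$ the infinite words above are ordinary finite words and the operations ${\rm com}$ and $\ast$ reduce to free-group cancellation; note also that for reduced words $c(u,v) = |{\rm com}(u,v)|$. The following Python sketch is purely illustrative (the encoding of letters as strings such as 'x' and 'x\^{}-1' is our own convention): \begin{verbatim}
def inv(a):
    # inverse of a single letter
    return a[:-3] if a.endswith('^-1') else a + '^-1'

def w_inv(w):
    # inverse of a reduced word: reverse and invert letterwise
    return [inv(a) for a in reversed(w)]

def com(u, v):
    # longest common initial segment com(u, v)
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return u[:n]

def c(u, v):
    # Gromov product c(u, v) = |com(u, v)| for reduced words
    return len(com(u, v))

def prod(u, v):
    # the partial product u * v: cancel com(u^{-1}, v), then concatenate
    k = c(w_inv(u), v)
    return u[:len(u) - k] + v[k:]

def cyclically_reduced(v):
    return not v or v[0] != inv(v[-1])

u = ['x', 'y', 'y']
v = ['y^-1', 'y^-1', 'x']
print(prod(u, v))                               # ['x', 'x']
print(cyclically_reduced(['x', 'y', 'x^-1']))   # False: x o y o x^{-1}
\end{verbatim}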
The following result establishes the connection between infinite words and length functions. \begin{theorem} \label{co:3.1} \cite{Myasnikov_Remeslennikov_Serbin:2005} Let $\Lambda$ be a discretely ordered abelian group and $X$ be a set. Then any subgroup $G$ of $CDR(\Lambda,X)$ has a free Lyndon length function with values in $\Lambda$ -- the restriction $L|_G$ on $G$ of the standard length function $L$ on $CDR(\Lambda,X)$. \end{theorem} The converse of the theorem above was obtained by I. Chiswell \cite{Chiswell:2005}. \begin{theorem} \label{chis} \cite{Chiswell:2005} Let $G$ have a free Lyndon length function $L : G \rightarrow \Lambda$, where $\Lambda$ is a discretely ordered abelian group. Then there exists a set $X$ and a length-preserving embedding $\phi : G \rightarrow CDR(\Lambda,X)$, that is, $|\phi(g)| = L(g)$ for any $g \in G$. \end{theorem} \begin{corollary} \label{chis-cor} \cite{Chiswell:2005} Let $G$ have a free Lyndon length function $L : G \rightarrow \Lambda$, where $\Lambda$ is an arbitrary ordered abelian group. Then there exists an embedding $\phi : G \to CDR(\Lambda',X)$, where $\Lambda' = \mathbb{Z} \oplus \Lambda$ is discretely ordered with respect to the right lexicographic order and $X$ is some set, such that $|\phi(g)| = (0,L(g))$ for any $g \in G$. \end{corollary} Theorem \ref{co:3.1}, Theorem \ref{chis}, and Corollary \ref{chis-cor} show that a group has a free Lyndon length function if and only if it embeds into a set of infinite words and this embedding preserves the length. Moreover, it is not hard to show that this embedding also preserves regularity of the length function. \begin{theorem} \label{chis-cor-1} \cite{Khan_Myasnikov_Serbin:2007} Let $G$ have a free regular Lyndon length function $L : G \rightarrow \Lambda$, where $\Lambda$ is an arbitrary ordered abelian group. Then there exists an embedding $\phi : G \rightarrow R(\Lambda', X)$, where $\Lambda'$ is a discretely ordered abelian group and $X$ is some set, such that the Lyndon length function on $\phi(G)$ induced from $R(\Lambda',X)$ is regular. \end{theorem} \section{Universal trees} \label{sec:universal} Let $G$ be a subgroup of $CDR(\Lambda,X)$ for some discretely ordered abelian group $\Lambda$ and a set $X$. We assume $G,\ \Lambda$, and $X$ to be fixed for the rest of this section. Every element $g \in G$ is a function $$g: [1,|g|] \rightarrow X^{\pm},$$ whose domain $[1,|g|]$ is a closed segment in $\Lambda$. Since $\Lambda$ can be viewed as a $\Lambda$-metric space, $[1,|g|]$ is a geodesic connecting $1$ and $|g|$, and we view every $\alpha \in [1,|g|]$ as a pair $(\alpha, g)$. We would like to identify initial subsegments of the geodesics corresponding to all elements of $G$ as follows. \smallskip Let $$S_G = \{(\alpha,g) \mid g \in G, \alpha \in [0,|g|]\}.$$ Since for every $f,g \in G$ the word ${\rm com}(f,g)$ is defined, we can introduce an equivalence relation on $S_G$ as follows: $(\alpha,f) \sim (\beta,g)$ if and only if $\alpha = \beta \in [0, c(f,g)]$. Obviously, it is symmetric and reflexive. For transitivity observe that if $(\alpha,f) \sim (\beta,g)$ and $(\beta,g) \sim (\gamma,h)$ then $0 \leqslant \alpha = \beta = \gamma \leqslant c(f,g), c(g,h)$. Since $c(f,h) \geqslant \min\{c(f,g), c(g,h)\}$, we get $\alpha = \gamma \leqslant c(f,h)$. Let $\Gamma_G = S_G / \sim$ and $\epsilon = \langle 0, 1 \rangle$, where $\langle \alpha, f \rangle$ is the equivalence class of $(\alpha,f)$. \begin{prop} \label{pr:lambda_tree} $\Gamma_G$ is a $\Lambda$-tree. \end{prop} \begin{proof} First we show that $\Gamma_G$ is a $\Lambda$-metric space. Define the metric by $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) = \alpha + \beta - 2\min\{\alpha, \beta, c(f,g)\}.$$ Let us check that it is well-defined. Indeed, $c(f,g) \in \Lambda$ is defined for every $f,g \in G$. Moreover, if $(\alpha, f) \sim (\gamma,u)$ and $(\beta, g) \sim (\delta,v)$, we want to prove $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) = d(\langle \gamma, u \rangle, \langle \delta, v \rangle),$$ which is equivalent to $$\min\{\alpha,\beta,c(f,g)\} = \min\{\alpha,\beta,c(u,v)\}$$ since $\alpha = \gamma,\ \beta = \delta$. Consider the following cases.
\begin{enumerate} \item[(a)] $\min\{\alpha,\beta\} \leqslant c(u,v)$. Hence, $\min\{\alpha,\beta,c(u,v)\} = \min\{\alpha,\beta\}$ and it is enough to prove $\min\{\alpha,\beta\} = \min\{\alpha,\beta,c(f,g)\}$. From the length function axioms for $G$ we have $$c(f,g) \geqslant \min\{c(u,f),c(u,g)\},\ \ c(u,g) \geqslant \min\{c(u,v),c(v,g)\}.$$ Hence, $$c(f,g) \geqslant \min\{c(u,f),c(u,g)\} \geqslant \min\{c(u,f), \min\{c(u,v),c(v,g)\} \}$$ $$ = \min\{c(u,f),c(u,v),c(v,g)\}.$$ Now, from $(\alpha, f) \sim (\gamma,u),\ (\beta, g) \sim (\delta,v)$ it follows that $\alpha \leqslant c(u,f),\ \beta \leqslant c(v,g)$, and combining this with the assumption $\min\{\alpha,\beta\} \leqslant c(u,v)$ we have $$c(f,g) \geqslant \min\{c(u,f),c(u,v),c(v,g)\} \geqslant \min\{\alpha,\beta\},$$ or, in other words, $$\min\{\alpha,\beta,c(f,g)\} = \min\{\alpha,\beta\}.$$ \item[(b)] $\min\{\alpha,\beta\} > c(u,v)$. Hence, $\min\{\alpha,\beta,c(u,v)\} = c(u,v)$ and it is enough to prove $c(f,g) = c(u,v)$. Since $$c(u,f) \geqslant \alpha > c(u,v),\ c(v,g) \geqslant \beta > c(u,v),$$ then $\min\{c(u,f),c(u,v),c(v,g)\} = c(u,v)$ and $$c(f,g) \geqslant \min\{c(u,f),c(u,v),c(v,g)\} = c(u,v).$$ Now we prove that $c(f,g) \leqslant c(u,v)$. From the length function axioms for $G$ we have $$c(u,v) \geqslant \min\{c(v,g),c(u,g)\} = c(u,g) \geqslant \min\{c(v,g),c(u,v)\} = c(u,v)$$ (the first minimum equals $c(u,g)$, since $\min\{c(v,g),c(u,g)\} = c(v,g)$ would give $c(u,v) \geqslant c(v,g) \geqslant \beta$ -- a contradiction), that is, $c(u,v) = c(u,g)$. Now, $$c(u,v) = c(u,g) \geqslant \min\{c(u,f),c(f,g)\},$$ where $\min\{c(u,f),c(f,g)\} = c(f,g)$, since otherwise we would have $c(u,v) \geqslant c(u,f) \geqslant \alpha$ -- a contradiction. Hence, $c(u,v) \geqslant c(f,g)$ and we have $c(f,g) = c(u,v)$. \end{enumerate} By definition of $d$, for any $\langle \alpha, f \rangle,\ \langle \beta, g \rangle$ we have $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) = d(\langle \beta, g \rangle, \langle \alpha, f \rangle) \geqslant 0,$$ $$d(\langle \alpha, f \rangle, \langle \alpha, f \rangle) = 0.$$ If $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) = \alpha + \beta - 2 \min\{\alpha, \beta, c(f,g)\} = 0$$ then $\alpha + \beta = 2 \min\{\alpha,\beta,c(f,g)\}$. This is possible only if $\alpha = \beta \leqslant c(f,g)$, which implies $\langle \alpha, f \rangle = \langle \beta, g \rangle$. Finally, we have to prove the triangle inequality $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) \leqslant d(\langle \alpha, f \rangle, \langle \gamma, h \rangle) + d(\langle \beta, g \rangle, \langle \gamma, h \rangle)$$ for every $\langle \alpha, f \rangle,\ \langle \beta, g \rangle,\ \langle \gamma, h \rangle \in \Gamma_G$. The inequality above is equivalent to $$\alpha + \beta - 2\min\{\alpha, \beta, c(f,g)\} \leqslant \alpha + \gamma - 2 \min\{\alpha, \gamma, c(f,h)\} + \beta + \gamma - 2\min\{\beta,\gamma,c(g,h)\},$$ which comes down to $$\min\{\alpha,\gamma,c(f,h)\} + \min\{\beta,\gamma,c(g,h)\} \leqslant \min\{\alpha,\beta,c(f,g)\} + \gamma.$$ \smallskip First of all, observe that for any $\alpha,\beta,\gamma \in \Lambda$ the triple $(\min\{\alpha, \beta\},$ $\min\{\alpha,\gamma\},$ $\min\{\beta,\gamma\})$ is isosceles. Hence, by Lemma 1.2.7(1) \cite{Chiswell:2001}, the triple $$(\min\{\alpha,\beta,c(f,g)\},\ \min\{\alpha,\gamma,c(f,h)\},\ \min\{\beta,\gamma,c(g,h)\})$$ is isosceles too.
In particular, $$\min\{\alpha,\beta,c(f,g)\} \geqslant \min\{ \min\{\alpha,\gamma,c(f,h)\},\ \min\{\beta,\gamma,c(g,h)\} \}$$ $$ = \min\{\alpha,\beta,\gamma,c(f,h),c(g,h)\}.$$ Now, if $$\min\{\alpha,\beta,\gamma,c(f,h),c(g,h)\} = \min\{\alpha,\gamma,c(f,h)\}$$ then $\min\{\beta,\gamma,c(g,h)\} = \gamma$ and $$\min\{\alpha,\gamma,c(f,h)\} + \min\{\beta,\gamma,c(g,h)\} \leqslant \min\{\alpha,\beta,c(f,g)\} + \gamma$$ holds. If $$\min\{\alpha,\beta,\gamma,c(f,h),c(g,h)\} = \min\{\beta,\gamma,c(g,h)\}$$ then $\min\{\alpha,\gamma,c(f,h)\} = \gamma$ and $$\min\{\alpha,\gamma,c(f,h)\} + \min\{\beta,\gamma,c(g,h)\} \leqslant \min\{\alpha,\beta,c(f,g)\} + \gamma$$ holds again. So, $d$ is a $\Lambda$-metric. \smallskip Finally, we want to prove that $\Gamma_G$ is $0$-hyperbolic with respect to $\epsilon = \langle 0, 1 \rangle$ (and, hence, with respect to any other point in $\Gamma_G$). It is enough to prove that the triple $$((\langle \alpha, f \rangle \cdot \langle \beta, g \rangle)_\epsilon,\ (\langle \alpha, f \rangle \cdot \langle \gamma, h \rangle)_\epsilon,\ (\langle \beta, g \rangle \cdot \langle \gamma, h \rangle)_\epsilon)$$ is isosceles for every $\langle \alpha, f \rangle,\ \langle \beta, g \rangle,\ \langle \gamma, h \rangle \in \Gamma_G$. But by definition of $d$ the above triple is isosceles if and only if $$(\min\{\alpha,\beta,c(f,g)\},\ \min\{ \alpha, \gamma, c(f,h) \},\ \min\{ \beta, \gamma, c(g,h) \})$$ is isosceles, which holds. \smallskip So, $\Gamma_G$ is a $\Lambda$-tree. \end{proof} Since $G$ is a subset of $CDR(\Lambda,X)$ and every element $g \in G$ is a function defined on $[1,|g|]$ with values in $X^\pm$, we can define a function $$\xi : (\Gamma_G - \{\epsilon\}) \rightarrow X^\pm,\ \ \xi(\langle \alpha, g \rangle) = g(\alpha).$$ It is easy to see that $\xi$ is well-defined. Indeed, if $(\alpha, g) \sim (\alpha_1,g_1)$ then $\alpha = \alpha_1 \leqslant c(g,g_1)$, so $g(\alpha) = g_1(\alpha_1)$. Moreover, since every $g \in G$ is reduced, $\xi(p) \neq \xi(q)^{-1}$ whenever $d(p,q) = 1$. $\xi$ can be extended to a function $$\Xi : geod(\Gamma_G)_\epsilon \to R(\Lambda,X),$$ where $geod(\Gamma_G)_\epsilon = \{ (\epsilon,p] \mid p \in \Gamma_G \}$, so that $$\Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )(t) = g(t),\ t \in [1,\alpha].$$ That is, $\Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is the initial subword of $g$ of length $\alpha$, and $$\Xi(\ (\epsilon, \langle |g|, g \rangle]\ ) = g.$$ On the other hand, if $g \in G$ and $\alpha \in [1,|g|]$ then the initial subword of $g$ of length $\alpha$ uniquely corresponds to $\Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$. If $(\alpha, g) \sim (\alpha_1,g_1)$ then $\alpha = \alpha_1 \leqslant c(g,g_1)$, and since $g(t) = g_1(t)$ for any $t \in [1,c(g,g_1)]$, we get $$\Xi(\ (\epsilon, \langle \alpha, g \rangle]\ ) = \Xi(\ (\epsilon, \langle \alpha_1, g_1 \rangle]\ ).$$ \begin{lemma} \label{le:subword_prod} Let $u,v \in R(\Lambda,X)$. If $u \ast v$ is defined then $u \ast a$ is also defined, where $v = a \circ b$. Moreover, $u \ast a$ is an initial subword of either $u$ or $u \ast v$. \end{lemma} \begin{proof} The proof follows from Figure \ref{pic1}.
\begin{figure}[htbp] \label{pic1} \centering{\mbox{\psfig{figure=canc_subword.eps,height=2in}}} \caption{Possible cancellation diagrams in Lemma \ref{le:subword_prod}.} \end{figure} \end{proof} Now, since for every $\langle \alpha, g \rangle \in \Gamma_G,\ \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is an initial subword of $g \in G$, by Lemma \ref{le:subword_prod} the product $f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is defined for any $f \in G$. Moreover, again by Lemma \ref{le:subword_prod}, $f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is an initial subword of either $f$ or $f \ast g$. More precisely, $$f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ ) = \Xi(\ (\epsilon, \langle |f| - \alpha, f \rangle]\ )$$ if $f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is an initial subword of $f$, and $$f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ ) = \Xi(\ (\epsilon, \langle |f| + \alpha - 2 c(f^{-1},g), f\ast g \rangle]\ )$$ if $f \ast \Xi(\ (\epsilon, \langle \alpha, g \rangle]\ )$ is an initial subword of $f \ast g$. \smallskip Hence, we define a (left) action of $G$ on $\Gamma_G$ as follows: $$f \cdot \langle \alpha, g \rangle = \langle |f| + \alpha - 2\min\{\alpha,c(f^{-1},g)\}, f \rangle$$ if $\alpha \leqslant c(f^{-1},g)$, and $$f \cdot \langle \alpha, g \rangle = \langle |f| + \alpha - 2\min\{\alpha,c(f^{-1},g)\}, f\ast g \rangle$$ if $\alpha > c(f^{-1},g)$. \smallskip The action is well-defined. Indeed, it is easy to see that $f \cdot \langle \alpha, g \rangle = f \cdot \langle \alpha_1, g_1 \rangle$ whenever $(\alpha, g) \sim (\alpha_1,g_1)$. \begin{lemma} \label{le:isometric} The action of $G$ on $\Gamma_G$ defined above is isometric. \end{lemma} \begin{proof} Observe that it is enough to prove $$d(\epsilon, \langle \alpha, g \rangle) = d(f \cdot \epsilon, f \cdot \langle \alpha, g \rangle)$$ for every $f,g \in G$. Indeed, from this statement it follows that the geodesic tripod $(\epsilon,\langle |g|,g \rangle, \langle |h|,h \rangle)$ is isometrically mapped onto the geodesic tripod $(\langle |f|,f \rangle, f \cdot \langle |g|,g \rangle, f \cdot \langle |h|,h \rangle)$, and isometricity follows. We have $$d(\epsilon, \langle \alpha, g \rangle) = d(\langle 0, 1 \rangle, \langle \alpha, g \rangle) = 0 + \alpha - 2 \min\{ 0, \alpha, c(1,g) \} = \alpha,$$ $$d(f \cdot \epsilon, f \cdot \langle \alpha, g \rangle) = d(\langle |f|,f \rangle, f \cdot \langle \alpha, g \rangle).$$ Consider two cases. \begin{enumerate} \item[(a)] $\alpha \leqslant c(f^{-1},g)$. Hence, $$d(\langle |f|,f \rangle, f \cdot \langle \alpha, g \rangle) = d(\langle |f|,f \rangle, \langle |f| - \alpha, f \rangle)$$ $$ = |f| + |f| - \alpha - 2 \min\{|f|,|f|-\alpha,c(f,f)\} = |f| + |f|-\alpha - 2(|f|-\alpha) = \alpha.$$ \item[(b)] $\alpha > c(f^{-1},g)$. Hence, $$d(\langle |f|,f \rangle, f \cdot \langle \alpha, g \rangle) = d(\langle |f|,f \rangle, \langle |f| + \alpha - 2c(f^{-1},g), f\ast g \rangle)$$ $$ = |f| + |f| + \alpha - 2c(f^{-1},g) - 2\min\{|f|,|f| + \alpha - 2c(f^{-1},g),c(f,f\ast g)\}$$ $$= 2|f| + \alpha - 2c(f^{-1},g) - 2\min\{|f| + \alpha - 2c(f^{-1},g),c(f,f\ast g)\}.$$ Let $f = f_1 \circ c^{-1},\ g = c \circ g_1,\ |c| = c(f^{-1},g)$. Then $|f| + \alpha - 2c(f^{-1},g) = |f_1|+\alpha-c(f^{-1},g) > |f_1|$. At the same time, $c(f,f \ast g) = |f_1|$, so $\min\{|f| + \alpha - 2c(f^{-1},g),c(f,f\ast g)\} = |f_1|$ and $$d(\langle |f|,f \rangle, f \cdot \langle \alpha, g \rangle) = 2|f| + \alpha - 2c(f^{-1},g) - 2|f_1| = 2|f| + \alpha - 2|c| - 2|f_1| = \alpha.$$ \end{enumerate} \end{proof}
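Continuing the $\Lambda = \mathbb{Z}$ sketch from Section 2, the points $\langle \alpha, g \rangle$, the metric $d$ and the left action can be coded directly (again purely illustrative, reusing com, c, w\_inv and prod from the earlier sketch): \begin{verbatim}
def d(p, q):
    # d(<alpha,f>, <beta,g>) = alpha + beta - 2 min(alpha, beta, c(f,g))
    (a, f), (b, g) = p, q
    return a + b - 2*min(a, b, c(f, g))

def act(f, p):
    # the left action of f on <alpha, g>, following the two-case formula
    a, g = p
    cc = c(w_inv(f), g)
    beta = len(f) + a - 2*min(a, cc)
    return (beta, f) if a <= cc else (beta, prod(f, g))

eps = (0, [])                 # the base point <0, 1>
f, g = ['x', 'y'], ['x', 'x']
print(d(eps, act(f, eps)))                  # 2 = |f|, i.e. L_eps(f) = |f|
p, q = (1, f), (2, g)
print(d(p, q), d(act(f, p), act(f, q)))     # equal: the action is isometric
\end{verbatim}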
\begin{prop} \label{pr:action} The action of $G$ on $\Gamma_G$ defined above is free and $L_\epsilon(g) = |g|$. Moreover, $\Gamma_G$ is minimal with respect to this action if and only if $G$ contains a cyclically reduced element $h \in G$, that is, $|h^2| = 2|h|$. \end{prop} \begin{proof} {\bf Claim 1.} The stabilizer of every $x \in \Gamma_G$ is trivial. \smallskip Suppose $f \cdot \langle \alpha, g \rangle = \langle \alpha, g \rangle$. First of all, if $\alpha = 0$ then $|f| + \alpha - 2\min\{\alpha,c(f^{-1},g)\} = |f|$, so $|f| = \alpha = 0$. Also, if $c(f^{-1},g) = 0$ then $|f| + \alpha - 2\min\{\alpha,c(f^{-1},g)\} = |f| + \alpha$, which has to be equal to $\alpha$ from our assumption. In both cases $f = 1$ follows. Assume $f \neq 1$ (which implies $\alpha,\ c(f^{-1},g) \neq 0$) and consider the following cases. \begin{enumerate} \item[(a)] $\alpha < c(f^{-1},g)$. Hence, from $$\langle \alpha, g \rangle = \langle |f| - \alpha, f \rangle$$ we get $\alpha = |f| - \alpha \leqslant c(f,g)$. In particular, $|f| = 2 \alpha$. Consider the product $f \ast g$. We have $$f = f_1 \circ com(f^{-1},g)^{-1},\ g = com(f^{-1},g) \circ g_1.$$ Since $\alpha < c(f^{-1},g)$, we have $com(f^{-1},g) = c_\alpha \circ c,\ |c_\alpha| = \alpha$. Hence, $$f = f_1 \circ c^{-1} \circ c_\alpha^{-1},\ g = c_\alpha \circ c \circ g_1.$$ On the other hand, from $|f| = 2\alpha$ we get $|f_1| + |c| = \alpha \leqslant c(f,g)$, so $com(f,g)$ has $f_1 \circ c$ as an initial subword. That is, $g = f_1 \circ c \circ g_2$, but now, comparing the two representations of $g$ above, we get $c_\alpha = f_1 \circ c^{-1}$ and $c_\alpha \ast c \neq c_\alpha \circ c$ -- a contradiction. \item[(b)] $\alpha = c(f^{-1},g)$. We have $f = f_1 \circ c_\alpha^{-1},\ g = c_\alpha \circ g_1,\ |c_\alpha| = \alpha$. From $\langle \alpha, g \rangle = \langle |f| - \alpha, f \rangle$ we get $\alpha = |f| - \alpha \leqslant c(f,g)$, so $|f| = 2\alpha$ and $|f_1| = \alpha$. Since $|f_1| = \alpha \leqslant c(f,g)$, then $g = f_1 \circ g_2$, from which it follows that $f_1 = c_\alpha$. But then $f_1 \ast c_\alpha^{-1} \neq f_1 \circ c_\alpha^{-1}$ -- a contradiction. \item[(c)] $\alpha > c(f^{-1},g)$. Hence, from $$\langle \alpha, g \rangle = \langle |f| + \alpha - 2 c(f^{-1},g), f \ast g \rangle$$ we get $\alpha = |f| + \alpha - 2 c(f^{-1},g) \leqslant c(g,f \ast g)$. In particular, $|f| = 2 c(f^{-1}, g)$. Consider the product $f \ast g$. We have $$f = f_1 \circ c^{-1},\ g = c \circ g_1,$$ where $c = com(f^{-1},g)$. Hence, $|f_1| = |c| < \alpha \leqslant c(g,f \ast g) = c(g, f_1 \circ g_1)$. It follows that $g = f_1 \circ g_2$ and, hence, $c = f_1$, which is impossible. \end{enumerate} \smallskip {\bf Claim 2.} $L_\epsilon(g) = |g|$. \smallskip We have $L_\epsilon(g) = d(\epsilon, g \cdot \epsilon)$. Hence, by definition of $d$, $$d(\langle 0,1\rangle, g \cdot \langle 0,1\rangle) = d(\langle 0, 1 \rangle, \langle |g|, g \rangle) = 0 + |g| - 2 \min\{ 0, |g|, c(1,g) \} = |g|.$$ \smallskip {\bf Claim 3.} $\Gamma_G$ is minimal with respect to the action if and only if $G$ contains a cyclically reduced element $h \in G$, that is, $|h^2| = 2|h|$. \smallskip Suppose there exists a cyclically reduced element $h \in G$. Let $\Delta \subset \Gamma_G$ be a proper $G$-invariant subtree. First of all, observe that $\epsilon \notin \Delta$. Indeed, if $\epsilon \in \Delta$ then $f \cdot \epsilon \in \Delta$ for every $f \in G$, and since $\Delta$ is a tree, $[\epsilon, f \cdot \epsilon] \subset \Delta$ for every $f \in G$.
At the same time, $\Gamma_G$ is spanned by $[\epsilon, f \cdot \epsilon],\ f \in G$, so $\Delta = \Gamma_G$, a contradiction. Let $u \in \Delta$. By definition of $\Gamma_G$ there exists $g \in G$ such that $u \in [\epsilon, g \cdot \epsilon]$. Observe that $A_g \subseteq \Delta$. Indeed, by Theorem 1.4 of \cite{Chiswell:2001}, for example, if $[u,p]$ is the bridge between $u$ and $A_g$ then $p = Y(g^{-1} \cdot u,\ u,\ g \cdot u)$. In particular, $p \in \Delta$, and since for every $v \in A_g$ there exist $g_1, g_2 \in C_G(g)$ such that $v \in [g_1 \cdot p,\ g_2 \cdot p]$, we get $A_g \subseteq \Delta$. Observe that if $g$ is cyclically reduced then $\epsilon \in A_g$, that is, $\epsilon \in \Delta$, a contradiction. More generally, $\Delta \cap A_f = \emptyset$ for every cyclically reduced $f \in G$. Hence, let $[p,q]$ be the bridge between $A_g$ and $A_h$, so that $p \in A_g,\ q \in A_h$. Then by Lemma 2.2 of \cite{Chiswell:2001}, $[p,q] \subset A_{gh}$; in particular, $p,q \in A_{gh}$. It follows that $A_{gh} \subseteq \Delta,\ q \in A_{gh} \cap A_h$, and $\Delta \cap A_h \neq \emptyset$, a contradiction. Hence, there can be no proper nonempty $G$-invariant subtree $\Delta$, and $\Gamma_G$ is minimal. \smallskip Now, suppose $G$ contains no cyclically reduced element. Hence, $\epsilon \notin A_f$ for every $f \in G$. Let $\Delta$ be the subtree spanned by the axes $A_f,\ f \in G$. Observe that $\Delta$ is $G$-invariant. Indeed, let $u \in [p,q]$, where $p \in A_f,\ q \in A_g$ for some $f,g \in G$. Then $h \cdot u \in [h \cdot p, h \cdot q]$, where $h \cdot p \in h \cdot A_f = A_{hfh^{-1}}$ and $h \cdot q \in h \cdot A_g = A_{hgh^{-1}}$, that is, $h \cdot u \in \Delta$. Finally, $\epsilon \in \Gamma_G - \Delta$, so $\Delta$ is a proper $G$-invariant subtree and $\Gamma_G$ is not minimal. \end{proof} \begin{prop} \label{pr:universal} If $(Z,d')$ is a $\Lambda$-tree on which $G$ acts freely by isometries, and $w \in Z$ is such that $L_w(g) = |g|$ for all $g \in G$, then there is a unique $G$-equivariant isometry $\mu: \Gamma_G \to Z$ such that $\mu(\epsilon) = w$, whose image is the subtree of $Z$ spanned by the orbit $G \cdot w$ of $w$. \end{prop} \begin{proof} Define a mapping $\mu : \Gamma_G \rightarrow Z$ as follows: $$\mu(\langle \alpha, f \rangle) = x\ {\rm if}\ d'(w,x) = \alpha,\ d'(f \cdot w,x) = |f|-\alpha.$$ Such a point $x$ exists and is unique: since $d'(w, f \cdot w) = L_w(f) = |f|$, the two conditions force $x$ to be the point of the segment $[w, f \cdot w]$ at distance $\alpha$ from $w$. Observe that $\mu(\epsilon) = \mu(\langle 0,1\rangle) = w$. \smallskip {\bf Claim 1.} $\mu$ is an isometry. \smallskip Let $\langle \alpha, f \rangle,\ \langle \beta, g \rangle \in \Gamma_G$. Then by definition of $d$ we have $$d(\langle \alpha, f \rangle, \langle \beta, g \rangle) = \alpha + \beta - 2 \min\{ \alpha, \beta, c(f,g)\}.$$ Let $x = \mu(\langle \alpha, f \rangle),\ y = \mu(\langle \beta, g \rangle)$. Then, by Lemma 1.2 of \cite{Chiswell:2001}, in $(Z,d')$ we have $$d'(x,y) = d'(w,x) + d'(w,y) - 2\min\{d'(w,x),d'(w,y),d'(w,z)\},$$ where $z = Y(w,f\cdot w, g\cdot w)$. Observe that $d'(w,x) = \alpha,\ d'(w,y) = \beta$. At the same time, since $L_w(g) = |g|$ for all $g \in G$, $$d'(w,z) = \frac{1}{2}(d'(w,f\cdot w) + d'(w,g\cdot w) - d'(f\cdot w,g\cdot w)) = \frac{1}{2}(|f| + |g| - |f^{-1} g|) = c(f,g),$$ and $$d'(\mu(\langle \alpha, f \rangle), \mu(\langle \beta, g \rangle)) = d'(x,y) = \alpha + \beta - 2\min\{\alpha,\beta,c(f,g)\} = d(\langle \alpha, f \rangle, \langle \beta, g \rangle).$$ \smallskip {\bf Claim 2.} $\mu$ is equivariant. \smallskip We have to prove $$\mu(f\cdot \langle \alpha, g \rangle) = f \cdot \mu(\langle \alpha, g \rangle).$$ Let $x = \mu(\langle \alpha, g \rangle),\ y = \mu(f\cdot \langle \alpha, g \rangle)$. By definition of $\mu$ we have $d'(w,x) = \alpha,\ d'(g \cdot w,x) = |g|-\alpha$.
\begin{enumerate} \item[(a)] $\alpha \leqslant c(f^{-1},g)$. In this case $$f \cdot \langle \alpha, g \rangle = \langle |f| - \alpha, f \rangle,$$ and to prove $y = f\cdot x$ it is enough to show that $d'(w,f\cdot x) = |f|-\alpha$ and $d'(f\cdot w,f\cdot x) = \alpha$. Observe that the latter equality holds since $d'(f\cdot w,f\cdot x) = d'(w,x) = \alpha$. To prove the former one, by Lemma 1.2 of \cite{Chiswell:2001} we have $$d'(w,f\cdot x) = d'(w,f\cdot w) + d'(f\cdot x, f\cdot w)$$ $$ - 2\min\{d'(w,f\cdot w), d'(f\cdot x, f\cdot w), d'(f\cdot w,z)\},$$ where $z = Y(w,f\cdot w, (fg)\cdot w)$. Also, $$d'(f\cdot w,z) = \frac{1}{2}(d'(f\cdot w,w) + d'(f\cdot w,(fg)\cdot w)- d'(w,(fg)\cdot w))$$ $$ = \frac{1}{2}(|f|+|g|-|fg|) = c(f^{-1},g).$$ Since $d'(w,f\cdot w) = |f|$ and $d'(f\cdot x, f\cdot w) = \alpha$, we get $\min\{d'(w,f\cdot w), d'(f\cdot x, f\cdot w), d'(f\cdot w,z)\} = \alpha$, and $$d'(w,f\cdot x) = |f|+\alpha - 2\alpha = |f|-\alpha.$$ \item[(b)] $\alpha > c(f^{-1},g)$. In this case $$f \cdot \langle \alpha, g \rangle = \langle |f| + \alpha - 2c(f^{-1},g), f\ast g \rangle,$$ and to prove $y = f\cdot x$ it is enough to show that $d'(w,f\cdot x) = |f| + \alpha - 2c(f^{-1},g)$ and $d'(f\cdot x,(fg) \cdot w) = |fg|-(|f| + \alpha - 2c(f^{-1},g))$. Observe that $d'(f\cdot x,(fg) \cdot w) = d'(x,g\cdot w) = |g| - \alpha = |fg|-(|f| + \alpha - 2c(f^{-1},g))$, so the latter equality holds. By Lemma 1.2 of \cite{Chiswell:2001} we have $$d'(w,f\cdot x) = d'(w,f\cdot w) + d'(f\cdot x, f\cdot w)$$ $$ - 2\min\{d'(w,f\cdot w), d'(f\cdot x, f\cdot w), d'(f\cdot w,z)\},$$ where $z = Y(w,f\cdot w, (fg)\cdot w)$. Also, $$d'(f\cdot w,z) = \frac{1}{2}(d'(f\cdot w,w) + d'(f\cdot w,(fg)\cdot w)- d'(w,(fg)\cdot w))$$ $$ = \frac{1}{2}(|f|+|g|-|fg|) = c(f^{-1},g).$$ Since $d'(w,f\cdot w) = |f|$ and $d'(f\cdot x, f\cdot w) = \alpha$, we get $\min\{d'(w,f\cdot w), d'(f\cdot x, f\cdot w), d'(f\cdot w,z)\} = d'(f\cdot w,z) = c(f^{-1},g)$, and $$d'(w,f\cdot x) = |f| + \alpha - 2c(f^{-1},g).$$ \end{enumerate} \smallskip {\bf Claim 3.} $\mu$ is unique. \smallskip Observe that if $\mu' : \Gamma_G \rightarrow Z$ is another equivariant isometry such that $\mu'(\epsilon) = w$ then for every $g \in G$ we have $$\mu'(\langle |g|,g\rangle) = \mu'(g\cdot \langle 0,1\rangle) = g \cdot \mu'(\langle 0,1\rangle) = g\cdot w.$$ That is, $\mu'$ agrees with $\mu$ on the orbit $G\cdot \epsilon$; since every point of $\Gamma_G$ lies on a segment $[\epsilon, g \cdot \epsilon]$ and isometries preserve geodesic segments, $\mu = \mu'$. Moreover, $\mu(\Gamma_G)$ is the subtree of $Z$ spanned by $G \cdot w$. \end{proof} \section{Examples} \label{sec:examples} Here we consider two examples of subgroups of $CDR(\Lambda, X)$, where $\Lambda = \mathbb{Z}^2$ and $X$ an arbitrary alphabet, and explicitly construct the corresponding universal trees for these groups. \begin{example} \label{ex:1} Let $F = F(X)$ be a free group with basis $X$ and the standard length function $|\cdot|$, and let $u \in F$ be a cyclically reduced element which is not a proper power. Assume that $\mathbb{Z}^2 = \langle 1, t \rangle$ is the additive group of linear polynomials in $t$, ordered lexicographically. Then the HNN-extension $$G = \langle F, s \mid u^s = u \rangle$$ embeds into $CDR(\mathbb{Z}^2, X)$ under the following map $\phi$: $$\phi(x) = x,\ \forall\ x \in X,$$ \[ \mbox{$\phi(s)(\beta)$} = \left\{ \begin{array}{ll} \mbox{$u(\alpha)$,} & \mbox{if $\beta = m |u| + \alpha, m \geqslant 0, 1 \leqslant \alpha \leqslant |u|$,} \\ \mbox{$u(\alpha)$,} & \mbox{if $\beta = t - m |u| + \alpha, m > 0, 1 \leqslant \alpha \leqslant |u|$.} \end{array} \right.
\] \begin{figure}[htbp] \centering{\mbox{\psfig{figure=ex_1.eps,height=1.8in}}} \caption{$\Gamma_G$ as a $\mathbb{Z}$-tree of $\mathbb{Z}$-trees.} \label{pic2} \end{figure} It is easy to see that $|\phi(s)| = t$ and $\phi(s)$ commutes with $u$ in $CDR(\mathbb{Z}^2, X)$. To simplify the notation we identify $G$ with its image $\phi(G)$. Every element $g$ of $G$ can be represented as the following reduced $\mathbb{Z}^2$-word $$g = g_1 \circ s^{\delta_1} \circ g_2 \circ \cdots \circ g_k \circ s^{\delta_k} \circ g_{k+1},$$ where $[g_i, u] \neq 1$ for $2 \leqslant i \leqslant k$. Now, according to the construction described in Section \ref{sec:universal}, the universal tree $\Gamma_G$ consists of the segments in $\mathbb{Z}^2$ labeled by elements from $G$ which are glued together along their common initial subwords. Thus, $\Gamma_G$ can be viewed as a $\mathbb{Z}$-tree of $\mathbb{Z}$-trees which are Cayley graphs of $F(X)$, and every vertex $\mathbb{Z}$-subtree can be associated with a representative of a coset of $F$ in $G$. The end-points of the segments $[1, |g|]$ and $[1, |h|]$ labeled respectively by $g$ and $h$ belong to the same vertex $\mathbb{Z}$-subtree if and only if $h^{-1} g \in F$. \begin{figure}[htbp] \centering{\mbox{\psfig{figure=ex_2.eps,height=2in}}} \caption{Adjacent $\mathbb{Z}$-subtrees in $\Gamma_G$.} \label{pic3} \end{figure} In other words, $\Gamma_G$ is a ``more detailed'' version of the Bass-Serre tree $T$ for $G$, in which every vertex is replaced by the Cayley graph of the base group $F$, and the adjacent $\mathbb{Z}$-subtrees of $\Gamma_G$ corresponding to the representatives $g$ and $h$ are ``connected'' by means of $s^{\pm 1}$, which extends $g \cdot Axis(u)$ to $h' \cdot Axis(u)$, where $h'^{-1} h \in F$ and $g^{-1} h \in s^{\pm 1} F$. \end{example} The following example is a generalization of the previous one. \begin{example} \label{ex:2} Let $F = F(X)$ be a free group with basis $X$ and the standard length function $|\cdot|$, and let $u, v \in F$ be cyclically reduced elements which are not proper powers and such that $|u| = |v|$. The HNN-extension $$H = \langle F, s \mid u^s = v \rangle$$ embeds into $CDR(\mathbb{Z}^2, X)$ under the following map $\psi$: $$\psi(x) = x,\ \forall\ x \in X,$$ \[ \mbox{$\psi(s)(\beta)$} = \left\{\begin{array}{ll} \mbox{$u(\alpha)$,} & \mbox{if $\beta = m |u| + \alpha, m \geqslant 0, 1 \leqslant \alpha \leqslant |u|$,} \\ \mbox{$v(\alpha)$,} & \mbox{if $\beta = t - m |v| + \alpha, m > 0, 1 \leqslant \alpha \leqslant |v|$.} \end{array} \right. \] It is easy to see that $|\psi(s)| = t$ and $u \circ \psi(s) = \psi(s) \circ v$ in $CDR(\mathbb{Z}^2, X)$. Again, to simplify the notation we identify $H$ with its image $\psi(H)$. The structure of $\Gamma_H$ is basically the same as the structure of $\Gamma_G$ in Example \ref{ex:1}. The only difference is that the adjacent $\mathbb{Z}$-subtrees of $\Gamma_H$ corresponding to the representatives $g$ and $h$ are ``connected'' by means of $s^{\pm 1}$, which extends $g \cdot Axis(u)$ to $h' \cdot Axis(v)$, where $h'^{-1} h \in F$ and $g^{-1} h \in s^{\pm 1} F$. \begin{figure}[htbp] \centering{\mbox{\psfig{figure=ex_3.eps,height=2in}}} \caption{Adjacent $\mathbb{Z}$-subtrees in $\Gamma_H$.} \label{pic4} \end{figure} \end{example} \section{Effective $\Lambda$-trees} \label{se:effect} In this section we introduce some basic notions concerning effectiveness when dealing with groups of infinite words and corresponding universal trees.
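As a first illustration, the generator $\phi(s)$ of Example \ref{ex:1} is an infinite word given by an explicitly computable rule. The sketch below is purely illustrative; the encoding of $\beta \in \mathbb{Z}^2$ as a pair \texttt{(q, r)} standing for $qt + r$, and the helper name, are ours.
\begin{verbatim}
# Sketch: phi(s) from Example 1 as a computable function.  The domain
# of phi(s) is the segment [1, t] of Z^2 = <1, t>, ordered
# lexicographically; beta is encoded as (q, r), meaning q*t + r.

def phi_s(beta, u):
    q, r = beta
    if q == 0 and r >= 1:
        # beta = m|u| + alpha with m >= 0, 1 <= alpha <= |u|
        return u[(r - 1) % len(u)]
    if q == 1 and r <= 0:
        # beta = t - m|u| + alpha with m > 0, 1 <= alpha <= |u|
        return u[(r - 1) % len(u)]
    raise ValueError("beta does not lie in the interval [1, t]")

u = "abA"                # a cyclically reduced word, written as a string
print(phi_s((0, 1), u))  # 'a': near 1 the word reads u u u ...
print(phi_s((1, 0), u))  # 'A': near t it reads ... u u u
\end{verbatim}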
\subsection{Infinite words viewed as computable functions} \label{subs:comp_func} We say that a group $G = \langle Y \rangle,\ Y = \{y_1, \ldots, y_m\}$ has an {\em effective representation by $\Lambda$-words over an alphabet $X$} if $G \subset CDR(\Lambda,X)$ and \begin{enumerate} \item[(ER1)] each function $y_i : [1,|y_i|] \to X^\pm$ is computable, that is, one can effectively determine $y_i(\alpha)$ for every $\alpha \in [1,|y_i|]$ and $i \in [1,m]$, \item[(ER2)] for every $i,j \in [1,m]$ and every $\alpha_i \in [1, |y_i|],\ \alpha_j \in [1, |y_j|]$ one can effectively compute $c(h_i,h_j)$, where $h_i = y_i^{\pm 1} \mid_{[\alpha_i,|y_i|]},\ h_j = y_j^{\pm 1} \mid_{[\alpha_j,|y_j|]}$. \end{enumerate} Observe that, since every $y_i$ is computable, $y_i^{-1}$ is computable too for every $i \in [1,m]$. Next, it is obvious that concatenation of computable functions is computable, as is the restriction of a computable function to a computable domain. Thus, if $g_i \ast g_j = h_i \circ h_j$, where $g_i = y_i^{\delta_i} = h_i \circ c,\ g_j = y_j^{\delta_j} = c^{-1} \circ h_j,\ \delta_i, \delta_j = \pm 1$, then both $h_i$ and $h_j$ are computable as restrictions $h_i = g_i \mid_{[1,\alpha]},\ h_j = g_j \mid_{[\alpha + 1,|g_j|]}$ for $\alpha = |c| = c(g_i^{-1},g_j)$, and so is $g_i \ast g_j$. Now, using (ER2) twice we can determine $c((g_i \ast g_j)^{-1}, g_k)$, where $g_k = y_k^{\delta_k}, \ \delta_k = \pm 1$. Indeed, $c((g_i \ast g_j)^{-1}, g_k) = c(h_j^{-1} \circ h_i^{-1}, g_k)$, so, if $c(h_j^{-1}, g_k) < |h_j^{-1}|$ then $c((g_i \ast g_j)^{-1}, g_k) = c(h_j^{-1}, g_k)$, which is computable by (ER2), and if $c(h_j^{-1}, g_k) \geqslant |h_j^{-1}|$ then $c((g_i \ast g_j)^{-1}, g_k) = |h_j| + c(h_i^{-1}, h_k)$, where $h_k = g_k \mid_{[|h_j|+1, |g_k|]}$ -- again, all components are computable and so is $c((g_i \ast g_j)^{-1}, g_k)$. It follows that $y_i^{\pm 1} \ast y_j^{\pm 1} \ast y_k^{\pm 1}$ is a computable function for every $i,j,k \in [1,m]$. Continuing in the same way by induction, one can show that every finite product of elements from $Y^{\pm 1}$, that is, every element of $G$ given as a finite product of generators and their inverses, is computable as a function from a computable segment in $\Lambda$ to $X^\pm$. Moreover, for any $g,h \in G$ one can effectively find $com(g,h)$ as a computable function. In particular, we automatically get a solution to the Word Problem in $G$ provided $G$ has an effective representation by $\Lambda$-words over an alphabet $X$. \subsection{Computable universal trees} \label{subs:comp_unive_trees} Suppose $G$ has an effective representation by $\Lambda$-words over an alphabet $X$ and let $\Gamma_G$ be the universal $\Lambda$-tree for $G$. According to the construction in Section \ref{sec:universal}, every point of $\Gamma_G$ can be viewed as a pair $(\alpha, g)$, where $g \in G$ and $\alpha \in [0, |g|]$. Such a pair is not unique, but given another pair $(\beta, f)$ we can effectively find out if both pairs define the same point of $\Gamma_G$. Indeed, $(\alpha, g) \sim (\beta, f)$ if and only if $\alpha = \beta \in [0, c(f,g)]$, and $c(f,g)$ can be found effectively.
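In the toy free-group model ($\Lambda = \mathbb{Z}$, reduced words as strings), this equivalence test is immediate; the following illustrative sketch (helper names ours) makes the point:
\begin{verbatim}
# Sketch: deciding whether (alpha, g) and (beta, f) represent the same
# point of Gamma_G, via (alpha, g) ~ (beta, f) iff alpha = beta <= c(f, g).

def c(u, v):
    k = 0
    while k < min(len(u), len(v)) and u[k] == v[k]:
        k += 1
    return k

def same_point(alpha, g, beta, f):
    return alpha == beta and alpha <= c(f, g)

print(same_point(1, "ab", 1, "aB"))   # True:  both represent <1, a>
print(same_point(2, "ab", 2, "aB"))   # False: the two segments diverge
\end{verbatim}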
Next, according to the definition given in Section \ref{sec:universal}, for $f \in G$ and $(\alpha, g)$ representing a point in $\Gamma_G$, the image of $(\alpha, g)$ is defined as follows: $$f \cdot (\alpha, g) = (|f| + \alpha - 2 \min\{\alpha, c(f^{-1}, g)\}, f)$$ if $\alpha \leqslant c(f^{-1},g)$, and $$f \cdot (\alpha, g) = (|f| + \alpha - 2 \min\{\alpha, c(f^{-1}, g)\}, f \ast g)$$ if $\alpha > c(f^{-1}, g)$. Since $G$ has an effective representation by $\Lambda$-words, it follows that $c(f^{-1},g)$ can be found effectively and $f \ast g$ is a computable function. Thus, $f \cdot (\alpha, g)$ can be determined effectively. Summarizing the discussion above, we obtain the following result. \begin{theorem} \label{th:effect} Let $G$ be a finitely generated group which has an effective representation by $\Lambda$-words over an alphabet $X$ and let $\Gamma_G$ be the universal $\Lambda$-tree for $G$. Then \begin{enumerate} \item[(1)] $\Gamma_G$ can be constructed effectively in the sense that there exists a procedure (infinite in general) which builds $\Gamma_G$ and every point of $\Gamma_G$ appears in the process after finitely many steps, \item[(2)] the action of $G$ on $\Gamma_G$ is effective in the sense that given a representative of a point $v \in \Gamma_G$ and an element $g \in G$, one can effectively compute a representative of $g \cdot v$. \end{enumerate} \end{theorem} \begin{proof} Let $G = \langle Y \rangle$, where $Y$ is finite. The procedure building $\Gamma_G$ enumerates all finite words in the alphabet $Y^{\pm 1}$ and finds the corresponding computable functions. According to the construction of Section \ref{sec:universal}, every point of $\Gamma_G$ can be viewed as a pair $(\alpha, g)$, where $g \in G$ and $\alpha \in [0, |g|]$, so eventually every point of $\Gamma_G$ appears in the process. This concludes the proof of (1). \smallskip Finally, (2) follows from the discussion preceding the theorem. \end{proof}
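To give a concrete feel for the procedure in part (1), here is a toy sketch for a free group (so $\Lambda = \mathbb{Z}$ and every segment $[0, |g|]$ is finite). It is illustrative only: representatives of the same point of $\Gamma_G$ may be emitted more than once, since the de-duplication below is on pairs rather than on $\sim$-classes.
\begin{verbatim}
# Sketch: enumerating points of Gamma_G for G = F(a, b), Lambda = Z.
from itertools import count, product

def reduce_word(letters):
    out = []
    for x in letters:
        if out and out[-1] == x.swapcase():
            out.pop()                 # cancel x^{-1} x
        else:
            out.append(x)
    return "".join(out)

def points():
    seen = set()
    for n in count(0):                        # enumerate all finite words
        for w in product("abAB", repeat=n):
            g = reduce_word(w)
            for alpha in range(len(g) + 1):   # all points (alpha, g)
                if (alpha, g) not in seen:
                    seen.add((alpha, g))
                    yield (alpha, g)

gen = points()
print([next(gen) for _ in range(5)])
# [(0, ''), (0, 'a'), (1, 'a'), (0, 'b'), (1, 'b')]
\end{verbatim}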
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction}\label{Sintro} The \emph{product formula} in algebraic number theory states that, given an algebraic number $x \ne 0$ in a number field $K$, the product of $|x|$ as $|\cdot|$ ranges over all inequivalent absolute values of $K$ (appropriately normalized) is equal to $1$. In logarithmic form, the sum of $\log |x|$ as $|\cdot|$ ranges over all inequivalent absolute values is $0$. In \cite{Co:pv}, Colmez asked whether an analogous product formula might hold for periods of algebraic varieties, and conjectured that it would hold for periods of abelian varieties with complex multiplication (CM-abelian varieties). He proved that, for abelian varieties with complex multiplication by \emph{abelian} extensions of $\mathbb Q$, such a product formula holds (in logarithmic form) up to an (unknown) rational multiple of $\log 2$ (\cite[Th\'{e}or\`{e}me 0.5 and discussion after Conjecture 0.4]{Co:pv}). A key step in this proof was provided by work of Coleman and McCallum (\cite{CM:sr}, \cite{Co:fm}) on understanding stable models of quotients of Fermat curves in mixed characteristic $(0,p)$, where $p$ is an odd prime. These quotients are $\mathbb Z/p^n$-covers of the projective line, branched at three points. The unknown rational multiple of $\log 2$ was necessary in \cite{Co:pv} precisely because the stable models of $\mathbb Z/2^n$-covers of the projective line, branched at three points, in mixed characteristic $(0, 2)$, were not well-understood at the time. This problem was solved by the author in \cite{Ob:fm}, where a complete description of the stable models of such covers was given. In this paper, we use the results of \cite{Ob:fm} to complete the proof of Colmez's product formula for abelian extensions of $\mathbb Q$ by eliminating the multiple of $\log 2$ in question. Colmez first looks at the example of $2 \pi i$, which is a period for the variety $\mathbb G_m$, rather than for an abelian variety. For each prime $p$, one can view $2\pi i$ as an element $t_p$ of Fontaine's ring of periods $\mathbf{B}_p$, and its $p$-adic absolute value is $|t_p|_p = p^{1/(1-p)}$. The archimedean absolute value $|\cdot|_{\infty}$ is the standard one, so $|2 \pi i|_{\infty} = 2\pi$. The logarithm of the product of all of these absolute values is $$\log 2\pi - \sum_{p < \infty} \frac{\log p}{p-1}.$$ This sum does not converge, but formally, it is equal to $\log 2\pi - \frac{\zeta'(1)}{\zeta(1)}$, where $\zeta$ is the Riemann zeta function. Using the functional equation of $\zeta$ (and ignoring the $\Gamma$ factors), we obtain $\log 2 \pi - \frac{\zeta'(0)}{\zeta(0)}$, which is equal to $0$ (indeed, $\zeta(0) = -\frac{1}{2}$ and $\zeta'(0) = -\frac{1}{2}\log 2\pi$, so $\frac{\zeta'(0)}{\zeta(0)} = \log 2\pi$). In this sense, we can say that the product formula holds for $2 \pi i$. The above method can be adapted to give a definition of what it means to take the logarithm of the product of all the absolute values of a period, and thus to give the product formula a meaning. Many subtleties arise, and the excellent and thorough introduction to \cite{Co:pv} discusses them in detail. We will not attempt to recreate this discussion. Instead, we will just note that Colmez shows that the product formula for periods of abelian varieties with complex multiplication by an abelian extension of $\mathbb Q$ (in logarithmic form) is equivalent to the formula \begin{equation}\label{Eproductformula} ht(a) = Z(a^*, 0) \end{equation} for all $a \in \mc{CM}^{ab}$ (\cite[Th\'{e}or\`{e}me II.2.12(iii)]{Co:pv}).
Here, $\mc{CM}^{ab}$ is the vector space of $\mathbb Q$-valued, locally constant functions $a: \text{Gal}(\mathbb Q^{ab}/\mathbb Q) \to \mathbb Q$ such that, if $c$ represents complex conjugation, then $a(g) + a(cg)$ does not depend on $g \in G_{\mathbb Q}$. Such a function can be decomposed into a $\mathbb C$-linear combination of Dirichlet characters whose L-functions do not vanish at $0$. If $a \in \mc{CM}^{ab}$ then we define $a^* \in \mc{CM}^{ab}$ by $a^*(g) = a(g^{-1})$. Also, $Z(\cdot, 0)$ is the unique $\mathbb C$-linear function on $\mc{CM}^{ab} \otimes \mathbb C$ equal to $\frac{L'(\chi, 0)}{L(\chi, 0)}$ when its argument is a Dirichlet character $\chi$ whose $L$-function does not vanish at $0$. Lastly, $ht(\cdot)$ is a $\mathbb C$-linear function on $\mc{CM}^{ab} \otimes \mathbb C$ related to Faltings heights of abelian varieties (see \cite[Th\'{e}or\`{e}me 0.3]{Co:pv} for a precise definition, also \cite{Ya:cs}). Colmez shows (\cite[Proposition III.1.2, Remarque on p.\ 676]{Co:pv}) that \begin{equation}\label{Eerror} Z(a^*, 0) - ht(a) = \sum_{p \text{ prime}} w_p(a) \log p, \end{equation} where $w_p: \mc{CM}^{ab} \to \mathbb Q$ is a $\mathbb Q$-linear function (depending on $p$) that will be defined in \S\ref{Sfrob}. He then further shows that $w_p(a) = 0$ for all $p \geq 3$ and all $a \in \mc{CM}^{ab}$ (\cite[Corollaire III.2.7]{Co:pv}). Thus (\ref{Eproductformula}) is correct up to adding a rational multiple of $\log 2$. Our main theorem (Theorem \ref{Tmain}) states that $w_2(a) = 0$ for all $a \in \mc{CM}^{ab}$, thus proving (\ref{Eproductformula}). We note that, in light of the expression (\ref{Eproductformula}), Colmez's formula is fundamentally about relating periods of CM-abelian varieties to logarithmic derivatives of L-functions. That this can be expressed as a product formula is aesthetically pleasing, but the main content is encapsulated by (\ref{Eproductformula}). In \S\ref{Sfrob}, we define $w_p$ and show how it is related to De Rham cohomology of Fermat curves. In \S\ref{Smonodromy}, we write down the important properties of the stable model of a certain quotient of the Fermat curve $F_{2^n}$ of degree $2^n$ ($n \geq 2$) over $\mathbb Q_2$, and we discuss the monodromy action on the stable reduction. In \S\ref{Sdiffforms}, we show how knowledge of this stable model, along with the monodromy action, allows us to understand the Galois action on the De Rham cohomology of $F_{2^n}$. In \S\ref{Sformula}, we show how this is used to prove that $w_2(a) = 0$. Lastly, in \S\ref{Scomputations}, we collect some technical power series computations that are used in \S\ref{Sdiffforms}, but would interrupt the flow of the paper if included there. \subsection{Conventions}\label{Sconventions} The letter $p$ always represents a prime number. If $x \in \mathbb Q/\mathbb Z$, then $\fracpart{x}$ is the unique representative for $x$ in the interval $[0, 1)$. The standard $p$-adic valuation on $\mathbb Q$ is denoted $v_p$, and the subring $\mathbb Z_{(p)} \subseteq \mathbb Q$ consists of the elements $x \in \mathbb Q$ with $v_p(x) \geq 0$. If $K$ is a field, then $\ol{K}$ is its algebraic closure and $G_K$ is its absolute Galois group. \section{Galois actions on De Rham cohomology}\label{Sfrob} The purpose of this section is to define the function $w_p(a)$ from (\ref{Eerror}). In order to make this definition, one must first consider a particular rational factor of the Jacobian of the $m$th Fermat curve (where $m$ is related to $a$).
This factor will have complex multiplication, and we will choose a De Rham cohomology class that is an eigenvector for this complex multiplication. One can then define the ``$p$-adic valuation'' of such a cohomology class, and this valuation essentially determines $w_p(a)$. Recall that the action of $G_{\mathbb Q}$ on roots of unity gives a homomorphism $\chi: G_{\mathbb Q} \to \hat{\mathbb Z}^{\times}$. This factors through $\text{Gal}(\mathbb Q^{ab}/\mathbb Q)$, giving an isomorphism $\text{Gal}(\mathbb Q^{ab}/\mathbb Q) \cong \hat{\mathbb Z}^{\times}$, called the \emph{cyclotomic character}. Multiplication by the cyclotomic character gives a well-defined action of $G_{\mathbb Q}$ on $\mathbb Q/\mathbb Z$, factoring through $\text{Gal}(\mathbb Q^{ab}/\mathbb Q)$. The following definitions are from \cite[III]{Co:pv}. Recall that $\mc{CM}^{ab}$ is the vector space of $\mathbb Q$-valued, locally constant functions $a: \text{Gal}(\mathbb Q^{ab}/\mathbb Q) \to \mathbb Q$ such that, if $c$ represents complex conjugation, then $a(g) + a(cg)$ does not depend on $g \in G_{\mathbb Q}$ (\cite[p.\ 627]{Co:pv}). For $r \in \mathbb Q/\mathbb Z$, define an element $a_r \in \mc{CM}^{ab}$ by $$a_r(g) = \fracpart{gr} - \frac{1}{2}.$$ One can show that the $a_r$ generate $\mc{CM}^{ab}$ as a $\mathbb Q$-vector space. For $r \in \mathbb Q/\mathbb Z$, set $v_p(r) = \min(v_p(\fracpart{r}), 0)$, and set \begin{equation}\label{Epdef} r_{(p)} = p^{-v_p(r)}r \in \mathbb Q/\mathbb Z. \end{equation} Set $$V_p(r) = \begin{cases} 0 & r \in \mathbb Z_{(p)}/\mathbb Z \\ (\fracpart{r} - \frac{1}{2})v_p(r) - \frac{1}{(p-1)p^{-v_p(r)-1}}(\fracpart{\frac{r_{(p)}}{p}} - \frac{1}{2}) & \text{otherwise,} \end{cases}$$ where $\frac{r_{(p)}}{p}$ is the unique element of $\mathbb Z_{(p)}/\mathbb Z$ such that $\frac{r_{(p)}}{p} \cdot p = r_{(p)}$. Let $q = (\rho, \sigma, \tau) \in (\mathbb Q/\mathbb Z)^3$ be such that $\rho+\sigma+\tau = 0$ and none of $\rho$, $\sigma$, or $\tau$ is $0$. Let $m$ be a positive integer such that $m\rho = m\sigma = m\tau = 0$. Let $\epsilon_q = \fracpart{\rho} + \fracpart{\sigma} + \fracpart{\tau} - 1$. Let $F_m$ be the $m$th Fermat curve, that is, the smooth, proper model of the affine curve over $\mathbb Q$ given by $u^{m} + v^{m} = 1$, and let $J_m$ be its Jacobian. Write $\fracpart{\rho} = \frac{a}{m}$ and $\fracpart{\sigma} = \frac{b}{m}$. Consider the closed differential form $$\eta_{m, q} := m\fracpart{\rho+\sigma}^{\epsilon_q}u^av^b\frac{v}{u}d\left(\frac{u}{v}\right)$$ on $F_m$. We can view its De Rham cohomology class as a class $\omega_{m, q} \in H^1_{DR}(J_m) \cong H^1_{DR}(F_m)$ over $\mathbb Q$. It turns out that there is a particular rational factor $J_q$ of $J_m$ with complex multiplication, and a class $\omega_q \in H^1_{DR}(J_q)$, such that the pullback of $\omega_q$ to $J_m$ is $\omega_{m, q}$. Furthermore, $\omega_q$ is an eigenvector for the complex multiplication on $J_q$. As is suggested by the notation, the pair $(J_q, \omega_q)$ depends only on $q$, not on $m$, up to isomorphism (\cite[p.\ 674]{Co:pv}). Now, $G_{\mathbb Q}$ acts diagonally on $(\mathbb Q/\mathbb Z)^3$ by the cyclotomic character. If $\gamma \in G_{\mathbb Q}$, then $J_q = J_{\gamma q}$ (\cite[p.\ 674]{Co:pv}). Fix an embedding $\ol{\mathbb Q} \hookrightarrow \ol{\mathbb Q}_p$, which gives rise to an embedding $G_{\mathbb Q_p} \hookrightarrow G_{\mathbb Q}$.
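Before continuing, we remark that the quantities $\fracpart{r}$, $v_p(r)$, $r_{(p)}$, and $V_p(r)$ defined above are elementary to compute with exact rational arithmetic. The following sketch is purely illustrative (the helper names are ours); the point to notice is that $\frac{r_{(p)}}{p}$ is computed by inverting multiplication by $p$ on $\mathbb Z_{(p)}/\mathbb Z$.
\begin{verbatim}
# Sketch: <r>, v_p(r), r_(p), and V_p(r) for r in Q/Z, using Fractions.
from fractions import Fraction

def frac(r):
    """<r>: the representative of r in [0, 1)."""
    return r - (r.numerator // r.denominator)

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def vp_QZ(r, p):
    """v_p(r) = min(v_p(<r>), 0), with the convention v_p(0) = 0."""
    fr = frac(r)
    return 0 if fr == 0 else min(vp(fr, p), 0)

def div_by_p(r, p):
    """The unique x in Z_(p)/Z with p*x = r, for r in Z_(p)/Z."""
    a, m = frac(r).numerator, frac(r).denominator   # gcd(m, p) = 1
    return Fraction(a * pow(p, -1, m) % m, m) if m > 1 else Fraction(0)

def Vp(r, p):
    v = vp_QZ(r, p)
    if v == 0:
        return Fraction(0)                          # r lies in Z_(p)/Z
    r_p = frac(Fraction(p) ** (-v) * r)             # r_(p)
    return (frac(r) - Fraction(1, 2)) * v \
        - Fraction(1, (p - 1) * p ** (-v - 1)) * (div_by_p(r_p, p)
                                                  - Fraction(1, 2))

print(Vp(Fraction(3, 8), 2))   # 1/2; note r_(2) = 0 whenever <r> has
                               # 2-power denominator
\end{verbatim}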
If $\gamma$ lies in the inertia group $I_{\mathbb Q_p} \subseteq G_{\mathbb Q_p} \subseteq G_{\mathbb Q}$, then $\gamma$ acts on $J_q$, and thus on $H^1_{DR}(J_q, \ol{\mathbb Q}_p)$. We have $\gamma^*\omega_q = \beta_{\gamma}(q) \omega_{\gamma q}$ where the constant $\beta_{\gamma}(q)$ lies in some finite extension of $\mathbb Q_p$ (\cite[pp.\ 676-7]{Co:pv}). We also note that $I_{\mathbb Q_p}$ acts on $H^1_{DR}(F_m, \ol{\mathbb Q}_p) \cong H^1_{DR}(J_m, \ol{\mathbb Q}_p)$ via its action on $F_m$. One derives \begin{equation}\label{Epullback} \gamma^*\omega_{m, q} = \beta_{\gamma}(q) \omega_{m, \gamma q}. \end{equation} If $K$ is a $p$-adic field with a valuation $v_p$, then there is a notion of $p$-adic valuation of $\omega \in H^1_{DR}(A)$ whenever $A$ is a CM-abelian variety defined over $K$ and $\omega$ is an eigenvector for the complex multiplication (\cite[p.\ 659]{Co:pv}---note that $\omega_q$ is such a class). By abuse of notation, we also write this valuation as $v_p$. It has the property that, if $c \in K$, then $v_p(c\omega) = v_p(c) + v_p(\omega)$. \begin{lemma}\label{Laux} If $\gamma \in I_{\mathbb Q_p}$, then $v_p(\omega_q) - v_p(\omega_{\gamma q}) = v_p(\beta_{\gamma}(q))$. \end{lemma} \begin{proof} By \cite[Th\'{e}or\`{e}me II.1.1]{Co:pv}, we have $v_p(\omega_q) = v_p(\gamma^*\omega_q)$. The lemma then follows from the definition of $\beta_{\gamma}(q)$. \end{proof} Let $b_q = a_{\rho} + a_{\sigma} + a_{\tau} \in \mc{CM}^{ab}$. There is a unique linear map $w_p: \mc{CM}^{ab} \to \mathbb Q$ such that $$w_p(b_q) = v_p(\omega_q) - V_p(q),$$ where $V_p(q) := V_p(\rho) + V_p(\sigma) + V_p(\tau)$ (\cite[Corollaire III.2.2]{Co:pv}). This is the map $w_p$ from (\ref{Eerror}). Recall from \S\ref{Sintro} that Colmez showed $w_p(a) = 0$ for all $p \geq 3$ and all $a \in \mc{CM}^{ab}$. In Theorem \ref{Tmain}, we will show that $w_2(a) = 0$ for all $a \in \mc{CM}^{ab}$. \section{Fermat curves}\label{Sfermat} In general, a branched Galois cover $f: Y \to X := \mathbb P^1$ defined over $\mathbb Q_p$ does not necessarily have good reduction. However, assuming that $2g(X) + r \geq 3$ (where $r$ is the number of branch points), one can always find a finite extension $K/\mathbb Q_p$ with valuation ring $R$, and a \emph{stable model} $f^{st}: Y^{st} \to X^{st}$ for the cover (i.e., $f^{st}$ is a finite map of flat $R$-curves whose generic fiber is $f$, and where $Y^{st}$ has reduced, stable fibers, considering the specializations of the ramification points of $f$ as marked points). The special fiber $\ol{f}: \ol{Y} \to \ol{X}$ of $f^{st}$ is called the \emph{stable reduction} of the cover. Furthermore, there is an action of $G_{\mathbb Q_p}$ on $\ol{f}$ (called the \emph{monodromy action}), given by reducing its canonical action on $f$, and this action factors through $\text{Gal}(K/\mathbb Q_p)$. For more details, see \cite{DM}, \cite{Ra}, \cite{Liu}. Calculating the stable reduction and monodromy action of a cover can be difficult, even when the Galois group is simple (see, e.g., \cite{LM}, where Lehr and Matignon calculate the stable reduction and monodromy action of $\mathbb Z/p$-covers branched at arbitrarily many equidistant points). Restricting to three branch points can simplify matters. A major result of Coleman and McCallum (\cite{CM:sr}) calculated the stable reduction of all cyclic covers of $\mathbb P^1$ branched at three points, when $p \neq 2$. From this, the monodromy action was calculated in \cite{Co:fm}, which sufficed to prove Colmez's product formula up to the factor of $\log 2$.
The case $p=2$ (for three-point covers) is somewhat more complicated, and requires new techniques by the author in \cite{Ob:fm}. Enough details are given in \cite{Ob:fm} to calculate the monodromy action explicitly, which we do to the extent we need to in \S\ref{Smonodromy}. The work in \S\ref{Sdiffforms} and \S\ref{Sformula} mimics the work in \cite{Co:fm} and \cite{Co:pv}, respectively, to show how a knowledge of the monodromy action leads to a proof of the product formula. \subsection{The monodromy action}\label{Smonodromy} Fix $n \geq 2$. Let $f:Y \to X := \mathbb P^1$ be the branched cover given birationally by the equation $y^{2^n} = x^a(x-1)^b$, defined over $\mathbb Q_2$, where $x$ is a fixed coordinate on $\mathbb P^1$. Assume for this entire section that $a$ is odd, that $1 \leq v_2(b) \leq n-2$, and that $0 < a, b < 2^n$. Set $s = n-v_2(b)$ (this makes $2^s$ the branching index of $f$ at $x=1$). Thus $s \geq 2$. Let $K/\mathbb Q_2$ be a finite extension, with valuation ring $R$, over which $f$ admits a stable model $f^{st}: Y^{st} \to X^{st}$. Let $k$ be the residue field of $K$. We write $I_{\mathbb Q_2} \subseteq G_{\mathbb Q_2}$ for the inertia group. Let $\ol{f}: \ol{Y} \to \ol{X}$ be the special fiber of $f^{st}$ (called the \emph{stable reduction} of $f$). We focus on the monodromy action of the inertia group $I_{\mathbb Q_2}$ on $\ol{f}$. Throughout this section we write $v$ for the valuation on $K$ satisfying $v(2) = 1$. We will allow finite extensions of $K$ as needed. The following proposition is the result that underlies our entire computation. \begin{proposition}[\cite{Ob:fm}, Lemma 7.8]\label{P2fermatreduction} \begin{enumerate}[(i)] \item There is exactly one irreducible component $\ol{X}_b$ of $\ol{X}$ above which $\ol{f}$ is generically \'{e}tale. \item Furthermore, $\ol{f}$ is \'{e}tale above $\ol{X}_b^{sm}$, i.e., the smooth points of $\ol{X}$ that lie on $\ol{X}_b$. \item Let $$d = \frac{a}{a+b} + \frac{\sqrt{2^nbi}}{(a+b)^2}.$$ and extend $K$ finitely (if necessary) so that $K$ contains $d$, as well as an element $e$ such that $v(e) = n - \frac{s}{2} + \frac{1}{2}$. Here, $i$ can be either square root of $-1$ and $\sqrt{2^nbi}$ can be either square root of $2^nbi$. Then, in terms of the coordinate $x$, the $K$-points of $X$ that specialize to $\ol{X}_b^{sm}$ form a closed disc of radius $|e|$ centered at $d$. \item For each $k$-point $\ol{u}$ of $\ol{X}_b^{sm}$, the $K$-points of $X$ that specialize to $\ol{u}$ form an open disc of radius $|e|$. \end{enumerate} \end{proposition} \begin{remark}\label{Rcoleman} The result \cite[Lemma 7.8]{Ob:fm} is more general, in that it proves an analogous statement when $2$ is replaced by any prime $p$. Such a result was already shown in \cite{Co:fm} when $p$ is an odd prime (with some restrictions in the case $p=3$). In \cite[Lemma 7.8]{Ob:fm}, $k$ is assumed to be algebraically closed, but as long as we restrict to $k$-points in Proposition \ref{P2fermatreduction}(iv), everything works. \end{remark} For any $K$-point $w$ in the closed disc from Proposition \ref{P2fermatreduction}(iii), write $\ol{w}$ for its specialization to $\ol{X}_b$, which is a $k$-point. For such a $w$, if $t$ is defined by $x = w + et$, then $\hat{\mc{O}}_{X^{st}, \ol{w}} = R[[t]],$ where $\hat{\mc{O}}_{X^{st}, \ol{w}}$ is the completion of the local ring $\mc{O}_{X^{st}, \ol{w}}$ at its maximal ideal. The variable $t$ is called a \emph{parameter} for $\hat{\mc{O}}_{X^{st}, \ol{w}}$. 
One thinks of $R[[t]]$ as the ring of functions on the open unit disc $|t| < 1$, which corresponds to the open disc $|x - w| < |e|$. For $\gamma \in I_{\mathbb Q_2}$, let $\chi(\gamma) \in \mathbb Z_2^{\times}$ be the cyclotomic character applied to $\gamma$. Maintain the notation $d$ from Proposition \ref{P2fermatreduction}. \begin{lemma}\label{Lgaloisaction} Fix $\gamma \in I_{\mathbb Q_2}$. Let $a'$ (resp.\ $b'$) be the integer between $0$ and $2^n -1$ congruent to $\chi(\gamma)a$ (resp.\ $\chi(\gamma)b$) modulo $2^n$. Let $$d' = \frac{a'}{a'+b'} + \frac{\sqrt{2^nb'i}}{(a'+b')^2}$$ (here $i$ is the same square root of $-1$ chosen in the definition of $d$, but $\sqrt{2^nb'i}$ can be either choice of square root). Then we have $\ol{\gamma(d)} = \ol{d'}$. \end{lemma} \begin{proof} We first claim that \begin{equation}\label{E0} d' \equiv \frac{a}{a+b} + \chi(\gamma)^{-3/2}\frac{\sqrt{2^nbi}}{(a+b)^2} \pmod{2^n}, \end{equation} where $\sqrt{2^nbi}$ is chosen as in the definition of $d$, as long as the square root of $\chi(\gamma)$ is chosen correctly. One verifies easily that \begin{equation}\label{E1} \frac{a'}{a'+b'} \equiv \frac{a}{a+b} \pmod{2^n}. \end{equation} One also sees easily that \begin{equation}\label{E2} \frac{\sqrt{2^nb'i}}{(a'+b')^2} \equiv \frac{1}{\chi(\gamma)^2}\frac{\sqrt{2^nb'i}}{(a+b)^2} \pmod{2^n}. \end{equation} Now, \begin{equation}\label{E3} \sqrt{2^nb'i} = \sqrt{2^n(\chi(\gamma)b + r)i} = \chi(\gamma)^{1/2}\sqrt{2^nbi}\sqrt{1 + \frac{r}{\chi(\gamma)b}}, \end{equation} where $r$ is some integer divisible by $2^n$, where $\sqrt{1 + \frac{r}{\chi(\gamma)b}}$ is chosen to be no further from $1$ than from $-1$, where $\sqrt{2^nbi}$ is chosen as in the definition of $d$, and where $\chi(\gamma)^{1/2}$ is chosen to make the equality work. But $v\left(\frac{r}{\chi(\gamma)b}\right) \geq s$, and thus $$v\left(\sqrt{1 + \frac{r}{\chi(\gamma)b}} - 1 \right) \geq s-1 \geq \frac{s}{2}$$ (recall that we assume $s \geq 2$). Since $v(\sqrt{2^nbi}) = n - \frac{s}{2}$, it follows from (\ref{E3}) that $$\sqrt{2^nb'i} \equiv \chi(\gamma)^{1/2}\sqrt{2^nbi} \pmod{2^n}.$$ Combining this with (\ref{E1}) and (\ref{E2}) proves the claim. Now, $$\gamma(d) = \frac{a}{a+b} + \zeta_{\gamma} \frac{\sqrt{2^nbi}}{(a+b)^2},$$ where $\zeta_{\gamma}$ is a fourth root of unity that depends on $\gamma$. In particular, $$\zeta_{\gamma} = \begin{cases} \pm i & \chi(\gamma) \equiv 3 \pmod{4} \\ \pm 1 & \chi(\gamma) \equiv 1 \pmod{4}.\end{cases}$$ In both cases, one computes that $\zeta_{\gamma} \equiv \chi(\gamma)^{-3/2} \pmod{2}$. So \begin{equation}\label{E4} \gamma(d) \equiv \frac{a}{a+b} + \chi(\gamma)^{-3/2}\frac{\sqrt{2^nbi}}{(a+b)^2} \pmod{2^{n-\frac{s}{2} + 1}}. \end{equation} Combining this with (\ref{E0}), we obtain that $$\gamma(d) \equiv d' \pmod{2^{\min(n, n - \frac{s}{2} +1)}}.$$ Since $s \geq 2$, this implies $$\gamma(d) \equiv d' \pmod{2^{n - \frac{s}{2} + 1}}.$$ By Proposition \ref{P2fermatreduction}(iv), $\gamma(d)$ and $d'$ specialize to the same point, and we are done. \end{proof} \begin{remark}\label{Rdval} Note that $v(d) = v(d') = 0$ and $v(d-1) = v(d'-1) = n-s.$ \end{remark} Combining Proposition \ref{P2fermatreduction} and Lemma \ref{Lgaloisaction}, and using the definitions of $d$, $d'$, $e$, and $\gamma$ therein, we obtain: \begin{corollary}\label{Cparameters} If $x = d+et$, then $t$ is a parameter for $\mbox{Spec } \hat{\mc{O}}_{X^{st}, \ol{d}}$. Likewise, if $x = d' + et'$, then $t'$ is a parameter for $\mbox{Spec } \hat{\mc{O}}_{X^{st}, \ol{\gamma(d)}}$. 
\end{corollary} \subsection{Differential forms}\label{Sdiffforms} Maintain the notation of \S\ref{Smonodromy}, including $d$, $d'$, $e$, and $\gamma$. All De Rham cohomology groups will be assumed to have coefficients in $K$. As in \S\ref{Sfrob}, let $q = (\rho, \sigma, \tau) \in (\mathbb Q/\mathbb Z)^3,$ such that $\rho+\sigma+\tau = 0$. Furthermore, suppose $\fracpart{\rho} = \frac{a}{2^n}$ with $a$ odd and $\fracpart{\sigma} = \frac{b}{2^n}$ with $1 \leq v(b) \leq n-2$. Set $\epsilon_q = \fracpart{\rho} + \fracpart{\sigma} + \fracpart{\tau} - 1$. Let $F_{2^n}$ be the Fermat curve given by $u^{2^n} + v^{2^n} = 1$, defined over $\mathbb Q_2$, and let $J_{2^n}$ be its Jacobian. Let $\omega_{2^n, q}$ be the element of $H^1_{DR}(F_{2^n}) \cong H^1_{DR}(J_{2^n})$ given by the differential form $$\eta_{2^n, q} = 2^n\fracpart{\rho+\sigma}^{\epsilon_q}u^av^b\frac{v}{u}d\left(\frac{u}{v}\right).$$ Recall that this is the pullback of a cohomology class $\omega_q$ on a rational factor $J_q$ of $J_{2^n}$. One can rewrite $\eta_{2^n, q}$ as $$\fracpart{\rho+\sigma}^{\epsilon_q}u^{a-2^n}v^{b-2^n}d(u^{2^n})$$ (cf.\ \cite[(1.2)]{Co:fm}). Making the substitution $y = u^av^b$ and $x = u^{2^n}$ shows that $\eta_{2^n, q}$ (and thus $\omega_{2^n, q}$) descends to the curve $Y$ given by the equation $y^{2^n} = x^a(x-1)^b$ (which we will also call $F_{2^n, a, b}$), and is given in $(x,y)$-coordinates by $$\eta_{2^n, q} = \frac{\fracpart{\rho+\sigma}^{\epsilon_q}}{x(1-x)}ydx.$$ If $\gamma \in I_{\mathbb Q_2}$, then $\fracpart{\gamma \rho} = \frac{a'}{2^n}$ and $\fracpart{\gamma \sigma} = \frac{b'}{2^n}$, where $a'$ and $b'$ are as in Lemma \ref{Lgaloisaction}. Letting $\gamma \in I_{\mathbb Q_2}$ act on $(\mathbb Q/\mathbb Z)^3$ diagonally via the cyclotomic character, we define $\eta_{2^n, \gamma q}$, $\omega_{2^n, \gamma q}$, and $\omega_{\gamma q}$ as above. Now, $\eta_{2^n, \gamma q}$ (and thus $\omega_{2^n, \gamma q}$) descends to the curve $F_{2^n, a', b'}$ given by the equation $(y')^{2^n} = x^{a'}(x-1)^{b'}$, where $y' = u^{a'}v^{b'}$. Then $\eta_{2^n, \gamma q}$ is given in $(x, y')$-coordinates by $$\eta_{2^n, \gamma q} = \frac{\fracpart{\gamma \rho + \gamma \sigma}^{\epsilon_{\gamma q}}}{x(1-x)}y'dx.$$ Note that we can identify $F_{2^n, a', b'}$ with $F_{2^n, a, b}$ via $y' = y^hx^j(1-x)^k$, where $h$, $j$, and $k$ are such that $a' = ha + 2^nj$ and $b' = hb + 2^nk$. Recall from (\ref{Epullback}) that, for each $\gamma \in I_{\mathbb Q_2}$, there exists $\beta_{\gamma}(q) \in K$ (after a possible finite extension of $K$) such that $\gamma^*\omega_{2^n, q} = \beta_{\gamma}(q)\omega_{2^n, \gamma q}$ in $H^1_{DR}(J_{2^n})$. We will compute $\beta_{\gamma}(q)$ by viewing $\omega_{2^n, q}$ and $\omega_{2^n, \gamma q}$ as cohomology classes on $F_{2^n, a, b} = F_{2^n, a', b'}$. The following proposition relies on calculations from \S\ref{Scomputations}. \begin{proposition}[cf.\ \cite{Co:fm}, Corollary 7.6]\label{Pdiffformratio} We have $$v(\beta_{\gamma}(q)) = v(\fracpart{\rho})(\fracpart{\rho} - \fracpart{\gamma \rho}) + v(\fracpart{\sigma})(\fracpart{\sigma} - \fracpart{\gamma \sigma}) + v(\fracpart{\tau})(\fracpart{\tau} - \fracpart{\gamma \tau}).$$ \end{proposition} \begin{proof} We work with the representatives $\eta_{2^n, q}$ and $\eta_{2^n, \gamma q}$ of $\omega_{2^n, q}$ and $\omega_{2^n, \gamma q}$ on the curve $F_{2^n, a, b} = F_{2^n, a', b'}$. 
If $x = d+et$, then Proposition \ref{Pasymptoticvaluation} defines (after a possible finite extension of $R$) a power series $\alpha(t) \in R[[t]]$ such that $$\alpha(t)^{2^n} = x^a(x-1)^bd^{-a}(d-1)^{-b}$$ (Remark \ref{Rreminder}). Corollary \ref{Cdiffform} defines $\tilde{\alpha}(t) = \frac{d(1-d)\alpha(t)}{x(1-x)} \in R[[t]]$ (after substituting $x = d+et$), and shows that the valuation of the coefficient of $t^{\ell}$ in $\tilde{\alpha}(t)$ is $\frac{1}{2}S(\ell)$, where $S(\ell)$ is the number of ones in the base $2$ expansion of $\ell$. Since $y^{2^n} = x^a(x-1)^b$, we have \begin{equation}\label{Efirstomega} \eta_{2^n, q} = \frac{\sqrt[2^n]{d^a(d-1)^b}\fracpart{\rho + \sigma}^{\epsilon_q} \tilde{\alpha}(t)}{d(1-d)} \, e\, dt = \mu d^{\fracpart{\rho} - 1} (d-1)^{\fracpart{\sigma} - 1} \fracpart{\rho + \sigma}^{\epsilon_q} \tilde{\alpha}(t) e\, dt, \end{equation} where $\mu$ is some root of unity and $d^{\fracpart{\rho}}$ and $(d-1)^{\fracpart{\sigma}}$ are calculated using some choices of $2^n$th roots. Likewise, letting $d'$ be as in Lemma \ref{Lgaloisaction} and setting $x' = d' + et'$, we have \begin{equation}\label{Esecondomega} \eta_{2^n, \gamma q} = \mu' (d')^{\fracpart{\gamma \rho}-1} (d'-1)^{\fracpart{\gamma \sigma}-1} \fracpart{\gamma \rho + \gamma \sigma}^{\epsilon_{\gamma q}} \tilde{\alpha}'(t') e\, dt', \end{equation} where $\mu'$ is some root of unity, and $\tilde{\alpha}'(t')$ is some power series in $t'$ whose coefficients have the \emph{same} valuations as the coefficients of $\tilde{\alpha}(t)$ (Remark \ref{Rindependence}). By Corollary \ref{Cparameters}, $t$ (resp.\ $t'$) is a parameter for $\mbox{Spec } \hat{\mc{O}}_{X^{st}, \ol{d}}$ (resp.\ $\mbox{Spec } \hat{\mc{O}}_{X^{st}, \ol{\gamma(d)}}).$ Since the map $Y^{st} \to X^{st}$ is completely split above $\ol{d}$ (Proposition \ref{P2fermatreduction}(ii)), we can also view $t$ as a parameter for $\mbox{Spec } \hat{\mc{O}}_{Y^{st}, \ol{u}}$ for any point $\ol{u} \in \ol{Y}$ above $\ol{d}$. Then $t'$ can be viewed as a parameter for $\mbox{Spec } \hat{\mc{O}}_{Y^{st}, \gamma(\ol{u})}$. Write $\eta_{2^n, q} = \sum_{\ell=0}^{\infty} z_\ell t^\ell dt$ and $\eta_{2^n, \gamma q} = \sum_{\ell=0}^{\infty} z'_\ell (t')^\ell dt'$. By \cite[Theorem 4.1]{Co:fm} (setting $q=1$ in that theorem), $$v(\beta_{\gamma}(q)) = \lim_{i \to \infty} v\left(\frac{z_{\ell_i}}{z'_{\ell_i}}\right),$$ where $\ell_i$ is any sequence such that $\lim_{i \to \infty} v(z_{\ell_i}) - v(\ell_i + 1) = -\infty$. Take $\ell_i = 2^i - 1$. Then, by Remark \ref{Rdval}, Corollary \ref{Cdiffform}, (\ref{Efirstomega}), and (\ref{Esecondomega}), we have $$v(z_{\ell_i}) = (n-s) (\fracpart{\sigma} - 1) - n \epsilon_q + \frac{i}{2} + (n - \frac{s}{2} + \frac{1}{2})$$ and $$v(z'_{\ell_i}) = (n-s) (\fracpart{\gamma \sigma} - 1) - n \epsilon_{\gamma q} + \frac{i}{2} + (n - \frac{s}{2} + \frac{1}{2}).$$ So $$v(\beta_{\gamma}(q)) = (n-s)(\fracpart{\sigma} - \fracpart{\gamma \sigma}) + n (\epsilon_{\gamma q} - \epsilon_q).$$ Some rearranging shows that this is equal to $$n(\fracpart{\gamma \rho} - \fracpart{\rho}) + s(\fracpart{\gamma \sigma} - \fracpart{\sigma}) + n(\fracpart{\gamma \tau} - \fracpart{\tau}),$$ which is equal to the expression in the proposition. \end{proof} \subsection{Finishing the product formula}\label{Sformula} If $\gamma \in I_{\mathbb Q_2}$ and $r \in \mathbb Q/\mathbb Z$, then let $w_{2, \gamma}(r) = w_2(a_r) - w_2(a_{\gamma r})$, where the terms on the right hand side are defined in \S\ref{Sfrob}.
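Before applying Proposition \ref{Pdiffformratio}, we note that the elementary rearrangement at the end of its proof can be double-checked numerically. The following sketch (illustrative only, with helper names ours) verifies, in exact rational arithmetic, that the two expressions for $v(\beta_{\gamma}(q))$ agree for randomly chosen admissible data:
\begin{verbatim}
# Sketch: check that (n-s)(<sigma> - <gamma sigma>) + n(eps_{gamma q} -
# eps_q) equals the sum of v_2(<.>)(<.> - <gamma .>) over rho, sigma, tau.
from fractions import Fraction
from random import randrange

def frac(r):
    return r - (r.numerator // r.denominator)

def v2(x):
    v, num, den = 0, x.numerator, x.denominator
    while num % 2 == 0:
        num //= 2; v += 1
    while den % 2 == 0:
        den //= 2; v -= 1
    return v

def eps(r, s_, t):
    return r + s_ + t - 1          # all three arguments lie in [0, 1)

for _ in range(1000):
    n = randrange(4, 11)
    a = 2 * randrange(2 ** (n - 1)) + 1                 # a odd
    j = randrange(1, n - 1)                             # j = v_2(b)
    b = 2 ** j * (2 * randrange(2 ** (n - 1 - j)) + 1)  # 0 < b < 2^n
    chi = 2 * randrange(2 ** (n - 1)) + 1               # chi(gamma) mod 2^n
    rho, sigma = Fraction(a, 2 ** n), Fraction(b, 2 ** n)
    tau = frac(-rho - sigma)
    gr, gs, gt = (frac(chi * x) for x in (rho, sigma, tau))
    s = n - j
    lhs = (n - s) * (sigma - gs) + n * (eps(gr, gs, gt)
                                        - eps(rho, sigma, tau))
    rhs = sum(v2(x) * (x - gx)
              for x, gx in ((rho, gr), (sigma, gs), (tau, gt)))
    assert lhs == rhs
print("rearrangement identity verified on 1000 random samples")
\end{verbatim}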
The following result is an important consequence of Proposition \ref{Pdiffformratio}. \begin{corollary}\label{Ctransition} Let $\gamma \in I_{\mathbb Q_2}$. If $q = (\rho, \sigma, \tau) \in (\mathbb Q/\mathbb Z)^3$ with $\rho + \sigma + \tau = 0$, and none of $\fracpart{\rho}$, $\fracpart{\sigma}$, or $\fracpart{\tau}$ is $\frac{1}{2}$, then $w_{2, \gamma}(\rho) + w_{2, \gamma}(\sigma) + w_{2, \gamma}(\tau) = 0$. \end{corollary} \begin{proof} This has already been proven in \cite[Lemme III.2.5]{Co:pv} when any of $\rho$, $\sigma$, or $\tau$ is in $\mathbb Z_{(2)}/\mathbb Z$, so we assume otherwise. Furthermore, \cite[Lemme III.2.6]{Co:pv} states that $w_{2, \gamma}(\alpha) = w_{2, \gamma}(\alpha')$ whenever $\alpha - \alpha' \in \mathbb Z_{(2)}/\mathbb Z$. For each $\alpha \in \mathbb Q/\mathbb Z$, there is a unique $\alpha' \in \mathbb Q/\mathbb Z$ such that $\alpha - \alpha' \in \mathbb Z_{(2)}/\mathbb Z$ and $\fracpart{\alpha'} = \frac{j}{k}$, where $k$ is a power of $2$. Furthermore, if $\rho + \sigma + \tau = 0$, then $\rho' + \sigma' + \tau' = 0$. So we may assume that the denominators of $\rho$, $\sigma$, and $\tau$ are powers of $2$. Let $n$ be minimal such that $\fracpart{\rho} = \frac{a}{2^n}$, $\fracpart{\sigma} = \frac{b}{2^n}$, and $\fracpart{\tau} = \frac{c}{2^n}$, with $a$, $b$, $c \in \mathbb Z$. Then $n \geq 3$. Assume without loss of generality that $v_2(b) \geq \max(v_2(a), v_2(c))$. Then $a$ and $c$ must be odd, and $1 \leq v(b) \leq n-2$ (cf.\ \S\ref{Sdiffforms}---recall that we assume that $\fracpart{\sigma} \notin \{0, \frac{1}{2}\}$). One can then copy the proof of \cite[Lemme III.2.5]{Co:pv}, with our Proposition \ref{Pdiffformratio} substituting for \cite[Corollary 7.6]{Co:fm}. In more detail, $$w_{2, \gamma}(\rho) + w_{2, \gamma}(\sigma) + w_{2, \gamma}(\tau) = w_2(b_q) - w_2(b_{\gamma q}).$$ Using the definitions from \S\ref{Sfrob}, this is equal to $$V_2(\gamma q) - V_2(q) + v_2(\omega_q) - v_2(\omega_{\gamma q}),$$ which is equal to $$V_2(\gamma q) - V_2(q) - v(\beta_{\gamma}(q)),$$ by Lemma \ref{Laux}. By Proposition \ref{Pdiffformratio} and the fact that $\rho_{(2)}$, $(\gamma \rho)_{(2)}$, $\sigma_{(2)}$, $(\gamma \sigma)_{(2)}$, $\tau_{(2)}$, and $(\gamma \tau)_{(2)}$ are all zero (Equation (\ref{Epdef})), this is equal to zero. \end{proof} \begin{corollary}\label{Czero} For all $\gamma \in I_{\mathbb Q_2}$ and $r$ in $\mathbb Q/\mathbb Z$, we have $w_{2, \gamma}(r) = 0$. \end{corollary} \begin{proof} If $\fracpart{r} \in \{0, \frac{1}{2}\}$, then $r = \gamma r$, thus $w_{2, \gamma}(r) = 0$ by definition. We also have $w_{2, \gamma}(-r) = -w_{2, \gamma}(r)$ for all $r \in \mathbb Q/\mathbb Z$ (this follows from plugging $(\rho, \sigma, \tau) = (r, -r, 0)$ into Corollary \ref{Ctransition}, unless $\fracpart{r} = \frac{1}{2}$, in which case it is obvious). Plugging any triple $(a, b, -(a+b))$ into Corollary \ref{Ctransition} then shows that $$w_{2, \gamma}(a) + w_{2, \gamma}(b) = w_{2, \gamma}(a+b),$$ as long as none of $\fracpart{a}$, $\fracpart{b}$, or $\fracpart{a+b}$ is $\frac{1}{2}$. We now claim that, if $k > 4$ is even, and if $a \in \mathbb Q/\mathbb Z$ satisfies $\fracpart{a} = \frac{1}{k}$, then $w_{2, \gamma}(ja) = jw_{2, \gamma}(a)$ for $1 \leq j \leq \frac{k}{2} -1$ and for $\frac{k}{2} + 1 \leq j \leq k$. Admitting the claim, we set $j = k$ to show that $w_{2, \gamma}(a) = 0$, which in turn shows that $w_{2, \gamma}(ja) = 0$ for all $j$ above.
Since any $r \in ([0, 1) \cap \mathbb Q)\backslash \{\frac{1}{2}\}$ is the fractional part of some such $ja$, the claim implies the corollary. To prove the claim, we note by additivity of $w_{2, \gamma}$ that $w_{2, \gamma}(ja) = jw_{2, \gamma}(a)$ for $1 \leq j \leq \frac{k}{2} - 1$. By additivity again (using $(\frac{k}{2} - 1)a$ and $2a$, neither of which has fractional part $\frac{1}{2}$), we have $w_{2, \gamma}((\frac{k}{2} + 1)a) = (\frac{k}{2} + 1)w_{2, \gamma}(a)$. Then, additivity shows that $w_{2, \gamma}(ja) = jw_{2, \gamma}(a)$ for $\frac{k}{2} + 1 \leq j \leq k$. \end{proof} \begin{theorem}\label{Tmain} We have $w_2(a) = 0$ for all $a \in \mc{CM}^{ab}$. \end{theorem} \begin{proof} This follows from Corollary \ref{Czero} exactly as \cite[Corollaire III.2.7]{Co:pv} follows from \cite[Lemme III.2.6(i)]{Co:pv}. \end{proof} Theorem \ref{Tmain} completes the proof of Colmez's product formula when the field of complex multiplication is an abelian extension of $\mathbb Q$. \begin{remark}\label{Ralready} Colmez already proved Corollary \ref{Czero} when $r \in \frac{1}{8}\mathbb Z_{(2)}/\mathbb Z$ (\cite[Lemme III.2.8]{Co:pv}). This was used to give a geometric proof of the Chowla-Selberg formula (\cite[III.3]{Co:pv}). \end{remark} \section{Computations}\label{Scomputations} The results of this section are used only in the proof of Proposition \ref{Pdiffformratio}. \subsection{Base $2$ expansions}\label{Sbase2} Let $S(\ell)$ be the sum of the digits in the base $2$ expansion of $\ell$, or $\infty$ if $\ell \in \mathbb Q \backslash \{0, 1, 2, \ldots\}$. It is clear that $S(\ell) = 1$ iff $\ell$ is an integer and a power of $2$. Note also that if $\ell_1$ and $\ell_2$ are positive integers whose ratio is a power of $2$, then $S(\ell_1) = S(\ell_2)$. \begin{lemma}\label{Lbase2} If $\ell_1$ and $\ell_2$ are nonnegative integers, then $S(\ell_1 + \ell_2) \leq S(\ell_1) + S(\ell_2)$. Equality never holds if $\ell_1 = \ell_2$. Furthermore, if $\ell$ is a positive integer, there are exactly $2^{S(\ell)} - 2$ ordered pairs of positive integers $(\ell_1, \ell_2)$ such that $\ell_1 + \ell_2 = \ell$ and $S(\ell_1) + S(\ell_2) = S(\ell)$. \end{lemma} \begin{proof} The first two assertions are clear from the standard addition algorithm. Now, for positive integers $\ell_1$ and $\ell_2$, we have $S(\ell_1 + \ell_2) = S(\ell_1) + S(\ell_2)$ exactly when no carrying takes place in the addition of $\ell_1$ and $\ell_2$ in base $2$. This happens when $\ell_1$ is formed by taking a nonempty, proper subset of the $1$'s in the base $2$ expansion of $\ell$, and converting the rest to zeros. There are $2^{S(\ell)} - 2$ such subsets, proving the lemma. \end{proof} The following lemma gathers several elementary facts. The somewhat strange phrasings will pay off in \S\ref{Sfermat}. Notice that all inequalities are phrased in terms of something being less than or equal to $\frac{1}{2}S(\ell)$. \begin{lemma}\label{Lpossibilities} Let $\ell$ be a positive integer. \begin{enumerate}[(i)] \item $2S(\frac{\ell}{4}) - 2 \leq \frac{1}{2}S(\ell)$ iff $\ell \geq 4$ is a power of $2$. \item $S(\frac{\ell}{2}) - 1 \leq \frac{1}{2}S(\ell)$ iff $S(\ell) \leq 2$ and $\ell$ is even. \item There are exactly $2^{S(\ell)} - 2$ ordered pairs of positive integers $(\ell_1, \ell_2)$ such that $\ell_1 + \ell_2 = \ell$ and $\frac{1}{2}S(\ell_1) + \frac{1}{2}S(\ell_2) \leq \frac{1}{2}S(\ell)$.
\item If $\ell_1$ and $\ell_2$ are distinct positive integers such that $2(\ell_1 + \ell_2) = \ell$, then $S(\ell_1) + S(\ell_2) - 1 \leq \frac{1}{2}S(\ell)$ iff $S(\ell_1) = S(\ell_2) = 1$ and $S(\ell) = 2$. \item If $\ell_1$, $\ell_2$, and $\ell_3$ are positive integers, not all distinct, such that $\ell_1 + \ell_2 + \ell_3 = \ell$, then it is never the case that $\frac{1}{2}S(\ell_1) + \frac{1}{2}S(\ell_2) + \frac{1}{2}S(\ell_3) \leq \frac{1}{2}S(\ell)$. \item If $\ell_1$ and $\ell_2$ are distinct positive integers such that $\ell_1 + 3\ell_2 = \ell$, then it is never the case that $\frac{1}{2}S(\ell_1) + \frac{3}{2}S(\ell_2) \leq \frac{1}{2}S(\ell)$. \item If $\ell_1$, $\ell_2$, and $\ell_3$ are distinct positive integers such that $\ell_1 + \ell_2 + 2\ell_3 = \ell$, then it is never the case that $\frac{1}{2}S(\ell_1) + \frac{1}{2}S(\ell_2) + S(\ell_3) \leq \frac{1}{2}S(\ell)$. \item If $\ell_1$, $\ell_2$, $\ell_3$, and $\ell_4$ are distinct nonnegative integers such that $\ell_1 + \ell_2 + \ell_3 + \ell_4 = \ell$, then it is never the case that $\frac{1}{2}S(\ell_1) + \frac{1}{2}S(\ell_2) + \frac{1}{2}S(\ell_3) + \frac{1}{2}S(\ell_4) + 1 \leq \frac{1}{2}S(\ell)$. \end{enumerate} \end{lemma} \begin{proof} Parts (i) and (ii) are easy, using that $S(\ell/4)$ and $S(\ell/2)$ are either equal to $S(\ell)$ or $\infty$. Part (iii) follows from Lemma \ref{Lbase2}. Part (iv) follows from the fact that $S(\ell) = S(\ell_1 + \ell_2) \leq S(\ell_1) + S(\ell_2)$. Parts (v), (vi), (vii), and (viii) also follow from Lemma \ref{Lbase2}. \end{proof} \subsection{Power series}\label{Spower} As in \S\ref{Sfermat}, let $f: Y \to X = \mathbb P^1$ be the branched cover of smooth curves given birationally by $y^{2^n}=x^a(x-1)^b$ where $a$ is odd, $1 \leq v_2(b) \leq n-2$, and $0 < a, b < 2^n$. Throughout this section, we take $K/\mathbb Q_2$ to be a finite extension over which $f$ admits a stable model, and $R$ to be the ring of integers of $K$. We will take further finite extensions of $K$ and $R$ as necessary. The valuation $v$ on $K$ (and any finite extension) is always normalized so that $v(2) = 1$. Throughout this section, we fix a square root $i$ of $-1$ in $K$. We let $f^{st}: Y^{st} \to X^{st}$ be the stable model of $f$, and $\ol{f}: \ol{Y} \to \ol{X}$ its stable reduction (\S\ref{Smonodromy}). Set $d = \frac{a}{a+b} + \frac{\sqrt{2^nbi}}{(a+b)^2}$, and $s:= n - v_2(b) \geq 2$. Let $\ol{d}$ be the specialization of $d$ in $\ol{X}$. Let $e$ be any element of $R$ with valuation $n- \frac{s}{2} + \frac{1}{2}$. If $x = d+et$, then $t$ is a parameter of $\hat{\mc{O}}_{X^{st}, \ol{d}}$ (Corollary \ref{Cparameters}). We set $$g(x) = x^a(x-1)^bd^{-a}(d-1)^{-b}.$$ Note that $g(d) = 1$. \begin{lemma}\label{Lnormalize} Expanding out $g(x)$ in terms of $t$ yields an expression of the form $$\gamma(t) := g(d+et) = \sum_{\ell=0}^{\infty} c_{\ell} t^\ell,$$ where $c_0 = 1$, $v(c_2)=n$, $\frac{c_1^2}{c_2} \equiv 2^{n+1}i\pmod{2^{n+2}}$, and $v(c_{\ell}) > n + \frac{1}{2}S(\ell)$ for all $\ell \geq 3$. In particular, $v(c_1) = n + \frac{1}{2}$. \end{lemma} \begin{remark}\label{Rfinite} Of course, the ``series'' above is actually just a polynomial. \end{remark} \begin{proof} The claim at the beginning of the proof of the $p=2$ part of \cite[Lemma C.2]{Ob:fm} proves everything except the statement for $\ell \geq 3$. The continuation of the proof of \emph{loc. cit.} leads to $$v(c_\ell) = n + 1 + \frac{\ell-2}{2}(s+1) - v(\ell) > n + \ell - 1 - v(\ell)$$ (recall, $s \geq 2$).
It is easy to see that $\ell > 1 + v(\ell) + \frac{1}{2}S(\ell)$ for $\ell \geq 3$, from which the lemma follows. \end{proof} We wish to understand the $2^n$th root of $\gamma(t)$. It turns out that it is easier to do this by first taking a $2^{n-2}$th root, and then a $4$th root. \begin{lemma}\label{L4thpower} After possibly replacing $R$ by a finite extension, the power series $\gamma(t) = \sum_{\ell=0}^{\infty} c_\ell t^\ell$ from Lemma \ref{Lnormalize} has a $2^{n-2}$th root in $R[[t]]$ of the form $$\delta(t) = \sum_{\ell=0}^{\infty} d_\ell t^\ell,$$ where $d_0 = 1$, $v(d_2) = 2$, $\frac{d_1^2}{d_2} \equiv 8i \pmod{16}$, and $v(d_\ell) > 2 + \frac{1}{2}S(\ell)$ for $\ell \geq 3$. In particular, $v(d_1) = \frac{5}{2}$. \end{lemma} \begin{proof} Let $w = \gamma(t) - 1$. Binomially expanding $(1 + w)^{1/2^{n-2}}$ gives $$\delta(t) = 1 + \frac{w}{2^{n-2}} + \sum_{j=2}^{\infty} \binom{1/2^{n-2}}{j} w^j.$$ The valuation of $\binom{1/2^{n-2}}{j}$ is $$S(j) - j - j(n-2) = S(j) + j - jn.$$ On the other hand, the valuation of $c_{\ell}$ (the coefficient of $t^{\ell}$ in $w$) is at least $n + \frac{1}{2}S(\ell) - \frac{1}{2}$ (Lemma \ref{Lnormalize}; equality holds only for $\ell = 2$). So, by Lemma \ref{Lbase2}, the coefficient of $t^\ell$ in $w^j$ for $j \geq 2$ has valuation greater than $jn + \frac{1}{2} S(\ell) - \frac{j}{2}$ (equality could only occur if $\ell = 2j$, but in fact does not, because $j(n + \frac{1}{2}S(2) - \frac{1}{2}) > jn + \frac{1}{2}S(2j) - \frac{j}{2}$). Combining everything, the coefficient of $t^{\ell}$ in $\binom{1/2^{n-2}}{j} w^j$ (for $j \geq 2$) has valuation greater than $S(j) + \frac{j}{2} + \frac{1}{2}S(\ell)$, which is at least $2 + \frac{1}{2}S(\ell)$. Thus, for the purposes of the lemma, we may replace $\delta(t)$ by $1 + \frac{w}{2^{n-2}}$. The lemma then follows easily from Lemma \ref{Lnormalize}. \end{proof} \begin{proposition}\label{Pasymptoticvaluation} After possibly replacing $R$ by a finite extension, the power series $\delta(t) = \sum_{\ell=0}^{\infty} d_\ell t^{\ell}$ from Lemma \ref{L4thpower} has a $4$th root in $R[[t]]$ of the form $$\alpha(t) = \sum_{\ell=0}^{\infty} a_{\ell} t^{\ell},$$ where $a_0 = 1$, and $$a_\ell \equiv d_1^{\ell}(1 + i)^{S(\ell) -5\ell} \pmod{(1+i)^{S(\ell) + 1}}.$$ In particular, $v(a_\ell) = \frac{1}{2}S(\ell)$. \end{proposition} \begin{remark}\label{Rreminder} Note that $\alpha(t)$ is a $2^n$th root of $$g(d+et) = x^a(x-1)^bd^{-a}(d-1)^{-b},$$ where $x = d+et$. \end{remark} \begin{proof}[Proof of Proposition \ref{Pasymptoticvaluation}] By Proposition \ref{P2fermatreduction}(ii), the stable model $f^{st}$ of $f$ splits completely above $\mbox{Spec } \hat{\mc{O}}_{X^{st}, \ol{d}} = \mbox{Spec } R[[t]]$. Thus, by \cite[Proposition 3.2.3 (2)]{Ra:ab}, $x^a(x-1)^b$ (when written in terms of $t$) is a $2^n$th power in $R[[t]]$. This does not change when it is multiplied by the constant $d^{-a}(d-1)^{-b}$ (as long as we extend $R$ appropriately), so we see that $\alpha(t)$ lives in $R[[t]]$ (this can also be shown using an explicit computation with the binomial theorem). We have the equation \begin{equation}\label{Etwoexpansions} \left(1 + \sum_{\ell=1}^{\infty}a_\ell t^\ell\right)^4 = 1 + \sum_{\ell=1}^{\infty}d_\ell t^\ell. \end{equation} We prove the proposition by strong induction, treating the base cases $\ell = 1$, $2$ separately. Recall that $v(d_1) = \frac{5}{2}$ and $v(d_2) = 2$.
For $\ell=1$, we obtain from (\ref{Etwoexpansions}) that $d_1 = 4a_1$, so $$a_1 = \frac{d_1}{4} \equiv d_1(1+i)^{-4} \pmod{(1+i)^{2}}.$$ For $\ell=2$, we obtain $$d_2 = 4a_2 + 6a_1^2 = 4a_2 + \frac{3}{8}d_1^2,$$ so $a_2 = \frac{d_2}{4} - \frac{3}{32}d_1^2$. Using that $\frac{d_1^2}{d_2} \equiv 8i \pmod{16}$ (Lemma \ref{L4thpower}), one derives that $\frac{d_2}{4} \equiv \frac{d_1^2}{32i} \pmod{2}$. Thus, $$a_2 \equiv (-i-3) \frac{d_1^2}{32} \equiv d_1^2(1+i)^{-9} \pmod{(1+i)^2},$$ proving the proposition for $\ell = 2$. Now, suppose $\ell > 2$. Then (\ref{Etwoexpansions}) yields (setting $a_j = 0$ for any $j \notin \mathbb Z$, and with all $\ell_i$ assumed to be positive integers): \begin{align*} d_{\ell} &= 4a_{\ell} + 6a_{\ell/2}^2 + 4a_{\ell/3}^3 + a_{\ell/4}^4 + \sum_{\substack{\ell_1 +\ell_2 = \ell \\ \ell_1 < \ell_2}} 12 a_{\ell_1}a_{\ell_2} + \sum_{\substack{\ell_1 +\ell_2 + \ell_3 = \ell \\ \ell_1 < \ell_2 < \ell_3}} 24 a_{\ell_1}a_{\ell_2}a_{\ell_3}\\ &+ \sum_{\substack{\ell_1 + 2\ell_2 = \ell \\ \ell_1 \ne \ell_2}} 12 a_{\ell_1}a_{\ell_2}^2 + \sum_{\substack{\ell_1 + 3\ell_2 = \ell \\ \ell_1 \ne \ell_2}} 4 a_{\ell_1}a_{\ell_2}^3 + \sum_{\substack{\ell_1 + \ell_2 + \ell_3 + \ell_4 = \ell \\ \ell_1 < \ell_2 < \ell_3 < \ell_4}} 24 a_{\ell_1}a_{\ell_2}a_{\ell_3}a_{\ell_4}\\ &+ \sum_{\substack{\ell_1 + \ell_2 + 2\ell_3 = \ell \\ \ell_1 < \ell_2, \ell_3 \ne \ell_1, \ell_3 \ne \ell_2}} 12 a_{\ell_1}a_{\ell_2}a_{\ell_3}^2 + \sum_{\substack{2\ell_1 + 2\ell_2 = \ell \\ \ell_1 < \ell_2}} 6 a_{\ell_1}^2a_{\ell_2}^2. \end{align*} or \begin{align*} a_{\ell} &= -\frac{1}{4}d_{\ell} + \frac{3}{2}a_{\ell/2}^2 + a_{\ell/3}^3 + \frac{1}{4}a_{\ell/4}^4 + \sum_{\substack{\ell_1 +\ell_2 = \ell \\ \ell_1 < \ell_2}} 3 a_{\ell_1}a_{\ell_2} + \sum_{\substack{\ell_1 +\ell_2 + \ell_3 = \ell \\ \ell_1 < \ell_2 < \ell_3}} 6 a_{\ell_1}a_{\ell_2}a_{\ell_3} \\ &+ \sum_{\substack{\ell_1 + 2\ell_2 = \ell \\ \ell_1 \ne \ell_2}} 3 a_{\ell_1}a_{\ell_2}^2 + \sum_{\substack{\ell_1 + 3\ell_2 = \ell \\ \ell_1 \ne \ell_2}} a_{\ell_1}a_{\ell_2}^3 + \sum_{\substack{\ell_1 + \ell_2 + \ell_3 + \ell_4 = \ell \\ \ell_1 < \ell_2 < \ell_3 < \ell_4}} 6a_{\ell_1}a_{\ell_2}a_{\ell_3}a_{\ell_4} \\ &+ \sum_{\substack{\ell_1 + \ell_2 + 2\ell_3 = \ell \\ \ell_1 < \ell_2, \ell_3 \ne \ell_1, \ell_3 \ne \ell_2}} 3 a_{\ell_1}a_{\ell_2}a_{\ell_3}^2 + \sum_{\substack{2\ell_1 + 2\ell_2 = \ell \\ \ell_1 < \ell_2}} \frac{3}{2} a_{\ell_1}^2a_{\ell_2}^2. \end{align*} Since we need only determine $a_{\ell}$ modulo $(1+i)^{S(\ell)+1}$, and all terms on the right hand side have half-integer valuation, we can ignore all terms with valuation greater than $\frac{1}{2}S(\ell)$. Using the inductive hypothesis, along with Lemmas \ref{L4thpower} and \ref{Lpossibilities} (v), (vi), (vii), and (viii), we obtain \begin{equation}\label{Ereduced} a_{\ell} \equiv \frac{3}{2}a_{\ell/2}^2 + \frac{1}{4}a_{\ell/4}^4 + \sum_{\substack{\ell_1 +\ell_2 = \ell \\ \ell_1 < \ell_2}} 3 a_{\ell_1}a_{\ell_2} + \sum_{\substack{2\ell_1 + 2\ell_2 = \ell \\ \ell_1 < \ell_2}} \frac{3}{2} a_{\ell_1}^2a_{\ell_2}^2 \pmod{(1+i)^{S(\ell) + 1}}. 
\end{equation} If $\ell$ is a power of $2$ (i.e., $S(\ell) = 1$), then by Lemma \ref{Lpossibilities} (i)--(iv), the induction hypothesis, and (\ref{Ereduced}), we have \begin{align*} a_{\ell} \equiv \frac{3}{2}a_{\ell/2}^2 + \frac{1}{4}a_{\ell/4}^4 &\equiv d_1^{\ell}\left(\frac{3}{2}(1+i)^{2 - 5\ell} + \frac{1}{4}(1+i)^{4 - 5\ell}\right)\\ &\equiv d_1^{\ell}(3i - 1)(1+i)^{-5\ell} \equiv d_1^{\ell} (1+i)^{1- 5\ell} \pmod{(1+i)^2}, \end{align*} thus proving the proposition for such $\ell$. For all other $\ell$, we have (by Lemma \ref{Lpossibilities} (i), the induction hypothesis, and (\ref{Ereduced})) that \begin{equation}\label{Ereduced2} a_{\ell} \equiv \frac{3}{2}a_{\ell/2}^2 + \sum_{\substack{\ell_1 +\ell_2 = \ell \\ \ell_1 < \ell_2}} 3 a_{\ell_1}a_{\ell_2} + \sum_{\substack{2\ell_1 + 2\ell_2 = \ell \\ \ell_1 < \ell_2}} \frac{3}{2} a_{\ell_1}^2a_{\ell_2}^2 \pmod{(1+i)^{S(\ell) + 1}}. \end{equation} By Lemma \ref{Lpossibilities} (ii) and (iv), the first and last terms matter only when $S(\ell) = 2$ and $\ell$ is even, in which case their combined contribution is $$3d_1^{\ell}(1+i)^{4-5\ell} \pmod{(1+i)^3},$$ which is trivial. So in any case, we need only worry about the middle term. By Lemma \ref{Lpossibilities} (iii), the middle term is the sum of $2^{S(\ell) - 1} - 1$ subterms, each congruent to $$3d_1^{\ell}(1+i)^{S(\ell) - 5\ell} \pmod{(1+i)^{S(\ell) + 1}}.$$ Since $S(\ell) \geq 2$, this sum is in turn congruent to $$d_1^{\ell}(1+i)^{S(\ell) - 5\ell} \pmod{(1+i)^{S(\ell) + 1}},$$ proving the proposition. \end{proof} \begin{corollary}\label{Cdiffform} In the notation of Proposition \ref{Pasymptoticvaluation}, the power series $$\tilde{\alpha}(t) := \frac{d(1-d)\alpha(t)}{x(1-x)} = \frac{d(1-d)\alpha(t)}{(d+et)(1-d-et)}$$ has the form $$\sum_{i=0}^{\infty} \tilde{a}_{\ell} t^{\ell},$$ where $\tilde{a}_0 = 1$ and $v(\tilde{a}_{\ell}) = v(a_{\ell}) = \frac{1}{2}S(\ell)$ for all $\ell$. \end{corollary} \begin{proof} Recall that $v(1-d) = n-s$ (Remark \ref{Rdval}), that $v(e) = n - \frac{s-1}{2}$, and that we assume $2 \leq s \leq n-1$. Set $\mu = -\frac{e}{d}$ and $\nu = \frac{e}{1-d}$. Then $v(\mu) = n-\frac{s-1}{2} > 1$ and $v(\nu) = \frac{s+1}{2} > 1$. Expanding $\tilde{\alpha}(t)$ out as a power series yields $$\tilde{\alpha}(t) = \alpha(t)(1 + \mu t + \mu^2t^2 + \cdots)(1 + \nu t + \nu^2 t^2 + \cdots) = \alpha(t)(1 + \xi_1t + \xi_2 t^2 + \cdots),$$ where $v(\xi_{\ell}) > \ell$ for all $\ell$. The constant term is $1$, so $\tilde{a}_0 = 1$. The coefficient of $t^{\ell}$ is $$\tilde{a}_{\ell} = a_{\ell} + \xi_{\ell} + \sum_{j=1}^{\ell-1} a_{\ell-j}\xi_j.$$ We know $v(a_{\ell}) = \frac{1}{2}S(\ell)$. We have seen that $v(\xi_{\ell}) > \ell > \frac{1}{2}S(\ell)$. Also, for $1 \leq j \leq \ell-1$, we have $$v(a_{\ell-j}\xi_j) > \frac{1}{2}S(\ell-j) + j > \frac{1}{2}S(\ell-j) + \frac{1}{2}S(j) \geq \frac{1}{2}S(\ell).$$ By the non-archimedean property, we conclude that $v(\tilde{a}_{\ell}) = \frac{1}{2}S(\ell)$. \end{proof} \begin{remark}\label{Rindependence} Note that $v(\tilde{a}_{\ell})$ does not depend on $a$ or $b$. \end{remark} \begin{acknowledgements} I thank Pierre Colmez, Dick Gross, Johan de Jong, Melissa Liu, and Shouwu Zhang for useful conversations. In particular, I thank Michel Matignon for pointing me in the direction of this question. I also thank the referee for useful expository suggestions. \end{acknowledgements}
e3fecc15d719da52fbbb03be9f7212653bb0d29f
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{introduction} In the present paper, we prove an explicit coproduct formula for quantum groups $U_q(\mathfrak{g}),$ where $\mathfrak{g}=\mathfrak{sp}_{2n}$ or $\mathfrak{g}=\mathfrak{so}_{2n}$ are simple Lie algebras of type $C,$ $D$ respectively. Consider a Weyl basis of the Lie algebra $\mathfrak{g},$ $$ u[k,m]=[\ldots [[x_k,x_{k+1}],x_{k+2}], \ldots , x_m], $$ see \cite[Chapter VI, \S 4]{Ser} or \cite[Chapter IV, \S 3, XVII]{Jac}. Here, $x_i=x_{2n-i}$ and in case $C_n,$ we have $k\leq m\leq 2n-k,$ whereas in case $D_n,$ the sequence $x_1,x_2, \ldots , x_{2n-1}$ has no term $x_{n-1}$ and $k\leq m<2n-k.$ If we replace the Lie operation by skew brackets, then the above basis becomes a set of PBW generators for the related quantum group $U_q(\mathfrak{g}).$ We then find the coproduct of those PBW generators: \begin{equation} \Delta (u[k,m])=u[k,m]\otimes 1+g_{km}\otimes u[k,m] \label{c} \end{equation} $$ +\sum _{i=k}^{m-1}\tau _i(1-q^{-1})g_{ki}\, u[i+1,m]\otimes u[k,i], $$ where $g_{ki}$ is a group-like element that corresponds to $u[k,i],$ and almost all $\tau $ equal $1.$ More precisely, in case $C_n,$ there is one exception: $\tau _{n-1}=1+q^{-1}$ if $m=n.$ In case $D_n,$ the exception is: $\tau _{n-1}=0$ if $m=n;$ and $\tau _{n-1}=p_{n\, n-1}$ otherwise. Recall that the same formula is valid for $U_q(\mathfrak{sl}_{n+1})$ and $U_q(\mathfrak{so}_{2n+1}).$ In case $A_n$ there are no exceptions \cite[Lemma 3.5]{KA}. In case $B_n,$ the main parameter $q$ becomes $q^2,$ and we have an exception $\tau_n=q,$ whereas in the sequence $x_1, x_2, \ldots , x_{2n},$ the variable $x_n$ appears twice: $x_i=x_{2n-i+1},$ see \cite[Theorem 4.3]{Kh11}. In case $B_2,$ an explicit formula was established by M. Beattie, S. D\v{a}sc\v{a}lescu, \c{S}. Raianu, \cite{BDR}. In the formula, if $i\geq 2n-m,$ then $u[i+1,m]$ does not appear in the list of the above PBW generators because $m>2n-(i+1).$ The elements $u[k,m]$ with $m>$ $2n-k$ are defined in a similar manner, $$u[k,m]=[x_k,[x_{k+1}, \ldots , [x_{m-1},x_m]\ldots ]].$$ The formula remains valid for those elements as well, in which case all of the $\tau $ equal $1,$ except for $\tau _n=1+q^{-1}$ if $k=n$ in case $C_n,$ and $\tau _n=0$ if $k=n$ in case $D_n$ (by definition, in case $D_n,$ the sequence that defines $u[n,m]$ has the form $x_n, x_{n+2}, x_{n+3}, \ldots , x_m).$ In other words, whereas the PBW generators do not span a subcoalgebra, the formula remains valid for a basis of the subcoalgebra generated by them. Furthermore, the formula demonstrates that the PBW generators span a left coideal. We are reminded that M. Rosso \cite{Ros0} and H. Yamane \cite{Yam} separately constructed PBW generators for $U_q(\mathfrak{sl}_{n+1}).$ Then, G. Lusztig \cite{Lus} found PBW bases for arbitrary $U_q(\mathfrak{g})$ in terms of his famous automorphisms defining the action of braid groups. A coproduct formula for PBW generators $E_{\beta }$ in Lusztig form appeared in the paper by S.Z. Levendorski and Ya. S. Soibelman \cite[Theorem 2.4.2]{LS90}: \begin{equation} \Delta (E_{\beta }^n)-(E_{\beta }\otimes 1+q^{H\beta }\otimes E_{\beta })^n \in U_h(\mathfrak{n}_+)_{\beta }\otimes U_h({\mathfrak B}_+). \end{equation} Recently I. Heckenberger and H.-J. 
Schneider \cite[Theorem 6.14]{HS} proved a similar formula within a more general context: \begin{equation} \Delta _{\mathfrak{B}(N)}(x)-x\otimes 1 \in {\bf k}\langle N_{{\beta }_{l-1}}\rangle {\bf k}\langle N_{{\beta }_{l-2}}\rangle \cdots {\bf k}\langle N_{{\beta }_1}\rangle\otimes \mathfrak{B}(N), \ \ x\in N_{\beta _l}. \end{equation} Although these formulas have no explicit form, they are convenient for inductive considerations, particularly in the study of one-sided coideal subalgebras. We develop the coproduct formula by the same method as that in \cite{Kh11} for the case $B_n.$ Firstly, we demonstrate that the values of the elements $u[k,m]$ in $U_q(\mathfrak{g})$ are almost independent of the arrangement of brackets (Lemmas \ref{ins}, \ref{ins1}, \ref{Dins}, \ref{Dins1}). Then, using this fact, we demonstrate that these values form a set of PBW generators (Propositions \ref{strB}, \ref{DstrB}). Next, we find the explicit shuffle representation of those elements (Propositions \ref{shu}, \ref{Dshu}). In case $C_n$ (as well as in cases $A_n$ and $B_n)$ these PBW generators are proportional to shuffle comonomials. This proportionality makes it easy to find the coproduct of those elements inside the shuffle coalgebra. Because there is a clear connection (\ref{copro}) between the coproduct in $U_q(\mathfrak{g})$ and the coproduct in the shuffle coalgebra, we can set up the coproduct formula (Theorem \ref{cos}). In case $D_n,$ each PBW generator is either proportional to a comonomial or a linear combination of two comonomials. These two options allows one to find the coproduct inside the shuffle coalgebra and deduce the coproduct formula (Theorem \ref{Dcos}). The set of PBW generators for $U_q(\mathfrak{g})$ is the union of those sets for positive and negative quantum Borel subalgebras. Thus, we focus only on the positive quantum Borel subalgebra $U_q^+(\mathfrak{g}).$ \section{Preliminaries} \subsection{Skew brackets} Let $X=$ $\{ x_1, x_2,\ldots, x_n\} $ be a set of quantum variables; that is, associated with each $x_i$ there are an element $g_i$ of a fixed Abelian group $G$ and a character $\chi ^i:G\rightarrow {\bf k}^*.$ For every word $w$ in $X,$ let $g_w$ or gr$(w)$ denote an element of $G$ that appears from $w$ by replacing each $x_i$ with $g_i.$ In the same manner, $\chi ^w$ denotes a character that appears from $w$ by replacing each $x_i$ with $\chi ^i.$ Let $G\langle X\rangle $ denote the skew group algebra generated by $G$ and {\bf k}$\langle X\rangle $ with the commutation rules $x_ig=\chi ^i(g)gx_i,$ or equivalently $wg=\chi ^w(g)gw,$ where $w$ is an arbitrary word in $X.$ If $u,$ $v$ are homogeneous in each $x_i,$ $1\leq i\leq n$ polynomials, then the skew brackets are defined by the formula \begin{equation} [u,v]=uv-\chi ^u(g_v) vu. \label{sqo} \end{equation} We use the notation $\chi ^u(g_v)=p_{uv}=p(u,v).$ The form $p(\hbox{-},\hbox{-})$ is bimultiplicative: \begin{equation} p(u, vt)=p(u,v)p(u,t), \ \ p(ut,v)=p(u,v)p(t,v). \label{sqot} \end{equation} In particular $p(\hbox{-},\hbox{-})$ is completely defined by $n^2$ parameters $p_{ij}=\chi ^{i}(g_{j}).$ The brackets satisfy an analog of the Jacobi identity: \begin{equation} [[u, v],w]=[u,[v,w]]+p_{wv}^{-1}[[u,w],v]+(p_{vw}-p_{wv}^{-1})[u,w]\cdot v. 
\label{jak1} \end{equation} The antisymmetry identity transforms as follows: \begin{equation} [u,v]=-p_{uv}[v,u]+(1-p_{uv}p_{vu})u\cdot v \label{cha} \end{equation} The Jacobi identity (\ref{jak1}) implies a conditional identity: \begin{equation} [[u, v],w]=[u,[v,w]],\hbox{ provided that } [u,w]=0. \label{jak3} \end{equation} By the evident induction on length, this result allows for the following generalization: \begin{lemma} \cite[Lemma 2.2]{KL}. If $y_1,$ $y_2,$ $\ldots ,$ $y_m$ are homogeneous linear combinations of words such that $[y_i,y_j]=0,$ $1\leq i<j-1<m,$ then the bracketed polynomial $[y_1y_2\ldots y_m]$ is independent of the precise arrangement of brackets: \begin{equation} [y_1y_2\ldots y_m]=[[y_1y_2\ldots y_s],[y_{s+1}y_{s+2}\ldots y_m]], \ 1\leq s<m. \label{ind} \end{equation} \label{indle} \end{lemma} Another conditional identity is: if $[u,v]=0$ (that is, $uv=p_{uv}vu$), then \begin{equation} [u,[v,w]]=-p_{vw}[[u,w],v]+p_{uv}(1-p_{vw}p_{wv})v\cdot [u,w]. \label{jja} \end{equation} The brackets are related to the product by ad-identities: \begin{equation} [u\cdot v,w]=p_{vw}[u,w]\cdot v+u\cdot [v,w], \label{br1f} \end{equation} \begin{equation} [u,v\cdot w]=[u,v]\cdot w+p_{uv}v\cdot [u,w]. \label{br1} \end{equation} It is easy to verify all of the identities developing the brackets by (\ref{sqo}). \smallskip \subsection{Quantum Borel algebra} The group $G$ acts on the free algebra ${\bf k}\langle X\rangle $ by $ g^{-1}ug=\chi ^u(g)u,$ where $u$ is an arbitrary monomial in $X.$ The skew group algebra $G\langle X\rangle $ has a natural Hopf algebra structure $$ \Delta (x_i)=x_i\otimes 1+g_i\otimes x_i, \ \ \ i\in I, \ \ \Delta (g)=g\otimes g, \ g\in G. $$ Let $C=||a_{ij}||$ be a symmetrizable Cartan matrix and let $D={\rm diag }(d_1, \ldots , d_n)$ be such that $d_ia_{ij}=d_ja_{ji}.$ We denote a Kac-Moody algebra defined by $C,$ see \cite{Kac}, as $\mathfrak g.$ Suppose that parameters $p_{ij}$ are related by \begin{equation} p_{ii}=q^{d_i}, \ \ p_{ij}p_{ji}=q^{d_ia_{ij}},\ \ \ 1\leq i,j\leq n. \label{KM1} \end{equation} In this case the multiparameter quantization $U^+_q ({\mathfrak g})$ is a homomorphic image of $G\langle X\rangle $ defined by Serre relations with the skew brackets in place of the Lie operation: \begin{equation} [\ldots [[x_i,\underbrace{x_j],x_j], \ldots ,x_j]}_{1-a_{ji} \hbox{ times}}=0, \ \ 1\leq i\neq j\leq n. \label{KM2} \end{equation} By \cite[Theorem 6.1]{Khar}, the left-hand sides of these relations are skew-primitive elements in $G\langle X\rangle .$ Therefore the ideal generated by these elements is a Hopf ideal, hence $U^+_q ({\mathfrak g})$ has the natural structure of a Hopf algebra. \subsection{PBW basis.} Recall that a linearly ordered set $V$ is said to be a {\it set of PBW generators} (of infinite heights) if the set of all products \begin{equation} g\cdot v_1^{n_1}\cdot v_2^{n_2}\cdot \ \cdots \ \cdot v_k^{n_k}, \ \ \ g\in G, \ \ v_1<v_2<\ldots <v_k\in V \label{pbge} \end{equation} is a basis of $U_q^+(\mathfrak{g}).$ We fix the order $x_1>x_2>\ldots >x_n$ on the set $X.$ On the set of all words in $X,$ we fix the lexicographical order with the priority from left to right, where a proper beginning of a word is considered to be greater than the word itself. 
A non-empty word $u$ is called a {\it standard Lyndon-Shirshov} word if $vw>wv$ for each decomposition $u=vw$ with non-empty $v,w.$ The {\it standard arrangement} of brackets $[u]$ on a standard word $u$ is defined by induction: $[u]=[[v][w]],$ where $v, w$ are the standard words such that $u=vw$ and $v$ has the minimal length \cite{pSh1}, \cite{pSh2}, see also \cite{Lot}. In \cite{Kh4}, it was proven that the values of bracketed standard words corresponding to positive roots with the lexicographical order form a set of PBW generators (of infinite heights) for $U_q^+(\mathfrak{g}),$ where $\mathfrak{g}$ is a Lie algebra of infinite series $A, B, C, D.$ \smallskip \subsection{Shuffle representation} The {\bf k}-algebra $A$ generated by values of $x_i,$ $1\leq i\leq n$ in $U_q^+(\mathfrak{g})$ is not a Hopf subalgebra because it has no nontrivial group-like elements. Nevertheless, $A$ is a Hopf algebra in the category of Yetter-Drinfeld modules over {\bf k}$[G].$ In particular, $A$ has a structure of a braided Hopf algebra with a braiding $\tau (u\otimes v)=p(v,u)^{-1}v\otimes u.$ The braided coproduct $\Delta ^b:A\rightarrow A\underline{\otimes }A$ is connected with the coproduct on $U_q^+(\mathfrak{g})$ as follows \begin{equation} \Delta ^b(u)=\sum _{(u)}u^{(1)}\hbox{gr}(u^{(2)})^{-1}\underline{\otimes} u^{(2)}, \hbox{ where }\ \Delta (u)=\sum _{(u)}u^{(1)}\otimes u^{(2)}. \label{copro} \end{equation} The tensor space $T(W),$ $W=\sum x_i{\bf k}$ also has the structure of a braided Hopf algebra, which is the {\it braided shuffle algebra} $Sh_{\tau }(W)$ with the coproduct \begin{equation} \Delta ^b(u)=\sum _{i=0}^m(z_1\ldots z_i)\underline{\otimes} (z_{i+1}\ldots z_m), \label{bcopro} \end{equation} where $z_i\in X,$ and $u=(z_1z_2\ldots z_{m-1}z_m)$ is the tensor $z_1\otimes z_2\otimes \ldots \otimes z_{m-1}\otimes z_m,$ called a {\it comonomial}, considered as an element of $Sh_{\tau }(W).$ The braided shuffle product satisfies \begin{equation} (w)(x_i)=\sum _{uv=w}p(x_i,v)^{-1}(ux_iv), \ \ (x_i)(w)=\sum _{uv=w}p(u,x_i)^{-1}(ux_iv). \label{spro} \end{equation} The map $x_i\rightarrow (x_i)$ defines a homomorphism of the braided Hopf algebra $A$ into the braided Hopf algebra $Sh_{\tau }(W).$ This is extremely useful for calculating the coproducts due to formulae (\ref{copro}) and (\ref{bcopro}). If $q$ is not a root of $1,$ then this representation is faithful. Otherwise, its kernel is the largest Hopf ideal in $A^{(2)},$ where $A^{(2)}$ is the ideal of $A $ generated by values of $x_ix_j,$ $1\leq i,j\leq n.$ See details in P. Schauenberg \cite{Sch}, M. Rosso \cite{Ros}, M. Takeuchi \cite{Tak1}, D. Flores de Chela and J.A. Green \cite{FC}, N. Andruskiewitsch, H.-J. Schneider \cite{AS}, V. K. Kharchenko \cite{Kh03}. \section{Relations in $U_q^+({\mathfrak sp}_{2n}).$} Throughout the following three sections, we fix a parameter $q$ such that $q^3\not=1,$ $q\not= -1.$ If $C$ is a Cartan matrix of type $C_n,$ then relations (\ref{KM1}) take the form \begin{equation} p_{ii}=q, \ 1\leq i<n;\ \ p_{i\, i-1}p_{i-1\, i}=q^{-1}, \ 1<i<n;\ p_{ij}p_{ji}=1,\ j>i+1; \label{b1rel} \end{equation} \begin{equation} \ p_{nn}=q^2, \ \ p_{n-1\, n}p_{n\, n-1}=q^{-2}. 
\label{b1rell} \end{equation} In this case, the quantum Borel algebra $U^+_q (\mathfrak{sp}_{2n})$ is a homomorphic image of $G\langle X\rangle $ subject to the following relations \begin{equation} [x_i,[x_i,x_{i+1}]]=[[x_i,x_{i+1}],x_{i+1}]=0, \ 1\leq i<n-1; \ \ [x_i,x_j]=0, \ \ j>i+1; \label{relb} \end{equation} \begin{equation} [[x_{n-1},x_n],x_n]=[x_{n-1},[x_{n-1},[x_{n-1},x_n]]]=0. \label{relbl} \end{equation} \begin{lemma} If $u$ is a standard word independent of $x_n,$ then either $u$ $=x_kx_{k+1}\ldots x_m,$ $k\leq m< n,$ or $[u]=0$ in $U_{q}^+(\mathfrak{sp}_{2n}).$ Here $[u]$ is a nonassociative word with the standard arrangement of brackets. \label{nul} \end{lemma} \begin{proof} The Hopf subalgebra of $U_q^+(\mathfrak{sp}_{2n})$ generated by $x_i,$ $1\leq i<n$ is the Hopf algebra $U_{q}^+({\mathfrak sl}_{n})$ defined by the Cartan matrix of type $A_{n-1}.$ By this reason the third statement of \cite[Theorem $A_n$]{Kh4} applies. \end{proof} \begin{definition} \rm In what follows, $x_i,$ $n<i<2n$ denotes the generator $x_{2n-i}.$ Respectively, $v(k,m),$ $1\leq k\leq m<2n$ is the word $x_kx_{k+1}\cdots x_{m-1}x_m,$ whereas $v(m,k)$ is the opposite word $x_mx_{m-1}\cdots x_{k+1}x_k.$ If $1\leq i<2n,$ then $\phi (i)$ denotes the number $2n-i,$ so that $x_i=x_{\phi (i)}.$ \label{fis} \end{definition} \begin{definition} \rm If $k\leq i<m<2n,$ then we set \begin{equation} \sigma _k^m\stackrel{df}{=}p(v(k,m),v(k,m)), \label{mu11} \end{equation} \begin{equation} \mu _k^{m,i}\stackrel{df}{=}p(v(k,i),v(i+1,m))\cdot p(v(i+1,m),v(k,i)). \label{mu1} \end{equation} \label{slo} \end{definition} \begin{lemma} For each $i,$ $k\leq i<m$ we have \begin{equation} \mu _k^{m,i}=\sigma _k^m(\sigma _k^i \sigma _{i+1}^m)^{-1}. \label{mu23} \end{equation} \label{mu} \end{lemma} \begin{proof} Because $p(\hbox{-},\hbox{-})$ is a bimultiplicative map, there is a decomposition \begin{equation} p(ab,ab)=p(a,a)p(b,b)\cdot p(a,b)p(b,a). \label{mu55} \end{equation} Applying this equality to $a=v(k,i),$ $b=v(i+1,m),$ we get the required relation. \end{proof} \begin{lemma} If $1\leq k\leq m<2n,$ then \begin{equation} \sigma_k^m =\left\{ \begin{matrix} q^2,\hfill &\hbox{if } m=\phi (k);\hfill \cr q,\hfill &\hbox{otherwise}.\hfill \end{matrix} \right. \label{mu21} \end{equation} \label{sig} \end{lemma} \begin{proof} The bimultiplicativity of $p(\hbox{-},\hbox{-})$ implies that $\sigma _k^m=\prod _{k\leq s,t\leq m}p_{st}$ is the product of all coefficients of the $(m-k+1)\times (m-k+1)$-matrix $||p_{st}||.$ By (\ref{b1rel}) all coefficients on the main diagonal equal $q$ except $p_{nn}=q^2.$ If $m<n$ or $k>n,$ then for non-diagonal coefficients, we have $p_{st}p_{ts}=1$ unless $|s-t|=1,$ whereas $p_{s\, s+1}p_{s+1\, s}=q^{-1}.$ Hence, $\sigma _k^m=q^{m-k+1}\cdot q^{-(m-k)}=q.$ If $m=n$ or $k=n$ but not both, then we have $p_{nn}=q^2,$ $p_{n\, n-1}p_{n-1\, n}=q^{-2}.$ By the above reasoning, we get $\sigma _k^m=q^{(m-k)+2}\cdot q^{-(m-k-1)-2}=q.$ Of course, if $k=n=m$ then $\sigma _k^m=p_{nn}=q^2.$ In the remaining case, $k<n<m,$ we use induction on $m-k.$ By (\ref{mu55}) we have \begin{equation} \sigma _k^{m+1}=\sigma _k^m\cdot q\cdot p(v(k,m),x_{m+1})\cdot p(x_{m+1}, v(k,m)). \label{si69} \end{equation} We shall prove that if $k<n<m,$ then \begin{equation} p(v(k,m),x_{m+1})\cdot p(x_{m+1}, v(k,m))=\left\{ \begin{matrix} 1,\hfill &\hbox{if } k=\phi (m)-1;\hfill \cr q^{-2},\hfill &\hbox{if } k=\phi (m);\hfill \cr q^{-1},\hfill &\hbox{otherwise.}\hfill \end{matrix} \right. 
\label{si45} \end{equation} The left hand side of the above equality is $\prod_{k\leq t\leq m} p_{t\, m+1}p_{m+1\, t}.$ If $m=n,$ then by \ref{b1rel} and \ref{b1rell}, the factor $p_{t\, n+1}p_{n+1\, t}$ differs from 1 only if $t\in \{ n-2, n-1, n \} $ and related values are respectively $q^{-1}, q^2, q^{-2}.$ Hence, if $k<n-1=\phi (m)-1,$ then the total product is $q^{-1};$ if $k=n-1=\psi (m)-1,$ then this is $1;$ if $k=n=\phi (m),$ then this is $q^{-2}.$ If $m>n,$ then the factor $p_{t\, m+1}p_{m+1\, t}$ differs from 1 only if $$t\in \{ \phi (m)-2, \phi (m)-1, \phi (m), m \}$$ and related values are respectively $q^{-1}, q^2, q^{-1}, q^{-1}.$ Therefore if $k<\phi (m)-1,$ then the whole product is $q^{-1};$ if $k=\phi (m)-1,$ then this is $1;$ if $k=\phi (m),$ then this is $q^{-2};$ if $k>\phi (m),$ then this is again $q^{-1}.$ This completes the proof of (\ref{si45}). To complete the inductive step, we use (\ref{si45}) and inductive hypothesis: if $k$ $=\phi (m)-1,$ then $\sigma _k^{m+1}$ $=q\cdot q\cdot 1$ $=q^2;$ if $k$ $=\phi (m),$ then $\sigma _k^{m+1}$ $=q^2\cdot q\cdot q^{-2}$ $=q;$ otherwise $\sigma _k^{m+1}$ $=q\cdot q\cdot q^{-1}$ $=q.$ \end{proof} \smallskip We define the bracketing of $v(k,m),$ $k\leq m<2n$ as follows. \begin{equation} v[k,m]=\left\{ \begin{matrix} [[[\ldots [x_k,x_{k+1}], \ldots ],x_{m-1}], x_m],\hfill &\hbox{if } m<\phi (k);\hfill \cr [x_k,[x_{k+1},[\ldots ,[x_{m-1},x_m]\ldots ]]],\hfill &\hbox{if } m>\phi (k);\hfill \cr [\! [v[k,m-1],x_m]\! ],\hfill &\hbox{if } m=\phi (k),\hfill \end{matrix}\right. \label{ww} \end{equation} where in the latter term, $[\! [u,v]\! ]\stackrel{df}{=}uv-q^{-1}p(u,v)vu.$ Conditional identity (\ref{ind}) demonstrates that the value of $v[k,m]$ in $U_q^+(\mathfrak{sp}_{2n})$ is independent of the precise arrangement of brackets, provided that $m\leq n$ or $k\geq n.$ Now we are going to analyze what happens with the arrangement of brackets if $k< n<m\neq \phi (k).$ \begin{lemma} If $k\leq n\leq m<\phi (k),$ then the value in $U_q^+(\mathfrak{sp}_{2n})$ of the bracketed word $[y_kx_{n}x_{n+1}\cdots x_m],$ where $y_k=v[k,n-1],$ is independent of the precise arrangement of brackets. \label{ins} \end{lemma} \begin{proof} To apply (\ref{ind}), it suffices to check $[y_k,x_t]=0,$ where $n<t\leq m$ or, equivalently, $\phi (m)\leq t<n.$ By (\ref{jak3}) we have $$ [y_k,x_t]=\hbox{\Large[}[v[k,t-2],v[t-1,n-1]],x_t\hbox{\Large]} =\hbox{\Large[}v[k,t-2], [v[t-1,n-1],x_t]\hbox{\Large]}. $$ By Lemma \ref{nul} the element $[v[t-1,n-1],x_t]$ equals zero in $U_{q}^+(\mathfrak{so}_{2n})$ because the word $u(t-1,n)x_t$ is independent of $x_n,$ it is standard, and the standard bracketing is precisely $[v[t-1,n],x_t].$ \end{proof} \begin{lemma} If $k\leq n,$ $ \phi (k)<m,$ then the value in $U_q^+(\mathfrak{sp}_{2n})$ of the bracketed word $[x_kx_{k+1}\cdots x_ny_m],$ where $y_m=v[n+1,m],$ is independent of the precise arrangement of brackets. 
\label{ins1} \end{lemma} \begin{proof} To apply (\ref{ind}), we need the equalities $[x_t,y_m]=0,$ $k\leq t<n.$ The polynomial $[x_t,y_m]$ is independent of $x_n.$ Moreover, $[x_t,y_m]$ is proportional to $[y_m, x_t]$ due to antisymmetry identity (\ref{cha}) because $$p(x_t, y_m)p(y_m,x_t)=p_{t\, t+1}p_{tt}p_{t\, t-1}\cdot p_{t+1\, t}p_{tt}p_{t-1\, t}=1.$$ The equality $[y_m, x_t]=0$ turns to the proved above equality $[v[k,n-1],x_t]=0$ if one renames the variables $x_{n+1}\leftarrow x_k,$ $x_{n+2}\leftarrow x_{k+1}, \ldots .$ \end{proof} \section{PBW generators of $U_q^+(\mathfrak{sp}_{2n})$} \smallskip \begin{proposition} If $q^3\neq 1,$ $q\neq -1,$ then values of the elements $v[k,m],$ $k\leq m\leq \phi (k)$ form a set of PBW generators with infinite heights for the algebra $U_q^+(\mathfrak{sp}_{2n})$ over {\bf k}$[G].$ \label{strB} \end{proposition} \begin{proof} A word $v(k,m)$ is a standard Lyndon-Shirshov word provided that $k$ $\leq m$ $<\phi (k)$. By \cite[Theorem $C_n,$ p. 218]{Kh4} these words with the standard bracketing, say $[v(k,m)],$ become a set of PBW generators if we add to them the elements $[v_k]\stackrel{df}{=}[v[k,n-1], v[k,n]]$ $1\leq k<n.$ We shall use induction on $m-k$ in order to demonstrate that the value in $U_q^+(\mathfrak{sp}_{2n})$ of $[v(k,m)],$ $k\leq m<\phi (k)$ is the same as the value of $v[k,m]$ with the bracketing given in (\ref{ww}). If $m\leq n,$ then the value of $v[k,m]$ is independent of the arrangement of brackets, see Lemma \ref{indle}. If $k<n<m,$ then according to \cite[Lemma 7.18]{Kh4}, the brackets in $[v(k,m)]$ are set by the following recurrence formulae (we note that $[v(k, m)]=[v_{k\, \phi (m)}]$ in the notations of \cite{Kh4}): \begin{equation} \begin{matrix} [v(k,m)]=[x_k[v(k+1, m)]], \hfill & \hbox{if } m<\phi (k)-1; \hfill \cr [v(k, m)]=[[v(k, m-1)]x_m], \hfill & \hbox{if } m=\phi (k)-1.\hfill \end{matrix} \label{wsk} \end{equation} In the latter case, the induction applies directly. In the former case, using induction and Lemma \ref{ins}, we have $$[v(k+1, m)]=v[k+1, m]=[v[k+1,n-1], v[n,m]].$$ At the same time $[x_k,x_t]=0,$ $n\leq t\leq m$ because $x_t=x_{\phi (t)}$ and $\phi (t)\geq \phi (m)>k+1.$ This implies $[x_k, v[n,m]]=0.$ Applying conditional identity (\ref{jak3}), we get $$ [v(k, m)]=[x_k[v[k+1,n-1], v[n,m]]]={\big [}[x_kv[k+1,n-1]], v[n,m]{\big ]}=v[k,m]. $$ It remains to analyze the case $m=\phi (k).$ We have to demonstrate that if in the set $V$ of PBW generators of Lyndon-Shirshov standard words one replaces the elements $[v_k]$ with $v[k,\phi (k)],$ $1\leq k<n$ then the obtained set is still a set of PBW generators. To do this, due to \cite[Lemma 2.5]{KhT} with $T\leftarrow \{ v[k,\phi (k)], 1\leq k<n\} ,$ $S\leftarrow U_q(\mathfrak{sp}_{2n}),$ it suffices to see that the leading term of the PBW decomposition of $v[k,\phi (k)]$ in the generators $V$ is proportional to $[v_k].$ By definition (\ref{ww}) with $m=\phi (k),$ we have $$ v[k,m]=v[k,m-1]x_m-q^{-1}\pi x_mv[k,m-1] $$ $$ =-q^{-1}\pi [x_m,v[k,m-1]]+(1-q^{-1}\pi \pi^{\prime})v[k,m-1]\cdot x_m, $$ where $\pi =p(v(k,m-1),x_m),$ $\pi ^{\prime }=p(x_m,v(k,m-1)).$ The second term of the latter sum is a basis element (\ref{pbge}) in the PBW generators $V.$ This element starts with $v[k,m-1]$ which is lesser than $[v_k].$ Hence it remains to analyze the bracket $[x_k,v[k,m-1]].$ If $k=n-1,$ then $[x_k,v[k,m-1]]$ $=[x_{n-1}, [x_{n-1},x_{n}]]$ $=[v_k].$ If $k<n-1,$ then by Lemma \ref{ins} we have \begin{equation} [x_k,v[k,m-1]]=[ x_k, [v[k,n], v[n+1, m-1] ] ]. 
\label{esk} \end{equation} The basic relations (\ref{relb}) imply $[x_k,[x_k,x_{k+1}]]=0,$ $[x_k, v[k+2,n]]=0.$ By Lemma \ref{indle} value of $v[k,n]$ is independent of the arrangement of brackets, $$v[k,n]=[[x_k,x_{k+1}],v[k+2,n]],$$ hence $[ x_k, v[k,n] ]=0.$ By Eq. (\ref{jja}) with $u\leftarrow x_k,$ $v\leftarrow v[k,n]$, $w\leftarrow v[n+1, m-1],$ the right hand side of (\ref{esk}) is a linear combination of the following two elements: \begin{equation} [ x_k, v[n+1, m-1]], v[k,n] ], \ \ \ \ v[k,n]\cdot [ x_k, v[n+1, m-1]]. \label{sk} \end{equation} The latter element starts with a factor $v[k,n]$ which is lesser than $[v_k].$ Hence it remains to prove that the leading term of the former element is proportional $[v_k].$ By downward induction on $k$ we shall prove the following decomposition \begin{equation} v[n+1, m-1]=\alpha \, v[k+1,n-1]+\sum_{s=k+2}^{n-1}\gamma _s \, v[s,n-1]\cdot U_s,\ \ \alpha \neq 0. \label{desk} \end{equation} If $k=n-2,$ then this decomposition reduces to $x_{n+1}=x_{n-1}.$ Let us apply $[\hbox{-}, x_m]$ to the both sides of the above equality. Using (\ref{cha}), we see that $[v[k+1,n-1],x_m]$ is proportional to $v[k,n-1]+\gamma _{k+1}\, v[k+1,n-1]\cdot x_{m},$ whereas (\ref{br1f}) implies $[v[s,n-1]\cdot U_s, x_m]$ $=v[s,n-1]\cdot [U_s, x_{k-1}]$ for $s\geq k+2.$ This completes the inductive step. Let us apply $[x_k, \hbox{-}]$ to both sides of the already proved Eq. (\ref{desk}). By (\ref{br1}), we get \begin{equation} [x_k,v[n+1, m-1]]=\alpha \, v[k,n-1]+\sum_{s=k+2}^{n-1}\gamma _s^{\prime } \, v[s,n-1]\cdot [x_k,U_s]. \label{de1} \end{equation} Finally, let us apply $[\hbox{-},v[k,n]]$ to both sides of (\ref{de1}). In this way we find a decomposition of the first element of $(\ref{sk})$: \begin{equation} [[x_k,v[n+1, m-1]], v[k,n]]=\alpha \, [v_k]+\sum_{s=k+2}^{n-1}\gamma _s^{\prime } \, v[s,n-1]\cdot [x_k,U_s]\cdot v[k,n] \label{de11} \end{equation} $$ -\sum_{s=k+2}^{n-1}\gamma _s^{\prime \prime} \, v[k,n]\cdot v[s,n-1]\cdot [x_k,U_s]. $$ All summands, except the first one, start with $v[k,n],$ $v[s,n-1]$ that are lesser than $[v_k].$ Hence, the leading term, indeed, is proportional to $[v_k].$ \end{proof} \begin{proposition} Let $k\leq m<2n.$ In the shuffle representation, we have \begin{equation} v[k,m]=\alpha _k^m\cdot (v(m,k)), \ \ \alpha _k^m\stackrel{df}{=}\varepsilon_k^m (q-1)^{m-k}\cdot \prod _{k\leq i<j\leq m}p_{ij}, \label{shur} \end{equation} where \begin{equation} \varepsilon _k^m =\left\{ \begin{matrix} 1+q,\hfill &\hbox{if } k\leq n\leq m, m\neq \phi (k);\hfill \cr 1+q^{-1},\hfill &\hbox{if } m=\phi (k)\neq n;\hfill \cr 1,\hfill &\hbox{otherwise.}\hfill \end{matrix} \right. \label{eps} \end{equation} \label{shu} \end{proposition} \begin{proof} We use induction on $m-k.$ If $m=k,$ then the equality reduces to $x_k=(x_k).$ a). Consider first the case $m<\phi (k).$ By the inductive supposition, we have $v[k,m-1]=\alpha _k^{m-1}\cdot (w),$ $w=v(m-1,k).$ Using (\ref{spro}), we may write $$ v[k,m]=\alpha _k^{m-1}\{ (w)(x_m)-p(w,x_m)\cdot (x_m)(w)\} $$ \begin{equation} =\alpha _k^{m-1}\sum _{uv=w}\{ p(x_m,v)^{-1}-p(v,x_m)\} (ux_mv), \label{sum1} \end{equation} where $p(v,x_m)=p(w,x_m)p(u,x_m)^{-1}$ because $w=uv.$ If $m\leq n,$ then relations (\ref{b1rell}) imply $p(v,x_m)p(x_m,v)=1$ with only one exception being $v=w.$ Hence, sum (\ref{sum1}) has just one term. The coefficient of $(x_mw)=(v(m,k))$ equals $$ \alpha _k^{m-1}p(w,x_m)(p(w,x_m)^{-1}p(x_m,w)^{-1}-1)=\alpha _k^{m-1}\prod_{i=k}^{m-1}p_{im} \cdot (p_{m-1\, m}^{-1}p_{m\, m-1}^{-1}-1). 
$$ If $m<n,$ then the latter factor equals $q-1,$ whereas if $m=n,$ then it is $q^2-1=\varepsilon_k^{n}(q-1).$ Suppose that $m>n$ and still $m<\phi (k).$ In decomposition (\ref{sum1}), we have $v=v(s,k),$ $k\leq s<m$ and hence $p(x_m,v)p(v,x_m)=\prod\limits_{t=k}^{s}p_{mt}p_{tm}.$ The product $p_{mt}p_{tm}$ differs from $1$ only if $t\in \{ \phi(m)-1, \phi (m), \phi (m)+1, m-1\};$ related values are $q^{-1},$ $q^2,$ $q^{-1},$ $q^{-1}$ if $m>n+1,$ and they are $q^{-1},$ $q^2,$ $q^{-2}$ if $m=n+1,$ $\phi (m)+1=m-1.$ This implies \begin{equation} p(x_m,v)p(v,x_m)= \left\{ \begin{matrix} q^{-1},\hfill &\hbox{ if } s=\phi (m)-1, \hbox{ or } s=m-1;\hfill \cr q,\hfill &\hbox{ if } s=\phi (m);\hfill \cr 1,\hfill &\hbox{otherwise.}\hfill \end{matrix} \right. \label{pups} \end{equation} Hence, in (\ref{sum1}), only three terms remain with $s=\phi (m)-1,$ $s=\phi (m),$ and $s=m-1.$ If $s=\phi (m)-1$ or $s=\phi (m),$ then $(ux_mv)$ equals $$ ux_mv=v(m-1,\phi (m)+1)x_m^2v(\phi (m)-1,k), $$ whereas the coefficient of the comonomial $(ux_mv)$ in sum (\ref{sum1}) is $$ p(x_m,v_0)^{-1}-p(v_0,x_m)+p(x_m,x_mv_0)^{-1}-p(x_mv_0, x_m), $$ where $v_0=v(\psi (m)-1,k).$ Taking into account (\ref{pups}), we find the above sum: $$ p(v_0,x_m)(q-1+q\cdot q^{-1}-q)=0. $$ Thus, in (\ref{sum1}) only one term remains, with $v=v(m-1,k),$ $u=\emptyset $. This term has the required coefficient: $$ \alpha _k^m=\alpha _k^{m-1}(p(x_m,w)^{-1}-p(w,x_m))=\alpha _k^{m-1}p(w,x_m)(q-1). $$ b). Consider the case $m>\phi (k).$ By the inductive supposition, we have $$v[k+1,m]=\alpha _{k+1}^{m}\cdot (w),\ \ \ w=v(m,k+1).$$ Using (\ref{spro}), we get $$ v[k,m]=\alpha _{k+1}^m\{ (x_k)(w)-p(x_k,w)\cdot (w)(x_k)\} $$ $$ =\alpha _{k+1}^{m}\sum _{uv=w}\{ p(u,x_k)^{-1}-p(x_k,w)p(x_k,v)^{-1}\} (ux_kv). $$ \begin{equation} =\alpha _{k+1}^{m}\sum _{uv=v(m,k+1)}p(x_k,u\{ p(u,x_k)^{-1}p(x_k,u)^{-1}-1\} (ux_kv). \label{sum2} \end{equation} If $k\geq n,$ then $p(u,x_k)p(x_k,u)=1,$ unless $u=w.$ Hence, (\ref{sum2}) has only one term, and the coefficient equals $$ \alpha _{k+1}^{m}p(x_k,w)(p(w,x_k)^{-1}p(x_k,w)^{-1}-1)=\alpha _{k+1}^{m}p(x_k,w)(p_{k+1\, k}^{-1}p_{k\, k+1}^{-1}-1). $$ If $k>n,$ then the latter factor equals $q-1,$ whereas if $k=n,$ then it is $q^2-1$ $=(q-1)\varepsilon _n^m$ as claimed. Suppose that $k<n.$ In this case, $x_k=x_t$ with $m>t\stackrel{df}{=} \phi (k)>\phi (n)=n.$ Let $u=v(m,s).$ If $s>t+1,$ then $u$ depends only on $x_i,$ $i<k-1,$ and relations (\ref{b1rel}), (\ref{b1rell}) imply $p(x_k,u)p(u,x_k)=1.$ If $s<t,$ $s\neq k+1,$ then $k+1<n$ (otherwise $s=n=k+1),$ and we have $p(x_k,u)p(u,x_k)$ $=p_{k-1\, k}p_{kk}p_{k+1\, k}\cdot p_{k\, k-1}p_{kk}p_{k+1\, k}$ $=1$ because $x_t=x_k.$ Hence, three terms remain in (\ref{sum2}) with $s=t,$ $s=t+1,$ and $s=k+1.$ If $u=v(m, t)$ or $u=v(m, t+1),$ then $ux_kv$ $=v(m,t+1)x_k^2v(t-1,k),$ whereas the coefficient of the corresponding tensor is $$ p(v(m,t+1),x_k)^{-1}-p(x_k,v(m,t+1))+p(v(m,t),x_k)^{-1}-p(x_k,v(m, t)) $$ $$ =p(x_k,v(m, t+1))\{ p_{k-1\, k}^{-1}p_{k\, k-1}^{-1}-1+ p_{kk}^{-1}p_{k-1\, k}^{-1}p_{k\, k-1}^{-1}-p_{kk}\} =0 $$ because $p_{kk}=q,$ $p_{k-1\, k}p_{k\, k-1}=q^{-1},$ and $p_{kr}p_{rk}=1$ if $r>t+1.$ Thus, only one term remains in (\ref{sum1}), and $$ \alpha _k^m=\alpha _{k+1}^m(p(w,x_k)^{-1}-p(x_k,w))=\alpha _{k+1}^mp(x_k,w)(q-1). $$ c). Let $m=\phi (k)\neq n.$ In this case, $x_m=x_k,$ $p_{kk}=q.$ By definition (\ref{ww}) we have \begin{equation} v[k,m]=v[k,m-1]\cdot x_k-q^{-1} p(v(k,m-1),x_m)x_k\cdot v[k,m-1]. 
\label{kk} \end{equation} Case a) allows us to find the shuffle representation $$v[k,m-1]=\alpha _k^{m-1} (w),\ \ \ w=v(m-1,k).$$ Hence the right-hand side of (\ref{kk}) in the shuffle form is $$ \alpha_k^{m-1} \sum _{uv=w} \left( p(x_k,v)^{-1}- q^{-1} p(v(k,m-1),x_m) \cdot p(u, x_k)^{-1}\right) \cdot (ux_kv) $$ \begin{equation} =\alpha_k^{m-1} \sum _{uv=v(m-1,k)} p(v,x_m) \left( p(x_k,v)^{-1}p(v,x_k)^{-1}- q^{-1} \right) \cdot (ux_kv). \label{bli} \end{equation} The coefficient of $(v(k,m))$ related to $u=\emptyset ,$ $v=v(m-1,k)$ equals \begin{equation} \alpha_k^{m-1}p(v,x_m)\cdot (p(x_k,v)^{-1}p(v,x_k)^{-1}-q^{-1}) =\alpha_k^{m-1}(1-q^{-1})\cdot \prod _{i=k}^{m-1}p_{im}. \label{uje} \end{equation} Here we have used $x_k=x_m$ and Eq. (\ref{si45}) with $m\leftarrow m-1,$ $k\leftarrow k.$ It remains to show that all other terms in (\ref{bli}) are canceled. In this case we would have $$ \varepsilon _k^m=\varepsilon _k^{m-1} (1-q^{-1})(q-1)^{-1}=(1+q)(1-q^{-1})(q-1)^{-1}=1+q^{-1} $$ as required. If $u=v(m-1,k),$ $v=\emptyset $ or $u=v(m-1,k+1),$ $v=x_k,$ then $$ux_kv=v(m-1,k+1)x_k^2,$$ whereas the total coefficient of the related comonomial is proportional to $$ 1-q^{-1}\cdot p(u,x_m)p(u,x_k)^{-1}+p_{kk}^{-1}-q^{-1}\cdot p_{kk}=0. $$ Let $u=v(m-1,s),$ $v=v(s-1,k),$ $k+1<s<m.$ The whole coefficient of the comonomial $(ux_kv)$ takes the form $$ \alpha _k^{m-1}p(v(s-1,k),x_m)\cdot \left( p(x_k, v(s-1,k))^{-1}p(v(s-1,k),x_k)^{-1}-q^{-1})\right) . $$ The latter factor equals $\prod\limits_{t=k}^{s-1}p_{kt}^{-1}p_{tk}^{-1}-q^{-1}.$ The product $p_{kt}p_{tk}$ differs from $1$ only if $t\in \{ k, k+1\}$ and related values are $q^2$ and $q^{-1}.$ This implies that the coefficient of $(ux_kv)$ has a factor $q^{-2}\cdot q-q^{-1}=0.$ \end{proof} \section{Coproduct formula for $U_q^+({\mathfrak{sp}}_{2n})$} \begin{theorem} In $U_q^+(\mathfrak{sp}_{2n})$ the coproduct on the elements $v[k,m],$ $k\leq m<2n$ has the following explicit form \begin{equation} \Delta (v[k,m])=v[k,m]\otimes 1+g_{km}\otimes v[k,m] \label{co} \end{equation} $$ +\sum _{i=k}^{m-1}\tau _i(1-q^{-1})g_{ki}\, v[i+1,m]\otimes v[k,i], $$ where $\tau _i=1$ with two exceptions, being $\tau _{n-1}=1+q^{-1}$ if $m=n,$ and $\tau _n=1+q^{-1}$ if $k=n.$ Here $g_{ki}=\hbox{\rm gr}(v(k,i))=g_kg_{k+1}\cdots g_i.$ \label{cos} \end{theorem} \begin{proof} By Proposition \ref{shu} we have the shuffle representation \begin{equation} v[k,m]=\alpha _k^m\cdot (v(m,k)). \label{Cco2} \end{equation} Using (\ref{bcopro}), it is easy to find the braided coproduct of the comonomial shuffle: $$ \Delta ^b_0((v(m,k))=\sum _{i=k}^{m-1}(v(m,i+1))\underline{\otimes }(v(i,k)), $$ where for short we put $\Delta _0^b(U)=\Delta^b(U)-U\underline{\otimes }1-1\underline{\otimes }U.$ Taking into account (\ref{Cco2}), we have \begin{equation} \Delta ^b_0(v[k,m])=\alpha _k^m\cdot \sum _{i=k}^{m-1} (\alpha _{i+1}^m)^{-1}v[i+1,m]\underline{\otimes }(\alpha _i^k)^{-1}v[k,i]. \label{co5} \end{equation} Formula (\ref{copro}) demonstrates that the tensors $u^{(1)}\otimes u^{(1)}$ of the (unbraided) coproduct and tensors $u^{(1)}_b\underline{\otimes } u^{(1)}_b$ of the braided one are related by $u^{(1)}_b=u^{(1)}{\rm gr}(u^{(2)})^{-1},\ $ $u^{(2)}_b=u^{(2)}.$ The equality (\ref{co5}) provides the values of $u^{(1)}_b$ and $u^{(2)}_b.$ Hence we may find $u^{(1)}$ $=\alpha _k^m(\alpha _k^i\alpha _{i+1}^m)^{-1}$ $\cdot v[i+1,m]g_{ki}$ and $u^{(2)}=v[k,i],$ where $g_{ki}={\rm gr}(v[k,i]).$ The commutation rules imply $$ v[i+1,m]g_{ki}=p(v(i+1,m), v(k,i))g_{ki}v[i+1,m]. 
$$ Thus, the coproduct has the form (\ref{co}), where $$\tau _i(1-q^{-1})=\alpha _k^m(\alpha _k^i\alpha _{i+1}^m)^{-1}p(v(i+1,m), v(k,i)).$$ The definition of $\alpha _k^m$ given in (\ref{shur}) shows that $$ \alpha _k^m(\alpha _k^i\alpha _{i+1}^m)^{-1}=\varepsilon _k^m(\varepsilon_k^i\varepsilon_{i+1}^m)^{-1}\cdot p(v(k,i),v(i+1,m)) $$ because $$ \left( \prod _{k\leq a<b\leq i}p_{ab}\prod _{i+1\leq a<b\leq m} p_{ab}\right) ^{-1}\prod _{k\leq a<b\leq m}p_{ab}=p(v(k,i),v(i+1,m)). $$ The definition of $\mu _k^{m,i}$ given in (\ref{mu1}) implies $$\tau _i(1-q^{-1})=\varepsilon _k^m(\varepsilon_k^i\varepsilon_{i+1}^m)^{-1} (q-1)\mu _k^{m,i};$$ that is, $\tau _i=\varepsilon _k^m(\varepsilon_k^i\varepsilon_{i+1}^m)^{-1}q\mu _k^{m,i}.$ By (\ref{mu23}), we have $\mu _k^{m,i}=\sigma _k^m(\sigma _k^i\sigma _{i+1}^m)^{-1}.$ Using (\ref{mu21}) and (\ref{eps}), we see that \begin{equation} \varepsilon _k^m\sigma _k^m=\left\{ \begin{matrix} q^2+q,\hfill &\hbox{if } k\leq n\leq m, k\neq m;\hfill \cr q^2,\hfill &\hbox{if } k=n=m;\hfill \cr q,\hfill &\hbox{otherwise}.\hfill \end{matrix} \right. \label{tau1} \end{equation} Now, it is easy to check that the $\tau $'s have the following elegant form \begin{equation} \tau_i=\varepsilon _k^m\sigma _k^m(\varepsilon _k^i\sigma _k^i)^{-1}(\varepsilon _{i+1}^m\sigma _{i+1}^m)^{-1} q \label{tau1} \end{equation} $$ =\left\{ \begin{matrix} 1+q^{-1},\hfill &\hbox{if } i=n-1, m=n; \hbox{ or } k=i=n;\hfill \cr 1,\hfill &\hbox{otherwise}.\hfill \end{matrix} \right. $$ \end{proof} {\bf Remark 1.} \label{t} If $q$ is a root of $1,$ say $q^t=1,$ $t>2,$ then the shuffle representation is not faithful. Therefore in this case, the formula (\ref{co}) is proved only for the Frobenius-Lusztig kernel $u_q(\mathfrak{sp}_{2n}).$ Nevertheless, all tensors in (\ref{co}) have degree at most 2 in each variable. At the same time, general results on combinatorial representation of Nichols algebras \cite[Section 5.5]{Ang} demonstrate that in case $C_n,$ the kernel of the natural projection $U_q(\mathfrak{sp}_{2n})\rightarrow u_q(\mathfrak{sp}_{2n})$ is generated by polynomials of degree grater then 2 in (or independent of) each given variable. Hence (\ref{co}) remains valid in this case as well. \section{Relations in $U_q^+(\mathfrak{so}_{2n})$} In what follows, we fix a parameter $q$ such that $q\not= -1.$ If $C$ is a Cartan matrix of type $D_n,$ then relations (\ref{KM1}) take the form \begin{equation} p_{ii}=q, \ 1\leq i\leq n;\ \ p_{i\, i-1}p_{i-1\, i}=p_{n-2\, n}p_{n\, n-2}=q^{-1}, \ 1<i<n;\ \label{Db1rel} \end{equation} \begin{equation} p_{ij}p_{ji}=p_{n-1\, n}p_{n\, n-1}=1,\hbox{ if } j>i+1\, \& \, (i,j)\neq (n,n-2). \label{Db1rell} \end{equation} The quantum Borel algebra $U^+_q (\mathfrak{so}_{2n})$ can be defined by the condition that the Hopf subalgebras $U_{n-1}$ and $U_n$ generated, respectively, by $x_1,x_2,\ldots , x_{n-1}$ and $x_1,x_2,\ldots , x_{n-2}, x_n$ are Hopf algebras $U_q(\mathfrak{sl}_n)$ of type $A_{n-1},$ and by one additional relation \begin{equation} [x_{n-1},x_n]=0. \label{Drelb} \end{equation} Recall that $x_i,$ $n<i<2n$ denotes the generator $x_{2n-i},$ whereas if $1\leq i<2n,$ then $\phi (i)$ equals $2n-i,$ so that $x_i=x_{\phi (i)},$ see Definition \ref{fis}. 
\begin{definition} \rm We define words $e(k,m),$ $1\leq k\leq m<2n$ in the following way: \begin{equation} e(k,m)=\left\{ \begin{matrix} x_kx_{k+1}\cdots x_{m-1}x_m,\hfill & \hbox{ if } m<n \hbox{ or } k>n;\cr x_kx_{k+1}\cdots x_{n-2}x_nx_{n+1}\cdots x_m, \hfill & \hbox{ if } k<n-1<m;\hfill \cr x_nx_{n+1}\cdots x_m,\hfill & \hbox{ if } k=n-1<m;\hfill \cr x_nx_{n+2}x_{n+3}\cdots x_m,\hfill & \hbox{ if } k=n.\hfill \end{matrix} \right. \label{De} \end{equation} Respectively, $e(m,k)$ is the word opposite to $e(k,m).$ Further, we define a word $e^{\prime }(k,m)$ as a word that appears from $e(k,m)$ by replacing the subword $x_nx_{n+1},$ if any, with $x_{n-1}x_n.$ Respectively, $e^{\prime }(m,k)$ is the word opposite to $e^{\prime }(k,m).$ We see that $e(k,m)$ coincides with $v(k,m)$ if $m<n$ or $k>n.$ If $k<n-1<m,$ then $e(k,m)$ appears from $v(k,m)$ by deleting the letter $x_{n-1}$ (but not of $x_{n+1}$!). Similarly, if $k=n,$ then $e(n,m)$ appears from $v(n,m)$ by deleting the letter $x_{n+1},$ whereas if $k=n-1,$ then we have $e(n-1,m)=v(n,m).$ We have to stress that according to this definition $e(n-1,n)=e(n,n)=e(n,n+1)=x_n.$ \label{Dsl} \end{definition} \begin{lemma} If $1\leq k\leq m<2n,$ then \begin{equation} p(e(k,m),e(k,m))=\sigma_k^m =\left\{ \begin{matrix} q^2,\hfill &\hbox{if } m=\phi (k);\hfill \cr q,\hfill &\hbox{otherwise}.\hfill \end{matrix} \right. \label{Dmu21} \end{equation} \label{Dsig} \end{lemma} \begin{proof} If the word $e(k,m)$ does not contain a subword $x_nx_{n+1},$ then it belongs to either $U_n$ or $U_{n-1}$ that are isomorphic to $U_{q}^+(\mathfrak{sl}_n).$ Hence we have $p(e(k,m),e(k,m))$ $=q.$ Let $k \leq n-1 <m.$ In this case $e(k,m+1)=e(k,m)x_{m+1}$ which allows one to use induction on $m-n+1.$ If $m=n,$ then $e(k,n)$ does not contain a sub-word $x_nx_{n+1}.$ Because $p(\hbox{-},\hbox{-})$ is a bimultiplicative map, we may decompose \begin{equation} p(e(k,m+1),e(k,m+1))=\sigma _k^m\cdot q\cdot p(e(k,m),x_{m+1})\cdot p(x_{m+1}, e(k,m)). \label{Dsi69} \end{equation} Using relations \ref{Db1rel} and \ref{Db1rell} we shall prove \begin{equation} p(e(k,m),x_{m+1})\cdot p(x_{m+1}, e(k,m))=\left\{ \begin{matrix} 1,\hfill &\hbox{if } k=\phi (m)-1;\hfill \cr q^{-2},\hfill &\hbox{if } k=\phi (m);\hfill \cr q^{-1},\hfill &\hbox{otherwise.}\hfill \end{matrix} \right. \label{Dsi45} \end{equation} The left hand side of the above equality is $\prod_{k\leq t\leq m,\, t\neq n-1} p_{t\, m+1}p_{m+1\, t}.$ If $m>n+1,$ then by \ref{Db1rel} and \ref{Db1rell} the factor $p_{t\, m+1}p_{m+1\, t}$ differs from 1 only if $t\in \{ \phi (m)-2, \phi (m)-1, \phi (m), m \}$ and related values are respectively $q^{-1}, q^2, q^{-1},q^{-1}$ whereas the product of all those values is precisely $q^{-1}.$ Hence, if $k<\phi (m)-1,$ then the whole product is $q^{-1};$ if $k=\phi (m)-1,$ then this is $1;$ if $k=\phi (m),$ then this is $q^{-2};$ if $k>\phi (m),$ then this is again $q^{-1}.$ If $m=n+1,$ then nontrivial factors are related to $t\in \{ n-3, n-2, n, n+1\}$ with values $q^{-1}, q^2, q^{-1},q^{-1},$ respectively. Hence, we arrive to the same conclusion with $k<n-2=\phi (m)-1;$ $k=n-2=\phi (m)-1;$ and $k=n-1=\phi (m).$ Finally, if $m=n,$ then there is just one nontrivial factor which relates to $t=n-2$ with value $q^{-1},$ so that if $k\leq n-2=\phi (m)-2,$ then the total product is $q^{-1};$ if $k=n-1=\psi (m)-1,$ then this is $1.$ This completes the proof of (\ref{Dsi45}). 
To complete the inductive step we use (\ref{Dsi45}) and inductive hypothesis: if $k$ $=\phi (m)-1,$ then $\sigma _k^{m+1}$ $=q\cdot q\cdot 1$ $=q^2;$ if $k$ $=\phi (m),$ then $\sigma _k^{m+1}$ $=q^2\cdot q\cdot q^{-2}$ $=q;$ otherwise $\sigma _k^{m+1}$ $=q\cdot q\cdot q^{-1}$ $=q.$ \end{proof} \begin{lemma} If the word $e(k,m)$ contains the subword $x_nx_{n+1};$ that is $k<n<m,$ then for each $i,$ $k\leq i<m$ we have \begin{equation} p(e(k,i),e(i+1,m))\cdot p(e(i+1,m),e(k,i))=\sigma _k^m(\sigma _k^i \sigma _{i+1}^m)^{-1}=\mu _k^{m,i}. \label{Dmu23} \end{equation} \label{Dmu} \end{lemma} \begin{proof} If $k<n<m,$ then for $i\neq n-1$ there is a decomposition $e(k,m)=e(k,i)e(i+1,m)$ which implies (\ref{Dmu23}) because the form $p(\hbox{-},\hbox{-} )$ is bimultiplicative. For $i=n-1$ there is another equality $e^{\prime }(k,m)=e(k,i)e(i+1,m).$ Certainly $p(e^{\prime }(k,m), e^{\prime }(k,m))$ $=p(e(k,m), e(k,m))$ $=\sigma _k^m.$ Hence (\ref{Dmu23}) is still valid. \end{proof} \smallskip \ We define the bracketing of $e(k,m),$ $k\leq m<2n$ as follows. \begin{equation} e[k,m]=\left\{ \begin{matrix} [[[\ldots [x_k,x_{k+1}], \ldots ],x_{m-1}], x_m],\hfill &\hbox{if } m<\phi (k);\hfill \cr [x_k,[x_{k+1},[\ldots ,[x_{m-1},x_m]\ldots ]]],\hfill &\hbox{if } m>\phi (k);\hfill \cr [\! [e[k,m-1],x_m]\! ],\hfill &\hbox{if } m=\phi (k),\hfill \end{matrix}\right. \label{Dww} \end{equation} where as above $[\! [u,v]\! ]=uv-q^{-1}p(u,v)vu.$ Conditional identity (\ref{ind}) demonstrates that the value of $e[k,m]$ in $U_q^+(\mathfrak{so}_{2n})$ is independent of the precise arrangement of brackets, provided that $m\leq n$ or $k\geq n.$ \begin{lemma} If $k<n<m<\phi (k),$ then the value in $U_q^+(\mathfrak{so}_{2n})$ of the bracketed word $[y_kx_{n+1}x_{n+2}\cdots x_m],$ where $y_k=e[k,n],$ is independent of the precise arrangement of brackets. \label{Dins} \end{lemma} \begin{proof} To apply (\ref{ind}), it suffices to check $[y_k,x_t]=0,$ where $n+1<t\leq m$ or, equivalently, $\phi (m)\leq t<n-1.$ We have $$ [y_k,x_t]=\hbox{\Large[}[e[k,t-2],e[t-1,n]],x_t\hbox{\Large]} =\hbox{\Large[}e[k,t-2], [e[t-1,n],x_t]\hbox{\Large]}. $$ The polynomial $[e[t-1,n],x_t]$ is independent of $x_{n-1},$ so that it belongs to the Hopf subalgebra $U_n=U_{q}^+(\mathfrak{sl}_{n}).$ By \cite[Theorem $A_n$]{Kh4}, the element $[e[t-1,n],x_t]$ equals zero in $U_{q}^+(\mathfrak{sl}_{n})$ because the word $e(t-1,n)x_t$ is standard, and the standard bracketing is $[e[t-1,n],x_t].$ \end{proof} \begin{lemma} If $k<n,$ $ \phi (k)<m,$ then the value in $U_q^+(\mathfrak{so}_{2n})$ of the bracketed word $[x_kx_{k+1}\cdots x_{n-2}x_ny_m],$ where $y_m=e[n+1,m],$ is independent of the precise arrangement of brackets. 
\label{Dins1} \end{lemma} \begin{proof} To apply (\ref{ind}), we need the equalities $[x_t,y_m]=0,$ $k\leq t<n-1.$ The polynomial $[x_t,y_m]$ belongs to the subalgebra $U_{n-1}.$ Moreover, $[x_t,y_m]$ is proportional to $[y_m, x_t]$ due to antisymmetry identity (\ref{cha}) because $p(x_t, y_m)p(y_m,x_t)$ $=p_{t\, t+1}p_{tt}p_{t\, t-1}\cdot p_{t+1\, t}p_{tt}p_{t-1\, t}$ $=1.$ The equality $[y_m, x_t]=0$ turns to the proved above equality $[e[k,n],x_t]=0$ if one renames the variables $x_{n+1}\leftarrow x_k,$ $x_{n+2}\leftarrow x_{k+1}, \ldots .$ \end{proof} \section{PBW generators of $U_q^+(\mathfrak{so}_{2n})$} \begin{proposition} If $q\neq -1,$ then values of the elements $e[k,m],$ $k\leq m<\phi (k)$ form a set of PBW generators with infinite heights for the algebra $U_q^+(\mathfrak{so}_{2n})$ over {\bf k}$[G].$ \label{DstrB} \end{proposition} \begin{proof} All words $e(k,m),$ $k\leq m<\phi (k)$ are standard Lyndon-Shirshov words, and by \cite[Theorem $D_n,$ p. 225]{Kh4} under the standard bracketing, say $[e(k,m)],$ they form a set of PBW generators with infinite heights. By induction on $m-k$ we prove that the values in $U_q^+(\mathfrak{so}_{2n})$ of $[e(k,m)]$ equal the values of $e[k,m]$ with bracketing given in (\ref{Dww}). If $m\leq n,$ then by Lemma \ref{indle} we have nothing to prove. If $k<n<m,$ then according to \cite[Lemma 7.25]{Kh4}, the brackets in $[e(k,m)]$ are set by the following recurrence formulae (we note that $[e(k, m)]=[e_{k\, \phi (m)}]$ in the notations of \cite{Kh4}): \begin{equation} [e(k,m)]=\left\{ \begin{matrix} [x_k[e(k+1, m)]], \hfill & \hbox{if } m<\phi (k)-1; \hfill \cr [[e(k, m-1)]x_m], \hfill & \hbox{if } m=\phi (k)-1.\hfill \end{matrix} \right. \label{Dwsk} \end{equation} In the latter case the induction applies directly. In the former case using induction and Lemma \ref{Dins} we have $[e(k+1, m)]=e[k+1, m]=[e[k+1,n], e[n+1,m]].$ At the same time $[x_k,x_t]=0,$ $n< t\leq m$ because $x_t=x_{\phi (t)}$ and $\phi (t)\geq \phi (m)>k+1$ $\, \& \, (k,\phi (t))\neq (n-2,n).$ This implies $[x_k, e[n+1,m]]=0.$ Applying the conditional identity (\ref{jak3}), we get $$ [e(k, m)]=[x_k[e[k+1,n], e[n+1,m]]]={\big [}[x_ke[k+1,n]], e[n+1,m]{\big ]}=e[k,m]. $$ \end{proof} \section{Shuffle representation for $U_q^+(\mathfrak{so}_{2n})$} In this section, we are going to find the shuffle representation of elements $e[k,m],$ $1\leq k\leq m<2n.$ If $e(k,m)$ has not $x_nx_{n+1}$ as a subword, then $e[k,m]$ belongs to a Hopf subalgebra of type $A_n$: this is either $U_{n-1}$ $=U_q^+(\mathfrak{sl}_n)$ or $U_n$ $=U_q^+(\mathfrak{sl}_n).$ At the same time in the considered above case $C_n,$ the elements $x_1,x_2, \ldots , x_{n-1}$ generate precisely a Hopf subalgebra $U_q(\mathfrak{sl}_n).$ Hence we may apply Proposition \ref{shu}: \begin{equation} e[k,m]=\alpha _k^m\cdot (e(m,k)), \label{Dng0} \end{equation} where \begin{equation} \alpha _k^m=\left\{ \begin{matrix} (q-1)^{m-k}\cdot \prod\limits _{k\leq i<j\leq m}p_{ij}, \hfill & \hbox{ if } m<n \hbox{ or } k>n; \hfill \cr \ & \ \cr (q-1)^{m-n-1}\cdot \prod\limits _{n\leq i<j\leq m,\, i,j\neq n+1}p_{ij},\hfill & \hbox{ if } k=n;\hfill \cr \ & \ \cr (q-1)^{n-k-1}\cdot \prod\limits _{k\leq i<j\leq m,\, i,j\neq n-1}p_{ij},\hfill & \hbox{ if } m=n.\hfill \end{matrix} \right. 
\label{Dng3} \end{equation} \begin{proposition} Let $1\leq k<n<m<2n.$ In the shuffle representation, we have \begin{equation} e[k,m]=\alpha _k^m\cdot \{ (e(m,k))+p_{n-1,n}(e^{\prime }(m,k))\} , \label{Dshur} \end{equation} where \begin{equation} \alpha _k^m=\epsilon _k^m (q-1)^{m-k-1}\cdot \prod _{k\leq i<j\leq m,\, i,j\neq n-1}p_{ij} \label{Dshur1} \end{equation} with \begin{equation} \epsilon _k^m=\left\{ \begin{matrix} q^{-1},\hfill & \hbox{ if } m=\phi (k);\hfill \cr 1, \hfill & \hbox{otherwise.}\hfill \end{matrix} \right. \label{Dshur2} \end{equation} \label{Dshu} \end{proposition} \begin{proof} a). Consider first the case $m<\phi (k).$ We use induction on $m-n.$ Let $m-n=1.$ Condition $n+1=m<\phi (k)$ implies $k<n-1.$ Hence by Lemma \ref{Dins} we have $e[k,n+1]=[e[k,n],x_{n+1}],$ whereas (\ref{Dng0}) implies $e[k,n]=\alpha _k^n(e(n,k)).$ Using (\ref{spro}), we may write $$ e[k,n+1]=\alpha _k^n\{ (e(n,k))(x_{n+1})-p(e(n,k),x_{n+1})\cdot (x_{n+1})(e(n,k))\} $$ \begin{equation} =\alpha _k^{n}\sum _{uv=e(n,k)}\{ p(x_{n+1},v)^{-1}-p(v,x_{n+1})\} (ux_{n+1}v), \label{Dsum1} \end{equation} where $p(v,x_{n+1})=p(e(n,k),x_{n+1})p(u,x_{n+1})^{-1}$ because $e(n,k)=uv.$ We have $$ p(x_{n+1},v)^{-1}-p(v,x_{n+1})=p(v,x_{n+1})\cdot \{ p(x_{n+1},v)^{-1}p(v,x_{n+1})^{-1}-1 \}. $$ At the same time equality $x_{n+1}=x_{n-1}$ and relations (\ref{Db1rel}), (\ref{Db1rell}) imply $$ p(x_{n+1},v)p(v,x_{n+1})=\left\{ \begin{matrix} q^{-1},\hfill & \hbox{ if } v=e(n,k) \hbox{ or } v=e(n-2,k);\hfill \cr 1, \hfill & \hbox{otherwise.}\hfill \end{matrix} \right. $$ Hence in the decomposition (\ref{Dsum1}) two terms remain $$ \alpha _k^np(e(n,k),x_{n+1})(q-1)(x_{n+1}x_nx_{n-2}\cdots x_k)=\alpha _k^{n+1} (e(n+1,k)) $$ and $$ \alpha _k^np(e(n-2,k),x_{n+1})(q-1)(x_nx_{n-1}x_{n-2}\cdots x_k)=\alpha _k^{n+1} p_{n-1,n}(e^{\prime }(n+1,k)), $$ for $p_{n,n+1}^{-1}=p_{n,n-1}^{-1}=p_{n-1,n}$ due to (\ref{Db1rell}). This completes the first step of induction. Suppose that equalities (\ref{Dshur}) and (\ref{Dshur1}) are valid and still $m+1<\phi (k).$ Then Lemma \ref{Dins} implies $e[k,m+1]=[e[k,m],x_{m+1}].$ By (\ref{spro}) we have $$ [(e(m,k)),(x_{m+1})]=\sum _{uv=e(m,k)}p(v,x_{m+1})\cdot \{ p(x_{m+1},v)^{-1}p(v,x_{m+1})^{-1}-1 \}(ux_{m+1}v). $$ Relations (\ref{Db1rel}), (\ref{Db1rell}) imply that $$ p(x_{m+1},v)p(v,x_{m+1})=\left\{ \begin{matrix} q, \hfill & \hbox{ if } v=e(\phi (m)-1,k);\hfill \cr q^{-1}, \hfill & \hbox{ if } v=e(m,k) \hbox{ or } v=e(\phi (m)-2,k) ;\hfill \cr 1, \hfill & \hbox{ otherwise.}\hfill \end{matrix}\right. $$ Thus in the decomposition just three terms remain. Two of them, corresponding to $v=e(\phi (m)-1,k)$ and $v=e(\phi (m)-2,k),$ are canceled: $$ p(x_{\phi (m)-1},x_{m+1})(q^{-1}-1)+(q-1)=q(q^{-1}-1)+(q-1)=0. $$ Thus $$ [(e(m,k)),(x_{m+1})]=\{ (q-1)\cdot \prod _{k\leq i\leq m,\ i\neq n-1}p_{i\, m+1} \} \, (e(m+1,k)). $$ In perfect analogy, we have $$ [(e^{\prime }(m,k)),(x_{m+1})]=\{ (q-1)\cdot \prod _{k\leq i\leq m,\ i\neq n-1}p_{i\, m+1} \} \, (e^{\prime }(m+1,k)). $$ The inductive supposition yields $e[m,k]=\alpha _k^m\cdot \{(e(m,k))+p_{n-1,n}(e^{\prime }(k,m))\} .$ Hence to complete the induction, it suffices to note that $$\alpha _k^{m+1}=\alpha _k^m\cdot (q-1)\cdot \prod _{k\leq i\leq m,\ i\neq n-1}p_{i\, m+1}.$$ b). 
Similarly consider the case $m>\phi (k)$ using downward induction on $n-k.$ Let $k=n-1.$ Condition $m>\phi (k)$ implies $m\geq n+2.$ Hence by Lemma \ref{Dins1} we have $e[n-1,m]=[x_n,e[n+1,m]],$ whereas (\ref{Dng0}) and (\ref{Dng3}) imply $e[n+1,m]$ $=\alpha _{n+1}^m(e(m,n+1)).$ Using (\ref{spro}), we may write $$ e[n-1,m]=\alpha _{n+1}^m \{ (x_n)(e(m,n+1))-p(x_n,e(m,n+1))\cdot (e(m,n+1))(x_n)\} $$ \begin{equation} =\alpha _{n+1}^{m}\sum _{uv=e(m,n+1)}\{ p(u,x_n)^{-1}-p(x_n,u)\} (ux_nv), \label{Dsum2} \end{equation} where $p(x_n,u)=p(x_n, e(m,n+1))p(x_{n},v)^{-1}$ because $e(m,n+1)=uv.$ We have $$ p(u,x_n)^{-1}-p(x_n,u)=p(x_n,u)\cdot \{ p(u,x_n)^{-1}p(x_n,u)^{-1}-1 \}. $$ Equality $x_{n+1}=x_{n-1}$ and relations (\ref{Db1rel}), (\ref{Db1rell}) imply that $p(u,x_n)p(x_n,u)=1$ unless $u=e(m,n+1)$ or $u= e(m,n+2).$ In these two exceptional cases, the product equals $p_{n+2\, n}p_{n\, n+2}=q^{-1}$ because $p_{n\, n+1}p_{n+1\, n}=1.$ Hence in the decomposition (\ref{Dsum2}) two terms remain $$ \alpha _{n+1}^{m}p(x_n,e(m,n+1))(q-1)(x_m\cdots x_{n+1}x_n)=\alpha _{n-1}^{m}(e(m,n-1)) $$ and $$ \alpha _{n+1}^{m}p(x_n,e(m,n+2))(q-1)(x_m\cdots x_{n+2}x_nx_{n+1})=\alpha _{n-1}^{m}\cdot p_{n\, n+1}^{-1}(e^{\prime }(m,n-1)). $$ This completes the first step of induction because $p_{n\, n+1}^{-1}=p_{n-1\, n}.$ Suppose that equalities (\ref{Dshur}) and (\ref{Dshur}) are valid and still $m>\phi (k-1)=\phi (k)+1.$ Lemma \ref{Dins1} implies $e[k-1,m]=[x_{k-1},e[k,m]].$ We have $$ [(x_{k-1}),(e(m,k))]=\sum _{uv=e(m,k)}p(x_{k-1},u)\cdot \{ p(u,x_{k-1})^{-1}p(x_{k-1},u)^{-1}-1 \}(ux_{k-1}v). $$ Relations (\ref{Db1rel}), (\ref{Db1rell}) imply that $$ p(u,x_{k-1})p(x_{k-1},u)=\left\{ \begin{matrix} q \hfill & \hbox{ if } u=e(m, \phi (k)+1);\hfill \cr q^{-1} \hfill & \hbox{ if } u=e(m,k) \hbox{ or } u=e(\phi (k)+2,k) ;\hfill \cr 1 \hfill & \hbox{ otherwise.}\hfill \end{matrix}\right. $$ Hence in the decomposition (\ref{Dsum1}) just three terms remain. Two of them, corresponding to $u=e(m, \phi (k)+1)$ and $u=e(m,\phi (k)),$ are canceled: $$ p(x_{k-1},x_{\phi (k)+1})(q^{-1}-1)+(q-1)=q(q^{-1}-1)+(q-1)=0. $$ Thus $$ [(x_{k-1}),(e(m,k)]=\{ (q-1)\cdot \prod _{k\leq j\leq m,\ j\neq n-1}p_{k-1\,j} \} \, (v(m,k-1)). $$ In perfect analogy, we have $$ [(x_{k-1}),(e^{\prime }(m,k)]=\{ (q-1)\cdot \prod _{k\leq j\leq m,\ j\neq n-1}p_{k-1\,j} \} \, (v^{\prime }(m,k-1)). $$ The inductive supposition states $e[m,k]=\alpha _k^m\cdot \{(e(m,k))+p_{n-1,n}(e^{\prime }(k,m))\} .$ Hence it remains to note that $$\alpha _{k-1}^{m}=\alpha _k^m\cdot (q-1)\cdot \prod _{k\leq j\leq m,\ j\neq n-1}p_{k-1\, j}.$$ c). Let $m=\phi (k)\neq n.$ In this case, $x_m=x_k,$ $\epsilon_k^m=q^{-1}.$ If $k=n-1,$ $m=n+1,$ then $e(n-1,n)=x_n$ and by Definition \ref{Dww} we have $$ e[n-1,n+1]=x_nx_{n+1}-q^{-1}p_{n\, n+1}x_{n+1}x_n=(1-q^{-1})x_nx_{n+1} $$ since due to (\ref{Drelb}) we have $x_{n+1}x_n=p_{n-1\, n}x_nx_{n+1}$ with $x_{n+1}=x_{n-1}$ and $p_{n\, n+1}p_{n-1\, n}$ $=1.$ In the shuffle form, we get $$ (x_n)(x_{n+1})=(x_nx_{n+1})+p_{n+1\, n}^{-1}(x_{n+1}x_n)=p_{n\, n+1}\cdot \{ (x_{n+1}x_n)+p_{n-1\, n}(x_nx_{n+1})\}. $$ It remains to note that $e(n-1, n+1)$ $=x_nx_{n+1},$ $e(n+1,n-1)$ $=x_{n+1}x_n,$ $e^{\prime }(n-1, n+1)$ $=x_{n-1}x_n,$ $e^{\prime }(n+1,n-1)$ $=x_nx_{n-1}$ $=x_nx_{n+1}.$ Let $k<n-1.$ By definition (\ref{Dww}) we have \begin{equation} e[k,m]=e[k,m-1]\cdot x_m-q^{-1}p(e(k,m-1),x_m)x_k\cdot e[k,m-1]. 
\label{Dkk} \end{equation} Case a), which is already established, allows us to find the shuffle representation $$ e[k,m-1]=\alpha _k^{m-1} \cdot \{ (e(m-1,k))+p_{n-1\, n}(e^{\prime }(m-1,k))\} . $$ We have $$ [\![(e(m-1,k)),(x_m)]\!]=\sum _{uv=e(m-1,k)}p(v,x_{m})\cdot \{ p(x_{m},v)^{-1}p(v,x_{m})^{-1}-q^{-1} \}(ux_{m}v). $$ Relations (\ref{Db1rel}), (\ref{Db1rell}) imply that $$ p(x_{m},v)p(v,x_{m})=\left\{ \begin{matrix} 1, \hfill & \hbox{ if } v=\emptyset \hbox{ or }v=e(m-1,k);\hfill \cr q^2, \hfill & \hbox{ if } v=x_k;\hfill \cr q, \hfill & \hbox{ otherwise.}\hfill \end{matrix}\right. $$ Therefore in the decomposition just three terms remain. Two of them, corresponding to $v=\emptyset $ and $v=x_k,$ are canceled: $$ p(x_k,x_{m})(q^{-2}-q^{-1})+(1-q^{-1})=q(q^{-2}-q^{-1})+(1-q^{-1})=0. $$ Thus $$ [\![(e(m-1,k)),(x_{m})]\!] =\{ (1-q^{-1})\cdot \prod _{k\leq i<m,\ i\neq n-1}p_{i\, m} \} \, (e(m,k)). $$ In perfect analogy, we have $$ [\![(e^{\prime }(m-1,k)),(x_{m})]\!] =\{ (1-q^{-1})\cdot \prod _{k\leq i<m,\ i\neq n-1}p_{i\, m} \} \, (e^{\prime }(m,k)). $$ It suffices to note that $1-q^{-1}=\epsilon _k^m (q-1),$ and by definition $$\alpha _k^{m}=\alpha _k^{m-1}\cdot \epsilon _k^m (q-1)\cdot \prod _{k\leq i<m,\ i\neq n-1}p_{i\, m}.$$ The proposition is completely proved. \end{proof} \section{Coproduct formula for $U_q(\mathfrak{so}_{2n})$} \begin{theorem} In $U_q^+(\mathfrak{so}_{2n})$ the coproduct on the elements $e[k,m],$ $k\leq m<2n$ has the following explicit form \begin{equation} \Delta (e[k,m])=e[k,m]\otimes 1+g_{km}\otimes e[k,m] \label{Dco} \end{equation} $$ +\sum _{i=k}^{m-1}\tau _i(1-q^{-1})g_{ki}\, e[i+1,m]\otimes e[k,i], $$ where $\tau _i=1$ except that $\tau _n=0$ if $k=n,$ $\tau _{n-1}=0$ if $m=n,$ and $\tau _{n-1}=p_{n\, n-1}$ otherwise. Here $g_{ki}={\rm gr}(e(k,i))$ is a group-like element that arises from the word $e(k,i)$ under the substitutions $x_{\lambda }\leftarrow g_{\lambda }.$ \label{Dcos} \end{theorem} \begin{proof} If the word $e(k,m)$ does not contain the subword $x_{n}x_{n+1},$ then $e[k,m]$ belongs to either $U_{n-1}$ or $U_n.$ Both of these Hopf algebras are isomorphic to $U_q^+(\mathfrak{sl}_{n}).$ Hence if $m\leq n$ or $k\geq n,$ then we have nothing to prove. Suppose that $k<n<m.$ In this case by Proposition \ref{Dshu} we have the shuffle representation \begin{equation} e[k,m]=\alpha _k^m\cdot \{ (e(m,k))+p_{n-1\, n}(e^{\prime }(m,k))\} , \label{Dco2} \end{equation} where $(e(m,k))$ is a comonomial shuffle $$ (e(m,k))=\left\{ \begin{matrix}(x_mx_{m-1}\cdots x_{n+2}x_{n+1}x_nx_{n-2}\cdots x_k), \hfill & \hbox{ if } k<n-1; \hfill \cr (x_mx_{m-1}\cdots x_{n+2}x_{n+1}x_n), \hfill & \hbox{ if } k=n-1, \hfill \end{matrix} \right. $$ whereas $(e^{\prime }(m,k))$ is a related one: $$ (e^{\prime }(m,k))=\left\{ \begin{matrix}(x_mx_{m-1}\cdots x_{n+2}x_nx_{n-1}x_{n-2}\cdots x_k), \hfill & \hbox{ if } k<n-1; \hfill \cr (x_mx_{m-1}\cdots x_{n+2}x_nx_{n-1}), \hfill & \hbox{ if } k=n-1. \hfill \end{matrix} \right. 
$$ Using (\ref{bcopro}) it is easy to find the braided coproduct of the comonomial shuffles: $$ \Delta ^b_0((e(m,k)))=\sum _{i=k}^{n-2}(e(m,i+1))\underline{\otimes }(e(i,k)) +\sum _{i=n}^{m-1}(e(m,i+1))\underline{\otimes }(e(i,k)), $$ $$ \Delta ^b_0((e^{\prime }(m,k)))=\sum _{i=k}^{n-1}(e^{\prime }(m,i+1))\underline{\otimes }(e(i,k)) +\sum _{i=n+1}^{m-1}(e(m,i+1))\underline{\otimes }(e^{\prime }(i,k)), $$ where for short we define $\Delta _0^b(U)=\Delta^b(U)-U\underline{\otimes }1-1\underline{\otimes }U.$ Taking into account (\ref{Dco2}), we have $$ (\alpha _k^m)^{-1}\Delta ^b_0(e[k,m])=\left( \sum _{i=k}^{n-2} (\alpha _{i+1}^m)^{-1}e[i+1,m]\underline{\otimes }(e(i,k))\right) +p_{n-1\, n}(e(m,n))\underline{\otimes }(e(n-1,k)) $$ $$ +(e(m,n+1)) \underline{\otimes } (e(n,k))+\sum _{i=n+1}^{m-1}(e(m,i+1))\underline{\otimes } (\alpha _k^i)^{-1}e[k,i]. $$ Relation (\ref{Dng0}) applied to $e[k,i],$ $i\leq n$ and $e[i+1,m],$ $i\geq n$ allows one to rewrite the right hand side of the above equality in terms of $e[i,j]:$ $$ =\left( \sum _{i=k}^{n-2}(\alpha _k^i)^{-1}(\alpha _{i+1}^m)^{-1}e[i+1,m]\underline{\otimes }e[k,i]\right) +p_{n-1\, n}(\alpha _n^m)^{-1}(\alpha _k^{n-1})^{-1}e[n,m]\underline{\otimes }e[k,n-1] $$ $$ + (\alpha _{n+1}^m)^{-1} (\alpha _k^n)^{-1} e[n+1,m] \underline{\otimes } e[k,n]+ \sum _{i=n+1}^{m-1}(\alpha _{i+1}^m)^{-1}(\alpha _k^i)^{-1}e[i+1,m]\underline{\otimes }e[k,i]. $$ Thus, we have \begin{equation} \Delta ^b_0(e[k,m]) =\sum _{i=k}^{m-1}\gamma _i\, e[i+1,m]\underline{\otimes }e[k,i], \label{Dco5} \end{equation} where \begin{equation} \gamma _i=p_{n-1\,n}^{\delta _{n-1}^i} \cdot \alpha _k^m (\alpha _k^i \alpha _{i+1}^m)^{-1}, \label{Dco6} \end{equation} whereas $\delta _{n-1}^i$ is the Kronecker delta. Our next step is to see that for all $i,$ $k\leq i<m$ we have \begin{equation} \gamma _i=p_{n\, n-1}^{\delta _{n-1}^i} \cdot (q-1)\epsilon _k^m (\epsilon _k^i \epsilon _{i+1}^m)^{-1}p(e(k,i),e(i+1,m)). \label{gam} \end{equation} All factors except the $\epsilon $'s in (\ref{Dco6}) have the form $(q-1)^s\prod _{A}p_{ab},$ where $A$ is a suitable set of pairs $(a,b)$ and $s$ is an integer exponent. Due to bimultiplicativity of the form $p(\hbox{-},\hbox{-}),$ the same is true for the right hand side of (\ref{gam}). Hence it suffices to demonstrate that the sum of exponents of the factors in (\ref{Dco6}) equals $1,$ and the resulting product domains in (\ref{Dco6}) and (\ref{gam}) are the same, or at least they define the same product. If $i<n-1,$ then using (\ref{Dco6}), (\ref{Dng0}), and Proposition \ref{Dshu}, we have the required equality for the exponents, $$ (m-k-1)-(i-k)-(m-i-2)=1, $$ and for the product domains: $$ \{k \leq a<b\leq m,\, a,b\neq n-1\} \setminus (\{ k\leq a<b\leq i\} \cup \{ i+1\leq a<b\leq m,\, a,b \neq n-1\} ) $$ $$ =\{k\leq a\leq i <b\leq m,\, a,b \neq n-1\} . $$ If $i\geq n,$ then similarly we have the required equality for the exponents, $$ (m-k-1)-(i-k-1)-(m-i-1)=1, $$ and for the product domains: $$ \{k \leq a<b\leq m,\, a,b\neq n-1\} \setminus (\{ k\leq a<b\leq i, \, a,b \neq n-1\} \cup \{ i+1\leq a<b\leq m\} ) $$ $$ =\{k\leq a\leq i <b\leq m,\, a,b \neq n-1\} . 
$$ In the remaining case, $i=n-1,$ we have $e(k,i)=x_k\cdots x_{n-2}x_{n-1},$ $e(i+1,m)=x_nx_{n+2}\cdots x_m.$ Due to (\ref{Dng3}), the exponent is $$ (m-k-1)-(n-1-k)-(m-n-1)=1, $$ whereas the product domain of $\alpha _k^m (\alpha _k^{n-1}\alpha _n^m)^{-1}$ reduces to $$ \{k \leq a<b\leq m,\, a,b\neq n-1\} \setminus (\{ k\leq a<b\leq n-1\} \cup \{ n\leq a<b\leq m,\, a,b\neq n+1 \} ) $$ \begin{equation} =\{k\leq a<n-1< b\leq m\} \cup \{ n+1=a<b\leq m\} \cup \{ (n,n+1)\}. \label{Dco9} \end{equation} However, in this case the product domain for $\alpha _k^{n-1}$ is not a subset of the product domain for $\alpha _k^{m}.$ Therefore additionally to the product defined by (\ref{Dco9}), there appears a factor $\prod\limits_{k\leq a<b=n-1}p_{ab}^{-1}$ and a factor $p_{n-1\, n}$ that comes from (\ref{Dco6}) due to $\delta _{n-1}^i=1.$ The latter factor cancels with the factor defined by the subset $ \{ (n,n+1)\}$ since $p_{n-1\, n}p_{n\, n+1}=1,$ whereas the product domain of the former factor must be added to the product domain of $p(e(k,i),e(i+1,m))$: \begin{equation} \{ k\leq a\leq n-1<b\leq m,\, b\neq n+1\} \cup \{k\leq a<b=n-1\}. \label{Dco11} \end{equation} It remains to compare the products defined by (\ref{Dco9}) without the last pair and that defined by (\ref{Dco11}). The set $\{ k\leq a<n-1<b\leq m,\, b\neq n+1 \}$ is a subset of the first sets in (\ref{Dco9}) and (\ref{Dco11}). After cancelling the pairs from that set, (\ref{Dco9}) and (\ref{Dco11}) transform to, respectively, \begin{equation} \{k\leq a<n-1< b=n+1\} \cup \{ n+1=a<b\leq m\} \label{Dco91} \end{equation} and \begin{equation} \{ a=n-1<b\leq m,\, b\neq n+1\} \cup \{k\leq a<b=n-1\}. \label{Dco12} \end{equation} The first set of (\ref{Dco91}) and the second set of (\ref{Dco12}) define the same product because $x_{n+1}=x_{n-1}$ and $p_{a\, n+1}=p_{a\, n-1}.$ By the same reason $p_{n-1\, b}=p_{n+1\, b},$ hence the first set of (\ref{Dco12}) defines the same product as the second set of (\ref{Dco91}) up to one additional factor, $p_{n-1\, n}$ that corresponds to the pair $(n-1,n).$ This factor is canceled by the first factor $p_{n,n-1}$ that appears in (\ref{gam}) due to $\delta _{n-1}^i=1.$ The equality (\ref{gam}) is completely proved. \smallskip Now we are ready to consider the (unbraided) coproduct. Formula (\ref{copro}) demonstrates that the tensors $u^{(1)}\otimes u^{(2)}$ of the coproduct and tensors $u^{(1)}_b\underline{\otimes } u^{(2)}_b$ of the braided coproduct are related by $u^{(1)}_b=u^{(1)}{\rm gr}(u^{(2)})^{-1},\ $ $u^{(2)}_b=u^{(2)}.$ The equality (\ref{Dco5}) provides the values of $u^{(1)}_b$ and $u^{(2)}_b.$ Hence we may find $u^{(1)}=\gamma _ie[i+1,m]g_{ki}$ and $u^{(2)}=e[k,i],$ where $g_{ki}={\rm gr}(e[k,i]).$ The commutation rules imply $$e[i+1,m]g_{ki}=p(e(i+1,m), e(k,i))g_{ki}e[i+1,m].$$ Therefore the coproduct has the form (\ref{Dco}), where $$\tau _i(1-q^{-1})=\gamma _i \, p(e(i+1,m), e(k,i)).$$ Applying (\ref{gam}) and Lemma \ref{Dmu} we get $$ \tau _i(1-q^{-1})=p_{n\, n-1}^{\delta _{n-1}^i} \cdot (q-1)\epsilon _k^m (\epsilon _k^i \epsilon _{i+1}^m)^{-1} \cdot \sigma _k^m (\sigma_k^i \sigma_{i+1}^m)^{-1}. $$ Lemma \ref{Dsig} and Eq.~(\ref{Dshur2}) imply that $\epsilon _k^m\sigma _k^m$ equals $q$ for all $k,m$ without exceptions. 
Hence $$\epsilon _k^m (\epsilon _k^i \epsilon _{i+1}^m)^{-1} \cdot \sigma _k^m (\sigma_k^i \sigma_{i+1}^m)^{-1}=q^{-1},$$ and we have \begin{equation} \tau_i=p_{n\, n-1}^{\delta _{n-1}^i} =\left\{ \begin{matrix} p_{n\, n-1},\hfill &\hbox{if } i=n-1;\hfill \cr 1,\hfill &\hbox{otherwise}.\hfill \end{matrix} \right. \label{Dtau} \end{equation} The theorem is completely proved. \end{proof} {\bf Remark 2.} If $q^t=1,$ $t>2,$ then (\ref{Dco}) remains valid due to precisely the same arguments that were given in Remark 1, see page \pageref{t}. {\bf Remark 3.} In fact, the exceptions $\tau _n=0$ if $k=n,$ and $\tau _{n-1}=0$ if $m=n$ can be omitted in the statement of the above theorem. Indeed, the related tensors are, respectively, $e[n+1,m]\otimes e[n,n]$ and $e[n,n]\otimes e[k,n-1],$ whereas by definition $e[n,n]=[\![x_n,x_n]\!] =x_n\cdot x_n-q^{-1}p(x_n,x_n)x_n\cdot x_n=0.$ Thus we may assume $\tau_n =1$ and $\tau _{n-1}=p_{n\, n-1}$ as well.
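{\bf Remark 4.} The scalar cancellations used repeatedly in the proof of Proposition \ref{Dshu}, as well as the exponent counts in the proof of (\ref{gam}), reduce to elementary identities in $q$. The following fragment is an illustrative symbolic check only (it assumes nothing beyond the identities themselves and is not part of the formal argument):
\begin{verbatim}
import sympy as sp

q, m, k, i, n = sp.symbols('q m k i n', nonzero=True)

# cancellations in parts a)-b) and in part c) of the proof
assert sp.simplify(q*(1/q - 1) + (q - 1)) == 0
assert sp.simplify(q*(q**-2 - 1/q) + (1 - 1/q)) == 0

# exponent counts in the proof of (gam): i < n-1, i >= n, i = n-1
assert sp.expand((m-k-1) - (i-k) - (m-i-2)) == 1
assert sp.expand((m-k-1) - (i-k-1) - (m-i-1)) == 1
assert sp.expand((m-k-1) - (n-1-k) - (m-n-1)) == 1
\end{verbatim}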
\section{Introduction} For more than two decades, Bose-Einstein condensates (BECs) have provided a fruitful experimental, computational, and theoretical testbed for investigating nonlinear phenomena. In the mean-field limit, a BEC is governed by the Gross-Pitaevskii (GP) equation \cite{book1}, which is a nonlinear Schr\"odinger (NLS) equation with an external potential. The NLS equation is important in many fields \cite{Sul}, and many ideas from disciplines such as nonlinear optics have proven important for investigations of BECs. Moreover, the ability to control various parameters in the GP equation makes it possible to create a wide range of nonlinear excitations, and phenomena such as bright \cite{B1,B2}, dark \cite{D1,D2,D3}, and gap \cite{G} solitary waves (and their multi-component \cite{MC} and higher-dimensional \cite{Pots1,Pots2} generalizations) have been studied in great detail using a variety of external potentials \cite{Pots1,Pots2}. The GP equation's cubic nonlinearity arises from a BEC's interatomic interactions, which are characterized by the $s$-wave scattering length. The sign and magnitude of such interactions can be controlled using Feshbach resonances \cite{Koehler,feshbachNa,ofr}, and this has led to a wealth of interesting theoretical and experimental scenarios \cite{theor,B1,B2,exp}. In a recent example, Feshbach resonances were used to induce spatial inhomogeneities in the scattering length in Yb BECs \cite{Tak}. Such {\it collisional inhomogeneities}, which amount to placing the BEC in a nonlinear potential in addition to the usual linear potential, can lead to effects that are absent in spatially uniform condensates~\cite{NLPots,Chang,Summary}. This includes adiabatic compression of matter waves \cite{our1}, enhancement of the transmission of matter waves through barriers \cite{our2}, dynamical trapping of solitary waves \cite{our2}, delocalization transitions of matter waves \cite{LocDeloc}, and many other phenomena. Nonlinear potentials have also led to interesting insights in studies of photonic structures in optics \cite{kominis}. In the present paper, we study the situation that arises when spatial inhomogeneities in nonlinear and linear potentials are tailored in such a way that they compensate each other to yield a constant-density solution of the GP equation. We demonstrate how to engineer this scenario in experiments and investigate it for a step-like configuration of the potentials. This situation is particularly interesting because the inhomogeneity is {\it not} mirrored in the BEC's density profile, which makes the step indistinguishable from a homogeneous linear and nonlinear potential in {\em static} density measurements. We show that the step is nevertheless revealed {\em dynamically} in an impurity-dragging experiment~\cite{bpa_imp}, and we observe the emission of dark solitary waves when the dragging speed is above a critical velocity (which is different inside and outside of the step). This spontaneous emergence of solitary waves motivates their study as a dynamical entity in this setting. We use effective-potential theory to examine the existence and potential dynamical robustness of dark and bright quasi-one-dimensional (quasi-1D) solitary waves for various step-potential parameters. We find that dark solitary waves are always dynamically unstable as stationary states inside of the step, although the type of their instability varies depending on the step parameters. 
In contrast, bright solitary waves experience a symmetry-breaking bifurcation as the step width is increased, so we analyze their dynamics using a phase-plane description of their motion through the step. Our effective-potential picture enables not only the unveiling of interesting bifurcation phenomena but also an understanding of the potential dynamical outcomes of the interaction of solitary waves with such steps. In this paper, we highlight the fundamental difference between linear and nonlinear potentials in the dynamics of a quantum degenerate one-dimensional Bose gas. In the static picture, one type of potential can be adjusted to completely compensate the other, so that there is no difference to the simple homogeneous potential landscape. However, the dynamical picture is different, as a flow of the Bose gas across inhomogeneities displays interesting dynamics. In the present investigation, we use step potentials to illustrate this phenomenon. The remainder of this paper is organized as follows. We first present our model and its associated physical setup. We then discuss a proposal for the experimental implementation of the compensating linear and nonlinear potentials that we discussed above. We then discuss the problem of dragging a moving defect through the step and the ensuing spontaneous emergence of solitary waves. We then examine the existence, stability, and dynamics of the solitary waves both theoretically and computationally. Finally, we summarize our findings and propose several directions for future study. \section{Model and Setup} \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth,clip,trim = 0 5 0 6]{drag_no_title.pdf} \caption{[Color online] Numerical computations of defect dragging in the quasi-1D GP equation. (Left) Emission of a dark solitary wave as a defect is dragged through the step. (Right) The same computational experiment but without a step (so there is no solitary-wave emission). The defect speed is \(v=0.6\), and the other parameter values are \(\gamma = -1\) and \(\Delta V = 0.5\).} \label{fig-dd} \end{figure} We start with the three-dimensional (3D) time-dependent GP equation and consider a cigar-shaped condensate by averaging over the transverse directions to obtain a quasi-1D GP equation \cite{book1,Pots1,Pots2}. In performing the averaging, we assume that the BEC is strongly confined in the two transverse directions with a trapping frequency of \(\omega_\perp\)~\footnote{Our considerations can be extended to more accurate quasi-1D models \cite{accurate}, but we employ the quasi-1D GP setting for simplicity.}. The solution of the quasi-1D GP equation is a time-dependent macroscopic wavefunction $\Psi(z,t)$. We use the standing-wave ansatz \(\Psi(z,t) = \phi(z)e^{-i\mu t}\) to obtain the time-independent GP equation \begin{equation}\label{gpe} -\frac{1}{2}\phi_{zz} -\mu \phi + V_{\mathrm{ext}}(z)\phi + g(z)\vert\phi\vert^2\phi=0\,, \end{equation} where \(\phi\) is measured in units of \((2\vert a_0\vert)^{-1/2}\) and \(g(z)\) is a spatially varying nonlinearity associated with the (rescaled) scattering length \(a(z)\) via \(g(z) = a(z)/\vert a_0\vert\). We measure length in units of \(a_\perp \equiv \sqrt{\hbar/(m\omega_\perp)}\) and time in units of $\omega_\perp^{-1}$, where \(m\) is the mass of the atomic species forming the condensate. The constant \(a_0\) is the value of the scattering length in the collisionally homogeneous system. 
Equation (\ref{gpe}) has two conserved quantities: the number of atoms \(N = (a_\perp/[2\vert a_0\vert])\int^{+\infty}_{-\infty}\vert\Psi\vert^2 dz\) and the Hamiltonian \cite{Pots2}. For a square-step linear potential, one can use the Thomas-Fermi approximation ($\phi_{zz} = 0$) for the ground state \cite{Pots2}. Equating the densities inside and outside of the step then gives the constraint \begin{equation} \label{gamma} \gamma = \frac{\Delta V}{\Delta g}=\frac{V_0-\mu}{g_0}\,, \end{equation} where \(V_{0}\) and \(g_{0}\) are the constant background linear and nonlinear potentials, and \(\Delta V\) and \(\Delta g\) are the differences between the step and background values of \(V(z)\) and \(g(z)\). The parameter \(\gamma\) thus measures (and balances) the relative strengths of the steps in the linear and nonlinear potentials. To preserve smoothness, we implement the steps using hyperbolic tangent functions: \begin{align} \!\!\!V(z) \!&=\! V_0 +\Delta V(z) = V_0 \!+\! \frac{\Delta V}{2}\left[\tanh(z_+)-\tanh(z_-)\right], \notag \\ \!\!\!g(z) \!&=\! g_0 + \Delta g(z) = g_0 \!+\! \frac{\Delta g}{2}\left[\tanh(z_+)-\tanh(z_-)\right], \end{align} where \(z_{\pm} = (z\pm z_0)/s\), the step width is $2z_0$, and \(s\) controls the sharpness of the step edges. From equation (\ref{gamma}), it follows that \(\Delta V = \gamma \Delta g\). For the remainder of this article, we take \(V_0=0\) and \(\vert g_0\vert=\vert\mu\vert=1\). This yields \(\gamma = -1\) and corresponds to nonlinear and linear steps of equal and opposite depths/heights. \section{Proposal for Experimental Implementation} Techniques for manipulating cold quantum gases have become both advanced and accurate, and they allow experimentalists to form a variety of potentials with optical and/or magnetic fields, especially near microstructured atom chips \cite{atomchiprev, chipNJP}. It was shown recently that spatially varying nonlinear potentials, which have been of theoretical interest for several years \cite{NLPots,Chang,Summary}, can be used to address a novel scenario that can also be implemented experimentally \cite{Tak}. Straightforward implications of a spatial inhomogeneity of the coupling coefficient $g$ include static density variations as a result of the inhomogeneous mean field. To distinguish this type of effect from more subtle dynamical and beyond-mean-field phenomena, it is desirable to compensate linear and nonlinear contributions of the potential in such a way that the static density profile remains homogeneous (as would be the case if all potentials were homogeneous). In this section, we discuss how such a situation can be achieved experimentally. (In the next section, we will give an example of a purely dynamical phenomenon that arises from it.) A spatially varying magnetic field $B(z)$ results in a proportionally varying linear potential $V(z)=m_F g_F \mu_B B(z)$ for magnetic spin states (where the magnetic quantum number is $m_F$, the Land\'{e} factor is $g_F$, and the Bohr magneton is $\mu_B$) at sufficiently low magnetic fields within the regime of validity of the linear Zeeman effect. For specific atomic species and spin states, there is an additional resonant dependence (a Feshbach resonance \cite{FeshRev}) of $g$ on the magnetic field: \begin{equation} g(B)=g_\mathrm{bg}\left(1-\frac{\Delta}{B-B_0}\right)\,, \end{equation} where $g_\mathrm{bg}$ is the background coupling constant, $B_0$ is the resonance field, and $\Delta$ is the resonance width. 
The condition of compensating linear and nonlinear potentials is fulfilled within the Thomas-Fermi approximation when \begin{equation}\label{req} n \frac{\partial g}{\partial B}=-\frac{\partial V}{\partial B}\,. \end{equation} In theory, this implies for any given density $n$ that there is a field $B_c$ near the resonance $B_0$ that satisfies equation (\ref{req}). Consequently, the density must remain constant for any static profile $B(z)$ as long as $B(z) - B_c$ is sufficiently small (so that $g(B)$ is an approximately linear function of $B$). In practice, however, large nonlinearities lead to fast three-body recombination losses from traps and hence have to be avoided \cite{FeshRev}. An atomic species with appropriate properties is cesium, for which the above conditions are fulfilled at typical densities of $10^{13}-10^{14}$ cm$^{-3}$ for fields near the narrow Feshbach resonances at 19.8 G and 53.5 G \cite{ChinCs}. Optical dipole traps near the surface of atom chips \cite{chipdipole} provide an environment in which magnetic fields can be accurately tuned to and varied about the critical magnetic fields $B_c$ at the above parameter values. One can bring the trap close to independent microstructures on the surface of the chip by coating the surface with a highly reflective layer so that a standing light wave forms a 1D optical lattice whose near-surface wells can be loaded with the atomic sample. Alternatively, one can focus a single laser beam to a position near the surface at a frequency that is slightly below that of the main atomic transition (i.e., one can red-detune it). In this case, integrated optics and microlenses might help to reduce the atom-surface distance $d_\mathrm{surf}$ to the single-micron regime. Once the trap is placed and populated with an atomic sample, currents that pass through appropriately shaped surface-mounted conductor patterns produce the necessary magnetic field profiles that we described above. The field-tailoring resolution and hence the width of a possible step is limited by $d_\mathrm{surf}$. It is feasible to reduce this length to roughly $1\mu$m in current experiments. In particular, one can exploit the lattice approach \cite{chipdipole}, in which the closest wells form at $d_\mathrm{surf}\approx \lambda$, where the wavelength $\lambda$ is in the optical range (i.e., $\lambda\lesssim 1\mu$m). \section{Dragging a Defect Through the Step} Using the above techniques, the effect of a step on the static density profile can be removed by construction. In this case, it is interesting to investigate if and how the density profile is modified when a step is moving relative to the gas. We show by performing computational experiments that the presence of steps in the linear and nonlinear potentials can be revealed by dragging a defect through the BEC \cite{bpa_imp,hakim}. For the linear and nonlinear steps that we described above, the condensate density is constant within and outside of the step. However, the speed of sound $c$ is different in the two regions: \begin{equation} c = \sqrt{g(z)n(z)}\,, \end{equation} where \(n(z) = \vert\phi(z)\vert^2\) is the BEC density \cite{sound}. To perform computations that parallel viable experiments, we simulate a moving defect using a potential of the form \begin{equation} V(z,t) = Ae^{-[z-r(t)]^2/w^2}\,, \end{equation} where \(r(t) = r(0) + vt\) represents the center of a defect that moves with speed \(v\) and $A$ and $w$ are amplitude- and width-related constants. 
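A minimal computational sketch of such a dragging experiment is given below. It is not the production code behind Fig.~\ref{fig-dd}: the grid, the defect amplitude $A$, the width $w$, the starting position, and the step parameters $z_0$ and $s$ are illustrative assumptions, while $v=0.6$, $\gamma=-1$, and $\Delta V = 0.5$ follow the caption of Fig.~\ref{fig-dd}. The sketch evolves the quasi-1D GP equation with a standard split-step Fourier scheme; note that the initial state has exactly constant density, so any density features that develop are generated purely dynamically:
\begin{verbatim}
import numpy as np

# grid, time step, and physical parameters (dimensionless units)
nz, Lz, dt, T = 2048, 200.0, 0.002, 60.0
z = np.linspace(-Lz/2, Lz/2, nz, endpoint=False)
kz = 2*np.pi*np.fft.fftfreq(nz, d=Lz/nz)

mu, V0, g0 = 1.0, 0.0, 1.0            # repulsive BEC; gamma = (V0-mu)/g0 = -1
dV, z0, s = 0.5, 5.0, 0.5             # Delta V = 0.5; z0 and s are assumed values
step = 0.5*(np.tanh((z + z0)/s) - np.tanh((z - z0)/s))
V, g = V0 + dV*step, g0 - dV*step     # Delta g = Delta V / gamma = -Delta V

Ad, w, v = 0.7, 1.0, 0.6              # defect amplitude/width assumed; speed v = 0.6
psi = np.sqrt((mu - V)/g).astype(complex)  # constant-density (n = 1) initial state

def defect(t):                        # moving Gaussian defect, r(t) = r(0) + v t
    return Ad*np.exp(-(z - (-30.0 + v*t))**2/w**2)

Dhalf = np.exp(-0.25j*kz**2*dt)       # half step of the kinetic term
for it in range(int(T/dt)):
    psi = np.fft.ifft(Dhalf*np.fft.fft(psi))
    # potential + nonlinearity step; subtracting mu freezes the background phase
    psi *= np.exp(-1j*dt*(V + defect(it*dt) + g*np.abs(psi)**2 - mu))
    psi = np.fft.ifft(Dhalf*np.fft.fft(psi))

print(np.abs(psi).min())              # deep dips signal emitted dark solitary waves
\end{verbatim}
Varying $v$ relative to the two local sound speeds $c = \sqrt{g(z)n(z)}$ switches between the scenarios discussed next.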
The dynamics of defects moving in a BEC are sensitive to the speed of the defect relative to the speed of sound: speeds in excess of the speed of sound (i.e., supercritical defects) lead to the formation of dark solitary waves travelling behind the defect, whereas speeds below the speed of sound (i.e., subcritical defects) do not~\cite{hakim}. There are three possible scenarios. First, when the speed is subcritical, there is a density depression with essentially the same functional form as the linear potential. This changes shape slightly in the presence of the step; it deepens and widens for a step with \(\Delta g < 0\), and it becomes shallower and narrower when \(\Delta g > 0\)~\footnote{Additionally, initialization of the moving step or an impact on the step produces small-amplitude, oscillatory Hamiltonian shock waves \cite{hoefer}.}. When the speed is larger but still subcritical, the situation is similar---except that the depression distorts slightly, giving rise to a density hump in front of the defect. Second, when the defect speed is supercritical within the step region but subcritical outside of it, we expect the nucleation of dark solitary waves in the step region. Because the defect's speed is smaller than the background sound speed, the emission of solitary waves downstream of the defect becomes a clear indication of the presence of a step. We demonstrate this scenario in Fig.~\ref{fig-dd}. The third possible scenario involves a defect that is supercritical in both regions. \section{Existence, Stability, and Dynamics of Solitary Waves. Part I: Theoretical Analysis} Our scheme for applying compensating steps to the linear and nonlinear potentials and our ensuing observation that solitary waves emerge from moving steps warrant a detailed investigation of the dynamics in this scenario. In particular, we examine the existence and stability of solitary-wave solutions as a function of step parameters (especially step width). \subsection{Bogoliubov-de Gennes Analysis} We apply the Bogoliubov-de Gennes (BdG) ansatz \begin{align}\label{linstab} \!\!\!\Psi(z,t) = e^{-i\mu t}\left[\phi_0(z) \!+\! \sum_j(u_j(z)e^{-i\omega_jt} \!+\! v^*_j(z)e^{i\omega_jt})\right] \end{align} to the time-dependent quasi-1D GP equation. Equation (\ref{linstab}) defines the linear eigenfrequencies \(\omega_j\) for small perturbations that are characterized by eigenvectors \(u_j(z)\) and \(v_j(z)\). Linearizing the time-dependent GP equation about the reference state \(\phi_0(z)\) using equation (\ref{linstab}) yields the BdG eigenvalue problem. The eigenfrequencies \(\omega_j\) come in real (marginally stable) or imaginary (exponentially unstable) pairs or as complex (oscillatorily unstable) quartets. In our analytical approach, we examine perturbations of the time-independent GP equation (\ref{gpe}) with constant potentials $V(z) \equiv V_0 = 0$ and $g(z) \equiv g_0 =\pm1$. The perturbations in the linear and nonlinear steps are thus \(\Delta g(z)\) and \(\Delta V(z) = \gamma\Delta g(z)\). We introduce \(\epsilon \equiv \vert\Delta g\vert\) as a small parameter and (to facilitate presentation) use the term ``negative width'' to describe a step with $\Delta g < 0$. When $g_0=\pm 1$, equation (\ref{gpe}) has two families of (stationary) soliton solutions, which are characterized by center position $\xi$ and chemical potential $\mu$. 
The case $g_0=-1$ yields bright solitons: \begin{equation} \phi_{\mathrm{bs}}(z-\xi) = \eta_{\mathrm{bs}} \sech\left(\eta_{\mathrm{bs}}(z-\xi)\right)\,, \end{equation} where \(\eta_{\mathrm{bs}}=\sqrt{-2\mu}\) and \(\mu<0\). The case $g_0=1$ yields dark solitons: \begin{equation} \phi_{\mathrm{ds}}(z-\xi) = \eta_{\mathrm{ds}} \tanh\left(\eta_{\mathrm{ds}}(z-\xi)\right)\,, \end{equation} where \(\eta_{\mathrm{ds}}=\sqrt{\mu}\) and \(\mu>0\). \subsection{Effective-Potential Theory} We use a Melnikov analysis to determine the persistence of bright~\cite{Sand} and dark solitary waves \cite{Pel}. We find that bright solitary waves can, in principle, be stable within the step in the potentials, whereas stationary dark solitary waves turn out to be generically unstable there. To determine the persistence of a bright solitary wave, we calculate the center positions at which its associated Melnikov function (i.e., the perturbed energy gradient)~\cite{Sand} vanishes. This yields the condition \begin{align}\label{melbright} M_{\mathrm{bs}}'(\xi_0) &= \int_{-\infty}^\infty \biggl[\frac{d[\Delta V(z)]}{dz}\phi_{\mathrm{bs}}^2(z-\xi_0) \notag \\ &+ \frac{1}{2}\frac{d[\Delta g(z)]}{dz}\phi_{\mathrm{bs}}^4(z-\xi_0)\biggr] dz = 0 \end{align} on the first derivative of the effective potential at the solitary-wave center $\xi = \xi_0$. The GP equation without a potential is spatially homogeneous, and it possesses translational and $U(1)$-gauge symmetries. These symmetries are associated with a quartet of eigenfrequencies at the origin. When the translational symmetry is broken (e.g., by the steps in \(V(z)\) and \(g(z)\)), a pair of eigenfrequencies leaves the origin. Tracking their evolution makes it possible to examine the stability of solitary waves of the perturbed system. We follow these eigenfrequencies by computing the function \begin{align} M_{\mathrm{bs}}''(\xi_0) &= \int_{-\infty}^\infty \biggl[\frac{d^2[\Delta V(z)]}{dz^2}\phi_{\mathrm{bs}}^2(z-\xi_0) \notag \\ &+ \frac{1}{2}\frac{d^2[\Delta g(z)]}{dz^2}\phi_{\mathrm{bs}}^4(z-\xi_0)\biggr] dz\,, \end{align} which determines the concavity of the perturbed energy landscape and is directly associated with the eigenfrequencies of the linearization through~\cite{Sand} \begin{equation} \omega^2 = \frac{1}{2\sqrt{-2\mu}}M_{\mathrm{bs}}''(\xi_0) + O(\epsilon^2)\,, \end{equation} where we note that $M_{\mathrm{bs}}''(\xi_0) = O(\epsilon)$. Stable (respectively, unstable) solitary waves exist at minima (respectively, maxima) of the effective potential $M_{\mathrm{bs}}$. Hence, bright solitary waves can, in principle, be stable within the step. We compute analogous expressions for dark solitary waves, but the Melnikov function now needs to be renormalized due to the presence of a nonzero background density~\cite{Pel}. 
The first and second derivatives of the effective potential $M_{\mathrm{ds}}$ evaluated at the solitary-wave center $\xi = \xi_0$ are \begin{align} M_{\mathrm{ds}}'(\xi_0) &= \int_{-\infty}^\infty \biggl[\frac{d[\Delta V(z)]}{dz}\left[\eta_{\mathrm{ds}}^2-\phi_{\mathrm{ds}}^2(z-\xi_0)\right]\nonumber \\ &+ \frac{1}{2}\frac{d[\Delta g(z)]}{dz}\left[\eta_{\mathrm{ds}}^4- \phi_{\mathrm{ds}}^4(z-\xi_0)\right]\biggr] dz =0 \end{align} and \begin{align} M_{\mathrm{ds}}''(\xi_0) &= \int_{-\infty}^\infty \biggl[\frac{d^2[\Delta V(z)]}{dz^2}\left[\eta_{\mathrm{ds}}^2-\phi_{\mathrm{ds}}^2(z-\xi_0)\right] \nonumber \\ &+ \frac{1}{2}\frac{d^2[\Delta g(z)]}{dz^2}\left[\eta_{\mathrm{ds}}^4- \phi_{\mathrm{ds}}^4(z-\xi_0)\right]\biggr] dz \neq 0\,. \end{align} The expression for the associated eigenfrequencies in this case is~\cite{Pel} \begin{equation} \omega^2 = \frac{1}{4}M_{\mathrm{ds}}''(\xi_0)\left(1-\frac{i\omega}{2}\right) + O(\epsilon^2)\,, \end{equation} where we choose the root that satisfies \(\mathrm{Re}(i\omega)>0\) and we note that $M_{\mathrm{ds}}''(\xi_0) = O(\epsilon)$. The main difference between the spectra for dark and bright solitary waves is that the continuous spectrum associated with the former (due to the background state) lacks a gap about the origin. Consequently, exiting along the imaginary axis is not the only way for eigenfrequencies to become unstable. Even when eigenfrequencies exit toward the real axis, they immediately leave it as a result of their collision with the continuous spectrum; this leads to an eigenfrequency quartet. Thus, stationary dark solitary waves are generically unstable within the step. \section{Existence, Stability, and Dynamics of Solitary Waves. Part II: Computational Results} \begin{figure}[h!t] \centering \includegraphics[width=0.5\textwidth,clip,trim = 0 6 0 5]{Final2_abc.pdf} \vspace{-1.5cm} \caption{[Color online] (Top) Maximum imaginary eigenfrequencies versus step width (where a negative step width means that \(\Delta g < 0\)) for (left) dark solitary waves and (right) bright solitary waves. We show results for the perturbation strengths \(\epsilon = 0.1\) and \(\epsilon = 0.2\). Dashed curves give results for analytical calculations from effective-potential theory, and solid curves give numerical calculations using the BdG equations. The inset in the left panel shows finite-size effects (see the main text). (Bottom) Examples of the corresponding eigenfrequency spectra for $\epsilon = 0.1$. For both bright and dark solitary waves, we show the spectrum for a step width of $2z_0 = 0.25$ on the left and a step width of $2z_0 = -0.25$ on the right. } \label{fig-eigs} \end{figure} We use a fixed-point iteration scheme to identify stationary solitary-wave solutions, solve the BdG equations numerically to determine their corresponding eigenfrequencies, and employ parameter continuation to follow the solution branches as we vary the step width. We start with the $\xi_0 = 0$ branch, which exists for all step widths. In Fig.~\ref{fig-eigs}, we show the development of the eigenfrequencies of this branch of solutions as a function of step width for both dark (left) and bright (right) solitary waves. We obtain good \emph{quantitative} agreement between our results from effective-potential theory and those from BdG computations for the nonzero eigenfrequency associated with the intrinsic (translational) dynamics of the solitary wave. 
For the case of repulsive BECs ($g > 0$), the branch of solutions at $\xi = 0$ has a real instability for $\Delta g <0$ (i.e., $\Delta V > 0$) and an oscillatory instability for $\Delta g > 0$. We capture both types of instability accurately using effective-potential theory. An interesting but unphysical feature of the dark solitary waves is the presence of small ``jumps'' in the eigenfrequencies. These jumps are finite-size effects that arise from the discrete numerical approximation to the model's continuous spectrum \cite{Joh}. The case of attractive BECs ($g < 0$) is especially interesting. A pitchfork (symmetry-breaking) bifurcation occurs as the step widens; it is supercritical for \(\Delta g < 0\) and subcritical for \(\Delta g > 0\). In this case, oscillatory instabilities are not possible when translational invariance is broken~\cite{Sand}. A direct and experimentally observable consequence of our analysis is that (for $\Delta g>0$) bright solitary waves remain stable for sufficiently large step width, whereas narrowing the step should eventually lead to unstable dynamics. For dark solitary waves, by contrast, we expect the dynamics to be unstable in experiments for all step widths. \begin{figure*}[h!t] \centering \includegraphics[width=1\textwidth,clip,trim = 0 5 0 5]{pplane_no_title.pdf} \caption{[Color online] Phase planes for Newtonian dynamics that describe bright solitary waves in an attractive BEC for four different step widths. The thick dash-dotted lines represent the edges of the step. We highlight the equilibria with dots, triangles, and stars. The light (orange) curves correspond to trajectories that originate at equilibria, and we show other example trajectories as dark (black) curves. The step widths are (upper left) $2z_0 = -1$, (upper right) $2z_0 = -1.4$, (lower left) $2z_0 = -1.8$, and (lower right) $2z_0 = -6$. } \label{fig-pplane} \end{figure*} To further probe the bifurcation, we study the Newtonian dynamics~\cite{dyn} of the bright solitary wave: \begin{equation}\label{newt} m_{\mathrm{eff}}\frac{d^2\xi}{dt^2}=-\nabla U(\xi) = 2M'_{\mathrm{bs}}(\xi)/N\,, \end{equation} where the effective mass is $m_{\mathrm{eff}} = 1/2$. We examine phase portraits of equation (\ref{newt}) by plotting the center-of-mass position $z_{\mathrm{cm}} \approx \xi$ versus the center-of-mass velocity $v_{\mathrm{cm}} \approx \frac{d\xi}{dt}$. As we illustrate in Fig.~\ref{fig-pplane}, this is convenient for examining changes in dynamics as we alter the step width. For narrow steps (e.g., a width of $2z_0 = -1$), there is a center at $z_{\mathrm{cm}}=0$ flanked by two saddle points (stars) just outside of the step (whose edges we indicate using dash-dotted lines). When $\Delta g < 0$ (i.e., $\Delta V >0$), a supercritical pitchfork bifurcation occurs at $2z_0 \approx -1.2$, as the center at the origin transitions to a pair of centers separated by a saddle at the origin (see the top right panel). As the step widens further (bottom left panel), the heteroclinic orbit that previously enclosed the central three equilibria is no longer present, and the centers are now surrounded by homoclinic orbits that emanate from the outer saddle points. Eventually, each outer saddle and its associated center annihilate one another (bottom right panel). When $\Delta g > 0$, the types of equilibria are interchanged (saddles become centers and vice versa). The main difference that occurs in this case is that solitary waves can no longer be reflected by the step; they are all transmitted. 
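The phase planes of Fig.~\ref{fig-pplane} can be approximated directly from equations (\ref{melbright}) and (\ref{newt}). The following minimal sketch is not the code used to produce the figures: the edge sharpness $s$ and the quadrature window are assumed values, while $\epsilon = \vert\Delta g\vert = 0.1$, the width $2z_0 = -1.4$, and the initial condition $(4,-0.22)$ follow the text. It integrates the Newtonian dynamics with the Melnikov force evaluated by quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, solve_ivp

mu, gamma, dg = -1.0, -1.0, -0.1        # attractive BEC; Delta V = gamma*dg
z0, s = 0.7, 0.2                        # half-width 0.7 ("width -1.4"); s assumed
eta = np.sqrt(-2.0*mu)                  # bright-soliton amplitude, eta_bs
N = 2.0*eta                             # norm of phi_bs^2

def dstep(z):                           # derivative of the tanh step profile
    zp, zm = (z + z0)/s, (z - z0)/s
    return (np.cosh(zp)**-2 - np.cosh(zm)**-2)/(2.0*s)

def phi2(x):                            # phi_bs(x)^2
    return (eta/np.cosh(eta*x))**2

def Mprime(xi):                         # eq. (melbright), by quadrature
    f = lambda z: dg*dstep(z)*(gamma*phi2(z - xi) + 0.5*phi2(z - xi)**2)
    return quad(f, -25.0, 25.0, limit=200)[0]

def rhs(t, y):                          # eq. (newt) with m_eff = 1/2
    xi, v = y
    return [v, 4.0*Mprime(xi)/N]

sol = solve_ivp(rhs, [0.0, 60.0], [4.0, -0.22], max_step=0.25)
print(sol.y[0, -1], sol.y[1, -1])       # final center position and velocity
\end{verbatim}
Sweeping a grid of initial $(z_{\mathrm{cm}}, v_{\mathrm{cm}})$ values with this routine traces out phase portraits analogous to those of Fig.~\ref{fig-pplane}.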
As one increases the magnitude of the step width from $0$, there is a saddle flanked by two centers. At the bifurcation point, the central saddle splits into two saddles with a center between them. The changes to the possible trajectories in phase space suggest a viable way to investigate the bifurcation experimentally (and hence to distinguish between narrow and wide steps). The presence of a step alters the path of a moving solitary wave, as is particularly evident by examining the wave speed. As we illustrate in Fig.~\ref{fig-dyn}, the solitary-wave dynamics depends on the number and type of phase-plane equilibria (and hence on the step width). The main panel shows how one can use variations in $v_{\mathrm{cm}}$ of a transmitted bright solitary wave to identify which equilibria are present. The center-of-mass motion of the solitary wave is a particularly useful quantity, as it is directly accessible to experimental measurement through time-resolved detection of spatial density profiles. The techniques outlined above for shaping the nonlinear potential---i.e., engineering the spatial profile $g(z)$ while automatically compensating the linear potential $V(z)$---give a straightforward method to adjust the step width in the laboratory. We examine trajectories starting from the same initial conditions, $(z_{\mathrm{cm}}(0), v_{\mathrm{cm}}(0)) = (4,-0.22)$, for step widths of $-1,-1.4$, and $-1.8$. The simplest trajectory occurs for the narrowest width ($2z_0 = -1$): as the solitary wave traverses the step, its speed first drops before rising again in the center of the step and then dropping again as it leaves the step (due to its encounter with the two saddles and the center in the phase plane; see Fig.~\ref{fig-pplane}). For wider steps, the dynamics illustrate the effects of the bifurcation: instead of a single peak in the speed, there are now two peaks separated by a well. As the step widens further, the two peaks move outward and follow the centers to the edge of the step. The maximum and minimum in each pair move closer together in both $v_{\mathrm{cm}}$ and $t$ as one approaches the edge of the step. The solitary wave can either be transmitted (as illustrated in Fig.~\ref{fig-dyn}) or reflected by the step. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth,clip,trim = 0 4 0 8]{dyn_no_title.pdf} \caption{[Color online] (Left) Effect of the step on the movement of a bright solitary wave for three different step widths for the GP equation (solid curves) and for numerical solutions of the Newtonian dynamics of the effective-potential (EP) equations (dashed curves). (Right) Contour plots of \(\vert\psi(z,t)\vert^2\) obtained by solving the GP equation numerically for step widths of (top) $-1$ and (bottom) $-1.8$. } \label{fig-dyn} \end{figure} \section{Conclusions} We introduced an experimentally realizable setup to study statically homogeneous BECs in mutually compensating inhomogeneous linear and nonlinear potentials. We showed that---in contrast to the straightforward static scenario---a flowing gas will encounter sound-speed differences, which can induce interesting dynamics such as solitary-wave formation and motion. As a simple demonstration, we examined a step defect, whose width affects the system's dynamics. We conducted a thorough examination of solitary-wave stability and dynamics in this collisionally inhomogeneous setting. 
We also showed how the balancing of linear and nonlinear potentials that yields constant-density solutions in the static case can be achieved experimentally. We found that effective-potential theory gives a good \emph{quantitative} description of the existence and eigenfrequencies of both bright and dark solitary waves, and we used it to quantitatively track the evolution of the translational eigenfrequencies as a function of the step width. We identified a symmetry-breaking bifurcation in the case of attractive BECs and illustrated how the presence of the bifurcation is revealed by the motion of solitary waves through the step region. We also found that stationary dark solitary waves are generically unstable through either exponential or oscillatory instabilities. The system that we have studied provides a promising setup for future investigations, as it allows the experimentally realizable possibility of solitary-wave control via accurate, independent tailoring of linear and nonlinear potentials. It would be interesting to explore the phase-coherence properties of a collisionally inhomogeneous 1D quasicondensate, for which phase correlations (at zero temperature) decay algebraically with an interaction-dependent exponent \cite{1dpowerlaw}. Quasicondensates have comparatively small density fluctuations \cite{davis}. In contrast to the scenario on which we have focused in the present paper, even a static quasicondensate gas would reveal a step in the nonlinearity in an interference experiment \cite{Kru2010} when the density profile is homogeneous. The study of such quasicondensates and of the phase fluctuations in them is a topic of considerable current interest~\cite{davis}, and it is desirable to enhance understanding of the properties of solitary waves in such systems. \section*{Acknowledgements} PGK acknowledges support from the US National Science Foundation (DMS-0806762), the Alexander von Humboldt Foundation, and the Binational Science Foundation (grant 2010239). PK thanks the EPSRC and the EU for support. We also thank an anonymous referee for helpful comments.
\section{\label{sec:level1}Introduction} In the mid infrared regime, ranging from 2 to 12 $\mu$m, high power optical pulses have found applications in near field microscopy and spectroscopy, mid infrared fiber sources, chemical sensing, biomedical surgeries, imaging and so on \cite{sanghera,ghosh1,sensor}. In this regime, chalcogenide glass based optical fibers are found to be highly efficient in generating high power ultra short optical pulses due to their transparency to mid IR radiation and their extraordinary linear and nonlinear properties \cite{chalco,chalco2,book2}. Parabolic pulses (PPs) are a special kind of high power optical pulse that can withstand the high nonlinearity of an optical fiber, being free from optical wave breaking \cite{wavebreak}, and that maintain their parabolic temporal profile throughout the propagation length, with a characteristic linear chirp across the pulse width, when operated in the normal group velocity dispersion (GVD) regime \cite{zhang}. Lately, the generation of high power PPs in fiber amplifiers, fiber Bragg gratings and passive fibers has been demonstrated \cite{fermann,kruglov,parm,barh}. However, most of these studies deliberately ignore any detailed analysis of the self-similar states during propagation through real waveguides. This paper presents a numerical study of the generation of a high power parabolic pulse in a chalcogenide glass based optical fiber under various input conditions and analyzes its stability during propagation through the fiber with high nonlinearity, tailored dispersion and suitably customized losses. Most developments in the generation of parabolic pulses have taken place at telecommunication wavelengths ($\sim$ 1.55 $\mu$m) and in active media such as fiber amplifiers \cite{wabnitz,finot}. Parabolic pulses have also been generated in a millimeter long tapered silicon photonic nanowire (Si-PhNW) at $\sim$ 2.2 $\mu$m wavelength, and these are found to be more stable than those generated at 1.55 $\mu$m \cite{lavdas}. Using a chalcogenide glass based microstructured optical fiber (MOF), a PP with $\sim$ 4.98 ps full width at half maximum (FWHM) and $\sim$ 46 W peak power has been generated at 2.04 $\mu$m wavelength \cite{barh}. Although these works demonstrate efficient generation of PPs with quite high power and short temporal width in the mid IR, less attention has been paid to investigating the stability of such high power pulses. As these optical pulses are extremely short (picosecond/sub-picosecond), they are highly susceptible to the various fiber nonlinearities as well as to the dispersion behavior, which can lead them to break up after a few centimeters of propagation. Fiber losses, with their limited and nonuniform bandwidth, are also an important design issue, as in practice fiber deformations introduced during fabrication cause losses which can be fatal for an optical pulse. In our work, we have numerically generated a parabolic pulse at 2.1 $\mu$m wavelength from an input Gaussian pulse with 75 W peak power and 1.9 ps FWHM after its travel through a 20 cm long arsenic sulphide ($As_2S_3$) matrix based up-tapered MOF. Moreover, parabolic pulses generated from input pulse shapes other than Gaussian have been studied. Accordingly, to study the stability of the generated PP, a variable longitudinal loss profile along with its frequency dependence is incorporated at 2.1 $\mu$m wavelength and the corresponding changes in the output pulse characteristics have been reported. 
Further propagation of the generated PP through different dispersion regimes has been investigated and compared for obtaining the most stable propagation dynamics in such geometries. \section{\label{sec:level2}Generation of Parabolic Pulse} \subsection{\label{sec:level3}Numerical Modelling for PP Generation} The study of most nonlinear effects in optical fibers involves short pulses with widths ranging from a few picoseconds (ps) to a few femtoseconds (fs). Propagation of such short pulses within the optical fiber is accompanied by dispersion and nonlinearity, which influence their shapes and spectra. The pulse evolution along the tapered dispersion decreasing MOF has been modeled by solving the following nonlinear Schr\"{o}dinger equation (NLSE). Considering a slowly varying pulse envelope $A(z,T)$, the NLSE for the propagation of short optical pulses takes the form \cite{book}, \begin{equation} \frac{\partial A}{\partial z}+\beta_1 \frac{\partial A}{\partial T}+i \frac{\beta_2}{2} \frac{\partial^2 A}{\partial T^2} - \frac{\beta_3}{6} \frac{\partial^3 A}{\partial T^3} + \frac{\alpha}{2} A = i \gamma(\omega_0) |A|^2A, \end{equation} where the nonlinear parameter $\gamma$ is defined as \begin{equation} \gamma(\omega_0) = \frac{n_2(\omega_0) \omega_0}{c A_e}. \end{equation} Here $\alpha$ is the loss parameter, $\beta_1$ and $\beta_2$ are the first and second order dispersions respectively, $\beta_3$ is the third order dispersion (TOD), $A_e$ is the effective mode area of the fiber and $n_2$ is the nonlinear coefficient of the medium. The pulse amplitude is assumed to be normalized such that $|A|^2$ represents the optical power. In an ideal lossless optical fiber with normal GVD (i.e., positive $\beta_2$) and a hyperbolically decreasing dispersion profile along the fiber length, the asymptotic solution of the NLSE is a parabolic intensity profile. Under this condition, the propagation of optical pulses is governed by the NLSE of the form \cite{zhang}, \begin{equation} \label{eq:3} i \frac{\partial A}{\partial z}-\frac{\beta_2}{2} D(z) \frac{\partial^2 A}{\partial T^2}-i \frac{\beta_3}{6} \frac{\partial^3 A}{\partial T^3}+ \gamma(z)|A|^2A = 0, \end{equation} where $D(z)$ is the length dependent dispersion profile along the taper, $\beta_2$ (the $2^{nd}$ order GVD parameter) is positive, $\beta_3$ is the TOD value and $\gamma(z)$ is the longitudinally varying nonlinear (NL) coefficient. By making use of the coordinate transformation $\xi=\int_{0}^{z}D(z')dz'$ and defining a new amplitude $U(\xi,T)=\frac{A(\xi,T)}{\sqrt{D(\xi)}}$, eq.~(\ref{eq:3}) transforms to \begin{equation} \label{eq:4} i \frac{\partial U}{\partial \xi}-\frac{\beta_2}{2} \frac{\partial^2 U}{\partial T^2}-i \frac{\beta_3}{6 D(\xi)}\frac{\partial^3 U}{\partial T^3}+ \gamma(\xi)|U|^2U = i \frac{\Gamma(\xi)}{2}U, \end{equation} where \begin{equation} \label{eq:5} \Gamma(\xi)=-\frac{1}{D}\frac{dD}{d\xi}=-\frac{1}{D^2}\frac{dD}{dz} \end{equation} Since $D(z)$ decreases with increasing $z$, $\Gamma$ in eq.~(\ref{eq:5}) is positive and hence mimics a gain term in eq.~(\ref{eq:4}). In the chosen dispersion decreasing fiber (DDF), the varying dispersion term is equivalent to the varying gain term of a fiber amplifier with normal GVD. Specifically, with the choice of $D(z) = \frac{1}{1+\Gamma_0 z}$ the gain coefficient becomes constant, i.e., $\Gamma$ = $\Gamma_0$. 
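Indeed, this is immediate from eq.~(\ref{eq:5}): for $D(z)=(1+\Gamma_0 z)^{-1}$ one has $dD/dz=-\Gamma_0(1+\Gamma_0 z)^{-2}$, so that \[ \Gamma =-\frac{1}{D^2}\frac{dD}{dz} =(1+\Gamma_0 z)^{2}\,\Gamma_0 (1+\Gamma_0 z)^{-2}=\Gamma_0 . \]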
The NLS equation in a fiber with normal GVD and a constant gain coefficient permits self-similar propagation of a linearly chirped parabolic pulse as an asymptotic solution. To study pulse propagation in nonlinear dispersive media, the split-step Fourier method (SSFM) has proven highly efficient; it is much faster than most other numerical approaches of comparable accuracy. In general, dispersion and nonlinearity act together along the length of the fiber. The SSFM obtains an approximate solution by assuming that, in propagating the optical field over a small distance $h$, the dispersion and nonlinear effects can be considered to act independently. More specifically, propagation from $z$ to $z+h$ is carried out in two steps. In the first step, nonlinearity acts alone while in the second step dispersion acts alone. In this method the NLSE can be written in the form \cite{book} \begin{equation} \frac{\partial A}{\partial z}=(\hat D+\hat N)A, \end{equation} where $\hat D$ is the differential operator that accounts for the dispersion and losses within the medium and $\hat N$ is the nonlinear operator that governs the effect of fiber nonlinearities on pulse propagation. \subsection{Pulse Evolution} We aim to generate a high power parabolic pulse in the mid infrared regime. A parabolic pulse has been efficiently generated through numerical simulation at 2.1 $\mu$m wavelength in an arsenic sulphide ($As_2S_3$) based MOF geometry with a solid core, surrounded by a holey cladding consisting of 4 hexagonally arranged rings of air holes embedded in the $As_2S_3$ matrix \cite{ghosh1,barh}. $As_2S_3$ possesses the lowest transmission loss ($\alpha_T$ $\sim$ 0.4 dB/m at 2 $\mu$m) among chalcogenide glasses and a very high nonlinearity ($n_2$ $\sim$ $4.2\times10^{-18}$ $m^2$/W at 2 $\mu$m). A meter long up-tapered MOF with suitably tailored dispersion and nonlinearity is shown in figure \ref{fig:figure1}. A Gaussian pulse of 75 W peak power and 1.9 ps initial full-width-at-half-maximum (FWHM) was fed into the input end of the fiber and, after propagating through only 20 cm of the fiber length, the shape of the pulse in the time domain is transformed into a parabolic one. The pulse evolution is shown in figure \ref{fig:figure2}. The Gaussian pulse is reshaped into a parabola under the combined influence of self phase modulation (SPM) and normal GVD. In figure \ref{fig:figure3}(b), the top-hat nature of the output pulse on a logarithmic scale carries the hallmark that the generated pulse is essentially parabolic. With the inclusion of third order dispersion (TOD), the parabolic profile of the pulse is still maintained. The output pulse is broadened up to 4.65 ps (figure \ref{fig:figure3}(a)) and a linear chirp is generated across the entire pulse width as shown in the inset of figure \ref{fig:figure3}(a). \begin{figure}[htbp] \includegraphics[width=8cm]{figure1.eps} \caption{(color online) A schematic of the linearly up-tapered MOF. The length of the MOF is considered to be 1 m. At the input end d0 is the individual air hole diameter and d1 the same at the output end, h0 and h1 are the air hole separation at the input and output end, respectively. 
The chosen taper ratio is 1.05} \label{fig:figure1} \end{figure} \begin{figure}[htbp] \includegraphics[width=8cm]{figure2.eps} \caption{(color online) Pulse evolution from Gaussian input pulse (blue curve) to parabolic pulse (black curve) at only 20 cm length of the fiber.} \label{fig:figure2} \end{figure} \begin{figure}[htbp] \includegraphics[width=8cm]{figure3a_3b.eps} \caption{(color online) (a) Time domain plot of the input Gaussian (black) and the output parabolic pulse (red). The linear chirp across the parabolic pulse width is shown in the inset, (b) the logarithmic plot of the input (black) and the output (red) pulses with linear chirp profile shown in the inset.} \label{fig:figure3} \end{figure} \subsection{Effect of Input Pulse Shapes} In this section we will discuss how various input pulse shapes affect the parabolic pulse evolution through the MOF. In order to investigate this we employ different pulse shapes at the input, such as a hyperbolic secant pulse with $A(0,T) \propto \mathrm{sech}(T/T_0)$, a triangular pulse with $A(0,T) \propto \mathrm{tripuls}(T/T_0)$ and a supergaussian pulse with $A(0,T) \propto \exp(-(T/T_0)^{2m})$. The input pulses are unchirped with the same peak power and energy as shown in figure \ref{fig:figure4}(a), and the medium of propagation is assumed to be lossless. First, we consider a hyperbolic secant pulse with 159 pJ of energy and 1.685 ps FWHM, which after propagating through 15 cm of the fiber length is converted to a parabolic intensity profile having a temporal FWHM of 3.48 ps. The output spectral broadening is estimated to be 125 nm. Relative to the PP evolved from a Gaussian input, the corresponding PP for the secant pulse is less broadened in both the temporal and spectral domains. With a triangular pulse of energy 159 pJ and FWHM of 1.70 ps, a nearly parabolic pulse is generated after 17 cm of propagation through the fiber. The output spectrum is broadened by 110 nm. Further, a supergaussian pulse of the same energy and FWHM of 2.21 ps is fed into the input end of the fiber. After propagating only 8 cm, a very weak parabolic intensity profile with FWHM 2.38 ps and a nearly linear chirp across it is observed. Further propagation of the pulse results in a triangular shaped temporal profile. The output pulse shapes obtained at the optimum fiber lengths are depicted in figure \ref{fig:figure4}(b). A comparative study of the characteristic parameters of the output pulses for different input pulse shapes is presented in Table~\ref{jlab1}. 
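The pulse-evolution results above can be reproduced with a compact split-step integrator of eq.~(\ref{eq:3}). The following minimal sketch implements the symmetrized scheme of \S\ref{sec:level3} with the hyperbolic profile $D(z)=1/(1+\Gamma_0 z)$; it is written in normalized units, and all parameter values here are illustrative assumptions rather than the actual $As_2S_3$ MOF values (whose tapered dispersion and nonlinearity profiles are not reproduced here):
\begin{verbatim}
import numpy as np

# normalized units; parameters below are assumptions for illustration only
nt, Tw = 2**12, 60.0                       # time-grid points and window
t = np.linspace(-Tw/2, Tw/2, nt, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(nt, d=Tw/nt)    # angular-frequency grid
beta2, beta3, gam, G0 = 1.0, 0.02, 1.0, 0.5
L, nsteps = 4.0, 4000
h = L/nsteps

T0 = 1.9/(2*np.sqrt(np.log(2)))            # Gaussian with 1.9 (normalized) FWHM
A = np.sqrt(75.0)*np.exp(-t**2/(2*T0**2)) + 0j

for i in range(nsteps):
    z = (i + 0.5)*h
    D = 1.0/(1.0 + G0*z)                   # hyperbolic DDF profile, eq. (5)
    # half step of dispersion, full step of nonlinearity, half step of dispersion
    lin = np.exp(1j*(h/2)*((beta2/2)*D*w**2 - (beta3/6)*w**3))
    A = np.fft.ifft(lin*np.fft.fft(A))
    A = A*np.exp(1j*gam*np.abs(A)**2*h)
    A = np.fft.ifft(lin*np.fft.fft(A))

print(np.max(np.abs(A)**2))                # peak power of the reshaped pulse
\end{verbatim}
Replacing the Gaussian initial condition by the hyperbolic secant, triangular, or supergaussian shapes defined above allows the corresponding comparisons to be repeated.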
\begin{table*}[t] \centering \caption{\label{jlab1}Comparison of output pulses generated from various input pulse shapes.} \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Input&Input&Input&Optimum fiber&Fiber length&Output&Output&Energy&Spectral\\ pulse&energy&FWHM&length for PP&before wave&FWHM&energy&conversion&broadening\\ shapes&(pJ)&(ps)&generation (cm)&breaking (cm)&(ps)&(pJ)&efficiency (\%)&(nm)\\ \hline Gaussian&159&1.90&15&25&4.05&159&100&122\\ Hyperbolic&159&1.52&15&22&3.48&159&100&125\\ secant&&&&&&&&\\ Triangular&159&1.70&17&19&3.72&159&100&110\\ Supergaussian&159&2.21&8&13&2.38&159&100&120\\ \hline \end{tabular}\\ \end{table*} \normalsize \begin{figure}[htbp] \includegraphics[width=8cm]{figure4a_4b.eps} \caption{(color online) Plot of (a) four different input pulse shapes - Gaussian (dashed black), hyperbolic secant (solid black), triangular (red) and supergaussian (blue), all having the same energy 159 pJ; and (b) output pulses obtained from various input pulses.} \label{fig:figure4} \end{figure} \section{Stability of Similariton Propagation} \subsection{Loss Window} We start our study with the generation of parabolic pulses in a lossless, suitably dispersion and nonlinearity tailored fiber. Here we examine the stability of the generated PP under the influence of a lossy medium. For this purpose we considered the specific material loss window for the $As_2S_3$ chalcogenide MOF around the 2.1 $\mu$m wavelength, following \cite{losswindow}, as shown in figure \ref{fig:figure5}(a). The material loss of $As_2S_3$ glass is around 0.3 dB/m \cite{losswindow} and the confinement loss of the MOF is taken as 1.0 dB/m \cite{barh}. So a total loss of 1.30 dB/m has been considered as the mean value and a certain amount of deliberate fluctuation is introduced. Additionally, our investigation includes two further mean values of the overall loss, exceeding this baseline value. \begin{figure}[h] \includegraphics[width=8cm]{figure5a_5b.eps} \caption{(a) Loss window for $As_2S_3$ matrix based chalcogenide MOF and (b) loss variation of the fiber along the fiber length.} \label{fig:figure5} \end{figure} \subsection{Loss Fluctuations} In order to check the stability of the output pulse spectrum, we introduce a variable loss along the fiber length with a certain amount of randomness, as shown in figure \ref{fig:figure5}(b), corresponding to three different loss values. To address the tolerance issue of the state-of-the-art fabrication process in terms of loss variation along the fiber length, loss fluctuations as high as 10\% and 20\% around the mean values have been considered. Figure \ref{fig:figure6}(a) illustrates the spectral power reduction due to 10\% fluctuations of the various loss values as compared to the lossless spectrum. Accordingly, the spectral power changes due to 20\% fluctuations of the loss values are shown in figure \ref{fig:figure6}(b). Exact quantifications of the spectral modifications in terms of 3dB bandwidth are shown in Table \ref{jlab2}. 
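The randomized longitudinal loss profiles of figure \ref{fig:figure5}(b) can be mimicked by a simple random modulation of the mean loss. The following is a sketch only (the grid resolution, random seed, and uniform distribution are assumptions on our part); the dB-scale values are converted to the linear attenuation constant that enters the $\frac{\alpha}{2}A$ term of the NLSE:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, nz = 1.0, 500                          # 1 m fiber, longitudinal grid
z = np.linspace(0.0, L, nz)
alpha_mean_dB = 1.30                      # dB/m: 0.3 (material) + 1.0 (confinement)
for frac in (0.10, 0.20):                 # 10% and 20% fluctuation levels
    alpha_dB = alpha_mean_dB*(1.0 + frac*rng.uniform(-1.0, 1.0, nz))
    alpha = alpha_dB*np.log(10.0)/10.0    # per meter, for the alpha/2 loss term
    print(frac, alpha.mean(), alpha.std())
\end{verbatim}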
\begin{table}[htbp] \centering \caption{\label{jlab2}Comparison of PP with variable loss effect.} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline Loss&Loss&3dB&Maximum&Output\\ (dB/m)&fluctuation&Bandwidth& spectral&energy (pJ)\\ &(\%)&change (\%)&rippling (dB)&\\ \hline 0&0.0&0.0&2.5&159\\ \hline \multirow{2}{*}{1.30}&10&5.5&2.9&127\\ &20&6.9&3.0&\\ \hline \multirow{2}{*}{1.80}&10&8.3&3.4&116\\ &20&9.8&3.5&\\ \hline \multirow{2}{*}{2.30}&10&8.9&3.5&105\\ &20&9.6&3.6&\\ \hline \end{tabular}\\ \end{table} \begin{figure}[htbp] \includegraphics[width=8cm]{figure6a_6b.eps} \caption{(color online) Various output spectra obtained at different loss values with (a) 10\% and (b) 20\% fluctuations of the longitudinal loss profiles respectively.} \label{fig:figure6} \end{figure} \subsection{Different Dispersion Regimes} For the stability analysis of the generated parabolic pulse, we use a two-stage propagation of the pulse. Once a parabolic pulse is generated, it has been shown in various experiments that it retains its shape throughout the propagation length and follows self-similar propagation. To investigate the self-similar propagation of the PP through a passive medium, we consider MOFs with three different configurations in which the parabolic pulse is generated in the first few centimeters of the fiber length. The PP then propagates through the rest of the fiber length, engineered with three distinct dispersion profiles, respectively. The chosen fiber geometries are shown in figure \ref{fig:figure7}. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{figure7.eps} \caption{Dispersion profiles of the (a) up-taper MOF structure, (b) up-down taper MOF and (c) up-no taper MOF.} \label{fig:figure7} \end{figure} Firstly, we consider a fully up-tapered MOF in which, up to 20 cm from the input end of the MOF, evolution of the parabolic pulse from a Gaussian seed pulse has been observed. Through the rest of the fiber length, the self-similar characteristics of the generated PP have been studied. Before explaining the results obtained from this up-tapered MOF, we will reconsider this pulse evolution process in the other two cases. The second kind of fiber geometry under consideration is an up-down tapered fiber. Here, the first 20 cm of the total fiber is a dispersion-decreasing MOF to generate the PP efficiently. Then this PP has been fed to a down-tapered MOF of the same material as the up-tapered fiber, with an increasing dispersion profile along the fiber length. Finally, the parabolic pulse generation and propagation have been studied in an up-straight MOF, where the PP is evolved through 20 cm of fiber length and then propagates through an untapered MOF with a constant dispersion profile. Results for these three chosen configurations are shown in figure \ref{fig:figure8}. \begin{figure}[htbp] \includegraphics[width=8cm]{figure8a_8b.eps} \caption{(color online) (a) Plot of temporal profiles of the output pulses obtained from three different MOF configurations at the end of 40 cm length and (b) spectral variation of the corresponding output pulses.} \label{fig:figure8} \end{figure} The output pulses and spectra after propagation through 40 cm of fiber look almost identical irrespective of the fiber geometry. Notably, the striking feature is that the pulses are no longer parabolic in shape, as one would expect from the self-similar propagation characteristics of parabolic pulses. Rather, we have obtained a parabolic pulse that does not behave as a similariton.
The input pulse has evolved to a parabolic shape at 20 cm of the fiber length, which is just a transient state of the pulse evolution in the passive medium. On further propagation it is essentially transformed into a nearly trapezoidal shape with linear chirp across most of the pulse. As our chosen MOF is a highly nonlinear fiber (HNLF), the large $\gamma$/$\beta_2$ ratio makes SPM dominate over dispersion. The combined effect of SPM and GVD results in broadening of the pulse but fails to maintain its shape. If we examine the chirp variation of the parabolic pulse in figure \ref{fig:figure3}, it can be seen that its linear nature extends almost over the entire pulse width, with steepened transitions at the leading and trailing edges. However, in all three chosen fiber configurations (as shown in figure \ref{fig:figure7}) the chirp evolution of the propagating pulse carries an interesting signature of flipping at both edges in the temporal domain. In addition, as anticipated, the parabolic profile is gradually transformed into a nearly trapezoidal profile with increasing propagation length. The nonmonotonic nature of the chirp is partly responsible for the reshaping of the propagating pulse \cite{nfactor}. The output spectra carry signatures of spectral broadening up to 140 nm and a nearly flat top (fluctuations fall within 3 dB). The unwanted side-lobes appear as a result of interference between newly generated frequency components. \section{Conclusion} In conclusion, the numerical generation of parabolic pulses from various input optical pulse shapes, such as Gaussian, hyperbolic secant, triangular and supergaussian, keeping the initial energy constant, has been demonstrated. An in-depth qualitative and quantitative analysis of these PPs has established that the PP obtained by reshaping of the input Gaussian pulse turns out to be the most efficient, irrespective of the choice of chalcogenide glass based MOF designs. Moreover, the PP generation in different length-dependent dispersion regimes, such as up-taper, up-down taper and up-no taper geometries, has been studied. From a direct comparison of the PPs obtained from the different structures, it is evident that the PPs look almost identical in every respect. Hence, from a practical point of view we may propose the up-tapered MOF geometry as the preferable fiber structure for generating PPs, owing to its fabrication-friendly geometry. In addition, the stability of the PP has been investigated by introducing longitudinally variable and customized fluctuating loss profiles within the specific loss window of the chosen MOFs. From the pulse dynamics through these dissipative structures, no significantly adverse effect on the shape of the output spectra was observed; however, a reduction in spectral power along with a smaller 3 dB bandwidth has been noticed. Moreover, from the propagation characteristics through the dispersion-tailored MOFs, it has been established that the generated parabolic shape is a transient state which is not capable of retaining its shape unless an optimized fiber design/scheme is proposed for a self-consistent solution. Our findings would be of key interest for the design and fabrication of self-consistent and stable PP sources for mid-infrared spectroscopy, fiber-based biomedical surgeries, chemical sensing, etc. \section*{Acknowledgement} S. N. Ghosh acknowledges the financial support by the Department of Science and Technology (DST), India as an INSPIRE Faculty Fellow [IFA-12;PH-13].
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section{Recent PDF Updates - effect and treatment of LHC data} \begin{wrapfigure}{r}{0.6\columnwidth} \vspace{-0.4cm} \centerline{\includegraphics[width=0.62\textwidth]{figures_Ratio-Boxes.png}} \vspace{-0.3cm} \caption{The CMS measurement of the $t/\bar t$ ratio in $t$-channel production, figure from \cite{Khachatryan:2014iya}.} \vspace{-0.3cm} \label{Fig2} \end{wrapfigure} Each group has produced updates including new data, often including data from the LHC. The recent analysis from the ABM group, ABM12~\cite{Alekhin:2013nda}, now includes more HERA cross-section data, and vector boson production data from ATLAS, CMS and LHCb. The PDF sets are determined together with $\alpha_S$, whose value comes out to be $\alpha_S(m_Z^2)=0.1132$ at NNLO. Top quark pair production data from the LHC are investigated, but not included in the default PDFs. Their inclusion tends to raise the high-$x$ gluon and $\alpha_S(m_Z^2)$ a little, the precise details depending on the top quark mass (and mass renormalization scheme) used. ABM PDF sets currently give the best fit to the ratio of $t$-channel single top to single anti-top production \cite{Khachatryan:2014iya,Aad:2014fwa}, as seen in Fig.~\ref{Fig2}, which is a constraint on $u/d$. \begin{figure} \vspace{-0.2cm} \centerline{\includegraphics[width=0.5\textwidth]{corrptct14nnlo8TeV.png} \includegraphics[width=0.5\textwidth]{corrptct14nnlo13TeV.png}} \vspace{-0.1cm} \caption{The correlation between top pair production in different $p_T$ bins and the gluon, figures from \cite{Dulat:2015mca}. } \vspace{-0.3cm} \label{Fig3} \end{figure} The CT14 PDF sets~\cite{Dulat:2015mca} have recently been made available at NLO, NNLO, and also at LO. These sets include a variety of LHC data sets as well as the most recent D0 data on electron charge asymmetry. The PDFs also use an updated parametrization based on Bernstein polynomials, which peak at a specific $x$. LHC inclusive jet data are included at NLO, and also in the NNLO fit. The main changes in the PDFs as compared to CT10 are a softer high-$x$ gluon, a smaller strange quark (partially due to correction of the charged current DIS cross section code) and the details of the flavour decomposition, e.g. $\bar u /\bar d$ and the high-$x$ valence quarks. CT14 does not fit top quark production data but does make comparisons, e.g. the correlation of top pair production with the gluon is shown in Fig.~\ref{Fig3}. \begin{wrapfigure}{r}{0.57\columnwidth} \vspace{-0.6cm} \centerline{\includegraphics[width=0.72\textwidth]{NNPDF30dataproc.png}} \vspace{-0.1cm} \caption{The data included in the NNPDF3.0 analysis, figure from \cite{Ball:2014uwa}. } \vspace{-0.6cm} \label{Fig4} \end{wrapfigure} The NNPDF3.0 PDFs \cite{Ball:2014uwa} are the recent major update within the NNPDF framework. As new data they include HERA inclusive structure function Run II data from H1 and ZEUS (before their combination), more recent ATLAS, CMS and LHCb data on gauge boson production and inclusive jets, and $W+$charm and top quark pair production. A subset of jet data is included at NNLO using an approximate NNLO treatment. The full set of data fitted is illustrated in Fig.~\ref{Fig4}. The NNPDF3.0 fitting procedure has been tuned using a closure test, i.e., by generating pseudo-data based on an assumed underlying set of PDFs. One verifies in this case that the output of the fitting procedure is consistent with the a priori known answer. As a by-product, one can investigate directly the origin of PDF uncertainties.
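As a toy illustration of the closure-test logic (not the NNPDF machinery itself), one can generate pseudo-data from a known functional form, refit it, and check that the pulls on the recovered parameters are of order one. Everything in the following sketch, from the toy PDF shape to the noise level, is an assumption made purely for illustration.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Toy closure test: pseudo-data from a known "truth", refitted and checked.
rng = np.random.default_rng(2)
truth = lambda x, a, b: a * x**(-0.2) * (1 - x)**b   # assumed toy shape
x = np.linspace(0.01, 0.9, 30)
a0, b0, sig = 1.0, 3.0, 0.02
pseudo = truth(x, a0, b0) + rng.normal(0, sig, x.size)

popt, pcov = curve_fit(truth, x, pseudo, p0=[0.5, 2.0],
                       sigma=sig*np.ones(x.size), absolute_sigma=True)
pulls = (popt - np.array([a0, b0])) / np.sqrt(np.diag(pcov))
print("fitted:", popt, "pulls:", pulls)              # pulls should be O(1)
\end{verbatim}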
The minimization has been optimized based on the closure test. The NNPDF3.0 PDFs display moderate changes in comparison to NNPDF2.3: specifically, somewhat smaller uncertainties and a noticeable change in the gluon-gluon luminosity, which is mainly due to the change in methodology. \begin{figure} \vspace{-0.0cm} \includegraphics[width=0.5\textwidth]{tnlo} \vspace{0.2cm} \includegraphics[width=0.5\textwidth]{tnnlo} \vspace{-0.7cm} \caption{The MMHT fit to $\sigma_{t\bar t}$ data, figures from \cite{Harland-Lang:2014zoa}.} \vspace{-0.2cm} \label{Fig6} \end{figure} The MSTW group has been renamed MMHT due to a change in personnel. The MMHT2014 PDFs~\cite{Harland-Lang:2014zoa} incorporate the improved parametrization and deuteron corrections of the MMSTWW study \cite{Martin:2012da}, and also a change in the heavy flavour scheme, and a change in the branching fraction $B_{\mu} = B(D \to \mu)$ used in the determination of the strange quark from $\nu N \to \mu\mu X$ data. The updated analysis includes new data: the combined HERA structure function data, improved Tevatron lepton asymmetry data, vector boson and inclusive jet data from the LHC (though LHC jet data are not included at NNLO), and top pair cross section data from the Tevatron and LHC. No PDFs change dramatically in comparison to MSTW2008 \cite{Martin:2009iq}, with the most significant changes being the shift in the small-$x$ valence quarks already observed in the MMSTWW study, a slight increase in the central value of the strange quark to help the fit to LHC data, and a much expanded uncertainty on the strange distribution. The PDFs are made available with 25 eigenvector pairs for $\alpha_S(m_Z^2) =0.118$ and 0.120 at NLO and 0.118 at NNLO. However, $\alpha_S(m_Z^2)$ is also determined by the NLO and NNLO fits, and values of $\alpha_S(m_Z^2)=0.1201$ and $0.1172$ respectively are found, in good agreement with the world average. A dedicated study of the uncertainties in the determination of $\alpha_S(m_Z^2)$ in the MMHT2014 analysis has been presented in~\cite{Harland-Lang:2015nxa}. \begin{wrapfigure}{r}{0.7\columnwidth} \vspace{-0.8cm} \centerline{\includegraphics[width=0.45\textwidth]{ttbarglucomp} \hspace{-0.8cm}\includegraphics[width=0.45\textwidth]{ttbarglunnlocomp}} \vspace{-0.8cm} \caption{Eigenvector constraints from top cross section data for MMHT.} \vspace{-0.2cm} \label{Fig7} \end{wrapfigure} MMHT fit the data on $\sigma_{t\bar t}$ from the Tevatron (the combined cross section measurement from D0 and CDF), and all published data from ATLAS and CMS at $7~{\rm TeV}$, together with one point at $8~{\rm TeV}$. They use $m_t = 172.5~{\rm GeV}$ with an error of $1~{\rm GeV}$ and with a $\chi^2$ penalty applied. The predictions and the fit are good, with the NLO fit preferring masses slightly below $m_t = 172.5~{\rm GeV}$ and the NNLO fit masses slightly above, see Fig.~\ref{Fig6}. The fit quality to the $\sigma_{t\bar t}$ data alone is very sensitive to the interplay of $m_t$ and $\alpha_S(M_Z^2)$ \cite{Harland-Lang:2015nxa}. In the NLO fit the inclusive $t\bar t$ cross section data used do not constrain any PDF eigenvectors, though they nearly constrain eigenvectors 29 and 31, both of which correspond to a decreased gluon at high $x$ only; eigenvector 31 is primarily constrained by CDF jet data. In the NNLO fit the inclusive $t\bar t$ cross section constrains one eigenvector, number 29, and nearly constrains number 41; both correspond to an increased gluon at high $x$ only.
The eigenvectors are shown in Fig.~\ref{Fig7}. \begin{figure}[] \vspace{-0.2cm} \centerline{\includegraphics[width=0.46\textwidth]{HERAIINCproc.png} \includegraphics[width=0.52\textwidth]{HERAIICCproc.png}} \vspace{-0.1cm} \caption{Neutral (left) and charged (right) current data from the final HERA combination, figure from \cite{Abramowicz:2015mha}.} \vspace{-0.2cm} \label{Fig8} \end{figure} The data for $t \bar t$ differential distributions are not currently used in PDF determinations, as they did not meet cut-off dates for data inclusion and also had missing NNLO corrections which may be important. In comparison with existing PDFs at NLO, the description of the $y_{t\bar t}$ distribution tends to be very good, but the $p_T$ distribution is off in shape, while $m_{t\bar t}$ is somewhere in between. It is interesting to see that the NNLO corrections \cite{Czakon:2015owf} improve the comparison to the $p_T$ distribution markedly. Since these updates, a HERA combination of all inclusive structure function measurements from Runs I and II has been presented \cite{Abramowicz:2015mha}, and included in the HERAPDF2.0 set. The improved data can be seen in Fig.~\ref{Fig8}. The resulting HERAPDF set has considerably reduced uncertainties, and a much improved constraint on flavour decomposition at moderate and high $x$, due to the difference between neutral current $e^+$ and $e^-$ cross sections, and to much more precise charged current data. The running at different beam energies gives sensitivity to $F_L(x,Q^2)$, which constrains the gluon. \begin{figure} \vspace{-0.1cm} \includegraphics[width=0.5\textwidth]{uvnnlohera2} \includegraphics[width=0.5\textwidth]{glunnlohera2}\\ \includegraphics[width=0.485\textwidth]{xd-global-newhera-hera-highQ.png} \hspace{0.3cm}\includegraphics[width=0.485\textwidth]{xubar-global-newhera-hera-highQ.png} \vspace{-0.65cm} \caption{MMHT (top) (figures from \cite{Thorne:2015caa}) and NNPDF (bottom) (figures from \cite{Rojo:2015nxa}) PDFs with the inclusion of the final HERA combined data.} \vspace{-0.2cm} \label{Fig9} \end{figure} These HERA combined data have now been included in global fits. Good fits, with little deterioration in the description of other data, are obtained for both MMHT \cite{Thorne:2015caa,Harland-Lang:2016yfn} and NNPDF \cite{Rojo:2015nxa}. These also result in small changes in the central PDFs and uncertainties, as shown in Fig.~\ref{Fig9}. Hence, there is no imperative to provide immediate further updates, since these will appear soon in any case due to new LHC data. \begin{figure} \vspace{-0.2cm} \centerline{\includegraphics[width=0.49\textwidth]{xg-global} \includegraphics[width=0.49\textwidth]{xg-reduced-v2}} \vspace{-0.2cm} \centerline{\includegraphics[width=0.49\textwidth]{xu-global} \includegraphics[width=0.49\textwidth]{xu-reduced-v2}} \vspace{-0.2cm} \centerline{\includegraphics[width=0.49\textwidth]{qq_global} \includegraphics[width=0.49\textwidth]{qq_reduced-v2}} \vspace{-0.2cm} \centerline{\includegraphics[width=0.49\textwidth]{gg_global} \includegraphics[width=0.49\textwidth]{gg_reduced-v2}} \vspace{-0.3cm} \caption{The comparison of different PDFs (top two plots) and parton luminosities (lower two plots), figures from \cite{Butterworth:2015oua}.} \vspace{-0.9cm} \label{Fig10} \end{figure} The comparison between the most recent versions of the different PDF sets is shown for the gluon and up quark in the upper plots of Fig.~\ref{Fig10}.
There is now excellent agreement between CT14, MMHT2014 and NNPDF3.0, much better than in the previous versions of these PDF sets, but there are still some significant differences in central values and uncertainties for the other PDF sets. The comparison of PDF luminosities is also shown, in the lower plots of Fig.~\ref{Fig10}. The $gg$ luminosity is now in almost perfect agreement for the three ``global'' sets, but some variation is seen in the quark (antiquark) luminosities. \section{Combination of PDF sets} It is not obvious how to combine different ``Hessian'' PDF sets. However, it is now known how to generate ``random'' PDF sets directly from the representation in terms of eigenvectors \cite{Watt:2012tq} \begin{equation} F(\mathcal{S}_k) = F(S_0) + \sum_{j}\left[\!F(S_j^\pm)- F(S_0)\!\right] |R_{jk}| \nonumber \end{equation} Hence, one can combine different PDF sets either at the level of the PDFs themselves or at the level of predictions. The latter is shown using the last round of global PDFs for the Higgs cross section in Fig.~\ref{Fig13}, and can be applied to the PDFs at a particular $x$ and $Q^2$ value in the same manner. \begin{figure} \vspace{-0.1cm} \centerline{\includegraphics[width=0.9\textwidth]{probability_ggh126_asmz118}} \vspace{-0.1cm} \caption{Combination of distributions for $\sigma_{gg \to H}$ (plot by G. Watt \cite{WattPDF4LHC}).} \vspace{-0.3cm} \label{Fig13} \end{figure} \begin{figure} \vspace{-0.2cm} \centerline{\includegraphics[width=0.5\textwidth]{xg_MCPDFcombV2_nnlo} \includegraphics[width=0.5\textwidth]{xu_MCPDFcombV2_nnlo}} \vspace{-0.2cm} \centerline{\includegraphics[width=0.5\textwidth]{xdbar_MCPDFcombV2_nnlo} \includegraphics[width=0.5\textwidth]{xs_MCPDFcombV2_nnlo}} \vspace{-0.4cm} \caption{The combination of 300 randomly distributed sets of each of the CT14, MMHT2014 and NNPDF3.0 PDF sets, figures from \cite{Butterworth:2015oua}.} \vspace{-0.1cm} \label{Fig14} \end{figure} The application to the combination of the CT14, MMHT2014 and NNPDF3.0 PDFs is shown in Fig.~\ref{Fig14}. It works well if the PDFs are fairly compatible, both in central value and uncertainty, giving the mean of the central values and a spread which combines the individual PDF uncertainties and the variation in the PDFs. Following this initial development, the {Meta-PDF} approach \cite{Gao:2013bia} subsequently showed how to refit the combination, in terms of a large number of Monte Carlo PDFs, to a functional form, and hence convert the combination to a Hessian set with a relatively small number of eigenvectors. Further developments showed how to compress the Monte Carlo set to a smaller number \cite{Carrazza:2015hva} and how to use the Monte Carlo sets in the combination as a basis for an extremely precise Hessian representation (MC-H) \cite{Carrazza:2015aoa}.
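The random-set construction in the displayed formula is straightforward to implement. The sketch below shows the idea for any PDF-dependent quantity evaluated on the central set and on the Hessian eigenvector members; the array layout and function names are our own placeholders, not an actual LHAPDF interface.

\begin{verbatim}
import numpy as np

# Random PDF sets from Hessian eigenvectors (sketch). F0 is the quantity
# on the central set; Fplus/Fminus hold it on the +/- eigenvector members.
rng = np.random.default_rng(1)

def random_replicas(F0, Fplus, Fminus, n_rep=100):
    n_eig = Fplus.shape[0]
    R = rng.standard_normal((n_eig, n_rep))     # the R_jk in the formula
    reps = []
    for k in range(n_rep):
        Fk = F0.copy()
        for j in range(n_eig):
            # the sign of R_jk selects the +/- member; |R_jk| scales
            # its deviation from the central value
            Fj = Fplus[j] if R[j, k] >= 0 else Fminus[j]
            Fk = Fk + (Fj - F0)*abs(R[j, k])
        reps.append(Fk)
    return np.array(reps)
\end{verbatim}

Replicas generated in this way from each group's eigenvector sets can simply be pooled, which is what makes the combination at the level of PDFs or of predictions possible.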
\section{The New PDF4LHC Prescription} \begin{figure} \vspace{-0.0cm} \centerline{\includegraphics[width=0.5\textwidth]{qq_mc900_vs_cmc100} \includegraphics[width=0.5\textwidth]{qq_mc900_vs_cmc100}} \centerline{\includegraphics[width=0.5\textwidth]{qq_mc900_vs_mch100_vs_meta30} \includegraphics[width=0.5\textwidth]{qq_mc900_vs_mch100_vs_meta30}} \vspace{-0.1cm} \caption{Comparison of PDF luminosities for Monte Carlo compression (top) and Hessian compression (bottom), figures from \cite{Butterworth:2015oua}.} \vspace{-0.2cm} \label{Fig15} \end{figure} The improved agreement of the global PDF sets, and the means of combining them in a more statistically robust fashion, allow for an update of the previous PDF4LHC prescription \cite{Botje:2011sn} for combining PDFs when a single prediction, representing a reasonable average with a quite conservative uncertainty, is required. The sets entering the combination must satisfy certain requirements, i.e. be compatible for combination, and at present CT14, MMHT2014 and NNPDF3.0 are included. It has been agreed that this should be for the common value of the coupling $\alpha_S(M_Z^2)=0.118$. The recommendation now allows the use of a single combined PDF set in either Monte Carlo or Hessian form \cite{Butterworth:2015oua}: Monte Carlo, where a set of PDF replicas is delivered, the mean being the central value and the standard deviation the uncertainty; Hessian, where a central set and eigenvectors representing orthogonal sources of uncertainty are delivered, and the uncertainty is obtained by summing each uncertainty source in quadrature. In each case a single combined set at both $\alpha_S(M_Z^2)=0.1165$ and $\alpha_S(M_Z^2)=0.1195$ is provided to give the $\alpha_S(M_Z^2)$ uncertainty (i.e. $\Delta \alpha_S(M_Z^2)=0.0015$), to be added in quadrature with the other uncertainties. Three different options are provided, along with suggestions for when they should be used: \noindent {\bf PDF4LHC15-mc:} A compressed {\bf Monte Carlo} set with $N_{\rm rep}=100$ \cite{Carrazza:2015hva}. It contains non-gaussian features, which are important for searches at high masses (high $x$). See Fig. \ref{Fig15} for the compressed set compared to the full 900 starting PDFs. \noindent {\bf PDF4LHC15-30:} A symmetric {\bf Hessian} set with $N_{\rm eig}=30$ ({Meta-PDF} approach \cite{Gao:2013bia}). This has good precision and is useful for many experimental needs and when using nuisance parameters. \noindent {\bf PDF4LHC15-100:} A symmetric {\bf Hessian} set with $N_{\rm eig}=100$ (MC-H) \cite{Carrazza:2015aoa}. This has optimal precision if running time is not a problem or extreme accuracy is needed. See Fig. \ref{Fig15} for both Hessian sets compared to the full 900 starting PDFs. \begin{figure}[] \vspace{-0.0cm} \centerline{\includegraphics[width=0.96\textwidth]{correlations.png}} \vspace{-0.5cm} \caption{Comparison of PDF correlations from various means of combination, figures from \cite{Butterworth:2015oua}.} \vspace{-0.2cm} \label{Fig17} \end{figure} PDF correlations are maintained by the compression in all cases. An example is shown in Fig.~\ref{Fig17}. The results for cross sections using all the compressed sets for LHC quantities work, at worst, quite well, even in the more extreme regions of kinematics; see Fig.~\ref{Fig18}.
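For concreteness, the uncertainty recipes attached to the three deliverables amount to only a few lines each. The sketch below assumes arrays of some cross section evaluated with every PDF member (obtained, e.g., via LHAPDF); it is an illustration of the prescription, not an official implementation.

\begin{verbatim}
import numpy as np

def mc_uncertainty(members):
    """PDF4LHC15-mc: the mean of the replicas is the central value,
    their standard deviation the PDF uncertainty."""
    m = np.asarray(members)
    return m.mean(), m.std()

def hessian_uncertainty(central, eig_members):
    """PDF4LHC15-30 / PDF4LHC15-100 (symmetric Hessian): add the
    orthogonal sources in quadrature."""
    d = np.asarray(eig_members) - central
    return central, np.sqrt(np.sum(d**2))

def alphas_uncertainty(sig_low, sig_high):
    """From the alpha_s(M_Z^2)=0.1165 and 0.1195 sets (Delta
    alpha_s = 0.0015): half the difference, to be added in
    quadrature with the PDF uncertainty."""
    return 0.5*(sig_high - sig_low)

# total = np.hypot(pdf_unc, alphas_uncertainty(s_lo, s_hi))
\end{verbatim}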
\begin{figure}[] \vspace{-0.2cm} \centerline{\includegraphics[width=0.48\textwidth]{ciplot_atlas-incljets-eta7NLO} \includegraphics[width=0.48\textwidth]{ciplot_ttbar_ttbarinvmass_13tevNLO}} \vspace{-0.5cm} \caption{Examples of differential cross sections using each means of combination, figures from \cite{Butterworth:2015oua}.} \vspace{-0.2cm} \label{Fig18} \end{figure} Finally, it is important to note that the PDF4LHC prescription is meant for the assessment of the PDF uncertainty in searches, discovery, acceptance corrections $\ldots$ (e.g. Higgs, SUSY). When comparing theory predictions to experiment in well-determined standard model processes, e.g. jets and $W,Z$ distributions, it is recommended to use the individual PDF sets. Other than for the very first measurements at new energies, processes such as top pair cross sections, differential distributions, etc., will tend to fall into the latter category, especially when real precision is reached. \vspace{-0.3cm} \section*{Acknowledgements} \vspace{-0.1cm} I would like to thank the members of the MMHT collaboration and of the PDF4LHC working group for discussions. This work is supported partly by the London Centre for Terauniverse Studies (LCTS), using funding from the European Research Council via the Advanced Investigator Grant 267352. I thank the STFC for support via grant awards ST/J000515/1 and ST/L000377/1. \vspace{-0.3cm}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section*{Introduction} Surfaces isogenous to a product of curves were introduced by Catanese in \cite{Ca00}. Starting from that paper they have been studied extensively, particularly in recent years. They provide an easy way to construct surfaces of general type with fixed geometrical invariants. Moreover, surfaces isogenous to a product are in correspondence with combinatorial structures that a finite group can admit. Via this correspondence several authors have classified these surfaces, as in \cite{BCG06}, \cite{CP09}, \cite{Pe10}, \cite{Gl11}. In this paper we study the cohomology of surfaces isogenous to a product using algebraic methods, in particular group representation theory. The guiding idea is that the cohomology of a surface $S\cong \frac{C\times D}{G}$ is completely determined by the action of the group $G$. Although our construction is quite general, we apply our results to a specific class in order to prove the following: \begin{reptheorem}{main+} Let $S$ be a regular surface isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$. Then there exist two elliptic curves $E_C$ and $E_D$ such that $H^2(S,\mathbb{Q})\cong H^2(E_C\times E_D, \mathbb{Q})$ as rational Hodge structures. \end{reptheorem} This paper is organized as follows: in the first section we recall all the required definitions and results; in the second one we study the cohomology of surfaces isogenous to a product of unmixed type, and in particular we focus on the case of regular surfaces with $\chi(\mathcal{O}_S)=2$. In the third section we study in detail some special surfaces, and in the fourth one we present our main result, together with an important observation about the Picard number of the surfaces we studied. \subsection*{Notation and conventions} In this paper by a curve or a surface we mean a complex, smooth projective manifold of complex dimension $1$ or $2$ respectively. For a given surface $S$ we denote by $\chi(\mathcal{O}_S)$ the holomorphic Euler characteristic, by $e(S)$ the topological Euler characteristic and by $\rho(S)$ the Picard number of $S$. The invariant $q(S)=h^{1,0}(S)$ is called the irregularity: a regular surface $S$ is a surface with $q(S)=0$. We also use standard notation in group theory: $\mathbb{Z}_n=\mathbb{Z}/n\mathbb{Z}$ is the cyclic group of order $n$; $\mathcal{S}_n,\,\mathcal{A}_n$ and $\mathcal{D}_n$ are respectively the symmetric, the alternating and the dihedral group on $n$ elements. \subsection*{Acknowledgements} The author would like to thank his advisor Bert van Geemen for introducing him to the subject. \section{Preliminaries and basic results} In this first section we recall all the definitions and results we need in this paper. In particular, in Section \ref{GAD} we study in detail the group algebra decomposition. \subsection{Surfaces isogenous to a product} \begin{definition} A smooth surface $S$ is said to be isogenous to a product (of curves) if it is isomorphic to a quotient $\frac{C\times D}{G}$ where $C$ and $D$ are curves of genus at least one and $G$ is a finite group acting freely on $C\times D$.\\ If the genus of both curves is greater than or equal to two, $S$ is said to be isogenous to a higher product. \end{definition} \noindent Let $S\cong \frac{C\times D}{G}$ be a surface isogenous to a product. The group $G$ is identified with a subgroup of $Aut(C\times D)$ via the group action. We set \begin{equation*} G^0:=G\cap\left(Aut(C)\times Aut(D)\right).
\end{equation*} The group $Aut(C)\times Aut(D)$ is a normal subgroup of $Aut(C\times D)$ of index one or two, thus either $G=G^0$ or $[G:G^0]=2$. In particular an element of the subgroup $G^0$ acts on each curve and diagonally on the product, while an element $g\in G$ not in $G^0$ acts on the product interchanging the factors. \begin{definition} Let $S$ be a surface isogenous to a product. Then $\frac{C\times D}{G}$ is a minimal realization of $S$ if $S\cong \frac{C\times D}{G}$ and $G^0$ acts faithfully on both curves. \end{definition} \begin{proposition}[\cite{Ca00}, Proposition 3.13] Let $S$ be a surface isogenous to a higher product. Then a minimal realization exists and it is unique. \end{proposition} From now on, whenever we refer to a surface $S$ isogenous to a higher product we will always assume that it is given by its minimal realization. \begin{definition} Let $S\cong \frac{C\times D}{G}$ be a surface isogenous to a product. $S$ is said to be of unmixed type if $G=G^0$, and of mixed type otherwise. \end{definition} We recall some well known results about surfaces isogenous to a product, in particular about their invariants. \begin{proposition}[\cite{Ca00}] Let $S=\frac{C\times D}{G}$ be a surface isogenous to a higher product. Then $S$ is a minimal surface of general type. \end{proposition} \begin{proposition}[\cite{Ca00}, Theorem 3.4] \label{InvariantSurface} Let $S\cong\frac{C\times D}{G}$ be a surface isogenous to a product. Then the following equalities hold: \begin{itemize} \item $\chi(\mathcal{O}_S)=\frac{(g(C)-1)(g(D)-1)}{|G|}$; \item $e(S)=4\chi(\mathcal{O}_S)=\frac{4(g(C)-1)(g(D)-1)}{|G|}$; \item $K_S^2=8\chi(\mathcal{O}_S)=\frac{8(g(C)-1)(g(D)-1)}{|G|}$. \end{itemize} \end{proposition} \begin{proposition}\label{q} Let $S\cong\frac{C\times D}{G}$ be a surface isogenous to a product of unmixed type. Then \begin{equation*} q(S)=g\left(C/G\right)+g\left(D/G\right). \end{equation*} \end{proposition} In this paper we focus our attention on regular surfaces isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$: \begin{proposition}\label{diamond} Let $S$ be a regular surface isogenous to a higher product with $\chi(\mathcal{O}_S)=2$. Then the Hodge diamond is fixed: \begin{equation*} \begin{array}{ccccc} &&1&&\\ &0&&0&\\ 1&&4&&1\\ &0&&0&\\ &&1&& \end{array} \end{equation*} \end{proposition} \begin{proof} By hypothesis we have $h^{1,0}(S)=0$ and $h^{2,0}(S)=\chi(\mathcal{O}_S)-1=1$: we just have to compute $h^{1,1}(S)$. By Proposition \ref{InvariantSurface} $e(S)=4\chi(\mathcal{O}_S)=8$ and then \begin{equation*} h^{1,1}(S)=e(S)-2+4q(S)-2p_g(S)=4 \end{equation*} \end{proof} Regular surfaces isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$ have been studied and classified in \cite{Gl11}: see Section \ref{cohom} for the details. \subsection{Spherical systems of generators} We introduce here the notion of spherical systems of generators and relate them to ramified coverings of the sphere. We use the same notation as \cite{BCG06}. \begin{definition} Let $G$ be a group and $r\in\mathbb{N}$ with $r\ge 2$. An $r$-tuple $T=[g_1,...,g_r]$ of elements in $G$ is called a spherical system of generators of $G$ if $g_1,...,g_r$ is a system of generators of $G$ and $g_1\cdot...\cdot g_r=Id_G$.\\ We call $\ell(T):=r$ the length of $T$. \end{definition} \begin{definition} Let $A=[m_1,\,...,\,m_r]\in\mathbb{N}^r$ be an $r$-tuple of natural numbers with $2\le m_1\le ...\le m_r$.
A spherical system of generators $T=[g_1,...,g_r]$ is said to be of type $A=[m_1,...,m_r]$ if there is a permutation $\tau\in\mathcal{S}_r$ such that $ord(g_i)=m_{\tau(i)}$, for $i=1,\,...,\,r$. \end{definition} \begin{proposition}\label{RETspherical} Let $G$ be a finite group and $B=\{b_1,\,...,\,b_r\}\subset\mathbb{P}^1$. Then there is a correspondence between: \begin{itemize} \item spherical systems of generators $T$ of $G$ with length $\ell(T)=r$; \item Galois coverings $f:C\to \mathbb{P}^1$ with branch points $B$. \end{itemize} \end{proposition} \begin{proof} It follows from the Riemann Existence Theorem, as explained in \cite[Section III.3 and III.4]{Mi95}. \end{proof} \begin{remark} The curve $C$ is completely determined by the branch points $B$ and by the spherical system of generators $T$. In particular the genus can be computed using the Riemann-Hurwitz formula: \begin{equation*} g(C)=1-d+\sum_{i=1}^r\frac{d}{2m_i}(m_i-1) \end{equation*} where $d=\#G$ is the degree of the covering and $A=[m_1,\,...,\,m_r]$ is the type of $T$. \end{remark} \begin{remark} The correspondence of Proposition \ref{RETspherical} is not one-to-one: indeed distinct spherical systems of generators could determine the same covering.\\ For example let $T_1=[g_1,\,...,\,g_r]$ be a spherical system of generators of $G$ of type $A$ and let $h\in G$. Consider $T_2=[g_1^h,\,...,\,g_r^h]$ where $g^h=h^{-1}gh$: $T_2$ is a spherical system of generators of type $A$ and determines an isomorphic covering. In particular $T_2$ determines exactly the same covering, not only an isomorphic one, and it corresponds to a different choice of the monodromy representation. \end{remark} Let $S=\frac{C\times D}{G}$ be a surface isogenous to a higher product of unmixed type with $q(S)=0$. Then by Proposition \ref{q} we get two ramified coverings of the sphere $f:C\to\mathbb{P}^1$ and $h:D\to\mathbb{P}^1$. Notice that, from a topological point of view, the surface $S$ is determined by $f$ and $h$ under the further condition that the group $G$ acts freely on the product $C\times D$. \begin{definition} Let $T=[g_1,...,g_r]$ be a spherical system of generators of $G$. We denote by $\Sigma(T)$ the union of all conjugates of the cyclic subgroups generated by the elements $g_1,...,g_r$: \begin{equation*} \Sigma(T):=\Sigma([g_1,...,g_r])=\bigcup_{g\in G}\bigcup_{j=0}^\infty\bigcup_{i=1}^r\{g\cdot g_i^j g^{-1}\}. \end{equation*} A pair of spherical systems of generators $(T_1,T_2)$ of $G$ is called disjoint if \begin{equation*} \Sigma(T_1)\cap\Sigma(T_2)=\{Id_G\}. \end{equation*} \end{definition} \begin{proposition}\label{disjoint-free} Let $T_1$ and $T_2$ be two spherical systems of generators of $G$ and let $\pi: C\times D\to \frac{C\times D}{G}$ be the induced covering, where $G$ acts on the product via the diagonal action. Then the following conditions are equivalent: \begin{itemize} \item $\pi$ is an étale covering, i.e.\ the action of $G$ is free; \item $(T_1,T_2)$ is a disjoint pair of spherical systems of generators of $G$. \end{itemize} \end{proposition} \begin{proof} We observe that an element $g\in G$ fixes a point in $C$ if and only if $g\in\Sigma(T_1)$, and it fixes a point in $D$ if and only if $g\in\Sigma(T_2)$. Then $g$ fixes a point in $C\times D$ if and only if $g\in \Sigma(T_1)\cap\Sigma(T_2)$.
\end{proof} \begin{definition} An unmixed ramification structure for $G$ is a disjoint pair of spherical systems of generators $(T_1,T_2)$ of $G$.\\ Let $A_1=[m_{(1,1)},...,m_{(1,r_1)}]$ and $A_2=[m_{(2,1)},...,m_{(2,r_2)}]$ be respectively an $r_1$-tuple and an $r_2$-tuple of natural numbers with $2\le m_{(1,1)}\le...\le m_{(1,r_1)}$ and $2\le m_{(2,1)}\le...\le m_{(2,r_2)}$. We say that the unmixed ramification structure $(T_1,T_2)$ is of type $(A_1,A_2)$ if $T_1$ is of type $A_1$ and $T_2$ is of type $A_2$. \end{definition} Putting together Proposition \ref{RETspherical} and Proposition \ref{disjoint-free} we get a correspondence between unmixed ramification structures and surfaces isogenous to a product of unmixed type. As already observed, this correspondence is not one-to-one, but it works well in one direction: given an unmixed ramification structure, a surface isogenous to a product of unmixed type is uniquely determined. \subsection{Irreducible rational representations} We recall some results about irreducible complex representations. A full discussion with proofs can be found in \cite{Se77}. Let $G$ be a finite group of order $N$. We denote by $\rho_i: G\to GL(V_i)$, $i=1,\,...,\,m$ its irreducible complex representations, where $m$ is the number of conjugacy classes in $G$. We usually denote by $\rho_1$ the trivial representation. Given a complex representation $\rho: G\to GL(V)$ we denote by $n_\rho(\rho_i)$ the multiplicity of $\rho_i$ in $\rho$. Then we get: \begin{equation*} \rho=\bigoplus_{i=1}^m n_\rho(\rho_i)\rho_i. \end{equation*} Let $\chi_i: G\to \mathbb{C}$ be the character associated to the irreducible complex representation $\rho_i$: the character field $K_i$ is the field $\mathbb{Q}(\chi_i(g))_{g\in G}$. As $\rho_i(g)\in GL(V_i)$ has finite order, its eigenvalues are roots of unity, hence $K_i$ is a subfield of $\mathbb{Q}(\xi_N)$, where $\xi_N$ is a primitive $N$-th root of unity. \begin{proposition}\label{GaloisAction1} Let $G$ be a finite group of order $N$ and let $\rho_i:G\to GL(V_i)$ be an irreducible complex representation of $G$ with associated character field $K_i$. For every $\sigma\in Gal(K_i/\mathbb{Q})$ there exists a unique irreducible complex representation $\rho_j:G\to GL(V_j)$ with character $\chi_j=\sigma(\chi_i)$. Thus for $\sigma, \rho_i$ and $\rho_j$ as above we set $\sigma(\rho_i)=\rho_j$.\\ In the same way we can define an action of the whole group $Gal_N:=Gal(\mathbb{Q}(\xi_N)/\mathbb{Q})$ on the irreducible complex representations. \end{proposition} \begin{definition} Let $\rho_i:G\to GL(V_i)$ be an irreducible complex representation of $G$ with character field $K_i$. The dual representation of $\rho_i$ is the irreducible complex representation $\overline{\rho_i}:=\tilde\sigma(\rho_i)$, where $\tilde{\sigma}$ is the complex conjugation.\\ We say that $\rho_i$ is self-dual if $\rho_i=\overline{\rho_i}$ or, equivalently, if $K_i\subseteq\mathbb{R}$. \end{definition} The action of $Gal_N$ splits the set of the irreducible complex representations into distinct orbits, such that if two irreducible complex representations $\rho_i$ and $\rho_j$ are in the same orbit then $K_i=K_j$. \begin{proposition}\label{RationalRepresentation} Let $G$ be a finite group of order $N$ and let $\tau:G\to GL(W)$ be an irreducible rational representation.
Then there is a unique $Gal_N$-orbit of irreducible complex representations \begin{equation*} \{\sigma(\rho_i)\}_{\sigma\in Gal(K_i/\mathbb{Q})},\quad \rho_i:G\to GL(V_i) \end{equation*} and a positive integer $s$, called the Schur index of $\rho_i$, such that \begin{equation}\label{Qrap} \tau_\mathbb{C}:=\tau\otimes_\mathbb{Q}\mathbb{C}=\bigoplus_{\sigma\in Gal(K_i/\mathbb{Q})}s\cdot\sigma(\rho_i). \end{equation} Conversely, each irreducible complex representation $\rho_i$ determines an irreducible rational representation $\tau:G\to GL(W)$ such that the equality \eqref{Qrap} holds. \end{proposition} \begin{corollary} Let $\rho:G\to GL(V)$ be a self-dual complex representation such that $K_\rho=\mathbb{Q}$. Then there exist a rational representation $\tau:G\to GL(W)$ and a positive integer $s$ such that $\tau\otimes_\mathbb{Q}\mathbb{C}=s\cdot\rho$. \end{corollary} Any rational representation $\tau:G\to GL(W)$ can be decomposed as a sum of irreducible rational representations, exactly as happens for the complex ones. We will write \begin{equation*} \tau=\bigoplus^t_{j=1}n_\tau(\tau_j)\tau_j, \end{equation*} where $\tau_j:G\to GL(W_j)$, $j=1,...,t$ are the irreducible rational representations of $G$ and $n_\tau(\tau_j)$ is the multiplicity of $\tau_j$ in $\tau$. As in the complex case, we denote by $\tau_1$ the trivial representation. \begin{example}\label{quaternion} Consider the quaternion group $Q_8$: \begin{equation*} Q_8=\left\langle -1,\,i,\,j,\,k|\,(-1)^2=1,\,i^2=j^2=k^2=ijk=-1\right\rangle. \end{equation*} This is the smallest group with an irreducible representation whose Schur index is different from one. The character table of $Q_8$ is: \begin{equation*} \begin{array}{c|ccccc} &1&-1&\pm i&\pm j&\pm k\\ \hline \chi_1&1&1&1&1&1\\ \chi_2&1&1&1&-1&-1\\ \chi_3&1&1&-1&-1&1\\ \chi_4&1&1&-1&1&-1\\ \chi_5&2&-2&0&0&0 \end{array} \end{equation*} In this case every irreducible complex representation $\rho_i$, $i=1,...,5,$ has character field $K_i=\mathbb{Q}$ and hence defines a different Galois orbit. So $Q_8$ has $5$ irreducible rational representations $\tau_j$, $j=1,...,5$. The Schur index of the first four representations has to be one, because the Schur index divides the dimension of the representation. Conversely, $\rho_5$ has Schur index two. We can construct the representation $\tau_5$ with $\tau_5\otimes\mathbb{C}=2\rho_5$ by setting: \begin{equation*}\label{repqua} \tau_5(i)=\begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1\\ 0 & 0 & 1 & 0 \end{pmatrix},\qquad \tau_5(j)=\begin{pmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \end{pmatrix}. \end{equation*} \end{example} \subsection{Group algebra decomposition}\label{GAD} We describe here the so-called group algebra decomposition. The main idea is the following: let $\tau:G\to GL(W)$ be a rational representation and let $W$ be a rational Hodge structure such that $\tau(G)\subseteq End_{Hod}(W)$. Then the action of the group algebra $\mathbb{Q}[G]$ induces a decomposition of $W$ into Hodge subrepresentations. This result is well known in the context of complex tori (see \cite[Section 13.4]{BL04}): following the same arguments we prove it for Hodge structures. Let $G$ be a finite group with irreducible complex representations $\rho_i:G\to GL(V_i)$, $i=1,\,...,\,m$, as in the previous section. Consider the following elements in $\mathbb{C}[G]$: \begin{equation*} p_i=\frac{dim(V_i)}{\#G}\sum_{g\in G}\chi_i(g)g, \end{equation*} where $\chi_i$ is the character of $\rho_i$.
These elements $p_1,\,...,\,p_m$ are central idempotents in the group algebra $\mathbb{C}[G]$, i.e.\ $p_i^2=p_i$ and $p_ig=gp_i$ for all $g\in G$. Moreover we get: \begin{equation}\label{eqc} \tilde\rho_i(p_j)=\begin{cases} Id_{V_i}&\mbox{if }i=j,\\ 0&\mbox{if } i\ne j. \end{cases} \end{equation} Let us consider the group algebra $\mathbb{Q}(\xi_N)[G]$, where $N$ is the order of $G$, and notice that $p_i\in\mathbb{Q}(\xi_N)[G]$ for all $i=1,...,m$. There is a natural action of the Galois group $Gal_N$ on $\mathbb{Q}(\xi_N)[G]$ defined by \begin{equation*} \sigma\left(\sum a_jg_j\right)=\sum\sigma(a_j)g_j \end{equation*} where $\sigma\in Gal_N$.\\ This action agrees with the action defined in Proposition \ref{GaloisAction1}: $\sigma(\rho_i)=\rho_j$ if and only if $\sigma(p_i)=p_j$. Let $\tau_j:G\to GL(W_j)$ be an irreducible rational representation. By Proposition \ref{RationalRepresentation} there exists an irreducible complex representation $\rho_i:G\to GL(V_i)$ such that \begin{equation*} \tau_j=\bigoplus_{\sigma\in Gal(K_i/\mathbb{Q})}s\cdot \sigma(\rho_i). \end{equation*} We define \begin{equation*} q_j=\sum_{\sigma\in Gal(K_i/\mathbb{Q})}\sigma(p_i). \end{equation*} \begin{proposition} Let $G$ be a finite group and let $\tau_j:G\to GL(W_j)$, $j=1,\,...,\,t$ be its irreducible rational representations. Then $q_j\in\mathbb{Q}[G]$ for all $j=1,\,...,\,t$ and \begin{equation}\label{eqq} \tilde\tau_i(q_j)=\begin{cases} Id_{W_i}&\mbox{if }i=j,\\ 0&\mbox{if } i\ne j. \end{cases} \end{equation} \end{proposition} \begin{proof} By definition $q_j\in\mathbb{Q}(\xi_N)[G]$. For all $g\in G$ the coefficient of $g$ in $q_j$ is given by \begin{equation*} c_g:=\frac{dim(V_i)}{\#G}\sum_{\sigma\in Gal(K_i/\mathbb{Q})}\sigma(\chi_i(g)) \end{equation*} By hypothesis $\chi_i(g)\in K_i$, so the sum above is a trace from $K_i$ to $\mathbb{Q}$ and hence $c_g\in\mathbb{Q}$ for all $g\in G$.\\ In order to prove equation \eqref{eqq} it is enough to complexify it and compare with equation \eqref{eqc}. \end{proof} \begin{corollary} Let $\tau:G\to GL(W)$ be a rational representation. We define $A_j=Im\{\tilde{\tau}(q_j):W\to W\}$. Then \begin{itemize} \item $A_j$ is a rational subrepresentation and $\tau|_{A_j}=n_\tau(\tau_j)\tau_j$; \item $W=\oplus^t_{j=1}A_j$. \end{itemize} \end{corollary} \begin{definition} Let $\tau:G\to GL(W)$ be a rational representation. We call $A_j$ the isotypical component related to the representation $\tau_j$ and we call $W=\oplus^t_{j=1}A_j$ the isotypical decomposition of $\tau$. \end{definition} Now we need a classical result of representation theory about the group algebra $\mathbb{C}[G]$: \begin{proposition}\label{isoC} Let $G$ be a finite group and $\rho_i:G\to GL(V_i)$, $i=1,...,m$, its irreducible complex representations. We set $\rho=\oplus_{i=1}^m\rho_i:G\to GL(V)$. Then $\tilde\rho: \mathbb{C}[G]\to\oplus_{i=1}^mEnd(V_i)$ is an algebra isomorphism. \end{proposition} \begin{remark} A similar result cannot hold in general for rational representations, since the algebras $\mathbb{Q}[G]$ and $\oplus^t_{j=1}End(W_j)$ need not have the same dimension. \end{remark} In order to avoid indices we work with a single irreducible rational representation $\tau:G\to GL(W)$. Consider $\mathbb{D}:=End_G(W)$, the algebra of $G$-equivariant maps on $W$: \begin{equation*} \mathbb{D}=End_G(W)=\{f\in End(W): \tau(g)f=f\tau(g)\,\forall g\in G\}.
\end{equation*} The kernel of any nonzero element $f\in \mathbb{D}$ is a subrepresentation of $W$; hence, as $W$ is irreducible, every nonzero $f\in \mathbb{D}$ must be an isomorphism of $W$, and then $\mathbb{D}$ is a skew-field (or division algebra). We consider $W$ as a left vector space over $\mathbb{D}$; then, choosing a basis, we get: \begin{equation*} W\cong\mathbb{D}^k, \end{equation*} where $k=\dim_\mathbb{D}(W)$.\\ Suppose $\tau_\mathbb{C}=\oplus_{\sigma\in Gal(K_i/\mathbb{Q})}s\cdot\sigma(\rho_i)$, where $\rho_i:G\to GL(V_i)$ is an irreducible complex representation, and so \begin{equation*} End_G(W_\mathbb{C})=\oplus_{\sigma\in Gal(K_i/\mathbb{Q})}End_G(V_i^{\oplus s}). \end{equation*} Then: \begin{equation*} \begin{split} &\dim_\mathbb{Q}W=\dim_\mathbb{C}W_\mathbb{C}=s\cdot\dim_\mathbb{C}(V_i)\cdot [K_i:\mathbb{Q}],\\ &\dim_\mathbb{Q}\mathbb{D}=\dim_\mathbb{C}\mathbb{D}_\mathbb{C}=[K_i:\mathbb{Q}]\cdot \dim(End_G(V_i^{\oplus s}))=[K_i:\mathbb{Q}]\cdot s^2,\\ &\dim_\mathbb{D}W=k=\frac{[K_i:\mathbb{Q}]\dim_\mathbb{C}(V_i)\cdot s}{[K_i:\mathbb{Q}]\cdot s^2}=\frac{dim_\mathbb{C}(V_i)}{s}. \end{split} \end{equation*} Recall that the Schur index $s$ is always a divisor of the dimension of the representation, and so $k\in\mathbb{N}$. By definition of $\mathbb{D}$, $\tau(g)$ commutes with $\mathbb{D}$ for all $g\in G$, and so the image of $\tilde{\tau}$ lies in $End_\mathbb{D}(W)$. Moreover we observe that \begin{equation}\label{dimensionQ} \begin{split} \dim_\mathbb{Q}(End_\mathbb{D}(W))=&\dim_\mathbb{Q}\mathbb{D}\cdot\dim_\mathbb{D}End_\mathbb{D}(W)=\\ =&(\dim_\mathbb{C}(V_i))^2\cdot [K_i:\mathbb{Q}]. \end{split} \end{equation} \begin{proposition} Let $G$ be a finite group and $\tau_j:G\to GL(W_j)$, $j=1,...,t$, its irreducible rational representations. We set $\mathbb{D}_j=End_G(W_j)$ and $\tau=\oplus_{j=1}^t\tau_j:G\to GL(W)$. Then $\tilde\tau: \mathbb{Q}[G]\to\oplus_{j=1}^tEnd_{\mathbb{D}_j}(W_j)$ is an algebra isomorphism. \end{proposition} \begin{proof} From Proposition \ref{isoC} we get the injectivity. Then it is enough to prove that the two algebras have the same dimension. Of course $\dim\mathbb{Q}[G]=\#G$. Now from equation \eqref{dimensionQ} we get \begin{equation*} \dim_\mathbb{Q}\left(\oplus^t_{j=1}End_{\mathbb{D}_j}(W_j)\right)=\sum_{i=1}^m(\dim_\mathbb{C}(V_i))^2=\#G. \end{equation*} \end{proof} By choosing a $\mathbb{D}$-basis of $W$ we identify $End_\mathbb{D}(W)$ with the algebra $Mat(k,\mathbb{D})$ of $k\times k$ matrices with coefficients in $\mathbb{D}$. In particular in $Mat(k,\mathbb{D})$ we have the matrices $E_i$ with $1$ in position $(i,i)$ and zero elsewhere. Then by the proposition above we are able to find $k$ idempotents $w_1,\,...,\,w_k$ in $\mathbb{Q}[G]$ such that $\tilde\tau(w_i)=E_i$. \begin{remark} These elements $w_1,\,...,\,w_k$ are not unique, since they depend on the choice of a $\mathbb{D}$-basis. \end{remark} This construction holds for all the irreducible rational representations. Given an irreducible rational representation $\tau_j:G\to GL(W_j)$ we denote by $w_{j,1},\,...,\,w_{j,k_j}$ the idempotent elements of $\mathbb{Q}[G]$ constructed as above. \begin{proposition}\label{isogenousdecomposition} Let $\tau:G\to GL(W)$ be a rational representation and let $A_1,\,...,\,A_t$ be the isotypical components related to the irreducible rational representations of $G$. For all $j=1,\,...,\,t$ we define $B_j=Im\{\tilde{\tau}(w_{j,1}):W\to W\}$. Then $A_j\cong B_j^{\oplus k_j}$ for all $j=1,\,...,\,t$, where $k_j=\dim_{\mathbb{D}_j}W_j$.
\end{proposition} \begin{proof} By construction $w_{j,1}+\,...+\,w_{j,k_j}=q_j$ for all $j=1,\,...,\,t$. Since $\tilde{\tau}(q_j)$ acts as the identity on $A_j$ we get a decomposition: \begin{equation*} A_j=Im\{\tilde{\tau}(w_{j,1})\}\oplus\,...\oplus\,Im\{\tilde{\tau}(w_{j,k_j})\}. \end{equation*} Fix a $\mathbb{D}_j$-basis of $W_j$ and consider in $End_{\mathbb{D}_j}(W_j)\cong Mat(k_j,\mathbb{D}_j)$ the matrices $M_i$ with $1$ in position $(i,1)$ and zero elsewhere. These matrices provide isomorphisms between $B_j=Im\{\tilde{\tau}(w_{j,1})\}$ and $Im\{\tilde{\tau}(w_{j,i})\}$ for all $i=2,\,...,\,k_j$. \end{proof} \begin{definition} Let $\tau:G\to GL(W)$ be a rational representation. We call $B_j$ the isogenous component related to the representation $\tau_j$ and we call $W\cong\oplus^t_{j=1}B_j^{\oplus k_j}$ the group algebra decomposition of $\tau$. \end{definition} \begin{remark} Unlike the isotypical components $A_j$, the isogenous components $B_j$ are not $G$-subrepresentations. Indeed, as observed in the proof of Proposition \ref{isogenousdecomposition}, the group algebra $\mathbb{Q}[G]$ interchanges the isogenous components. \end{remark} Now that we have defined the group algebra decomposition, we relate it to Hodge structures. First of all we recall the following: \begin{lemma}[\cite{Vo02}, Section 7.3.1]\label{imm} Let $W$ be a rational Hodge structure and let $\phi\in End_{Hod}(W)$. Then $Im(\phi)$ is a rational Hodge substructure. \end{lemma} \begin{proposition} Let $(W,h)$ be a rational Hodge structure, $G$ a finite group and let $\tau:G\to GL(W)$ be a rational representation such that $\tau(G)\subset End_{Hod}(W)$. Then the isotypical and isogenous components of $\tau$ are Hodge substructures. \end{proposition} \begin{proof} We have defined the isotypical and isogenous components of a given representation $\tau:G\to GL(W)$ as the images of suitable elements in $\mathbb{Q}[G]$. Notice that $\tau(G)\subseteq End_{Hod}(W)$ implies $\tilde{\tau}(\mathbb{Q}[G])\subseteq End_{Hod}(W)$. Now we apply Lemma \ref{imm}. \end{proof} We conclude this section with the following lemma: \begin{lemma}\label{pari} Let $G$ be a finite group and $\rho_i:G\to GL(V_i)$ its irreducible complex representations. Let $(W,h)$ be a rational Hodge structure of weight $1$ and $\tau:G\to GL(W)$ a rational representation such that $\tau(G)\subset End_{Hod}(W)$. Consider the induced complex representations $\tau_\mathbb{C}:G\to GL(W_\mathbb{C})$ and $\rho=\tau |_{W^{1,0}}:G\to GL(W^{1,0})$. Then: \begin{itemize} \item $n_{\tau_\mathbb{C}}(\rho_i)=n_{\rho}(\rho_i)+n_\rho(\overline{\rho_i})$; \item if $\rho_i$ is self-dual, $n_{\tau_\mathbb{C}}(\rho_i)$ is even. \end{itemize} \end{lemma} \begin{proof} The subspaces $W^{1,0}$ and $W^{0,1}$ are subrepresentations of $W_\mathbb{C}$. It follows that if $\tau_\mathbb{C}|_{W^{1,0}}=\rho$ then $\tau_\mathbb{C}|_{W^{0,1}}=\overline{\rho}$, i.e.\ $\tau_\mathbb{C}=\rho\oplus\overline{\rho}.$ Hence the following equalities hold: \begin{equation*} \begin{split} &n_{\tau_\mathbb{C}}(\rho_i)=n_\rho(\rho_i)+n_{\overline{\rho}}(\rho_i),\\ &n_{\overline{\rho}}(\rho_i)=n_\rho(\overline{\rho_i}). \end{split} \end{equation*} In particular if $\rho_i$ is self-dual we get $n_{\tau_\mathbb{C}}(\rho_i)=2n_\rho(\rho_i)$. \end{proof} \subsection{Broughton formula} Let $C$ be a smooth curve of genus $g(C)$ and let $G$ be a finite group of automorphisms of $C$. We will denote by $\varphi$ the natural action induced by $G$ on the first cohomology group $H^1(C,\mathbb{C})$.
Let us assume $C/G\cong\mathbb{P}^1$ and let $T=[g_1,\,...,\,g_r]$ be the spherical system of generators associated to the ramified covering $f:C\to C/G\cong\mathbb{P}^1$. \begin{proposition}[\cite{Br87}]\label{Broughton} Let $\varphi=\oplus_{i=1}^mn_{\varphi}(\rho_i)\rho_i$ be the decomposition of $\varphi$ into irreducible complex representations. Then, with the notation as above, we have: \begin{itemize} \item $n_\varphi(\rho_1)=\langle\varphi,\rho_1\rangle =0$, \item $n_\varphi(\rho_i)=\langle\varphi,\rho_i\rangle =\chi_i(1)(r-2)-\sum_{j=1}^rl_{g_j}(\rho_i)$, \end{itemize} where $\chi_i$ are the characters of the irreducible complex representations $\rho_i:G\to GL(V_i)$ of $G$, $\rho_1$ is the trivial representation, $r=\ell(T)$ is the length of $T$ and $l_{g_j}(\rho_i)$ is the multiplicity of the trivial character in the restriction of $\rho_i$ to $\langle g_j\rangle$. \end{proposition} \begin{remark} The same computations can be done using the Lefschetz fixed point formula (see \cite[Chapter 3.4]{GH94}). However, since we are interested only in the first cohomology groups of curves, Broughton's formula makes the calculations faster and easier. \end{remark} The group $G$ induces an action not only on the complex (or real) cohomology, but also on the rational one. These actions are connected, since $H^1(C,\mathbb{C})=H^1(C,\mathbb{Q})\otimes\mathbb{C}$. We will denote both actions by $\varphi$.\\ Applying together Proposition \ref{RationalRepresentation} and Proposition \ref{Broughton} we can compute the decomposition of $\varphi$ into irreducible rational representations. Notice that here we are exactly in the situation described in Lemma \ref{pari}: the finite group $G$ acts on $H^1(C,\mathbb{Q})$, which is a rational Hodge structure of weight $1$. Moreover, since $G$ acts holomorphically on $C$, the action on the cohomology preserves the Hodge decomposition and then \begin{equation*} \varphi(G)\subset End_{Hod}(H^1(C,\mathbb{Q})). \end{equation*} \begin{example}\label{z32} Let $G$ be the abelian group $(\mathbb{Z}_3)^2:=(\mathbb{Z}/3\mathbb{Z})^2$. Consider the unmixed ramification structure $(T_1,T_2)$ for $G$: \begin{equation*} \begin{split} T_1=&[(1,1),(2,1),(1,1),(1,2),(1,1)],\\ T_2=&[(0,2),(0,1),(1,0),(2,0)], \end{split} \end{equation*} of type $([3^5],\,[3^4])$.
We denote by $f$ and $h$ the corresponding ramified coverings of $\mathbb{P}^1$: \begin{equation*} \begin{split} f:C&\to C/G\cong\mathbb{P}^1,\\ h:D&\to D/G\cong\mathbb{P}^1, \end{split} \end{equation*} where $C$ and $D$ have genus $7$ and $4$ respectively.\\ The character table of $G$ is \begin{equation*} \begin{array}{c|ccccccccc} &Id&(1,0)&(2,0)&(0,1)&(0,2)&(1,1)&(2,2)&(2,1)&(1,2)\\ \hline \chi_1&1&1&1&1&1&1&1&1&1\\ \chi_2&1&\xi_3&\xi_3^2&1&1&\xi_3&\xi_3^2&\xi_3^2&\xi_3\\ \chi_3&1&\xi_3^2&\xi_3&1&1&\xi_3^2&\xi_3&\xi_3&\xi_3^2\\ \chi_4&1&1&1&\xi_3&\xi_3^2&\xi_3&\xi_3^2&\xi_3&\xi_3^2\\ \chi_5&1&\xi_3&\xi_3^2&\xi_3&\xi_3^2&\xi_3^2&\xi_3&1&1\\ \chi_6&1&\xi_3^2&\xi_3&\xi_3&\xi_3^2&1&1&\xi_3^2&\xi_3\\ \chi_7&1&1&1&\xi_3^2&\xi_3&\xi_3^2&\xi_3&\xi_3^2&\xi_3\\ \chi_8&1&\xi_3&\xi_3^2&\xi_3^2&\xi_3&1&1&\xi_3&\xi_3^2\\ \chi_9&1&\xi_3^2&\xi_3&\xi_3^2&\xi_3&\xi_3&\xi_3^2&1&1\\ \end{array} \end{equation*} By Proposition \ref{RationalRepresentation} $G$ has $5$ irreducible $\mathbb{Q}$-representations $\tau_1,\,...,\,\tau_5$ with: \begin{equation*} \begin{split} \tau_1\otimes_\mathbb{Q}\mathbb{C}=&\rho_1,\\ \tau_2\otimes_\mathbb{Q}\mathbb{C}=&\rho_2\oplus\rho_3,\\ \tau_3\otimes_\mathbb{Q}\mathbb{C}=&\rho_4\oplus\rho_7,\\ \tau_4\otimes_\mathbb{Q}\mathbb{C}=&\rho_5\oplus\rho_9,\\ \tau_5\otimes_\mathbb{Q}\mathbb{C}=&\rho_6\oplus\rho_8.\\ \end{split} \end{equation*} We apply the Broughton formula (Proposition \ref{Broughton}) to compute the decomposition of the representation of the group $G$ on $H^1(C,\mathbb{C})$ and $H^1(D,\mathbb{C})$. We get \begin{equation*} \begin{array}{c|ccccccccc} &\rho_1&\rho_2&\rho_3&\rho_4&\rho_5&\rho_6&\rho_7&\rho_8&\rho_9\\ \hline \varphi_C&0&3&3&3&1&0&3&0&1\\ \varphi_D&0&0&0&0&2&2&0&2&2 \end{array} \end{equation*} for the complex cohomology groups $H^1(C,\mathbb{C})$, $H^1(D,\mathbb{C})$ and \begin{equation*} \begin{array}{c|ccccc} &\tau_1&\tau_2&\tau_3&\tau_4&\tau_5\\ \hline \varphi_C&0&3&3&1&0\\ \varphi_D&0&0&0&2&2 \end{array} \end{equation*} for the rational cohomology groups $H^1(C,\mathbb{Q})$, $H^1(D,\mathbb{Q})$. \end{example} \section{On the Cohomology}\label{cohom} The cohomology of surfaces isogenous to a product has been studied in \cite{CLZ13} and in \cite{CL13}. In these papers the authors focused on the complex cohomology and studied the corresponding Albanese variety. We follow here a completely different approach. Let $S=\frac{C\times D}{G}$ be a surface isogenous to a higher product of unmixed type. Then the second cohomology of $S$ depends on the cohomology of $C$ and $D$ and on the action of $G$. First of all we need a topological lemma: \begin{lemma}[\cite{Ha02}, Proposition 3G.1] \label{CoveringCohomology} Let $\pi:\tilde{X}\to X$ be a (topological) covering space of degree $n$ defined by an action of a group $G$ on $\tilde{X}$. Then, with coefficients in a field $F$ whose characteristic is $0$ or a prime not dividing $n$, the map $\pi^*:H^k(X,F)\to H^k(\tilde{X},F)$ is injective with image the subgroup $H^k(\tilde{X},F)^G$. \end{lemma} \begin{proposition}\label{decompositionsecond} Let $S=\frac{C\times D}{G}$ be a surface isogenous to a higher product of unmixed type. Then the second cohomology group of $S$ is given by $H^2(S,\mathbb{Q})\cong U\oplus Z$, where \begin{equation*} \begin{split} U:&=\left(H^2(C,\mathbb{Q})\otimes H^0(D,\mathbb{Q})\right)\oplus\left(H^0(C,\mathbb{Q})\otimes H^2(D,\mathbb{Q})\right),\\ Z:&=\left(H^1(C,\mathbb{Q})\otimes H^1(D,\mathbb{Q})\right)^G.
\end{split} \end{equation*} \end{proposition} \begin{proof} We compute the second cohomology of $C\times D$ with the Künneth formula (see \cite[Theorem 3.16]{Ha02}) and we apply Lemma \ref{CoveringCohomology}. Since $G$ acts trivially on the zeroth cohomology and on the second cohomology of the curves $C$ and $D$, we get the result. \end{proof} \begin{remark} Consider $H^2(S,\mathbb{Q})$ as a rational Hodge structure of weight $2$. Then $U,Z\le H^2(S,\mathbb{Q})$ are Hodge substructures. In particular the subspace $U$ has dimension $2$, and $U\otimes_\mathbb{Q}\mathbb{C}\le H^{1,1}(S)$. Then, as a rational Hodge structure, it is isomorphic to the Tate structure $\mathbb{Q}^2(-1)$. It follows that $H^2(S,\mathbb{Q})$ is determined, as a Hodge structure, by $Z$. \end{remark} We recall a classical result of representation theory: \begin{lemma}\label{productcompl} Let $G$ be a finite group and let $\rho_i:G\to GL(V_i)$, $i=1,\,...,\,m$, be its irreducible complex representations, where $\rho_1$ is the trivial representation. Then \begin{equation*} n_{\rho_i\otimes\rho_j}(\rho_1)=\langle\rho_i\otimes\rho_j,\rho_1\rangle =\begin{cases} 1&\mbox{if }\rho_j=\overline{\rho_i},\\ 0&\mbox{otherwise}. \end{cases} \end{equation*} \end{lemma} We need the corresponding result for rational representations: \begin{proposition}\label{productrepr} Let $G$ be a finite group and let $\tau_j:G\to GL(W_j)$, $j=1,\,...,\,t$, be its irreducible rational representations, where $\tau_1$ is the trivial representation. Then the multiplicity of the trivial representation in $\tau_j\otimes \tau_k$ is: \begin{equation*} n_{\tau_j\otimes\tau_k}(\tau_1)=\begin{cases} s^2[K_i:\mathbb{Q}]&\mbox{if } j=k,\\ 0&\mbox{otherwise}, \end{cases} \end{equation*} where $\tau_j\otimes\mathbb{C}=s\bigoplus_{\sigma\in Gal(K_i/\mathbb{Q})}\sigma(\rho_i)$. \end{proposition} \begin{proof} It follows from Lemma \ref{productcompl} and Proposition \ref{RationalRepresentation}. \end{proof} Let $\tau_{j_1}:G\to GL(W_{j_1})$ and $\tau_{j_2}:G\to GL(W_{j_2})$ be two irreducible rational representations of $G$. The group acts trivially on $(W_{j_1}\otimes W_{j_2})^G$ and hence \begin{equation*} \dim (W_{j_1}\otimes W_{j_2})^G= n_{\tau_{j_1}\otimes\tau_{j_2}}(\tau_1). \end{equation*} In particular $\dim (W_{j_1}\otimes W_{j_2})^G\ne 0$ if and only if $j_1=j_2$, and in this case the dimension is determined by Proposition \ref{productrepr}. Let $S=\frac{C\times D}{G}$ be a surface isogenous to a higher product of unmixed type and let $\tau_j:G\to GL(W_j)$, $j=1,\,...,\,t$, be the irreducible rational representations of $G$. Let $\varphi_C:G\to GL(H^1(C,\mathbb{Q}))$ and $\varphi_D:G\to GL(H^1(D,\mathbb{Q}))$ be the actions induced by $G$ on the first cohomology of the curves: \begin{equation*} \begin{split} &\varphi_C=n_C(\tau_1)\tau_1\oplus\,...\oplus\,n_C(\tau_t)\tau_t,\\ &\varphi_D=n_D(\tau_1)\tau_1\oplus\,...\oplus\,n_D(\tau_t)\tau_t. \end{split} \end{equation*} Then each irreducible rational representation $\tau_j$ determines a subspace of the rational Hodge structure $Z$ of dimension \begin{equation*} n_C(\tau_j)n_D(\tau_j)n_{\tau_j\otimes\tau_j}(\tau_1). \end{equation*} In particular we obtain \begin{equation*} \dim Z=\sum_{j=1}^{t}n_C(\tau_j)n_D(\tau_j)n_{\tau_j\otimes\tau_j}(\tau_1). \end{equation*} Let us now focus on the case of regular surfaces isogenous to a higher product with $\chi(\mathcal{O}_S)=2$. \begin{proposition}\label{classification} Let $S=\frac{C\times D}{G}$ be a regular surface isogenous to a higher product with $\chi(\mathcal{O}_S)=2$.
Then one of the following cases holds: \begin{itemize} \item[a)] There exists an absolutely irreducible rational representation $\tau: G\to GL(W)$ such that \begin{equation*} \begin{split} &n_C(\tau)=n_D(\tau)=2,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0,\quad\forall\,\tau_j\mbox{ different from }\tau. \end{split} \end{equation*} \item[b)] There exists an irreducible rational representation $\tau:G\to GL(W)$ and an irreducible complex representation $\rho:G\to GL(V)$ with $\tau_\mathbb{C}=2\rho$ such that \begin{equation*} \begin{split} &n_C(\tau)=n_D(\tau)=1,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0,\quad\forall\,\tau_j\mbox{ different from }\tau. \end{split} \end{equation*} \item[c)] There exists an irreducible rational representation $\tau:G\to GL(W)$ and an irreducible complex representation $\rho:G\to GL(V)$ with $\tau_\mathbb{C}=\rho\oplus\overline{\rho}$ such that \begin{equation*} \begin{split} &n_C(\tau)=1,\quad n_D(\tau)=2,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0,\quad\forall\,\tau_j\mbox{ different from }\tau. \end{split} \end{equation*} \item[d)] There exist two irreducible rational representations $\tau_{j_1}:G \to GL(W_{j_1})$, $\tau_{j_2}:G\to GL(W_{j_2})$ and two irreducible complex representations $\rho_{i_1}:G\to GL(V_{i_1})$, $\rho_{i_2}:G\to GL(V_{i_2})$ with $\tau_{j_1}\otimes\mathbb{C}=\rho_{i_1}\oplus\overline{\rho_{i_1}}$, $\tau_{j_2}\otimes\mathbb{C}=\rho_{i_2}\oplus\overline{\rho_{i_2}}$ and $j_1\neq j_2$ such that \begin{equation*} \begin{split} &n_C(\tau_{j_1})=n_C(\tau_{j_2})=n_D(\tau_{j_1})=n_D(\tau_{j_2})=1,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0,\quad\forall\,\tau_j\mbox{ different from }\tau_{j_1},\,\tau_{j_2}. \end{split} \end{equation*} \end{itemize} \end{proposition} \begin{proof} For a regular surface $S$ isogenous to a higher product with $\chi(\mathcal{O}_S)=2$ we have $\dim Z=4$. Notice that, for all the irreducible rational representations, the number $n_C(\tau_j)n_D(\tau_j)n_{\tau_j\otimes\tau_j}(\tau_1)$ is even by Lemma \ref{pari}. Then we can have at most two irreducible rational representations $\tau_j$ such that $n_C(\tau_j)n_D(\tau_j)\ne 0$. Suppose we have only one: then, again by Lemma \ref{pari}, we have three possibilities: \begin{itemize} \item $n_C(\tau_j)=n_D(\tau_j)=2$, $n_{\tau_j\otimes\tau_j}(\tau_1)=1$: this is the case $a$; \item $n_C(\tau_j)=n_D(\tau_j)=1$, $n_{\tau_j\otimes\tau_j}(\tau_1)=4$: this is the case $b$; \item $n_C(\tau_j)=1$, $n_D(\tau_j)=2$ (up to exchanging $C$ and $D$), $n_{\tau_j\otimes\tau_j}(\tau_1)=2$: this is the case $c$. \end{itemize} Suppose now we have contributions from two different irreducible rational representations $\tau_{j_1}$ and $\tau_{j_2}$. Then \begin{itemize} \item $n_C(\tau_{j_i})=n_D(\tau_{j_i})=1$, $n_{\tau_{j_i}\otimes\tau_{j_i}}(\tau_1)=2$ for $i=1,2$: this is the case $d$. \end{itemize} \end{proof} \begin{definition} Let $S=\frac{C\times D}{G}$ be a regular surface isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$. We say that $S$ is of type $a,\,b,\,c$ or $d$ if the corresponding case of Proposition \ref{classification} holds for $S$. \end{definition} As already mentioned, regular surfaces isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$ have been classified by Gleissner. In \cite{Gl11} he proves that only $21$ groups admit an unmixed ramification structure such that the corresponding surface has $\chi(\mathcal{O}_S)=2$ and $q(S)=0$.
In particular $7$ groups admit more than one non-isomorphic structure, and he obtains $32$ families of regular surfaces isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$. A complete list can be found in Table \ref{tabella}, while the explicit forms of the unmixed ramification structures can be found in \cite{Gl11}. For all the surfaces in the list we determined whether they are of type $a,\,b,\,c$ or $d$. Let $G$ be one of the $14$ groups in the following list: \begin{equation*} \begin{split} &(\mathbb{Z}_2)^3\rtimes_\varphi\mathcal{S}_4,\quad (\mathbb{Z}_2)^4\rtimes_\varphi \mathcal{D}_5,\quad \mathcal{S}_5,\quad (\mathbb{Z}_2)^4\rtimes_\psi \mathcal{D}_3,\\ &U(4,2),\quad \mathcal{A}_5,\quad \mathcal{S}_4\times\mathbb{Z}_2,\quad \mathcal{D}_4\times (\mathbb{Z}_2)^2,\quad (\mathbb{Z}_2)^4\rtimes_\varphi\mathbb{Z}_2,\\ &\mathcal{S}_4,\quad \mathcal{D}_4\times\mathbb{Z}_2,\quad (\mathbb{Z}_2)^2\rtimes_\varphi\mathbb{Z}_4,\quad (\mathbb{Z}_2)^4,\quad (\mathbb{Z}_2)^3. \end{split} \end{equation*} For all the irreducible complex representations $\rho:G\to GL(V)$ we get $K_\rho\subseteq\mathbb{R}$ and the Schur index of $\rho$ is equal to $1$. Therefore the corresponding surfaces $S$ are of type $a$. We verified that the surfaces related to the groups \begin{equation*} PSL(2,\mathbb{F}_7)\times\mathbb{Z}_2,\quad PSL(2,\mathbb{F}_7),\quad (\mathbb{Z}_2)^3\rtimes_\varphi\mathcal{D}_4 \end{equation*} are also of type $a$, although these groups admit irreducible complex representations with $K_\rho\not\subseteq\mathbb{R}$. \begin{example} As an example, we study in detail the group $PSL(2,\mathbb{F}_7)$. $G:=PSL(2,\mathbb{F}_7)$ has $6$ irreducible complex representations $\rho_1,...,\rho_6$ associated to the characters $\chi_1,...,\,\chi_6$: \begin{equation*} \begin{array}{c|cccccc} &Id&2&3&4&7a&7b\\ \hline \chi_1&1&1&1&1&1&1\\ \chi_2&3&-1&0&1&\xi&\overline{\xi}\\ \chi_3&3&-1&0&1&\overline{\xi}&\xi\\ \chi_4&6&2&0&0&-1&-1\\ \chi_5&7&-1&1&-1&0&0\\ \chi_6&8&0&-1&0&1&1\\ \end{array} \end{equation*} where $\xi=\frac{-1+i\sqrt{7}}{2}$. By Proposition \ref{RationalRepresentation}, $G$ has only $5$ irreducible rational representations $\tau_1,...,\,\tau_5$. One has \begin{equation*} \begin{split} \tau_1\otimes_\mathbb{Q}\mathbb{C}=&\rho_1,\\ \tau_2\otimes_\mathbb{Q}\mathbb{C}=&\rho_2\oplus\rho_3,\\ \tau_3\otimes_\mathbb{Q}\mathbb{C}=&\rho_4,\\ \tau_4\otimes_\mathbb{Q}\mathbb{C}=&\rho_5,\\ \tau_5\otimes_\mathbb{Q}\mathbb{C}=&\rho_6. \end{split} \end{equation*} The group $PSL(2,\mathbb{F}_7)$ admits two non-isomorphic unmixed structures $(T_{C_1}, T_{D_1})$ and $(T_{C_2}, T_{D_2})$ of types $([7^3],\,[3^2,4])$ and $([3^2,7],\,[4^3])$ respectively. Since there is only one conjugacy class of elements of order $3$ and one conjugacy class of elements of order $4$ in $G$ (denoted in the table above by $3$ and $4$), we can apply the Broughton formula to the curves $D_1$ and $D_2$ easily. We get: \begin{equation*} \begin{array}{c|ccccc} &\tau_1&\tau_2&\tau_3&\tau_4&\tau_5\\ \hline \varphi_{D_1}&0&0&0&0&2\\ \varphi_{D_2}&0&0&0&4&2 \end{array} \end{equation*} So, even if $G$ has two non-self-dual representations ($\rho_2$ and $\rho_3$), the surfaces isogenous to a higher product associated to both $(T_{C_1}, T_{D_1})$ and $(T_{C_2}, T_{D_2})$ are of type $a$.
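For instance, the entry $\varphi_{D_2}(\tau_4)=4$ can be checked directly with the Broughton formula. For an element $g$ of order $4$ we have
\begin{equation*}
l_g(\rho_5)=\frac{1}{4}\left(\chi_5(Id)+2\chi_5(4)+\chi_5(2)\right)=\frac{1}{4}\left(7-2-1\right)=1,
\end{equation*}
hence, for the covering $D_2$ of type $[4^3]$ (so $r=3$),
\begin{equation*}
n_{\varphi_{D_2}}(\rho_5)=\chi_5(1)(3-2)-3\cdot 1=7-3=4,
\end{equation*}
in accordance with the table, since $\tau_4\otimes_\mathbb{Q}\mathbb{C}=\rho_5$.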
\end{example} Finally, the surfaces related to the groups \begin{equation*} G(128,36),\quad (\mathbb{Z}_2)^4\rtimes_\varphi \mathcal{D}_3,\quad (\mathbb{Z}_2)^3\rtimes_\varphi\mathbb{Z}_4,\quad (\mathbb{Z}_3)^2, \end{equation*} are not of type $a$; we will study them in the next section. The complete list, with the corresponding type, is summarized in Table \ref{tabella}, at the end of this section. \begin{theorem}\label{main} Let $S=\frac{C\times D}{G}$ be a regular surface isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$ and assume that $S$ is of type $a$. Then there exist two elliptic curves $E_C$ and $E_D$ such that $H^2(S,\mathbb{Q})\cong H^2(E_C\times E_D,\mathbb{Q})$ as rational Hodge structures. \end{theorem} \begin{proof} The proof consists of two steps: in the first one we construct the two elliptic curves $E_C$ and $E_D$; in the second one we prove that $H^2(S,\mathbb{Q})\cong H^2(E_C\times E_D,\mathbb{Q})$.\\ \textbf{Step 1}: By hypothesis there exists an absolutely irreducible rational representation $\tau:G\to GL(W)$ such that $n_C(\tau)=n_D(\tau)=2$; let $\dim W=n$. We denote by $A_C$ and $A_D$ the isotypical components related to $\tau$ in $H^1(C,\mathbb{Q})$ and $H^1(D,\mathbb{Q})$: $A_C$ and $A_D$ are, at the same time, rational Hodge substructures and $G$-subrepresentations of dimension $2n$, and we obtain \begin{equation*} Z\cong \left(A_C\otimes A_D\right)^G, \end{equation*} where $Z$ is the Hodge substructure defined in Proposition \ref{decompositionsecond}. Since $\tau$ is an absolutely irreducible rational representation, the corresponding skew-field $\mathbb{D}$ is simply $\mathbb{Q}$. So, by Proposition \ref{isogenousdecomposition}, we get $A_C\cong B_C^{\oplus n}$ and $A_D\cong B_D^{\oplus n}$, where $B_C$ and $B_D$ are Hodge substructures, but no longer $G$-subrepresentations, of $A_C$ and $A_D$ with $\dim B_C=\dim B_D=2$. Via the natural correspondence between complex tori and Hodge structures, there exist two elliptic curves $E_C$ and $E_D$, defined up to isogeny, such that \begin{equation*} B_C\cong H^1(E_C,\mathbb{Q}),\qquad B_D\cong H^1(E_D,\mathbb{Q}), \end{equation*} as rational Hodge structures.\\ \textbf{Step 2}: The Hodge structures of weight two $Z$ and $B_C\otimes B_D$ have the same dimension and the same Hodge numbers; in particular $\dim Z^{2,0}=\dim (B_C\otimes B_D)^{2,0}=1$. The action of $G$ provides a Hodge homomorphism $A_C\otimes A_D\to Z$: by restriction we get a map $\psi: B_C\otimes B_D\to Z$. Consider the Hodge substructure $Im(\psi)$. We can assume that $\dim Im(\psi)^{2,0}=1$: otherwise we have to change the choice of $B_C$ and $B_D$ in $A_C$ and $A_D$. If $\psi$ is an isomorphism we are done. Otherwise let $k:=\dim Ker(\psi)$. We have the decompositions: \begin{equation*} B_C\otimes B_D\simeq P\oplus Ker(\psi),\qquad Z\simeq Im(\psi)\oplus \mathbb{Q}^k(-1), \end{equation*} where $P$ is a Hodge substructure with $\dim P^{2,0}=1$ and $\dim P^{1,1}=2-k$. The Hodge structures $B_C\otimes B_D$ and $Z$ are isomorphic since \begin{itemize} \item $\psi$ defines an isomorphism between $P$ and $Im(\psi)$, \item $\dim Ker(\psi)^{2,0}=0$ and hence $Ker(\psi)\simeq \mathbb{Q}^k(-1)$.
\end{itemize} \end{proof} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline
$G$& $|G|$ & SGL &$g(C)$ & $g(D)$& type\\ \hline
$PSL(2,\mathbb{F}_7)\times\mathbb{Z}_2$& 336 & $\langle 336,209\rangle$ & 17 & 43 & a\\
$(\mathbb{Z}_2)^3\rtimes_\varphi\mathcal{S}_4$& 192& $\langle 192,995\rangle$ & 49 & 9 & a\\
$PSL(2,\mathbb{F}_7)$& 168 & $\langle 168,42\rangle$& 49 & 8 & a\\
$PSL(2,\mathbb{F}_7)$& 168 & $\langle 168,42\rangle$& 17 & 22 & a\\
$(\mathbb{Z}_2)^4\rtimes_\varphi \mathcal{D}_5$& 160 & $\langle 160,234\rangle$& 5 & 81 & a\\
$G(128,36)$ & 128 & $\langle 128,36\rangle$& 17 & 17 & \textbf{b}\\
$\mathcal{S}_5$ & 120 & $\langle 120,34\rangle$& 9 & 31 & a\\
$(\mathbb{Z}_2)^4\rtimes_\varphi \mathcal{D}_3$ & 96 & $\langle 96,195\rangle$& 5 & 49 & \textbf{c}\\
$(\mathbb{Z}_2)^4\rtimes_\psi \mathcal{D}_3$ & 96 & $\langle 96,227\rangle$& 25 & 9 & a\\
$(\mathbb{Z}_2)^3\rtimes_\varphi \mathcal{D}_4$ & 64 & $\langle 64,73\rangle$& 9 & 17 & a\\
$U(4,2)$ & 64 & $\langle 64,138\rangle$& 9 & 17 & a\\
$\mathcal{A}_5$ & 60 & $\langle 60,5\rangle$& 13 & 11 & a\\
$\mathcal{A}_5$ & 60 & $\langle 60,5\rangle$& 41 & 4 & a\\
$\mathcal{A}_5$ & 60 & $\langle 60,5\rangle$& 9 & 16 & a\\
$\mathcal{A}_5$ & 60 & $\langle 60,5\rangle$& 5 & 31 & a\\
$\mathcal{S}_4\times\mathbb{Z}_2$ & 48 & $\langle 48,48\rangle$& 5 & 25 & a\\
$\mathcal{S}_4\times\mathbb{Z}_2$ & 48 & $\langle 48,48\rangle$& 9 & 13 & a\\
$\mathcal{S}_4\times\mathbb{Z}_2$ & 48 & $\langle 48,48\rangle$& 13 & 9 & a\\
$\mathcal{S}_4\times\mathbb{Z}_2$ & 48 & $\langle 48,48\rangle$& 3 & 49 & a\\
$(\mathbb{Z}_2)^3\rtimes_\varphi\mathbb{Z}_4$ & 32 & $\langle 32,22\rangle$& 9 & 9 & \textbf{d}\\
$\mathcal{D}_4\times (\mathbb{Z}_2)^2$ & 32 & $\langle 32,46\rangle$& 9 & 9 & a\\
$(\mathbb{Z}_2)^4\rtimes_\varphi\mathbb{Z}_2$ & 32 & $\langle 32,27\rangle$& 17 & 5 & a\\
$(\mathbb{Z}_2)^4\rtimes_\varphi\mathbb{Z}_2$ & 32 & $\langle 32,27\rangle$& 9 & 9 & a\\
$\mathcal{S}_4$ & 24 & $\langle 24,12\rangle$& 5 & 13 & a\\
$\mathcal{S}_4$ & 24 & $\langle 24,12\rangle$& 3 & 25 & a\\
$\mathcal{D}_4\times\mathbb{Z}_2$ & 16 & $\langle 16,11\rangle$& 9 & 5 & a\\
$(\mathbb{Z}_2)^2\rtimes_\varphi\mathbb{Z}_4$& 16 & $\langle 16,3\rangle$& 9 & 5 & a\\
$(\mathbb{Z}_2)^4$ & 16 & $\langle 16,14\rangle$& 9 & 5 & a\\
$\mathcal{D}_4\times\mathbb{Z}_2$ & 16 & $\langle 16,11\rangle$& 3 & 17 & a\\
$(\mathbb{Z}_3)^2$ & 9 & $\langle 9,2\rangle$& 7 & 4 & \textbf{c}\\
$(\mathbb{Z}_2)^3$ & 8 & $\langle 8,5\rangle$& 5 & 5 & a\\
$(\mathbb{Z}_2)^3$ & 8 & $\langle 8,5\rangle$& 3 & 9 & a\\
\hline \end{tabular} \caption{\label{tabella} Complete list of groups that admit an unmixed ramification structure such that the corresponding surface $S$ isogenous to a higher product has $\chi(\mathcal{O}_S)= 2$ and $q(S)=0$.
SGL is the pair that identifies the group in the Small Groups Library (on Magma).} \end{center} \end{table} \section{The exceptional cases} In this section we study one by one the families of surfaces in Table \ref{tabella} not of type $a$.\\ Given a finite group $G$ and an unmixed ramification structure $(T_C,T_D)$ for $G$ we will use the following notation: \begin{itemize} \item $f:C\to\mathbb{P}^1$ and $h:D\to\mathbb{P}^1$ are the Galois coverings associated to the spherical systems of generators $T_C$ and $T_D$; \item $S=\frac{C\times D}{G}$ is the surface isogenous to a product of unmixed type corresponding to the unmixed ramification structure; \item $Z<H^2(S,\mathbb{Q})$ is the $4$-dimensional Hodge substructure with $\dim Z^{2,0}=1$ defined by \begin{equation*} Z=\left(H^1(C,\mathbb{Q})\otimes H^1(D,\mathbb{Q})\right)^G. \end{equation*} \end{itemize} Let $\tau:G\to GL(W)$ be an irreducible rational representation of $G$ and let $A_C,\,A_D$ be the isotypical components of $H^1(C,\mathbb{Q})$ and $H^1(D,\mathbb{Q})$ related to $\tau$. Assume that $Z\cong(A_C\otimes A_D)^G$: as described in the proof of Proposition \ref{classification}, this is exactly what happens for surfaces of type $a,\,b$ and $c$.\\ Let $H\triangleleft G$ be the normal subgroup $H=Ker(\tau)$. Then we get \begin{equation}\label{coH} \begin{split} Z=&\left(H^1(C,\mathbb{Q})\otimes H^1(D,\mathbb{Q})\right)^G=\\ =&\left(H^1(C,\mathbb{Q})^H\otimes H^1(D,\mathbb{Q})^H\right)^{G/H}. \end{split} \end{equation} \begin{remark}\label{rem} Notice that, for a general subgroup $H\le G$, we have \begin{equation*} \left(H^1(C,\mathbb{Q})\otimes H^1(D,\mathbb{Q})\right)^H\not\cong\left(H^1(C,\mathbb{Q})^H\otimes H^1(D,\mathbb{Q})^H\right). \end{equation*} For example, for $H=G$ we get the Hodge structure $Z$ on the left and the zero vector space on the right, since $C/G\cong D/G\cong\mathbb{P}^1$. Equation \eqref{coH} holds because of our specific choice of the subgroup $H$. \end{remark} Using this idea (with appropriate modifications for the case $d$) we extend the result of Theorem \ref{main} to the remaining surfaces. \subsection{Case b}\label{caseb} Let $G$ be the finite group $G=G(128,36)$ with presentation: \begin{equation*} G=\left\langle g_1,\,...\,,g_7\quad\Big|\quad \begin{array}{lll} g_1^2=g_4 & g_2^2=g_5 & g_2^{g_1}=g_2g_3\\ g_3^{g_1}=g_3g_6 & g_3^{g_2}=g_3g_7 & g_4^{g_2}=g_4g_6\\ g_5^{g_1}=g_5g_7 \end{array} \right\rangle, \end{equation*} where $g_i^{g_j}:=g_j^{-1}g_ig_j$; $G$ has order $128$ and it is determined by the pair $\langle 128,36\rangle$ in the Small Groups Library on Magma. Consider the unmixed ramification structure $(T_C,\,T_D)$ of type $([4^3],[4^3])$: \begin{equation*} \begin{array}{l} T_C=[g_1g_2g_4g_6, g_1g_4g_5g_6,g_2g_3g_4g_7],\\ T_D=[g_1g_2g_3g_6g_7,g_2g_5g_7,g_1g_3g_4g_7]. \end{array} \end{equation*} By direct computation we verify that the corresponding surface isogenous to a product $S$ is of type $b$, i.e.\ there exists an irreducible rational representation $\tau:G\to GL(W)$, $\dim W=4$, and an irreducible complex representation $\rho:G\to GL(V)$, $\dim V=2$, with $\tau_\mathbb{C}=2\rho$ such that \begin{equation*} \begin{split} &n_C(\tau)=n_D(\tau)=1,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0\quad\mbox{$\forall\tau_j$ different from $\tau$}. \end{split} \end{equation*} Let $H\triangleleft G$ be the normal subgroup $H:=Ker(\tau)$: a set of generators for $H$ is \begin{equation*} H=\langle g_7,g_6,g_3g_4,g_4g_5\rangle.
\end{equation*} The quotient group $G/H$ has order $8$ and it is isomorphic to the quaternion group $Q_8$. Consider the intermediate coverings: \begin{equation*} \xymatrix{ C \ar[r]^H \ar[dr]_{G} & C' \ar[d]^{Q_8}\\ & \mathbb{P}^1} \qquad \xymatrix{ D \ar[r]^H \ar[dr]_{G} & D' \ar[d]^{Q_8}\\ & \mathbb{P}^1} \end{equation*} The curves $C'$ and $D'$ have genus $2$, by the Riemann-Hurwitz formula. Moreover the quaternion group $Q_8$ acts on their rational cohomology by the rational representation of dimension $4$ described in Example \ref{quaternion}. By Remark \ref{rem} we get \begin{equation*} H^2(S,\mathbb{Q})\cong H^2\left(C'\times D',\mathbb{Q}\right)^{Q_8}. \end{equation*} \begin{proposition} Let $S$ be the surface isogenous to a higher product defined above. Then $H^2(S,\mathbb{Q})\cong H^2(E_{\sqrt{-2}}\times E_{\sqrt{-2}},\mathbb{Q})$, where $E_{\sqrt{-2}}$ is the elliptic curve \begin{equation*} E_{\sqrt{-2}}=\frac{\mathbb{C}}{\mathbb{Z}\oplus\sqrt{-2}\mathbb{Z}}. \end{equation*} \end{proposition} \begin{proof} Let $X$ be a curve of genus $2$ such that $Q_8\le Aut(X)$ and $X/Q_8\cong\mathbb{P}^1$. Then its Jacobian is not simple, and in particular it is isogenous to the self-product of the elliptic curve $E_{\sqrt{-2}}$. The action of $Q_8$ induces a Hodge morphism $\psi$ \begin{equation*} \psi: H^1(E_{\sqrt{-2}},\mathbb{Q})\otimes H^1(E_{\sqrt{-2}},\mathbb{Q})\to Z. \end{equation*} Now, arguing as in Step 2 of the proof of Theorem \ref{main}, we conclude that \begin{equation*} H^2(S,\mathbb{Q})\cong H^2\left(C'\times D',\mathbb{Q}\right)^{Q_8}\cong H^2(E_{\sqrt{-2}}\times E_{\sqrt{-2}},\mathbb{Q}). \end{equation*} \end{proof} \begin{remark} The covering maps $f:C\to\mathbb{P}^1$ and $h:D\to\mathbb{P}^1$ both have $3$ branching values. It follows that the curves $C$ and $D$ are determined up to isomorphism, by the Riemann Existence Theorem. In particular the pairs $(C,\,f)$ and $(D,\,h)$ are Belyi pairs and the surface $S$ is a Beauville surface. \end{remark} \subsection{Case c}\label{casec} Two groups occur in this case. Let $G$ be the finite group $(\mathbb{Z}_3)^2$ and consider the unmixed ramification structure $(T_C,T_D)$: \begin{equation*} \begin{split} T_C=&[(1,1),(2,1),(1,1),(1,2),(1,1)];\\ T_D=&[(0,2),(0,1),(1,0),(2,0)]. \end{split} \end{equation*} This structure has already been studied in Example \ref{z32}: notice that the corresponding surface isogenous to a product $S$ is of type $c$.\\ In particular, using the notation of Example \ref{z32}, there is an irreducible rational representation $\tau_4:G\to GL(W_4)$ such that \begin{itemize} \item $\tau_4\otimes\mathbb{C}=\rho_5\oplus\rho_9$; \item $n_C(\tau_4)=1$ and $n_D(\tau_4)=2$. \end{itemize} Let $H$ be the normal subgroup $H:=Ker(\rho_5)=Ker(\rho_9)$: a set of generators for $H$ is \begin{equation*} H=\langle(2,1)\rangle. \end{equation*} Notice that $H\cong\mathbb{Z}_3$ and also $G/H\cong\mathbb{Z}_3$. Let us consider the intermediate coverings $C'=C/H$ and $D'=D/H$ of genera $g(C')=1$ and $g(D')=2$.\\ The curve $D'$ is a curve of genus $2$ with an automorphism $\sigma$ of order $3$ such that $D'/\langle\sigma\rangle\simeq\mathbb{P}^1$. It follows that its Jacobian is not simple and in particular it is isogenous to the self-product of an elliptic curve $E_D$. \begin{proposition} Let $S$ be the regular surface isogenous to a product of unmixed type associated to the unmixed structure $(T_C,T_D)$.
Then $H^2(S,\mathbb{Q})\simeq H^2(C'\times E_D,\mathbb{Q})$, where $C'$ and $E_D$ are the elliptic curves described above. \end{proposition} \begin{proof} By Remark \ref{rem} we have $H^2(S,\mathbb{Q})\cong H^2(C'\times D',\mathbb{Q})^G$ and we have already observed that the cohomology group $H^1(D',\mathbb{Q})$ decomposes as a sum of two Hodge substructures, both of dimension $2$. Now we conclude with the same arguments used in the proof of Theorem \ref{main}. \end{proof} The case of the group $G=(\mathbb{Z}_2)^4\rtimes_\varphi \mathcal{D}_3$ follows in a similar way. This group has $14$ irreducible complex representations with Schur index $1$: $12$ are self-dual while the remaining two are in the same Galois orbit. So we have an irreducible rational representation $\tau:G\to GL(W)$ such that $\tau\otimes\mathbb{C}$ decomposes as a sum of two irreducible complex representations. We set $H=Ker(\tau)$ and we proceed as before. \subsection{Case d}\label{cased} Let $G$ be the group $G=(\mathbb{Z}_2)^3\rtimes_\varphi\mathbb{Z}_4$ where $\varphi:\mathbb{Z}_4\to Aut(\mathbb{Z}_2^3)\simeq GL(3,\mathbb{F}_2)$ is defined by \begin{equation*} \varphi(1)= \begin{pmatrix} 1&0&0\\ 0&1&0\\ 1&0&1 \end{pmatrix}. \end{equation*} Consider the unmixed ramification structure $(T_C,T_D)$ of $G$ of type $([2^2,4^2],[2^2,4^2])$: \begin{equation*} \begin{split} T_C=[((1,0,0),2),((1,1,1),2),((0,1,0),1),((0,0,1),3)],\\ T_D=[((1,1,0),0),((1,0,0),0),((1,0,0),3),((1,1,1),1)]. \end{split} \end{equation*} We construct the group $G$ in Magma \cite{magma}:
\begin{verbatim}
H:=CyclicGroup(4);        // the factor Z_4
K:=SmallGroup(8,5);       // the factor (Z_2)^3
A:=AutomorphismGroup(K);
// the automorphism of (Z_2)^3 corresponding to the matrix phi(1) above
M:=hom<K->K|[K.1->K.1*K.3, K.2->K.2, K.3->K.3]>;
Phi:=hom<H->A|[H.1->M]>;
G,a,b:=SemidirectProduct(K,H,Phi);
// a and b are the embeddings of K and H into G
G1:=a(K.1); G2:=a(K.2); G3:=a(K.3); G4:=b(H.1);
\end{verbatim}
With this notation the unmixed ramification structure is given by: \begin{equation*} T_C=[g_1g_4^2,\, g_1g_2g_3g_4^2,\,g_2g_4,\,g_3g_4^3], \quad T_D=[g_1g_2,\,g_1,\,g_1g_4^3,\,g_1g_2g_3g_4]. \end{equation*} By direct calculation we see that the surface $S$ is of type $d$. We denote by $\tau_{j_1}:G \to GL(W_{j_1})$, $\tau_{j_2}:G\to GL(W_{j_2})$ the two irreducible rational representations such that \begin{equation*} \begin{split} &n_C(\tau_{j_1})=n_C(\tau_{j_2})=n_D(\tau_{j_1})=n_D(\tau_{j_2})=1,\\ &n_C(\tau_j)\cdot n_D(\tau_j)=0,\quad \forall j\mbox{ different from }j_1,\,j_2. \end{split} \end{equation*} We consider the normal subgroups $H_1:=Ker(\tau_{j_1})$ and $H_2:=Ker(\tau_{j_2})$ of $G$. Sets of generators for $H_1$ and $H_2$ are \begin{equation*} \begin{split} H_1&=\langle((1,0,0),0),((0,0,1),0),((0,1,0),2)\rangle=\langle g_1,\,g_3,\,g_2g_4^2\rangle,\\ H_2&=\langle((1,1,0),0),((0,0,1),0),((0,1,0),2)\rangle=\langle g_1g_2,\,g_3,\,g_2g_4^2 \rangle. \end{split} \end{equation*} We observe that: \begin{itemize} \item $G/H_1\cong G/H_2\cong \mathbb{Z}_4$; \item the curves $C_1:=C/H_1$, $C_2:=C/H_2$, $D_1:=D/H_1$ and $D_2:=D/H_2$ have genus $1$. \end{itemize} Consider the intermediate coverings: \begin{equation*} \xymatrix{ C \ar[r]^{H_i} \ar[dr]_{G} & C_i\ar[d]^{\mathbb{Z}_4}\\ & \mathbb{P}^1} \qquad \xymatrix{ D \ar[r]^{H_i} \ar[dr]_{G} & D_i \ar[d]^{\mathbb{Z}_4}\\ & \mathbb{P}^1} \end{equation*} Since $C_i$ and $D_i$, $i=1,\,2$, are elliptic curves with an automorphism of order $4$, they are all isogenous to \begin{equation*} E_i=\frac{\mathbb{C}}{\mathbb{Z}\oplus i\mathbb{Z}}.
\end{equation*} By Remark \ref{rem} we get \begin{equation*} Z=\left(H^1(C_1,\mathbb{Q})\otimes H^1(D_1,\mathbb{Q})\right)^G\oplus \left(H^1(C_2,\mathbb{Q})\otimes H^1(D_2,\mathbb{Q})\right)^G. \end{equation*} \begin{proposition} Let $S$ be the surface isogenous to a higher product defined above. Then $H^2(S,\mathbb{Q})\cong H^2(E_i\times E_i,\mathbb{Q})$, as rational Hodge structures. \end{proposition} \begin{proof} We have already observed that \begin{equation*} Z=\left(H^1(C_1,\mathbb{Q})\otimes H^1(D_1,\mathbb{Q})\right)^G\oplus \left(H^1(C_2,\mathbb{Q})\otimes H^1(D_2,\mathbb{Q})\right)^G. \end{equation*} Up to exchanging $C_1\times D_1$ with $C_2\times D_2$, we can assume that the Hodge structure $W:=(H^1(C_1,\mathbb{Q})\otimes H^1(D_1,\mathbb{Q}))^G$ has dimension $2$ and $\dim W^{2,0}=\dim W^{0,2}=1$. Following the same idea as in the proof of Theorem \ref{main}, we get: \begin{equation*} H^2(S,\mathbb{Q})\cong H^2(C_1\times D_1,\mathbb{Q})\cong H^2(E_i\times E_i,\mathbb{Q}). \end{equation*} \end{proof} \begin{remark} Consider, as in the proof of Theorem \ref{main}, the Hodge morphism $\psi:H^1(C_1,\mathbb{Q})\otimes H^1(D_1,\mathbb{Q})\to Z$. Here it is clear that $\psi$ is not an isomorphism, since its image $Im(\psi)$ has dimension $2$. \end{remark} \section{Conclusion} \begin{theorem}\label{main+} Let $S$ be a regular surface isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$. Then there exist two elliptic curves $E_C$ and $E_D$ such that $H^2(S,\mathbb{Q})\cong H^2(E_C\times E_D, \mathbb{Q})$ as rational Hodge structures. \end{theorem} \begin{proof} It follows from Theorem \ref{main} and the case-by-case analysis of the previous section. \end{proof} \begin{remark} In general the theorem does not imply the existence of intermediate coverings of the curves $C$, $D$. More precisely, there need not exist subgroups $H_C,\,H_D$ of $G$ such that $C/H_C\cong E_C$ and $D/H_D\cong E_D$, where $E_C,\,E_D$ are elliptic curves such that $H^2(S,\mathbb{Q})\cong H^2(E_C\times E_D, \mathbb{Q})$. See the following example. \end{remark} \begin{example} Consider once more the unmixed ramification structure studied in Example \ref{z32} and in Section \ref{casec}. Let $G$ be the abelian group $(\mathbb{Z}_3)^2$ and let $T_D$ be the spherical system of generators \begin{equation*} T_D=[(0,2),(0,1),(1,0),(2,0)], \end{equation*} so that the corresponding curve $D$ has genus $4$. Consider all the $6$ subgroups of $G$. Using Magma \cite{magma} we verify that for all subgroups $H\le G$ the quotient curve $D/H$ has genus $0,\,2$ or $4$. In particular there is no subgroup $H$ such that $D/H$ is an elliptic curve. \end{example} \subsection{About the Picard number}\label{picard} Let $S$ be a regular surface isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$. We can compute $\rho (S)$, the Picard number of $S$, using Theorem \ref{main+}. Let $E_C$ and $E_D$ be the elliptic curves such that $H^2(E_C\times E_D,\mathbb{Q})\cong H^2(S,\mathbb{Q})$: we get $\rho(S)=\rho(E_C\times E_D)$. The Picard number of an Abelian surface of product type $E_1\times E_2$ is \begin{equation*} \rho(E_1\times E_2)= \begin{cases} 4 & \mbox{if }E_1\sim E_2\mbox{ and they have complex multiplication},\\ 3 & \mbox{if }E_1\sim E_2\mbox{ but they do not have CM},\\ 2 & \mbox{otherwise}.\end{cases} \end{equation*} A surface $S$ is said to be a surface with maximal Picard number if $\rho(S)=h^{1,1}(S)$: surfaces of this kind are studied in a recent work of Beauville \cite{Be13}, where many examples are constructed.
As already observed, a regular surface $S$ isogenous to a higher product of unmixed type with $\chi(\mathcal{O}_S)=2$ has $h^{1,1}(S)=4$. Since for the surfaces studied in Sections \ref{caseb} and \ref{cased} the two elliptic curves coincide and have complex multiplication (by $\mathbb{Z}[\sqrt{-2}]$ and by $\mathbb{Z}[i]$ respectively), we get $\rho(S)=4$: these surfaces are examples of surfaces with maximal Picard number.
\section{Introduction} Machine learning (ML), especially deep learning, is playing an increasingly important role in artificial intelligence (AI), and has achieved great success in many domains and applications in the past decades. For example, Convolutional Neural Networks (CNNs) can often achieve even higher accuracy than human beings in image classification and visual object recognition, leading to the fast development of applications such as self-driving, face recognition, handwriting recognition, image retrieval and remote sensing image processing \cite{li2021survey}; Recurrent Neural Networks (RNNs) and Transformer-based models are quite successful in sequence learning and natural language understanding, which boost applications such as time-series prediction, machine translation, speech recognition and chatbot \cite{otter2020survey}; Graph Neural Networks (GNNs) have been widely applied to prediction tasks involving graph structured data in domains such as social network, chemistry and biology \cite{zhou2020graph}. However, the high performance of most ML models relies on a number of labeled samples for (semi-)supervised learning, while such labeled samples are often very costly or inefficient to collect in real-world applications. Even when labeled samples can be collected, re-training a complex model from scratch when new prediction targets (e.g., classification labels) emerge is unacceptable in many contexts where real-time response is required or not enough computation resources are accessible. In this paper, we refer to ML contexts where no labeled samples or only a very small number of labeled samples are available for a prediction task as \textit{low-resource learning}, following some definitions on low-resource scenarios in the natural language processing (NLP) community \cite{hedderich2021survey,zoph2016transfer}. According to the number of the available labeled samples, we further divide low-resource learning into \textit{zero-shot learning} (ZSL) and \textit{few-shot learning} (FSL). ZSL is formally defined as predicting new classes (labels) that have never appeared in training, where the new classes are named as \textit{unseen classes} while the classes that have samples in training are named as \textit{seen classes} \cite{palatucci2009zero,lampert2009learning,farhadi2009describing}. FSL is to predict new classes for which only a small number of labeled samples are given \cite{fink2005object,fei2006one}. For convenience, we also refer to such new classes with not enough labeled samples as unseen classes, and to the other classes that have a large number of samples used in training as seen classes. In particular, when each unseen class has only one labeled sample given, FSL becomes \textit{one-shot learning} \cite{fei2006one}. ZSL has attracted wide attention in the past decade with quite a few solutions proposed \cite{wang2019survey,chen2021knowledge,fu2018recent}. One common solution is transferring knowledge (which could be samples, features or data representations, models, or model parameters) from seen classes to unseen classes so as to address the sample shortage and avoid training new models from scratch \cite{pan2009survey}. For example, in zero-shot image classification, image features that have already been learned by CNNs such as ResNet \cite{he2016deep} from images of seen classes are often directly re-used to build classifiers for unseen classes.
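As an illustration, the following minimal PyTorch sketch (the model choice, input shape and variable names are ours, not prescribed by the cited works) re-uses a pre-trained ResNet as a frozen feature extractor whose outputs can feed a classifier for unseen classes:
\begin{verbatim}
import torch
import torchvision.models as models

# Load a ResNet pre-trained on seen-class images (here: ImageNet weights).
resnet = models.resnet50(pretrained=True)
resnet.eval()

# Drop the final fully-connected layer, keeping the 2048-d feature extractor.
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)             # a dummy input image
    features = feature_extractor(image).flatten(1)  # shape: (1, 2048)
\end{verbatim}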
The key challenge is selecting the right knowledge to transfer and adaptively combining the transferred knowledge for a new prediction task. To this end, ZSL methods usually utilize auxiliary information that contains inter-class relationships. When ZSL was originally investigated for visual object recognition and image classification, the methods mainly used attributes that describe objects' visual characteristics (a.k.a. class attributes) \cite{lampert2009learning,farhadi2009describing}. Next, class textual information such as class names and sentence descriptions was widely studied using text mining and word embedding techniques \cite{frome2013devise,qiao2016less}. In the last five years, Knowledge Graphs (KGs), which are able to represent richer semantics including not only class attributes and class textual information but also class hierarchies (e.g., taxonomies), relational facts and so on, have attracted wide attention, and the KG-augmented ZSL methods have achieved the state-of-the-art performance on many benchmarks and tasks \cite{wang2018zero,kampffmeyer2019rethinking,geng2021ontozsl}. FSL, which started to attract wide attention around when one-shot learning was proposed for image classification \cite{fei2006one}, has a longer history and even more studies than ZSL \cite{wang2020generalizing}. Since the unseen classes have some labeled samples although their sizes are quite small, techniques of \textit{meta learning} (a.k.a. \textit{learning to learn}) \cite{lemke2015metalearning,hospedales2021meta} have been widely applied \cite{yin2020meta}. Meta learning is usually applied by either reducing the parameter searching space in training using meta parameters such as more optimized initial parameter settings, or transforming a classification problem into a metric learning problem where a testing sample is matched with the unseen classes based on their few-shot samples and meta-learned mappings (a minimal sketch of this metric-based matching is given below). KGs have been utilized to optimise such meta learning-based methods; for example, Sui et al. \cite{sui2021knowledge} retrieved relevant knowledge from a KG named NELL \cite{mitchell2018never} to construct task-relevant relation networks as mapping functions for addressing few-shot text classification. Meanwhile, the aforementioned idea of knowledge transfer can also be adopted for addressing FSL, where KG auxiliary information is becoming increasingly popular in recent years \cite{tsai2017improving,chen2018knowledge,peng2019few,zhang2020relation}. For example, Chen et al. \cite{chen2018knowledge} transferred the features learned by a CNN from flight delay forecasting tasks with a lot of historical records to a new forecasting task with not enough historical records, by exploiting a KG with different kinds of flight-related knowledge about, e.g., airports and airlines; Peng et al. \cite{peng2019few} extracted a KG from WordNet for representing class hierarchies and then used this KG to augment knowledge transfer for few-shot image classification. \vspace{0.1cm} \noindent\textbf{Motivation and Contribution.} Since KG has become a very popular form for representing knowledge and graph structured data, acting as the foundation of many successful AI and information systems \cite{hogan2021knowledge}, it is quite reasonable to use KGs to augment both ZSL and FSL as discussed above. Quite a few papers have been published on KG-aware low-resource learning, especially in the last five years, and this research topic is becoming more and more popular.
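Returning for a moment to the metric-learning idea referenced above, here is a minimal sketch in the spirit of prototypical networks (the \texttt{embed} function stands for a meta-learned mapping and, like all names here, is our own illustrative choice):
\begin{verbatim}
import numpy as np

def prototype_predict(query, support_sets, embed):
    # Match a query sample to the unseen class whose prototype
    # (the mean of its embedded few-shot samples) is nearest.
    prototypes = {cls: np.mean([embed(s) for s in samples], axis=0)
                  for cls, samples in support_sets.items()}
    q = embed(query)
    return min(prototypes,
               key=lambda cls: np.linalg.norm(q - prototypes[cls]))
\end{verbatim}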
Note that KG-aware low-resource learning includes not only investigations on using KGs to augment ZSL and/or FSL, but also investigations addressing tasks on the KG itself (such as KG completion), where the KG context is often considered when ZSL or FSL methods are applied or extended. By mid-December 2021, we had collected $50$ papers on KG-aware ZSL and $46$ papers on KG-aware FSL. To systematically categorize and compare all the proposed methods, and to present an overall picture of this promising field, a comprehensive survey is now urgently needed. In this paper, we \textit{(i)} introduce KGs and their construction methods for low-resource learning, \textit{(ii)} categorize, analyze and compare different kinds of KG-aware ZSL and FSL methods, \textit{(iii)} present the low-resource learning tasks as well as their evaluation resources across multiple domains including computer vision (CV), NLP and KG completion, and \textit{(iv)} discuss the existing challenges and potential directions of KG-aware low-resource learning. This survey is suitable for all AI researchers, especially those who are about to enter the domain of low-resource learning, those who have already been working on low-resource learning but are interested in solutions that integrate knowledge representation and reasoning, and those who are working on KG applications and semantic techniques. \vspace{0.1cm} \noindent\textbf{Related Literature Reviews.} Several papers include literature reviews relevant to low-resource learning, but they are all quite different from this survey. \begin{itemize} \item The two survey papers \cite{wang2019survey} and \cite{wang2020generalizing} systematically review the ZSL methods by 2019 and the FSL methods by 2020, respectively, mainly from the perspective of problem setting (e.g., whether the unlabeled testing samples are used or not in training), ML theory (e.g., which prediction error to reduce), and methodology (e.g., data focused, model focused and learning algorithm focused). However, they do not consider the categorization and deep analysis from the perspective of auxiliary information, and fail to collect most KG-aware methods. \item The very recently released paper \cite{hu2021can} reviews both ZSL and FSL methods that use or aim at structured data. Structured data, however, is more general than KG with a much larger scope, and thus \cite{hu2021can} collects only a small part of the papers on KG-aware ZSL and FSL research. It includes $19$ papers about KG-aware ZSL and $21$ papers about KG-aware FSL, while this survey has $50$ papers and $46$ papers, respectively. This survey also has a more fine-grained method categorization, and additional technical analysis on KGs and their construction for low-resource learning. Meanwhile, \cite{hu2021can} focuses more on addressing problems in structured data by ZSL and FSL methods, but less on augmenting ZSL and FSL methods. \item The paper \cite{chen2021knowledge} is our previous survey and perspective paper published in the IJCAI 2021 Survey Track. It briefly categorizes different external knowledge used in ZSL with incomplete reviews on KG-aware ZSL papers, and it does not cover FSL. \item The benchmarking paper \cite{xian2018zero} was published in 2018. It reviews around $10$ ZSL methods that mainly utilize class attribute and text information as the auxiliary information, focusing on their evaluation and result comparison on the image classification task.
This paper covers neither the state-of-the-art ZSL methods proposed in the last $3$ years nor KG-aware ZSL methods. Similarly, the survey paper \cite{fu2018recent} reviews ZSL papers published before 2018, mainly focusing on ZSL studies on CV tasks. \end{itemize} \vspace{0.1cm} \noindent\textbf{Paper Organization.} The remainder of this survey is organized as follows. Section \ref{sec:preliminary} introduces the preliminaries, including the definitions of ZSL and FSL, their notations, an overall view of the auxiliary information and general solutions. Section \ref{sec:kg} introduces the definition and scope of KG, as well as KG construction methods in low-resource learning. Section \ref{sec:zsl} reviews KG-aware ZSL methods which are categorized into four paradigms: mapping-based, data augmentation, knowledge propagation and feature fusion. For each paradigm, we further introduce different categories and their corresponding methods. Section \ref{sec:fsl} is similar to Section \ref{sec:zsl} but reviews KG-aware FSL methods. Section \ref{sec:app} introduces the development and resources of KG-aware ZSL and FSL in different tasks across CV, NLP and KG completion. Section \ref{sec:challenge} discusses the existing challenges and the future directions of KG-aware low-resource learning. Section \ref{sec:conclusion} concludes this paper. \section{Preliminary on Low-resource Learning}\label{sec:preliminary} \subsection{Zero-shot Learning} ZSL has been applied in many different tasks, varying from image classification and visual question answering in CV to text classification and knowledge extraction in NLP, and from link prediction in KG completion to protein function prediction in bioinformatics. Although the exact ZSL problem definition may vary from task to task and from paper to paper, it can be summarized and expressed in a common way. In this part, we first give the definition and settings of ZSL, and then present an overall picture of its auxiliary information and the existing method categorization. \subsubsection{Problem Definition} In ML classification, a classifier is trained to approximate a target function $f:x\rightarrow y$, where $x$ represents the input data and $y$ represents the output class which is often known as the label. In image or text classification, $x$ is the input image or text while $y$ is the label to output. Sometimes, one input can be annotated by multiple labels, which is known as multi-label classification. Regarding question answering, we refer to giving one answer or multiple answers to a natural language question w.r.t. a given textual context, where $y$ is the answer. Visual question answering is similar but the context is an image or a video. Knowledge extraction refers to extracting entities, relations or events from natural language text, where $x$ includes a textual context like a sentence, a paragraph or a document, while $y$ is the entity, relation or event. Meanwhile, we also regard related tasks such as entity and relation linking, which matches an entity or relation mention in text with an entity or relation in a KG, and entity typing, which assigns defined classes to an entity mention, as knowledge extraction. For KG link prediction, the input $x$ consists of two elements of a triple as well as their contexts (e.g., associated triples) while $y$ is the triple's third element. Although the outputs of all these tasks are different, in this survey we sometimes call them classes or labels for simplicity.
In the usual settings of these tasks, the class $y$ is often limited to a given candidate set during prediction. In normal supervised learning, every class to be predicted in testing has associated labeled samples used for training, while \textit{standard ZSL} aims to predict testing samples with some candidate classes that have never appeared in the training samples. Formally, in \textit{standard ZSL}, we denote \textit{(i)} the training samples as $\mathcal{D}_{tr} = \{(x, y) | x \in \mathcal{X}_{s}, y \in \mathcal{Y}_s\}$ where $\mathcal{X}_{s}$ and $\mathcal{Y}_s$ represent the training sample inputs and the seen classes, respectively; \textit{(ii)} the testing samples as $\mathcal{D}_{te} = \{(x, y) | x \in \mathcal{X}_u, y \in \mathcal{Y}_u\}$ where $\mathcal{X}_{u}$ and $\mathcal{Y}_u$ represent the testing samples to predict and the unseen classes, respectively, with $\mathcal{Y}_u \cap \mathcal{Y}_s = \emptyset$. The target of ZSL methods is to predict the classes of $\mathcal{X}_{u}$ as correctly as possible. When the candidate classes are set to $\mathcal{Y}_u \cup \mathcal{Y}_s$ (i.e., both seen classes and unseen classes are considered in prediction), the problem is known as \textit{generalized ZSL}. In addressing some tasks such as link prediction for KG completion, relation and entity extraction from text, and natural language question answering, the original function $f$ is often transformed into a scoring function by moving $y$ to the input, denoted as $f':(x,y)\rightarrow s$, where $s$ is a score indicating the truth of the combination of $x$ and $y$. With $f'$, the class of a testing sample $x$ in $\mathcal{X}_u$ is often predicted by finding out the class in $\mathcal{Y}_u$ (or $\mathcal{Y}_u \cup \mathcal{Y}_s$) that maximizes the score $s$. Namely, the original label prediction problem is modeled as a ranking problem. For example, in modeling link prediction with a head entity and a tail entity given as the input $x$, the target relation between the two entities is then the class $y$ to predict, and the score $s$ quantifies the relation's correctness in indicating the relationship between the head entity and the tail entity. \subsubsection{Auxiliary Information} The auxiliary information is often represented in symbolic forms such as class attributes, class names, class textual descriptions and KGs. To be used for supporting ZSL, it is often encoded into sub-symbolic representations (i.e., vectors), either independently or jointly with some other learning modules. We denote the initial encoding function of a class with its auxiliary information as $h: y \rightarrow \bm{y}$ where the bolded $\bm{y}$ represents the vector of the class $y$, $y \in \mathcal{Y}_u \cup \mathcal{Y}_s$. The raw input $x$ could also be encoded by, e.g., some pre-trained models or hand-crafted rules for feature extraction before it is input to the prediction model ($f$ or $f'$). This step is optional but is often adopted. We denote this initial encoding function as $g: x \rightarrow \bm{x}$ where the bolded $\bm{x}$ represents the pre-processed vector of the original input $x$, $x\in\mathcal{X}_s\cup\mathcal{X}_u$. Since there are no labeled samples for the unseen classes, ZSL solutions heavily rely on auxiliary information. In the early years, when ZSL was proposed around 2009 for image classification, the majority of the solutions utilized class attributes, which are often a set of key-value pairs describing objects' visual characteristics.
The simplest class attributes are binary annotations; for example, ``furry'' and ``striped'' indicate whether an animal looks furry and striped, respectively \cite{lampert2009learning,lampert2013attribute}; while the annotation ``has wheel'' is used in recognizing vehicles \cite{farhadi2009describing}. Relative attributes, which enable comparing the degree of each attribute across classes (e.g., ``bears are furrier than giraffes'') \cite{parikh2011relative}, are more expressive. Another kind of more expressive visual attributes are those associated with real values for quantifying the degree. One typical example is the animal image classification benchmark named Animals with Attributes (AwA), where each attribute annotated to an animal class is associated with a real value ranging from $0$ to $1$ \cite{lampert2013attribute,xian2018zero}. Attributes lead to some very classic methods for zero-shot image classification such as DAP, which first predicts the attributes of a testing image and then determines its class based on the predicted attributes \cite{lampert2009learning}. The utilization of attributes in other ZSL tasks is not as popular as in CV tasks, but it is still feasible; for example, \cite{hao2020inductive} utilizes node attributes with categorical values to address KG link prediction involving unseen entities which have never appeared in the training triples. The advantages and disadvantages of the attribute auxiliary information are quite obvious: it is easy to use and quite accurate with little noise, but it cannot express complex semantics for some tasks and is not easily accessible, usually requiring annotation by human beings or even domain experts. Since around 2013, class textual information, varying from words and phrases such as class names to long text such as sentences and documents for describing classes, started to attract wide attention for addressing ZSL problems. In zero-shot image classification, the studies \cite{norouzi2014zero, socher2013zero,frome2013devise} utilize the class names; the studies \cite{elhoseiny2013write,qiao2016less} prefer to use class sentence descriptions from encyclopedia articles; the study \cite{reed2016learning} collects more fine-grained and compact visual description sentences via crowdsourcing on the Amazon Mechanical Turk (AMT) platform. In zero-shot KG link prediction, the study \cite{qin2020generative} utilizes relation sentence descriptions from the KG itself for addressing unseen relations. In zero-shot entity extraction from text, the study \cite{logeswaran2019zero} predicts unseen entities in text of a new domain by using the entities' encyclopedia articles. To encode the semantics of a class name, one approach is directly using its words' vectors from a word embedding model (such as Word2Vec \cite{mikolov2013distributed}) that has been pre-trained on a general-purpose or domain-specific corpus. However, this makes class semantic encoding and prediction model training detached, with no interaction between them. A coupled approach is jointly learning the prediction model and the class semantic encoding; for example, DeViSE jointly fine-tunes a skip-gram word embedding model and an image classifier \cite{frome2013devise} (a schematic sketch of this style of compatibility-based prediction is given below). Long text such as sentences and documents contains more yet noisier information, and thus some additional methods for feature learning and selection over the text (or text embedding) have been considered.
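As referenced above, here is a schematic sketch of compatibility-based zero-shot prediction with class-name embeddings (an illustration of the general idea, not the exact architecture of any cited method; the bilinear form, the matrix W and all names are our own choices):
\begin{verbatim}
import numpy as np

def score(x_feat, class_vec, W):
    # Bilinear compatibility s = x^T W y between an input feature x
    # and a class-name word vector y; W is a learned projection matrix.
    return x_feat @ W @ class_vec

def predict(x_feat, class_vecs, W):
    # Rank the candidate (unseen) classes and return the best-scoring one.
    return max(class_vecs,
               key=lambda c: score(x_feat, class_vecs[c], W))
\end{verbatim}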
Among the aforementioned studies, \cite{elhoseiny2013write} and \cite{qin2020generative} extract features from the text by the TF-IDF algorithm, through which the vectors of some critical words get more weight; the study \cite{qiao2016less} initially encodes the class descriptions into simple bag-of-words vectors, and then jointly learns text features and the image classifier; \cite{reed2016learning} also jointly learns text features and the image classifier, but considers both word-level and character-level text features. In summary, the text information is easy to access for common ZSL tasks. It can be extracted not only from the data of the ZSL tasks themselves but also from encyclopedias, Web pages and other online resources. However, it is often noisy with irrelevant words, and the words are often ambiguous, failing to accurately express fine-grained, logical or quantified inter-class relationships. In recent years, graph structured knowledge that belongs to the scope of KG, such as class hierarchies and relational facts, has become more and more popular in ZSL research, with very promising performance achieved. Such knowledge can often express richer semantics than attributes and text, even including logical relationships; at the same time, it is becoming more available with the development of KG construction techniques and the availability of many public KGs such as WordNet \cite{miller1995wordnet}, ConceptNet \cite{speer2017conceptnet} and Wikidata \cite{vrandevcic2014wikidata}. In this survey, we mainly review KG-aware ZSL studies, and thus we use Section \ref{sec:kg} to independently introduce KGs and their construction for low-resource learning, and Section \ref{sec:zsl} to review KG-aware ZSL methods. \subsubsection{Existing Method Categorization} The survey paper \cite{wang2019survey} divides general ZSL methods into the following two categories: \begin{itemize} \item \textit{Classifier-based}. The classifier-based methods directly learn a classifier for each unseen class. They can be further divided into \textit{(i)} \textit{Corresponding Methods} which exploit the correspondence between the binary one-vs-rest classifier for each class and its corresponding encoding of the auxiliary information, \textit{(ii)} \textit{Relationship Methods} which calculate and utilize the relationships among classes, and \textit{(iii)} \textit{Combination Methods} which combine classifiers for basic elements that are used to constitute the classes. \item \textit{Instance-based}. The instance-based methods obtain labeled samples belonging to the unseen classes and use them for learning and prediction. They are further divided into several subcategories: \textit{(i)} \textit{Projection Methods} which learn a function to project the input and the class encoding into the same space (i.e., the class encodings after projection are regarded as labeled samples), \textit{(ii)} \textit{Instance-borrowing Methods} which transfer samples from seen classes to unseen classes, and \textit{(iii)} \textit{Synthesizing Methods} which obtain labeled samples for the unseen classes by synthesizing pseudo samples. \end{itemize} This categorization is mainly from the perspective of ML theory and methodology. It aims at general ZSL methods, no matter what kind of auxiliary information is utilized.
In contrast, our categorization, which is to be introduced in Section \ref{sec:zsl}, is from the perspective of auxiliary information, and focuses on more fine-grained comparison and analysis of KG-aware ZSL methods. Meanwhile, since the survey \cite{wang2019survey} was published in 2019 while many KG-aware ZSL methods were proposed in the last two years, the collected KG-aware methods are quite incomplete and they are not fully considered in making the above categorization. \subsection{Few-shot Learning} Like ZSL, FSL has been very widely investigated for many different tasks across domains such as CV, NLP, KG completion and urban computing. Although the exact problem definition of FSL varies from task to task and from paper to paper, it can be summarized and uniformly expressed. In this part, we first introduce the definition of FSL and its notations, and then give a brief introduction to its auxiliary information and the existing method categorization. \subsubsection{Problem Definition} Briefly, FSL aims to classify data with candidate classes that have only a small number of labeled samples. Its definition is very close to that of ZSL except that each unseen class is associated with some labeled samples which can be utilized in both training and testing. For convenience, we re-use the notations of ZSL, and keep calling the normal classes with a large number of training samples seen classes ($\mathcal{Y}_s$) and those classes with only a small number of labeled samples unseen classes ($\mathcal{Y}_u$). As in ZSL, the normal training samples of the seen classes are denoted as $\mathcal{D}_{tr} = \left\{ (x,y) | x \in \mathcal{X}_{s}, y \in \mathcal{Y}_s \right\}$, the testing samples are denoted as $\mathcal{D}_{te} = \left\{ (x,y) | x \in \mathcal{X}_{u}, y \in \mathcal{Y}_u \right\}$, and the target function to learn is denoted as $f: x \rightarrow y$. Specifically, the few-shot labeled samples of the unseen classes are denoted as $\mathcal{D}_{few} = \left\{ (x,y) | x \in \mathcal{X}_{few}, y \in \mathcal{Y}_u \right\}$. Note $\mathcal{X}_{few} \cap \mathcal{X}_{u} = \emptyset$. The original target of learning $f$ can also be transformed into learning a scoring function for ranking the candidate classes, i.e., $f': (x,y) \rightarrow s $, where the original input and a candidate class act as the new input and the new output $s$ is a score indicating the truth of the class $y$ w.r.t. $x$. The number of labeled samples of an unseen class is relative. An unseen class can have just one labeled sample (i.e., one-shot learning). It can also have multiple labeled samples which, however, are not enough to train a robust model for the unseen class. To be more specific, we introduce the concept of \textit{expected risk} as in \cite{wang2020generalizing}. For an optimal hypothesis $\hat{h}$ (i.e., the target function $f$), the total error of a learned hypothesis w.r.t. $\hat{h}$ is composed of two parts: \textit{(i)} the approximation error $\mathcal{E}_{app}$ which measures how closely the best hypothesis $h^*$ in a given hypothesis set $\mathcal{H}$ can approximate $\hat{h}$, and \textit{(ii)} the estimation error $\mathcal{E}_{est}$ which measures the effect of minimizing the empirical risk, i.e., the distance of the learned hypothesis $\bar{h}$ from the best hypothesis $h^*$ \cite{bottou2018optimization}. As shown in Figure \ref{fig:risk}, model training for FSL unseen classes, which do not have enough labeled samples, incurs a much higher estimation error than model training for normal classes that have enough samples.
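In formulas, writing $R(\cdot)$ for the expected risk and $I$ for the number of labeled training samples, this decomposition (with notation following \cite{wang2020generalizing,bottou2018optimization}) reads
\begin{equation*}
\mathbb{E}\left[R(\bar{h})\right]-R(\hat{h})=\underbrace{R(h^*)-R(\hat{h})}_{\mathcal{E}_{app}(\mathcal{H})}+\underbrace{\mathbb{E}\left[R(\bar{h})\right]-R(h^*)}_{\mathcal{E}_{est}(\mathcal{H},I)},
\end{equation*}
where the estimation error $\mathcal{E}_{est}(\mathcal{H},I)$ grows as $I$ shrinks, which is exactly the few-shot regime depicted in Figure \ref{fig:risk}.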
However, it is important to note that the few-shot samples may not always be used for model training (or fine-tuning); they can sometimes be directly utilized in prediction (e.g., by methods of the transfer-based paradigm to be introduced in Section \ref{sec:tbp}), as in the prototype sketch above. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figure/risk.pdf} \caption{Expected risk with (a) sufficient samples and (b) insufficient samples for training \cite{wang2020generalizing}. \label{fig:risk}} \end{figure} \subsubsection{Auxiliary Information} All the auxiliary information used in ZSL, such as class textual information, class attributes and class hierarchies, can also be used for augmenting FSL. For example, Tsai and Salakhutdinov \cite{tsai2017improving} utilized the word embeddings of the classes and their ancestors in a hierarchy to generate quasi-samples from $\mathcal{D}_{tr}$ (i.e., samples of seen classes) for the unseen classes to address one-shot learning, while Zhu et al. \cite{zhu2020attribute} utilized visual attributes to address few-shot image classification by proposing an attribute-constrained image representation learning neural network. Besides, domain-specific background knowledge can also be applied in the form of, e.g., heuristic rules. Wu et al. \cite{wu2018exploit} proposed a simple heuristic idea to augment the samples, i.e., using the nearest label as a pseudo label to annotate each unlabeled sample. Different from ZSL, a small number of labeled samples are available for the unseen classes in FSL. These few-shot samples can also be regarded as an additional kind of auxiliary information. Most FSL methods focus on fully utilizing these few-shot samples. To address the sample shortage, they prefer ML algorithms such as multitask learning, which allows parameter sharing between tasks; meta learning, which directly predicts some of the parameters and hyper-parameters that are to be learned or adjusted; and metric learning, which compares a testing sample with the few-shot samples of each unseen class in some space after projection. KG has also been investigated in FSL as a form for representing different kinds of auxiliary information. Since this survey focuses on KG-aware low-resource learning, the KGs and their construction for FSL are introduced in Section \ref{sec:kg}, while methods of KG-aware FSL are reviewed in Section \ref{sec:fsl}. \subsubsection{Existing Method Categorization} According to the aspects that are augmented for addressing the sample shortage, the existing FSL methods are divided into the following three general categories in \cite{wang2020generalizing}: \begin{itemize} \item Methods that augment the \textit{data}. They increase the size of the few-shot sample set ($\mathcal{D}_{few}$) via data augmentation, e.g., by transforming samples from the training set $\mathcal{D}_{tr}$, transforming samples from similar labeled data, or generating samples from weakly labeled or unlabeled data. \item Methods that augment the \textit{model}. They reduce the original hypothesis set $\mathcal{H}$ to a smaller one, reducing the search space in training.
They can be further divided into \textit{(i)} methods of multitask learning, which share parameters from one task to another or regularize the parameters of the target task, \textit{(ii)} methods of embedding learning, which project samples into an embedding space where similar and dissimilar samples can be easily discriminated, \textit{(iii)} methods of generative modeling, which restrict the model distribution, and so on. \item Methods that augment the \textit{algorithm}. They guide and accelerate the search for the parameters of the best hypothesis $h^*$, e.g., by learning the optimizer or aggregating existing parameters. \end{itemize} This is a systematic categorization of general FSL. However, it has very limited coverage of KG-aware FSL methods, and it ignores the role of the auxiliary information, especially KGs. In this survey, we categorize and compare KG-aware FSL methods from the perspective of how the KG is exploited. See more details in Section \ref{sec:fsl}. \section{Knowledge Graph} \label{sec:kg} \subsection{Definition and Scope} The Knowledge Graph (KG) is widely used for representing graph structured knowledge, and has achieved great success in many domains, such as search engines, recommendation systems, clinical AI, personal assistants and natural language understanding \cite{Pan2016,hogan2021knowledge,weikum2020machine}. In this part, we first describe what a KG is from the Semantic Web perspective, and then introduce other KG definitions that are widely used in different domains such as CV and NLP. In the Semantic Web, a KG is often largely composed of statements in the form of RDF\footnote{Resource Description Framework, \url{https://www.w3.org/TR/rdf11-concepts/}.} triples \cite{Pan2009,domingue2011handbook}. Each RDF triple is denoted as ($s, p, o$), where $s$ represents the subject, which should be an entity, $p$ represents the predicate, and $o$ represents the object, which can be either an entity or a datatype value. Some statements are relational facts. In this case, $o$ is an entity, and $p$ is a relation between two entities (a.k.a. an object property); $s$ and $o$ are also known as the head entity and tail entity, respectively. A set of relational facts composes a multi-relational graph whose nodes correspond to entities and whose edges are labeled by relations. Some RDF triples represent literals, e.g., entity attributes. In this case, the predicate $p$ is a data property and $o$ is a literal with some data type such as string, date, integer or decimal. The literals also include KG meta information such as an entity's label, textual definition and comment, which are represented via built-in or bespoke annotation properties. In addition to the relational facts and literals, KGs are often accompanied by an ontology as the schema, using languages from the Semantic Web community such as RDFS\footnote{RDF Schema, \url{https://www.w3.org/TR/rdf-schema/}} and OWL\footnote{Web Ontology Language, \url{https://www.w3.org/TR/owl2-overview/}} for richer semantics and higher quality \cite{domingue2011handbook,horrocks2008ontologies}. Such ontologies often define classes (a.k.a. concepts), properties (i.e., the terms used as relations), concept and relation hierarchies, constraints (e.g., relation domain and range, and class disjointness), and logical expressions such as relation composition.
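As a concrete illustration of these triple forms, the following minimal sketch uses the Python rdflib library to state a relational fact, a literal and two schema axioms; the namespace and the entity names are invented for the example, and the built-in vocabularies it uses are detailed next.
\begin{verbatim}
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")  # invented namespace
g = Graph()

# A relational fact (s, p, o) where o is an entity:
g.add((EX.zebra, EX.livesIn, EX.savanna))
# A literal triple, here a textual comment (meta information):
g.add((EX.zebra, RDFS.comment, Literal("A striped African equine.")))
# Schema knowledge expressed with built-in vocabularies:
g.add((EX.zebra, RDF.type, EX.Equine))          # class membership
g.add((EX.Equine, RDFS.subClassOf, EX.Animal))  # class hierarchy

print(g.serialize(format="turtle"))
\end{verbatim}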
Languages such as RDF, RDFS and OWL have defined a number of built-in vocabularies for representing this knowledge, such as \textit{rdf:type}, \textit{rdfs:subClassOf}, \textit{owl:disjointWith} and \textit{owl:someValuesFrom}. Note that RDFS also includes some built-in annotation properties such as \textit{rdfs:label} and \textit{rdfs:comment} for defining the above-mentioned meta information. With these vocabularies, an ontology can be represented as RDF triples; for example, the subsumption between two classes can be represented by the predicate \textit{rdfs:subClassOf}, while the membership between an instance and a class can be represented by the predicate \textit{rdf:type}. The ontology alone, which is widely used to define domain knowledge, conceptualization and vocabularies such as terms and taxonomies, is also widely recognized as a KG, and its classes are sometimes also called entities. One typical example is SNOMED CT, which systematically organizes medical terms as classes (entities) with names, definitions, existential restrictions, tree-like categorizations and so on \cite{schulz2009snomed}. It is worth mentioning that KGs, especially OWL ontologies and relational facts equipped with ontologies, can support symbolic reasoning according to Description Logics \cite{baader2017introduction}, such as consistency checking upon changes \cite{FlourisHPPW06}, which can find logical violations, and entailment reasoning, which infers hidden knowledge. Besides the relational facts, literals and ontologies defined following the standards of the Semantic Web community, we also regard graph structured knowledge in some other forms as KGs, according to the terminologies and definitions used in other communities including ML, database, CV and NLP. One popular KG form is the Semantic Network, which can be understood as a graph that connects different concepts (entities), often with labeled edges representing different relationships. Two such KGs that are widely used in many domains are WordNet, a lexical database with different relationships between words \cite{miller1995wordnet}, and ConceptNet, which stores commonsense knowledge and relationships between different terms \cite{speer2017conceptnet}. We further relax the scope of KGs to single-relation graphs, such as a simple taxonomy (i.e., a set of hierarchical classes), and graphs with weighted edges, which may represent quantitative relationships such as similarity and distance between entities. In this survey, we also place logical rules of different forms, such as Horn clauses, Datalog rules and SWRL\footnote{Semantic Web Rule Language, \url{https://www.w3.org/Submission/SWRL/}} rules, as well as their soft or fuzzy extensions (i.e., weighted rules) \cite{PSTH05}, within the scope of KG. This is because many of these rules can be transformed into equivalent relational facts and ontological knowledge, and vice versa \cite{horrocks2004proposal,krisnadhi2011owl}. They can often be understood as logic models over KGs, through which hidden knowledge can be inferred. \subsection{Construction} Nowadays, there are many existing KGs, which have been constructed in different ways.
Those high-quality domain-specific ontologies, such as the medical ontology SNOMED \cite{schulz2009snomed} and the food ontology FoodOn \cite{dooley2018foodon}, are often directly constructed by domain experts via collaboration, while many general purpose KGs such as Yago \cite{rebele2016yago}, DBpedia \cite{auer2007dbpedia} and Wikidata \cite{vrandevcic2014wikidata} are constructed via crowdsourcing --- they are either extracted from existing crowdsourced resources such as Wikipedia or directly contributed by volunteers. To be more comprehensive, many KGs integrate different knowledge resources and databases; for example, ConceptNet \cite{speer2017conceptnet}, which was originally developed by crowdsourcing, further fused knowledge from DBpedia, Wiktionary, OpenCyc and so on. In fact, solutions and technologies of Linked Open Data \cite{bizer2011linked}, Ontology Networks and Ontology Alignment \cite{euzenat2007ontology} can all be used for constructing KGs via integration. Last but not least, with the development of data mining, ML and other data analysis techniques, knowledge extraction from unstructured and semi-structured data such as Web pages, tables and text has recently been widely investigated and used for KG construction; for example, NELL is continuously extracted from the Web \cite{mitchell2018never}, while Google's KG is extended with knowledge extracted from tables in Web pages \cite{cafarella2018ten}. For some specific low-resource learning tasks, there are exactly suitable KGs that can be directly applied. For example, Huang et al. \cite{huang2018zero} directly used the event ontology FrameNet \cite{baker2003framenet} to support their zero-shot event extraction method. However, for the majority of low-resource learning tasks, existing KGs usually cannot be directly applied due to their large sizes and irrelevant knowledge, and an (ad-hoc) KG has to be extracted or constructed. In this part, we mainly review techniques of constructing KGs for augmenting low-resource learning for specific tasks. We divide these techniques into three categories: sub-KG extraction from existing KGs, KG construction with task-specific data, and knowledge integration. \subsubsection{Sub-KG Extraction} Given a low-resource learning task, a straightforward solution is re-using an existing KG by extracting the relevant knowledge (i.e., a sub-KG). Next we present the large-scale general purpose KGs that have been exploited, briefly review the corresponding studies and summarize the methods used for extracting sub-KGs. Note that in this part we mainly focus on KGs for augmenting low-resource learning, and do not cover KGs that are just used for evaluating KG completion tasks (i.e., we ignore KGs acting only as the target for completion). \begin{itemize} \item \textbf{WordNet} is the most widely used KG for augmenting both ZSL \cite{kampffmeyer2019rethinking,wang2018zero,liu2018combining,lee2018multi,wei2019residual,akata2015label,li2015zero,geng2020explainable,geng2021ontozsl,amador2021ontology,wang2021zero,chen2020zero} and FSL \cite{chen2020knowledge,tsai2017improving,peng2019few,jayathilaka2021ontology,monka2021learning,akata2015label}. As a large lexical database with several different relationships between words, such as synonymy, hyponymy, hypernymy and meronymy \cite{miller1995wordnet}, it is often used to build task-specific class hierarchies, especially for image classification.
A typical solution to extract a sub-KG is to first match the classes of the ML task with nodes (entities) in WordNet, and then extract the matched nodes and their neighbouring nodes within $k$ hops via, e.g., breadth-first search (see the sketch after this list). For some benchmarks, especially the image sets extracted from ImageNet, there already exist matchings between classes and WordNet nodes. For example, in the study by Wang et al. \cite{wang2018zero}, a WordNet sub-graph with 30K nodes is extracted as a KG for an ImageNet subset that has 1K training classes, using the matchings provided by ImageNet. For some other benchmarks, the matchings are often built by simple name comparison with the help of human intervention for high accuracy. For example, Kampffmeyer et al. \cite{kampffmeyer2019rethinking} and Geng et al. \cite{geng2021ontozsl} manually matched all the $50$ classes in an animal image classification benchmark named AwA2 with WordNet nodes. In extracting the neighbours of the matched entities, when more hops are considered, the extracted sub-KG has higher knowledge coverage but more irrelevant knowledge. Besides the $k$-hop neighbourhood, some studies such as \cite{akata2015label} and \cite{monka2021learning} just extract the ancestors or parents of the matched nodes, yielding simple class hierarchies as the sub-KG. \item \textbf{ConceptNet} is a freely available Semantic Network with commonsense knowledge \cite{speer2017conceptnet}. It stores a large number of entities, which are either words or phrases, and facts of quite a few relations including \textit{Synonym}, \textit{IsA}, \textit{RelatedTo}, \textit{HasContext}, \textit{HasA} and so on; the \textit{IsA} relation is used to represent hyponyms and hypernyms. ConceptNet is often used in a similar way to WordNet: a sub-graph, which acts as class hierarchies for augmenting ZSL \cite{nayak2020zero,zhang2019tgg,roy2020improving,chen2021zero,nguyen2021dozen,zhang2019integrating,chen2021zerotext} and FSL \cite{zhang2019tgg,yang2021empirical,zhang2019integrating}, is extracted by matching classes to entities and selecting neighbouring entities. It is mostly applied in CV tasks but has also been explored in open information extraction. For example, Nguyen et al. \cite{nguyen2021dozen}, who worked on zero-shot entity extraction from text, first extracted nouns and pronouns with a part-of-speech algorithm from all sentences in the dataset, and then searched for their corresponding entities in ConceptNet and extracted the matched entities and their adjacent ones. Note that in many works that utilize ConceptNet, WordNet and other KGs, the matching between classes and KG entities is rarely described in detail and is assumed to be fully correct. \item \textbf{Freebase} is a large-scale general purpose KG with relational facts, contributed by multiple sources such as Wikipedia, MusicBrainz (a music database), the Notable Names Database (an online database of biographical details of famous people) and volunteers \cite{bollacker2008freebase}. Its official API has been shut down, but it can still be accessed as a dump or via Google's Knowledge Graph API, and it has been widely used for investigating KG techniques including KG-augmented ZSL \cite{ma2016label,imrattanatrai2019identifying,amador2021ontology} and FSL \cite{zhang2021knowledge,ma2016label}. Different from WordNet and ConceptNet, Freebase is mainly applied in open information extraction.
Entity mentions and relation mentions are matched with Freebase entities and relations, respectively, and the types of the matched entities, the super- and sub-relations of the matched relations, or the neighbourhood of the matched entities and relations are extracted as a sub-KG for augmentation. \item \textbf{Wikidata} is being increasingly used for different kinds of applications, but its application for augmenting ZSL and FSL had not attracted attention until recently, when two studies were proposed for augmenting few-shot relation extraction \cite{qu2020few,zhang2021knowledge} and another two for augmenting ZSL \cite{geng2021ontozsl,li2020logic}. Different from the above-mentioned methods that extract the neighbourhoods of the matched entities or relations, Qu et al. \cite{qu2020few} extracted the 10 nearest relations to each matched relation in the embedding space of TransE, for constructing a relation graph. Zhang et al. \cite{zhang2021knowledge} extracted concept-level knowledge of the relations from Wikidata, such as the concepts (classes) that a relation is associated with, and the concepts' hierarchy. Geng et al. \cite{geng2021ontozsl} also extracted concept-level knowledge of all the relations involved in a zero-shot KG link prediction task and constructed an RDFS schema. Li et al. \cite{li2020logic} considered two solutions to utilizing Wikidata for augmenting zero-shot relation extraction: they either directly utilized the relation embeddings by TransE, or mined logic rules from Wikidata to further integrate the relation embeddings. \item \textbf{DBpedia} is also a large-scale general purpose KG, whose knowledge is mainly from Wikipedia \cite{auer2007dbpedia}. It has also been used to augment ZSL, often acting as a complement of relational facts and literals such as entity descriptions \cite{geng2020explainable,geng2021ontozsl,chen2021zero}. DBpedia entities are often retrieved via its lookup service, which is based on a lexical index built on entity labels and descriptions\footnote{\url{https://lookup.dbpedia.org/}}, while the entity-associated facts and literals can be accessed from a DBpedia dump or its online SPARQL endpoint\footnote{\url{https://dbpedia.org/sparql}}. DBpedia also includes an ontological schema, and its hierarchical classes have been extracted by Amador et al. \cite{amador2021ontology} for augmenting zero-shot KG completion with unseen entities. \item \textbf{NELL} is a popular KG continuously extracted from the Web \cite{mitchell2018never}. We find two ZSL studies and one FSL study that utilize NELL. Wang et al. \cite{wang2018zero} extracted a sub-KG from NELL for zero-shot classification of images from NEIL --- an image repository whose classes are aligned with NELL entities \cite{chen2013neil}. Geng et al. \cite{geng2021ontozsl} extracted an RDFS schema (ontology) from NELL for augmenting zero-shot KG link prediction with unseen relations. Sui et al. \cite{sui2021knowledge} extracted entity concepts (entity classes) from NELL for augmenting few-shot text classification, where entities are retrieved via exact string matching using entity mentions in the text. \end{itemize}
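The $k$-hop neighbourhood extraction shared by the WordNet and ConceptNet usages above can be sketched as follows with the Python NetworkX library; the loaded KG graph, the class-to-entity matching and the choice of $k$ are all assumed to be given.
\begin{verbatim}
import networkx as nx

def extract_sub_kg(kg: nx.Graph, matched_nodes, k: int) -> nx.Graph:
    # Collect the matched entities and all entities reachable from
    # them within k hops (a breadth-first search from each seed).
    keep = set()
    for seed in matched_nodes:
        keep |= set(nx.single_source_shortest_path_length(
            kg, seed, cutoff=k))
    # A larger k gives higher knowledge coverage but also brings in
    # more irrelevant knowledge, as discussed above.
    return kg.subgraph(keep).copy()

# e.g., sub = extract_sub_kg(wordnet_graph, {"zebra.n.01"}, 2),
# assuming wordnet_graph has been loaded with synset ids as nodes.
\end{verbatim}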
Sub-KGs of some other KGs have also been extracted and exploited for augmenting low-resource learning, but these KGs are not as popular as the above six. Zhang et al. \cite{zhang2021knowledge} extracted concept-level relation knowledge from \textbf{UMLS} --- an ontology of medical concepts \cite{mccray2003upper} --- for few-shot relation extraction in the medical domain. Rios et al. \cite{rios2018few} extracted class hierarchies and class descriptions from \textbf{ICD-9} diagnosis and procedure labels for zero-shot and few-shot medical text classification. Luo et al. \cite{luo2020context} extracted a sub-KG for object relationships from \textbf{Visual Genome} --- a knowledge base that stores connections between image visual concepts and language concepts \cite{krishna2017visual} --- for augmenting zero-shot object recognition. The KG for augmenting zero-shot VQA in \cite{chen2021zero} also includes facts from \textbf{WebChild}, a large collection of commonsense knowledge from the Web \cite{tandon2014webchild}, besides knowledge from ConceptNet and DBpedia. Zhou et al. \cite{zhou2021encoding} trained their zero-shot question answering model with facts extracted from \textbf{WorldTree} (V2.0) \cite{xie2020worldtree} --- a knowledge base that contains explanations for multiple-choice science questions in the form of a graph, covering both commonsense and scientific knowledge. \subsubsection{Task-oriented KG Construction} Instead of utilizing existing KGs, some ZSL and FSL studies build task-specific KGs from non-KG auxiliary information. The classes' textual information, such as their labels, is the most frequently utilized information for mining inter-class relationships and further constructing KG edges. Palatucci et al. \cite{palatucci2009zero} connected a word (which corresponds to a class in that task) to another according to their co-occurrence in a text corpus. Lee et al. \cite{lee2018multi} calculated the WUP similarity of class labels, and used this similarity to build KG edges representing positive and negative inter-class relationships. Wei et al. \cite{wei2019residual}, Ghosh et al. \cite{ghosh2020all} and Wang et al. \cite{wang2021zero} all considered calculating and adding edges between entities that are close to each other according to their labels' word embeddings. Class attributes have also been exploited for mining inter-class relationships. Zhang et al. \cite{zhang2019tgg} built a KG for the CUB benchmark, which includes bird images of fine-grained classes, by computing the Hadamard product over the part-level class attributes. Hu et al. \cite{hu2021graph} and Chen et al. \cite{chen2020zero} both directly utilized the co-occurrence of class attributes to build edges between KG entities. Notably, Changpinyo et al. \cite{changpinyo2016synthesized} considered both class attributes and word embeddings to calculate weighted edges between entities. Different from the above methods that use auxiliary information for building KG edges, Zhao et al. \cite{zhao2020knowledge} and Geng et al. \cite{geng2020explainable,geng2021ontozsl} modeled the class attributes as additional KG entities and connected them to the entities of the classes, while Li et al. \cite{li2020transferrable,li2019large} generated new superclasses of the seen and unseen classes by clustering the class names, so as to construct class hierarchies for augmenting ZSL and FSL. Domain knowledge, which is often in the form of heuristics and logic rules, has also been used to construct task-specific KGs. Banerjee et al. \cite{banerjee2020self} used heuristics to create a synthetic KG with science facts from the QASC text corpus and commonsense facts from the Open Mind Common Sense knowledge (text) corpus, for addressing both zero-shot and few-shot question answering. Chen et al.
\cite{chen2020ontology} added existential restrictions (a kind of description logic construct that quantifies a class by its associated properties) to some classes of an animal taxonomy extracted from WordNet, so as to build an OWL ontology for the animal image classification benchmark AwA2. There are also some ZSL and FSL studies that extract structured knowledge from the task data (samples) for constructing KGs, which are then fed back into learning for augmentation. When Ghosh et al. \cite{ghosh2020all} constructed a KG for evaluating few-shot action classification methods, where some videos (samples) are given for each unseen class, they first extracted sample features for each class, then took the mean of these features as a KG entity, and finally calculated the cosine similarity between the feature means for the edges between entities. Bosselut et al. \cite{bosselut2021dynamic} generated a temporary KG on demand for each prediction request of zero-shot question answering, using its text context and a Transformer-based neural knowledge model named COMET \cite{bosselut2019comet}. Chen et al. \cite{chen2020zero} added a co-occurrence relation between two classes (food ingredients) by calculating their co-occurrence frequency in the training samples, besides the common class attributes and class hierarchies. \subsubsection{Knowledge Integration} Although some general purpose KGs contain a large quantity of knowledge and are being continuously extended, it is still common that the knowledge extracted from such a KG is incomplete or not fine-grained enough for a specific low-resource learning task. Therefore, some studies proposed to integrate knowledge extracted from different KGs and/or other resources for building a high quality task-specific KG. For example, Chen et al. \cite{chen2021zero} extracted RDF facts from three KGs --- ConceptNet, WebChild and DBpedia --- to generate a unified commonsense KG for augmenting zero-shot VQA; Geng et al. \cite{geng2020explainable,geng2021ontozsl} integrated class hierarchies from WordNet, relational facts and literals from DBpedia, and knowledge transformed from class attributes for constructing KGs for zero-shot image classification; Chen et al. \cite{chen2020zero} considered and integrated class hierarchies from WordNet, and class co-occurrence relations extracted from class attributes and samples, for a KG for zero-shot ingredient recognition from food images. Very recently, Geng et al. \cite{geng2021benchmarking} proposed a benchmarking study, where six KG-equipped ZSL benchmarks were created for three different tasks and used for evaluating different methods under different auxiliary information settings. The KG of each benchmark is based on the integration of multiple knowledge resources: those for zero-shot image classification contain knowledge from WordNet, ConceptNet, class attributes, class names and so on, while those for zero-shot KG completion and relation extraction contain relation textual information, schema information from Wikidata or NELL, human-authored logic rules and so on. As with matching classes to KG entities in sub-KG extraction, the alignment of entities and relations when integrating different knowledge parts is still mostly based on simple name matching or manual matching. Little attention has been paid to automatic knowledge integration methods for low-resource learning, and the impact of knowledge quality, such as the matching accuracy and the ratio of relevant or redundant knowledge, is often ignored.
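To make the embedding-based edge construction used by several of the task-oriented methods above (e.g., \cite{wei2019residual,ghosh2020all,wang2021zero}) concrete, here is a minimal sketch that connects classes whose label embeddings have a high cosine similarity; the embedding matrix and the threshold are assumptions of the example, not the settings of those papers.
\begin{verbatim}
import numpy as np

def build_similarity_edges(emb, threshold=0.5):
    # emb: (n, d) matrix; row i is the (word or feature) embedding
    # of class i. Returns weighted edges (i, j, cosine similarity).
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = unit @ unit.T  # pairwise cosine similarities
    n = len(emb)
    return [(i, j, float(sim[i, j]))
            for i in range(n) for j in range(i + 1, n)
            if sim[i, j] >= threshold]
\end{verbatim}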
\section{KG-aware Zero-shot Learning}\label{sec:zsl} According to their solutions for exploiting KGs, we divide the KG-aware ZSL methods into the \textit{Mapping-based Paradigm}, the \textit{Data Augmentation Paradigm}, the \textit{Propagation-based Paradigm} and the \textit{Class Feature Paradigm}. Table \ref{table:zsl} presents a brief summary of each paradigm, as well as more fine-grained method categorizations and their corresponding papers. We will next introduce the details of each paradigm. \begin{table*}[t] \footnotesize{ \centering \renewcommand{\arraystretch}{1.8} \begin{tabular}[t]{m{2.2cm}<{\centering}|m{4.5cm}<{\centering}|m{3cm}<{\centering}|m{3.5cm}<{\centering}}\hline \textbf{Paradigms} &\textbf{Summary} &\textbf{Categories} & \textbf{Papers} \\ \hline \multirow{3}{*}{Mapping-based} & \multirow{3}{*}{\makecell{These methods project the input and/or the \\ class into a common vector space where \\ a sample is close to its class w.r.t. some \\ distance metric, and prediction can be \\ implemented by searching the nearest class.}} & Input Mapping & \cite{palatucci2009zero,li2015zero,ma2016label,liu2018combining,imrattanatrai2019identifying,chen2020ontology,li2020transferrable,li2020logic} \\ \cline{3-4} & & Class Mapping & \cite{akata2013label,akata2015label,changpinyo2016synthesized,shah2019open,nayak2020zero} \\ \cline{3-4} & & Joint Mapping & \cite{ma2016label,huang2018zero,hao2020inductive,roy2020improving,chen2021zero,rios2018few,chen2021zerotext} \\ \hline \multirow{2}{*}{Data Augmentation} & \multirow{2}{*}{\makecell{These methods generate samples or \\ sample features for the unseen classes, \\ utilizing KG auxiliary information.}} & Rule-based & \cite{rocktaschel2015injecting} \\ \cline{3-4} & & Generation Model-based & \cite{zhang2019tgg,qin2020generative,geng2021ontozsl} \\ \hline \multirow{2}{*}{Propagation-based} & \multirow{2}{*}{\makecell{These methods propagate model parameters \\ or a sample's class beliefs from the \\ seen classes to the unseen classes via a KG.}} & Model Parameter Propagation & \cite{wang2018zero,kampffmeyer2019rethinking,wei2019residual,geng2020explainable,chen2020zero,ghosh2020all,wang2021zero} \\ \cline{3-4} & & Class Belief Propagation & \cite{lee2018multi,luo2020context,bosselut2021dynamic} \\ \hline \multirow{2}{*}{Class Feature} & \multirow{2}{*}{\makecell{These methods encode the input and the class \\ into features often with their KG contexts \\ considered, fuse these features and feed them \\ directly into a prediction model.}} & Text Feature Fusion & \cite{zhao2017zero,shi2018open,logeswaran2019zero,yao2019kg,banerjee2020self,zhou2021encoding,niu2021open,wang2021kepler,amador2021ontology,wang2021inductive,zha2021inductive,wang2021structure,gong2021prompt} \\ \cline{3-4} & & Multi-modal Feature Fusion & \cite{amador2021ontology,nguyen2021dozen,zhang2019integrating,ristoski2021kg} \\ \hline \end{tabular} \vspace{0.1cm} \caption{A summary of KG-aware ZSL paradigms.}\label{table:zsl} } \end{table*} \subsection{Mapping-based Paradigm} The mapping-based paradigm builds mapping functions for the input ($\mathcal{X}_s$ and $\mathcal{X}_u$, or their initial encodings) and/or the classes ($\mathcal{Y}_s$ and $\mathcal{Y}_u$, or their initial encodings), so that their vector representations after mapping lie in the same space, where whether a class is the label of a sample can be determined by matching their vectors using metrics such as cosine similarity or Euclidean distance.
We denote the mapping function for the input side as $\mathcal{M}$ and the mapping function for the class side as $\mathcal{M}'$. This paradigm has a large overlap with the category of Projection Methods defined in \cite{wang2019survey}, but we prefer to understand such solutions from the perspective of distance metric learning \cite{suarez2018tutorial} instead of sample generation. Meanwhile, the majority of the Projection Methods map both the input and the class, while our mapping-based paradigm also includes many ZSL methods that map only one side (either the input or the class). The mapping functions $\mathcal{M}$ and $\mathcal{M}'$ refer to mapping models, such as neural networks, that are learned by optimizing the sample-class matching degree on the labeled training data $\mathcal{D}_{tr}=\left\{(x,y)\,|\,x\in\mathcal{X}_s, y\in\mathcal{Y}_s \right\}$, e.g., by minimizing the sum of the inputs' Euclidean distances to their corresponding classes. They are different from the initial encoding functions $g$ and $h$. A mapping function can either directly use the raw input and the symbolic class auxiliary information as its input, or be fed with their initial encodings; in the former case, the mapping includes the initial encoding. According to the side to which the mapping is applied, we divide the ZSL methods of the mapping-based paradigm into three categories: \textit{Input Mapping}, \textit{Class Mapping} and \textit{Joint Mapping}. Figure \ref{fig:mapping} shows these three categories and their insights. \begin{figure} \centering \includegraphics[width=0.98\textwidth]{figure/mapping.pdf} \caption{Method categories and insights of the mapping-based paradigm. The dotted red circle denotes the vector space that the input and/or the class are mapped to. \label{fig:mapping}} \end{figure} \subsubsection{Input Mapping}\label{sec:input_mapping} As shown in Figure \ref{fig:mapping} (a), the input mapping methods learn a mapping model $\mathcal{M}$ to project the input $x$ (or the input's initial encoding $\bm{x}$) into the space of the classes' initial encodings. Input mapping is one of the earliest ideas used in ZSL, especially for zero-shot image classification. Palatucci et al. \cite{palatucci2009zero}, who are among the pioneers of ZSL research, proposed a two-stage mapping function denoted as $\mathcal{L}(\mathcal{S}(\cdot))$, where $\mathcal{S}$ represents the first stage, which projects the input to the individual dimensions of a semantic space, and $\mathcal{L}$ represents the second stage, which further projects the output of $\mathcal{S}$ to the class. In a case study on neural activity classification, they used class attributes, which are either from the classes' word similarity or manually created via crowdsourcing\footnote{The auxiliary information of \cite{palatucci2009zero} actually belongs to class attributes instead of KG. Since it claims to use a Knowledge Base, which is often regarded as a KG by many researchers, we still present this paper.}, as the intermediate individual dimensions, and set $\mathcal{S}$ and $\mathcal{L}$ to multiple-output linear regression and a 1-nearest-neighbour classifier, respectively. We can also understand $\mathcal{S}$ as the mapping function $\mathcal{M}$ and $\mathcal{L}$ as a function for calculating the distance. Li et al. \cite{li2015zero} proposed to map image features to the semantic space of image tags (classes), where each class's embedding is constructed from the word embeddings of itself and its superclasses in a hierarchy.
For a testing sample, their method learns to combine the semantic embeddings of seen classes as its vector representation, and compares this vector to all the classes' embeddings using cosine similarity. Chen et al. \cite{chen2020ontology} adopted a typical ZSL method named the Semantic Autoencoder (SAE) \cite{kodirov2017semantic}, and used ontology embedding for the initial class encoding $\bm{y}$. They used a linear encoder to map the input $\bm{x}$ to the class's semantic embedding space as $\bm{x}'$, where this encoder is learned on $\mathcal{D}_{tr}$ by minimizing a distance loss between $\bm{x}'$ and $\bm{y}$ and a reconstruction loss when $\bm{x}'$ is mapped back to $\bm{x}$. Li et al. \cite{li2020transferrable} proposed to use a Long Short-Term Memory (LSTM) network to model the class hierarchy and map the image features learned by a CNN to hierarchical classes. In detail, one LSTM module uses the output of the subclass-level fully connected layer of the CNN as input, while another LSTM module uses the output of the former LSTM module as well as the output of the superclass-level fully connected layer of the CNN as input, and predicts the probability distribution over superclasses. Liu et al. \cite{liu2018combining} used reinforcement learning and an ontology to obtain a set of rules which represent some visual attributes, and trained one Support Vector Machine (SVM) for each rule as a model indicating whether the rule is true or not. For a testing sample, its predicted rules are used to determine the class. This method is similar to the two-stage mapping function proposed in \cite{palatucci2009zero}, but its first stage projects the input into a concrete set of rules instead of a vector, which makes the method more interpretable. Input mapping has also been explored in ZSL tasks that extract entities and relations from text. Ma et al. \cite{ma2016label} pre-trained the class (entity type) embeddings using different KG embedding methods such as prototype-driven label embedding and hierarchical label embedding, and then proposed two mapping settings. One setting is to directly project the input (entity mention features) to the class embeddings, while the other belongs to \textit{Joint Mapping}, which will be introduced later. The input mapping is a linear transformation, implemented by multiplying the input by a weight matrix, and is learned by minimizing a weighted approximate-rank pairwise loss. Imrattanatrai et al. \cite{imrattanatrai2019identifying} learned initial text representations of relation mentions by word embedding and a Bidirectional LSTM network, and then used a linear transformation function to project these text representations into relation (property) vectors, which are calculated with KG TransE \cite{bordes2013translating} embeddings and some ad-hoc relation feature extraction methods. Li et al. \cite{li2020logic} projected the input text representation into the class embedding space by a simple linear transformation for zero-shot relation extraction (which is modeled as relation classification), where different class embedding methods combining word embedding, KG embedding and rule-guided KG embedding were explored and evaluated.
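To summarize the input mapping idea, the following is a minimal PyTorch sketch: a linear mapping $\mathcal{M}$ is trained on $\mathcal{D}_{tr}$ to bring each sample's encoding close to its class's initial encoding, and prediction searches for the nearest candidate class. The dimensions and the plain Euclidean loss are simplifying assumptions, not the setting of any particular paper above.
\begin{verbatim}
import torch

d_x, d_y = 2048, 300              # assumed input / class encoding dims
M = torch.nn.Linear(d_x, d_y)     # the input mapping function M
opt = torch.optim.Adam(M.parameters(), lr=1e-3)

def train_step(x, y_enc):
    # x: (b, d_x) input encodings; y_enc: (b, d_y) initial encodings
    # of the corresponding seen classes. Minimize the squared
    # Euclidean distance between mapped inputs and class encodings.
    loss = ((M(x) - y_enc) ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def predict(x, class_encs):
    # class_encs: (|Y_u|, d_y) encodings of the candidate (unseen)
    # classes; return the nearest class for each test input.
    return torch.cdist(M(x), class_encs).argmin(dim=1)
\end{verbatim}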
\subsubsection{Class Mapping} In contrast to input mapping, the class mapping methods learn a mapping model $\mathcal{M}$ to project the class $y$ (or the class's initial encoding $\bm{y}$) into the space of the input's initial encodings, as shown in Figure \ref{fig:mapping} (b). This idea is not as widely investigated as input mapping, and we gather four methods into this category; the first three are for zero-shot image classification, while the last is for zero-shot KG completion with unseen entities. Akata et al. \cite{akata2013label,akata2015label} proposed to learn a label embedding model as the mapping function, which projects the class's initial encoding into the feature space of the input images. They studied using class hierarchies as the auxiliary information, where each class is initially represented as a multi-hot vector --- the slots of the class and its ancestors are set to 1. Changpinyo et al. \cite{changpinyo2016synthesized} first generated a weighted graph where the relatedness between classes is represented, then introduced phantom classes through which seen and unseen classes can be synthesized by convex combination, and finally projected the vectors of the phantom classes into the input space. Nayak et al. \cite{nayak2020zero} proposed a novel transformer Graph Convolutional Network (GCN) architecture as the class mapping function, which non-linearly aggregates a class's neighbours in the KG to calculate the class's embedding. This method uses a compatibility score as the metric for the distance between the image CNN feature (input) and the class embedding. Shah et al. \cite{shah2019open} predicted KG triples with unseen entities using their text descriptions. Their method first individually embeds the entities from the graph perspective by common link prediction models such as TransE \cite{bordes2013translating} and DistMult \cite{yang2014embedding}, and from the text perspective by a word embedding model and an LSTM network, and then transforms the entities' text embeddings into the space of the entities' graph embeddings, where both linear and non-linear transformation functions such as a Multi-Layer Perceptron (MLP) were explored. \subsubsection{Joint Mapping} As shown in Figure \ref{fig:mapping} (c), the joint mapping methods learn one mapping $\mathcal{M}$ for the input $x$ (or the input's initial encoding $\bm{x}$) and another mapping $\mathcal{M}'$ for the class $y$ (or the class's initial encoding $\bm{y}$) at the same time, such that the mapped vectors are in the same space, where a sample is close to its label w.r.t. some distance metric. When a testing sample is to be predicted, it can be matched with classes in this space after mapping. This idea is often adopted for zero-shot entity/relation extraction, where features of both the input (text) and the class (entity/relation in a KG) are jointly learned. The zero-shot entity extraction method in \cite{ma2016label} supports both input mapping and joint mapping. For the latter, the entity mention features and the class embeddings are jointly mapped to one common space by multiplying them by matrices (parameters) which are learned by minimizing a weighted approximate-rank pairwise loss. Huang et al. \cite{huang2018zero} jointly mapped the input --- features of event mentions and their structural contexts parsed from the text --- and the event types, whose auxiliary information is an event ontology, into one vector space using a shared CNN with a structure composition layer, trained by minimizing an ad-hoc loss. Rios et al. \cite{rios2018few} worked on zero-shot text classification. They matched the input mapping, i.e., text features learned by a CNN, with the class mapping, which is based on initial word embeddings and GCN-based class hierarchy embedding. Chen et al.
\cite{chen2021zerotext} also worked on zero-shot text classification, by a linear joint mapping of the initial input encoding, which is a text embedding by BERT, and the initial class encoding, which is a word embedding tailored by the ConceptNet KG. Hao et al. \cite{hao2020inductive} investigated joint mapping in zero-shot KG link prediction. They proposed to jointly map the graph structure (input) and the new entity into a vector space for addressing unseen entities, using a ranking-motivated loss. The graph structure is mapped via a linear encoder over the one-hot encodings of the KG entities, while the entity is mapped by encoding its attributes using an MLP. Roy et al. \cite{roy2020improving} proposed a joint mapping method for zero-shot image classification. It maps the initial class semantic embedding, learned by a GCN on commonsense knowledge, and the initial image feature, learned by ResNet101 (a CNN), using a non-linear transformation named the Relation Network. This network first attaches a fully connected layer to the class semantic embedding, then concatenates its output with the initial input feature, and finally attaches two further fully connected layers; it is learned by minimizing an MSE loss. Note that we can also understand this method as a composition of two initial encodings without any mappings, with one trainable complex function (i.e., the Relation Network) as the distance metric. Chen et al. \cite{chen2021zero} applied a joint mapping method to zero-shot visual question answering (VQA). They mapped the input (i.e., a pair of an image and a question) and the KG entity (i.e., the candidate answer) to a common space, where the matched KG entity of the input was taken as the right answer. \subsection{Data Augmentation Paradigm} A straightforward solution for addressing sample shortage in ML is generating data with the guidance of task-relevant knowledge. In ZSL, some methods generate samples or sample features for the unseen classes and transform the problem into a standard supervised learning problem. We regard these methods as the Data Augmentation Paradigm. According to the method of generating new data, we further divide the existing KG-aware ZSL methods of this paradigm into two categories: \textit{Rule-based} and \textit{Generation Model-based}. \subsubsection{Rule-based} The background knowledge of a task can be explicitly represented by different kinds of rules (or other equivalent logic forms such as schema constraints and templates), which are regarded as a part of the KG. They enable deductive reasoning that derives hidden knowledge as new samples. This solution has been considered and empirically analyzed for generating new samples for training normal KG embedding and link prediction models \cite{zhang2019iteratively,chen2021owl2vec}, but has not been widely investigated for addressing zero-shot settings, as far as we know. Rocktaschel et al. \cite{rocktaschel2015injecting} worked on a zero-shot KG completion task which predicts an unseen relation for a pair of entity mentions extracted from text. They proposed three methods to inject first-order rules, which act as commonsense knowledge, into a matrix factorization model. One method is logically inferring additional relational facts in advance, before training the matrix factorization model (sketched below). In image classification and other tasks where the sample inputs and their features are uninterpretable real-valued vectors, generating data by rules becomes infeasible, and thus there are few ZSL methods of this category.
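The fact materialization just mentioned can be sketched as simple forward chaining; the rule format below --- Horn rules of the shape $p(h,t) \Rightarrow q(h,t)$ --- is a simplification for illustration, not the exact formalism of \cite{rocktaschel2015injecting}.
\begin{verbatim}
def materialize(facts, rules):
    # facts: set of (head, relation, tail) triples;
    # rules: pairs (body_rel, head_rel), read as
    #        body_rel(h, t) => head_rel(h, t),
    #        e.g. ("professorAt", "employeeAt").
    # Iterate to a fixpoint so that chained rules also fire; the
    # inferred triples then serve as additional training samples.
    facts, changed = set(facts), True
    while changed:
        changed = False
        for body_rel, head_rel in rules:
            for h, r, t in list(facts):
                if r == body_rel and (h, head_rel, t) not in facts:
                    facts.add((h, head_rel, t))
                    changed = True
    return facts
\end{verbatim}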
\subsubsection{Generation Model-based} With the wide investigation of conditional generation, models such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} and Variational Auto-encoders (VAEs) \cite{kingma2013auto} have become popular tools for generating data for addressing ZSL, especially for image classification \cite{geng2021ontozsl,huang2019generative,xian2018feature,zhu2018generative,chen2021knowledge,wang2019survey,zhang2019tgg}. We categorize these methods as Generation Model-based. However, since conditional generation models were not widely applied until around 2018, we only find three ZSL studies that combine them with KGs. Qin et al. \cite{qin2020generative} generated multiple features of an unseen relation, conditioned on the sentence embedding of its text description, for addressing a zero-shot KG completion problem which predicts the tail entity of a triple given a head entity and an unseen relation. The generator is jointly trained with a discriminator which distinguishes the generated features from the real features of seen relations; both the generator and the discriminator are neural networks composed of fully connected layers. Note that each generated feature of an unseen relation is directly used to calculate a testing triple's score, and the scores by multiple generated features are averaged. Geng et al. \cite{geng2021ontozsl} applied a similar generation-and-discrimination-based model to not only zero-shot KG completion but also zero-shot image classification. In both tasks, an ontology and its embeddings are used as the condition for generating sample features. In image classification, a one-vs-rest classifier is trained for each unseen class through normal supervised learning after multiple sample features are generated. Zhang et al. \cite{zhang2019tgg} worked on zero-shot image classification by transforming it into an FSL setting. Their method uses a generation module to generate an instance-level graph, where dummy features (instances) are synthesized for the unseen classes by GANs. They finally addressed the FSL problem over the instance-level graph by a propagation module and a meta learning strategy.
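A minimal PyTorch sketch of the conditional feature generation common to \cite{qin2020generative} and \cite{geng2021ontozsl} is given below: a generator synthesizes sample features from noise conditioned on a class's semantic (e.g., text or ontology) embedding, adversarially against a discriminator. The network sizes and the vanilla GAN setup are assumptions for illustration, not the exact architectures of those papers.
\begin{verbatim}
import torch, torch.nn as nn

d_sem, d_noise, d_feat = 300, 100, 2048   # assumed dimensions

# Generator: class semantic embedding + noise -> synthetic feature.
G = nn.Sequential(nn.Linear(d_sem + d_noise, 1024), nn.ReLU(),
                  nn.Linear(1024, d_feat))
# Discriminator: feature + class embedding -> real/fake logit.
D = nn.Sequential(nn.Linear(d_feat + d_sem, 1024), nn.ReLU(),
                  nn.Linear(1024, 1))

def generate(sem, n):
    # Synthesize n features for the class whose semantic embedding
    # is `sem`; for an unseen class, these features can then train
    # a standard (e.g., one-vs-rest) classifier.
    z = torch.randn(n, d_noise)
    return G(torch.cat([sem.expand(n, -1), z], dim=1))
\end{verbatim}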
\subsection{Propagation-based Paradigm} Since the backbone of a KG is a (multi-relational) graph, information propagation by, e.g., GNNs is a straightforward and reasonable solution for utilizing the inter-class relationships, and has been widely investigated for implementing KG-aware ZSL. We classify the KG-aware ZSL methods whose core techniques are based on graph information propagation as the Propagation-based Paradigm. These methods align seen and unseen classes with KG entities. Some of them propagate model parameters from seen classes to unseen classes via the graph for building models for the unseen classes; the others directly propagate a testing sample's beliefs (probabilities) of the seen classes to infer its beliefs of the unseen classes. The former are classified as \textit{Model Parameter Propagation} while the latter are classified as \textit{Class Belief Propagation}. The idea of both kinds of methods is shown in Figure \ref{fig:propagation} (a). \begin{figure} \centering \includegraphics[width=0.975\textwidth]{figure/propagation.pdf} \caption{(a) Propagation of \textit{model parameters} or \textit{class beliefs} from some seen classes (blue circles) to an unseen class (red circle) in a KG where classes are aligned with entities. (b) Aggregation of the 1-hop neighbourhood (in blue) for embedding an unseen entity (red circle) in propagation-based FSL, where some auxiliary triples (i.e., few-shot samples) are given. \label{fig:propagation}} \end{figure} \subsubsection{Model Parameter Propagation} These KG-aware ZSL methods usually utilize the KG's graph structure and graph propagation models such as GNNs to approximate the parameters of the models (classifiers) of unseen classes by aggregating the parameters of the seen classes' models, which are trained on $\mathcal{D}_{tr}$. In image classification, the approximated parameters are often the weights that linearly combine image features. Wang et al. \cite{wang2018zero} aligned image classes with WordNet \cite{miller1995wordnet} entities, trained a one-vs-rest classifier for each seen class with image features by a pre-trained CNN named ResNet-50, and then directly used a GCN to predict the image feature combination weights of each unseen class. Wei et al. \cite{wei2019residual} aimed to address the same ZSL problem as \cite{wang2018zero}, but proposed to use a Residual Graph Convolutional Network (ResGCN), which utilizes residual connections between hidden layers so as to alleviate the problems of over-smoothing and over-fitting. Ghosh et al. \cite{ghosh2020all} applied a similar idea to the above two papers, using a 6-layer GCN to address zero-shot action recognition, which was modeled as video classification. Their evaluation, based on three different KGs, shows that the accuracy of the GCN is higher than that of linearly combining the classifiers of the top-4 closest seen classes of an unseen class. Wang et al. \cite{wang2021zero} constructed two single-relation KGs --- one for the class hierarchy from WordNet and the other for the class correlation mined from word embeddings --- for zero-shot image classification. They used two weight-shared GCNs over the two KGs to approximate classifier parameters for the unseen classes. In training, a contrastive loss, which encourages the consistency of the approximated classifiers from different KGs and enhances the discriminability of the different classifiers within the same KG, is used together with the parameter approximation loss. Geng et al. \cite{geng2020explainable} and Chen et al. \cite{chen2020zero} both proposed to add an attention mechanism to a GCN for approximating the parameters of image classifiers of unseen classes. The method of \cite{geng2020explainable} not only improves the accuracy but also provides explanations for the feature transfer from seen classes to unseen classes. To this end, the authors attached an attention layer after the GCN to calculate the contribution weights of different seen classes to an unseen class, so as to find the seen classes that are most important in transferring features to the unseen class. It is worth mentioning that they also built a KG composed of different kinds of knowledge for generating human-understandable explanations. The method of \cite{chen2020zero} uses a GCN to estimate the parameters of multi-label classifiers for zero-shot ingredient recognition from food images. Since the KG, which is composed of knowledge of the ingredient hierarchy, ingredient attributes and ingredient co-occurrence, has multiple different relations, an attentive multi-relational GCN is adopted, where different relations have different contributions in parameter propagation. Kampffmeyer et al.
\cite{kampffmeyer2019rethinking} investigated a similar idea to the above methods, but argued that the GCN, which was originally developed for classification, is not ideal for parameter regression in ZSL. Instead, they used a Graph Propagation Module (GPM) that consists of only two layers, together with its two extensions --- Dense GPM, which enables direct information propagation between entities that are indirectly connected by some intermediate entities, and Attentive Dense GPM, which further weights the contributions of different neighbouring entities according to their distances to the target entity whose corresponding classifier is to be predicted. According to the evaluation on a large ImageNet benchmark and the KG of WordNet, GPM and its extensions often achieve better performance than the GCN method proposed in \cite{wang2018zero}.
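The core of these methods can be sketched as a GCN that regresses classifier weights: over the class KG, every entity receives a predicted weight vector, supervised only at the seen classes. The two-layer network and MSE loss below are a minimal simplification in the spirit of \cite{wang2018zero}, not a faithful reimplementation.
\begin{verbatim}
import torch, torch.nn as nn

class WeightPredictor(nn.Module):
    # Two-layer GCN mapping class entity features to the weight
    # vector of each class's one-vs-rest classifier.
    def __init__(self, d_in, d_hidden, d_weight):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d_weight)

    def forward(self, A_hat, H):
        # A_hat: (n, n) normalized adjacency of the class KG;
        # H: (n, d_in) initial entity features, e.g. word embeddings.
        H = torch.relu(self.lin1(A_hat @ H))   # first propagation
        return self.lin2(A_hat @ H)            # predicted weights

def regression_loss(pred_W, true_W, seen_idx):
    # Supervise only the seen classes, whose classifier weights were
    # pre-trained on D_tr; the rows of pred_W at unseen classes are
    # then used as their classifiers at test time.
    return ((pred_W[seen_idx] - true_W[seen_idx]) ** 2).mean()
\end{verbatim}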
\subsubsection{Class Belief Propagation} This kind of propagation-based ZSL method usually first initializes a testing sample's beliefs (probabilities) of all the classes, and then utilizes the class connections in the KG and a propagation model learned from $\mathcal{D}_{tr}$ to infer the beliefs of the unseen classes, or the beliefs of both the seen and unseen classes. These methods are often applied to the case where a sample is associated with multiple classes, and the relationships between these classes, such as co-occurrence, can be utilized for decision making. One typical work of this kind is the zero-shot multi-label image classification study by Lee et al. \cite{lee2018multi}, where multiple classes are predicted for each testing image and some classes are unseen in training. The method includes two functions: the first uses a gated recurrent update mechanism to model the belief propagation between two KG entities, where the propagation between a seen class entity and an unseen class entity is directional (from seen to unseen), while the second is a standard fully-connected neural network which outputs a final belief for each entity according to its latest belief status after several iterations of propagation. Note that the initial belief status of an entity is determined by the sample's feature and the corresponding class's word embedding. Luo et al. \cite{luo2020context} worked on the task of recognizing multiple interacting objects in an image, where some objects are unseen in training. They proposed a method that uses a Conditional Random Field to infer the unseen objects using the recognized seen objects in the image and a KG with prior knowledge about the relationships between objects. Bosselut et al. \cite{bosselut2021dynamic} focused on zero-shot question answering. They proposed to construct a context-relevant commonsense KG from deep pre-trained language models, where the question acts as the root entity and the answer choices act as leaf entities; they then infer over the graph by aggregating paths to find the right answer. Different from \cite{lee2018multi} and \cite{luo2020context}, this work finally predicts only one answer (class) for each question (sample), but associates one question with multiple candidate answers in inference. \subsection{Class Feature Paradigm}\label{sec:zsl_class_feature} Many recent ZSL methods encode the class $y$ as features (a vector) and then directly use them, together with the original input $x$, as the new input of a prediction model. Namely, they learn the transformed function $f': (x, y) \rightarrow s$, where $s$ is a score that indicates whether $y$ is a class of $x$, using the samples of seen classes, i.e., $\mathcal{D}_{tr}$. The class features can be either separately learned (i.e., using the initial encoding) or jointly learned with the prediction model. In prediction, the unseen classes are transformed into features in the same way, and their combinations with a testing sample are scored by $f'$. We regard such ZSL methods as the Class Feature Paradigm; its general idea is shown in Figure \ref{fig:class_feature}. This actually transforms the ZSL problem into a classic \textit{domain adaptation} problem, where the input distribution of the training data is different from that of the testing data \cite{ben2010theory}. In this paradigm, the utilization of KGs for augmenting ZSL is often implemented by injecting their semantics into the class features via, e.g., semantic embedding. According to the types of the features of $x$ and $y$, we further classify the KG-aware ZSL methods of this paradigm into two categories: \textit{Text Feature Fusion} and \textit{Multi-modal Feature Fusion}. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{figure/class_feature.pdf} \caption{Insight of the class feature paradigm for KG-aware low-resource learning. \label{fig:class_feature}} \end{figure} \subsubsection{Text Feature Fusion} In some KG-aware ZSL studies, text is utilized as critical auxiliary information. One typical example is KG completion with unseen entities, where entities are described by name phrases and/or textual descriptions; another typical example is zero-shot question answering, where the input and the class are actually both text. With the development of word embedding models, especially pre-trained language models such as BERT \cite{devlin2019bert}, the semantics of text can be well embedded into vectors together with the KG \cite{zhang2019ernie,liu2020k,hao2020enhancing,chen2021owl2vec}. Therefore, there are quite a few KG-aware ZSL methods whose model architectures or frameworks have the following data flow: both the input and the class are represented in the form of text (e.g., sequences of words and sub-words), encoded as vectors by text embedding models, often with their KG contexts considered, and finally fused and fed into a prediction model. We regard these methods as the category of text feature fusion. The majority of these methods are applied to address zero-shot KG completion; they use text embeddings to represent unseen entities or relations that are associated with text descriptions or other text-relevant auxiliary information. Zhao et al. \cite{zhao2017zero} adopted the TF-IDF algorithm to combine the embeddings of words, so as to represent each entity with its text description. For each candidate triple, they used the text-based representations of the two entities to calculate its score, where the triple's score function is defined as the sum of the interactions of any two elements of the triple, and the relation's interactions with the head entity and the tail entity are represented by two trainable vectors, respectively. Shi et al. \cite{shi2018open} proposed a zero-shot KG completion method named ConMask for dealing with unseen entities using their names and text descriptions.
Briefly, it feeds the text embeddings of the entities and the relation of a triple into a model that is mainly composed of an attention-based, relation-dependent text masking module and a CNN-based target fusion module. Niu et al. \cite{niu2021open} followed the general direction of \cite{zhao2017zero} and \cite{shi2018open}, but worked out a new multiple-attention-based method with a Bidirectional LSTM network and an attention layer for modeling and utilizing the interactions between the head entity description, the head entity name, the relation name, and the tail entity description. Amador et al. \cite{amador2021ontology} focused on triple classification with unseen entities. Ontological information, such as an entity's hierarchical classes, is utilized via word embeddings, which are combined with the entity's word embeddings by concatenation, averaging or weighted averaging, and fed into a classification model. Wang et al. \cite{wang2021inductive} proposed a commonsense KG link prediction method named InductiveE which can deal with unseen entities by utilizing entity textual descriptions. It first represents an entity using the concatenation of its text embeddings from the fastText word embedding model \cite{joulin2016fasttext} and the last-layer [CLS] token embedding of the pre-trained BERT \cite{devlin2019bert}, and then feeds the entity representations of the graph into a model composed of an encoder (a gated-relational GCN) and a decoder (a simplified version of ConvE \cite{dettmers2018convolutional}) to predict each triple's score. In training, the initial entity representations are fixed and the encoder-decoder model is learned. Recently, due to the wide investigation of pre-trained language models such as BERT \cite{devlin2019bert}, some methods that fine-tune these models to utilize textual information for zero-shot KG completion have been proposed. Since they represent a triple's entities and relation as features and feed these features into a model for prediction, we also regard them as the category of text feature fusion. Different from \cite{wang2021inductive}, where BERT is used for initial but fixed entity representations, the entity and relation representations in these methods are trained as BERT is fine-tuned. Gong et al. \cite{gong2021prompt} fine-tuned a BERT model for zero-shot relation extraction, where prompts were constructed as the input using the relation's corresponding knowledge in ConceptNet. Yao et al. \cite{yao2019kg} proposed a KG triple prediction method called KG-BERT. It transforms a triple's head entity, relation and tail entity into a text sequence and then makes triple prediction as a downstream text classification task, where BERT is fine-tuned with the given training triples. For unseen entities and relations that have name information, the candidate triples associated with them can be directly predicted by transforming them into text sequences. Like KG-BERT, Zha et al. \cite{zha2021inductive} also proposed to predict triples as a downstream text classification task of BERT, utilizing the text information of entities and relations. However, they fine-tune BERT using not only single triples but also paths that connect two entities, over which reasoning is conducted explicitly. Wang et al. \cite{wang2021structure} extended KG-BERT to address two of its shortcomings: combinatorial explosion in triple inference and failure to utilize structured knowledge.
They proposed a structure-aware encoder to represent a triple's text with different combinations of and interactions between its entities and relation. They also combined this BERT-based model with traditional KG embedding models such as RotatE \cite{sun2018rotate} for higher Hits@$K$ when $K$ is small, but note that this ensemble scheme cannot work for testing triples with unseen entities and relations. Wang et al. \cite{wang2021kepler} proposed a joint text and entity embedding method named KEPLER which is also able to predict KG triples with unseen entities and relations. It utilizes the text information of the entities and relations to fine-tune the BERT model via a masked language modeling loss, and at the same time trains the KG entity and relation embeddings following the same score function and loss as TransE \cite{bordes2013translating}. Besides KG completion, we also find one KG-aware zero-shot question answering study that fuses text features of the input and the class. Banerjee et al. \cite{banerjee2020self} performed question answering via triple learning, where the context, question and answer are modeled as a triple, and one element is predicted given the other two. In implementation, a transformer-based model that generates the answer given the text features of the context and question is learned by span masked language modeling, using triples extracted from text. Similarly, Zhou et al. \cite{zhou2021encoding} also modeled the question answering problem as triple prediction with all the text features fused, and learned the prediction model by alternately masking the subjects and the objects of the training triples, which come from a corpus named WorldTree. \subsubsection{Multi-modal Feature Fusion} Different from text feature fusion, where the input and the class are both represented as some kind of text and encoded by a text embedding model, the category of multi-modal feature fusion includes those methods whose input features and class features are of different kinds. In the zero-shot triple classification study \cite{amador2021ontology} mentioned above, the authors also considered representing an entity's hierarchical classes by one-hot vectors instead of word embeddings. In that case, the two inputs (the class encodings and the entity text embeddings) are of different kinds. Nguyen et al. \cite{nguyen2021dozen} focused on cross-domain entity recognition from text, where the testing entities are not only unseen but also from a different domain than the entities involved in training, using an ontology as the auxiliary information. The input sequence is encoded as token features by a pre-trained BERT, while the entity is encoded as graph features learned by a Recurrent GNN over the ontology. These two different kinds of features are fed into an integration network. Similarly, for zero-shot entity extraction, Ristoski et al. \cite{ristoski2021kg} fused the features of the entity mention context and the entity description with an entity graph vector, which includes entity mention features extracted from KGs, such as entity types. Zhang et al. \cite{zhang2019integrating} worked on zero-shot text classification. They first fused the input text features with the class features extracted from ConceptNet, which encode the associated entity of the class, its superclass entities and its description entities by a variant of multi-hop encoding, and then fed them into a CNN classifier.
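For illustration, the following minimal sketch (PyTorch-style Python) realizes the data flow of this paradigm in its simplest form: an input feature vector and a KG-derived class feature vector are fused by concatenation and scored by an MLP. The dimensions, names and the concatenation-based fusion are assumptions made for the example, not the design of any specific method surveyed above.
\begin{verbatim}
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    # Scores whether class features y match input features x.
    def __init__(self, x_dim=2048, y_dim=300, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))  # scalar compatibility score s

    def forward(self, x_feat, y_feat):
        # Fuse the two kinds of features by simple concatenation.
        return self.mlp(torch.cat([x_feat, y_feat], dim=-1)).squeeze(-1)

scorer = PairScorer()
x = torch.randn(1, 2048)        # e.g., CNN image features
classes = torch.randn(10, 300)  # e.g., KG-based class features
scores = scorer(x.expand(10, -1), classes)  # one score per class
predicted = scores.argmax().item()
\end{verbatim}
In prediction, a testing sample is paired with every candidate (unseen) class and the highest-scoring class is returned.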
Due to the heterogeneity of the input features, more complicated fusion and prediction models are required, and thus methods of this kind are not as common as text feature fusion. It is worth mentioning that methods of multi-modal feature fusion can sometimes be understood as a special kind of joint mapping, where the mapping functions and the distance metric are jointly implemented by one model. \section{KG-aware Few-shot Learning}\label{sec:fsl} As with KG-aware ZSL, many KG-aware FSL methods also follow the four paradigms of \textit{Mapping-based}, \textit{Data Augmentation}, \textit{Propagation-based} and \textit{Class Feature}. However, some other KG-aware FSL methods belong to none of the above paradigms. Instead, we regard those that focus on utilizing the few-shot samples by accelerating the adaptation in training with meta learning algorithms as a new paradigm named \textit{Optimization-based}, and regard those that directly transfer models (such as rules) built from data of seen classes as another new paradigm named \textit{Transfer-based}. Table \ref{table:fsl} presents a brief summary of these paradigms as well as their method categorizations and corresponding papers. We will next introduce the details of each paradigm. \begin{table*}[t] \footnotesize{ \centering \renewcommand{\arraystretch}{1.8} \begin{tabular}[t]{m{2.2cm}<{\centering}|m{4.8cm}<{\centering}|m{2.8cm}<{\centering}|m{3.5cm}<{\centering}}\hline \textbf{Paradigm} &\textbf{Summary} &\textbf{Categories} & \textbf{Papers} \\ \hline \multirow{3}{*}{Mapping-based} & \multirow{3}{*}{\thead{These methods project the input and/or the \\ class into a common vector space where \\ a sample is close to its class w.r.t. some \\ distance metric, and prediction can be \\ implemented by searching the nearest class. \\ ZSL methods can often be extended for FSL.}} & Input Mapping & \cite{jayathilaka2021ontology,ma2016label,monka2021learning} \\ \cline{3-4} & & Class Mapping & \cite{li2020transferrable} \\ \cline{3-4} & & Joint Mapping & \cite{akata2013label,ma2016label,akata2015label,xiong2018one,li2019large,zhao2020knowledge,zhang2020fewa,sui2021knowledge,zhang2021knowledge,rios2018few} \\ \hline \multirow{1}{*}{Data Augmentation} & These methods generate additional samples or sample features for the unseen classes, utilizing KG auxiliary information.
& Generation-based & \cite{tsai2017improving,wang2019tackling,zhang2020relation} \\ \hline \multirow{2}{*}{Propagation-based} & \multirow{2}{*}{\thead{These methods propagate model parameters,\\ or class embeddings (or a sample's class beliefs) \\ from the seen classes to the unseen classes \\ via a KG.}} & \thead{Model Parameter \\ Propagation} & \cite{peng2019few,chen2020knowledge} \\ \cline{3-4} & & Embedding Propagation & \cite{hamaguchi2017knowledge,wang2019logic,albooyeh2020out,bhowmik2020explainable,zhao2020attention,dai2020inductively,ali2021improving} \\ \cline{1-4} \multirow{2}{*}{Class Feature} & \multirow{2}{*}{\thead{These methods encode the input and the class \\ into features often with their KG contexts \\ considered, fuse these features and feed them \\ directly into a prediction model.}} & Text Feature Fusion & \cite{banerjee2020self} \\ \cline{3-4} & & \thead{Multi-modal Feature \\ Fusion} & \cite{zhang2019long,yang2021empirical,marino2021krisp}\\ \hline \multirow{2}{*}{Optimization-based} & \multirow{2}{*}{\thead{These methods adopt meta learning algorithms \\ to optimize the training that relies on \\ the few-shot samples.}} & KG-specific Optimization & \cite{wang2019meta,chen2019meta,baek2020learning} \\ \cline{3-4} & & KG-agnostic Optimization &\cite{lv2019adapting,zhang2020fewb,qu2020few,zhang2020fewa,zhang2019tgg,zhang2021knowledge} \\ \hline \multirow{2}{*}{Transfer-based} & \multirow{2}{*}{\makecell{These methods directly apply models of seen \\ classes to unseen classes, often with the \\ few-shot samples utilized in prediction.}} & Neural Network Transfer & \cite{teru2020inductive,liu2021indigo,chen2021topology} \\ \cline{3-4} & & Rule Transfer & \cite{sadeghian2019drum,mihalkova2007mapping,mihalkova2009transfer,davis2009deep,van2015todtler} \\ \hline \end{tabular} \vspace{0.1cm} \caption{A summary of KG-aware FSL paradigms.}\label{table:fsl} } \end{table*} \subsection{Mapping-based Paradigm} The general idea of the mapping-based paradigm of FSL is very close to that of ZSL, as shown in Figure \ref{fig:mapping}. Briefly, it learns a model to project the sample inputs and the classes (or their initial encodings) into one common vector space, where a sample is close to the class that it belongs to and far away from the other classes w.r.t. some distance metric. In prediction, a testing sample's class can be determined by calculating its distance to all the candidate classes. In contrast to ZSL, FSL has a small number of labeled samples associated with each unseen class. These samples usually play an important role and are fully utilized by the FSL methods. For example, they can be used to represent the class prototypes in the sample space, just as the auxiliary information of classes represents the class prototypes in the label space; we thus also view these few-shot samples as a special kind of auxiliary information of the unseen classes in some conditions. Similar to ZSL, we further categorize the KG-aware FSL methods of the mapping-based paradigm into \textit{Input Mapping}, \textit{Class Mapping} and \textit{Joint Mapping}. \subsubsection{Input Mapping}\label{sec:fsl_input_mapping} The existing ZSL methods of input mapping can often be directly extended to FSL by augmenting the learning of the mapping model with the few-shot samples. Ma et al.
\cite{ma2016label} pre-trained the initial class (entity type) embeddings via the KG and then proposed two mapping settings, one of which directly projects the text input (entity mention features) to the class embedding for both zero-shot and few-shot entity mention typing. In the ZSL setting, the mapping, which is a linear transformation function, is learned from samples of the seen classes alone, while in the FSL setting, it is learned from samples of both seen and unseen classes. Jayathilaka et al. \cite{jayathilaka2021ontology} also learned a mapping function from the input features to the class embedding for few-shot image classification using labeled samples of both seen and unseen classes, where logical relationships such as class disjointness and class subsumption are represented by an ontology and considered in learning the class embeddings by an algorithm named EL Embedding \cite{kulmanov2019embeddings}. Monka et al. \cite{monka2021learning} investigated KG-augmented image classification under a transfer setting where a model trained with a large number of source domain samples is transferred to a target domain which has only a small number of labeled samples. The proposed method uses a KG curated by experts for modeling the relationships between classes, embeds the KG by a variant of GCN, and adopts a contrastive loss to train an MLP which maps the image features to the space of the class embeddings. \subsubsection{Class Mapping} Similar to input mapping, the existing KG-aware ZSL methods of class mapping can be directly extended to the FSL setting by learning the mapping with samples of both seen classes and unseen classes. However, such extensions have rarely been investigated. The only relevant one is by Li et al. \cite{li2020transferrable}, which projects the embeddings of hierarchical classes into the space of image features learned by a CNN to support zero-shot image classification, and extends to the few-shot setting with almost no modification. The learning of the mapping function does not use the few-shot samples of the unseen classes; in prediction, however, the average of the CNN features of the few-shot samples as well as the mapped class vector are used in searching for the class label of a testing sample of the unseen classes. \subsubsection{Joint Mapping}\label{sec:fsw_joint_mapping} Some KG-aware FSL methods of joint mapping are simple extensions of KG-aware ZSL methods which were originally developed for utilizing the class auxiliary information. Akata et al. \cite{akata2013label,akata2015label} applied their zero-shot image classification method, which jointly projects the WordNet-based class embeddings and the image features into one common space, to FSL by adding an additional loss on the few-shot samples in learning the mappings. Ma et al. \cite{ma2016label} utilized the few-shot samples to augment the training of the mapping function, which is originally trained by seen class samples alone in ZSL. Their method considers not only input mapping, but also joint mapping, which projects both the text input (entity mentions) and the class (entity type) embeddings, for zero-shot and few-shot entity typing. Similarly, Rios et al. \cite{rios2018few} also used the same kind of joint mapping model, which matches the CNN-based input text mapping with the GCN-based class mapping, for both zero-shot and few-shot text classification. In contrast, some other KG-aware FSL methods of joint mapping are specifically developed for utilizing the few-shot samples.
They map the few-shot samples, which can be regarded as a special kind of auxiliary information, and the testing samples into one common vector space. Li et al. \cite{li2019large} jointly learned a mapping of the image CNN features and a mapping of the class embeddings, using labeled samples of seen classes (i.e., $\mathcal{D}_{tr}$). In prediction, the method calculates the center of the mapped vectors of the few-shot images of each class, and compares a testing image to this center. Although the mapping of the class embeddings is not directly used in prediction, it is used to guide the learning of the mapping for the image features. Xiong et al. \cite{xiong2018one} worked on one-shot KG completion with unseen relations. They developed a matching network to compare a testing entity pair with the one-shot entity pair of each unseen relation, where the features of an entity pair were learned by a neighbourhood encoder, and a matching score was predicted by an LSTM network. Note that entity pairs here are the samples of KG relations. Zhang et al. \cite{zhang2020fewa} worked on few-shot KG completion with unseen relations, but the general idea of their solution, which is also based on entity pair matching by neighbourhood encoding and matching score calculation, is quite close to that of \cite{xiong2018one}. Zhao et al. \cite{zhao2020knowledge} jointly learned projections of the image features and the knowledge features (the fusion of KG embeddings and text embeddings) into one common space by MLPs, where a cross-entropy loss and two constraint losses over the image features and the knowledge features, respectively, are used for training. In prediction, a testing sample is compared with the few-shot samples of each unseen class by calculating the cosine similarity after mapping. As in \cite{li2019large}, the class embeddings are not directly used in prediction, but are used to constrain the learning of the sample mapping. Sui et al. \cite{sui2021knowledge} proposed a KG-aware few-shot text classification method. It compares the testing sample with the few-shot samples of each unseen class using not only a task-agnostic relation network but also a task-relevant relation network, which is able to apply diverse metrics for diverse tasks with the help of external knowledge extracted from NELL. Zhang et al. \cite{zhang2021knowledge} worked on few-shot relation extraction from text, utilizing concept-level KGs extracted from Wikidata \cite{vrandevcic2014wikidata} or UMLS \cite{mccray2003upper}. They matched testing samples (i.e., entity mention pairs in text) to both the few-shot samples and the relation meta (i.e., the relation representations extracted from the embeddings of their associated entities in KGs), and combined the two matching scores. The sample mapping is implemented by a network which considers the sentence features, the entity description features and the KG concept features. \subsection{Data Augmentation Paradigm} There have been some FSL studies that attempt to generate additional samples or sample features for the unseen classes by using KGs. As in KG-aware ZSL, we divide these KG-aware FSL methods into two categories: \textit{Rule-based} and \textit{Generation-based}. The rule-based methods could be directly applied to FSL by, e.g., annotating labels to samples via pre-defined heuristic rules, as shown in many distant supervision studies \cite{mintz2009distant,chen2021augmenting}, but we have not found any KG-aware FSL studies of this kind yet.
Instead, we find some KG-aware FSL studies of the generation-based category, which usually utilize statistical generation models such as GANs \cite{goodfellow2014generative} and VAEs \cite{kingma2013auto}. We next introduce some works in this category. \subsubsection{Generation-based} The generation-based methods refer to those FSL methods that use some statistical method to generate labeled samples (or features) conditioned on the auxiliary information. Tsai and Salakhutdinov \cite{tsai2017improving} applied an attention mechanism over the KG auxiliary information extracted from WordNet to generate quasi-samples for the unseen classes as additional training samples for one-shot image classification. In generation, a probability distribution over the unseen classes is approximated using a regression model for each sample of the seen classes. Wang et al. \cite{wang2019tackling} worked on few-shot KG completion involving both unseen entities and unseen relations. They proposed a triple generator, which generates triple embeddings based on the textual descriptions of the entities, using a Conditional VAE \cite{sohn2015learning}. Zhang et al. \cite{zhang2020relation} worked out a general feature generation-based framework for addressing unseen relations in two tasks --- few-shot KG completion with unseen relations and few-shot relation extraction from text. The framework uses a standard adversarial transfer learning mechanism to generate relation-invariant features and transfer such features to unseen relations with weighted combination. Regarding the adversarial network adopted in the framework, it uses a CNN to iteratively extract features from the entity pair (or from the text sentence for relation extraction) until the discriminator cannot distinguish features of the seen relations from those of the unseen relations. \subsection{Propagation-based Paradigm} As with KG-aware ZSL, KG-aware FSL can also be addressed by methods of model parameter propagation and class belief propagation via a KG some of whose entities are aligned with seen and unseen classes. However, we find only two KG-aware FSL studies that belong to the category of model parameter propagation, and none that belong to the category of class belief propagation. This may be because the current methods usually focus on utilizing the few-shot samples of unseen classes. On the other hand, for few-shot KG completion tasks, propagation over the KG is widely utilized for addressing unseen entities and relations that have few-shot associated triples. These methods often aggregate the embeddings of the neighbouring entities and relations, which are usually seen, to get the embedding of an unseen relation or entity. Figure \ref{fig:propagation} (b) shows this idea with an example of aggregating 1-hop neighbours for embedding an unseen entity $e_0$. We regard these methods as a new category named \textit{Embedding Propagation}. \subsubsection{Model Parameter Propagation} Peng et al. \cite{peng2019few} worked on augmenting few-shot image classification by using KGs extracted from, e.g., WordNet. They first followed the model parameter propagation idea used in many KG-aware ZSL methods, which uses a GCN and the KG to predict classifier parameters of the unseen classes from classifier parameters of the seen classes, and then integrated the predicted classifiers with the classifiers learned from the few-shot labeled images. This method can be understood as an ensemble-based extension of the model parameter propagation methods from ZSL to FSL. Chen et al.
\cite{chen2020knowledge} proposed a model parameter propagation method for few-shot image classification with a KG whose edges are assigned correlation weights between classes. It first initializes a set of random parameters, which are image classifier weights, for each class, then utilizes a Gated Graph Neural Network (GGNN) to propagate classifier parameters between classes over multiple iterations, and finally outputs updated classifier parameters for each class. Different from those KG-aware ZSL methods whose parameter propagation models are trained by minimizing the parameter approximation loss on seen classes (such as \cite{wang2018zero}, \cite{geng2020explainable} and \cite{kampffmeyer2019rethinking}), the propagation model (i.e., the GGNN) of \cite{chen2020knowledge} is trained with a cross-entropy loss on all the labeled samples and a regularization term on the classifier weights of all the classes. \subsubsection{Embedding Propagation} The majority of embedding propagation methods for few-shot KG completion tasks, such as link prediction and multi-hop reasoning, aim at addressing unseen entities or relations which are associated with only a small number of triples. These unseen entities/relations are also called \textit{out-of-KG entities/relations} in some papers, since they are usually not observed in the training KGs. Some embedding propagation methods aim to embed such unseen entities or relations without costly re-training of the KG embeddings. As far as we know, Hamaguchi et al. \cite{hamaguchi2017knowledge} proposed the earliest embedding propagation solution for addressing unseen entities. They used a GNN but revised its propagation mechanism for the KG, and adopted a translation-based objective function for scoring triples and as a training loss. Wang et al. \cite{wang2019logic} proposed a Logic Attention Network (LAN) to embed such unseen entities via propagation from their neighbouring entities and relations. In LAN, logic rules are exploited to measure neighbouring relations' usefulness, and neighbours connected by different relations have different weights in embedding an unseen entity. Bhowmik and Melo \cite{bhowmik2020explainable} used a variant of the Graph Transformer encoder \cite{yun2019graph} to embed an unseen entity by aggregating its neighbours based on their relevance to a given relation. It predicts the object of a triple, and can explain the prediction by finding paths from the subject to the object. Ali et al. \cite{ali2021improving} aimed at predicting relations between seen entities and unseen entities (the semi-inductive setting), and between unseen entities (the fully-inductive setting), utilizing not only the triples but also their Wikidata qualifiers, each of which is composed of a relation and an entity for describing the triple. For the fully-inductive setting, they initialized the entity embeddings with the entities' textual information using Sentence BERT \cite{reimers2019sentence}, and then propagated and updated the entity embeddings with a graph encoder named StarE \cite{galkin2020message}. Besides complex neural networks, some simpler propagation operations have also been considered for embedding unseen entities. Ali et al. \cite{ali2021improving} used a linear projection from the entity embedding to the relation embedding for the semi-inductive setting. Dai et al.
\cite{dai2020inductively} used two modules: an estimator, which calculates a candidate set of embeddings for an unseen entity according to all its associated triples using the translation operation of TransE \cite{bordes2013translating} or RotatE \cite{sun2018rotate}, and a reducer, which calculates the unseen entity's embedding from all its candidate embeddings using relation correlation and entity degree. Albooyeh et al. \cite{albooyeh2020out} used some simple aggregation operations such as averaging to get the embedding of an unseen entity from its neighbours. This solution can support any existing KG embedding model such as DistMult \cite{yang2014embedding}, but it requires that the original training procedure of the KG embeddings is adjusted such that the embeddings resemble what is expected at testing time and are aware of the aggregation operations being used. Some other embedding propagation methods aim at addressing unseen relations (out-of-KG relations) which have a limited number of connections to existing entities. It is required to predict with these unseen relations without re-training the KG embeddings, but there are currently few studies that investigate addressing such unseen relations under the propagation paradigm. The method proposed by Zhao et al. \cite{zhao2020attention} is a relevant one which can support both unseen entities and unseen relations. It mainly uses specific transition functions, aggregation functions and graph attention mechanisms to transfer information from the associated triples to an unseen entity or relation, where a translation-based triple score and a margin loss are used for training. It is worth mentioning that the current embedding propagation methods cannot address unseen entities and unseen relations at the same time. \subsection{Class Feature Paradigm} The class feature paradigm of FSL is close to that of ZSL. Please see Figure \ref{fig:class_feature} for the general idea. Namely, a new function $f': (x,y) \rightarrow s$ is learned, where the class (usually its initial encoding) is used as input together with the original input, and a score $s$ is predicted indicating whether $y$ is the class of $x$. An auxiliary KG is injected for augmenting FSL via the new input $y$. As in KG-aware ZSL, according to the types of the features of $x$ and $y$, we classify the KG-aware FSL methods of this paradigm into two categories: \textit{Text Feature Fusion}, where $x$ and $y$ are both some kind of text or text features, and \textit{Multi-modal Feature Fusion}, where $x$ and $y$ are of different kinds of features (such as image features and text features). It is worth noting that many KG-aware ZSL methods of the class feature paradigm can be extended to support FSL by training $f'$ with both samples of seen classes (i.e., $\mathcal{D}_{tr}$) and few-shot samples of unseen classes (i.e., $\mathcal{D}_{few}$). In this part, we will not discuss extending these ZSL methods (please see Section \ref{sec:zsl_class_feature} for more details), but only review those studies where FSL is originally supported and evaluated. Since class feature fusion under such an FSL setting does not significantly differ from class feature fusion under normal supervised learning settings, we do not find many papers within the scope of this survey. \subsubsection{Text Feature Fusion} There are quite a few text feature fusion methods for KG-aware ZSL, but we only find one such method developed for FSL.
It is a KG-aware few-shot question answering method proposed by Banerjee et al. \cite{banerjee2020self}. It fuses the text context, the question and the answer as the input of a transformer-based model which can predict any of the three elements given the other two. This transformer-based model is learned by span masked language modeling from a KG whose triples simulate the combination of context, question and answer, and are extracted from text. Note that the method can also support ZSL, and the authors evaluated it for both ZSL and FSL. \subsubsection{Multi-modal Feature Fusion} Fusing features of different kinds is harder and often requires a more complicated model and more samples for training. We find two KG-aware FSL studies of this category. Zhang et al. \cite{zhang2019long} investigated text relation extraction for long-tailed relations which have only few-shot samples (sentences). The proposed method uses a GCN to learn the embedding of each relation, where a KG derived from Freebase is used to model the hierarchical relationships between relations, and then feeds the relation embedding and the sample features (sentence encoding) into an attention-based model. Yang et al. \cite{yang2021empirical} investigated few-shot visual question answering, where only a small number of samples are given for new unseen contexts. The proposed method PICa does not use KGs but relies on GPT-3 \cite{brown2020language} as an implicit and unstructured KG. However, a baseline they adopted, named KRISP \cite{marino2021krisp}, was not originally developed for ZSL or FSL but uses KGs such as ConceptNet for augmentation and is applied to the few-shot visual question answering task. In KRISP, the features of the image and the question are first fused by a Transformer-based model and then further fused with the knowledge retrieved from the KG to predict the answer. \subsection{Optimization-based Paradigm} Since the few-shot samples are often not numerous enough to train robust models for unseen classes, some meta learning algorithms have been applied to optimize the training for fast adaptation and for avoiding over-fitting, by obtaining, e.g., better initial parameter settings, more optimized searching steps and more suitable optimizers. Such FSL methods are regarded as the \textit{Optimization-based Paradigm}. In this literature review, we collected quite a few KG-aware FSL methods of this paradigm: some of them are for KG completion tasks such as link prediction and multi-hop reasoning with unseen entities or relations that are associated with a small number of triples \cite{chen2019meta,wang2019meta,baek2020learning,zhang2020fewa,zhang2020fewb,lv2019adapting}, while the others are for KG-augmented few-shot image classification and few-shot text relation extraction \cite{qu2020few,zhang2019tgg,zhang2021knowledge}. We find that some studies develop new meta learning algorithms or revise existing ones w.r.t. the KG, while other studies just apply meta learning independently without specifically considering the KG context. We thus classify the FSL methods of this paradigm into \textit{KG-specific Optimization} and \textit{KG-agnostic Optimization}.
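Before detailing the two categories, the following sketch (in Python) shows the episodic, $N$-way $K$-shot task sampling that most meta learning algorithms below rely on: each episode simulates a few-shot problem by sampling $N$ classes with $K$ support samples and several query samples per class. The flat list of (sample, label) pairs is an assumed data layout for illustration.
\begin{verbatim}
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=5):
    # dataset: a list of (sample, label) pairs.
    by_class = defaultdict(list)
    for sample, label in dataset:
        by_class[label].append(sample)
    eligible = [c for c, s in by_class.items()
                if len(s) >= k_shot + n_query]
    classes = random.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(by_class[c], k_shot + n_query)
        support += [(s, c) for s in picked[:k_shot]]  # for adaptation
        query += [(s, c) for s in picked[k_shot:]]    # for meta-loss
    return support, query
\end{verbatim}
A meta learner is then optimized across many such episodes, adapting on each support set and being evaluated on the corresponding query set.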
\subsubsection{KG-specific Optimization} In some FSL studies, existing meta learning algorithms, such as Model-Agnostic Meta-Learning (MAML), which learns a good parameter initialization for fast adaptation to a new meta-learning task \cite{finn2017model}, are revised and augmented for the KG context, or new meta learning algorithms are proposed for the KG context. Chen et al. \cite{chen2019meta} proposed a meta learning framework named MetaR for a few-shot KG completion task which predicts the tail entity of a new triple with an unseen relation. The insight is to utilize two kinds of relation-specific meta information: relation meta, which is a relation's higher-order representation extracted from the embeddings of its associated head entities and tail entities, and gradient meta, which guides how the relation meta should be efficiently changed when transferred from few-shot triples to testing triples. The implementation of MetaR includes two learnable components: the relation-meta learner, a fully connected neural network that maps the embeddings of the head entities and tail entities to relation meta, and the embedding learner, which generates a relation's embedding from the relation meta and the gradient meta, and scores entity pairs for testing. Wang et al. \cite{wang2019meta} worked on a few-shot KG reasoning task which is to predict the tail entity given a head entity and an unseen relation, and to infer paths from the head entity to the tail entity. They augmented the meta learning method MAML with additional task (relation) specific information encoded by a neighbour encoder based on embedding concatenation and linear transformation operations, and a path encoder based on an LSTM. Baek et al. \cite{baek2020learning} worked on a realistic few-shot KG completion task, where relations between seen entities and unseen entities, and between unseen entities, are both predicted using GNNs. They proposed a meta learning framework named Graph Extrapolation Network for getting the embeddings of unseen entities, where a set of tasks are formulated with unseen entities simulated via sampling, and the model learns to generalize by meta-training over these formulated tasks. \subsubsection{KG-agnostic Optimization} In some other FSL studies involving KGs, meta learning algorithms are applied in optimization for fast adaptation to address sample shortage, but the application is independent of the KG context. Lv et al. \cite{lv2019adapting} worked on the same task as \cite{wang2019meta}, i.e., few-shot multi-hop KG reasoning with unseen relations. They adopted reinforcement learning to search for tail entities and reasoning paths, and directly applied MAML with one relation modeled as one task. Zhang et al. \cite{zhang2020fewb} proposed another method for few-shot multi-hop KG reasoning with unseen relations, where MAML is directly applied to well initialize an on-policy reinforcement learning model for fast adaptation. Qu et al. \cite{qu2020few} worked on few-shot relation extraction by modeling the posterior distribution of prototype vectors for different relations. To this end, they first initialized the relation prototype vectors by a BERT model over the samples (i.e., sentences) and a GNN over a global relation graph extracted in different ways, and then effectively learned their posterior distribution by a Bayesian meta-learning method which is related to MAML but can handle the uncertainty of the prototype vectors.
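To clarify the inner/outer structure that these KG-agnostic applications of MAML share (e.g., with one KG relation treated as one task), the following sketch performs one meta-update. The helper \texttt{task\_loss} is an assumed callable that runs the model with a given parameter dictionary (e.g., via \texttt{torch.func.functional\_call}) and returns a scalar loss on a batch, so this is a schematic reconstruction rather than the code of any cited system.
\begin{verbatim}
import torch

def maml_step(model, tasks, task_loss, outer_opt, inner_lr=0.01):
    # tasks: iterable of (support_batch, query_batch) pairs,
    # e.g., one KG relation per task.
    init = dict(model.named_parameters())
    meta_loss = 0.0
    for support, query in tasks:
        # Inner loop: one gradient step on the few-shot support set.
        grads = torch.autograd.grad(task_loss(model, init, support),
                                    list(init.values()),
                                    create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(init.items(), grads)}
        # Outer objective: the adapted parameters on the query set.
        meta_loss = meta_loss + task_loss(model, adapted, query)
    outer_opt.zero_grad()
    meta_loss.backward()  # differentiates through the inner step
    outer_opt.step()      # updates the shared initialization
\end{verbatim}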
It is worth mentioning that meta learning can act as a complement for faster adaptation in model training in methods of the other paradigms. Zhang et al. \cite{zhang2020fewa} predicted KG triples with unseen relations. Their few-shot relational learning method FSRL, which we regard as belonging to the mapping-based paradigm since it predicts by comparing a testing entity pair with the few-shot samples of each unseen relation after mapping (see Section \ref{sec:fsw_joint_mapping}), uses MAML for fast adaptation in training. Zhang et al. \cite{zhang2019tgg} attempted to address both zero-shot and few-shot image classification with an approach named Transfer Graph Generation (TGG), which has a graph generation module for generating an instance-level graph, and a propagation module for utilizing this graph for prediction. They trained the whole model with an episodic training strategy of meta learning. Zhang et al. \cite{zhang2021knowledge} used a joint mapping method to predict relations for entity mentions in a sentence. In this method, a knowledge-enhanced prototypical network and a relation meta learning model, which implement the matching between instances and the matching between an instance and relation meta, respectively, are trained with gradient meta. \subsection{Transfer-based Paradigm}\label{sec:tbp} Some KG-aware FSL methods directly apply models that are built with data of seen classes ($\mathcal{D}_{tr}$) to predicting data of unseen classes ($\mathcal{D}_{te}$), with the help of the few-shot samples ($\mathcal{D}_{few}$). These methods are categorized into the transfer-based paradigm. It is worth noting that methods of some categories of the other paradigms, such as model parameter propagation, also embody an idea of implicitly transferring data or parameters from seen classes to unseen classes. However, methods of the transfer-based paradigm aim to directly apply models learned from $\mathcal{D}_{tr}$ to prediction on $\mathcal{D}_{te}$. According to the KG-aware FSL papers we have collected, methods of this paradigm are often applied in a special few-shot KG completion context, where one KG composed of triples of seen entities and relations is given for model training, while another KG composed of triples of unseen entities is for completion (prediction). For convenience, we call the first KG the seen KG and the second the unseen KG. Such a task is common in the real world: the unseen KG can often be an emerging sub-KG that is to be added to the seen KG, or an individual KG of another domain that sometimes has the same relations as the seen KG. Models are learned from the seen KG and applied in the unseen KG, whose few-shot triples are used as additional input of the model for predicting new triples. According to the type of the model, we further classify these FSL methods into two categories: \textit{Neural Network Transfer} and \textit{Rule Transfer}. \subsubsection{Neural Network Transfer} Neural networks, especially GNNs, can encode statistical regularities and structural patterns in a graph. Therefore, for the few-shot KG completion task mentioned above, a few studies investigate transferring a GNN learned from the seen KG to the unseen KG such that the learned patterns are applied for knowledge inference. Teru et al. \cite{teru2020inductive} proposed a method named GraIL.
It learns a GNN by extracting subgraphs from the seen KG and labeling their entities with their structural roles (e.g., the shortest distance between two entities), and applies this GNN to predict the relation between two unseen entities in the unseen KG with their neighbouring unseen entities' structural roles. Chen et al. \cite{chen2021topology} extended GraIL by using R-GCN \cite{schlichtkrull2018modeling} for supporting multiple relations in the KG. More importantly, they proposed a relation correlation module which constructs a relation correlation graph, whose nodes represent the relations and whose edges indicate the topological correlation patterns between any two relations in the original KG. They learned a Relational Correlation Network over this relation correlation graph of the seen KG, and applied it to the unseen KG by combining its output with the output of GraIL for scoring triples. Liu et al. \cite{liu2021indigo} proposed to first reformulate the original KG as a new graph in the following way: two connected KG entities, or an entity paired with itself, are represented as one graph node, and each node is initialized with features indicating the triples in which the two entities are involved. They then learned a GCN from the graph of the seen KG, which is shown to be able to capture common inference patterns representable in Datalog, a well-known logic rule language, and applied this GCN to predict the node features of the graph of the unseen KG, through which new triples can be determined. \subsubsection{Rule Transfer} Rules in different forms, such as Horn rules and first-order rules, or their weighted versions, can be learned from a KG for representing graph patterns and regularities \cite{richardson2006markov,galarraga2013amie,kimmig2012short,yang2017differentiable,zhang2019iteratively}. They may not be as good as neural networks at representing very complicated statistical regularities, but they are more interpretable and can better support inductive reasoning. Sadeghian et al. \cite{sadeghian2019drum} proposed a method named DRUM for the aforementioned few-shot KG completion task, where first-order logical rules (such as $\mathit{brother}(X,Z) \land \mathit{fatherOf}(Z,Y) \rightarrow \mathit{uncleOf}(X,Y)$) associated with weights are learned from the seen KG in a differentiable way using the rule mining method named Neural LP \cite{yang2017differentiable}, and these rules are applied in the unseen KG for deductive reasoning over new triples. This work uses the KG relations as the rule predicates and assumes that the relations of the seen and the unseen KGs are the same, such that the rules can be directly transferred. For the situation where the predicates change (e.g., the relations of the unseen KG are different from those of the seen KG), we find the following two solutions that have been investigated for rule transfer: \textit{(i)} matching predicates between rules, proposed by Mihalkova et al. \cite{mihalkova2007mapping,mihalkova2009transfer}, who transferred rules mined from relational data by Markov Logic Networks (MLNs) \cite{richardson2006markov}, and \textit{(ii)} extracting and transferring higher-order rules from first-order rules, proposed by Davis et al. \cite{davis2009deep} and Van et al. \cite{van2015todtler} for transferring rules mined by MLNs.
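As a minimal illustration of rule transfer, the sketch below (in Python) applies weighted two-atom chain rules of the kind mined by Neural LP or DRUM to a set of triples, scoring each derived triple with the maximum weight over its derivations. The rule format and this scoring choice are simplifying assumptions, and, as discussed above, direct transfer of this kind only works when the seen and unseen KGs share their relations.
\begin{verbatim}
from collections import defaultdict

# A weighted chain rule body1(X,Z) & body2(Z,Y) -> head(X,Y).
RULES = [((("brother", "fatherOf"), "uncleOf"), 0.9)]

def infer(triples, rules):
    # Index the (head, tail) pairs of the (h, r, t) triples by relation.
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    inferred = {}
    for ((b1, b2), head), w in rules:
        for x, z in by_rel[b1]:
            for z2, y in by_rel[b2]:
                if z == z2:  # the chain variable Z must match
                    key = (x, head, y)
                    inferred[key] = max(inferred.get(key, 0.0), w)
    return inferred

unseen_kg = [("alex", "brother", "bob"), ("bob", "fatherOf", "carl")]
print(infer(unseen_kg, RULES))  # {('alex', 'uncleOf', 'carl'): 0.9}
\end{verbatim}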
\section{Applications and Resources}\label{sec:app} In this section, we first very briefly revisit the low-resource learning studies for each application, including the adopted auxiliary information and methods, and then introduce some public resources that can be used for developing and evaluating KG-aware methods. \subsection{Computer Vision} \subsubsection{Image Classification} Regarding zero-shot image classification, the early methods mainly utilize class attributes (e.g., \cite{lampert2009learning,lampert2013attribute,farhadi2009describing,parikh2011relative}) and class text information (e.g., \cite{norouzi2014zero,socher2013zero,frome2013devise,elhoseiny2013write,qiao2016less,reed2016learning}), where the mapping-based paradigm and the data augmentation paradigm are the mainstream solutions. However, the state-of-the-art performance on many tasks is now achieved by methods utilizing KGs constructed from various sources, including existing KGs, task-relevant data and domain knowledge (e.g., \cite{zhang2019tgg,roy2020improving,nayak2020zero,wang2018zero,kampffmeyer2019rethinking,geng2021ontozsl}). To utilize the KGs, the propagation-based paradigm has started to be widely adopted in recent studies such as \cite{wang2018zero,kampffmeyer2019rethinking,geng2020explainable}. To support method development and evaluation, some open benchmarks on KG-aware zero-shot image classification have been proposed: \begin{itemize} \item \textbf{ImageNet}, which is a large-scale image database containing a total of 14 million images from 21K classes \cite{deng2009imagenet}, is widely used in KG-aware ZSL. Each image is labeled with one class, each class is matched to a WordNet \cite{miller1995wordnet} entity, and the class hierarchies from WordNet can be used as the auxiliary information. In studies that experiment with ImageNet \cite{wang2018zero,kampffmeyer2019rethinking}, usually 1K classes with balanced images are used as seen classes for training, while the classes that are 2 or 3 hops away, or all the other classes, are used as unseen classes for testing. The weakness of ImageNet mainly lies in that the auxiliary KG has only class hierarchies (and class names) without any other knowledge such as class attributes and commonsense knowledge. \item \textbf{ImNet-A} and \textbf{ImNet-O} are two image sets extracted from ImageNet by Geng et al. \cite{geng2021ontozsl,geng2021benchmarking}. ImNet-A includes $80$ classes from $11$ animal species, while ImNet-O includes $35$ classes of general objects. In the experiment in \cite{geng2021ontozsl}, ImNet-A is partitioned into 28 seen classes (37,800 images) and 52 unseen classes (39,523 images), while ImNet-O is partitioned into 10 seen classes (13,407 images) and 25 unseen classes (25,954 images). In their latest version released in \cite{geng2021benchmarking}, each benchmark is equipped with a KG which is semi-automatically constructed with several kinds of auxiliary knowledge, including class attributes, class textual information, commonsense knowledge from ConceptNet, the class hierarchy (taxonomy) from WordNet, and logical relationships such as disjointness. \item \textbf{AwA2}, originally proposed in \cite{xian2018zero}, is a popular zero-shot image classification benchmark with $50$ animal classes ($37,322$ images) and $85$ real-valued attributes for describing animal visual characteristics.
It can also be used to evaluate KG-aware ZSL methods, since the classes are aligned with WordNet entities and the animal taxonomy from WordNet can be used as a simple KG. In the extended version by Geng et al. \cite{geng2021benchmarking}, a KG is constructed for AwA2 with the same types of knowledge as for ImNet-A and ImNet-O. Note that the term AwA in \cite{geng2021benchmarking} actually refers to AwA2; the original AwA1 released in \cite{lampert2013attribute} does not have a public copyright license for its images, and only some image features are publicly available. To enable vision research on the objects of the AwA1 classes, Xian et al. \cite{xian2018zero} contributed a new dataset, AwA2, with raw images collected from public Web sources such as Flickr and Wikipedia. \item \textbf{NUS-WIDE} \cite{chua2009nus}, a multi-label image classification dataset including nearly $270$K images, each of which has multiple objects for recognition, is widely used for evaluation in multi-label ZSL due to the nature of its annotated labels \cite{lee2018multi,huang2020multi,narayan2021discriminative}. To be more specific, the images in the dataset have two versions of label sets. One comprises $1000$ noisy labels collected from Flickr user tags (i.e., NUS-1000) and the other is a dedicated one with $81$ human-annotated concepts (i.e., NUS-81). To perform multi-label ZSL, the labels in NUS-81 are taken as the unseen label set, while the seen label set is derived from NUS-1000 with $75$ duplicates removed, resulting in $925$ seen label classes. In KG-aware multi-label ZSL studies such as \cite{lee2018multi}, NUS-WIDE is accompanied by a KG with $3$ types of label relations, including a super-subordinate correlation extracted from WordNet as well as positive and negative correlations computed by label similarities such as the WUP similarity \cite{wu1994verb}. \end{itemize} For few-shot image classification, the majority of the existing methods aim at utilizing the few-shot samples by, e.g., meta learning, while the KG-aware studies often try to combine the benefits of the external knowledge in the KG and of the few-shot samples, which is quite reasonable. Some of them simply extend their mapping-based models, which were originally developed for zero-shot image classification, by training with additional samples of the unseen classes (e.g., \cite{jayathilaka2021ontology,akata2015label,li2019large}), while some others further generate more data for unseen classes conditioned on KGs (e.g., \cite{tsai2017improving}) or utilize KGs to transfer image features from seen classes to unseen classes (e.g., \cite{chen2020knowledge,peng2019few}). There are also some open benchmarks that can be used for evaluating KG-aware few-shot image classification. The following are some widely used examples: \begin{itemize} \item \textbf{ImageNet-FS} \cite{hariharan2017low} and \textbf{mini-ImageNet} \cite{vinyals2016matching} are two derivatives of the ImageNet dataset. ImageNet-FS covers $1,000$ ImageNet classes with balanced images; these classes are divided into $389$ seen classes and $611$ unseen classes. During evaluation, images of $193$ seen classes and $300$ unseen classes are used for cross validation, while images of the remaining $196$ seen classes and $311$ unseen classes are used for testing. In contrast, mini-ImageNet is relatively small. It has $100$ classes, each of which has $600$ images. These classes are partitioned into $80$ seen classes and $20$ unseen classes.
Since the ImageNet classes are aligned with WordNet entities, WordNet, which includes class hierarchies and class name information, can be directly used as the external knowledge. \item \textbf{AwA1} \cite{lampert2013attribute}, \textbf{AwA2} \cite{xian2018zero} and \textbf{CUB} \cite{wah2011caltech} are three typical zero-shot image classification benchmarks that can be easily extended to a few-shot setting. AwA1 and AwA2 both have $50$ coarse-grained animal classes, with $40$ of them being seen classes and the remaining being unseen classes. CUB has $200$ fine-grained bird classes, with $150$ of them being seen classes and the remaining being unseen classes. A small number of labeled images (usually $10$) are added for each unseen class so as to support a few-shot setting. Meanwhile, several KGs have been added to these benchmarks for evaluating KG-aware methods. For example, Tsai and Salakhutdinov \cite{tsai2017improving} and Akata et al. \cite{akata2013label,akata2015label} both contributed WordNet class hierarchies to AwA1 and CUB; Zhao et al. \cite{zhao2020knowledge} constructed a domain-specific KG for CUB based on the attribute annotations of samples; Zhang et al. \cite{zhang2019tgg} exploited ConceptNet to construct a KG for AwA2, and utilized part-level attributes to construct a KG for CUB. \end{itemize} \subsubsection{Visual Question Answering} Visual Question Answering (VQA) is to answer a natural language question given an image as the context. Teney et al. \cite{teney2016zero} first proposed zero-shot VQA. They introduced novel concepts on the text side: a testing sample was regarded as unseen if there was at least one novel word in its question or answer. Ramakrishnan et al. \cite{ramakrishnan2017empirical} considered novel objects in the image: an image object that had never appeared in the training images was regarded as unseen. They addressed the problem by pre-training the model with an external text corpus and labeled images. KGs have been exploited as auxiliary information for addressing zero-shot VQA, but not widely. Chen et al. \cite{chen2021zero} recently proposed a method of the mapping-based paradigm, where answers that have never appeared in training are predicted by comparing the question and answer embeddings, which are obtained with the help of KGs; while Chen et al. \cite{chen2020ontology} built and embedded an OWL ontology for establishing connections between seen answers and unseen answers, and addressed the problem also by a method of the mapping-based paradigm. Quite a few VQA datasets have been published, but only a small number of them have been used to evaluate KG-aware methods for zero-shot VQA: \begin{itemize} \item \textbf{ZS-F-VQA} \cite{chen2021zero}, constructed by re-splitting a fact-based VQA benchmark named F-VQA \cite{wang2017fvqa}, has no overlap between the answers of the training samples and the answers of the testing samples. It has 5 different splits of the training set and the testing set. On average, the training set has $2,384$ questions, $1,297$ images and $250$ answers, while the testing set has $2,380$ questions, $1,312$ images and another $250$ answers. Chen et al. \cite{chen2021zero} extracted facts from three public KGs (DBpedia \cite{auer2007dbpedia}, ConceptNet \cite{speer2017conceptnet} and WebChild \cite{tandon2014webchild}), and constructed a KG as its auxiliary information for evaluating KG-aware methods.
\item \textbf{OK-VQA} \cite{marino2019ok} is a recent benchmark where the visual content of an image is not sufficient to answer the question. It has $14,031$ images and $14,055$ questions, and the correct answers are annotated by volunteers. Chen et al. \cite{chen2020ontology} used it for evaluating KG-aware zero-shot VQA, by extracting $768$ seen answers and $339$ unseen answers and using knowledge from ConceptNet as the auxiliary information. \end{itemize} Regarding few-shot VQA, the existing methods often rely on pre-trained language models such as GPT-3, which have already learned a large quantity of knowledge from text corpora (e.g., \cite{yang2021empirical,tsimpoukelli2021multimodal}). To incorporate images, visual language models can be pre-trained with images and text, or images can be transformed into text via, e.g., image captioning, so as to be utilized in language models (e.g., \cite{banerjee2021weaqa}). Meanwhile, meta learning algorithms have also been utilized for fast model training with only a small number of samples for each unseen answer (e.g., \cite{teney2018visual}). KGs could be quite useful by providing complementary knowledge besides the pre-trained (visual) language models and the few-shot samples, but we find only a couple of few-shot VQA studies that involve KGs, and no open benchmark resources. Yang et al. \cite{yang2021empirical} proposed a supervised learning method which uses knowledge retrieved from KGs for augmenting the question-answer samples, and this method was used as a baseline in comparison with the GPT-3-based method. Marino et al. \cite{marino2021krisp} first fused features of the question and the image by a Transformer-based model, and then fused these features with knowledge from ConceptNet. The aforementioned zero-shot VQA benchmarks ZS-F-VQA and OK-VQA, which rely on external knowledge and reasoning to get the answer, are quite suitable for evaluating KG-aware methods, and they can be easily adjusted by adding few-shot samples to support the few-shot VQA setting. \subsection{Natural Language Processing} \subsubsection{Knowledge Extraction} By knowledge extraction, we refer to those NLP tasks that extract structured data, including entities, relations and events, from natural language text. Note that relational facts, which are sometimes simply called triples in this domain, can also be extracted after entities and relations are recognized. Since the entities, relations or events can often be directly aligned with elements in a KG (such as a general-purpose KG or an event ontology), their relationships represented in the KG can be directly exploited to address both zero-shot and few-shot settings. KG-aware zero-shot methods often follow the mapping-based paradigm, utilizing the embeddings of the entities, relations or events in the KG \cite{huang2018zero,ma2016label,imrattanatrai2019identifying,li2020logic}, while KG-aware few-shot methods often follow the optimization-based paradigm, utilizing meta learning algorithms for fast training with the few-shot samples \cite{qu2020few,zhang2021knowledge}. Note that the mapping-based zero-shot methods could easily be extended to support the few-shot setting by training the mapping functions with samples of both seen and unseen classes, as done by Ma et al. \cite{ma2016label}. There are also some KG-aware methods that fuse features from a KG with the input features for addressing the zero-shot or the few-shot setting \cite{nguyen2021dozen,zhang2019long}.
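To illustrate the mapping-based recipe that dominates KG-aware zero-shot knowledge extraction, the sketch below projects entity mention features into the space of KG type embeddings and assigns the nearest unseen type by cosine similarity. The linear mapping, the dimensions and the names are assumptions reflecting the general recipe rather than any single cited method.
\begin{verbatim}
import torch
import torch.nn.functional as F

mention_dim, type_dim = 768, 100
# A linear mapping, assumed to have been trained on seen types.
W = torch.nn.Linear(mention_dim, type_dim, bias=False)

def predict_type(mention_feat, unseen_type_emb):
    # Nearest-neighbour search among unseen types after mapping.
    projected = F.normalize(W(mention_feat), dim=-1)   # (type_dim,)
    candidates = F.normalize(unseen_type_emb, dim=-1)  # (n, type_dim)
    return torch.argmax(candidates @ projected).item()

mention = torch.randn(mention_dim)       # e.g., BERT mention features
unseen_types = torch.randn(7, type_dim)  # e.g., KG type embeddings
print(predict_type(mention, unseen_types))
\end{verbatim}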
There have been quite a few benchmarks that can be used for evaluating KG-aware zero-shot and few-shot knowledge extraction methods. Here we introduce several representative benchmarks: \begin{itemize} \item \textbf{BBN}, \textbf{OntoNotes} and \textbf{Wikipedia} are three benchmarks for fine-grained named entity typing, where the entity types are (partially) matched with types in Freebase. They are all adopted by Ma et al. \cite{ma2016label} for evaluating zero-shot entity typing, where the training set has only coarse-grained types, while the testing set has the second-level (fine-grained) types. They also used a set of manually annotated documents (sentences) for validation and testing with a partitioning ratio of 1:9. Specifically, BBN has $2,311$ manually annotated Wall Street Journal articles, with around $48$K sentences and $93$ two-level hierarchical types \cite{weischedel2005bbn}. $47$ out of $93$ types are mapped to Freebase types using the DBpedia Spotlight entity linking tool. $459$ documents ($6.4$K sentences) are used for validation and testing. OntoNotes is an incrementally updated corpus that covers three languages (English, Chinese, and Arabic) and four genres (NewsWire, Broadcast News, Broadcast Conversation, and Web text) \cite{Weischedel2017OntoNotesA}. It has $13,109$ news documents that are manually annotated using $89$ three-level hierarchical types. $76$ manually annotated documents ($1,300$ sentences) are used for validation and testing. Wikipedia has around $780.5$K Wikipedia articles ($1.15$M sentences), and $112$ fine-grained Freebase type annotations. $434$ manually annotated sentences are used for validation and testing. \item \textbf{NYT10} and \textbf{WEB19} are two benchmarks used in \cite{imrattanatrai2019identifying} for zero-shot relation (property) extraction. NYT10 is constructed from Freebase triples and the New York Times (NYT) corpus \cite{riedel2010modeling}. WEB19 is formed by first selecting predicate paths in the FB15k benchmark \cite{bordes2013translating} as properties, then generating samples (a text corpus) associated with these properties using the Microsoft Bing search engine API with the aid of human evaluation \cite{imrattanatrai2019identifying}. Under the ZSL setting in \cite{imrattanatrai2019identifying}, $217$ and $54$ properties of WEB19 are set to seen (for training) and unseen (for validation and testing), respectively, while all of the $54$ properties of NYT10 are used as unseen (for testing). \item \textbf{ACE05} is a corpus for event extraction, annotated by $33$ fine-grained types which are sub-types of $8$ coarse-grained main types such as Life and Justice from the ACE (Automatic Content Extraction) ontology. Huang et al. \cite{huang2018zero} defined two zero-shot event extraction settings: \textit{(i)} predicting $23$ unseen fine-grained sub-types by training on $1$, $3$, $5$, or $10$ seen sub-types; \textit{(ii)} predicting unseen sub-types that belong to other main types by training on seen sub-types of Justice. \end{itemize} It is worth mentioning that all these benchmarks can also be easily extended to support few-shot settings by adding a small number of labeled samples to the unseen classes. For example, for BBN, OntoNotes and Wikipedia, this can be simply implemented by partitioning some annotated sentences in the validation and testing sets into the training set, as sketched below.
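The following is a minimal sketch of that few-shot extension (the helper name, the uniform sampling policy and the fixed seed are our own assumptions): $k$ labeled samples per class are moved from the testing set into the training set, turning a zero-shot split into a $k$-shot split.
\begin{verbatim}
import random
from collections import defaultdict

def to_few_shot(train, test, k=10, seed=0):
    # Move k labeled samples per class from the test set into the
    # training set; train/test are lists of (sample, label) pairs.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in test:
        by_class[label].append((sample, label))
    few_shot, new_test = [], []
    for label, samples in by_class.items():
        rng.shuffle(samples)
        few_shot.extend(samples[:k])
        new_test.extend(samples[k:])
    return train + few_shot, new_test
\end{verbatim}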
\subsubsection{Text Classification} In modern text classification, contextual and non-contextual word embedding models such as Word2Vec \cite{mikolov2013distributed} and BERT \cite{devlin2019bert} are often used for text representation, through which external knowledge from corpora can be easily incorporated. Zero-shot text classification can also be addressed by simply representing unseen classes by word embeddings and feeding them into a prediction model together with the original text input (e.g., \cite{ye2020zero}). Therefore, utilizing KGs for augmenting zero-shot text classification has not been widely investigated. Recently, Rios et al. \cite{rios2018few} and Zhang et al. \cite{zhang2019integrating} utilized the class hierarchies extracted from ICD-9 and ConceptNet, respectively, for augmenting CNN-based zero-shot text classifiers, while Chen et al. \cite{chen2021zerotext} used ConceptNet for augmenting a BERT-based zero-shot text classifier where word vectors are tailored by the KG structure for embedding the classes. There are many typical benchmarks for both normal and zero-shot text classification \cite{yin2019benchmarking}. They can be used for evaluating KG-aware text classification with a class splitting setting and a strategy to match classes to KG entities. Here are two typical examples of using existing benchmarks for evaluating KG-aware zero-shot text classification: \begin{itemize} \item \textbf{DBpedia-Wikipedia} is a text classification dataset originally proposed in \cite{Zhang2015CharacterlevelCN}. It has $40,000$ training samples and $5,000$ testing samples per class, which are collected from Wikipedia and are annotated by $14$ non-overlapping classes defined in the DBpedia ontology \cite{auer2007dbpedia}. Zhang et al. \cite{zhang2019integrating} proposed two ZSL settings for this dataset: using $11$ or $7$ classes as the seen classes, and the remaining classes as the unseen classes. \item \textbf{20 Newsgroups} is a popular text classification dataset which consists of around $20,000$ newsgroup articles that are almost evenly divided into $20$ newsgroups\footnote{\url{http://qwone.com/~jason/20Newsgroups/}}. Zhang et al. \cite{zhang2019integrating} proposed to use $15$ or $10$ newsgroups as the seen classes, and the remaining newsgroups as the unseen classes. \end{itemize} Few-shot text classification is similar to zero-shot text classification: the majority of the solutions directly utilize different kinds of word embeddings, while research on KG-aware methods is rare. Besides the ICD-9 augmented CNN classifier which was applied to both few-shot and zero-shot text classification \cite{rios2018few}, the joint mapping method recently proposed by Sui et al. \cite{sui2021knowledge} utilizes knowledge retrieved from NELL for augmenting a network which calculates the matching of the input and the class. The existing text classification benchmarks can be adopted for evaluating KG-aware few-shot text classification. Here is one typical example: \begin{itemize} \item \textbf{ARSC} is a widely used benchmark for binary text classification of sentiment \cite{Blitzer2007BiographiesBB}. It was generated from Amazon reviews for $23$ products (classes). Product reviews with ratings $>3$ and $<3$ are labeled as positive and negative, respectively, while the rest are discarded. In the evaluation in \cite{sui2021knowledge}, $12$ products including books, DVDs, electronics and kitchen appliances are selected as the unseen classes, for each of which $5$ labeled reviews are given.
\end{itemize} \subsubsection{Question Answering} Low-resource question answering (QA)\footnote{The scope of QA is actually quite wide. It often includes or has a high overlap with quite a few problems such as VQA, Knowledge Base QA, Table QA and Machine Reading Comprehension (MRC). In this part, we just refer to the problem of giving an answer or answers to a natural language question w.r.t. a context described by text.} started to attract wide attention in recent years, mainly due to the fast development of pre-trained language models such as BERT and GPT-3, which are inherently capable of addressing ZSL and FSL problems in NLP since a large quantity of knowledge is learned from large-scale corpora and represented as parameters \cite{yang2021empirical,wei2021finetuned,ma2021knowledge}. Similar to text classification, the output answer (class) is often regarded as an additional input and fed to a prediction model together with the original question input. It is worth mentioning that the specific setting of zero-shot QA may vary a bit from study to study. These settings mostly still satisfy our general ZSL definition, which mainly requires that the classes (answer labels) for prediction have no associated training data, and they are often harder. Ma et al. \cite{ma2021knowledge} regarded testing the model on a dataset that is different from the datasets used for training as a zero-shot QA problem. Under this setting, they evaluated several different methods, including the KG-aware method by \cite{banerjee2020self} and some pretrained language model-based methods, by splitting five different datasets. Note that each dataset is constructed with one kind of external knowledge, and different datasets correspond to different tasks with disjoint answer sets. Zhong et al. \cite{zhong2021adapting} proposed to test the QA model on datasets that are different enough from the datasets for model training, where all the datasets are described by properties such as domain, emotion and so on. Very recently, Wei et al. \cite{wei2021finetuned} proposed an even harder setting. They also fine-tuned language models on a collection of datasets, and tested the models on a different dataset. However, the datasets, which are described via instructions, have not only different tasks but also different task types including commonsense QA, summarization, sentiment classification and so on. Although pre-trained language models have contained much knowledge via large scale parameters, symbolic knowledge (including commonsense and domain knowledge with logics) from KGs is often complementary and beneficial for addressing zero-shot QA. Therefore, there have been some KG-aware zero-shot QA studies \cite{bosselut2021dynamic,banerjee2020self}. For example, Banerjee et al. \cite{banerjee2020self} modeled the QA problem via knowledge triple learning where the context, question and answer are modeled as a triple, and the answer is predicted given the context and question. Their knowledge triple learning model is learned from KG triples. \cz{Similar to \cite{banerjee2020self}, Zhou et al. \cite{zhou2021encoding} also framed the multiple-choice QA task as a knowledge completion (triple prediction) problem, where the model is trained by alternately masking the subjects and the objects in triples}. Bosselut et al.
\cite{bosselut2021dynamic} used COMET, a Transformer-based model trained on commonsense KGs such as ConceptNet \cite{bosselut2019comet}, to generate context-relevant commonsense triples for each QA sample, through which the answer can be directly inferred via symbolic reasoning. KGs are also beneficial to few-shot QA, although pre-trained language models already contain much knowledge and have achieved good performance. For example, Banerjee et al. \cite{banerjee2020self} directly extended their knowledge triple learning model from zero-shot QA to a few-shot QA setting where $8\%$ of the training data are given as the few-shot samples. Similarly, Bosselut et al. \cite{bosselut2021dynamic} also extended their zero-shot QA method, which infers the answer of a question according to its context-relevant commonsense triples, to few-shot QA by using $4$, $10$ or $20$ validation samples in evaluation. However, due to challenges such as retrieving exactly relevant knowledge from a large KG and injecting KG knowledge into pre-trained language models, the investigation of KG-aware zero-shot and few-shot QA is still quite preliminary. There are quite a few widely used QA benchmarks such as PhysicalIQA which is for commonsense physical reasoning \cite{bisk2020piqa} and MC-TACO which is about multiple choice temporal commonsense (i.e., temporal aspects of events) \cite{zhou2019going}. They can be used for benchmarking zero-shot and few-shot QA after applying some suitable dataset partitioning strategies. For evaluating KG-aware low-resource methods, we suggest benchmarks that are constructed with KGs or have been partially aligned with KG entities. The tasks of these benchmarks often rely on external knowledge, and their corresponding external KGs can be directly used for evaluating KG-aware methods. Here are some such benchmarks: \begin{itemize} \item \textbf{SocialIQa} \cite{sap2019social} is a large-scale QA resource for evaluating a model’s capability to understand the social dynamics underlying situations described in short text snippets. It has 38K QA pairs. Each sample consists of a context, a question about that context, and three multiple choice answers from crowdsourcing. Commonsense knowledge (i.e., seeds for creating the contexts and answers) is extracted from the event KG ATOMIC \cite{sap2019atomic}. This dataset was used by Bosselut et al. \cite{bosselut2021dynamic} and Banerjee et al. \cite{banerjee2020self} for evaluating their KG-aware zero-shot and few-shot methods. \item \textbf{CommonsenseQA} \cite{talmor2019commonsenseqa} is a challenging dataset for evaluating commonsense QA methods. It has $12,247$ questions in total, while each question has 5 answer candidates. The ground truth answers are annotated by crowdsourcing based on question relevant subgraphs of ConceptNet \cite{speer2017conceptnet}. Like SocialIQa, CommonsenseQA was adopted by Banerjee et al. \cite{banerjee2020self} for evaluation. \item \textbf{STORYCS} \cite{lucy2017distributional} consists of short 5-sentence stories with annotated motivations and emotional responses. It was originally designed for emotion classification, where the labels are drawn from classical theories of psychology. Bosselut et al. \cite{bosselut2021dynamic} transformed the classification task into a QA task by posing an individual question for each emotion label, and used it for evaluating their KG-aware method for both zero-shot and few-shot settings.
\item \textbf{aNLI} \cite{bhagavatula2020abductive}, \textbf{QASC} \cite{khot2020qasc}, \textbf{OpenBookQA} \cite{mihaylov2018can} and \textbf{ARC} \cite{bhakthavatsalam2021think} were adopted by Banerjee et al. \cite{banerjee2020self} for evaluating their KG-augmented triple learning model for zero-shot and few-shot QA, besides SocialIQa and CommonsenseQA. Specifically, aNLI, which has around 171K QA pairs, is a dataset with commonsense knowledge, while QASC, OpenBookQA and ARC, whose sample sizes range from 6K to 10K, are three QA datasets with scientific knowledge. \cz{\textbf{OpenBookQA} and \textbf{ARC} were also adopted by Zhou et al. \cite{zhou2021encoding} for zero-shot QA.} \end{itemize} \subsection{Knowledge Graph Completion} KG completion is to infer new knowledge in a KG, while most existing studies aim at predicting missing relational facts (triples), which is sometimes called link prediction. In this part we mainly consider KG completion for relational facts. Under a low-resource scenario, we are often required to handle the entities and/or relations that newly emerge after the KG embeddings have been learned. Since the majority of current works aim at either unseen entities or unseen relations, and the solutions to addressing unseen entities and unseen relations are often different, we introduce low-resource learning studies for unseen entities and unseen relations separately. \subsubsection{KG Completion with Unseen Entities} To address unseen entities in zero-shot KG completion, the existing studies often utilize their auxiliary information such as names, textual descriptions and attributes, typically using methods of the mapping-based paradigm \cite{shah2019open,hao2020inductive} and the class feature paradigm \cite{zhao2017zero,niu2021open,wang2021kepler,amador2021ontology,shi2018open,wang2021inductive,zha2021inductive,wang2021structure,yao2019kg}. With the explosive growth of zero-shot KG completion methods for unseen entities, various benchmarks have been proposed for evaluation. They are usually constructed based on existing commonly used normal KG completion datasets, including FB15k \cite{bordes2013translating}, FB15k-237 \cite{toutanova2015representing}, WordNet11 \cite{socher2013reasoning}, WN18RR \cite{dettmers2018convolutional}, NELL-995 \cite{xiong2017deeppath}, and some other sub-KGs extracted from popular KGs such as DBpedia \cite{auer2007dbpedia} and Wikidata \cite{vrandevcic2014wikidata}. Their entity auxiliary information is often collected from the benchmarks' original KGs or some associated public resources. For example, the textual descriptions of entities in DBpedia50k, FB20k and Wikidata5M can be collected from DBpedia, Freebase and Wikipedia, respectively. In \cite{daza2021inductive}, the textual descriptions of FB15k-237 entities are extracted from the introduction section of their corresponding Wikipedia pages. Although these benchmarks vary in the ratio of unseen entities, their construction often follows a common procedure. Given an original KG completion benchmark, a set of entities is selected as unseen entities. Namely, their associated triples in the training set are removed, and their associated triples in the testing set are kept. Then the relations that appear in both the training set and the testing set are selected. For a testing triple to predict, there could be two cases: \textit{(i)} either its head or its tail is an unseen entity while the other is a seen entity, and \textit{(ii)} both its head and tail are unseen entities.
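As a small illustration, the hypothetical snippet below (the function and variable names are ours) partitions a testing set into these two cases given the set of unseen entities:
\begin{verbatim}
def split_test_triples(test_triples, unseen):
    # test_triples: iterable of (head, relation, tail) tuples;
    # unseen: set of unseen entities.
    one_unseen, both_unseen = [], []
    for h, r, t in test_triples:
        n = (h in unseen) + (t in unseen)
        if n == 1:
            one_unseen.append((h, r, t))   # case (i)
        elif n == 2:
            both_unseen.append((h, r, t))  # case (ii)
    return one_unseen, both_unseen
\end{verbatim}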
Accordingly, we regard the benchmark whose testing triples are all of the first case as \textit{semi-ZS}, the benchmark whose testing triples are all of the second case as \textit{fully-ZS}, and the benchmark that has both the first case and the second case testing triples as \textit{mixture-ZS}. Here are some typical benchmarks for zero-shot KG completion with unseen entities: \begin{itemize} \item \textbf{FB15k-237-OWE} \cite{shah2019open} is a typical \textit{semi-ZS} benchmark built on FB15k-237 by a sampling strategy. First, testing triples whose tail entities are to be predicted are collected. Specifically, a set of tail entities are selected, and some associated head entities are randomly picked from the FB15k-237 triples (by uniform sampling over all the associated head entities). Each picked head entity $x$ is removed from the training graph by moving all triples of the form ($x, ?, t$) to the test set and dropping all triples of the form ($?, ?, x$) if $x$ still remains in the training set. Similarly, testing triples whose head entities are to be predicted are collected. Then a testing set is generated by merging the above two kinds of testing triples and removing the testing triples whose relations are not in the training set. This testing set is further split into a validation set and the final testing set. The dataset contains $2,081$ unseen entities, $12,324$ seen entities and $235$ relations. The numbers of triples for training, validation and testing are $242,489$, $10,963$ and $36,250$, respectively. \item \textbf{DBpedia50k} and \textbf{DBpedia500k} \cite{shi2018open} are also typical \textit{semi-ZS} benchmarks, constructed in a similar way as FB15k-237-OWE. DBpedia50k has $49,900$ entities and $654$ relations, with $32,388$, $399$ and $10,969$ training, validation and testing triples, respectively. DBpedia500k has $517,475$ entities and $654$ relations, with $3,102,677$, $10,000$ and $1,155,937$ training, validation and testing triples. \item \textbf{Wikidata5M} \cite{wang2021kepler}, originally developed for evaluating text-aware KG embedding methods, is an important \textit{fully-ZS} benchmark. It is constructed based on the Wikidata dump and the English Wikipedia dump. Each entity in Wikidata is aligned to a Wikipedia page and this page's first section is extracted as the entity's textual description. Entities with no Wikipedia pages or with descriptions shorter than 5 words are discarded. Next, all the relational facts (triples) are extracted from the Wikidata dump. One triple is kept if both of its entities are not discarded and its relation has a corresponding nonempty page in Wikipedia; otherwise, this triple is discarded. The benchmark contains $4,594,485$ entities, $822$ relations and $20,624,575$ triples. To support the zero-shot setting, Wang et al. \cite{wang2021kepler} randomly extracted two sub-KGs as the validation set and the testing set, respectively, and used the remaining as the training set. They ensured that the entities and triples are mutually disjoint across the training set, the validation set and the testing set. As a result, the three sets have $4,579,609$, $7,374$ and $7,475$ entities, respectively, $822$, $199$ and $201$ relations, respectively, and $20,496,514$, $6,699$ and $6,894$ triples, respectively. \item \textbf{FB20k} \cite{xie2016representation} is a complex KG completion benchmark with testing triples of several different kinds.
It has the same training set and validation set as the normal KG completion dataset FB15k, but extends FB15k's testing set by adding testing triples involving unseen entities. Specifically, a candidate set of unseen entities is first selected from Freebase; these entities should be associated with some FB15k entities within one hop. Then, some new triples are extracted from Freebase and added to the testing set. For each such new triple, its relation should already be in FB15k, either its head or tail should be an unseen entity, and its other entity should already be in FB15k. Consequently, the new testing set has $4$ kinds of triples: those whose head and tail are both seen entities, those whose heads are unseen and whose tails are seen, those whose tails are unseen and whose heads are seen, and those whose heads and tails are both unseen. The first kind of testing triples are for normal KG completion, while the other three kinds are for zero-shot KG completion. So the task of FB20k can be understood as generalized zero-shot KG completion. The numbers of the test triples of the above four types are $57,803$, $18,753$, $11,586$, and $151$, respectively, and all these triples involve $19,923$ entities. The subsets of FB15k-237 and WN18RR proposed in \cite{daza2021inductive} are also typical benchmarks of this type, but they are constructed by selecting $10\%$ of the entities of the original KGs and using their associated triples for testing. \end{itemize} In few-shot KG completion, unseen entities usually have a small number of associated triples that can be utilized. The current methods often aim to fully utilize these triples, mainly by methods of the propagation-based paradigm \cite{zhao2020attention,albooyeh2020out,hamaguchi2017knowledge,bhowmik2020explainable,wang2019logic,dai2020inductively,ali2021improving}, the transfer-based paradigm \cite{teru2020inductive,liu2021indigo,chen2021topology,sadeghian2019drum} and the optimization-based paradigm \cite{baek2020learning}. As with zero-shot KG completion, several few-shot KG completion benchmarks with unseen entities have been constructed using existing KG completion benchmarks. According to the type of the entity that an unseen entity is linked to, we categorize these benchmarks into three categories. For the first category, the entity linked to is seen in training, which means the few-shot triples connect the unseen entities with some seen entities. These few-shot triples can be utilized to propagate embeddings from seen entities to unseen entities by e.g., GNNs \cite{hamaguchi2017knowledge,wang2019logic,bhowmik2020explainable}. Typical benchmarks of this category include subsets extracted from WordNet11 by \cite{hamaguchi2017knowledge}, subsets extracted from FB15k by \cite{wang2019logic}, and subsets extracted from WN18RR, FB15k-237 and NELL-995 by \cite{bhowmik2020explainable}. For the second category, the entity linked to is also an unseen entity. These benchmarks are to evaluate the generalization ability of a model trained on one KG to another KG with different entities or to an emerging sub-KG with new entities. Methods of the transfer-based paradigm, which transfer learned graph patterns in the form of GNNs or rules, can often be adopted. Typical benchmarks of this category include subsets of WN18RR, FB15k-237 and NELL-995 extracted by \cite{teru2020inductive}. In the third category, the entity linked to can be either unseen or seen.
Typical benchmarks include subsets of WN18RR, FB15k-237 and NELL-995 contributed in \cite{baek2020learning}, where a meta learning method is applied to learn the embeddings of unseen entities from their few-shot triples. Next, we will introduce more details of some representative benchmarks of each category: \begin{itemize} \item \textbf{Subsets of WordNet11 by Hamaguchi et al.} \cite{hamaguchi2017knowledge} are typical benchmarks of the first category. They are constructed in the following way. First, entities involved in the original testing set are extracted as unseen entities, while all the other entities in the original benchmark are regarded as seen entities. Among these unseen entities, those that are associated with only seen entities in the original training triples are kept and the others are discarded. Second, the original training triples that do not contain any unseen entities are selected for the new training set, those that contain exactly one unseen entity are selected as the few-shot samples, and those that contain two unseen entities are discarded. Next, the new testing set is constructed by reusing the original testing triples and removing those containing no unseen entities. Nine subsets of different scales are extracted for the few-shot setting, by setting the size of testing triples for extracting unseen entities to $1,000$, $3,000$ and $5,000$, and by setting the position for extracting unseen entities to head, tail and both. \item \textbf{Subsets of WN18RR, FB15k-237 and NELL-995 by Teru et al.} \cite{teru2020inductive} are typical benchmarks of the second category. They are constructed in the following way. Given one original benchmark, two disjoint graphs are sampled: the \textit{train-graph} for training the model and the \textit{ind-test-graph} for testing. It is ensured that the two graphs' entity sets are disjoint, while the relations of the \textit{ind-test-graph} are all involved in the \textit{train-graph}. In particular, $10\%$ of the triples of the \textit{ind-test-graph} are randomly selected for testing. These benchmarks are also adopted for evaluation in \cite{chen2021topology}. \item \textbf{Subsets of WN18RR, FB15k-237 and NELL-995 by Baek et al.} \cite{baek2020learning} are typical benchmarks of the third category. These subsets are extracted from each original benchmark in the following way. First, a set of entities, which have a relatively small number of associated triples, is randomly sampled as the unseen entities, and they are further partitioned and used for constructing three meta sets of triples: a meta-training set, a meta-validation set and a meta-testing set. The other entities in the original benchmark are regarded as seen entities. Second, triples composed of seen entities alone are extracted to construct a graph named \textit{In-Graph}. Finally, the meta sets are cleaned, such that each of their triples has at least one unseen entity and all the triples are outside of \textit{In-Graph}.
\end{itemize} \subsubsection{KG Completion with Unseen Relations} Zero-shot KG completion with unseen relations usually utilizes the relations' auxiliary information such as their names and descriptions, mainly using methods of the data augmentation paradigm \cite{geng2021ontozsl,qin2020generative,geng2021benchmarking} and the class feature paradigm \cite{zha2021inductive,wang2021structure,yao2019kg}, while few-shot KG completion with unseen relations usually relies on the few-shot triples, using methods of the optimization-based paradigm \cite{wang2019meta,chen2019meta,zhang2020fewb,lv2019adapting}, the mapping-based paradigm \cite{zhang2020fewa,xiong2018one} and the propagation-based paradigm \cite{zhao2020attention}. Different from KG completion with unseen entities, there are fewer benchmarks for KG completion with unseen relations. We find NELL-ZS and Wiki-ZS for the zero-shot setting, and NELL-One and Wiki-One for the few-shot setting. NELL-ZS and NELL-One are sub-KGs extracted from NELL, while Wiki-ZS and Wiki-One are sub-KGs extracted from Wikidata. Their details are introduced as follows: \begin{itemize} \item \textbf{NELL-ZS} and \textbf{Wiki-ZS} \cite{qin2020generative} are two zero-shot KG completion benchmarks with unseen relations. Each benchmark has three disjoint relation sets: a training set with seen relations, and a validation set and a testing set with unseen relations. These sets compose the triples for training, validation and testing, respectively. The entities in the testing triples and the validation triples have all been involved in some training triples. NELL-ZS has $139$, $10$ and $32$ training, validation and testing relations, respectively, and $65,567$ entities, while Wiki-ZS has $469$, $20$ and $48$ training, validation and testing relations, respectively, and a total of $605,812$ entities. Qin et al. \cite{qin2020generative} used relation textual descriptions as the auxiliary information and implemented a text feature learning and generation based method, while Geng et al. \cite{geng2021ontozsl,geng2021benchmarking} constructed ontological schemas, which contain not only textual information but also relation hierarchies, relation domains and ranges, relation characteristics and so on, for both benchmarks, and proposed a text-aware ontology embedding and generation based method. \item \textbf{NELL-One} and \textbf{Wiki-One} were originally developed by Xiong et al. \cite{xiong2018one} for evaluating few-shot KG completion with unseen relations that have only one triple given. During construction, relations that are associated with fewer than $500$ triples but more than $50$ are extracted from the original KGs as task relations (i.e., one relation corresponds to one task). In NELL-One, $67$ such relations are extracted and they are partitioned into $51$, $5$ and $11$ for constructing triples of the training set, validation set and the testing set, respectively; while in Wiki-One, $183$ such relations are extracted and they are partitioned into $133$, $16$ and $34$ for constructing triples of the training set, validation set and the testing set, respectively. Meanwhile, $68,545$ entities are extracted for NELL-One, and $4,838,244$ entities are extracted for Wiki-One. In addition, another $291$ and $639$ relations are extracted, respectively, as background relations for constructing more triples for the entities.
Note that these two benchmarks can also be simply revised and used to support \textit{i)} zero-shot KG completion as in \cite{wang2021structure}, since the relations in the training set, validation set and testing set are mutually disjoint, and \textit{ii)} $k$-shot ($k > 1$) KG completion by adding more given triples. \end{itemize} To sum up, we find there are quite a few resources for evaluating KG completion with unseen entities, but there is a shortage of widely recognized and adopted ones. In contrast, the benchmarks for KG completion with unseen relations are widely recognized and used in different studies. Thus the methods could be more fairly compared. Besides, there is a shortage of widely recognized benchmarks for evaluating KG completion with both unseen entities and relations, which is more challenging. \section{Challenges and Future Directions}\label{sec:challenge} \subsection{KG Construction for Augmenting Low-resource Learning} One critical challenge of using KGs for augmenting low-resource learning is constructing a high quality KG with exactly the necessary knowledge for a task. As introduced in Section \ref{sec:kg}, there are two mainstream solutions to get the KG: \textit{(i)} directly using the existing general-purpose or domain specific KGs by extracting relevant parts, where the alignment between the task data and the KG and between different KGs is often assumed to be given or manually conducted, and \textit{(ii)} extracting knowledge from task specific data such as class annotations and samples (e.g., scene graphs extracted from images and class correlations mined from samples). The challenges and the potential future directions may lie in the following three aspects. First, the alignment between data and knowledge is rarely investigated, and the impact of wrong mappings as well as other erroneous knowledge in the KG has never been evaluated and analyzed in the current zero-shot and few-shot studies. Applying the existing KG construction tools, especially those for KG alignment, error detection and correction (e.g., \cite{chen2021assertion,he2021bertmap,jimenez2011logmap}), to low-resource learning could lead to higher performance as well as more practical and automatic systems, especially for new tasks. Second, the coverage of necessary knowledge and the ratio of irrelevant knowledge are often ignored in investigating a KG's usefulness towards a ZSL or FSL task, and there is a shortage of methods that are able to retrieve the relevant knowledge from a large scale KG for a given task. Thus analyzing the impact of KG quality, such as the error rate, the knowledge coverage and the knowledge representation, could also be a promising direction, while some new solutions, such as iterative knowledge retrieval with some feedback from the task in each iteration, are highly needed to make KGs play a more important role in low-resource learning. Third, there is a shortage of resources for evaluating the role of the KG in KG-aware ZSL and FSL methods. Some existing ZSL and FSL benchmarks are associated with KGs, but the KGs are usually fixed, which makes it harder to investigate different KG settings. Meanwhile, the existing benchmarks are limited to some typical tasks introduced in Section \ref{sec:app}.
In the future, more ML tasks in not only CV and NLP, but also other domains such as bioinformatics, health data analysis, urban computing and e-commerce, can be considered for benchmarking, with both general-purpose KGs and domain specific ontologies, taxonomies and logical rules. \subsection{KG-aware Low-resource Learning Paradigms} \subsubsection{Zero-shot Learning} The existing KG-aware ZSL methods lie in four paradigms: mapping-based, data augmentation, propagation-based and class feature (see Table \ref{table:zsl}). The mapping-based paradigm is the most widely investigated, while the data augmentation paradigm has only four methods, one of which belongs to the rule-based category while the remaining three belong to the generation model-based category. Directly using rules to generate numeric samples or features is often quite hard, but in some cases such as zero-shot KG completion, we believe it is feasible to use ontological schemas and logical rules to infer symbolic knowledge (e.g., triples) for the unseen entities and/or relations. In contrast, the generation model-based methods (e.g., OntoZSL \cite{geng2021ontozsl}) can generate numeric samples and features well by statistical models. They can flexibly choose the downstream model after data are generated, and thus are not biased to seen or unseen classes in prediction in comparison with the widely investigated mapping-based methods. Therefore, we think that generation-based ZSL methods conditioned on the embeddings of KGs could be a promising solution and are worth more investigation in the future. Meanwhile, we also think that the idea of belief propagation, which belongs to the propagation-based paradigm, is quite reasonable, especially for CV tasks such as scene understanding and VQA. Extracting semantic relationships of objects in a scene image (a.k.a. scene graph extraction) has been a popular topic (more can be found in the workshop on Scene Graph Representation and Learning\footnote{\url{https://cs.stanford.edu/people/ranjaykrishna/sgrl/index.html}} and a recent survey paper by Chang et al. \cite{chang2021scene}). When the scene graph extracted from an image is aligned with an external KG and more semantics is added, unseen objects could then be recognized by knowledge inference. The class feature paradigm is also quite popular, especially for tasks whose inputs are also text, such as question answering and entity extraction. By transforming the class into additional text feature input, unknown classes can be addressed by word embeddings, through which knowledge from external corpora could be encoded and injected. With the development of large-scale pre-trained language models such as BERT \cite{devlin2019bert} and GPT-3 \cite{brown2020language} as well as prompt learning and paradigm shift techniques \cite{sun2021paradigm,liu2021pre}, which reformulate tasks to make them more suitable for pre-trained language models, ZSL methods of the text feature fusion category will become more popular, while KGs will play a complementary role by providing knowledge that these parameter-based language models cannot represent. It is worth mentioning that there are no KG-aware ZSL methods of the optimization-based paradigm and the transfer-based paradigm, which are proposed for categorizing KG-aware FSL methods.
This is because both the meta learning algorithms applied in the optimization-based paradigm and the directly transferred models (neural networks or rules) rely on samples related to the testing sample as additional input. For example, considering a triple to predict with unseen entities or relations, its neighbouring triples, which form a graph with certain patterns, are utilized by the transferred rules or GNNs as critical evidence. A feasible solution could be to first generate some few-shot samples with rules or generation models, and then apply the optimization-based and transfer-based methods. \subsubsection{Few-shot Learning} As shown in Table \ref{table:fsl}, there are $6$ paradigms for KG-aware FSL methods, which are more diverse than those for KG-aware ZSL methods, mainly due to the availability of additional few-shot samples. Some of the KG-aware FSL methods extend the models for ZSL by training them with additional few-shot samples or integrating them with the models or the results that are based on the few-shot samples. Most mapping-based ZSL methods (e.g., \cite{ma2016label,akata2015label,rios2018few}) could be extended to FSL by training the mapping function(s) with additional few-shot samples, while most data augmentation ZSL methods could also be extended to FSL by merging the generated data with the few-shot samples (e.g., \cite{tsai2017improving}). As in ZSL, we find KG-aware data augmentation could be a promising solution but has not been widely investigated for FSL. Some other KG-aware FSL methods are originally developed to utilize the few-shot samples, but the utilization methods are augmented by KGs or extended for KG contexts. Two typical kinds of such methods are the optimization-based and the transfer-based, neither of which can be applied to ZSL due to their dependence on the few-shot samples. We find they are mostly extensions (or straightforward applications) of meta learning and transfer learning methods to few-shot tasks in KG completion, while KG-augmented meta learning and model transfer for non-KG-completion tasks have not been widely investigated. In the future, representing knowledge on learning (e.g., background on the task and previous learning experience) as KGs, and integrating them with meta learning or transfer learning algorithms, could lead to more general neural-symbolic paradigms that are applicable to different FSL tasks. Regarding the propagation-based paradigm, the category of embedding propagation is new to KG-aware FSL. Methods of this kind often aim at few-shot KG completion by utilizing the few-shot samples, i.e., triples that link unseen entities or relations to seen ones. A promising solution would be to utilize both these few-shot links and the unseen entities' or relations' auxiliary information such as textual descriptions, attributes and schemas. \subsection{Low-resource Learning during KG Construction} Nowadays KG construction uses not only heuristics (e.g., hand-crafted rules and templates), symbolic knowledge engineering and manual curation, but also ML prediction for (semi-)automation. ML tasks in KG construction range from knowledge extraction (e.g., extracting knowledge from unstructured natural language text and semi-structured tables, Web pages and encyclopedia information boxes) to knowledge curation including KG completion, KG alignment, entity resolution, entity typing, schema learning, error detection and correction, and so on \cite{weikum2020machine,paulheim2017knowledge}.
Besides the tasks of KG completion and knowledge extraction from text that we have discussed in this paper, almost all the other prediction-based KG construction tasks would also suffer from sample shortage and lead to low-resource learning problems. One typical example is the task of matching table column types to KG classes, which is the foundation of KG population with tabular data\footnote{Different forms of tables such as databases, CSV and excel files are still among the most popular for data storage, and it is easier to extract high quality knowledge from tables than from totally unstructured text. Thus tabular data often have the highest priority to be utilized for constructing KGs in applications.}. It is common that some ontology classes do not have enough table columns for training, or that some new classes are defined and added to the KG. In that case, some solutions, such as generating synthetic columns according to the hierarchical classes and their entities \cite{chen2019colnet}, are needed. Other tasks in building KGs from tables, such as matching inter-column relationships to KG relations and matching table cells to KG entities, all suffer from similar sample shortage issues. Another typical example is detecting erroneous relational facts in a KG, where positive samples (i.e., errors) are often hard to collect, and thus many facts for prediction are associated with relations or entities that have never appeared in the training samples. Currently, the sample shortage issue has been widely investigated in extracting knowledge (e.g., entities, relations, triples and events) from text. However, most of the existing low-resource learning studies mainly focus on utilizing existing ML solutions such as distant supervision with heuristics, meta learning algorithms and additional features, and only a small number of them consider integrating KGs. The direction of using KGs for augmenting zero-shot and few-shot text knowledge extraction still has much room for exploration. Using KGs for augmentation in these tasks is actually quite reasonable since the targets for prediction (e.g., entities, relations and events) are often already in an existing KG. Meanwhile, the zero-shot and few-shot settings for knowledge extraction from other kinds of data besides natural language text, such as tabular data and Web pages, should also be investigated so as to make these methods and tools more automatic and adaptable to new or evolving contexts. In KG curation, zero-shot and few-shot link prediction, which are sometimes known as semi-inductive or fully inductive link prediction, have been widely investigated, while the zero-shot and few-shot settings of the other tasks including entity typing, schema learning, entity resolution and error detection and correction have rarely been considered. In the future, we think it necessary and promising to consider more KG curation contexts, such as an evolving ontological schema, a continuously updated KG and a heterogeneous KG constructed by integrating multiple KGs, together with different tasks besides link prediction, for building more zero-shot and few-shot problem settings (e.g., incremental ZSL \cite{wei2021incremental}), tasks and evaluation resources. \section{Conclusion}\label{sec:conclusion} KGs have been a popular approach to augment ZSL and FSL methods, while KG construction and curation also encounter many ML prediction tasks with sample shortage.
Thus KG-aware low-resource learning has attracted wide attention in domains including CV, NLP, ML and the Semantic Web, and is becoming increasingly popular. In this survey, we systematically reviewed over $90$ KG-aware studies for addressing low-resource ML problems from multiple perspectives including the KG, the methodology and the application. These studies cover KG-augmented ZSL and FSL methods, as well as prediction tasks during KG construction and curation with sample shortage settings. We first introduced KGs that have been applied for low-resource learning as well as methods for constructing task-specific KGs, then separately reviewed the methods for KG-aware ZSL and FSL, by dividing them into different paradigms, each of which was further introduced and summarized with categories, and finally presented the development of ZSL and FSL in different tasks in CV, NLP and KG completion, with resources that can be used for evaluating KG-aware methods listed and summarized. Besides, we also analyzed and discussed the challenges of KG-aware low-resource learning and KG construction as well as future directions on new ZSL and FSL paradigms and methods, and new tasks and resources. \section*{Acknowledgments} This work was supported by the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), eBay, Samsung Research UK, Siemens AG, and the EPSRC projects OASIS (EP/S032347/1), UK FIRES (EP/S019111/1) and ConCur (EP/V050869/1). \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Pre-trained vision and language foundation models~\cite{clip,align} have shown encouraging results toward open-domain visual-concept matching. Benefiting from prompt engineering~\cite{entailment,declaration}, where free-form text prompts are designed for specific task goals, those foundation models can be easily transferred to a wide array of tasks under zero-shot and few-shot scenarios, including image classification~\cite{imagenet}, visual question answering~\cite{how_much_can_clip}, image-text retrieval~\cite{align}, etc. But manually constructing prompts for vision and language models such as CLIP is a tedious, time-consuming process, which usually requires prior domain knowledge and leads to suboptimal solutions. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure/teaser.pdf} \caption{A conceptual overview of counterfactual prompt learning. CPL constructs counterfactuals by identifying the non-spurious feature change that causally causes the prompt change. In this case, the ``\textcolor{orange}{barn}'' feature is the essential cause between Prompt \textbf{A} and \textbf{B}.} \label{fig:teaser} \end{figure} Prompt tuning~\cite{prompt_tuning}, on the other hand, liberates us from manual prompt engineering and automates this process. Prompt tuning methods~\cite{promptingvl,coop,cocoop} are proposed to effectively transfer CLIP to image recognition tasks after tuning a learnable prompt with a few examples of the classes. However, those methods purely conduct empirical risk minimization (ERM) and optimize for predictive accuracy, which often produces spurious, inefficient, or entangled representations~\cite{wang2021desiderata}. Therefore, the generalization ability of existing prompt tuning methods for vision and language models is limited, and they often fail to transfer well to unseen classes or concepts. For example, the image classification performance of the SOTA method CoCoOp~\cite{cocoop} on unseen classes is similar to, or even degrades below, that of zero-shot CLIP. Learning non-spurious representations for better generalization requires disentangling the features that causally determine the prompts. One solution is counterfactual reasoning. Counterfactual (``counter to the facts'') is a concept that describes the human capacity to learn from limited prior experiences by imagining the outcome of an alternative action that could have been taken. So we can perform counterfactual intervention by asking ``what if ...'' questions in prompt learning. For example, as shown in Figure~\ref{fig:teaser}, a change in the visual feature of the barn would cause the label to change (if we view the two prompts as two labels). Therefore, we introduce a new causality-based approach, \underline{\textbf{C}}ounterfactual \underline{\textbf{P}}rompt \underline{\textbf{L}}earning (CPL), for non-spurious and efficient prompt learning. First, we introduce a text-based negative sampling strategy to discover the most semantically-similar negative sample based on text similarity. Then we generate a counterfactual example by identifying the minimal non-spurious feature change between semantically-similar positive and negative samples that causally causes the prompt change. Finally, we adopt contrastive learning in a joint optimization framework (with counterfactual construction) to tune the learnable prompts using both factual and counterfactual examples.
The causally fine-tuned prompts will eventually guide vision-and-language foundation models to distinguish images from unseen concepts, thereby improving the generalization ability of prompt learning. We extensively evaluate CPL using seven standard datasets for image classification, two for image-text retrieval, and one for visual question answering (VQA). We show that CPL outperforms the baseline on all three tasks: on image classification, our method achieves $3.55\%$ average relative improvement on unseen classes across the seven datasets in terms of accuracy; on image-text retrieval, our method improves the most ($4.09\%$ relative improvement in terms of Recall@1) when using $0.5\%$ of total training instances on MSCOCO~\cite{coco} and Flickr30K~\cite{flickr}; on VQA, we gain up to $25.08\%$ relative improvement on the VQAv2~\cite{vqav2} dataset. Our main contributions are summarized below: \begin{itemize} \item We introduce \underline{\textbf{C}}ounterfactual \underline{\textbf{P}}rompt \underline{\textbf{L}}earning (CPL), a task-agnostic causality-based prompt learning method to effectively transfer CLIP to unseen concepts for different downstream tasks. \item We propose a text-based negative sampling strategy, where we compute BERTScore~\cite{zhang2019bertscore} between text prompts, based on which we sample the most semantically-similar negative images. \item We introduce an optimization framework that simultaneously constructs counterfactuals by identifying minimal non-spurious feature change, and learns the generalized prompt representation from both factual and counterfactual examples. \item We conduct extensive experiments on image classification, image-text retrieval, and visual question answering, and validate the superiority of CPL over existing prompt tuning methods in transferring to unseen concepts. \end{itemize} \section{Related Work} \paragraph{Vision-and-Language Models.~} Vision-and-Language models pre-trained on large-scale image-text pairs have demonstrated great potential in multimodal representation learning~\cite{align,flip,florence}. Among them, the representative CLIP~\cite{clip} benefits from 400M curated image-text pairs and defines various prompt templates to carry out zero-shot image classification. However, those prompts still require hand-crafted designs. In this work, we automatically learn task-agnostic and task-relevant prompts without human priors. In addition, by considering the counterfactual examples, we can further improve various vision-and-language tasks, including visual question answering and image-text retrieval in a few-shot scenario. \paragraph{Prompt Tuning.~} Many works focus on learning from discrete natural language prompts, e.g., AutoPrompt~\cite{shin2020autoprompt} elicits knowledge from language models with automatically generated discrete prompts. Lately, many other works~\cite{coop, cocoop} directly tune prompts in continuous vector forms. ~\citet{prompt_q_learning} introduces Q-Learning to optimize the soft prompt. P-Tuning v2~\cite{ptuningv2} shows that continuous prompt tuning achieves the same performance as fine-tuning in various settings. Prompt tuning also receives great interest in the computer vision domain. For example, CoOp~\cite{coop} proposes a continuous prompt optimization strategy to avoid prompt design. CoCoOp~\cite{cocoop} extends CoOp by further learning an instance-conditional network to generate an input-conditional token for each image.
However, these methods trained with empirical risk minimization (ERM) may learn to rely on correlations between class labels and spurious attributes by minimizing average training error~\cite{correct_contrast}. They usually learn spurious, inefficient, and entangled representations, lacking generalization ability to unseen scenarios. \paragraph{Counterfactual Reasoning.~} A number of recent works have investigated generating counterfactual images \cite{besserve2020counterfactuals}, or counterfactual text in specific language domains (e.g., court view~\cite{wu2020biased}, dialogue generation~\cite{zhu2020counterfactual}, Natural Language Inference~\cite{kaushik2019learning, semantically_robust_optimization}, named entity recognition~\cite{zeng2020counterfactual}); On the vision end, ~\citet{causal_pose_estimator} proposes to add intervention over the changed domain of images during the data-generation process and steer the generative model to produce counterfactual features to augment the training process. ~\citet{causalvqa} uses automated semantic image manipulations to generate synthetic data to make models more robust against spurious correlations; On the vision and language end, ~\citet{counterfactual_vqa} proposes to generate counterfactual VQA samples by masking critical objects in images or words in questions to augment the training data, gaining a huge improvement on the VQAv2 dataset. ~\citet{mutant} proposes template-based counterfactual image augmentation methods. ~\citet{counterfactual_vln} proposes a novel training strategy for visual language navigation that dynamically generates counterfactuals to account for unseen scenarios. To our best knowledge, CPL is the first to apply counterfactual generation to prompt-based few-shot learning for vision and language models. \paragraph{Few-shot Learning.} Recently, many few-shot and efficient learning methods on vision~\cite{PEViT} and language~\cite{efficient_language_learning} tasks have been widely studied. At the same time, like CLIP, several different few-shot learners have been proposed. GPT~\cite{gpt3}, as a strong few-shot learner, is capable of performing a new language task by learning from only a few training instances. Frozen~\cite{frozen} is developed based on GPT and made into a multimodal few-shot learner by expanding the soft prompting to include a collection of images and text. Their method demonstrates strong few-shot capabilities on visual question answering and image classification tasks. Similarly, CoCa~\cite{coca} is pre-trained from scratch and end-to-end using both web-scale data and annotated images by considering all labels as text, therefore unifying supervision for learning representations through natural language. It can achieve state-of-the-art performance with few-shot transfer or by minimal task-specific adaptation on a wide range of downstream vision-and-language tasks, including visual recognition, multimodal understanding, crossmodal retrieval, and image captioning. SimVLM~\cite{simvlm} is pre-trained with prefix language modeling on datasets with weak supervision. It exhibits its efficacy on few-shot captioning tasks. Even though all these models can already achieve improvements on some few-shot tasks, how to exploit their few-shot reasoning ability using limited training examples still deserves further effort. In this work, we study this direction through the lens of prompt learning, utilizing CLIP as a starting point.
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Figure/overview2.png} \caption{The counterfactual prompt learning framework. We freeze the vision encoder $F$ and the text encoder $G$, and only optimize the task-agnostic prompts and the instance-conditioned net $M$ (blue blocks). Please refer to Section~\ref{sec:overview} for the explanation. } \label{fig:overview} \end{figure*} \section{Counterfactual Prompt Learning} \label{sec:method} \subsection{Problem Formulation} Our goal is to learn generalizable prompt representations with limited data. The prompt in CLIP is divided into two parts: the task-agnostic prompt $\boldsymbol{p}$ and the task-relevant prompt $\boldsymbol{h}$. The task-agnostic prompt $\boldsymbol{p}$ is learned end-to-end automatically. The set of task-relevant prompts $\mathbb{H}=\left\{\boldsymbol{h}_{1}, \boldsymbol{h}_{2}, \ldots, \boldsymbol{h}_{C}\right\}$ is mapped from the label space $\mathbb{Y}$ with some predefined rules hinging on the task type, where $C$ is the total number of classes. The final prompt $\boldsymbol{t}_c$ is the concatenation of the task-agnostic prompt and the task-relevant prompt fed into CLIP's text encoder: $\boldsymbol{t}_{c}=[\boldsymbol{p}, \boldsymbol{h}_{c}]$. Existing works on this problem~\cite{coop,cocoop} propose to first extract the visual feature $\boldsymbol{v}$ of each input image by feeding it into CLIP’s vision encoder $F$; text embeddings are then generated by feeding $\left\{\boldsymbol{t}_{c}\right\}_{c=1}^{C}$ into CLIP’s text encoder $G$. The probability of the $i$-th class is computed as \begin{equation} p(\boldsymbol{t}_{i}\mid \boldsymbol{x})=\frac{e^{<G\left(\boldsymbol{t}_{i}\right), \boldsymbol{v}>/\tau}}{\sum_{c=1}^{C} e^{<G\left(\boldsymbol{t}_{c}\right), \boldsymbol{v}>/\tau}}, \label{eq:1} \end{equation} where $\tau$ is the temperature parameter and $<\cdot,\cdot>$ denotes the cosine similarity. The cross-entropy loss is then minimized and the gradients can be back-propagated via the text encoder $G$ to update the learnable prompt representation $\boldsymbol{p}$. During training, the weights of CLIP always remain frozen. During inference, Eq.~\ref{eq:1} is used to compute the probability for each class. \subsection{Method Overview}\label{sec:overview} An overview of the Counterfactual Prompt Learning (CPL) framework is shown in Figure~\ref{fig:overview}. For pre-processing, we construct task-relevant prompts for all training samples. The goal is to optimize the task-agnostic prompt $\boldsymbol{p}$.\footnote{Together with the instance-conditional net $\boldsymbol{M}$ as introduced in \citet{cocoop}. For simplicity, we will only use $\boldsymbol{p}$ hereafter as $\boldsymbol{p}$ and $\boldsymbol{M}$ are always optimized together.} During training, given a positive image-prompt pair, we first perform \emph{text-based negative sampling} to find the most semantically-similar negative sample based on text similarity scores. Then we adopt a \emph{controllable counterfactual generation} strategy to construct the counterfactual from the positive and negative samples in the visual feature space. Finally, we perform contrastive learning using both generated counterfactual image features and factual image features in a joint optimization framework to fine-tune the task-agnostic prompt $\boldsymbol{p}$, allowing the model to understand non-spurious semantic information and learn generalized prompt representations.
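Before detailing each component, here is a minimal PyTorch sketch of the CLIP-style prediction in Eq.~\ref{eq:1} (tensor names, shapes and the temperature value are our illustrative assumptions, not the authors' released code): class probabilities are softmax-normalized cosine similarities between the image feature and the encoded prompts.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_probs(v, text_feats, tau=0.07):
    # v: (d,) image feature; text_feats: (C, d) encoded prompts G(t_c).
    # Returns a length-C vector of class probabilities as in Eq. (1).
    v = F.normalize(v, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = text_feats @ v / tau   # cosine similarities over tau
    return logits.softmax(dim=-1)

# Toy usage with random features.
d, C = 512, 10
probs = class_probs(torch.randn(d), torch.randn(C, d))
print(probs.sum())  # ~1.0
\end{verbatim}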
\subsection{Controllable Counterfactual Generation}\label{sec:generation} Viewing the image feature $\boldsymbol{v}$ as a potential cause of the label, a non-spurious feature should be a sufficient cause of the label. We therefore generate counterfactuals by identifying the minimal non-spurious feature change that causes the label change. The counterfactual construction process is illustrated in Figure~\ref{fig:generation}. Given positive image features $\boldsymbol{v}$ and negative image features $\boldsymbol{v}^{-}$, we generate negative counterfactual image features $\boldsymbol{v'}$ as follows: \begin{equation} \boldsymbol{v'} =(1-\mathbf{u}) \circ \boldsymbol{v} + \mathbf{u}\circ \boldsymbol{v}^{-}, \label{generation} \end{equation} where $\circ$ is the element-wise multiplication and $\mathbf{u}$ is the parameter controlling the amount of negative image feature that replaces the positive image feature. The negative image features are extracted from images similar to the original image at the semantic level, as introduced in Section~\ref{sec:nagative_sampling}. To capture the non-spuriousness, we construct counterfactuals by replacing only essential non-spurious features. This is achieved by minimizing the amount of feature change $\mathbf{u}^*$ to the original image that can causally incur the label change: \begin{equation} \begin{array}{cl} \underset{ \mathbf{u}^*}{\operatorname{minimize}} &\|\mathbf{u}^*\|_{1} \\ \text { s.t. } & \mathbf{u}^*=\underset{\mathbf{u}}{\arg\max}\, D_{c^-}(\boldsymbol{v'}). \end{array} \label{eq:min-u} \end{equation} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure/generation.pdf} \caption{Counterfactual generation process. $\boldsymbol{v}$ and $c$ are the positive image feature and label, while $\boldsymbol{v}^-$ and $c^-$ are the negative image feature and label. $\circ$ is element-wise multiplication. By mixing $\boldsymbol{v}$ and $\boldsymbol{v}^-$, the counterfactual image feature $\boldsymbol{v'}$ is predicted as the negative label $c^-$ by the discriminator $D$. $\|\mathbf{u}\|_1$ is minimized, so that the minimal change to the positive image feature that causally changes the label is captured.} \label{fig:generation} \end{figure} Given the factual and counterfactual features $\boldsymbol{v}$ and $\boldsymbol{v'}$, we aim to learn a prompt that helps CLIP better align visual features $\boldsymbol{v}$ and textual features $G(\boldsymbol{t})$ with the same semantic meaning. This can be achieved by maximizing the mutual information (MI) between $\boldsymbol{v}$ and $G(\boldsymbol{t})$: by minimizing the InfoNCE loss~\cite{infonce}, we maximize a lower bound on ${\rm MI}(\boldsymbol{v},G(\boldsymbol{t}))$. To this end, we define the contrastive objective function based on the InfoNCE estimator following~\citet{supervised_contrastive_learning}: \begin{equation} \mathcal{L}_{CL}(\boldsymbol{p}, \mathbf{u}^*) = -\log\frac{\exp\left(S(\boldsymbol{v},G(\boldsymbol{t}))/\tau\right)}{ \exp\left(S(\boldsymbol{v}, G(\boldsymbol{t}))/\tau\right)+ \exp\left(S(\boldsymbol{v'}, G(\boldsymbol{t}))/\tau\right)}, \label{eq:cl} \end{equation} where $S\left(\cdot, \cdot\right)$ is typically the cosine similarity function and $\tau$ is the temperature value. \subsection{Text-based Negative Sampling} \label{sec:nagative_sampling} We now discuss how to perform negative sampling for constructing counterfactual features.
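The mixing of Eq.~\ref{generation} and the contrastive objective of Eq.~\ref{eq:cl} translate directly into code. The sketch below is our paraphrase of these equations, not the authors' implementation; in practice the mask \texttt{u} would be optimized jointly with the prompt (Section~\ref{sec:optimization}).
\begin{verbatim}
import torch

def counterfactual_features(v, v_neg, u):
    # v' = (1 - u) o v + u o v_neg   (element-wise mixing, Eq. 2)
    return (1.0 - u) * v + u * v_neg

def contrastive_loss(v, v_cf, g_t, tau=0.07):
    # L_CL = -log[ exp(S(v, G(t))/tau) /
    #              (exp(S(v, G(t))/tau) + exp(S(v', G(t))/tau)) ]
    s_pos = torch.cosine_similarity(v, g_t, dim=-1) / tau
    s_neg = torch.cosine_similarity(v_cf, g_t, dim=-1) / tau
    return -(s_pos - torch.logsumexp(torch.stack([s_pos, s_neg]), dim=0))
\end{verbatim}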
As suggested in~\citet{contrastive_learning_hard_sampling}, good negative samples have different labels and are difficult to distinguish from an anchor point, while their semantic representations are close~\citep{suresh2021not}. Since not all negative samples are useful negatives~\cite{chuang2020debiased}, indiscriminately leveraging them may harm model robustness and algorithmic efficiency. Therefore, during training, within each batch we only utilize the most semantically similar sample to generate counterfactual image features; the other samples are filtered out. Semantic concepts can be highly entangled in visual representations, which makes it hard to measure semantic similarity directly in the visual space. Language, in contrast, is more expressive and naturally preserves semantic meaning. Therefore, we propose a text-based negative sampling method. We first measure the text similarity between prompts with BERTScore~\cite{zhang2019bertscore}, which computes pairwise cosine similarity between reference sentences and candidate sentences using BERT contextual embeddings~\citep{devlin-etal-2019-bert}. We compute a similarity matrix whose elements are given by \begin{equation} \operatorname{sim}({i, j}) = \operatorname{BERTScore}(\boldsymbol{h}_i, \boldsymbol{h}_j). \label{similarity} \end{equation} Denote by $\mathcal{B}$ the collection of sampled instances. During training, each prompt $\boldsymbol{h}_c\in \mathcal{B}$ ($1 \leq c \leq C$, where $C$ is the number of sampled instances) can be treated as a query. Given a query prompt $\boldsymbol{h}_q$, its most semantically similar prompt (the one with the highest BERTScore) $\boldsymbol{h}_k$ is retrieved from $\mathcal{B}$. Then we use the CLIP vision encoder to obtain the features of the corresponding positive and negative images, $\boldsymbol{v}$ and $\boldsymbol{v}^{-}$. \subsection{Joint Optimization}\label{sec:optimization} In addition to the contrastive learning loss introduced in Eq.~\ref{eq:cl}, we also adopt the standard cross-entropy loss for training: \begin{equation} \mathcal{L}_{\mathrm{CE}}(\boldsymbol{p})=-\sum_{c} \boldsymbol{y}_{c} \log p\left(\boldsymbol{t}_{c} \mid \boldsymbol{x}\right), \label{eq:ce} \end{equation} where $\boldsymbol{y}_c$ denotes the one-hot ground-truth annotation of the label. We treat all downstream tasks in this work as classification tasks, where the model predicts whether an image and a text prompt are matched. The task-agnostic prompt $\boldsymbol{p}$ is then learned by minimizing the weighted combination of the contrastive learning loss and the cross-entropy loss: \begin{equation} \mathcal{L}(\boldsymbol{p})= \mathcal{L}_{CE}(\boldsymbol{p})+\lambda \cdot \mathcal{L}_{CL}(\boldsymbol{p}, \mathbf{u}^*), \label{eq:loss} \end{equation} where $\lambda$ determines the weight of $\mathcal{L}_{CL}$.
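Before presenting the full algorithm, we sketch the text-based negative sampling step of Section~\ref{sec:nagative_sampling} using the public \texttt{bert-score} package; the pairing logic is our reading of Eq.~\ref{similarity}, not the released code.
\begin{verbatim}
from bert_score import score  # pip install bert-score

def hardest_negative_indices(prompts):
    """For each prompt, return the index of the most semantically
    similar *other* prompt in the batch (highest BERTScore F1)."""
    n, best = len(prompts), []
    for q in range(n):
        cands = [prompts[k] for k in range(n) if k != q]
        _, _, f1 = score(cands, [prompts[q]] * (n - 1), lang="en")
        k = int(f1.argmax())
        best.append(k if k < q else k + 1)  # undo the index shift
    return best
\end{verbatim}
In practice one would precompute the full similarity matrix of Eq.~\ref{similarity} once per batch rather than re-scoring candidate pairs for every query.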
\begin{algorithm}[t] \begin{small} \caption{Counterfactual Prompt Learning} \label{vqa_alg} \begin{algorithmic}[1] \State $\mathbb{X}$: image space \State $\mathbb{Y}$: label space \State $\boldsymbol{h}_{c}$: task-relevant prompt for the $c$-th class \State $\mathbb{H}$: the set of task-relevant prompts \State $\boldsymbol{p}$: the task-agnostic prompt \State $\boldsymbol{v}$: image features \State $\boldsymbol{v}^-$: negative image features \State $\mathbf{u}$: parameter controlling the generation of counterfactual image features \Function {$\mathcal{\textcolor{blue}{CPL}}$}{$\mathbb{X}, \mathbb{Y}$} \State $\mathbb{H}\leftarrow \mathbb{Y}$ \State $\boldsymbol{t}_{c}\leftarrow[\boldsymbol{p}, \boldsymbol{h}_{c}]$ \For{each $i,j$} \State $\operatorname{sim}({i, j}) = \operatorname{BERTScore}(\boldsymbol{h}_i, \boldsymbol{h}_j)$~\Comment{Eq.~\ref{similarity}} \EndFor \For{$q$ in the batch} \State $\boldsymbol{v} \leftarrow \boldsymbol{v_q}$ \State Find the index $k$ that maximizes $\operatorname{sim}({q, k})$ for the given index $q$ \State $\boldsymbol{v}^- \leftarrow \boldsymbol{v_k}$ \State Generate counterfactual image features \label{step:generation} \Comment{Eq.~\ref{generation}} \State $\mathcal{L}_{CE} \leftarrow$ cross-entropy loss~\Comment{Eq.~\ref{eq:ce}} \State $\mathcal{L}_{CL} \leftarrow$ contrastive loss~\Comment{Eq.~\ref{eq:cl}} \State Update $\boldsymbol{p}$ and $\mathbf{u}$ with the joint optimization loss~\Comment{Eq.~\ref{eq:loss}} \EndFor \label{step:optimizing} \EndFunction \end{algorithmic} \end{small} \end{algorithm} In fact, we can combine Eq.~\ref{eq:min-u} and Eq.~\ref{eq:loss} into a single-stage optimization framework. The intuition is that we generate counterfactual image features with the minimal feature change that maximizes the negative prediction probability, and at the same time utilize contrastive learning to learn the prompt that guides CLIP to explicitly distinguish between factual and counterfactual images. Putting all pieces together, we have \begin{equation} \begin{array}{cl} \underset{\boldsymbol{p}, \mathbf{u}^*}{\operatorname{minimize}} & \mathcal{L}_{CE}(\boldsymbol{p}) +\lambda \cdot \mathcal{L}_{CL}(\boldsymbol{p}, \mathbf{u}^*) + \|\mathbf{u}^*\|_{1} \\ \text { s.t. } & \mathbf{u}^*=\underset{\mathbf{u}}{\arg\max}\, D_{c^-}(\boldsymbol{v'}) \\ \text{where ~} \boldsymbol{v'} &= (1-\mathbf{u}) \circ \boldsymbol{v} + \mathbf{u}\circ \boldsymbol{v}^{-}. \end{array} \label{eq:6} \end{equation} In Eq.~\ref{eq:6}, the gradients can be back-propagated all the way through the text encoder $G$ to the task-agnostic prompt, making use of the rich knowledge encoded in the pre-trained CLIP model to optimize the prompt. Algorithm~\ref{vqa_alg} presents the learning algorithm of CPL. In summary, given a few input training samples $\left\{\left(x_{1}, y_{1}\right), \ldots,\left(x_{n}, y_{n}\right)\right\}$, CPL consists of three main steps: (1) compute the similarity matrix between different text prompts within the sampled batch; (2) generate counterfactual image features; (3) optimize $\boldsymbol{p}$ and $\mathbf{u}$ with the contrastive learning loss and the cross-entropy loss. \subsection{Task-relevant Prompt Construction}\label{sec:task-relevant} We construct task-relevant prompts $\mathbb{H}$ for image classification, image-text retrieval, and visual question answering, respectively.
For image classification, the prompts are the class labels for each task; for image-text retrieval, the captions of each image are adopted as prompts; for visual question answering, we first use a pre-trained generative T5 model~\cite{t5} to convert the question-answer pairs into declarative sentences, following the VQA prompt generation method proposed in~\citet{vqa_prompt}. Then, motivated by~\citet{chain_of_thought}, we add category information, generated from templates based on the question type, into the prompt to help the model perform intermediate reasoning steps. Specifically, we add ``The question is asking about others'' for \emph{Other} questions before the generated declarative sentence. In a similar vein, ``The question is asking about yes or no'' and ``The question is asking about numbers'' are added for \emph{Yes/No} and \emph{Number} questions. \begin{table*}[t] \resizebox{\linewidth}{!}{ \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{llllllllll} \toprule Classes &Method & SUN397& Caltech101 & ImageNet & OxfordPets & StanfordCars & Flowers102 & Food101 & Average\\ \midrule \multirow{3}{*}{Seen} & CLIP & 69.40& 96.51& 72.46& 91.33 & 74.85 & 72.17 & 90.12& 80.98\\ & CoCoOp & 79.08 \inc{13.95}& 97.66 \inc{1.19}& 76.01 \inc{4.90}& 95.18 \inc{4.22}& 70.91 \diff{-5.26} & \textbf{94.65} \inc{31.15}& 90.67 \inc{0.61} & 86.31 \inc{6.58}\\ & CPL (ours)& \textbf{81.05} \inc{16.79}& \textbf{97.70} \inc{1.23}& \textbf{78.81} \inc{8.76} &\textbf{96.69} \inc{5.87} & \textbf{75.51} \inc{0.88} & 93.91 \inc{30.12} & \textbf{93.01} \inc{3.21} & \textbf{88.10} \inc{8.79} \\ \midrule \multirow{3}{*}{Unseen} & CLIP & 75.40& 94.10& 68.09& 97.04& 74.95& \textbf{77.87}& 91.30&82.68\\ & CoCoOp & 76.83 \inc{1.90} & 93.92 \diff{-0.19} & 70.44 \inc{3.45} &97.78 \inc{0.76} & 73.09 \diff{-2.48} & 69.24 \diff{-11.08} & 91.53 \inc{0.25} &81.83 \diff{-1.02} \\ & CPL (ours) & \textbf{80.19} \inc{6.35} &\textbf{94.94} \inc{0.89} & \textbf{73.17} \inc{7.46} & \textbf{98.81} \inc{1.82} & \textbf{78.90} \inc{5.27} & 72.30 \diff{-7.15} & \textbf{93.44} \inc{2.34} &\textbf{84.54} \inc{2.25} \\ \bottomrule \end{tabular}} \caption{Result comparison between CPL and CoCoOp~\cite{cocoop} on seen and unseen classes across seven image classification datasets in terms of accuracy (\%) under the few-shot setting. The relative difference (\%) compared with CLIP is reported in color. } \label{tab:classification} \end{table*} \begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{lllll} \toprule Training data used & Method & {Flickr30k}& {MSCOCO}& Average \\ \midrule 0 & CLIP & 83.00 & 53.35&68.18\\ \hline \multirow{2}{*}{0.5\%} & CoCoOp & 82.40 \diff{-0.72} &55.55 \inc{4.12}&68.98 \inc{1.17}\\ & CPL (ours) & \textbf{85.64} \inc{3.18} &\textbf{57.91} \inc{8.55}&\textbf{71.78} \inc{5.28}\\ \midrule \multirow{2}{*}{1\%} & CoCoOp & 84.80 \inc{2.17}&56.62 \inc{6.13}&70.71 \inc{3.71}\\ & CPL (ours) & \textbf{86.91} \inc{4.71} &\textbf{58.43} \inc{9.52}&\textbf{72.67} \inc{6.59}\\ \midrule \multirow{2}{*}{3\%} & CoCoOp & 85.90 \inc{3.49}&58.08 \inc{8.87}&71.99 \inc{5.59}\\ & CPL (ours) & \textbf{87.74} \inc{5.71} &\textbf{59.96} \inc{12.39}&\textbf{73.85} \inc{8.32}\\ \bottomrule \end{tabular}} \caption{Result comparison between CPL and CoCoOp on two image-text retrieval datasets, Flickr30k~\cite{flickr} and MSCOCO~\cite{coco}, on the unseen test sets in terms of Recall@1 (\%).
The relative difference (\%) over CLIP is reported in color.} \label{tab:ir} \end{table} \begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{lll} \toprule Training data used & Method & VQAv2 \\ \midrule 0 &{CLIP}&11.83\\ \midrule \multirow{3}{*}{0.5\%} & {CoCoOp}& 27.98 \inc{136.52}\\ & {CPL w/o. Category Information}& 31.68 \inc{167.79}\\ & {CPL }& \textbf{33.39} \inc{182.25} \\ \midrule \multirow{3}{*}{1\%} & {CoCoOp}& 28.51 \inc{141.00}\\ & {CPL w/o. Category Information}& 34.70 \inc{193.32}\\ & {CPL }& \textbf{35.66} \inc{201.44}\\ \midrule \multirow{3}{*}{3\%} & {CoCoOp}& 30.18 \inc{155.11}\\ & {CPL w/o. Category Information}& 35.41 \inc{199.32}\\ & {CPL }& \textbf{36.32} \inc{207.02}\\ \bottomrule \end{tabular}} \caption{Result comparison on the VQAv2 dataset~\cite{vqav2} in terms of accuracy (\%). The relative improvements over CLIP are reported in color. Incorporating category information into task-relevant prompts can further improve the performance.} \label{tab:vqa} \end{table} \section{Experiments} \subsection{Tasks and Datasets} \paragraph{Image Classification.~} We employ seven publicly available image classification datasets used in CLIP: SUN397~\cite{sun397}, Caltech101~\cite{caltech}, ImageNet~\cite{imagenet}, OxfordPets~\cite{oxfordpet}, StanfordCars~\cite{standfordcars}, Flowers102~\cite{flower}, and Food101~\cite{food101}. These datasets constitute a comprehensive benchmark covering a diverse set of vision tasks, including the classification of generic objects, fine-grained image recognition, action classification, etc. To evaluate the generalization ability of the methods, we split these datasets into seen and unseen classes; only images in the seen classes are used for training. The setting follows the few-shot evaluation protocol in CLIP, where we use 16 shots for training and the full test sets for testing. \paragraph{Image-Text Retrieval.~} We consider two datasets for image-text retrieval: MSCOCO~\cite{coco} and Flickr30K~\cite{flickr}. We adopt the widely used Karpathy split~\cite{karpathy2015deep} for both datasets, where MSCOCO contains 113K/5K/5K images for train/validation/test and Flickr30K contains 29K/1K/1K images for train/validation/test. We construct few-shot subsets for both CoCoOp and CPL by taking $0.5\%$, $1\%$, and $3\%$ of the training instances. We train the model on these subsets and evaluate its performance on the complete test set. We use Recall at 1 (R@1) as the default evaluation metric. \paragraph{Visual Question Answering.~} VQAv2~\cite{goyal} extends the VQA~\cite{vqa} dataset. The questions are categorized into three types: \emph{Number}, \emph{Yes/No}, and \emph{Other}. We set up the experiments following~\citet{bottom}, which treats visual question answering as a classification problem: for each question, the model picks the corresponding answer from a given set of predefined most frequent candidate answers and matches it with the image. The questions are first converted into a masked template using the pre-trained T5 model and predefined rules. The infilled templates, together with the questions, are turned into prompts that naturally connect questions and answers. The model then predicts whether a given prompt and image pair is matched. We construct the few-shot setting by taking $0.5\%$, $1\%$, and $3\%$ of the instances for training.
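To make the VQA prompt construction of Section~\ref{sec:task-relevant} concrete, the category-information step can be summarized in a few lines. The declarative conversion itself is performed by the pre-trained T5 model and is assumed here to have already produced \texttt{sentence}; the hypothetical example at the bottom is ours.
\begin{verbatim}
CATEGORY_HINT = {
    "other":  "The question is asking about others.",
    "yesno":  "The question is asking about yes or no.",
    "number": "The question is asking about numbers.",
}

def build_vqa_prompt(sentence: str, question_type: str) -> str:
    # Prepend the question-type hint to the declarative sentence
    return CATEGORY_HINT[question_type] + " " + sentence

# e.g. build_vqa_prompt("The color of the bus is red.", "other")
\end{verbatim}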
\subsection{Implementation Details} \paragraph{Baselines.~} We mainly compare CPL with CoCoOp~\cite{cocoop}, one of the earliest prompt tuning methods proposed for vision-and-language pre-trained models. CoCoOp considers each input image and injects learnable instance-aware tokens into the context vectors as the final prompt. For a fair comparison, both CPL and CoCoOp adopt CLIP~\cite{clip} as the pre-trained vision-and-language backbone and are compared with respect to their relative improvements over zero-shot CLIP. \paragraph{Prompt Tuning.~} The task-agnostic prompt is randomly initialized from a zero-mean Gaussian distribution with standard deviation $0.02$, and we set its length to $L=4$ by default. For vision-and-language tasks, in contrast to image classification, where an image is labeled by a category, the task-relevant prompts comprise more fine-grained details, usually a full sentence. We similarly tokenize the whole sentence using the CLIP word embedding~\cite{clip} and feed the tokenized results, together with the task-agnostic prompt vectors, into the text encoder to generate the language embedding for each prompt. In both image-text retrieval and visual question answering, all data in the test set can be treated as belonging to unseen classes. \subsection{Main Results} \paragraph{Image Classification.~} The experimental results for image classification are shown in Table~\ref{tab:classification}. With better prompts learned from counterfactual examples, our CPL method achieves clear advantages over CoCoOp for both seen and unseen classes across almost all datasets. Particularly on unseen classes, we gain an average relative improvement of $3.55\%$. Meanwhile, CoCoOp shows poor generalization ability: it performs worse than CLIP on StanfordCars for both seen and unseen classes, and on Caltech101 and Flowers102 for unseen classes, indicating that it tends to learn and leverage spurious correlations and cannot always generalize well to unseen classes. We believe these results provide strong evidence that the main idea of CPL, namely that learning non-spurious prompt representations can aid CLIP in adapting at test time, is practical. \paragraph{Image-Text Retrieval.~} Table~\ref{tab:ir} reports results on image-text retrieval on the unseen test sets. CPL consistently beats zero-shot CLIP across the three different settings, demonstrating that CPL can also learn better prompt representations and more effectively exploit the limited amount of data for image-text retrieval. Meanwhile, CoCoOp performs even worse than CLIP on Flickr30k when using $0.5\%$ of the training data, which suggests that a tiny quantity of training data for image-text retrieval can lead to spurious prompt representations when using a naïve instance-conditional prompt tuning method. \paragraph{Visual Question Answering.~} For visual question answering, the results are shown in Table~\ref{tab:vqa}. As can be seen, CPL surpasses the baseline CoCoOp with a relative improvement of up to $25.08\%$ when using $1\%$ of the instances for training. This demonstrates that CPL can be effective on more complicated vision-and-language tasks. In fact, visual question answering is more challenging for zero-shot CLIP, which is pre-trained for image-text matching. During pre-training, CLIP mostly sees sentences similar to the captions in image-text retrieval, and those captions can be directly used as prompts; for VQA, in contrast, question-answer pairs have to be adapted into declarative prompts.
Therefore, zero-shot CLIP performs poorly on VQA, but few-shot prompt tuning via CPL can help reduce the prompt domain gap significantly. Apart from the vanilla CPL method, we also examined a variant of CPL in which no additional category information is added to the prompt (denoted as CPL w/o. Category Information); the results indicate that constructing task-relevant prompts with additional category information contributes to the improvement. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure/bertscore.pdf} \caption{Visualization of the weights of the controller parameter $\mathbf{u}$ on images. The first column shows the original positive examples; the second column shows BERT-sampled negative examples; the third column shows randomly-sampled negative examples for comparison. The BERTScore between the text prompts of the positive examples and the sampled examples is shown at the bottom. } \label{fig:visualize} \end{figure} \subsection{Ablation Analysis} \paragraph{Negative Sampling.~} We compare random sampling vs.\ BERTScore sampling on ImageNet for image classification, MSCOCO for image-text retrieval, and VQAv2 for visual question answering in Table~\ref{tab:sample}. With more challenging negative examples, BERTScore sampling leads to more effective prompt tuning and outperforms random sampling on all three tasks. Qualitative visualizations of the two sampling strategies are shown in Figure~\ref{fig:visualize}, from which it can be seen that BERTScore-sampled images are much more semantically similar to the original images. \paragraph{Non-spurious Feature Visualization.} We visualize the heatmap of the learned non-spurious feature weights at the image level in Figure~\ref{fig:visualize}. The weights are mainly concentrated on the semantically meaningful regions that are aligned with the text prompts. \begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{llll} \toprule Method & {ImageNet}& {MSCOCO}& VQAv2 \\ \midrule Random sampling & 75.28&57.78& 33.01 \\ BERTScore sampling &\textbf{76.02}&\textbf{58.43}&\textbf{35.66}\\ \bottomrule \end{tabular} } \caption{Random sampling vs.\ BERTScore sampling for CPL over three tasks. On ImageNet, we measure the average accuracy across seen and unseen classes. On MSCOCO and VQAv2, we use 1\% of the instances for few-shot learning.} \label{tab:sample} \end{table} \paragraph{Number of Shots in Image Classification.~} We then study the effect of the number of shots on CPL for image classification. Following the few-shot evaluation protocol adopted in CLIP, we use $4$, $8$, and $16$ shots for training on ImageNet. As shown in Figure~\ref{fig:shots}, increasing the number of shots keeps improving the performance of both methods on unseen classes. Meanwhile, CPL outperforms CoCoOp under all three settings and has lower standard errors. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{Figure/shots.pdf} \caption{Accuracy comparison on ImageNet~\cite{imagenet} unseen classes under three different shot settings. CPL consistently performs better than CoCoOp and has lower standard errors. } \label{fig:shots} \end{figure} \paragraph{Contribution of Contrastive Learning.} In Section~\ref{sec:method}, we use the coefficient $\lambda$ to weight the contrastive learning loss and combine it with the cross-entropy loss. Since the scale of the contrastive learning loss is smaller, we try a larger $\lambda$ to balance the two loss terms.
Figure~\ref{fig:lambda} shows the average accuracy across seen and unseen classes on the SUN397 dataset under four different $\lambda$ values. Note that when $\lambda$ is zero, there is no contribution from the contrastive loss and the method effectively learns the prompt using the standard cross-entropy loss alone. From the experimental results obtained on the SUN397 dataset, we observe that $\lambda = 1$ leads to the best performance. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{Figure/lambda.pdf} \caption{Ablation over four different $\lambda$ values on the SUN397 dataset in terms of average accuracy (\%). The performance of CPL peaks at $\lambda =1$.} \label{fig:lambda} \end{figure} \section{Conclusion} In this paper, we propose a Counterfactual Prompt Learning (CPL) framework to avoid time-consuming prompt engineering and to learn more generalizable prompt representations for vision and language models. We conduct extensive experiments on seven widely used image classification datasets, two image-text retrieval datasets, and one visual question answering dataset. Our proposed CPL method outperforms the previous prompt tuning baseline as well as zero-shot CLIP across the three tasks. In the future, we plan to develop more sophisticated methods based on CPL and to extend CPL to other vision and language tasks. \section*{Limitations} There are fairness issues in large pre-trained vision and language models such as CLIP. The prompt learning method proposed in this study automatically learns the prompt and does not address those issues in the pre-trained model. Considering that the method is designed for the few-shot setting, careful inspection and tuning are also needed when testing our method on other biased datasets. The methodologies proposed in~\citet{multimodal_fairness} and~\citet{clip_fairness} could possibly be paired with CPL to address these issues. Another limitation is the absence of explainability in CPL, which is a common problem with existing soft prompt tuning methods. Mapping tuned soft prompt representations back to natural language is one way to interpret them; however, due to the limited vocabulary used by CLIP during training, prior methods such as searching for the nearest words in the embedding space cannot accurately match the vectors to natural language. Expanding the dictionary size of the CLIP embedding or developing more advanced back-mapping techniques could possibly address this limitation. \section*{Acknowledgments} We would like to thank the support of the Google Ads Faculty Research Award. We also thank the anonymous reviewers for their thought-provoking comments. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the sponsor.
\section{Introduction} Quasi two-dimensional charge-transfer salts based on the organic molecule BEDT-TTF (where BEDT-TTF stands for bis-ethylenedithio-tetrathiafulvalene) provide a unique playground for studying the interaction of electronic and structural properties because they exhibit a rich phase diagram ranging from metals and superconductors to charge-ordered and Mott insulators, often combined with magnetic order but also spin-liquid properties \cite{ToyotaBook,LebedBook,MoriBook}. These compounds attract great attention in experimental and theoretical condensed-matter physics because their behavior can be tuned by chemical substitution or external pressure; thus certain properties can be strengthened or suppressed as desired. The wealth of possibilities is further enlarged by the fact that many of these compounds possess several polymorphic structures (labelled by Greek letters), often with quite different properties. Due to the D$_2$A stoichiometry (with D the organic donor and A the inorganic acceptor components), the conduction bands are nominally three-quarter filled, leading to charge-ordered ground states if the intersite Coulomb repulsion is strong enough, as observed in numerous $\alpha$- \cite{Wojciechowski03,Dressel03,Drichko06,Yue10,Ivek11}, $\beta^{\prime\prime}$- \cite{Yamamoto04,Yamamoto06,Yamamoto08,Kaiser10} and $\theta$-compounds \cite{HMori98}, for instance. In dimerized structures, such as the $\kappa$-phase, however, the bands are half-filled, with Mott physics playing the dominant role \cite{ToyotaBook,LebedBook} and typically no tendency towards charge order \cite{Sedlmeier12}. Among the weakly dimerized phases, the non-magnetic $\beta^{\prime\prime}$-compounds are of particular interest, as several species exhibit superconductivity at ambient conditions or under pressure \cite{HMori98}. Merino and McKenzie \cite{Merino01} predicted that a superconducting state could be realized by charge fluctuations when the long-range charge order, which commonly leads to an insulating state, is suppressed. The underlying principle differs from the superconducting mechanism found in the $\kappa$-phase, where magnetic fluctuations are of primary importance. Their suggestion was confirmed by different experimental methods, such as infrared spectroscopy \cite{Kaiser10} and NMR \cite{Kawamoto11}. For a better understanding of the superconducting mechanism, it is important to also study the charge-ordered state above $T_c$. Previously, comprehensive optical investigations have been conducted \cite{Yamamoto04,Yamamoto06,Yamamoto08,Kaiser10,Girlando14} on several $\beta^{\prime\prime}$-salts with different correlation strengths. Here we extend these efforts to the charge-ordered state in a new $\beta^{\prime\prime}$-salt, which we mainly explore by optical spectroscopy in order to understand the phase diagram of these compounds. Already in 1993 Lyubovskaya and collaborators \cite{Lyubovskaya93,Aldoshina93} synthesized the family of salts with the composition (BEDT-TTF)$_2$Hg(SCN)$_{3-n}$$X_n$ ($X$ = F, Cl, Br or I; $n$ = 1 or 2) that has drawn increasing attention recently. Among them \khgcl\ is of particular interest, as the compound exhibits a sharp metal-insulator transition around $T_{\rm CO}=30$~K without structural changes \cite{Yasin12,Drichko14,Lohle17}. It was also reported \cite{Lyubovskaya95} that in the process of synthesis and crystal growth a new salt of the same chemical composition but different structure and properties was obtained.
The present paper is devoted to the complete characterization of \bhgcl, using x-ray analysis, infrared and Raman spectroscopy, transport and ESR measurements. \section{Experimental Methods} Single crystals of $\beta^{\prime\prime}$-(BEDT-TTF)$_{2}$Hg(SCN)$_{2}$Cl were prepared by electrochemical oxidation of the BEDT-TTF molecules. These crystals are formed in the synthesis along with $\kappa$-(BEDT-TTF)$_2$Hg(SCN)$_2$Cl crystals \cite{Lyubovskaya93,Aldoshina93}. Typically, a 4~ml solution of BEDT-TTF (10~mg, 0.026~mmol) in 1,1,2-trichloroethane (TCE) was added to the anode compartment of the cell, and a 10~ml solution containing Hg(SCN)$_2$ (34.2~mg, 0.108~mmol), [Me$_4$N]SCN$\cdot$KCl (14.3~mg, 0.069~mmol), and dibenzo-18-crown-6 (32.1~mg, 0.09~mmol) in 12\%\ ethanol/TCE was added to the cathode compartment and the anode compartment, to level both sides. Electrocrystallization was carried out at a temperature of $40^\circ$C and a constant current of $0.5~\mu$A. The crystals of the $\beta^{\prime\prime}$- and $\kappa$-phases were unambiguously identified by electron spin resonance (ESR) spectroscopy ($\Delta H_{pp} = 25-38$~G and 60--90~G, respectively) or by the temperature of the transition to the insulating state. The resulting single crystals of the $\beta^{\prime\prime}$-phase are platelets with a typical size of $1 \times 1 \times 0.04$~mm$^{3}$; the larger face corresponds to the highly conducting ($ab$)-plane. X-ray diffraction data were collected at $T=293$~K using a Bruker Kappa Apex2duo diffractometer with Mo-K$_{\alpha}$ radiation ($\lambda = 0.71073$~{\AA}). The structure was solved by direct methods and refined by full-matrix least-squares techniques on $F^2$ employing the program system SHELX-97 \cite{Sheldrick08,remark1}. The temperature-dependent dc resistivity was measured by a standard four-point technique in a custom-made helium bath cryostat with cooling and warming rates of less than 0.3~K/min. For the current and voltage contacts, $15~\mu$m gold wires were glued directly to the sample with carbon paste; the voltage was fixed at 2~mV. Electron-spin-resonance (ESR) measurements were carried out utilizing a continuous-wave X-band spectrometer (Bruker ESR 300) equipped with an Oxford ESR 900 cryostat for temperature-dependent measurements between $T=10$ and 240~K. The sample was glued to a quartz rod with vacuum grease in such a way that the external magnetic field is oriented within the conducting plane. The signal could be fitted with Lorentzians; the relative paramagnetic susceptibility was estimated from the line intensity using $\chi\propto(\Delta H_{\rm pp})^{2}I_{\rm max}$, where $\Delta H_{\rm pp}$ is the peak-to-peak linewidth and $I_{\rm max}$ the intensity of the ESR signal. The Raman spectra of \bhgcl\ single crystals were recorded in backscattering geometry within the energy range of $1000-1500~{\rm cm}^{-1}$ with 1~\cm\ spectral resolution using a Horiba Jobin Yvon Labram HR 800 spectrometer equipped with a liquid-nitrogen-cooled CCD detector. The incident He-Ne laser beam ($\lambda = 632.8$~nm) was focused on the ($ab$)-plane of the salt, and the scattered beam was collected without polarization analysis. Temperature-dependent spectra were obtained by mounting the specimen on a continuous-flow helium cryostat and cooling down at a typical rate of 1~K/min.
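For reference, the ESR susceptibility estimate quoted above amounts to the following trivial computation; variable names are ours, and only relative values are meaningful.
\begin{verbatim}
def relative_spin_susceptibility(dH_pp, I_max):
    # chi is proportional to (Delta H_pp)^2 * I_max
    return dH_pp ** 2 * I_max
\end{verbatim}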
Polarized infrared reflectivity measurements were performed over a broad energy range (from 200 to 8000~\cm, 1~\cm\ resolution) from room temperature down to 10~K using a Bruker Vertex 80v Fourier-transform infrared spectrometer equipped with a microscope (Bruker Hyperion) and a CryoVac microcryostat. The absolute value of the reflectivity was obtained by comparison to an aluminum mirror. The optical conductivity was extracted from the reflectivity data via Kramers-Kronig transformation. At low frequencies, the data were extrapolated by the Hagen-Rubens relation for the metallic state and by a constant value for the insulating state, while the high-energy part was extended up to 500\,000~\cm\ using a free-electron model $R(\omega) \propto \omega^{-4}$. \section{Results and Discussion} \subsection{Crystal structure} \begin{figure} \centering \includegraphics[width=\columnwidth]{crystal3.eps} \caption{(Color online) Crystal structure of $\beta^{\prime\prime}$-(BEDT-TTF)$_{2}$Hg(SCN)$_{2}$Cl at room temperature. Unit-cell borders are marked with red dashed lines. (a)~Projection of the molecular arrangement along the $a$-axis illustrates the alternating cation and anion layers. (b)~Packing pattern of the BEDT-TTF molecules in the donor layer with two distinct molecules, hereafter designated as A and B. The A\,B\,B\,A\,A\,B\,B\,A-stacks basically form along the ($a-b$)-direction. (c)~Dimer arrangement of the Hg$_2$(SCN)$_{4}$Cl$_2$ unit in the anionic layer.} \label{fig:structure} \end{figure} The unit-cell parameters of \bhgcl\ at room temperature are summarized in Table~\ref{tab}. \begin{table}[b] \caption{Crystallographic data for $\beta^{\prime\prime}$-(BEDT-TTF)$_{2}$\-Hg(SCN)$_{2}$Cl obtained from x-ray diffraction ($\lambda = 0.71073$~\AA) at ambient conditions ($T=293$~K) \cite{remark1}. For comparison we also list the data for the deuterated analogue \cite{Dyachenko95}.} \begin{tabular}{lll} Chemical formula & C$_{22}$H$_{16}$S$_{18}$N$_{2}$HgCl~~ & C$_{22}$D$_{16}$S$_{18}$N$_{2}$HgCl \\ Form. weight $M_{W}$ & 1121.49 & 1137.59 \\ Crystal system & triclinic & triclinic\\ Space group & P$\overline{1}$ & P$\overline{1}$ \\ $a$ (\AA{}) & 9.568(2) & 9.717(3) \\ $b$ (\AA{}) & 10.778(2) & 11.067(3) \\ $c$ (\AA{}) &18.844(4) & 19.348(5) \\ $\alpha$ (deg.) & 90.60(3) & 77.69(2) \\ $\beta$ (deg.) &102.13(3) & 106.90(2)\\ $\gamma$ (deg.) & 113.66(3)& 114.28(3) \\ Volume $V$ (\AA$^{3}$) & 1730.4(6) & 1804.2(7)\\ $Z$ & 2 & 2 \\ Density $D_{c}$ (g/cm$^{3}$) & 2.152 & 2.11 \\ Absorption coeff. & & \\ ~~~$\mu$ (mm$^{-1}$) & 5.635 & 5.4 \\ F(000)& 1094 & 1094 \\ No.\ of refl.\ meas.\ & 41480 & \\ No.\ of indep.\ refl.\ & 9989 & 3784 \\ $R_1$ & 0.0216 & 0.05 \\ $\omega R_2$ & 0.0556 & 0.05 \\ GOF & 0.940 & \\ \end{tabular} \label{tab} \end{table} A deuterated sister compound (D$_8$-BEDT-TTF)$_{4}$[Hg(SCN)$_{2}$Cl]$_2$ with a $\beta$-type packing was prepared previously \cite{Dyachenko95,Lyubovskaya95,Yudanova94}. As common for these charge-transfer salts, layers of BEDT-TTF radical cations and Hg(SCN)$_{2}$Cl anions are alternately stacked along the crystallographic $c$-axis, as shown in Fig.~\ref{fig:structure}(a). Within the cationic layers, the arrangement of the BEDT-TTF molecules is best described by the $\beta^{\prime\prime}_{412}$-type packing motif \cite{TMori98} with tilted stacks along the ($a$-$b$)-axis.
As illustrated in Fig.~\ref{fig:structure}(b), two crystallographically non-equivalent BEDT-TTF molecules (labelled A and B) can be distinguished, with planar and nonplanar structures of the TTF fragments, respectively. The distances between (A,A) and (B,B) are similar but slightly longer than that of (A,B), causing a weak structural dimerization. Since neutral TTF fragments are commonly bent while planar TTF fragments are charged, this might suggest that in the present case the charge is distributed non-uniformly. The anionic layers consist of doubly charged [Hg$_{2}$(SCN)$_{4}$Cl$_{2}$]$^{2-}$ dimers, as shown in Fig.~\ref{fig:structure}(c). This is distinctly different from the singly charged monomeric anions [Hg(SCN)$_{2}$Cl]$^{-}$ present in the $\kappa$-phase analogue \cite{Drichko14}. \subsection{Transport Properties} Fig.~\ref{fig:dc}(a) presents the temperature dependence of the electrical resistivity $\rho(T)$ of \bhgcl\ measured within the highly conducting ($ab$)-plane. At room temperature the conductivity is around $2~(\Omega{\rm cm})^{-1}$, which is a typical value for organic conductors. From $T=300$ down to 150~K this compound shows a weakly metallic behavior, before the resistivity starts to increase slightly. Comparable transition temperatures have also been reported in other charge-ordered insulators such as $\beta^{\prime\prime}$-(BEDT-TTF)$_{3}$[(H$_{3}$O)Ga(C$_{2}$O$_{4}$)$_{3}$]C$_{6}$H$_{5}$NO$_{2}$ \cite{Yamamoto04}, $\beta^{\prime\prime}$-(BEDT-TTF)$_{4}$(ReO$_{4}$)$_{2}$ \cite{Ihara16} and $\theta$-(BEDT-TTF)$_{2}$RbZn(SCN)$_{4}$ \cite{HMori98}. As the temperature drops below 72~K, $\rho(T)$ exhibits a steep jump and rises dramatically to a value of $10^7~\Omega$cm at $T=20$~K. The inflection point identified around 50~K may be related to ordering of the ethylene groups. The strong hysteresis observed between the cooling and warming cycles indicates that a structural distortion is involved in the metal-insulator transition. The transition temperature is best defined by the peak in the logarithmic derivative $-{\rm d}\,\ln\rho/{\rm d}T$ plotted versus $T^{-1}$ in the inset of Fig.~\ref{fig:dc}(b). From the Arrhenius plot $\rho(T)\propto \exp\{\Delta_\rho/k_B T\}$ [Fig.~\ref{fig:dc}(b)] the activation energy is estimated to be around $\Delta_\rho = 60$~meV below $T=60$~K and $\Delta_\rho = 170$~meV in the temperature range from 66 to 72~K; for the deuterated and hydrogenated salts an activation energy of 170~meV was previously reported \cite{Lyubovskaya95}. The lattice constants and volume of the deuterated sibling are expanded compared to \bhgcl; correspondingly, the temperature of the sharp metal-insulator transition increases to $T_{\rm CO}=86$~K \cite{Dyachenko95}. Via the interaction between the anion and the terminating ethylene groups of the BEDT-TTF molecules, deuteration basically acts as negative pressure. This implies an enhancement of the effective electron-electron correlations in \dhgcl, as the transfer integrals are smaller. In the case of one-dimensional organic conductors it was noted previously that the donor-anion interaction is important for establishing charge order \cite{Pouget15}. \begin{figure} \centering \includegraphics[width=\columnwidth]{dc2.eps} \caption{(Color online) (a)~Temperature dependence of the dc resistivity measured in the conducting ($ab$)-plane of \bhgcl. The inset magnifies the minimum in $\rho(T)$ around $T_{m}=150$~K. (b)~The resistivity plotted as a function of inverse temperature $1/T$.
The red straight line indicates an activation energy of $\Delta_\rho = 60$~meV, which corresponds to 700~K. The logarithmic resistivity derivative is plotted in the inset versus the inverse temperature; the dashed line indicates the charge-order transition at $T_{\rm CO}=72$~K. } \label{fig:dc} \end{figure} \subsection{Magnetic Properties} \label{sec:magnetic} In order to advance our understanding of the metal-insulator transition in \bhgcl, we performed ESR measurements as a function of temperature with the external magnetic field oriented parallel to the conducting ($ab$)-plane. In Fig.~\ref{fig:ESR} the spin susceptibility $\chi(T)$ and the linewidth $\Delta H(T)$ are plotted as a function of temperature; the spin susceptibility data are normalized to $\chi(T=240~{\rm K})$. Upon cooling, the spin susceptibility drops slightly at elevated temperatures. Around $T=150$~K the decrease becomes more rapid, indicating a semi-metallic behavior with a gradual opening of a gap, in accord with the minimum in the resistivity $\rho(T)$ illustrated in the inset of Fig.~\ref{fig:dc}(a). As the temperature is lowered even further, a non-magnetic insulating state is entered and the spin susceptibility vanishes rapidly as $T\rightarrow 0$. The temperature dependence of the linewidth is plotted in Fig.~\ref{fig:ESR}(b); it continuously decreases upon cooling, with a pronounced drop around 120~K and a saturation for $T<T_{\rm CO}$. This is consistent with other charge-ordered $\beta^{\prime\prime}$-compounds showing a non-magnetic ground state \cite{Kawamoto11,Schlueter01,Ward00,CARNEIRO84}. It is interesting to compare this behavior with the data for \khgcl, where an antiferromagnetic insulating ground state was observed \cite{Yasin12}. In the present case a spin-singlet state is realized at low temperatures, and the data $\chi(T)$ can be described with a one-dimensional singlet-triplet picture. From the corresponding Bulaevskii model \cite{Bulaevskii88}, we expect for the temperature dependence of the spin susceptibility \cite{Dumm00,Dumm11}: \begin{equation} \chi(T)\propto \frac{1}{T} \exp\left\{\frac{-\Delta_\sigma}{k_{B}T}\right\} \quad ; \label{eq:Bulaevskii} \end{equation} here $\Delta_{\sigma}$ is the static spin gap. From Fig.~\ref{fig:ESR} we can see that this simple model fits the experimental data of \bhgcl\ quite well with a spin gap of $\Delta_\sigma = 47$~meV, corresponding to $\Delta_\sigma/k_B = 550$~K. It is interesting to compare this value with the charge gap of similar size derived from the activated behavior of the electronic transport. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{esr3.eps} \caption{(Color online) (a)~Temperature dependence of the relative spin susceptibility of \bhgcl. The solid red line represents a fit of $\chi(T)$ to Bulaevskii's model of excitations across a singlet-triplet gap $\Delta_\sigma = 47$~meV. (b)~Temperature dependence of the linewidth $\Delta H_{pp}$ of the salt. } \label{fig:ESR} \end{figure} One should note that the spin susceptibility does not show a significant change at $T_{\rm CO}$; the behavior of Eq.~(\ref{eq:Bulaevskii}) extends smoothly well above 100~K. This provides strong evidence that the spin pairing sets in already around 150~K.
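As a minimal illustration of how the spin gap is extracted, Eq.~(\ref{eq:Bulaevskii}) can be fitted with a few lines of standard Python; the arrays \texttt{T\_data} and \texttt{chi\_data} are placeholders for the measured values of Fig.~\ref{fig:ESR}(a), and the prefactor $A$ absorbs the unknown proportionality constant.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-2  # Boltzmann constant in meV/K

def bulaevskii(T, A, gap_meV):
    # chi(T) ~ (A/T) * exp(-Delta_sigma / (k_B T)), Eq. (1)
    return A / T * np.exp(-gap_meV / (K_B * T))

# popt, _ = curve_fit(bulaevskii, T_data, chi_data, p0=(1.0, 50.0))
# popt[1] -> Delta_sigma in meV; we obtain about 47 meV, i.e. ~550 K
\end{verbatim}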
We will come to similar conclusions from the analysis of our vibrational investigations in Sec.~\ref{sec:vibrational}. \subsection{Optical Properties} \label{sec:reflectivity} Fig.~\ref{fig:ref} displays the normal-incidence optical reflectivity of \bhgcl\ obtained at room temperature with light polarized along the three principal directions. Perpendicular to the layers ($E \parallel c$) the reflectance is rather low and basically independent of frequency, resembling an insulator; the vibrational features around 1400~\cm\ will be discussed later in detail (Sec.~\ref{sec:vibrational}). \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{ref2.eps} \caption{(Color online) Optical reflectivity of \bhgcl\ measured at room temperature with light polarized along all three crystallographic axes as indicated. The extrapolations used below 200~\cm\ are indicated by dashed lines. } \label{fig:ref} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{cond4.eps} \caption{(Color online) (a,b)~Optical reflectivity and (c,d)~corresponding conductivity spectra of \bhgcl\ measured at different temperatures between $T = 300$ and 50~K for both polarizations perpendicular and parallel to the stacks, i.e.\ $E \parallel (a+b)$ and $E \parallel (a-b)$ \cite{remark2}. Note that the metal-insulator phase transition at $T_{\rm CO}=72$~K leads to a drastic change in the optical properties: the low-frequency reflectivity drops significantly and the Drude component vanishes in the $T=50$~K data (thick blue curves). In the inset the intensity evolution of the 1200~\cm\ mode is plotted in comparison to the frequency shift of the vibronic $\nu_{3}$(A$_g$) mode as the temperature is reduced. } \label{fig:cond} \end{figure*} Within the highly conducting layers a metallic reflectance is observed, which extrapolates to $R=1$ for $\omega \rightarrow 0$ at elevated temperatures. However, at $T=300$~K a reflection edge can be identified around 5000~\cm\ only for the polarization $E\perp {\rm stacks}$. The overall behavior resembles the spectra obtained for other BEDT-TTF salts with $\alpha$-, $\beta^{\prime\prime}$- or $\theta$-stacking patterns \cite{Dressel04}. The optical response is rather anisotropic within the quasi-two-dimensional conducting plane over a wide frequency range. As typical for these non-dimerized BEDT-TTF compounds, the reflectivity in the perpendicular direction is significantly larger than $R(\nu)$ measured along the stacks. This agrees with calculations of the electronic orbitals and band structure \cite{TMori98}, indicating that the transfer integrals between nearest-neighbor BEDT-TTF molecules are much larger in the interstack direction than along the stacks. Apart from the electronic contributions, several vibrational features can be identified in the mid-infrared spectral region. When probed within the conducting plane, these are attributed to the totally symmetric A$_{g}$ vibrational modes of the BEDT-TTF molecules coupled to electronic excitations through the electron-molecular vibration (emv) interaction \cite{Girlando86,Yartsev90,Girlando11}. The sharp peak at approximately 2100~\cm\ is the CN stretching mode of the (SCN)$^{-}$ entity in the anion layers \cite{Drichko14}. It does not change significantly with temperature. \subsubsection{Electronic contributions} \label{sec:electronic} The temperature-dependent reflectivity is plotted in Fig.~\ref{fig:cond}(a,b) for a wide frequency range.
With lowering $T$ a clear plasma edge develops for both polarizations, but at significantly different frequencies, expressing the in-plane anisotropy: at 3500~\cm\ for $E \parallel {\rm stacks}$ and around 5000~\cm\ perpendicular to it. The position of the edge shifts to higher frequencies upon cooling because the charge-carrier density increases as the lattice contracts. The overall reflectivity remains much lower than expected for a typical metal; in particular along the stacking axis, $R(\nu)$ does not exceed 0.5 at $\nu=1000$~\cm\ for $T>100$~K. Such a behavior is commonly observed for organic conductors \cite{Dressel04}, reflecting the low carrier density and the influence of electronic correlations. Nevertheless, for both orientations the reflectivity increases as the temperature is reduced, until the metal-insulator transition takes place around $T_{\rm CO} = 72$~K. At lower temperatures the metallic properties are lost: $R(\nu)$ drops substantially for $\nu < 1000$~\cm\ in both polarizations. Since the crystal becomes transparent at $T=20$~K, multiple reflections within the crystal lead to interference fringes that hamper a further analysis; thus we restrict ourselves to the $T=50$~K data \cite{remark2}. A better understanding of the electronic properties can be reached by looking at the temperature-dependent optical conductivity derived via the Kramers-Kronig analysis of the reflectivity spectra, displayed in Fig.~\ref{fig:cond}(c,d). For both polarizations a rather weak Drude-like contribution is observed at room temperature, together with a comparably wide feature in the mid-infrared region centered between 1000 and 2500~\cm. When cooling down to $T=100$~K, the spectral weight shifts to low frequencies and a narrow zero-frequency peak appears, indicating a weakening of correlations. This tendency is most pronounced for the highly conducting polarization $E \parallel (a+b)$; parallel to the stacks the spectral weight transferred from the higher-energy range (up to 1~eV) piles up in the far-infrared as well as in the mid-infrared range around 1200~\cm. Similar observations have been reported for other organic compounds with quarter-filled conduction bands \cite{Dressel94,Drichko06,Kaiser10,Hashimoto14} and were theoretically described by Merino {\it et al.} \cite{Merino03,Merino05}. According to their calculations of the extended Hubbard model, the charge carriers become increasingly localized as the effective nearest-neighbor Coulomb repulsion $V/t$ gets important; in addition to the Drude-like term, a finite-energy mode develops in $\sigma(\omega)$ due to charge-order fluctuations and shifts to higher frequencies. Driven by electronic correlations, a metal-insulator transition eventually takes place: the Drude component vanishes and mid-infrared excitations remain at frequencies comparable to the intersite Coulomb repulsion $V$. It is interesting to compare this behavior with the optical spectra of Mott insulators frequently present in half-filled systems \cite{Faltermeier07,Merino08,Dumm09,Ferber14}, where excitations between the lower and upper Hubbard bands allow one to determine the Coulomb repulsion $U$. In the latter case, the bands are centered around 2500~\cm\ and possess a typical bandwidth of approximately 0.5~eV \cite{Pustogow17}.
In our spectra of \bhgcl\ taken for $E$ perpendicular to the stacks we observe a very strong band at 500--700~\cm, resembling the charge-fluctuation mode seen in the all-organic superconductor $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CH$_2$CF$_2$SO$_3$ \cite{Kaiser10}. Along the stacks the optical conductivity looks rather different and the main feature develops around 1200~\cm, but it is present already above the metal-insulator transition. Although the actual shape of this mode is distorted by the $\nu_3$ antiresonance, in the inset of Fig.~\ref{fig:cond}(d) we compare the intensity of this 1200~\cm\ band to the temperature evolution of the $\nu_{3}({\rm A}_g)$ mode. The similarity in the $T$ dependence implies that the infrared band along the $a$-axis is related to the emv-coupled $\nu_{3}$ vibration. It becomes activated because the structure dimerizes along the stacking direction. The peak lies at a much lower position compared to the strongly dimerized $\kappa$-(BEDT-TTF)$_{2}X$ compounds \cite{Dumm09}. Such a low-lying dimer peak was also reported for $\beta$-(BEDT-TTF)$_{2}$ICl$_{2}$, where the application of pressure shifts the mode below the Hubbard band \cite{Hashimoto15}. From the large anisotropy of the electronic spectra we conclude that stripes of charge-poor and charge-rich molecules likely develop along the ($a+b$) direction, where the orbital overlap is largest; they alternate along the stacking direction. Such a large anisotropy has also been observed in other organic conductors such as $\alpha$- \cite{Ivek11}, $\theta$- \cite{Wang01} and $\kappa$-BEDT-TTF salts \cite{Drichko14} with charge-order stripes in that orientation. This is supported by calculations using the extended Hubbard model with an anisotropic intersite Coulomb interaction $V$, which favor stripe-type charge order, with a smaller gap, over the checkerboard charge order present in a square lattice \cite{Merino05}. Charge excitations within these one-dimensional channels lead to a band around 600~\cm, very similar to theoretical predictions \cite{Merino03,Merino05} and previous observations \cite{Kaiser10}. Below the phase transition at $T_{\rm CO}= 72$~K, the low-energy parts of $R(\nu)$ and $\sigma(\nu)$ alter drastically. A gap opens in the conductivity spectra around 400~\cm\ and 300~\cm\ for the ($a-b$) and ($a+b$) axes, respectively; we estimate $2\Delta_\omega\approx 40-50$~meV, corresponding to approximately 500~K. The agreement with the activation energy determined from the dc resistivity $\rho(T)$ shown in Fig.~\ref{fig:dc} is remarkable. These values are larger than expected from the mean-field ratio $2\Delta = 3.53\, k_B T_{\rm CO}$, which for $T_{\rm CO}=72$~K corresponds to only about 250~K. In this regard our findings are in line with other charge-ordered insulators, such as $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$ ($T_{\rm CO} = 135$~K, $2\Delta_\omega/hc \approx 600$~\cm\ \cite{Dressel94,Ivek11}), but smaller than what is reported for $\theta$-(BEDT-TTF)$_{2}$RbZn(SCN)$_{4}$: $T_{\rm CO} = 190$~K, $2\Delta_\omega/hc = 300$~\cm\ \cite{Wang01}. \subsubsection{Vibrational features} \label{sec:vibrational} The infrared-active molecular vibration $\nu_{27}({\rm B}_{1u})$ is regarded as the best probe of the local charge on the molecules \cite{Dressel04,Girlando11,Yue10}. Figure \ref{fig:nu27a} displays the temperature dependence of the optical conductivity spectra measured with the electric field $E\parallel c$, i.e.\ polarized perpendicular to the conducting layers.
Several {\it ungerade}, infrared-active vibrational B$_{u}$ modes are well resolved in the frequency region between 1300 and 1600~\cm, which is free from a disturbing electronic background. Three weak bands show up between roughly 1400 and 1425~\cm\ and exhibit almost no temperature dependence; they can be assigned to the $\nu_{28}({\rm B}_{1u})$ vibration associated with the CH$_{2}$ bending modes \cite{Sedlmeier12}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{nu27_2.eps} \caption{(Color online) Optical conductivity of \bhgcl\ for $E\parallel c$ at different temperatures; the spectra are offset for clarity. Most important is the splitting of the $\nu_{27}({\rm B}_{1u})$ vibrational mode below $T_{\rm CO}=72$~K, indicating a pronounced charge-ordered state that evolves with temperature.} \label{fig:nu27a} \end{figure} The most pronounced feature at room temperature is a very broad band around 1445~\cm\ that corresponds to the $\nu_{27}({\rm B}_{1u})$ band mainly involving the C=C vibrations. Such a large linewidth can be explained by dynamical charge-order fluctuations, which have also been discussed and reported for other $\beta^{\prime\prime}$ compounds \cite{Girlando14,Yamamoto04,Yamamoto06,Yamamoto08,Kaiser10}. Upon cooling, the single mode becomes narrower, shifts up in frequency, and develops a double-peak structure around $T=100$~K. This is not surprising given that two distinct types of BEDT-TTF molecules (A, B) reside in the unit cell, as illustrated in Fig.~\ref{fig:structure}(b). As the temperature is reduced below the phase transition at 72~K, the feature splits into four distinct peaks at 1442.2, 1449.0, 1469.8 and 1487.6~\cm, giving evidence that the insulating state exhibits charge disproportionation, i.e.\ there are four distinct types of BEDT-TTF molecules. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{splitting.eps} \caption{(Color online) (a)~Frequencies of the $\nu_{27}({\rm B}_{1u})$ modes plotted against temperature. The scale on the right axis corresponds to the charge per site according to Eq.~(\ref{eq:splitting}). (b)~Temperature dependence of the linewidth of the $\nu_{27}$ modes. Above $T=100$~K a single mode is observed (black squares); in the fluctuation regime ($72~{\rm K}<T<100$~K) two modes are fitted, indicated by red dots. Triangles of different colors are used to identify the four modes in the charge-ordered state. The dashed line indicates the metal-insulator transition at $T_{\rm CO}=72$~K, also observed in the dc resistivity $\rho(T)$.} \label{fig:splitting} \end{figure} To quantitatively characterize the observed charge imbalance, we fit the spectra of Fig.~\ref{fig:nu27a} by simple Lorentz oscillators. The temperature profiles of the center frequency and the linewidth are displayed in Fig.~\ref{fig:splitting}. The charge per molecule can be evaluated from the vibrational frequency by using the relationship \cite{Yamamoto05,Girlando11}: \begin{equation} \nu_{27}(\rho)=1398\,\text{cm}^{-1}+140(1-\rho)\,\text{cm}^{-1}/e \quad , \label{eq:splitting} \end{equation} where $\rho$ is the site charge in units of the elementary charge $e$; the corresponding scale is indicated on the right axis of Fig.~\ref{fig:splitting}(a). The $\nu_{27}({\rm B}_{1u})$ mode exhibits a slight blue shift from 1462 to 1470~\cm\ as the lattice hardens upon cooling in the metallic state; at $T=100$~K the peak approaches the $\rho = 0.5e$ value.
The vibrational feature concomitantly narrows from 60 to 15~\cm\ as thermal effects diminish. In the temperature range between 100 and 75~K the molecular vibration splits into two peaks, with the separation growing continuously from approximately 9 to 18~\cm. Close to the phase transition the lines become rather broad, pointing towards fluctuation effects. A similar broadening of the linewidth was observed in other charge-ordered compounds and related to charge fluctuations already present in the metallic state \cite{Yamamoto04,Yamamoto06}.

\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{raman.eps}
\caption{(Color online) Raman shift of the $\nu_{2}$ and $\nu_{3}$ modes of \bhgcl\ at several selected temperatures, as indicated. Black arrows indicate the splitting of the $\nu_{2}$ vibration in the charge-ordered state.}
\label{fig:Raman}
\end{figure}

\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{vibronic3.eps}
\caption{(Color online) Temperature dependence of the emv-activated vibrational modes in \bhgcl\ probed within the conducting $(ab)$-plane. (a,b)~The $\nu_{9}({\rm A}_{g})$ and $\nu_{10}({\rm A}_{g})$ modes show up only in the charge-ordered state and appear more diverse for the polarization (a)~perpendicular to the stacks [$E\parallel (a+b)$] compared to (b)~the stacking direction [$E\parallel (a-b)$]. (d,e)~The charge-sensitive $\nu_{60}({\rm B}_{3u})$ vibration comes as a strong dip for perpendicular polarization (panel d) but exhibits a Fano shape along the stacks (panel e). (g,h)~In the range between 1000 and 1400~\cm\ the emv-coupled fully symmetric $\nu_3({\rm A}_g)$ vibration shows up as a broad dip, indicated by the red dashed line. The narrow peak around 1300~\cm\ is assigned to the $\nu_{5}({\rm A}_g)$ CH$_2$ wagging mode. Also seen is the $\nu_{67}({\rm B}_{3u})$ mode as a strong dip at 1175~\cm\ in both polarizations. The curves are shifted for clarity. Panels (c,f,i) exhibit the temperature evolution of the $\nu_{10}$, $\nu_{60}$ and $\nu_{3}$ modes, respectively, where the left axes (black) correspond to the polarization perpendicular to the stacks and the right axes (red) to the parallel polarization.
\label{fig:vibronic}
}
\end{figure*}

At the metal-insulator phase transition at $T_{\rm CO}=72$~K evidenced by the resistivity data, the molecular vibrational modes exhibit a clear splitting into four distinct lines, suggesting the existence of four non-equivalent BEDT-TTF molecules in the unit cell and the breaking of inversion symmetry. The charge difference between the four modes remains temperature independent. This behavior follows the general trend observed in two-dimensional charge-ordered systems, such as $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$ \cite{Yue10,Ivek11,Beyer16} and \khgcl\ \cite{Drichko14,Ivek17b}. Based on Eq.~(\ref{eq:splitting}), the lower-frequency modes correspond to charges of $+0.68e$ and $+0.64e$ on the BEDT-TTF molecule, and the upper-frequency ones to $+0.49e$ and $+0.34e$ at $T=10$~K. The maximum charge disproportionation $2\delta_\rho =0.34e$ is about half the value found in $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$, similar to the imbalance observed for $\beta^{\prime\prime}$-(BEDT-TTF)$_{4}$$M$(CN)$_{4}\cdot$H$_{2}$O ($M$ = Ni, Pd), but larger than the values reported for the polymorph \khgcl\ \cite{Yamamoto08,Drichko14} and the charge-fluctuating superconductor $\beta^{\prime\prime}$-(BEDT-TTF)$_{2}$SF$_{5}$CH$_{2}$CF$_{2}$SO$_{3}$ \cite{Kaiser10}.
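The conversion from peak position to site charge amounts to inverting Eq.~(\ref{eq:splitting}). The short Python sketch below reproduces the charges quoted above from the four low-temperature peak positions; the residual deviations of a few hundredths of an elementary charge from the quoted values presumably stem from details of the fitting procedure.

\begin{verbatim}
# invert Eq. (eq:splitting): rho = 1 - (nu27 - 1398 cm^-1) / (140 cm^-1/e)
nu27 = [1442.2, 1449.0, 1469.8, 1487.6]     # peak positions at T = 10 K (cm^-1)
rho = [1.0 - (nu - 1398.0) / 140.0 for nu in nu27]
print(rho)                  # -> approx. [0.68, 0.64, 0.49, 0.36] (units of e)
print(max(rho) - min(rho))  # charge disproportionation 2*delta_rho ~ 0.32
\end{verbatim}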
The charge order pattern forms in such a way that the total energy of the system is minimized; to a first approximation we can neglect the transfer integral and discuss the intersite Coulomb repulsion only. For the $\beta^{\prime\prime}$-BEDT-TTF salts, similar to the $\theta$- and $\alpha$-phase crystals, the intermolecular distances are significantly shorter along the stacking axis. Consequently the intermolecular Coulomb repulsion is more pronounced in this direction compared to the perpendicular orientation \cite{MoriBook,Mori00}. This results in a charge alternation along the ($a-b$)-axis: A\,a\,B\,b\,A\,a\,B\,b, denoting the A-type and B-type molecules shown in Fig.~\ref{fig:structure}(b), with capital and lowercase letters referring to the charge-rich and charge-poor sites; the neighboring stacks will be arranged like B\,b\,A\,a\,B\,b\,A\,a, leading to horizontal stripes. T.~Yamamoto {\it et al.} classified some of the charge-ordered BEDT-TTF salts and suggested a phase diagram for the $\beta^{\prime\prime}$-type compounds \cite{Yamamoto08}. The amount of charge disproportionation and the alternation of short and long intermolecular bonds along the stacks puts \bhgcl\ into group I. Going beyond purely electronic models, Mazumdar {\it et al.} suggested alternative patterns for the $\alpha$-phase materials \cite{Mazumdar99,Mazumdar00} that might also be applicable to the present compound; however, all experimental results on $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$, such as infrared spectroscopy \cite{Ivek11}, NMR \cite{Ishikawa16} and x-ray studies \cite{Kakiuchi07}, evidence a horizontal stripe structure. Thus we conclude that the charge-ordered state in \bhgcl\ develops horizontal stripes.

Apart from the infrared-active $\nu_{27}({\rm B}_{1u})$ mode, there are two fully symmetric C=C stretching vibrations that make it possible to determine the charge on the BEDT-TTF molecules with the help of Raman spectroscopy. In Fig.~\ref{fig:Raman} the shift of the Raman-active $\nu_{2}({\rm A}_g)$ and $\nu_{3}({\rm A}_g)$ modes is displayed for several selected temperatures. At room temperature only one asymmetric band is detected at 1472~\cm, with a hump at around 1500~\cm. As the temperature is reduced, the 1472~\cm\ peak narrows without any shift or splitting, while the 1500~\cm\ feature increases in intensity and splits into two modes as $T=10$~K is approached. We assign the 1472~\cm\ feature to the $\nu_{3}$ molecular vibration, which is insensitive to the charge and strongly couples to the electronic background, as shown by the strong infrared intensity \cite{Yamamoto02,Yamamoto04,Yamamoto06,Yamamoto08} and discussed in more detail below. The band at 1500~\cm\ corresponds to the $\nu_{2}$ mode, which couples only weakly to the electronic background and is therefore suitable for estimating the degree of charge disproportionation. From the splitting of the $\nu_{2}({\rm A}_g)$ feature we calculate $2\delta_\rho = 0.2e$ in the charge-ordered state according to \cite{Yamamoto05,Girlando11}:
\begin{equation}
\nu_{2}(\rho)=1447\,\text{cm}^{-1}+120(1-\rho)\,\text{cm}^{-1}/e \quad .
\end{equation}
This value is smaller than the charge imbalance estimated from the $\nu_{27}({\rm B}_{1u})$ splitting observed by infrared spectroscopy (Fig.~\ref{fig:splitting}). We cannot rule out that some minor peaks could not be resolved due to the limited signal-to-noise ratio of our Raman data.
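For reference, the same inversion applies to the Raman-active mode: according to the $\nu_{2}$ relation above, the charge imbalance quoted in the text corresponds to a splitting of
\begin{equation*}
\Delta\nu_{2} = 120\,\frac{\text{cm}^{-1}}{e}\times 2\delta_\rho = 120\,\frac{\text{cm}^{-1}}{e}\times 0.2\,e = 24\,\text{cm}^{-1} .
\end{equation*}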
Several molecular vibrational modes show up rather prominently in the infrared spectra of \bhgcl\ measured within the ($ab$)-plane due to strong coupling to the charge-transfer band. These modes are charge sensitive but are also susceptible to structural changes. The $\nu_{10}({\rm A}_{g})$ and $\nu_{9}({\rm A}_{g})$ modes observed in the far-infrared range involve C-S stretching vibrations. As shown in Fig.~\ref{fig:vibronic}(a) for $E$ perpendicular to the stacks, the two modes can hardly be detected in the metallic state due to screening effects, but upon entering the charge-ordered insulating state several strong peaks can be identified. Within the plane, for $E\parallel {\rm stacks}$ [Fig.~\ref{fig:vibronic}(b)], we always detect two modes around 440 and 460~\cm, assigned to $\nu_{10}$ and $\nu_{9}$, respectively. Upon cooling the two features shift to higher frequencies and become sharper and more intense, as plotted in Fig.~\ref{fig:vibronic}(c). A similar multiple splitting of the $\nu_{9}$ and $\nu_{10}$ modes in the charge-ordered state was reported for other organic BEDT-TTF compounds \cite{Drichko06,Ivek11}.

Around $\nu=880$~\cm, for both polarization directions, we identify the $\nu_{60}({\rm B}_{3u})$ mode that involves a ring-breathing vibration [Fig.~\ref{fig:vibronic}(d,e)]; this molecular vibration is known to be very sensitive to charge disproportionation and dimerization \cite{Musfeldt05}. The mode splits into several dips for both polarizations at low temperatures; for $E\parallel {\rm stacks}$ the intensity strongly increases with lowering the temperature, as displayed in Fig.~\ref{fig:vibronic}(f). Interestingly, in the case of dimerized Mott insulators basically no significant temperature dependence was observed \cite{Sedlmeier12}.

The broad dip at around 1300~\cm\ shown in Fig.~\ref{fig:vibronic}(g,h) is assigned to the fully symmetric $\nu_{3}$ vibration, which has the strongest emv-coupling constant among all the A$_{g}$ modes and exhibits a downshift in frequency of more than 100~\cm\ compared to the 1450~\cm\ line measured by Raman spectroscopy (Fig.~\ref{fig:Raman}). There is a rather narrow three-peak structure at 1282, 1293, and 1302~\cm\ related to the $\nu_5({\rm A}_g)$ mode, present in both orientations within the ($ab$)-plane, which indicates that the symmetry of the unit cell is broken at low temperatures. For the polarization parallel to the stacks the intensity of the $\nu_3$ mode increases progressively as the temperature decreases, while in the perpendicular direction a strong growth is observed only below the transition temperature $T_{\rm CO}$. By comparing the conductivity spectra of \bhgcl\ in this range with those reported for the strongly dimerized $\kappa$-BEDT-TTF salts \cite{Faltermeier07,Dumm09,Sedlmeier12}, we conclude that the intra-dimer charge-transfer band is located much closer to the vibrational modes. In the case of \etcl\ and \etcn\ the bands appear around 3000~\cm, and consequently the emv-coupled vibrations show up as Fano-like features. In line with calculations based on a one-dimensional dimerized tight-binding model \cite{Bozio87}, we suggest that the $\beta^{\prime\prime}$-compound studied here is slightly dimerized. From the gradual enhancement of the vibrational intensity with decreasing temperature for $E\parallel {\rm stacks}$, plotted in Fig.~\ref{fig:vibronic}(c,f,i), we conclude that the dimerization is more pronounced along the stacks than perpendicular to them.
From the analysis of the temperature evolution of the pure vibrational modes and the emv-coupled vibronic features, we can identify the low-temperature ground state as a charge-ordered insulating state, in which the gradual structural distortion and the charge disproportionation are closely related. Already below 150~K a continuous phase transition gradually sets in and the electrical resistivity turns from metallic to insulating behavior, until the charge-ordered insulating state is fully established at $T_{\rm CO}=72$~K and $\rho(T)$ shoots up. In Sec.~\ref{sec:magnetic} we have drawn the same conclusion from the temperature dependence of the ESR spectra.

\section{Conclusions}

From our comprehensive characterization and extensive optical study we conclude that \bhgcl\ enters a charge-ordered insulating state at $T_{\rm CO}=72$~K, where pronounced charge disproportionation occurs with $2\delta_\rho=0.34e$ obtained from infrared vibrational spectroscopy. The charge-rich molecules arrange in stripes perpendicular to the stacks, i.e., along the ($a+b$) direction. The dc resistivity yields an activated behavior at low temperatures with an energy gap of approximately $\Delta_{\rho}=60$~meV, which agrees well with the gap observed in the optical spectra. The charge imbalance starts to develop in the temperature range $72~{\rm K}<T<100$~K; at elevated temperatures ($T>100$~K), all the way up to room temperature, charge fluctuations can be identified. Around 150~K the metallic behavior turns into a semiconducting temperature dependence of the resistivity. In this temperature range lattice deformations along the BEDT-TTF stacks are detected via the simultaneous enhancement of the emv-coupled vibronic modes and the dimerization excitation for $E \parallel {\rm stacks}$. The dimerization also leads to a pairing of the electron spins and a spin-gapped magnetic ground state with $\Delta_{\sigma}=47$~meV obtained from our ESR experiments. Temperature-dependent x-ray studies could further confirm our conclusions.

\begin{acknowledgements}
We thank Gabriele Untereiner for continuous experimental support and Andrej Pustogow and Mamoun Hemmida for valuable discussions. We thank Wolfgang Frey for the collection of the X-ray data. The project was supported by the Deutsche Forschungsgemeinschaft (DFG) via DR228/39-1 and DR228/52-1 and by the Deutscher Akademischer Austauschdienst (DAAD). The work in Chernogolovka was supported by FASO Russia, \#\ 0089-2014-0036.
\end{acknowledgements}
\section{Introduction}

The U.S. patent system contains around 10 million patents classified in about 500 main classes. However, some classes are much larger than others, some classes are much older than others, and, more importantly, none of these classes can be thought of as a once-and-for-all well-defined entity. Because of the important legal role of the classification, the U.S. Patent and Trademark Office (USPTO) has constantly devoted resources to improving the classification of inventions, so that the classification system has greatly evolved over time, reflecting contemporaneous technological evolution. Classifications evolve because new classes are created, but also because existing classes are abolished, merged and split. In fact, all current classes in 2015 have been established in the U.S. Patent Classification System (USPCS) after 1899, even though the first patent was granted in 1790 and the first classification system was created in 1829-1830. To give another example, out of all patents granted in 1976, 40\% are in a different main class now than they were in 1976.

To maintain the best possible level of searchability, the USPTO reclassifies patents so that at every single moment in time the patents are classified according to a coherent, up-to-date taxonomy. The downside of this is that the current classification is not meant to reflect the historical description of technological evolution as it unfolded. In other words, while the classification system provides a consistent classification of all the patents, this consistency is not time invariant. Observers at different points in time have a different idea of what is a consistent classification of the past, even when classifying the same set of past patents.

In this paper, we focus on the historical evolution of the U.S. patent classification. We present three sets of findings. First, we study the evolution of the number of distinct classes, contrasting current and historical classification systems. Recent studies \citep{strumsky2012using,strumsky2015identifying,youn2015invention} have shown that it is possible to reconstruct the long-run evolution of the number of subclasses using the current classification system. This allowed the authors to obtain interesting results on the types of recombinations and on the relative rates of introduction of new subclasses and new combinations. An alternative way to count the number of distinct categories is to go back to the archives and check how many classes actually existed at different points in the past. We find important differences between the historical and reconstructed evolution of the classification system. In particular, we find that historically the growth of the number of distinct classes has been more or less linear, with about two and a half classes added per year. By contrast, the reconstructed evolution \--- which considers how many current classes are needed to classify all patents granted before a given date \--- suggests a different pattern, with most classes created in the 19$^{th}$ century and a slowdown in the rate of introduction of novel classes afterwards. Similarly, using the historical classes we find that the relationship between the number of classes and the number of patents is compatible with Heaps' law, a power-law scaling of the number of categories with the number of items, originally observed between the number of different words and the total number of words in a text \citep{heaps1978information}. Using the reconstructed evolution, Heaps' law does not hold over the long run.
Knowing the number of distinct classes, the next question is about their growth and relative size (in terms of the number of patents). Thus our second set of findings concerns the size distribution of classes. We find that it is exponential, confirming a result of \citet{carnabuci2013distribution} on a much more restricted sub-sample. We also find that there is no clear relationship between the size and the age of classes, which rules out an explanation of the exponential distribution in terms of simple stochastic growth models in which classes are created once and for all.

Third, we hypothesize that new technology fields and radical innovations tend to be associated with higher reclassification activity. This suggests that the history of reclassification contains interesting information on the most transformative innovations. Our work here is related to \citet{wang2016technological}, who study how a range of metrics (claims, references, extensions, etc.) correlate with reclassification for 3 million utility patents since 1994. We use the data since 1976, for which we observe the class of origin and the citation statistics. It appears that reclassified patents are more cited than non-reclassified patents. We also construct a reclassification flow diagram, with aggregation at the level of NBER patent categories \citep{hall2001nber}. This reveals that a non-negligible share of patents are reclassified across NBER categories. We find that patents in ``Computers'' and in ``Electronics'' are often reclassified into other NBER categories, which is not the case for other categories such as ``Drugs''. We then discuss three examples of new classes (Fabric, Combinatorial Chemistry and Artificial Intelligence).

Finally, we argue that it is not possible to explain the observed patterns without accounting for reclassification. We develop a simple model in which classes grow according to preferential attachment but have a probability of being split. The model's only inputs are the number of patents and classes in 2015 and the Heaps' law exponent. Despite this extreme parsimony, the model is able to reproduce i) the historical and reconstructed patterns of growth of the number of classes, ii) the size distribution and (partially) the lack of age-size relationship, and iii) the time evolution of the reclassification rates.

The empirical evidence that we present and the assumptions we need to make for the model make it clear that the USPCS has evolved considerably, and that it is hardly possible to think of patent classes as technological domains with a stable definition. The classification system cannot be well understood as a system in which categories are created once and for all and accumulate patents over time. Instead, it is better understood as a system that is constantly reorganized. Because of this, using the current classification system to study a set of older patents is akin to looking at the past with today's glasses. In this paper, we not only show the differences between the historical and reconstructed reality, but we also explain how these differences emerged.

The paper is organized as follows. Section \ref{section:motiv} details our motivation, gives some background on categorization and reviews the literature on technological categories. Section \ref{section:data} describes the USPCS and our data sources. Section \ref{section:Nclass} presents our results on the evolution of the number of classes. Section \ref{section:sizedist} discusses the size distribution of classes.
Section \ref{section:reclass} presents our results on reclassification since 1976. Section \ref{section:model} presents a model that reproduces the main empirical patterns discovered in the previous sections. The last section discusses the results, motivates further research and concludes.

\section{Why is studying classification systems important?}
\label{section:motiv}

Classification systems are pervasive because they are extremely useful. At a fundamental level, categorization is at the basis of pattern recognition, learning, and sense-making. Producing a discourse regarding technologies and their evolution is no exception. As a matter of fact, theoretical and \emph{a fortiori} empirical studies almost always rely on some sort of grouping \--- or aim at defining one.

Historically, the interest in technology classifications has been mostly driven by the need to match technological and industrial activities \citep{schmookler1966invention,scherer1984using,verspagen1997measuring}. Since patented technologies are classified according to their function, not their industry of use or origin, this problem is particularly difficult. Clearly, a good understanding of both industry and patent classification systems is crucial to build a good crosswalk. Here we highlight the need to acknowledge that both classification systems \emph{change}. For this reason, our results give a strong justification for automated, probabilistic, data-driven approaches to the construction of concordance tables, such as the recent proposal by \citet{lybbert2014getting}, which essentially works by looking for keywords of industry definitions in patents to construct technology-industry tables.

With the rise of interest in innovation itself, many studies have used existing patent classifications to study spillovers across technology domains, generally considering the classification as static. For instance, \citet{kutz2004examining} studied the growth and distribution of patent classes since 1976; \citet{leydesdorff2008patent}, \citet{antonelli2010recombinant}, \citet{strumsky2012using} and \citet{youn2015invention} studied co-classification patterns; and \citet{caminati2010pattern} and \citet{acemoglu2016innovation} studied the patterns of citations across USPCS or NBER technology classes. Similarly, technological classification systems are used to estimate technological distance, typically between firms or inventors in the ``technology space'', based on the classification of their patent portfolios \citep{breschi2003knowledge,nooteboom2007optimal,aharonson2016mapping,alstott2016mapping}. Additional methodological contributions include \citet{benner2008close}, who have pointed out that using all the codes listed on patents increases the sample size and thus reduces bias in measuring proximity, and \citet{mcnamee2013can}, who argues for using the hierarchical structure of the classification system\footnote{In a related context (how professional diversity scales with city size), \citet{bettencourt2014professional} and \citet{youn2016scaling} exploited the different layers of industry and occupation classification systems to identify resolution-independent quantities. Measuring diversity depends on which layer of the classification system one uses, but in such a way that the infinite resolution limit (deepest classification layer) exists and can be used to characterize universal quantities.}.
In spite of this wide use of the current patent classification system, there have been no quantitative studies of the historical evolution of the system apart from the counts of the number of distinct classes by \citet{bailey1946history} and \citet{stafford1952rate}, which we update here. Recently, though, \citet{strumsky2012using} originated a renewed interest in patent classification by arguing that the classification of patents in multiple fields is indicative of knowledge recombination. Using the complete record of US patents classified according to the current classification system, \citet{youn2015invention} studied the subclasses (``technology codes''). They found that the number of subclasses used up to a given year is proportional to the cumulative number of patents until about 1870, but grew more and more slowly afterwards. Remarkably, however, this slowdown in the ``introduction'' of new subclasses does not apply to new \emph{combinations} of subclasses. \citet{youn2015invention} found that the number of combinations has been consistently equal to 60\% of the number of patents. This finding confirms \possessivecite{strumsky2012using} argument that patent classifications contain useful information to understand technological change over the long run. Furthermore, the detailed study of combinations can reveal the degree of novelty of specific patents \citep{strumsky2015identifying,kim2016technological}.

Besides their use for simplifying the analysis and creating crosswalks, technology taxonomies are also interesting \emph{per se}. A particularly interesting endeavour would be to construct systematic technology phylogenies showing how a technology descends from others \citep{basalla1988evolution,sole2013evolutionary} (for specific examples, see \citet{temkin2007phylogenetics} for cornets and \citet{valverde2015punctuated} for programming languages).

But categories are not simply useful to describe reality; they are often used to \emph{construct} it \citep{foucault1966mots}. When categories are created as nouns, they can have a predicate and become a subject. As a result, classification systems are institutions that allow agents to coordinate and agree on how things should be called and on where boundaries should be drawn. Furthermore, classification systems may create a feedback on the systems they describe, for instance by legitimizing the items they classify or, more simply, by biasing which items are found through search and reused in recombination to create other items. Categorization thus affects the future evolution of the items and their relations (boundaries) with other items. Along this line of argument, the process of categorization is performative. In summary, data on the evolution of technological classification systems provides a window on how society understands its technological artefacts and legitimizes them through the process of categorization.

According to \citet{latour2005reassembling}, social scientists should not impose their own categories on the actors that they analyze. Instead, a researcher should follow the actors and see how they create categories themselves. \citet{nelson2006perspectives} described technological evolution as the co-evolution of a body of practice and a body of understanding. The role of the body of understanding is to ``rationalize'' the practice. According to him, this distinction has important implications for understanding evolutionary dynamics, since each body has its own selection criteria.
Our argument here is that the evolution of the USPCS reflects how the beliefs of the community of technologists about the mesoscale structure of technological systems coevolve with technological advancements. We consider patent categorization as a process of codification of an understanding concerning the technological system. To see why studying patent categories goes beyond studying patents, it is useful to remember that examiners and applicants do not need to prove that a technology improves our \emph{understanding} of a natural phenomenon; they simply need to show that a device or process is novel and effective at solving a problem. However, to establish a new class, it is necessary to agree that bringing together inventions under this new header actually improves understanding, and thus the searchability of the patent system. In that sense we believe that the dynamics of patent classes constitute a window on the ``community of technologists''.\footnote{Patent officers are generally highly skilled workers: besides anecdotal evidence on particularly smart patent examiners (Albert Einstein), they are generally highly qualified (often holding PhDs). That said, \citet{rotkin1999history} mention that classification work was not particularly attractive and that the Classification division had difficulties attracting volunteers. More recently, \citet{paradise2012claiming} alludes to ``high turnover, less than ideal wages and heavy workloads''. There is an emerging literature on patent officers' biases and incentives \citep{cockburn2003all,schuett2013patent}, but it is focused on the decision to grant the patent. Little is known about biases in classification.}

Since classification systems are designed to optimize search, they reflect how search takes place, which in turn is indicative of the thought processes in place. These routines are an integral part of normal problem-solving within a paradigm. As a result, classification systems must be affected by paradigm-switching radical innovations. As noted by e.g. \citet{pavitt1985patent} and \citet{hicks2011structural}, a new technology which fits perfectly in the existing classification scheme may be considered an incremental innovation, as opposed to a radical innovation which challenges existing schemes. A direct consequence is that the historical evolution of the classification system contains a great deal of information on technological change beyond the information contained in the patents\footnote{In labor economics, some studies have exploited classification system changes. \citet{xiang2005new} finds that new goods, as measured by changes to the SIC system, have a higher skill intensity than existing goods. \citet{lin2011technological} and \citet{berger2015industrial} used changes in the index of industries and the dictionary of occupational titles to evaluate new work at the city level.}. We now describe our attempt at reconstructing the dynamics of the U.S. patent classification system.

\section{The data: the USPCS}
\label{section:data}

We chose the USPCS for several reasons. First of all, we chose a patent system because of our interest in technological evolution, but also because, due to their important legal role, patent systems benefit from the resources necessary to be maintained up to date. Among the patent classification systems, the USPCS was, until a couple of years ago, the oldest still in use \citep{wolter2012takes}. It is also fairly well documented, and in English.
Moreover, additional files are available: citation files, digitized text files of the original patents from which to get the classification at birth, files on the current classification, etc. Finally, it is one of the most, if not the most, used patent classification systems in studies of innovation and technological change. The major drawback of this choice is that the USPCS is now discontinued. This means that the latest years may include classificatory dynamics that anticipate the transition to the Cooperative Patent Classification\footnote{\url{http://www.cooperativepatentclassification.org/index.html}}, and it also implies that our research will not be updated and cannot make predictions specific to this system that can be tested in the future. More generally, we do recognize that nothing guarantees external validity; one could even argue that if the USPCS is discontinued and other classification systems are not, it shows that the USPCS has specificities and is therefore not representative of other classification systems. Nevertheless, we think that the USPCS had a major influence on technology classifications and is the best case study to start with.

\subsection{The early history of the USPCS}

The U.S. patent system was established on 31st July 1790, but the examination requirement was abolished 3 years later and reestablished only in 1836. As a result, there was no need to search for prior art, and therefore the need for a classification was weak. The earliest known official subject matter classification appeared in 1823 as an appendix to the Secretary of State's report to the Congress for that year \citep{rotkin1999history}. It classified 635 patent models in 29 categories such as ``Bridges and Locks'', placed 1184 in a category named ``For various purposes'', and omitted those which were not ``deemed of sufficient importance to merit preservations''. In 1829, a report from the Superintendent proposed that, with the prospect of the new, larger apartments for the Patent office, there would be enough room for a systematic arrangement and classification of models. He appended a list of 14 categories to the report.\footnote{The main titles were Agriculture, Factory machine, Navigation, Land works, Common trades, Wheel carriages, Hydraulicks (the spelling of which was changed in 1830), Calorific and steam apparatus, Mills, Lever and screw power, Arms, Mathematical instruments, Chemical compositions and Fine arts.} In 1830 the House of Representatives ordered the publication of a list of all patents, which appeared in December 1830/January 1831 with a table of contents organizing patents in 16 categories, which were almost identical to the 14 categories of 1829 plus ``Surgical instruments'' and ``Horology''.\footnote{An interesting remark on this classification \citep{rotkin1999history} is that it already contained classes based on industry categories (agriculture, navigation, \dots) and classes based on a ``specific mechanical force system'' (such as Lever and screw power).}

In July 1836, the requirement of novelty examination came into effect, making the search for prior art more pressing. Incidentally, in December the Patent office was completely destroyed by a fire. In 1837, a new classification system of 21 classes was published, including a Miscellaneous class and a few instances of cross-noting\footnote{The first example given by \citet{rotkin1999history} is a patent for a pump classified in both ``Navigation'' and ``Hydraulics and Hydrostatics''.}.
The following year another schedule was published, with some significant reorganization and a total of 22 classes. A new official classification appeared in 1868 and contained 36 main classes. Commenting on this increase in the number of classes, the Commissioner of patents wrote that \citep{rotkin1999history}
\begin{quote}
``The number of classes has risen from 22 to 36, a number of subjects being now recognized individually which were formally merged with others under a more generic title. Among these are builder's hardware, felting, illumination, paper, and sewing machines, to each of which subject so much attention has been directed by inventors that a division became a necessity to secure a proper apportionment of work among the corps of examiners.''
\end{quote}
Clearly, one of the rationales behind the creation and division of classes is to balance the class sizes, but this was not done only to facilitate search. This class schedule was designed with administrative problems in mind, including the assignment of patent applications to the right examiners and the ``equitable apportionment of work among examiners'' \citep{rotkin1999history}.

Shortly after 1868 a parallel classification appeared, containing 176 classes used in the newly set-up patent subscription service. This led to a new official classification containing 145 classes, published as a book in 1872. The number of classes grew to 158 in 1878 and 164 in 1880. \citet{rotkin1999history} note that the 1880 classification did not contain any form of cross-noting and cross-references, by contrast to the 1872 classification. In 1882 the classification reached 167 classes and introduced the indentation of subclasses at more than one level; it also introduced a class called ``Electricity'', long before this general purpose technology fully reached its potential.

In 1893 it was made clear in the annual report that a Classification division was required ``so that [the history of invention] would be readily accessible to searchers upon the novelty of any alleged invention''. After that, the need for a classification division (and the associated claim for extra budget) was consistently legitimated by this need to ``oppose the whole of prior art'' to every new application. In 1898 the ``Classification division'' was created with a head, two assistants and two clerks, with the purpose of establishing clearer classification principles and reclassifying all existing patents. This marked the beginning of professional classification at the USPTO. Since then the Classification division has been very active and the patent classification system has evolved considerably, as we document extensively in this paper. But before that, we need to explain the basic organizing principles of the classification system.

\subsection{Rationale and organization of the modern USPCS}
\label{section:USPCSrationale}

The USPCS attributes to each patent at least one subject matter. A subject matter includes a main class, delineating the main technology, and a subclass, delineating processes, structural features and functional features. All classes and most subclasses have a definition. Importantly, it is the patent claims that are classified, not the whole patent itself. The patent inherits the classification of its claims; its main classification is the classification of its main (``most comprehensive'') claim. There are different types of patents, and they are translated into different types of classes.
According to the USPTO\footnote{\url{http://www.uspto.gov/web/offices/pac/mpep/s1502.html}}, ``in general terms, a utility patent protects the way an article is used and works, while a design patent protects the way an article looks.'' The ``classification of design patents is based on the concept of function or intended use of the industrial design disclosed and claimed in the Design patent.''\footnote{\url{http://www.uspto.gov/page/seven-classification-design-patents}}

During the 19$^{th}$ century classification was based on which industry or profession was using the invention, for instance ``Bee culture'' (449) or ``Butchering'' (452). The example of choice \citep{falasco2002bases,uspto2005handbook,strumsky2012using} is that of cooling devices, which were classified separately if they were used to cool different things, such as beer or milk. Today's system would classify both as cooling devices in the class ``Heat exchange'' (165), reflecting the utility or function of the invention. Another revealing example \citep{schmookler1966invention,griliches1990patent} is that a subclass dealing with the dispensing of liquids contains both a patent for a water pistol and one for a holy water dispenser. This change in the fundamental principles of classification took place at the turn of the century with the establishment of the Classification division \citep{falasco2002bases,rotkin1999history}. Progressively, the division undertook to redesign the classification system so that inventions would be classified according to their utility. The fundamental principle which emerged is that of ``utility classification by \emph{proximate} function'' \citep{falasco2002bases}, where the emphasis on ``proximate'' means that it is the fundamental function of the invention that matters, not some example application in a particular device or industry. For instance, ``Agitating'' (366) is the relevant class for inventions which perform agitation, whether this is to wash clothes, churn butter, or mix paint \citep{simmons2014categorizing}. Another classification by utility is the classification by effect or product, where the result may be tangible (e.g. Semiconductor device and manufacture, 438) or intangible (e.g. Audio signal system, 381). Finally, the classification by structure (``arrangement of components'') is sometimes used for simple subject matter having a general function. This rationale is most often used for chemical compounds and stock material. It is rarely used for classes and more often used at the subclass level \citep{uspto2005handbook}.

Even though the classification by utility is the dominant principle, the three classification rationales (by industry, utility and structure) coexist. Each class ``reflects the theories of classification that existed at the time it was reclassified'' \citep{uspto2005handbook}. In addition, the system keeps evolving as classes (and even more so subclasses) are created, merged and split. New categories emerge when the need is felt by an examiner and approved by the appropriate Technology Center; in this case the USPCS is revised through a ``Classification order'' and all patents concerned are reclassified \citep{strumsky2012using}. An example of how subclasses are created is through alpha subclasses. Alpha subclasses were originally informal collections created by patent examiners themselves to help their work, but were later incorporated into the USPCS.
They are now created and used as temporary subclasses until they become formalized \citep{falasco2002united,uspto2005handbook}. When a classification project is completed, a classification order is issued, summarizing the changes officially, and all patents that need to be reclassified are, in principle, reclassified.

One of the latest classes to have been created is ``Nanotechnology'' (977), in October 2004. As noted by \citet{strumsky2012using}, using the current classification system one finds that after reclassification the first nanotechnology patent was granted much earlier\footnote{1986 for \citet{strumsky2012using}, 1978 for \citet{paradise2012claiming} and 1975 according to \citet{strumsky2015identifying} and to the data that we use here (US3896814). Again, these differences reflect the importance of reclassification.}. According to \citet{paradise2012claiming}, large federal research funding led to the emergence of ``nanotechnology'' as a unifying term, which became reflected in scientific publications and patents. Because nanotechnologies were new, received lots of applications and required interdisciplinary knowledge, it was difficult to ensure that prior art was reviewed properly. The USPTO engaged in a classification project in 2001, which started by defining nanotechnologies and establishing their scope, through an internal process as well as by engaging with other stakeholders such as users or other patent offices. In 2004 the Nanotechnology cross-reference digest was established; cross-reference means that this class cannot be used as a primary class. \citet{paradise2012claiming} argues that class 977 has been defined with too low a threshold of 1 to 100 nanometers. Also, reclassification has been encouraged but is not systematic, so that many important nanopatents granted before 2004 may not be classified as such.

Another example of class creation worth mentioning is given by \citet{erdi2013prediction}, who argue that the creation of ``Fabric (woven, knitted, or nonwoven textile or cloth, etc.)'' (442), established in 1997, could have been predicted based on a clustering analysis of citations. \citet{kyebambe2017forecasting} recently generalized this approach by formulating it as a classical machine learning classification problem: patent clusters are characterized by sets of features (citations, claims, etc.), and only some patent clusters are later on recognized as ``emerging technology'' by being reclassified into a new USPCS main class. In this sense, USPCS experts are labelling data, and \citet{kyebambe2017forecasting} developed a method to create clusters and train machine learning algorithms on the data labelled by USPCS experts.

Finally, a last example is that of organic chemistry\footnote{See \url{http://www.uspto.gov/page/addendum-reclassification-classes-518-585}}. Class 260 used to contain the largest array of patent documents, but it was decided that this class needed to be reclassified ``because its concepts did not necessarily address new technology and several of its subclasses were too difficult to search because of their size''. To make smaller reclassification projects immediately available, it was decided to split the large class into many individual classes in the range of Classes 518-585.
Each of these classes is ``considered an independent class under the Class 260 umbrella''; many of these classes have the same general name, such as ``Organic compounds \--- part of the class 532-570 series''\footnote{These classes also have a hierarchy indicated by their number, as subclasses within a class schedule usually do.}. As argued by \citet{strumsky2012using}, this procedure of introducing new codes and modifying existing ones ensures that the current classification of patents is consistent and makes it possible to study the development of technologies over a long period of time. However, while looking at the past with today's glasses ensures that we look at different periods of the past in a consistent way, it is not the same as reporting what the past was in the eyes of those who lived it. In this sense, we believe that it is also interesting to try to reconstruct the classification systems that were in place in the past. We now describe our preliminary attempt to do so, by listing available sources and constructing a simple count of the number of classes used in the past.

\subsection{Dataset construction}
\label{section:data-construction}

Before describing the data construction in detail, let us state clearly three important caveats. First, we focus on main classes, due to the difficulty of collecting historical data at the subclass level. This is an important omission and an avenue for further research. Investigating the complete hierarchy could add significant insight, for instance by contrasting ``vertical'' and ``horizontal'' growth of the classification tree, or by exploiting the fact that different layers of the system play a different role for search \citep{uspto2005handbook}.

Second, we limit our investigations to Primary (``OR'') classes, essentially for simplicity. Multiple classifications are indeed very interesting and would therefore warrant a complete independent study. Clearly, the fact that multiple classifications can be used is a fundamental feature of the current USPCS. In fact it is a key feature of its evolution: as noted above, ``cross-noting'' was common in some periods and absent in others, and a recent example of a novel class \--- Nanotechnology \--- happens to be an XR-only class (i.e., used only as a secondary classification). Here we have chosen to use only OR classes because it allows us to show the main patterns in a relatively simple way. Of course some of our results, in particular those of Section \ref{section:reclass}, are affected by this choice, and further research will be necessary to evaluate the robustness of our results. That said, OR classifications, which are used on patent applications to find the most appropriate examining division \citep{falasco2002united}, are arguably the most important.

Third, we limit our investigation to the USPCS, as justified at the beginning of Section \ref{section:data}. We have good reasons for choosing the USPCS in this study, which aims at giving a long-run picture. However, for studying the details of reclassification patterns and firmly establishing reclassification and classification system changes as novel and useful indicators of technological change, future research will need to establish similar patterns in the IPC or CPC.
As a result of these choices, our aim is to build a database\footnote{Our data is available at \url{https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/ZJCDCE}} of 1) the evolution of the USPCS primary classes, and 2) the reclassification of patents from one class to another. To do this we relied on several sources.

First, our most original data collection effort concerns the historical number of classes. For the early years our main sources are \citet{bailey1946history} and \citet{rotkin1999history}, complemented by \citet{reingold1960us} and the ``Manual of Classification'' for the 5 years within the period 1908\---1923. For the 1950\---60's, we used mostly a year-specific source named ``General information concerning Patents'', which contained a sentence like ``Patents are classified into $x$ classes''. Unfortunately, starting in 1969 the sentence becomes ``Patents are classified into more than 310 classes''. We therefore switched to another source named ``Index of patents issued from the United States Patent Office'', which contains the list of classes. Starting in 1963, it contains the list of classes with their name and number on a separate page\footnote{We had to make some assumptions. In the 1960's, Designs appeared subdivided into ``Industrial arts'' and ``Household, personal and fine arts'', so we assumed that the number of design classes is 2, up to the year 1977 when Design classes appear with their name and number. We implicitly assume that prior to 1977 the design classes were actually subclasses, since in 1977 there were 39 Design classes, whereas the number of (sub)classes used for design patents in 1976 was more than 60. It should be noted, though, that according to the dates established, some of the current design classes were created in the late 60's. Another issue was that for 1976 the number of Organic compound classes was not clear \--- we assumed it was 6, as listed in 1977. Finally, we sometimes had two slightly different values for the same year due to contradictory sources or because the sources refer to a different month.\label{footnoteDesign}}. For 1985, we used a report of the Office of Technology Assessment and Forecast (OTAF) of the Patent and Trademark Office \citep{otaf1985}. For the years 2001 to 2013, we collected data from the Internet Archive.\footnote{\url{https://archive.org/index.php}, where we can find the evolution of the url \url{http://www.uspto.gov/web/patents/classification/selectnumwithtitle.htm}. We added the class ``001'' to the count.} As of February 2016 there are 440 utility classes (including the Miscellaneous 001 and the ``Information storage'' G9B (established in 2008)), 33 design classes, and the class PLT ``Plant'', giving a total of 474 classes.\footnote{The list of classes available with their dates established contains 476 classes, but it does not contain 001, and it contains 364, 389, and 395, which have been abolished. We removed the abolished classes, and for Figs \ref{fig:Nclasses} and \ref{fig:Heaps} we assumed 001 was established in 1899.}

Second, to obtain reclassification data we matched several files. We obtained ``current'' classifications from the Master Classification File (version mcfpat1506) for patents granted up to the end of June 2015. We matched this with the Patent Grant Authority File (version 20160130) to obtain grant years\footnote{We first removed 303 patents with no main (OR) classification, and then 92 patents dated January 1st, 1800. We kept all patent kinds.}.
To obtain the classification at birth, we used the file ``Patent Grant Bibliographic (Front Page) Text Data (January 1976 -- December 2015)'', provided by the USPTO\footnote{At \url{https://bulkdata.uspto.gov/} (Access date: January 7, 2018)}, from which we also gathered citation data.

\section{Dynamics of the number of classes and Heaps' law}
\label{section:Nclass}

Our first result concerns the growth of the number of classes (Fig. \ref{fig:Nclasses}), which we have computed using three different methods.

\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.75]{Nclasses.pdf}
\caption{Evolution of the number of distinct classes.}
\label{fig:Nclasses}
\vspace{5mm}
\includegraphics[scale=0.75]{Heaps.pdf}
\caption{Heaps' law.}
\label{fig:Heaps}
\end{figure*}

First, we used the raw data collected from the historical sources mentioned in Section \ref{section:data-construction}. Quite unexpectedly, the data suggests a linear growth, with appreciable fluctuations mainly due to the introduction of an entirely new system in 1872 and to design classes in 1977 (see footnote \footref{footnoteDesign}). The grey line shows the linear fit, with an estimated slope of 2.41 (s.e. 0.06) and an $R^2$ of 0.96 (we treat years with no data as NA, but filling them with the figure from the last observed year does not dramatically affect the results).

Second, we have computed, using the Master Classification File for June 2015, the number of distinct classes in which the patents granted up to year $t$ are classified (black line). To do so, we have used all classes in which patents are classified (i.e. including cross-reference classes).\footnote{The (reconstructed) number of classes is slightly lower if we consider only Primary classes, because some classes are used only as a cross-reference, never as a primary class. These classes are 902: Electronic funds transfer, 903: Hybrid electric vehicles, 901: Robots, 930: Peptide or protein sequence, 977: Nanotechnology, 976: Nuclear technology, 968: Horology, 987: Organic compounds containing a bi, sb, as, or p atom or containing a metal atom of the 6th to 8th group of the periodic system, 984: Musical instruments, G9B: Information storage based on relative movement between record carrier and transducer.} The pattern of growth is quite different from the historical data. If we consider only the post-1836 data, the growth of the number of classes is sublinear \--- fewer and fewer classes are introduced every year. Before 1836, the trend was linear or perhaps exponential, giving a somewhat asymmetric S-shape to the overall picture.

Third, we computed the growth of the number of classes based on the dates at which all current classes were established (blue line)\footnote{Collected from \url{https://www.uspto.gov}, page USPCS dates-established}. According to this measure, the first class was created in 1899, when the reorganization of classification started with the creation of the Classification division\footnote{``Buckles, Buttons, clasps, etc.'' is an example of a class that was created early under a slightly different name (1872 according to \citet{simmons2014categorizing}; see \citet{bailey1946history} for details) but has a posterior ``date established'' (1904 according to the USPTO). Another example is ``Butchering''.}.

Fig. \ref{fig:Heaps} displays the number of classes against the number of patents on a log-log scale.
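As an illustration of how the reconstructed (black) curve is obtained, the following minimal Python/pandas sketch counts, for each year $t$, the number of distinct 2015 classes needed to classify all patents granted up to $t$. The table and the column names (\texttt{grant\_year}, \texttt{class\_2015}) are hypothetical stand-ins for the matched Master Classification File and Patent Grant Authority File described in Section \ref{section:data-construction}; the toy data are random.

\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# toy stand-in: with the real data, df has one row per granted patent,
# its grant year, and its (current, June 2015) class
df = pd.DataFrame({
    "grant_year": rng.integers(1790, 2016, size=10_000),
    "class_2015": rng.integers(1, 475, size=10_000).astype(str),
})

# year in which each 2015 class is used for the first time
first_use = df.groupby("class_2015")["grant_year"].min()

# reconstructed evolution: cumulative number of distinct 2015 classes
# needed to classify all patents granted up to each year t
reconstructed = first_use.value_counts().sort_index().cumsum()
print(reconstructed.tail())
\end{verbatim}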
In many systems, it has been found that the number of categories grows as a power law of the number of items that they classify, a result known as Heaps' law (for an example based on a classification system \--- the medical subject headings \--- instead of a language, see \citet{petersen2016triple}). Here we find that, using the 2015 classification, Heaps' law is clearly violated\footnote{It is possible to obtain a good fit by limiting the fit to the latest periods; however, this is arbitrary and gives a very low Heaps' exponent, leaving unexplained the creation of the vast majority of classes.}. Using the historical data, Heaps' law appears as a reasonable approximation. We estimate the Heaps' exponent to be $0.378$ with a standard error of 0.010 and $R^2=0.95$. The inset on the bottom right of Fig. \ref{fig:Heaps} shows that for the latest years Heaps' law fails: for the latest 2 million patents (about 20\% of the total), almost no classes were created. We do not know whether this slowdown in the introduction of classes is due to a slowdown of radical innovation, or to a more institutionally driven reason such as a lack of investment in the USPCS due to the expected switch to the Cooperative Patent Classification. Since the joint classification system was first announced on 25 October 2010 \citep{blackman2011classification}, we show this date (more precisely, patent number 7818817, issued on the $26^{th}$) as a suggestive indicator (dashed line in the inset). Another consideration is that the system may be growing more ``vertically'', in terms of the number of layers of subclasses \--- unfortunately here we have to focus on classes, so we are not able to test for this.

\section{The size distribution and the age-size relationship}
\label{section:sizedist}

Besides the creation and reorganization of technological categories, we are interested in their growth and relative sizes. More generally, our work is motivated by the Schumpeterian idea that the economy is constantly reshaping itself by introducing novelty \citep{dopfer2004micro,saviotti2004economic}. The growth of technological domains has been deeply scrutinized in the economics of technical change and development \citep{schumpeter1934theory, dosi1982technological, pasinetti1983structural, pavitt1984sectoral, freeman1997economics, saviotti1996technological,malerba2002sectoral}. A recurring theme in this literature is the high heterogeneity among sectors. When sectors or technological domains grow at different rates, structural change occurs: the relative sizes of the different domains are modified. To study this question in a parsimonious way, one may opt for a mesoscale approach, that is, study the size distribution of categories. Our work here is most directly related to \citet{carnabuci2013distribution}, who first showed on data for 1963\---1999 that the size distribution of classes is close to exponential. This is an interesting and at first surprising finding, because stochastic growth models such as \citet{gibrat1931inegalites} or \citet{yule1925mathematical}, which are based on the assumption that all domains grow at the same average rate, predict a log-normal or a Pareto distribution, which are much more fat-tailed. Instead, we do not see the emergence of relatively very large domains, and this may at first suggest that older sectors do not keep growing as fast as younger ones, perhaps due to technology life-cycles \citep{vernon1966international, klepper1997industry,andersen1999hunt}.
However, as we will discuss, we are able to explain the exponential size distribution by keeping Gibrat's law, but assuming that categories are split randomly. \subsection{The size distribution of categories} In this section we study the size distribution of classes, where size is the number of patents in 2015 and classes are defined using the current classification system. We use only the primary classification, so we have only 464 classes. Fig. \ref{fig:ranksize} suggests a linear relationship between the size of a class and the log of its rank, that is, class sizes are exponentially distributed\footnote{ For simplicity we used the (continuous) exponential distribution instead of the more appropriate (discrete) geometric distribution, but this makes no difference to our point. We have not rigorously tested whether or not the exponential hypothesis can be rejected, because the proper hypothesis is geometric and classical test statistics such as Kolmogorov-Smirnov do not easily apply to discrete distributions. Likelihood ratio tests interpreted at the 5\% level showed that it is possible to obtain better fits using two-parameter distributions that extend the exponential/geometric, namely the Weibull and the negative binomial, especially after removing the two smallest categories, which are outliers (containing 4 and 6 patents) and are part of larger series (532 and 520).}. To see this, let $p(k)$ be the probability density of the sizes $k$. If the size distribution is exponential, $p(k)=\lambda e^{-\lambda k}$. By definition, the rank $r(k)$ of a class of size $k$ is the number of classes that have a larger size, which is $r(k)=N \int_{k}^{\infty} \lambda e^{-\lambda x} dx = N e^{-\lambda k}$, where $N$ is the number of classes. This is equivalent to size being linear in the logarithm of the rank. We estimated the parameter $\lambda$ by maximum likelihood and obtained $\hat{\lambda}=4.71 \times 10^{-5}$ with standard error $0.22 \times 10^{-5}$. Note that $\hat{\lambda}$ is the reciprocal of the mean size, 21,223. We use this estimate to plot the resulting fit in Fig. \ref{fig:ranksize}. \begin{figure}[H] \centering \includegraphics[scale=0.6]{sizerank.pdf} \caption{Rank-size relationship.} \label{fig:ranksize} \end{figure} It is interesting to find an exponential distribution, since one may have expected a power law, which is quite common as a size distribution and often appears together with Heaps' law \citep{lu2010zipf,petersen2016triple}. Since the exponential distribution is a good representation of the data, it is worth looking for a simple mechanism that generates this distribution, which we will do in Section \ref{section:model}. But since many models can generate an exponential distribution, we first need to present additional empirical evidence that will allow us to discriminate between different candidate models. \subsection{The age-size relationship} To determine whether older classes contain more patents than younger ones, we first need to note that there are two ways of measuring age: the official date at which the class was established, and the year in which its first patent was granted. As expected, it appears that the year in which a class is established is always later than the date of its first patent\footnote{ The only exception is class 532. We confirmed this by manually searching the USPTO website. 532 is part of the Organic compound classes, which have been reorganized heavily, as discussed in Section \ref{section:USPCSrationale} }.
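Before turning to the age--size evidence, we note that the rank--size fit above is easy to reproduce numerically. The following minimal sketch (in Python; the input file \texttt{class\_sizes.txt} is a hypothetical placeholder for a vector of class sizes, not our actual data pipeline) computes the maximum-likelihood estimate $\hat{\lambda}$ and the implied curve $k(r)=\ln(N/r)/\lambda$ obtained by inverting $r(k)=Ne^{-\lambda k}$:

\begin{verbatim}
import numpy as np

# Hypothetical input: one size per class (464 primary classes here).
sizes = np.loadtxt("class_sizes.txt")
N = len(sizes)

# MLE for an exponential distribution: lambda_hat = 1 / (mean size).
lam = 1.0 / sizes.mean()

# Empirical rank-size pairs (rank 1 = largest class).
ranks = np.arange(1, N + 1)
empirical = np.sort(sizes)[::-1]

# Fitted curve: inverting r(k) = N * exp(-lambda * k)
# gives k(r) = log(N / r) / lambda.
fitted = np.log(N / ranks) / lam

print(f"lambda_hat = {lam:.3e}, mean size = {sizes.mean():.0f}")
\end{verbatim}

A useful sanity check is that the asymptotic standard error of the exponential MLE, $\hat{\lambda}/\sqrt{N}$, matches the value reported above ($4.71/\sqrt{464}\approx 0.22$, in units of $10^{-5}$).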
\begin{figure}[H] \centering \includegraphics[scale=0.55]{agesize.pdf} \caption{Age-size relationship.} \label{fig:agesize} \end{figure} Since these two ways of measuring age can be quite different, we show the age-size (or rather size-birth date) relationship for both in Fig. \ref{fig:agesize}. If stochastic growth models without reclassification were valid, we would observe a negative slope, that is, newer classes should have fewer patents because they have had less time for accumulation from random growth. Instead, we find no clear relationship. In the case of the year established, linear regressions indicated a positive relationship significant at the 10\% but not at the 5\% confidence level, whether or not the two ``outliers'' were removed. Using a log-linear model, we found a significant coefficient of 0.004 after removing the two outliers. In the case of the year of the first patent, the linear model indicated no significant relationship, but the log-linear model delivered a highly significant negative coefficient of -0.005 (which halves and becomes significant at the 10\% level only once the two outliers are removed). In all 8 cases (two different age variables and two different models, removing outliers or not), the $R^2$ was between 0.001 and 0.029. We conclude that these relationships are at best very weak, and in one case of the ``wrong'' sign (with classes established in recent years being on average larger). Whether they are significant or not, our point here is that their magnitude and goodness of fit are much lower than what one would expect from growth-only models such as \citet{simon1955class}, or its modification with uniform attachment (to match the exponential size distribution). We will come back to the discussion of models later, but first we want to show another empirical pattern and explain why we think reclassification and classification system changes are interesting indicators of technological change. \section{Reclassification activity as an indicator of technological change} \label{section:reclass} It seems almost tautological to say that a radical innovation is hard to categorize when it appears. If an innovation is truly ``radical'', it should profoundly change how we think about a technology, a technological domain, or a set of functions performed by technologies. If this is the case, a patent related to a radical innovation is initially hard to classify. It is likely that it will have to be reclassified in the future, when a more appropriate set of concepts has been developed and institutionalized (that is, when the community of technologists has codified a novel understanding of the radical innovation). It is also well accepted that radical innovations may create a new wave of additional innovations, which may or may not cluster in time \citep{silverberg2003breaking}, but when they are general purpose we do expect a rise in innovative activity \citep{bresnahan1995general}. A less commonly discussed consequence of the emergence and diffusion of General Purpose Technologies (GPTs) is that, both due to the sheer increase in the number of patents in this technology and due to the impact of this technology on others, we should expect higher classification volatility. Classification volatility is to be expected particularly in relation to GPTs because, by definition, GPTs interact with existing technologies and create or reorganize interactions among existing technologies.
From the point of view of the classification, the very definition of the objects and their boundaries are transformed. In short, some categories become too large and need to be split; some definitions become obsolete and need to be changed; and the ``best'' grouping of technologies is affected by the birth and death of conceptual relationships between the function, industry of origin or application, and structural features of technologies. In this section we provide a preliminary study. First, we establish that this indicator does exist (reclassification rates can be quite high, reaching 100\% if we look far enough in the past). Second, we show that reclassified patents are more cited. Third, we show that reclassification can take place across fairly distant technological domains, as measured by 1-digit NBER categories. Fourth, we discuss three examples of novel classes. \subsection{Reclassification rates} How many patents have been reclassified? To start with, since no classification existed prior to 1829, all patents published before that have been ``(re)classified'' in the sense that their category has been determined several and potentially many years after being granted. The same applies to all patents granted at times when completely different classification systems prevailed, which is the case before 1899. In modern times, classification has evolved, but as discussed in Section \ref{section:data}, the overall classification framework put in place at the turn of the century stayed more or less the same. For the period after 1976, we know the original classification of each patent because we can read it on the digitized version of the original paper (see Section \ref{section:data-construction}). After extensive efforts in parsing the data and a few manual corrections, we found an original class for 99.45\% of the post-1976 patents in the Master Classification File mcfpat1506. Out of these 5,615,525 patents, 412,724 (7.35\%) have been reclassified. There are 789 distinct original classes, including 109 with only 1 patent (apart from data errors, this can come from original classes that had no post-1976 patents classified in them). All current classes have been used as original classes except ``001'', which is only used as a miscellaneous class into which patents are reclassified\footnote{We removed US6481014.}. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{sharereclass.pdf} \caption{Share of patents granted in a given year that are in a different class in 2015, as compared to when they were granted.} \label{fig:sharereclass} \end{figure} Figure \ref{fig:sharereclass} shows the evolution of the reclassification rate, defined as the share of patents granted in year $t$ which have a different classification in 2015 than in $t$. It appears that as much as 40\% of the patents granted in 1976 belong to a different class now than when they first appeared. This reclassification rate declines sharply after that, reaching about 10\% in the 1990s and almost zero thereafter. This is an expected result, since the longer the time since a patent was granted, the higher the chances that the classification system has changed. \subsection{Are reclassified patents more cited?} Since there is an established relationship between patent value and the number of citations received \citep{hall2005market}, it is interesting to check whether reclassified patents are more cited. Of course, we are only observing correlations, and the relationship between citations and reclassification can work in multiple ways.
A plausible hypothesis is that the more active a technological domain is (in terms of new patents and thus new citations being made), the more likely it is that there will be a need for reclassification, if only to keep the classes at a manageable size\footnote{Relatedly, as noted by a referee, if patent examiners are also responsible for reclassification, then their prior art search might be oriented towards patents that they have reclassified, for which their memory is more vivid.}. Another hypothesis is that highly innovative patents are intrinsically ambiguous with respect to the classification system existing when they first appear. In any case, since we only have the class number at birth and the class number in 2015, we cannot make subtle distinctions between different mechanisms. However, we can check whether reclassified patents are on average more cited, and we can do so after controlling for the grant year and class at birth. Table \ref{table:cit} shows basic statistics\footnote{We count citations made to patents for which we have reclassification data, from patents granted until June 2015. We removed duplicated citations.}. Reclassified patents constitute 7.35\% of the sample, and have received on average more than 24 citations, which is more than twice as many as the non-reclassified patents. \begin{table}[ht] \centering \begin{tabular}{|c|rrrr|} \hline & share & mean & median & s.d. \\ \hline All & 100.00 & 11.30 & 4.00 & 26.64 \\ Non reclassified & 92.65 & 10.27 & 4.00 & 23.94 \\ Reclassified & 7.35 & 24.29 & 11.00 & 47.40 \\ \hline \end{tabular} \caption{Patent citations summary statistics.} \label{table:cit} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.6]{reg-coeff-time-evol.pdf} \caption{Coefficient of the year-specific regressions of the log of citations received on the reclassification dummy (including dummies for the class of origin or not).} \label{fig:reg-coeff-time-evol} \end{figure} We expect this result to be largely driven by the fact that older patents have both a higher chance of having been reclassified and a higher chance of having accumulated many citations. To investigate the relationship between reclassification and citations in more detail, we regressed the log of total citations received in 2015 on the reclassification dummy and on dummies for the class at birth, for each year separately (keeping only the patents with at least one citation received, 76.6\%): \[ \log(c_{i})=\alpha_t + \beta_t R_i + \sum_{j=1}^{J_t-1} \gamma_{j,t} D_{i,j} \] where $c_{i}$ is the number of citations received by patent $i$ between its birth (time $t$) and (June) 2015, $R_i$ is a dummy that takes the value of 1 if patent $i$ has a main class code in 2015 different from the one it had when it appeared (i.e. in year $t$), $J_t$ is the number of distinct classes in which the patents born in year $t$ were classified at birth, and $D_{i,j}$ is a dummy that takes the value of 1 if patent $i$ was classified in class $j$ at birth. Note that we estimate this equation separately for every grant year. We include the class-at-birth dummies because this allows us to compare patents that are ``identical twins'' in the sense of being born in the same class in the same year. The coefficient $\beta_t$ then shows whether reclassified patents have on average received more citations. The results are reported in Fig. \ref{fig:reg-coeff-time-evol}, showing good evidence that reclassification is associated with more citations received.
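For concreteness, the year-specific regression can be sketched in a few lines (Python with pandas and statsmodels; the input file and the column names \texttt{grant\_year}, \texttt{log\_cites}, \texttt{reclassified} and \texttt{birth\_class} are hypothetical placeholders, not our actual variable names):

\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf

# One row per patent with at least one citation received.
df = pd.read_csv("patents.csv")  # hypothetical input

results = {}
for year, g in df.groupby("grant_year"):
    # Class-at-birth dummies compare "identical twins": patents
    # born in the same class in the same year.
    fit = smf.ols("log_cites ~ reclassified + C(birth_class)",
                  data=g).fit()
    results[year] = (fit.params["reclassified"],
                     fit.bse["reclassified"])

print(pd.DataFrame(results, index=["beta_t", "s.e."]).T)
\end{verbatim}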
As expected, the coefficients for recent years\footnote{2015 is excluded because no patents had been reclassified.} are not significant, since there has not been enough time for reclassification to take place and citations to accumulate (the bands represent standard approximate 95\% confidence intervals). We also note that controlling for the class at birth generally weakens the effect (red dashed line compared to black solid line). \subsection{Reclassification flows} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{PatentReclassificationHJTcategory-20180111.pdf} \caption{Reclassification flows.} \label{fig:reclassification_flow} \end{figure*} To visualize the reclassification flows, we consider only the patents that have been reclassified. As in \citet{wang2016technological}, we want to construct a bipartite graph showing the original class on one side and the current class on the other side. Since we identify classes by their code number, a potentially serious problem may arise if classes are renumbered, although we believe this tends to be rare given the limited time span 1976\---2015. An example of this is ``Bee culture'', which was class number 6 but has been class number 449 since 1988; class number 6 no longer exists. Even in this case, although the two classes have the same name, we do not know whether they are meant to encompass the same technological domain and have simply been ``renumbered'', or whether other considerations prevailed and the renumbering coincides with a more substantive reorganisation. An interesting extension of our work would be to use natural language processing techniques on class definitions to define a measure of reclassification distance more precisely and exclude mere renumbering. To make the flow diagram readable and easier to interpret, we aggregate by using the NBER categories\footnote{For more details on the NBER categories, see the historical reference \citep{hall2001nber} and the recent effort by \citet{marco2015uspto} to attribute NBER (sub)categories to patent applications.}. To assign each class to an NBER category, we used the 2006 version of the NBER classification, which we modified slightly by classifying the Design classes separately, and classifying USPCS 850 (Scanning probe techniques and apparatus) in NBER 4 (Electrical) and USPCS PLT (Plant) in NBER 6 (Others). Fig. \ref{fig:reclassification_flow} shows the results\footnote{See the online version at \url{http://danielykim.me/visualizations/PatentReclassificationHJTcategory/}}. The share of a category is the fraction of reclassified patents whose primary class is in that NBER category. The width of the lines between an original category $i$ and a current category $j$ is proportional to the number of reclassified patents whose original class is in category $i$ and current class is in category $j$. Line colors indicate the original category. We can see that patents originally classified in the category Chemical tend to be reclassified into another class of the same category. The same pattern is observed for the category Drugs. By contrast, the categories Computers \& Communications and Electrical \& Electronics display more cross-reclassifications, in line with \possessivecite{wang2016technological} findings on a restricted dataset. This may indicate that the NBER categories related to computers and electronics are not as crisply defined as those related to Chemical and Drugs, and may be suggestive of the general purpose nature of computers.
This could also suggest that these domains were going through a lot of upheaval during this time period. While there is some ambiguity in interpreting these patterns, they are not \emph{a priori} obvious and point to the same phenomenon as the correlation between citations and reclassifications: dynamic, high-impact, genuinely novel, general purpose fields are associated with more taxonomic volatility. \subsection{Three examples of novel classes} We now complement the study by providing three examples of novel classes, chosen among recently created classes (and excluding cross-reference only classes). We proceed by looking at the origin of patents reclassified into the new class when it is created. We approximate this by looking at the patents that were granted in a year preceding the birth year of a class, and now appear as reclassified into it. Note that we can determine the class of origin only for patents granted after 1976. We also give as an example the oldest reclassified (utility) patent we can find. We discuss each class separately (see Table \ref{table:reclass_main} for basic statistics on each of the three example classes, and Table \ref{table:reclass_origin} for the source classes in each case; ``Date'' is the date at which an ``origin'' class was established). \begin{table}[ht] \centering \begin{tabular}{|p{15mm}p{20mm}p{15mm}p{15mm}|} \hline Class Number & Date established & Size & Size post 1976 \\ \hline 442 & 1997 & 6240 & 2654 \\ 506 & 2007 & 1090 & 1089 \\ 706 & 1998 & 1270 & 1217 \\ \hline \end{tabular} \caption{Basic information for the three novel classes described in the main text. Size is the number of patents that are classified in a class now but were granted before the class was created. Size post 1976 is the same, but excluding all pre-1976 patents, to be compared with the size of classes of origin in Table \ref{table:reclass_origin}.} \label{table:reclass_main} \end{table} \begin{table}[ht] \centering \begin{tabular}{|rp{45mm}cc|} \hline \multicolumn{4}{|c|}{Classes of origins for Class 442}\\ \hline Size & Title & Num. & Date \\ \hline 2615 & Stock material or miscellaneous articles & 428 & 1975 \\ 16 & Compositions & 252 & 1940 \\ 5 & Chemical apparatus and process disinfecting, deodorizing, preserving, or steril & 422 & 1978 \\ \hline \hline \multicolumn{4}{|c|}{Classes of origins for Class 506}\\ \hline 579 & Chemistry: molecular biology and microbiology & 435 & 1979 \\ 127 & Chemistry: analytical and immunological testing & 436 & 1982 \\ 69 & Chemical apparatus and process disinfecting, deodorizing, preserving, or steril & 422 & 1978 \\ \hline \hline \multicolumn{4}{|c|}{Classes of origins for Class 706}\\ \hline 966 & [NA] Information Processing System Organization & 395 & 1991 \\ 195 & [NA] Electrical Computers and Data Processing Systems & 364 & 1977 \\ 41 & Electrical transmission or interconnection systems & 307 & 1952 \\ \hline \end{tabular} \caption{Number of patents pre-dating the creation of a class and reclassified into it, by class of origin; Only the three largest origin classes are shown, with their class number and date established.} \label{table:reclass_origin} \end{table} Motivated by the study of \citet{erdi2013prediction} showing that the emergence of a new class (442) could have been predicted by citation clustering, we study class 442, ``Fabric (woven, knitted, or nonwoven textile or cloth, etc.)''.
The class definition indicates that it is ``for woven, knitted, nonwoven, or felt article claimed as a fabric, having structural integrity resulting from forced interassociation of fibers, filaments, or strands, the forced interassociation resulting from processes such as weaving, knitting, needling hydroentangling, chemical coating or impregnation, autogenous bonding (\dots) or felting, but not articles such as paper, fiber-reinforced plastic matrix materials (PPR), or other fiber-reinforced materials (\dots)''. This class is ``an integral part of Class 428 [and as such it] incorporates all the definitions and rules as to subject matter of Class 428.'' The oldest patent reclassified in it was a patent by Charles Goodyear describing how applying caoutchouc to a woven cloth led to a material with ``peculiar elasticity'' (US4099, 1845, no classification on the paper file). A first remark is that this class was relatively large at birth. Second, an overwhelming majority of patents came from the ``parent'' class 428. Our interpretation is that this is an example of an old branch of knowledge, textiles, which due to continued development needs to be more finely defined to allow better classification and retrieval \-- note that the definition of 442 is not only about what the technologies are, but what they are not (paper and PPR). Our second example is motivated by \possessivecite{kang2012science} qualitative study of the process of creation of an IPC class, in which the USPTO participated. \citet{kang2012science} describes how the process of class creation was initiated because of a high number of incoming patents on the subject matter. Her main conclusion is that disputes regarding class delineation were resolved by evaluating the size of the newly created category under certain definitions. Class 506, ``Combinatorial chemistry technology: method, library, apparatus'', includes in particular ``Methods specially adapted for identifying the exact nature (e.g., chemical structure, etc.) of a particular library member'' and ``Methods of screening libraries or subsets thereof for a desired activity or property (e.g., binding ability, etc.)''. The oldest reclassified patent is US3814732 (1974), ``modified solid supports for solid phase synthesis''. It claims polymeric hydrocarbon resins that are modified by the introduction of other compounds. It was reclassified from class 260, ``Chemistry of carbon compounds''. In contrast to 442 or 706 (reviewed below), the reclassified patents are drawn relatively uniformly from several categories. Our interpretation is that this is an example of a mid-age technology (chemistry), which due to its interactions with other technologies (computers) develops a novel branch that is largely cross-cutting, but specific enough to warrant the creation of a new class. Our last example is 706, ``Data processing \-- Artificial Intelligence'', which is a ``generic class for artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (\dots); and including systems for reasoning with uncertainty (\dots), adaptive systems, machine learning systems, and artificial neural networks.''. We chose it because we possess at least some domain knowledge. The oldest reclassified AI patent is US3103648 (1963), which is an ``adaptive neuron having improved output'', nicely echoing the recent surge of interest in neural networks for machine learning (deep learning).
It was originally classified in class 340, ``Communications: electrical''. In contrast to the other two examples, we find that the two largest sources were classes that have since been abolished (we recovered the names of 395 and 364 from the ``1996 Index to the US patent classification''; their date established was available from the ``Date Established'' file documented in Section \ref{section:data-construction}). Other classes with the ``Data processing'' header were created during the period, showing that the USPTO had to completely re-organize its computer-related classes around the turn of the millennium. Our interpretation is that this is an example of a highly novel technology, emerging within the broader context of the third and perhaps fourth industrial revolution. Because computers are relatively recent and general purpose, it is very difficult to create taxonomies with stable boundaries. These three examples show strikingly different patterns of technological development and its associated classification volatility. An old branch of knowledge which is deepening (textiles), a mid-age branch of knowledge that develops novel interactions with others (chemistry), and a new branch of knowledge (computers) for which classification officers strive to find useful organizational schemes. We acknowledge that these are only examples \-- presumably, some other examples of new classes would follow similar patterns, but other patterns may exist. We have found that about two thirds of post-1976 new classes have more than 90\% of their pre-birth (and post-1976) reclassified patents coming from a single origin (pre-existing class), suggesting that a form of ``branching'' or ``class splitting'' is fairly common, at least when looking at OR classes only. We do not want to put too much weight on these early results, which will have to be systematised, developed further using subclasses and multiple classifications, and, crucially, compared against results obtained using the IPC/CPC. We do think that such a systematic study of classification re-organizations would tell a fairly detailed story of the evolution of technology, but rather than embarking on such a detailed study here, we propose to summarize most of what we have learned so far into a simple theoretical model. \section{A simple model} \label{section:model} \begin{figure*}[p!] \centering \includegraphics[width=\textwidth]{simulationresults.pdf} \caption{Simulation results against empirical data (red crosses). See Section \ref{section:model} for details.} \label{fig:simulationresults} \end{figure*} In this section, we propose a very simple model that reproduces several facts described above. As compared to other recent models for size distributions and Heaps' law in innovation systems \citep{tria2014dynamics,marengo2016arrival,lafondSDPC}, the key assumption that we will introduce is that classes are sometimes split and their items reclassified. We provide basic intuition instead of a rigorous discussion\footnote{For instance, we do not claim that the model \emph{in general} produces a certain type of pattern such as a lack of age-size relationship. We simply show that under a specific parametrisation taken from the empirical data (say $\sim$10 million patents, 500 classes, and a Heaps exponent of $0.38$), it produces patterns similar to the empirical data.}. Let us start with the well-known model of \citet{simon1955class}. A new patent arrives every period.
The patent creates a new category with probability $\alpha$; otherwise it goes to an existing category, which is chosen with probability proportional to its size. The first assumption is meaningful, because in reality the number of categories grows over time. The second assumption is meaningful too, because this ``preferential attachment''/``cumulative advantage'' is related to Gibrat's law: categories grow at a rate independent of their size, so that their probability of getting the next patent is proportional to their size. There are three major problems with this model. First, it gives the Yule-Simon distribution for the size distribution of classes. This is basically a power law, so it has much fatter tails than the exponential law that we observe. In other words, it overpredicts the number of very large categories by a large margin. Second, since older categories have more time to accumulate patents, it predicts a strong correlation between age and size. Third, since at each time step categories are created with probability $\alpha$ and patents are created with probability $1$, the relationship between the number of categories $\alpha t$ and the number of patents $t$ is linear, instead of following Heaps' constant-elasticity relation. A solution to make the size distribution exponential instead of power law is to replace preferential attachment with uniform random attachment, that is, to choose each category with equal probability. Besides the fact that this new assumption may seem less intuitive than Gibrat's law, this would not solve the second problem, because it would still be the case that older categories accumulate more patents. The solution is to acknowledge that categories are not entities that are defined once and for all; instead, they are frequently split and their patents are reclassified. We therefore turn to the model proposed by \citet{ijiri1975some}. It assumes that new categories are introduced over time by splitting existing ones. In its original form the model postulates a linear arrangement of stars and bars. Each star represents a patent, and bars materialize the classes. For instance, if there are 3 patents in class 1 and 1 patent in class 2, we have |***|*|. Now imagine that between any two symbols there is a space. At each period, we choose a space uniformly at random and fill it with either a bar (with probability $\alpha$) or a star (with complementary probability). When a star is added, it means that an existing category acquires a new patent. When a bar is added, it means that an existing category is split into two categories. It turns out that the resulting size distribution is exponential, as desired. But before we can evaluate the age-size relationship, we need to decide how to measure the age of a category. To do this we propose to reformulate the model as follows. We start with one patent in one category. At each period, we first select an existing category $j$ with probability proportional to its size $k_j$ and add one patent to it. Next, with probability $\alpha$ we create two novel categories by splitting the selected category uniformly at random; that is, we draw a number $s$ from a uniform distribution ranging from 1 to $k_j$. Each patent in $j$ is then assigned to the first new category with probability $s/k_j$, or to the second new category otherwise. This procedure leads to a straightforward interpretation: the patents are \emph{reclassified} from $j$ to the first or the second new category.
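The reformulated process is simple enough to state in a few lines of code; the sketch below is a minimal illustration in Python, where the run length \texttt{T} and the random seed are illustrative choices, and the time-dependent splitting probability anticipates the calibration $\alpha_t = C_0 b\, t^{b-1}$ derived in the next paragraphs:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def alpha(t, C0=1.07, b=0.378):
    # Splitting probability calibrated to Heaps' law (derived below);
    # capped at 1 for the earliest periods.
    return min(1.0, C0 * b * t ** (b - 1))

T = 100_000        # number of patents, i.e. periods (illustrative)
sizes = [1]        # start with one patent in one category

for t in range(2, T + 1):
    # 1. Gibrat/preferential attachment: pick category j with
    #    probability proportional to its size.
    j = rng.choice(len(sizes), p=np.array(sizes) / (t - 1))
    sizes[j] += 1
    # 2. With probability alpha_t, split j: draw s uniformly in
    #    {1, ..., k_j}; each patent joins part 1 with prob. s / k_j
    #    (a part may be empty under this relaxed formulation).
    if rng.random() < alpha(t):
        k = sizes[j]
        s = rng.integers(1, k + 1)
        left = int(rng.binomial(k, s / k))
        sizes[j:j + 1] = [left, k - left]  # two new categories

print(len(sizes), "categories,", sum(sizes), "patents")
\end{verbatim}

Dating each new category, both by the period of the split and by the arrival of its first patent, is then immediate, as described next.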
These two categories are \emph{established} at this step of the process, and since patents are created sequentially one by one, we also know the \emph{date of the first patent} of each new category. To give a date in calendar years to patents and categories, we can simply use the dates of the real patents. Since $\alpha$ is constant, as in Simon's original model, we are left with the third problem (Heaps' power law is violated). We propose to make $\alpha$ time dependent to solve this issue\footnote{An interesting alternative (instead of using the parameter $\alpha$) would be to model separately the growth in the number of patents and the splitting of categories by patent classification officers.}. Denoting the number of categories by $C_t$ and the number of patents by $t$ (since there is exactly one new patent per period), we want to have $C_t = C_0 t^b$ (Heaps' law). This means that $C_t$ should grow at a per period rate of $dC_t/dt=C_0 b t^{b-1}$. Since we have measured $b \approx 0.378$ and we want the number of categories to be 474 when the number of patents is 9,847,315, we can calculate $C_0=C_t/t^b=1.07$. This gives $\alpha_t = 1.07 \times 0.378 \hspace{1mm} t^{0.378-1}$, which we take to be 1 when $t=1$.\footnote{There is a small inconsistency arising because the model is about the primary classification only, but the historical number of classes and Heaps' law are measured using all classes, because we could not differentiate cross-reference classes in historical data. Another point of detail is that we could have used the estimated $C_0=0.17$ instead of the calculated one. These details do not fundamentally change our point.} Note how parsimonious the model is: its only inputs are the current number of patents and categories, and the Heaps exponent. Here we do not attempt to study it rigorously. We provide simulation results under specific parameter values. Fig. \ref{fig:simulationresults} shows the outcome of a single simulation run (black dots and lines), compared to empirical data (red crosses). The first pair of panels (a and b) shows the same (empirical) data as Fig. \ref{fig:Nclasses} and \ref{fig:Heaps} using red crosses. The results from the simulations are the curves. The simulation reproduces Heaps' law well, by direct construction (the grey middle curve on panel b). But it also reproduces fairly well the evolution of the reconstructed number of classes, both the one based on the ``date of first patent'' and the one based on the ``dates established'', and both against calendar time (years) and against the cumulative number of patents. The second pair of panels (c and d) shows the age-size relationships, with the same empirical data as in Fig. \ref{fig:agesize}. Panel c shows that the model produces categories whose sizes are \emph{not} strongly correlated with the year in which they were established, as in the empirical data. However, in panel d, our model displays a fairly strong negative correlation between size and the year of the first patent, and this correlation is absent (or much weaker) in the empirical data. These results for one single run are confirmed by Monte Carlo simulations. We ran the model 500 times and recorded the estimated coefficient of a simple linear regression between the log of size and each measure of age. The insets show the distribution of the estimated coefficients, with a vertical line showing the coefficient estimated on the empirical data. The next panel (e) shows the size distribution in a rank-size form, as in Fig.
\ref{fig:ranksize}. As expected, the model reproduces this feature of the empirical data fairly well. However, the empirical data is not exactly exponential and may be slightly better fitted by a negative binomial model (which has one more parameter and recovers the exponential when its shape parameter equals one). The top right histogram shows the distribution of the estimated negative binomial shape parameter. The empirical value departs only slightly from the Monte Carlo distribution. Finally, the last panel (f) shows the evolution of the share of reclassified patents, with the empirical data from Fig. \ref{fig:sharereclass} augmented by values of 1 between 1790 and 1899 (since no current categories existed prior to 1899, all patents have been reclassified). Here again, the model reproduces the empirical pattern fairly well. All or almost all patents from early years have been reclassified, and the share is falling over time. That said, for recent years (post 1976), the specific shape of the curve is different. Overall, we think that given its simplicity the model reproduces a surprisingly high number of empirical facts. It allows us to understand the differences between the patterns of growth of the reconstructed and historical number of classes. Without a built-in reclassification process it would not have been possible to match all these empirical facts \--- if only because without reclassification the historical and reconstructed evolutions coincide. This shows how important it is to consider reclassification when we look at the mesoscale evolution of the patent system. On the other hand, much more could be done to make the model more interesting and realistic, for instance by also modelling subclasses and requiring that reclassification take place within a certain distance. \section{Conclusion} In this paper, we have presented a quantitative history of the evolution of the main patent classes within the U.S. Patent Classification System. Our main finding is that the USPCS underwent regular and important changes. For academic researchers, these changes may be perceived as a source of problems, because they suggest that it may not always be legitimate to think that a given patent belongs to one and the same category forever. This means that results obtained using the current classification system may change in the future, when a different classification system is used, even if the very same set of patents is considered. That said, we do not think the effect would be strong. Besides, using the current classification system is still often the best thing to do because of its consistency. Our point here is not to critique the use of the current classification, but to argue that historical changes to the classification system itself contain interesting information that has not been exploited. Our first result is that different methods to compute the growth of the number of classes give widely different results, establishing that the changes to the classification system are very important. Our second result suggests that we do not see very large categories in empirical data because categories are regularly split, leading to an exponential size distribution with no relationship between the age and size of a category. Our third result is that reclassification data contains useful information for understanding technological evolution.
Our fourth result is that a very simple model can explain many of the observed patterns, provided that it includes the splitting of classes and the reclassification of patents. Taken together, these results show that it is both necessary and interesting to understand the evolution of classification systems. An important limitation of our study is its scope: we study the US, at the class level, using main classifications only. A contrasting example we have found is the French patent classification of 1853, which contained 20 groups and was revised multiple times in the $19^{th}$ century, but while subclasses were added it kept a total of 20 classes even in the ``modern'' classification of 1904. Similarly, while direct comparison is difficult, our preliminary exploration of other classification systems, such as the IPC and CPC, suggests that they do not feature the same size distribution, perhaps pointing to a different mode of evolution than the one proposed in our model. We believe that our findings are interesting for all researchers working with economic and technological classifications, because we have characterized quantitatively the volatility of the patent classification system. We do not know whether classifications are unstable because collective representations of technological artefacts are context-dependent, or because, as more items are introduced and resources are invested in classifying them appropriately, collective discovery of the ``true'' mesoscale partition takes place. But clearly, when interpreting results which rely upon a static snapshot of a classification system, one should bear in mind that classification systems are dynamic. A case in point is the use of technological classes to produce forecasts: how can we predict the evolution of a given class or set of classes several decades ahead, when we know these classes might not even exist in the future? In this paper, we are not proposing a solution to this forecasting issue \-- only raising conceptual problems that classification system changes pose. Further, even if we consider that today's categorization will not change, a subtle issue arises in the production of correct forecasting models. To see this, consider developing a time series model describing the growth of some particular classes. To test the forecasting ability of the model, one should perform out-of-sample tests, as e.g. \citet{farmer2016predictable} did for technology performance time series. Part of the past data is used to predict more recent data, and the data which is not used for estimation is compared to the forecasts. Now, note that when we use the current classification, we effectively use data from the present; that is, the delineation of categories for past patents uses knowledge from the present, and it is therefore not entirely valid to evaluate forecasts (there is ``data snooping'' in the sense that one uses knowledge of the future to predict the future). Classification system changes pose serious problems for forecasting but may also bring opportunities: if classification changes reflect technological change, then one can in principle construct quantitative theories of that change. Since the patterns described here could be roughly understood using an extremely simple model, it may be possible to make useful forecasts with more detailed models and data, for instance predicting new classes \citep{erdi2013prediction,kyebambe2017forecasting}.
This could be useful because patent classification changes are more frequent than changes to other classification systems such as industries, products and occupations. An interesting avenue for future research would be to use the changes of the patent classification system to predict the changes of industry and occupation classification systems, thus predicting the types of jobs of the future. Beyond innovation studies, with the rise of very large datasets, digitized and carefully recorded classifications and classification changes will become increasingly available. It will be possible to explore classifications as an evolving network and track the splitting, merging, birth and death of categories. This is an exciting new area of research, but the big data that we will accumulate will only (or mostly) cover recent years. This makes historical studies such as the present one all the more important. \bibliographystyle{agsm}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section{Introduction} Neural sequence-to-sequence models with attention have become the \textit{de facto} method for machine translation~\cite{bahdanau2014neural,vaswani2017attention}. NMT models require a large amount of parallel data to surpass the quality of phrase-based statistical models, and they are very sensitive to data quality~\cite{koehn2017six}. As a conditional text generation task, machine translation contains both \textit{intrinsic} uncertainty, where a given sentence usually has multiple valid reference translations, and \textit{extrinsic} uncertainty, due to noise in the sentence alignment that produces parallel training data~\cite{ott2018analyzing}. As an option for handling data uncertainty, latent variable models such as variational autoencoders (VAE) have been investigated in language modeling and conditional text generation~\cite{miao2016neural,zhang2016variational,yang2017improved}. However, in contrast to their success when applied to computer vision tasks~\cite{kingma2013auto,rezende2014stochastic}, VAEs in natural language processing suffer from \textit{posterior collapse}, where the learnt latent code is ignored by the decoder~\cite{bowman2015generating}. In this work, we propose to address posterior collapse when using latent variable models in neural machine translation. First, we provide an analysis of the evidence lower bound (ELBO) used in conditional variational autoencoders (CVAE) commonly used in conditional text generation. Our analysis reveals that optimizing CVAE's ELBO not only inevitably leads to vanishing divergence of the posterior from the prior during training, but also to decreasing mutual information between latent codes and data. Based on this insight, we propose two modifications of CVAE's ELBO to address this problem: 1) we explicitly add mutual information back to the training objective in a principled way, and 2) we use a factorized decoder, predicting ``bag of words'' as an auxiliary decoding distribution to regularize latent variables, finding that both are complementary. We summarize our contributions as follows: \begin{enumerate} \item We improve CVAE by enhancing mutual information between latent variables and data, effectively mitigating posterior collapse in conditional text generation. \item We apply the proposed model in neural machine translation with the Transformer architecture. Experiments demonstrate that latent variables are not ignored even in the presence of a powerful autoregressive decoder. Compared to variational NMT with the CVAE architecture or the non-latent Transformer, the proposed improvements yield greater robustness and data efficiency. \item We extend the proposed model to semi-supervised learning with monolingual data, and show that it achieves superior performance in self-training by effectively learning from source-side monolingual data. \end{enumerate} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{model_joint2.pdf} \end{center} \caption{Model architecture, including training with only parallel data, and joint training with monolingual data.} \label{fig:training} \end{figure*} \section{Background} \subsection{Neural Machine Translation} Problem instances in machine translation are pairs of sequences \((\bm{x} \triangleq [x_1, \ldots, x_m], \bm{y} \triangleq [y_1, \ldots, y_n])\), where \(\bm{x}\) and \(\bm{y}\) represent the source and target sentences, respectively.
Conventionally, a neural machine translation (NMT) model is a parameterized conditional distribution whose likelihood factors in an autoregressive fashion: \begin{equation} p_\theta\left(\bm{y}\mid\bm{x}\right) = \prod_{i=1}^{|\bm{y}|} p_\theta\left(y_i \mid \bm{x}, \bm{y}_{<i}\right)\text{.} \end{equation} The dominant translation paradigm first represents the source sentence as a sequence of contextualized vectors (using the \emph{encoder}), then decodes this representation token-by-token into a target hypothesis according to the above factorization. The parameters \(\theta\) are learned by optimizing the log-likelihood of training pairs with stochastic gradient methods \cite{bottou2004large}. Decoding the model occurs in a deterministic fashion, using an efficient approximate search like beam search \cite{tillmann-ney-2003-word}. Recently, Transformer with multi-head attention has become the state of the art for NMT \cite{vaswani2017attention}. \subsection{Conditional Variational Autoencoder (CVAE)} Our NMT approach extends the conditional variational autoencoder (CVAE) \cite{sohn2015learning}, of which variational NMT \cite{zhang2016variational} is a particular case. It introduces a latent variable $\bm{z}$ to model the conditional distribution: \begin{equation} p_\theta(\bm{y} \mid \bm{x}) = \int_{\bm{z}}p_\theta(\bm{y}\mid \bm{z}, \bm{x}) \cdot p(\bm{z} \mid \bm{x})\, \mathrm{d}\bm{z} \text{.} \label{eqn:log-likelihood} \end{equation} However, it is intractable to directly marginalize \(\bm{z}\). Instead, the CVAE objective is to maximize the \textbf{evidence lower bound (ELBO)} of the \mbox{(log-)}likelihood: \begin{multline} \mathcal{L}_{\mathrm{CVAE}}(\phi, \theta; \bm{x}, \bm{y}) = \Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})} \left[\log p_\theta(\bm{y}\mid \bm{x}, \bm{z})\right] \\ - \KL(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) \parallel p_\theta(\bm{z} \mid \bm{x})) \text{,} \end{multline} where $\KL$ represents the Kullback--Leibler (KL) divergence between two distributions. Learning is done by amortized variational inference, where the variational distribution \(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})\) is an inference network parameterized by \(\phi\). \subsection{Posterior Collapse} Posterior collapse can be explained mathematically by analysis of the ELBO objective, as well as from the perspective of a powerful decoder. We consider both in this subsection. We first provide an analysis of CVAE's objective and identify its drawback.
Recall that our computed loss approximates the loss on the true data distribution by using a finite number of samples: \begin{equation} \mathcal{L} = \Expect_{p_{\mathcal{D}}(\bm{x}, \bm{y})} \left[ \mathcal{L}_{\mathrm{CVAE}}(\phi, \theta; \bm{x}, \bm{y}) \right] \end{equation} Thus, the KL term is: \begin{align} &\Expect_{p_\mathcal{D}(\bm{x}, \bm{y})} \left[\KL(q_\phi(\bm{z} \mid \bm{x}, \bm{y}) \parallel p_\theta(\bm{z} \mid \bm{x})) \right] \nonumber \\ &\triangleq \Expect_{p_\mathcal{D}(\bm{x}, \bm{y})}\Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})} \left[\log q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) - \log p(\bm{z}\mid \bm{x})\right] \nonumber \\ &= \sum_{\bm{x}, \bm{y}} q(\bm{x}, \bm{y}, \bm{z}) \log \frac{q (\bm{x}, \bm{y}, \bm{z})}{p(\bm{x}, \bm{y}, \bm{z})} \nonumber \\ &= \Expect_{\bm{x}, \bm{y}, \bm{z}}\log \frac{q(\bm{x}, \bm{y} \mid \bm{z}) q(\bm{z})}{p(\bm{x}, \bm{y}) p(\bm{z})} \nonumber \\ &= \underbrace{-\Entr(\bm{x}, \bm{y} \mid \bm{z}) + \Entr(\bm{x}, \bm{y})}_{\triangleq \MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y})} + \underbrace{\Expect_{q_\phi(\bm{z})} \log \frac{q(\bm{z})}{p(\bm{z})}}_{\triangleq \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z}))} \label{eqn:kl} \end{align} The third line comes from multiplying the numerator and denominator by \(p_\mathcal{D}(\bm{x}, \bm{y})\) following \citet{hoffman2016elbo}, the fact that \(p(\bm{z} \mid \bm{x})\) is conditionally independent of \(\bm{y}\), and defining \(p_{\mathcal{D}}(\bm{x}, \bm{y}) \triangleq \frac{1}{N}\) for all \(N\) training samples \((\bm{x}, \bm{y}) \in \mathcal{D}\). The fifth line comes from factoring and conditional independence. As the two resulting terms are non-negative \cite{cover-thomas-2006-elements}, the global minimum of \Cref{eqn:kl} is \(\MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y}) = \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z})) = 0 \). Unfortunately, at this point, the consequence of the optimization is that \(\bm{z}\) is independent of the data. Another explanation of posterior collapse is the ``powerful decoder'' perspective: an autoregressive model with large capacity comes to approximate a complex distribution \emph{without using the latent variables} \cite{bowman2015generating,he2019lagging}. This is a challenge for NMT, which requires a powerful decoder such as Transformer with direct attention to the encoder. \section{Addressing Posterior Collapse} \subsection{CVAE Guided by Mutual Information} \subsubsection{Adding $\MI_{q_{\phi}}(\bm{z}; \bm{x},\bm{y})$ to ELBO} To combat the optimization dilemma from \cref{eqn:kl}, we explicitly add the mutual information term to the CVAE's ELBO and obtain a new training objective: \begin{multline} \label{eq:micvae} \mathcal{L}_{\mathrm{MICVAE}} =\mathcal{L}_{\mathrm{CVAE}} + \MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y}) \\ = \Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})}\log p(\bm{y}\mid \bm{x}, \bm{z}) - \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z})) \text{.} \end{multline} The new training objective aims to match the aggregated posterior distribution of the latent variable $q_{\phi}(\bm{z})$ to the aggregated prior distribution $p(\bm{z})$.
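To make the difference concrete, the following sketch (Python with PyTorch; the batch-mean estimate of the aggregated posterior, the uniform prior, and the random stand-in for the inference network output are illustrative assumptions, not a verbatim excerpt of our implementation) contrasts the standard per-example KL penalty with the aggregated penalty in \cref{eq:micvae} for a categorical latent variable:

\begin{verbatim}
import torch

def kl_categorical(q, p, eps=1e-8):
    # KL(q || p) for categorical distributions given as prob. vectors.
    return (q * ((q + eps).log() - (p + eps).log())).sum(-1)

# Per-example posteriors q(z | x, y) and priors p(z | x):
# shape [batch, num_categories]; uniform prior for illustration.
q_probs = torch.softmax(torch.randn(32, 16), dim=-1)
p_probs = torch.full((32, 16), 1.0 / 16)

# Standard CVAE penalty: mean of per-example KL(q(z|x,y) || p(z|x)).
kl_cvae = kl_categorical(q_probs, p_probs).mean()

# Modified penalty: KL between batch-aggregated posterior and prior,
# which leaves the mutual information MI(z; x, y) unpenalized.
kl_agg = kl_categorical(q_probs.mean(0), p_probs.mean(0))

# The gap between the two is a (batch) estimate of MI(z; x, y).
print(float(kl_cvae), float(kl_agg), float(kl_cvae - kl_agg))
\end{verbatim}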
This objective can be seen as an extension of InfoVAE~\cite{zhao2017infovae} to conditional generative models, where we have overcome the mismatch between the (joint) data distribution \(p_\mathcal{D}(\bm{x}, \bm{y})\) and the (conditional) log-likelihood objective \(p_\theta(\bm{y} \mid \bm{x})\). \subsubsection{Guiding $\bm{z}$ to Encode Global Information} Several existing approaches weaken the decoder to encourage latent variables to be utilized, which is not preferred in practice \cite{bowman2015generating,guljarani2016pixelvae}. Here we propose a different approach: explicitly guiding the information encoded in $\bm{z}$ without reducing the decoder's capacity. Inspired by an information-theoretic view of posterior collapse using Bits-Back Coding theory~\cite{wallace-freeman-1987-estimation,Hinton:1993:KNN:168304.168306,chen2016variational}, we add an auxiliary loss for $\bm{z}$ to encode information which cannot be modelled locally by the autoregressive decoder distribution $\prod_t p_\theta(y_t \mid \bm{x}, \bm{y}_{<t})$. We use bag-of-words (BoW) prediction as the auxiliary loss. It encodes global information while having a non-autoregressive factorization $\prod_t p_\psi(y_t \mid \bm{z})$. The auxiliary decoder complements the autoregressive decoder (which is locally factorized) by combining predictions at the Softmax layer, i.e.\ $p(y_t \mid \bm{x}, \bm{y}_{<t}, \bm{z})$ is a \textbf{mixture of softmaxes} \cite{yang2018breaking}: \begin{multline} p(y_t \mid \cdot) = (1-\lambda) \cdot p_{\theta}(y_t \mid \bm{x}, \bm{y}_{<t}, \bm{z}) \\ + \lambda \cdot p_{\psi}(y_t \mid \bm{z}) \text{.} \end{multline} Thus, the bag-of-words objective regularizes the log-likelihood bound. \subsection{Architecture} \paragraph{Inference Network} We use discrete latent variables with reparameterization via Gumbel-Softmax~\cite{jang2016categorical} to allow backpropagation through discrete sampling. Compared to the multivariate Gaussian distribution commonly used in VAE and CVAE, our parameterization allows us to explicitly account for multiple modes in the data. To make our model more general, we introduce a \emph{set} of discrete latent variables \(\bm{z} = \{\bm{z}_1, \ldots, \bm{z}_K\}\) which are independently sampled from their own inference networks $\Phi_k$. Specifically, each $\Phi_k$ computes dot-product attention with encoder outputs $\bm{h}\in \mathbb{R}^d $: \begin{equation} \bm{C}_k = \text{Softmax}(\frac{\bm{e}_{k}\bm{W}^k(\bm{h}\bm{W}^h)^\top}{\sqrt{d}})\bm{h}\bm{W}^h \text{.} \end{equation} We can now sample $\bm{z}_k$ by the Gumbel-Softmax reparameterization trick~\cite{jang2016categorical}: \begin{equation} \begin{split} \bm{z}_k = \text{GumbelSoftmax}(\bm{C}_k) =\text{softmax}\left(\frac{\bm{C}_k + \bm{g}}{\tau}\right), \end{split} \end{equation} where $\bm{g}=-\log(-\log(\bm{u})), \bm{u}\sim \text{Uniform}$ is the Gumbel noise and $\tau$ is a fixed temperature (we use $\tau=1$ in this paper). At inference time, we use a discrete version by directly sampling from the latent variable distribution. \paragraph{BoW Auxiliary Decoder} Given an inferred sample $\bm{z}_k \sim \Phi_k(\bm{h})$, the BoW decoder predicts all tokens at once without considering their order.
We compute the cross-entropy loss for the predicted tokens over the output vocabulary space \(V\): \begin{equation} \mathcal{L}_{\mathrm{BoW}} = \sum_{i=1}^{|V|} p_i \log \hat{p}_\psi(y_i \mid \bm{z}), \quad \sum_{i=1}^{|V|} p_i = 1 \text{.} \end{equation} We take the empirical distribution $p_i$ to be a token's frequency within a sentence normalized by its total frequency within a mini-batch, mitigating the effect of frequent (stop) words. $\hat{p}_{\psi}$ is computed by conditioning on the latent code only, without direct attention to encoder outputs. We use dot-product attention between the latent embeddings and the token embeddings (each of dimensionality \(d\)): \begin{equation} \label{eq:bow_loss} p_{\psi}(y_i \mid \bm{z}) = \text{Softmax}_i \left(\frac{\Embedding(\bm{z})\Embedding^T(V)}{\sqrt{d}}\right) \text{.} \end{equation} \subsection{Training} \label{sec:model_training} We train our model using amortized variational inference, where samples $\bm{z}$ are drawn from the posterior distributions to get a Monte Carlo estimate of the gradient. In addition to standard CVAE supervised learning with parallel data, we also extend our model to be jointly trained with monolingual data. \paragraph{Semi-supervised learning} We apply the same modification to VAE's ELBO, following \citet{zhao2017infovae}. For joint training with source-side monolingual data, we add $\MI_{q_{\phi}}(\bm{z}; \bm{x})$ to the ELBO\footnote{Learning to copy the source text has proven useful for low-resource NMT \cite{currey-etal-2017-copied}.}, and for target-side monolingual data, we add $\MI_{q_{\phi}}(\bm{z}; \bm{y})$. The joint objective sums the modified CVAE and VAE objectives: \begin{equation} \label{eq:mono_loss} \mathcal{L}_{\mathrm{Mono}} = \Expect_{q_{\phi}(\bm{z}\mid \bm{x})}\log p(\bm{x} \mid \bm{z}) - \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z})) \end{equation} \begin{equation} \label{eq:joint_loss} \mathcal{L}_{\mathrm{Joint}} = \mathcal{L}_{\mathrm{MICVAE}} + \mathcal{L}_{\mathrm{Mono}} \end{equation} \Cref{alg:main} describes the overall training strategy.
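As an illustration of how these pieces fit together, here is a minimal sketch (Python with PyTorch) of Gumbel-Softmax sampling for one latent variable and the mixture-of-softmaxes combination with the BoW decoder; the tensor shapes, the random stand-ins for the inference network and decoder outputs, and the latent codebook are illustrative assumptions, not our implementation:

\begin{verbatim}
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    # z = softmax((logits + g) / tau), g = -log(-log(u)), u ~ Uniform.
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return F.softmax((logits + g) / tau, dim=-1)

batch, n_cat, d, vocab = 32, 16, 512, 30000

C_k = torch.randn(batch, n_cat)            # stand-in for attention output C_k
z_k = gumbel_softmax_sample(C_k, tau=1.0)  # relaxed one-hot, [batch, 16]

codebook = torch.randn(n_cat, d)           # hypothetical latent embedding table
z_emb = z_k @ codebook                     # Embedding(z), [batch, d]

token_emb = torch.randn(vocab, d)          # output token embeddings Embedding(V)
# BoW decoder: dot-product attention between latent and token embeddings.
p_bow = F.softmax(z_emb @ token_emb.T / d ** 0.5, dim=-1)

# Mixture of softmaxes with the autoregressive decoder's prediction.
p_ar = F.softmax(torch.randn(batch, vocab), dim=-1)  # stand-in decoder output
lam = 0.1
p_mix = (1 - lam) * p_ar + lam * p_bow     # p(y_t | x, y_<t, z)
\end{verbatim}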
\begin{algorithm}[t] \caption{\label{alg:main}Training Strategy}
\begin{algorithmic}[1]
\STATE $\Phi_{enc}, \Phi_{k=1, \ldots, K}, \Theta_{dec}, \Theta_{BoW} \gets \text{initialize parameters}$
\WHILE{$\Phi_{enc}, \Theta_{dec}, \Theta_{BoW}, \Phi_{k=1, \ldots, K}$ have not converged}
\STATE{Sample $(\mathbf{x}, \mathbf{y})$ from $D^{\text{bitext}}$}
\STATE{Compute $\mathcal{L}_{\mathrm{MICVAE}}$ with \Cref{eq:micvae}}
\STATE{Train $\Phi_{enc}, \Theta_{dec}, \Phi_{k=1, \ldots, K}$ with $\mathcal{L}_{\mathrm{MICVAE}}$}
\STATE{Compute $\mathcal{L}_{\mathrm{BoW}}$ with \Cref{eq:bow_loss}}
\STATE{Train $\Phi_{enc}, \Theta_{BoW}, \Phi_{k=1, \ldots, K}$ with $\mathcal{L}_{\mathrm{BoW}}$}
\IF{\text{self\_training}}
\STATE{Sample $\mathbf{x}$ from $D^{\text{mono}}$}
\STATE{Compute $\mathcal{L}_{\mathrm{Mono}}$ with \Cref{eq:mono_loss}}
\STATE{Train $\Phi_{enc}, \Phi_{k=1, \ldots, K}$ with $\mathcal{L}_{\mathrm{Mono}}$}
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}

\section{Experiments}
Here we describe our experiments, showing that our techniques have practical value for both mitigating posterior collapse and improving translation quality.

\subsection{Setup}

\paragraph{Datasets}
First, we evaluate our models on standard WMT benchmark datasets. Second, we focus on two representative challenges in NMT: low-resource translation and robustness to noisy data.
\begin{description}
\item[WMT14 German--English] We use data from the WMT14 news translation shared task, which has 3.9M sentence pairs for training, with BPE tokenization.
\item[WMT16 Romanian--English] We use data from the WMT16 news translation shared task. We use the same BPE-preprocessed \cite{sennrich-etal-2016-neural} train, dev and test splits as in \citet{gu2017non}, with 608k sentence pairs for training.
\item[Low-resource benchmark (FLoRes) Sinhala--English] We use the same preprocessed data as in \citet{guzman2019two}. There are 646k sentence pairs.
\item[MT for Noisy Text (MTNT) French--English] We use 30K subword units built jointly from source and target sentences, and only keep sentences with fewer than 100 tokens. For training, there are 34,380 sentence pairs for English--French and 17,616 sentence pairs for French--English \cite{michel2018mtnt}. We also used 18,676 \emph{monolingual} sentences per language from the same data source (Reddit).
\end{description}

\paragraph{Implementation}
All of our models are implemented using the Transformer architecture. For WMT14 De--En and WMT16 Ro--En, we use the base configuration \cite{vaswani2017attention}: 6 blocks, with 512-dimensional embeddings, 2048-dimensional FFN, and 8 attention heads. For FLoRes (low-resource) and MTNT (both low-resource and noisy), we use a smaller Transformer: 4 layers, 256-dimensional embeddings, 1024-dimensional inner layers, and 4 attention heads. Input and output embeddings are shared between the inference network and the decoder. We use $K=4$ categorical latent variables, each of dimension 16, both found by grid search over the validation set. Auxiliary bag-of-words predictions are combined with the decoder prediction using $\lambda=0.1$. All models are optimized using Adam with $\beta_1=0.9$, $\beta_2=0.98$, $\epsilon=10^{-8}$, weight decay of 0.001, and the same warmup and learning rate schedule as in \citet{ott2018scaling}. All models are trained on 8 \textsc{Nvidia} V100 GPUs with 32K tokens per mini-batch. We train WMT14 De--En for 200k updates and all other models for 100k updates. We employ joint BPE vocabularies.
The sizes are 32k for En--De and En--Ro; 30k for Fr--En; and 3k for Si--En. In addition, we use a word dropout rate of 0.4 during training of the baseline and latent variable models, which is complementary to our approach.

\paragraph{Baselines}
We compare our model to three baselines: 1) \textit{Transformer, non-latent}: a standard Transformer model without latent variables (denoted as non-latent), 2) \textit{VNMT}: a CVAE model with a Gaussian distribution, as proposed in Variational NMT by \citet{zhang2016variational}, which we reimplemented using the Transformer, and 3) \textit{DCVAE}: a CVAE model with the same discrete latent variable parameterization as ours but without the proposed enhancements for promoting mutual information, i.e., the only differences from our model are the modified ELBO and the bag-of-words regularizer.

\section{Main Results}
\subsection{Preventing Posterior Collapse}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{kl-ablation-all.pdf}
\end{center}
\caption{Row (A): comparison of KL and mutual information between the baseline (DCVAE, solid triangle, orange) and our model (solid circle, teal). Rows (B) and (C): ablation study on the relative contributions of MICVAE and BoW. All metrics are computed on the WMT16 Ro--En validation set during training.}
\label{fig:kl_mi}
\end{figure*}
In this set of experiments, we compare our model to a standard DCVAE without the proposed enhancement of mutual information. We report four metrics of posterior collapse on the validation set of WMT16 Ro--En:
\begin{enumerate}
\item Kullback--Leibler divergence (KL).
\item Mutual information between the latent variable and the data: $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z},\bm{y})$.
\item Negative log-likelihood (NLL) per token.
\end{enumerate}
\Cref{tab:collapse_metrics} shows that with the standard DCVAE ELBO, even with the common practice of KL annealing (KLA), both the KL loss and the mutual information settle to almost 0, which is consistent with the analysis in \Cref{eqn:kl}. We also plot the progression of \(\KL\), \(\MI_{q_{\phi}}(\bm{z}; \bm{x})\), and \(\MI_{q_{\phi}}(\bm{z}; \bm{y})\) during training in \Cref{fig:kl_mi}. The posterior collapse of the baseline model is apparent: both the \(\KL\) and mutual information terms drop to 0 at the beginning of training as a result of the ELBO's design. On the other hand, our model, without using any annealing schedule, effectively increases mutual information and prevents the KL loss from settling to a degenerate solution early on.
\begin{table}
\caption{Results on improving posterior collapse (WMT16 Ro--En validation set).
The KL value refers to $\KL(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) \parallel p(\bm{z} \mid \bm{x}))$ for DCVAE and $\KL(q_{\phi}(\bm{z}\mid \bm{y}) \parallel p(\bm{z} \mid \bm{x}))$ for our model.}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r r r}
\toprule
Model & \(\KL\) & $\MI_{q_{\phi}}(\bm{z},\bm{x})$ & $\MI_{q_{\phi}}(\bm{z},\bm{y})$ & NLL \\
\midrule
DCVAE + KLA & 0.001 & 0.001 & 4.2\textsc{e}-6 & 3.17 \\
Our model & 0.17 & 0.18 & 0.31 & 3.16 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:collapse_metrics}
\end{table}

\subsection{Translation Quality}
We report corpus-level BLEU \cite{papineni2002bleu}\footnote{Particularly, we use detokenized SacreBLEU \cite{post-2018-call}.} on the test sets, where the translations are generated by sampling each $z_k$ with soft assignment (vs.\ argmax).

\paragraph{Supervised Learning on Parallel Data}
First, we evaluate our model's performance when trained with parallel data on standard WMT datasets. \Cref{parallel_results} shows that our model consistently outperforms both VNMT and DCVAE---which require ad-hoc KL annealing (KLA)---while remaining on par with a strong Transformer baseline.
\begin{table}
\caption{BLEU scores on WMT benchmarks.}
\smallskip
\centering
\adjustbox{max width=\linewidth}{
\begin{tabular}{l r r r r}
\toprule
& \multicolumn{2}{c}{WMT16} & \multicolumn{2}{c}{WMT14} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
Model & Ro--En & En--Ro & De--En & En--De \\
\midrule
VNMT & 34.20 & 34.27 & 30.35 & 25.84 \\
DCVAE & 34.16 & 34.51 & 29.76 & 25.46 \\
Our model & 34.76 & 34.97 & 31.39 & 26.42 \\
\midrule
Non-latent & 34.73 & 34.54 & 30.89 & 26.36 \\
\bottomrule
\end{tabular}}
\label{parallel_results}
\end{table}

\paragraph{Semi-supervised Learning with Source-side Monolingual Data}
Leveraging monolingual data is a common practice for improving low-resource NMT. Current approaches have mostly focused on using target-side monolingual data through ``back-translation'' as data augmentation, while how to effectively leverage source-side monolingual data to facilitate self-training is still an open challenge \cite{sennrich2015improving,zhang2016exploiting}. We use the joint training objective described in \Cref{eq:joint_loss}. For a fair comparison, we also extend VNMT and DCVAE with the same joint training algorithm, i.e., the newly added monolingual data is used to train their corresponding sequence encoder and inference network with the standard VAE ELBO. That is, the only difference is that our model is trained to promote the mutual information $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$. As shown in \Cref{table:mono}, by doing so the proposed model brings larger gains during self-training with source-side monolingual data.
\begin{table}
\caption{Translation performance (BLEU) when utilizing source-side monolingual data.}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r}
\toprule
Model & Fr--En & En--Fr \\
\midrule
DCVAE & 26.37 & 26.11 \\
+ source mono & 27.30 & 26.40 \\
Our model & 28.58 & 26.31 \\
+ source mono & 29.81 & 26.69 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{table:mono}
\end{table}

\paragraph{Robustness to Noisy Data}
While high-quality parallel data is scarce for low-resource language pairs, weakly aligned sentence pairs can be mined from massive unpaired data such as Paracrawl\footnote{\url{https://paracrawl.eu/}}. We evaluate our model's performance when augmenting the training set with increasingly noisy parallel data filtered by Zipporah \cite{xu2017zipporah}. \Cref{fig:si_en} shows the results in the Sinhala--English direction. Our model always outperforms the standard Transformer, which struggles as more (and noisier) data is added.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{bar_plot-2.pdf}
\caption{BLEU when increasing the amount of noisy parallel data in training, Si--En.}
\label{fig:si_en}
\end{figure}

\section{Analysis}
\subsection{Ablation Study}
We further investigate how the different ingredients of our proposed approach contribute to preventing posterior collapse and improving translation quality. We conduct experiments with two variants of the proposed model: 1) \textit{modified ELBO only}: adding only the mutual information term to the training objective, without gradients from $\mathcal{L}_{\mathrm{BoW}}$; 2) \textit{BoW only}: equivalent to DCVAE combined with the BoW decoder.

First, we perform the same collapse-metric evaluation as in \Cref{tab:collapse_metrics}. \Cref{fig:kl_mi} (B) suggests that by explicitly adding the mutual information term back to the training objective, both $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$ are effectively raised, while the remaining aggregated KL term is still optimized to zero. Such behavior is consistent with the analysis in \Cref{eqn:kl}. On the other hand, regularizing $\bm{z}$ with the BoW decoder only, as shown in \Cref{fig:kl_mi} (C), is very effective in preventing KL vanishing as well as in increasing mutual information. When the two approaches are combined, as shown in \Cref{fig:kl_mi} (A), the model retains higher mutual information for both $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$.

Next, we look into whether such differences in mutual information lead to differences in translation quality. We compare these two models, BoW only (\Cref{fig:kl_mi} (C)) and both (\Cref{fig:kl_mi} (A)), on the WMT14 De--En and WMT16 Ro--En test sets. \Cref{table:ablation-bleu} reveals that the difference matters more in the low-data regime.
\begin{table}
\caption{Ablation study on translation quality (BLEU).}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r}
\toprule
Model & De--En (3.9M) & Ro--En (608K) \\
\midrule
Both & 31.39 & 34.76 \\
BoW only & 31.14 & 34.22 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{table:ablation-bleu}
\end{table}

\subsection{Analysis of Outputs}
Delving into the model predictions helps us understand how our model outperforms the others.
We provide some 1-best predictions from the Romanian--English data in \Cref{tab:outputs}. Several examples support the fact that our model produces more fluent and accurate translations than the baseline or VNMT. VNMT often struggles by introducing disfluent words, and both VNMT and the baseline can select justifiable but incorrect words. For instance, in our second example, the gender and animacy of the possessor are not specified in Romanian; our model selects a more plausible pronoun for this context. More broadly, we find that the reference translations are quite loose and context-dependent (rather than word-for-word translations), making them difficult for models to reproduce---the models instead give reasonable translations with greater fidelity to source word order and content. (As an extreme example, the English translation of \emph{ed miliband isi cunostea dusmanii} adds information to the beginning: \emph{for all his foolishness ed miliband knew who his enemies were}; no model is able to add this.) Our model often makes superior judgments in terms of lexical choice and fluency.
\begin{table}[t]
\centering
\caption{Translation examples from the baseline Transformer, VNMT, and our model. Disfluent words or absences are marked in \textcolor{red}{red}, and slightly incorrect lexical choice is marked in \textcolor{blue}{blue}. Romanian diacritics have been stripped.}
\smallskip
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l}
\toprule
\textbf{Source}: ma intristeaza foarte tare .\\
\textbf{Reference}: that really saddens me . \\
\textbf{Base}: i am very saddened .\\
\textbf{VNMT}: i am saddened very \textcolor{red}{loudly} . \hfill\emph{(Wrong sense of \emph{tare})}\\
\textbf{Ours}: i am very saddened .\\
\midrule
\textbf{Source}: cred ca executia sa este gresita .\\
\textbf{Reference}: i believe his execution is wrong .\\
\textbf{Base}: i believe that \textcolor{blue}{its} execution is wrong .\\
\textbf{VNMT}: i believe that \textcolor{blue}{its} execution is wrong .\\
\textbf{Ours}: i believe that his execution is wrong .\\
\midrule
\textbf{Source}: da , chinatown\\
\textbf{Reference}: yes , chinatown\\
\textbf{Base}: yes , chinatown\\
\textbf{VNMT}: yes , \textcolor{red}{thin} \textcolor{blue}{.}\\
\textbf{Ours}: yes , chinatown\\
\midrule
\textbf{Source}: nu stiu cine va fi propus pentru aceasta functie .\\
\textbf{Reference}: i do not know who will be proposed for this position .\\
\textbf{Base}: i do not know who will be proposed for this \textcolor{blue}{function} .\\
\textbf{VNMT}: i do not know who will be proposed for this \textcolor{blue}{function} .\\
\textbf{Ours}: i do not know who will be proposed for this position .\\
\midrule
\textbf{Source}: recrutarea , o prioritate tot mai mare pentru companii\\
\textbf{Reference}: recruitment , a growing priority for companies\\
\textbf{Base}: recruitment , \textcolor{blue}{an increasing} priority for companies\\
\textbf{VNMT}: recruitment , \textcolor{red}{[article missing]} increasing priority for companies\\
\textbf{Ours}: recruitment , a growing priority for companies\\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:outputs}
\end{table}

\subsection{Analysis of Latent Variables}
Finally, we probe whether the different latent variables encode different information. We randomly sample 100 sentences from two test sets of distinct domains, MTNT (Reddit comments) and WMT (news), with 50 sentences from each. We plot the t-SNE projections of the corresponding latent variable samples $\bm{z}_k$ inferred from $\Phi_k$, $k=1,2,3,4$.
Figure \ref{fig:z_tsne} indicates that the different latent variables learn to organize the data in different manners, although there was no clear signal that any of them exclusively specializes in encoding a domain label. We leave a thorough analysis of their information specialization to future work.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{z-plots-2.pdf}
\end{center}
\caption{t-SNE visualization of $\bm{z}_k$, $k=1,2,3,4$, samples inferred from 100 sentences from two datasets with distinct domains, MTNT (orchid) and WMT news (green).}
\label{fig:z_tsne}
\end{figure}

\section{Related Work}
Unlike most prior work in (conditional) text generation, we are able to address posterior collapse without requiring an annealing schedule \cite{bowman2015generating}, a weakened decoder \cite{guljarani2016pixelvae}, or a restriction on the variational family \cite{razavi2018preventing}. Unlike \citet{ma-etal-2018-bag}, who also employ bag-of-words as an objective for NMT, our bag-of-words decoder only has access to \(\bm{z}\), not the encoder states. Conversely, unlike \citet{weng-etal-2017-neural}, our generative decoder has access to both the latent variable and the encoder states, and the bag-of-words prediction is handled by a separate set of parameters.

Posterior collapse for text VAEs was first identified in language modeling \cite{bowman2015generating}. VNMT \cite{zhang2016variational} applies CVAE with Gaussian priors to conditional text generation. VRNMT \cite{su2018variational} extends VNMT by modeling the translation process at a finer granularity. All of them need manually designed annealing schedules to increase the KL loss in order to mitigate posterior collapse. Discrete latent variables have been applied to NMT \cite{gu2017non,shen2019mixture,kaiser2017one}, but these works did not use variational inference or address posterior collapse. Tackling posterior collapse has received more attention lately, with general approaches such as aggressively trained inference networks \cite{he2019lagging}, skip connections \cite{dieng2018avoiding}, and more expressive priors \cite{razavi2018preventing,tomczak2017vae}.

\section{Conclusion}
We have presented a conditional generative model with latent variables whose distribution is learned with variational inference, and applied it to the task of machine translation. Our approach does not require an annealing schedule or a hamstrung decoder to avoid posterior collapse. Instead, by providing a new analysis of the conditional VAE objective, improving it in a principled way, and incorporating an auxiliary decoding objective, we ensure that the model measurably relies on the latent variables. In addition to preventing posterior collapse, our approach improves translation quality in terms of BLEU. Empirical evaluation demonstrates that the proposed method improves performance when dealing with uncertainty in the data, including weakly supervised learning from source-side monolingual data as well as noisy parallel data.
\section{Introduction}
Much of the past work on phonetic duration falls into three categories, aimed at gaining phonetic insight, improving the quality of TTS, and improving the accuracy of ASR. In the first category, researchers have examined the extent to which certain phonetic factors influence duration (\textit{e.g.} lexical stress~\cite{Klatt1976LinguisticUO,stress1988}, pre-pausal lengthening~\cite{Klatt1976LinguisticUO,Campbell1991SegmentDI}, position~\cite{Luce1985ContextualEO}, word predictability~\cite{Brunelle2015EffectsOL,SherrZiarko2015WordFE,Bell2009} and speaking rate \cite{Chodrof2015}). Typically, only a single factor is studied at a time, and the amount of speech data is small and is taken from just one speaker or a small number of speakers (30 or fewer). Some interesting linguistic questions have been investigated in this way~\cite{Brunelle2015EffectsOL,SherrZiarko2015WordFE,Bell2009}. In the second category, durations are modeled parametrically or non-parametrically using DNNs or LSTM-RNNs trained on much more data than in the first category, in order to set the durations at runtime in a parametric speech synthesizer~\cite{Henter2016RobustTD,DBLP:journals/corr/RonankiWKH16,Chen2017DiscreteDM,lstmduration}. Typically, hundreds of phonetic features are included and there is no attempt to study the influence of any of these features. The third category is aimed at improving ASR accuracy by attempting to improve the weak duration modeling provided by standard HMMs using so-called Hidden Semi-Markov Models~\cite{1168477,1659950}. No insight is sought into the influences on duration in this category. Although some improvements in accuracy have been claimed, the methods have not been widely adopted. This third category also includes duration modeling applied to speech recognition at the whole-word level~\cite{Ma2005ContextdependentWD,Power1996DurationalMF}, though this approach is effectively limited to small-vocabulary systems (specifically, digits), which are no longer widely used.

The work reported here provides some insight into the phonetic factors controlling duration and aims ultimately to help improve both speech synthesis and recognition. A DNN is used to generate non-parametric output distributions over durations given the phonetic context for each phoneme. We incorporate the duration factors in the model in three ways to investigate their effects on duration prediction individually or in groups (see Section 3.2.1). From the output distributions given by the models with or without the lexical stress and pre-pausal information, we show that the DNN is able to learn the lengthening effect of these two features (Section 3.2.2). More data is used than in any other work we are aware of, both in speaker-specific investigations and in speaker-independent investigations, where data from tens of thousands of speakers is used. The most immediate application of this work is to training speech synthesis and recognition systems, where anomalous phonetic durations can indicate discrepancies between a transcription or script and what was actually spoken.
\vspace{-2mm}
\section{Method}
\subsection{Neural Network-Based Modeling}
\vspace{-2mm}
We use a feedforward DNN to model the duration, as shown in Figure~\ref{fig:system overlook}. The DNN comprises a stack of fully connected layers with the softmax function~\cite{John1990} at the output layer. We use the same number of units in each of the hidden layers. We use rectified linear activation (\textit{ReLU}) for the hidden units and cross-entropy as the loss function. The training procedure is optimized using \textit{ADAM}~\cite{DBLP:journals/corr/KingmaB14}.
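As an illustration, here is a minimal PyTorch-style sketch of such a model: a feedforward stack with ReLU hidden units and a softmax over duration bins, trained with cross-entropy and ADAM. The input dimensionality and layer sizes below are illustrative placeholders, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class DurationDNN(nn.Module):
    """Feedforward duration model: phonetic feature vector in,
    distribution over 45 duration bins out (via softmax)."""
    def __init__(self, in_dim, hidden=256, depth=3, n_bins=45):
        super().__init__()
        layers = []
        for i in range(depth):
            layers += [nn.Linear(in_dim if i == 0 else hidden, hidden),
                       nn.ReLU()]
        layers += [nn.Linear(hidden, n_bins)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Cross-entropy in PyTorch expects raw logits; apply
        # softmax only when the full distribution is needed.
        return self.net(x)

# Illustrative training step (142 is a hypothetical input size:
# one-hot phone identity plus context and duration features).
model = DurationDNN(in_dim=142)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 142)                  # a mini-batch of 64
bins = torch.randint(0, 45, (64,))        # reference duration bins
loss = nn.functional.cross_entropy(model(x), bins)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```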
\vspace{-2mm}
\subsection{Input features}
\vspace{-2mm}
The inputs to the DNN are a concatenation of three types of information: the identity of the current phoneme, the phonetic properties of adjacent phonemes, and duration-related features of the phonemes. The identity of the current phoneme is encoded using a one-hot vector, while the phonetic properties of adjacent phonemes are characterized by a smaller vector (typically 15 dimensions). These phonetic properties include: long/short vowel, voiced/unvoiced consonant, plosive, affricate, nasal, fricative, glide, rhotic, sonorant, labial, alveolar, velar, aspirated and flap. The duration-related features are:
\begin{itemize}[noitemsep,leftmargin=*]
\item \textbf{lexical stress}: It has been widely reported that stressed syllables are usually longer than unstressed syllables~\cite{Klatt1976LinguisticUO,stress1988}. When stress information is available we use one bit to show whether the current phone is in a stressed syllable or not, and we also add the stress feature to the current phone and to adjacent phones, since the position relative to the stressed syllable also affects duration.
\item \textbf{pre-pausal lengthening}: Speech sounds generally lengthen before a pause~\cite{Klatt1976LinguisticUO}. We add one feature to the central phone to indicate the distance between that phone and the next pause. The value is $1/n$ when $n$, the number of phonemes to the following pause, is 1, 2, 3, 4 or 5, and 0 when $n>5$.
\item \textbf{position in the syllable}: One bit shows whether the current phone is a consonant preceding the vowel in its syllable. We add this feature to the side phones as well as to the current phone.
\item \textbf{word predictability (LM scores)}: Studies in Vietnamese~\cite{Brunelle2015EffectsOL}, Mandarin~\cite{SherrZiarko2015WordFE} and English~\cite{Bell2009} have found that function words are spoken more quickly than non-function words and common words more quickly than rare words, suggesting that this behavior may be language-universal. These studies are consistent with the idea that words with a higher information content (\textit{i.e.} those that are less predictable) are spoken more carefully and hence more slowly. We use n-gram language model scores (on a log probability scale) to indicate the predictability of the word as an inverse measure of its information content.
\item \textbf{speaking rate}: As the speaking rate, we use the ratio of the actual duration of the utterance to the duration the utterance would have had given the expected phone durations. The expected phone duration is the average duration of that phone across the whole dataset.
\item \textbf{peak fundamental frequency (F0)}: Peak F0 in the vowel was expected to influence duration through its association with \textit{focal lengthening}~\cite{focal_lengthening} in an utterance. However, we were unable to find any influence of peak F0 on duration and it will not be discussed further.
\end{itemize}
\vspace{-4mm}
\subsection{Outputs}
We obtain the reference durations from forced alignment. We group them into 45 bins, starting at a bin corresponding to 30 ms, which is the shortest possible duration (one frame per state with three-state acoustic models), and increasing by 10 ms (one frame) for the first 39 bins. Beyond that point, there are too few samples at 10 ms spacing and the bins are made progressively wider. For example, the 40th and 41st bins correspond to 20 ms and 30 ms spacing respectively. Durations larger than 670 ms are all put into the 45th bin.
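The binning scheme can be made concrete with a short sketch. The widths of bins 42--44 below are our guesses, chosen only to be consistent with the description (10 ms steps through bin 39, then progressively wider bins, with everything above 670 ms in bin 45); they are not the exact widths used in this work.

```python
def duration_to_bin(ms):
    """Map a phone duration in milliseconds to one of 45 bins
    (1-indexed). Bins 1-39 are 10 ms wide starting at 30 ms;
    bin 40 is 20 ms wide and bin 41 is 30 ms wide, as in the
    text; bins 42-44 (widths 40/60/100 ms) are illustrative."""
    edges = [30 + 10 * i for i in range(40)]   # 30, 40, ..., 420
    edges += [440, 470, 510, 570, 670]         # widening bins
    for b, upper in enumerate(edges[1:], start=1):
        if ms < upper:
            return b
    return 45

assert duration_to_bin(30) == 1     # shortest possible duration
assert duration_to_bin(45) == 2
assert duration_to_bin(425) == 40   # first of the wider bins
assert duration_to_bin(700) == 45   # longest bin is open-ended
```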
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{overlook5.png}
\caption{An overview of the duration modeling.}
\label{fig:system overlook}
\end{figure}
\vspace{-4mm}
\subsection{Outliers}
\vspace{-2mm}
We detect outliers using the output from the DNN. We use the value of the bin to which the reference duration belongs as the probability of that duration, and by ranking these probabilities we can obtain a list of the phonemes with the lowest probabilities. These phonemes with unlikely durations, which we regard as outliers, can indicate misalignments or departures from the transcription. Examples are shown in Section 4.1.

\section{Experiments and Results}
\begin{table*}
\caption{\it The duration prediction results for Baseline\_1 with different feature configurations trained on SPK1. The standard errors of the precisions (\%) are in the range from 0.0002 to 0.006.}
\vspace{-2mm}
\centerline{
\begin{tabular}{|c|ccc|ccc|ccc|}
\hline
Models & \multicolumn{3}{c}{(i) Just the named feature} & \multicolumn{3}{c}{(ii) Cumulative} & \multicolumn{3}{c}{(iii) Leave one out} \vline\\
\cline{2-4} \cline{5-7} \cline{8-10}
& precision & precision\_3 & CE\_loss & precision & precision\_3 & CE\_loss & precision & precision\_3 & CE\_loss \\
pre/post-vocalic & 29.23 & 66.08 & 0.0296 & 29.23 & 66.08 & 0.0296 & 32.76 & 72.30 & 0.0273 \\
stress & 30.63 & 68.47 & 0.0287 & 30.76 & 68.62 & 0.0286 & 31.42 & 69.87 & 0.0282 \\
pre-pausal & 30.16 & 67.93 & 0.0291 & 32.28 & 71.26 & 0.0277 & 31.81 & 70.49 & 0.0279 \\
predictability & 29.19 & 66.08 & 0.0298 & 32.52 & 71.70 & 0.0276 & 32.74 & 72.28 & 0.0272 \\
speaking rate & 29.54 & 66.59 & 0.0294 & 33.08 & 72.73 & 0.0272 & 32.52 & 71.70 & 0.0276 \\
\hline
\end{tabular}}
\label{tabRes_features2}
\end{table*}
We measure the basic effectiveness of our modeling in three ways: (i) cross-entropy loss on a test set; (ii) a measure we call ``\textit{precision}'', which is the proportion of measured durations whose bin exactly matches the mode of the model's predicted duration distribution---for most predictions (\textit{i.e.} those below 410 ms), this is within 10 ms (one frame), and thus the highest precision possible; and (iii) a precision with more tolerance, which counts not only a match to the bin corresponding to the peak of the distribution but also to the neighboring bin on each side. We denote these three measurements as \textit{CE\_loss}, \textit{precision} and \textit{precision\_3} respectively. All neural networks are built using \textit{PyTorch}~\cite{paszke2017automatic}.
\vspace{-2mm}
\subsection{Data}
We have in-house datasets from two native speakers of American English recorded for TTS purposes. One of the speakers, SPK1, is female and the other, SPK2, male. There are 64,795 utterances (33 hours) in the SPK1 dataset and 27,550 utterances (13 hours) in the SPK2 dataset. We also have an in-house dataset, SPK-ASR, of less carefully controlled recordings intended for ASR, which contains 540,389 utterances from 535,556 speakers of all ages, including children. We used forced alignment to get the duration of each phone. We used SPK1 and SPK2 for speaker-dependent modeling and SPK-ASR for speaker-independent modeling. The phonetic symbol sets used in these three datasets are different.
We used one-hot vectors of dimension 46, 42 and 50 to encode the central phone identity for the three datasets respectively.

\subsection{Speaker-dependent modeling}
\subsubsection{Duration-related feature configurations}
We used a DNN with 2 hidden layers and 256 hidden units in each layer as a baseline for exploring the feature configurations. We used a minibatch size of 64 and trained the model for 30 epochs. The learning rate was 0.001 for each epoch. We found that the final result depended to some extent on the random initialization of the model. We therefore ran our precision tests 10 times, each with a different random start and a different randomly selected test set. For each test, we randomly sampled the dataset to use 90\% for training and 10\% for testing. We then computed the overall mean and standard error of the precision.

We began with \textit{Baseline\_0} trained on SPK1 (the input to the neural network being just the one-hot vector that encodes the identity of the current phone, with no context) and obtained a precision of 19\%. In \textit{Baseline\_1} the input has a context of $\pm 1$ (1 phone on each side of the current phone) and the precision increased to 28.68\%. Thus, we obtained a 9.28\% absolute increase by including context. Baseline\_1 was then augmented with the duration-related features in three ways: (i) adding each one to Baseline\_1 to see its effect on its own, (ii) cumulatively adding the features to Baseline\_1, and (iii) including all the features except the named one. The results are shown in Table~\ref{tabRes_features2}. The precision increases as the duration-related features are added, among which stress has the biggest positive effect. The speaking rate of the utterance and pre-pausal lengthening also have a strong influence. The location of consonants within a syllable (\textit{i.e.} before or after the vowel) has a somewhat weaker influence, as does the predictability of the word containing the phoneme as estimated by a stochastic language model. We also carried out the cumulative experiments on another TTS dataset, SPK2. Figure~\ref{fig:feature} shows that the effect of these factors is very similar for a different speaker.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{features2.png}
\caption{Models with different feature configurations trained separately on two speakers: SPK1 and SPK2.}
\label{fig:feature}
\end{figure}

\subsubsection{Stress and pre-pausal lengthening effects}
The duration probability distributions in Figure~\ref{fig:lengthing3} give examples showing that the network is able to learn the lengthening effect of the stress and pre-pausal features: the predictions are closer to the measured duration bins (red dashed lines) with these two features on. The /{\ae}/ in ``cancel'' and ``can'', which we denote as ``{\ae}\_cancel'' and ``{\ae}\_can'', have the same context but different stress values (``can'' as a modal verb normally being unstressed). In Figure~\ref{fig:lengthening}, the two green curves are the same, but knowledge of stress increases the predicted duration for /{\ae}/ in ``cancel'' and reduces it in ``can''. Figure~\ref{fig:lengthening2} compares distributions with and without an input providing the distance to the next pause. Predicted duration distributions for utterance-final ``here'' are shown left to right as the three phonemes /h/ /i/ /\textrhookrevepsilon/.
Since the baseline model here has a context of $\pm 1$, the distribution of /\textrhookrevepsilon/ in ``here'' still shows the effect of pre-pausal lengthening even without that feature among the input features. For the /h/ and /i/, knowledge of pause proximity increases the predicted duration.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{stress.png}
\caption{/\textnormal{{\ae}}/ in ``cancel'' and ``can'', comparing model outputs when lexical stress information is or is not included.}
\label{fig:lengthening}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{prepausal.png}
\caption{/h/ /i/ /\textrhookrevepsilon/ in an utterance-final ``here'', comparing distributions with and without an input providing the distance to the next pause.}
\label{fig:lengthening2}
\end{subfigure}
\caption{Predicted duration distributions. The red dotted line shows the measured duration for one example.}
\label{fig:lengthing3}
\end{figure}
We evaluated the SPK2-trained model on the SPK1 test set and the SPK1-trained model on the SPK2 test set. Since SPK1 has much more training data than SPK2, we also evaluated the SPK1-trained model with a reduced training data size. The results shown in Table~\ref{tab:corss_speaker} suggest that the precision decreases considerably when testing on a different speaker. The results in row 3, from a model trained on SPK1 using a reduced set matching the amount available for SPK2, match much more closely the results from training on SPK2 (row 1). This suggests that the difference between the results for the two speakers is largely attributable to the discrepancy in the amount of training material, and it indicates that more than 10 hours of training speech is needed for optimal model training. This result also largely explains the offset between the two curves in Figure~\ref{fig:feature}.
\begin{table}[ht]
\centering
\caption{Cross-speaker precision tests (\%). The models used all the duration-related features.}
\begin{tabular}{|ccc|}
\hline
training & SPK2\_test (1h) & SPK1\_test (3h) \\
\hline
SPK2 (10h) & 31.00 & 22.70 \\
SPK1 (30h) & 23.34 & 32.56 \\
SPK1 (10h) & 22.84 & 31.45 \\
\hline
\end{tabular}
\vspace{2mm}
\label{tab:corss_speaker}
\end{table}

\subsubsection{Model configurations}
We trained the DNN with a range of hidden layer counts ($d \in \{1,2,3\}$), hidden units per layer ($w \in \{128,256,512\}$) and wider contexts ($\pm 1$, $\pm 2$ and $\pm 3$).
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{configuration.png}
\caption{Model configurations.}
\label{fig:system conf}
\end{figure}
As shown in Figure~\ref{fig:system conf}, the precision improves as the number of parameters in the model increases and the context lengthens. Taking computational efficiency into account, the best configuration for now is three hidden layers with 256 hidden units in each layer and a $\pm 3$ context, which achieves a precision of 35.67\% and a precision\_3 of 89.88\%.

\subsection{Speaker-independent modeling}
We applied our duration modeling method, with the configuration from Section 3.2.3, to speaker-independent modeling with 80\% of the SPK-ASR dataset as the training data. We obtained a precision of 10.50\% and a precision\_3 of 40.30\% on a test set (10\%). This task is more challenging because in the SPK-ASR corpus almost every utterance is from a different speaker and is spoken in a spontaneous way.
Moreover, stress and LM scores have not yet been incorporated.

\section{Applications}
\subsection{Outlier detection in TTS and ASR}
We use the best configuration from Section 3.2.3 to detect outliers in the SPK1 TTS dataset and the ASR dataset. Figures~\ref{fig:tts1}, \ref{fig:tts2} and~\ref{fig:tts3} show three examples from the top outliers in the SPK1 dataset, corresponding to three kinds of problems that have been seen to occur in the TTS training corpus. 48 out of the top 50 outliers are correctly detected as having bad alignments.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{tts1-6.png}
\caption{deviation from the script: the speaker says ``businesses'', but the transcription has ``business''; the /\dh/ in ``that'' is consequently misaligned to the end of ``businesses''.}
\label{fig:tts1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{tts2.png}
\caption{deviation from the script: the speaker says ``management slash promotional'', having evidently read ``management/promotional'', but the transcription has ``management promotional''; thus the /t/ is misaligned to an unlikely long duration.}
\label{fig:tts2}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{tts3.png}
\caption{mismatch in the way the word is pronounced relative to the dictionary: the speaker says ``Oriente'' as /\textopeno\textturnr i\textquotesingle\textepsilon nte/, but the pronunciation in the dictionary for ``Oriente'' is /a\textturnr i\textquotesingle\textepsilon nt/ without a final vowel, causing the /t/ to be misaligned to a longer segment.}
\label{fig:tts3}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{asr1.png}
\caption{mistranscription: the speaker actually says ``wei... weird'' but the transcription is simply ``weird''; the /i/ is consequently aligned to a much longer portion of speech.}
\label{fig:asr1}
\end{subfigure}
\caption{Outlier examples; the upper annotation line is what the speaker says and the lower is from the forced alignment.}
\end{figure}
We also observed misalignments in the ASR dataset, as well as disfluencies as in Figure~\ref{fig:asr1} (such disfluencies being rare in the speech of the professional speakers producing the TTS dataset). When examining outliers in ASR (Table~\ref{tab:outliers}), 12 of the top 50 outliers (\textit{i.e.} 24\%) were found to be from children. By contrast, just 11 out of 100 randomly selected utterances were judged to be from children, suggesting that children make a disproportionate contribution to the set of outliers. Among the top 50 outliers, 8\% are due to disfluencies, resulting in bad transcriptions and hence bad alignments. By contrast, among the randomly selected utterances, fewer than 2\% were found to have bad alignments. The rest of the outliers evidently get their low scores because the speaker was dictating and hence speaking slowly, had put extreme stress on a word and hence lengthened it, or was speaking in a playful style.
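A minimal sketch of this outlier-ranking procedure, assuming a trained duration model as above; the data layout and function names are illustrative:

```python
import torch

def rank_outliers(model, features, ref_bins, ids, top_n=50):
    """Score each phone by the predicted probability of its
    reference (forced-alignment) duration bin; the lowest
    probabilities are the most likely outliers."""
    with torch.no_grad():
        probs = torch.softmax(model(features), dim=-1)    # (N, 45)
    p_ref = probs[torch.arange(len(ref_bins)), ref_bins]  # (N,)
    order = torch.argsort(p_ref)                          # ascending
    return [(ids[i], float(p_ref[i])) for i in order[:top_n]]
```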
\begin{table}[ht]
\centering
\caption{Proportion of children's speech and bad alignments in the top 50 outliers and in randomly selected utterances for the ASR data.}
\begin{tabular}{|ccc|}
\hline
& Top 50 outliers & Random utts \\
\hline
Children's speech & 24\% & 11\% \\
Bad alignments & 8\% & <2\% \\
\hline
\end{tabular}
\vspace{2mm}
\label{tab:outliers}
\end{table}

\section{Conclusions}
A DNN can provide a useful prediction of the distribution of durations of a phoneme in a specified context. It offers a technique for gaining a basic understanding, from large speech corpora (rather than the more usual small set of examples), of how various factors combine to determine phonetic durations in a given language. The prediction is best, at least in American English, when the phonetic properties of at least three phonemes on each side of the phoneme under consideration are provided to the DNN, together with other relevant information, such as lexical stress in the syllable and the estimated average speaking rate.

Distributions produced in this way can be used to spot improbable durations, which often arise from a mismatch between the speech and either the words in the phonetic transcription or the dictionary pronunciations of those words. Low-probability durations may also occur because the speech is particularly expressive. In training material for TTS these anomalies can be used to correct transcriptions and dictionary entries as well as to exclude unsuitable speech from the TTS training set. In ASR training material, low duration scores may result from disfluencies (rare in TTS training speech), but the most common cause, from our limited sampling of the outliers, appears to be unusual timing arising from expressive speech or dictation mode. This second cause does not invalidate the speech for ASR training purposes, though the first clearly does. Children's speech is overrepresented in the set of low duration scores because phonetic durations in their speech appear to be much more variable than those in adults' speech.

So far, this work has been confined to American English. We might speculate that duration information will be particularly useful for ASR in languages such as Japanese, Finnish, Estonian and Arabic~\cite{Zangar2018} that have phonemic length.

\section{Acknowledgments}
Earlier work on seeking factors that influence durations was carried out by Dominic Hunt and especially by Paul Coles. John Bridle, Barry Theobald and Rin Metcalf made many useful comments, and Felipe Espic pointed us to the \textit{ADAM} optimizer.

\bibliographystyle{IEEEbib}
\section{Introduction}
In the algorithmic field of computer science, we usually have an optimization problem at hand and a state-of-the-art or straightforward exhaustive-search algorithm that solves it. The challenge is then to suggest a new algorithm with a better running time, storage requirement, or other feature. A different and less traditional approach is to use data reduction, which is a compression of the input data in some sense, and to run the (possibly inefficient) existing algorithm on the compressed data. In this case, solving the problem at hand reduces to computing a problem-dependent compression such that:
\begin{enumerate}
\item an existing algorithm that solves the optimization problem on the original (complete) data will yield a good approximate solution \emph{for the original data} when applied to the compressed data;
\item the time and space needed for constructing the compression and running the optimization algorithm on it will be better than simply solving the problem on the complete data.
\end{enumerate}
There are many approaches for obtaining such a provable data reduction for different problems and from different fields, such as uniform sampling, random projections (i.e., the Johnson-Lindenstrauss Lemma), compressed sensing, sketches, or PCA. In this paper we focus on a specific type of reduced data set, called a coreset (or core-set), that originated in computational geometry but is now applied in other fields such as computer vision and machine learning.

Our paper is organized into two main parts: results for maintaining coresets over data streams (Section~\ref{section:streaming}) and results for offline coresets (Sections~\ref{section:offline}-\ref{app}). We briefly introduce both of these topics in the remainder of this section. Many of our results, along with a comparison to prior works, are summarized in Table~\ref{table1} in Section~\ref{sectionRelated}. In Appendix A (Section 7) we summarize the merge-and-reduce technique that is used in previous approaches~\cite{chen2009coresets, sariela, sarielb, Ackermann2012, FL11}. In Appendix B (Section 8) we provide an alternative framework that generalizes our main result (Theorem~\ref{firstthm}), applying to a wide array of constructions although giving a weaker bound.

\subsection{Streaming Results}
In the streaming model of computation, the input arrives sequentially. This differs from the standard model, where the algorithm is given free access to the entire input. Given memory that is linear in the size of the input, these models are evidently equivalent; therefore the goal of a streaming algorithm is to perform the computation using a sublinear amount of memory. Our stream consists of $n$ elements $p_1, \ldots, p_n$. In the streaming model (or more specifically the insertion-only streaming model, since points that arrive will never be deleted), we attempt to compute our solution using $o(n)$ memory. Sometimes the algorithm will be allowed to pass over the stream multiple times, resulting in another parameter called the number of passes. All of our algorithms use $\text{polylog}(n)$ memory and require only a single pass.

Prior to the current work, the merge-and-reduce technique due to Har-Peled and Mazumdar~\cite{sariela} and Bentley and Saxe~\cite{sax} was used to maintain a coreset on an insertion-only stream. For a summary of this technique, see Section~\ref{introMergeReduce} in the Appendix.
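For intuition, the following is a rough Python sketch of how the classical merge-and-reduce technique maintains a coreset on an insertion-only stream, assuming a black-box offline construction \texttt{reduce(points, eps)}; it illustrates the standard scheme from the cited works, not the new algorithm of this paper, and the block size is arbitrary.

```python
def merge_and_reduce(stream, reduce, eps, block=100):
    """Maintain a coreset over an insertion-only stream.
    levels[i] holds at most one coreset summarizing 2^i blocks;
    two coresets at the same level are merged (unioned) and
    reduced, like carrying in binary addition."""
    levels, buffer = [], []
    for p in stream:
        buffer.append(p)
        if len(buffer) == block:
            carry, i = reduce(buffer, eps), 0
            buffer = []
            while i < len(levels) and levels[i] is not None:
                # Merge the two same-level coresets, then reduce.
                carry = reduce(levels[i] + carry, eps)
                levels[i] = None
                i += 1
            if i == len(levels):
                levels.append(None)
            levels[i] = carry
    # Final coreset: union of the buffer and all level coresets.
    return buffer + [q for c in levels if c is not None for q in c]
```

In the analysis, $\eps$ is replaced by $O(\eps/\log n)$ so that the per-level errors compound safely; this replacement is precisely the source of the extra logarithmic factors that the technique introduced below avoids.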
In this paper we introduce an alternative technique that reduces the multiplicative overhead from $\log^{2a+1} n$ to $\log n$ (here, $a$ is the offline construction's dependence on $1/\epsilon$). While our method is not as general as merge-and-reduce (it requires that the function in question satisfy more than just the ``merge'' and ``reduce'' properties defined in Section~\ref{introMergeReduce}), it is general enough to apply to all $M$-estimators. For the special case of our offline coreset construction for $M$-estimators (introduced in Section~\ref{app}), we use a more tailored method that turns this into an additive $\log n$ overhead. Therefore our streaming space complexity matches our offline space complexity, both of which improve upon the state of the art.

The offline coreset construction of~\cite{FL11} has the following structure: first, a bicriterion approximation is computed. Second, points are sampled according to a distribution that depends only on the distances between points of the input and their assigned bicriterion centers. This suggests a two-pass streaming algorithm (which we later combine into a single pass): in the first pass, construct a bicriterion approximation using an algorithm such as~\cite{BMO11}. In the second pass, sample according to the bicriterion found in the first pass. This provides a two-pass algorithm for a coreset using $O(\epsilon^{-2} k \log k \log n)$ space. Our contribution is showing how these two passes can be combined into a single pass. Using the algorithm of~\cite{BMO11} to output $O(k \log n)$ centers at any time, we show that this is sufficient to carry out the sampling (originally in the second pass) in parallel without re-reading the stream. Our main lemma (Lemma~\ref{lemma:sensbound}) shows that the bicriterion, rather than just providing ``central'' points to concentrate the sampling, can actually be thought of as a proof of the importance of points for the coreset (technically, a bound on the ``sensitivity'' that we define at the beginning of Section~\ref{section:streaming}). Moreover, the importance of points is non-increasing as the stream progresses, so we can maintain a sample in the streaming setting without using any additional space.

\subsection{Offline Results}
The name coreset was suggested by Agarwal, Har-Peled, and Varadarajan in~\cite{AgaHarVar04} as a small subset $S$ of points of a given input set $P$, such that any shape from a given family that covers $S$ will also cover $P$ after expanding the shape by a factor of $(1+\eps)$. In particular, the smallest shape that covers $S$ will be a good approximation for the smallest shape that covers $P$. For approximating different cost functions, e.g.\ the sum of distances to a given shape, we expect that the total weight of the sample will be similar to the number $n$ of input points. Hence, in their seminal work~\cite{sariela}, Har-Peled and Mazumdar used multiplicative weights for each point in $S$, such that the weighted sum of distances from $S$ to a given shape from the family approximates its sum of distances from the original data. In~\cite{sariela} each shape in the family was actually a set of $k$ points, and the application was the classic $k$-means problem.

In this paper, we are given an input set $P$ (called \emph{points}), a family (set) $Q$ of items called \emph{queries}, and a function $f:P\times Q\to [0,\infty)$ that is called a \emph{cost} function.
A coreset is then a \emph{subset} $S$ of $P$, associated with a non-negative weight function $u:S\to[0,\infty)$, such that for every given query $q\in Q$, the sum of original costs $\sum_{p\in P} f(p,q)$ is approximated by the weighted sum $\sum_{p\in S}u(p)f(p,q)$ of costs in $S$ up to a multiplicative factor, i.e.,
\[
(1-\eps) \sum_{p\in P} f(p,q) \leq \sum_{p\in S}u(p)f(p,q) \leq (1+\eps)\sum_{p\in P} f(p,q).
\]
While our framework is general, we demonstrate it on the $k$-means problem and its variants. There are at least three reasons for this: (i) it is a fundamental problem in both computer science and machine learning, (ii) it is probably the most common clustering technique used in practice, and (iii) many other clustering and non-clustering problems can be reduced to $k$-means, e.g.\ mixtures of Gaussians, Bregman clustering, or DP-means~\cite{feldman2011scalable,LucicBK15,bachem2015coresets}.

In this context we suggest offline coreset constructions for $k$-clustering queries that can be constructed in a streaming fashion using our streaming approach, and that are:
\begin{itemize}
\item Of size linear in $k$ (for $d>\log k$) for an arbitrary metric space of dimension $d$. Current coresets that are subsets of the input have size at least cubic in $k$~\cite{LS10}. This is achieved by reducing the total sensitivity to $O(1)$ without introducing negative weights that might be conditioned on the queries, as in~\cite{FL11}.
\item Of size independent of $d$ for the Euclidean case of $k$-means (squared distances). This is particularly useful for sparse input sets of points where $d\geq n$, such as adjacency matrices of graphs, or document-term or image-object matrices. The recent coreset for sparse $k$-means of~\cite{barger2015k} has size exponential in $1/\eps$ and thus turns to $O(n)$ when used with the merge-and-reduce tree (where $\eps$ is replaced by $O(\eps/\log n)$). The result of~\cite{feldman2013turning} for $k$-means is exponential in $k/\eps$ and fails with constant probability, so it also cannot be used with streaming. Another result of~\cite{feldman2013turning} suggests a coreset-type set for $k$-means of size $O(k/\eps)$, but it is based on projections that lose the sparsity of the data. Similar sparsity loss occurs with other projection-type compression methods, e.g.\ in~\cite{cohen2015dimensionality}. Nevertheless, we use the technique in~\cite{feldman2013turning} to bound the dimension of the $k$-means problem by $O(k/\eps)$.
\item Of size independent of $d$ for the Euclidean case with non-squared distances, using weak coresets. These coresets can be used to approximate the optimal solution, but not every set of $k$ centers. Unlike the weak coresets in~\cite{FL11,feldman2007ptas}, we can use any existing heuristic on these coresets, as explained in Section~\ref{rho}.
\item Robust to outliers. This is because the general pseudo-metric definition we use (inspired by~\cite{feldman2012data}) supports $M$-estimators, which are a tool for handling outliers~\cite{tyler1987distribution}. Unlike in~\cite{feldman2012data}, our coresets are linear (and not exponential) in $k$, and also independent of $n$ (and not logarithmic in $n$).
\end{itemize}

\section{Related Work}
\label{sectionRelated}
The following table summarizes previous work along with our current results. By far, the most widely studied problems in this class have been the $k$-median and $k$-means functions. In general, the extension to arbitrary $M$-estimators is non-trivial; the first such result was~\cite{feldman2012data}.
Our approach naturally lends itself to this extension. $M$-estimators are highly important for noisy data or data with outliers. As one example, Huber's estimator is widely used in the statistics community~\cite{Hampel, Huber}. It has been written that ``this estimator is so satisfactory that it has been recommended for almost all situations''~\cite{Zhang}. Our results work not only for Huber's estimator but for all $M$-estimators, such as the Cauchy and Tukey biweight functions, which are also widely used. Note that in the table below, $\tilde{O}$ notation is used to write bounds in terms of $d$, $\epsilon$, $k$, and $\log n$ (therefore hiding factors of $\log \log n$, but not $\log n$).

\begin{table}[h]
\begin{tabular}{|c|c|c|c|}
\hline
Problem & Offline Size & Streaming Size & Paper \\
\hline
Euclidean $k$-means & $O(k \epsilon^{-d} \log n)$ & $O(k \epsilon^{-d} \log^{d+2} n)$ & \cite{sariela} \\
Euclidean $k$-means & $O(k^3 \epsilon^{-(d+1)})$ & $O(k^3 \epsilon^{-(d+1)} \log^{d+2} n)$ & \cite{sarielb} \\
Euclidean $k$-means& $O(d k^2 \epsilon^{-2} \log n)$ & $O(d k^2 \epsilon^{-2} \log^8 n)$ & \cite{chen2009coresets} \\
Euclidean $k$-means & $O(d k \log k \epsilon^{-4})$ & $O(d k \log k \epsilon^{-4} \log^5 n)$ & \cite{FL11} \\
Euclidean $k$-means & $\tilde{O}((d/\epsilon)^{O(d)} k \log n)$ & $\tilde{O}((d/\epsilon)^{O(d)} k \log^{O(d)} n)$ & \cite{Ackermann2012}\\
Euclidean $k$-means & $O(\epsilon^{-2} k \log k \min(k/ \epsilon, d) )$ & $O(\epsilon^{-2} k \log k \min(k/\epsilon, d) + k \log n)$ & ** \\
Metric $k$-means & $O(\epsilon^{-2} k^2 \log^2 n)$ & $O(\epsilon^{-2} k^2 \log^8 n)$ & \cite{Chen} \\
Metric $k$-means & $O(\epsilon^{-4} k \log k \log n)$ & $O(\epsilon^{-4} k \log k \log^6 n)$ & \cite{FL11} \\
Metric $k$-means & $O(\epsilon^{-2} k \log k \log n)$ & $O(\epsilon^{-2} k \log k \log n)$& **\\
Euclidean $k$-median& $\tilde{O}(d k^2 \epsilon^{-2} \log n)$ & $O(d k^2 \epsilon^{-2} \log^8 n)$ & \cite{chen2009coresets} \\
Euclidean $k$-median & $O(k \epsilon^{-d} \log n)$ & $O(k \epsilon^{-d} \log^{d+2} n)$ & \cite{sariela} \\
Euclidean $k$-median & $O(k^2 \epsilon^{-d})$ & $O(k^2 \epsilon^{-d} \log^{d+1} n)$ & \cite{sarielb} \\
Euclidean $k$-median & $O(d \epsilon^{-2} k \log k )$ & $O(d \epsilon^{-2} k \log k \log^3 n)$ & \cite{FL11} \\
Euclidean $k$-median & $O(d \epsilon^{-2} k \log k)$ & $O(d \epsilon^{-2} k \log k + k \log n)$ & **\\
Metric $k$-median& $O(k^2 \epsilon^{-2} \log^2 n)$ & $O(k^2 \epsilon^{-2} \log^8 n)$ & \cite{chen2009coresets} \\
Metric $k$-median & $O(\epsilon^{-2} k \log k \log n)$ & $O(\epsilon^{-2} k \log k \log^4 n)$ & \cite{FL11} \\
Metric $k$-median & $O(\epsilon^{-2} k \log k \log n)$ & $O(\epsilon^{-2} k \log k \log n)$ & ** \\
Euclidean $M$-estimator & $O(\epsilon^{-2} k^{O(k)} d^2 \log^2 n)$ & $O(\epsilon^{-2} k^{O(k)} d^2 \log^5 n)$& \cite{feldman2012data} \\
Euclidean $M$-estimator & $O(d \epsilon^{-2} k \log k)$ & $O(d \epsilon^{-2} k \log k + k \log n)$ & ** \\
Metric $M$-estimator & $O(\epsilon^{-2} k^{O(k)} \log^4 n)$ & $O(\epsilon^{-2} k^{O(k)} \log^7 n)$ & \cite{feldman2012data} \\
Metric $M$-estimator & $O(\epsilon^{-2} k \log k \log n)$ & $O(\epsilon^{-2} k \log k \log n)$ & ** \\
\hline
\end{tabular}
\caption{Summary of related work; ** denotes the results of this paper.}
\label{table1}
\end{table}

\paragraph{Framework. }A generic framework for coreset construction was suggested in~\cite{FL11}. The main technique is a reduction from coresets to $\eps$-approximations, which can be computed using non-uniform sampling.
The distribution of the sampling is based on the importance of each point (in some well-defined sense), and the size of the coreset depends on the sum of these importance levels. This notion of importance appeared in the literature as leverage scores (in the context of low-rank approximation; see~\cite{papailiopoulos2014provable} and references therein) or, for the case of $k$-clustering, as sensitivity~\cite{LS10}. The proofs of many previous coreset constructions were significantly simplified by using this framework, and, maybe more importantly, the size of these coresets was sometimes significantly reduced; see~\cite{FL11} for references. The sizes of many of these coresets can be further improved by our improved framework, as explained below. The size of the coreset in~\cite{FL11} depends quadratically on the sum of sensitivities, called the \emph{total sensitivity}~\cite{LS10}. In this paper, we reduce the size of the coresets that are constructed by this framework to be only near-linear ($t\log t$) in the total sensitivity $t$. In addition, we generalize and significantly simplify the notation and results of this framework.

\paragraph{$k$-means. } In the $k$-means problem we wish to compute a set of $k$ centers (points) in some metric space, such that the sum of squared distances of the input points is minimized, where each input point is assigned to its nearest center. The corresponding coreset is a positively weighted subset of points that approximates this cost for every given set of $k$ centers. Deterministic coresets of size exponential in $d$ were first suggested by Har-Peled and Mazumdar in~\cite{sariela}. The first coreset construction of size polynomial in $d$ was suggested by Chen in~\cite{chen2009coresets} using several sets of uniform samples. The state-of-the-art is the result of Langberg and Schulman~\cite{LS10}, who suggested a coreset of size $O(d^2k^3/\eps^2)$ for $k$-means in the Euclidean case based on non-uniform sampling. The distribution is similar to the distribution of~\cite{feldman2007ptas} over the input points; however, in~\cite{feldman2007ptas} the goal was to have a weaker version of coresets that can be used to approximate the optimal solution, but whose size is independent of $d$. A kind of coreset for $k$-means of size near-linear in $k$ was suggested in~\cite{FL11}. However, unlike the definition in this paper, the multiplicative weights of some of the points in that coreset were (i) negative, and (ii) dependent on the query, i.e., instead of a weight $w(p)>0$ for an input point $p$, as in this paper, the weight is $w(p,C)\in \REAL$ where $C$ is the query. While exhaustive search was suggested to compute a PTAS on such a coreset, it is not clear how to run existing algorithms or heuristics (which is what is actually done in practice) on it. On the contrary, generalizing an existing approximation algorithm for the $k$-means problem to handle positive weights is easy, and public implementations are not hard to find (e.g.\ in Matlab and Python).

\paragraph{$k$-means for handling outliers. }Coresets for $k$-means and its variants that handle outliers via $M$-estimators were suggested in~\cite{feldman2012data}, which also inspired our paper. The size of these coresets is exponential in $k$ and also depends on $\log n$. For comparison, we suggest a similar coreset of size near-linear in $k$ and independent of $n$. A PTAS for handling exactly $m$ outliers was suggested in~\cite{Chen08}, but with no coreset or streaming version.

\paragraph{Streaming. 
}The metric results of~\cite{chen2009coresets, FL11} and Euclidean results of~\cite{chen2009coresets, sariela, sarielb, FL11} that rely on merge-and-reduce have already been mentioned. A summary of these results appears in Table~\ref{table1}. For the specific case of Euclidean space, a more diverse set of stronger results is known. In particular, coreset constructions are known that do not begin with a bicriterion solution, and whose streaming variant does not rely on merge-and-reduce~\cite{Ackermann2012}. With the additional assumption in Euclidean space that the points lie on a discrete grid $\{1,\ldots,\Delta\}^d$, alternative techniques are known for $k$-means and other problems, even when the stream allows the deletion of points~\cite{FraSoh05}.

\section{Streaming Algorithm}
\label{section:streaming}
We present a streaming algorithm for constructing a coreset for metric $k$-means clustering that requires the storage of $O(\epsilon^{-2} k \log k \log n)$ points. The previous state-of-the-art~\cite{FL11} required the storage of $O(\epsilon^{-4} k \log k \log^6 n)$ points. In this section we assume the correctness of our offline algorithm, which is proven in Section~\ref{app}. More generally, our technique works for $M$-estimators, a general class of clustering objectives that includes the well-known $k$-median and $k$-means functions as special cases. Other special cases include the Cauchy functions, the Tukey functions, and the $L_p$ norms.

Our method combines a streaming bicriterion algorithm~\cite{BMO11} and a batch coreset construction~\cite{FL11} to create a streaming coreset algorithm. The space requirements are combined additively, therefore ensuring no overhead. The streaming algorithm of~\cite{BMO11} provides a bicriterion solution using $O(k \log n)$ space. Our new offline construction of Section~\ref{app} requires $O(\epsilon^{-2} k \log k \log n)$ space. Our main theorem therefore combines these spaces additively, yielding a streaming algorithm that requires $O(\epsilon^{-2} k \log k \log n)$ space while maintaining a coreset for $k$-means clustering. The previous state-of-the-art framework that works for the metric variant (other methods are known to improve upon this for the special case of Euclidean space) was the merge-and-reduce technique~\cite{sax}, which yields a streaming algorithm requiring $O(\epsilon^{-4} k \log k \log^6 n)$ space, incurring an overhead of $\Theta(\log^{5} n)$ over the offline coreset size. In comparison, our framework incurs no overhead. The additional improvement in our space is due to the improved offline construction given in Sections~5--7.

We now state our main theorem. The result is stated in full generality: $k$-median clustering in a $\rho$-metric space (see Definition~\ref{rho}). Note that metric $k$-means clustering corresponds to setting $\rho = 2$. Also, the probability of success $1-\delta$ typically has one of two meanings: that the construction succeeds at the end of the stream (a weaker result), or that the construction succeeds at every intermediate point of the stream (a stronger result). Our theorem gives the stronger result, maintaining a valid coreset at every point of the stream.
\begin{theorem}[Main Theorem]
\label{firstthm}
There exists an insertion-only streaming algorithm that maintains a $(k,\epsilon)$-coreset for $k$-median clustering in a $\rho$-metric space, requires the storage of $O(\rho^2 \epsilon^{-2} k \log(\rho k) \log (n) \log(1/\delta))$ points, has $\mathrm{poly}(k, \log n, \rho, 1/\epsilon, \log(1/\delta))$ worst-case update time, and succeeds at every point of the stream with probability $1-\delta$.
\end{theorem}

Our method can be applied to the coreset constructions of~\cite{sariela, sarielb, chen2009coresets, FL11} with a multiplicative overhead of $O(\log n)$. Our second theorem is a more generally applicable technique; it applies to all constructions that first compute a bicriterion solution and then sample points according to the bicriterion solution. The constructions of~\cite{sariela, sarielb, chen2009coresets, FL11} follow this outline, and we are unaware of any constructions that do not. The theorem yields immediate corollaries, as well as a reduction of certain streaming coreset problems to the problem of constructing an offline coreset.

\begin{theorem}
Given an offline algorithm that constructs a $(k,\epsilon)$-coreset consisting of $S = S(n,k,\epsilon,\delta)$ points with probability $1-\delta$ by sampling points based on a bicriterion solution, there exists a streaming algorithm requiring the storage of $O(S \log n)$ points that maintains a $(k,\epsilon)$-coreset on an insertion-only stream.
\end{theorem}

\begin{proof}[Proof Sketch]
The known offline coreset constructions start with a bicriterion solution of $O(k)$ points. We modify the algorithm of~\cite{BMO11} to output $O(k \log n)$ centers; this is trivial since the final step of the algorithm of~\cite{BMO11} is to take the $O(k \log n)$ centers stored in memory and reduce them to exactly $k$ centers to provide a solution. Our first modification to the original algorithm is thus to simply remove this final step, but we must also keep a data structure storing $\log(1/\epsilon)$ intermediate states of these $O(k \log n)$ centers. See Section~\ref{streamingSection} for a precise description of our modification and the sampling method, applied to the construction of~\cite{FL11} as an example (but equally applicable to~\cite{sariela, sarielb, chen2009coresets}). The high-level idea is that, since the bicriterion given to the offline construction consists of $O(k \log n)$ centers instead of exactly $k$, the number of additional points taken in the coreset increases by a factor of $O(\log n)$.
\end{proof}

Two important corollaries include:
\begin{enumerate}
\item Using the result of~\cite{FL11}, we obtain a streaming algorithm that maintains a $(k,\epsilon)$-coreset with negative weights for metric $k$-median, requiring the storage of $O(\epsilon^{-2} k \log n)$ points.
\item Given an $O(k \cdot \mathrm{poly}(\epsilon, \log n, \log(1/\delta)))$-point $(k,\epsilon)$-coreset, we would obtain a streaming algorithm that maintains a $(k,\epsilon)$-coreset (with only positive weights) for metric $k$-median, requiring the storage of $O(k \cdot \mathrm{poly}(\epsilon, \log n, \log(1/\delta)))$ points. This differs from Theorem~\ref{firstthm} in that the dependence on $k$ is linear instead of $O(k \log k)$.
\end{enumerate}

\subsection{Definitions}
We begin by defining a $\rho$-metric space, which is defined in full as Definition~\ref{rho}. Briefly, let $X$ be a set.
If $D : X \times X \rightarrow [0,\infty)$ is a symmetric function such that $D(x,z) \le \rho (D(x,y) + D(y,z))$ for every $x,y,z \in X$, then we call $(X,D)$ a $\rho$-metric space. Note that this is a weakening of the triangle inequality, and at $\rho=1$ we recover the definition of a metric space. All $M$-estimators can be re-cast in this way for a certain constant value of $\rho$, and $k$-means is obtained with $\rho = 2$. This generality is therefore useful, and working in this language allows us to naturally generalize our results to any $M$-estimator. The $k$-median problem is, given an input set $P$ and an integer $k \ge 1$, to find a set $C$ of $k$ points that minimizes
$$\sum_{p \in P} \min_{c \in C} D(p,c).$$
We use $\ensuremath{\text{\footnotesize\textsf{OPT}}}_k(P)$ to denote this minimal value. As this is NP-hard to compute, we settle for an approximation. The notion of a bicriterion approximation is well-known; we state a definition that suits our needs while also fitting into the definition of previous works.

\begin{definition}[$(\alpha,\beta)$-approximation]
An $(\alpha,\beta)$-approximation for the $k$-median clustering of a multiset $P$ is a map $\pi : P \rightarrow B$ for some set $B$ such that $\sum_{p \in P} w(p) D(p,\pi(p)) \le \alpha \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(P)$ and $|B| \le \beta k$.
\end{definition}

We now define a coreset:
\begin{definition}[$(k,\epsilon)$-coreset]
A $(k,\epsilon)$-coreset of a multiset $P$ is a weighted set $(S,v)$ with non-negative weight function $v$ such that for every $Z \in \mathcal{X}^k$ we have $(1-\epsilon)\sum_{p \in P} D(p,Z) \le \sum_{s \in S} v(s) D(s,Z) \le (1+\epsilon)\sum_{p \in P} D(p,Z)$.
\end{definition}

Coresets with arbitrary weight functions (i.e.\ with negative weights allowed) have been considered, e.g.\ in~\cite{FL11}. However, computing approximate solutions on these coresets in polynomial time remains a challenge, so we restrict our definition to non-negative weight functions. This ensures that an approximate solution can be quickly produced. This implies a PTAS for Euclidean space and a polynomial-time $2\gamma(1+\epsilon)$-approximation for general metric spaces (where $\gamma$ is the best polynomial-time approximation factor for the problem in the batch setting). This factor of $2\gamma(1+\epsilon)$ is well-known in the literature; see~\cite{Charikar, BMO11, Guha} for details.

\subsection{Constant-Approximation Algorithm}
Let $P_i$ denote the prefix of the stream $\{p_1, \ldots, p_i\}$. The entire stream is then $P_n$. Consider the moment when the first $i$ points have arrived, meaning that the prefix $P_i$ is the current set of arrived points. The algorithm $\ensuremath{\mathcal{A}}$ of~\cite{BMO11} provides an $(O(1), O(\log n))$-approximation of $P_i$ in the following sense. Define $f_0 : \emptyset \rightarrow \emptyset$ as the null map. Upon receiving point $p_i$, algorithm $\ensuremath{\mathcal{A}}$ defines a map $f_i : B_{i-1} \cup \{p_i\} \rightarrow B_i$, where $B_i = \text{image}(f_i)$. We define $\pi_i : P_i \rightarrow B_i$ by $\pi_i(p_j) = f_i(f_{i-1}(\ldots(f_j(p_j)) \ldots))$ for each $1 \le j \le i$.
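Concretely, a minimal Python sketch of this bookkeeping (the class is ours, for illustration only; each $f_i$ is assumed to be given as a callable):
\begin{verbatim}
class PiTracker:
    # Maintains pi_i(r) = f_i(f_{i-1}(...f_j(r)...)) for every stored
    # point r, by re-routing each stored representative through the
    # newly received map f_i; no access to earlier stream items needed.
    def __init__(self):
        self.pi = {}  # stored point r -> its current center pi_i(r)

    def update(self, f_i, p_i):
        for r in self.pi:
            self.pi[r] = f_i(self.pi[r])
        self.pi[p_i] = f_i(p_i)  # p_i lies in the domain of f_i
\end{verbatim}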
These mappings have an essential guarantee, stated in the following theorem.

\begin{theorem}[\cite{BMO11}]
For every $1 \le i \le n$, after receiving $P_i$, Algorithm $\ensuremath{\mathcal{A}}(k,n,\delta)$ defines a function $f_i$ such that with probability $1-\delta$, using the above definition of $\pi_i$, the bound $\sum_{p \in P_i} D(p,\pi_i(p)) \le \alpha \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(P_i)$ holds. The algorithm deterministically requires the storage of $O(k (\log n + \log(1/\delta)))$ points.
\end{theorem}

\subsection{Offline Coreset Construction}
\label{section:newSampling}
We briefly describe the offline coreset construction. The proof of correctness can be found in Sections~\ref{section:offlineImprovement} and~\ref{app}. It is this construction that we will maintain in the streaming setting. The sensitivity of a point $p \in P$ is defined as
\[
s(p) = \max_{Z \in {\mathcal{X}}^k} \frac{D(p,Z)}{\sum_{q \in P} D(q,Z)}.
\]
Notice that $0 \le s(p) \le 1$. We give an upper bound $s'(p) \in [s(p), 1]$. Define the total sensitivity $t = \sum_{p \in P} s(p)$. Likewise, we give an upper bound $t' \ge t$, where $t' = \sum_{p \in P} s'(p)$, and will show that $t' = O(k)$. The sampling probability of point $p$ is set to $s'(p) / t'$. We take an i.i.d.\ sample from $P$ of size $m$ for any $m \ge ct'\epsilon^{-2}(\log n \log t' + \log(1/\delta))$, where $c$ is a constant. Let $R$ be the multiset of these $m$ i.i.d.\ samples, and define a weight function $v : R \rightarrow [0,\infty)$ where $v(r) = (|R| s'(r))^{-1}$. It is proven as one of our main theorems (Theorem~\ref{mainthm}) that the weighted set $(R,v)$ is a $(k,\epsilon)$-coreset for $P$.

\subsection{Bounding the Sensitivity}
Consider the prefix $P_i$, which is the input after the first $i$ points have arrived. Using Algorithm $\mathcal{A}$ we obtain an $(\alpha,\beta)$-approximation $\pi_i$ where $\alpha = O(1)$ and $\beta = O(\log n)$. Recall that $B_i$ is the image of this approximation, i.e.\ $B_i = \pi_i(P_i)$. Running an offline $(\gamma,\lambda)$-approximation algorithm on $B_i$, we obtain a multiset $C_i$ of at most $\lambda k$ distinct points. Let $p'$ denote the element of $C_i$ nearest to $\pi_i(p)$ (this is the element of $C_i$ that $p$ gets mapped to when we pass from $P_i \rightarrow B_i \rightarrow C_i$). The following lemma implies that $\sum_{p \in P} w(p) D(p,p') \le \bar{\alpha} \ensuremath{\text{\footnotesize\textsf{OPT}}}(P)$ where $\bar{\alpha} = \rho \alpha + 2 \rho^2 \gamma (\alpha + 1)$. This is an observation used widely in the literature~\cite{Charikar, BMO11}, but we include a proof for completeness.

\begin{lemma}
Let $B$ be an $(\alpha,\beta)$-approximation of $P$, and let $C$ be a $(\gamma,\lambda)$-approximation of $B$. Then $C$ is a $(\rho \alpha + 2 \rho^2 \gamma (\alpha + 1),\lambda)$-approximation of $P$.
\begin{proof}
Let $\pi : P \rightarrow B$ be the $(\alpha,\beta)$-approximation of $P$ and let $t : B \rightarrow C$ be the $(\gamma,\lambda)$-approximation of $B$. In the following, all sums will be taken over all $p \in P$. The hypotheses state that $\sum D(p,\pi (p)) \le \alpha \ensuremath{\text{\footnotesize\textsf{OPT}}}(P)$ and $\sum D(\pi(p),t(\pi(p))) \le \gamma \ensuremath{\text{\footnotesize\textsf{OPT}}}(B)$. Let $P^*$ be an optimal clustering of $P$, that is, $\sum D(p,P^*) = \ensuremath{\text{\footnotesize\textsf{OPT}}}(P)$. Then $\frac{1}{2} \ensuremath{\text{\footnotesize\textsf{OPT}}}(B) \le \sum D(\pi(p),P^*) \le \rho \sum (D(\pi(p),p) + D(p,P^*)) \le \rho(\alpha + 1) \ensuremath{\text{\footnotesize\textsf{OPT}}}(P)$.
The factor of $\frac{1}{2}$ comes from the fact that $\ensuremath{\text{\footnotesize\textsf{OPT}}}(B)$ is defined using centers restricted to $B$ (see~\cite{Guha} for details). We now write $\sum D(p, t(\pi(p))) \le \rho \sum(D(p,\pi(p)) + D(\pi(p),t(\pi(p)))) \le (\rho \alpha + 2 \rho^2 \gamma (\alpha + 1)) \ensuremath{\text{\footnotesize\textsf{OPT}}}(P)$, as desired.
\end{proof}
\end{lemma}

We now prove the following lemma, which gives us our sampling probability $s'(p)$. Recall that for the construction to succeed, the sampling probability $s'(p)$ must be at least the sensitivity $s(p)$ (defined in the previous subsection). Since we focus on a single iteration, we drop subscripts and write $C = C_i$ and $P = P_i$. Let $p \mapsto p'$ be an $(\bar{\alpha},\lambda)$-approximation of $P$. Define $P(p) = \{q \in P : q' = p'\}$ to be the cluster containing $p$.

\begin{lemma}\label{lemma:sensbound}
Let the map $p \mapsto p'$ define an $(\bar{\alpha},\lambda)$-approximation for the $k$-median clustering of $P$. For every point $p \in P$:
\[
s(p) \le \frac{\rho \bar{\alpha} D(p,p')}{\sum_{q\in P}D(q,q')} +\frac{\rho^2(\bar{\alpha}+1)}{|P(p)|}
\]
\end{lemma}
\begin{proof}
For an arbitrary $Z \in \mathcal{X}^k$ we need to provide a uniform bound for
\begin{equation}
\begin{split}
\frac{D(p,Z)}{\sum_{q\in P}D(q,Z)} &\leq \frac{\rho D(p,p')}{\sum_{q\in P}D(q,Z)} +\frac{\rho D(p',Z)}{\sum_{q\in P}D(q,Z)}\\
&\leq \frac{\bar{\alpha}\rho D(p,p')}{\sum_{q\in P}D(q,q')} +\frac{\rho D(p',Z)}{\sum_{q\in P}D(q,Z)} \label{aa}
\end{split}
\end{equation}
where the second inequality holds because $\sum_{q \in P} D(q,q') \le \bar{\alpha}\, \ensuremath{\text{\footnotesize\textsf{OPT}}}(P) \le \bar{\alpha} \sum_{q \in P} D(q,Z)$. To bound the last term, recall that $q'=p'$ for all $q \in P(p)$, so:
\[
\begin{split}
D(p',Z) |P(p)| &= \sum_{q \in P(p)} D(p',Z) =\sum_{q\in P(p)}D(q',Z) \\
&\leq \rho \sum_{q\in P(p)} (D(q',q)+D(q,Z))\\
&\le \rho\sum_{q\in P}D(q',q)+\rho\sum_{q\in P(p)} D(q,Z)\\
&\leq \rho\bar{\alpha}\sum_{q\in P} D(q,Z)+\rho\sum_{q\in P(p)} D(q,Z)\\
&\leq \rho(\bar{\alpha}+1)\sum_{q\in P} D(q,Z)
\end{split}
\]
Dividing by $|P(p)|\sum_{q\in P}D(q,Z)$ gives
\[
\frac{D(p',Z)}{\sum_{q\in P}D(q,Z)} \leq \frac{\rho(\bar{\alpha}+1)}{|P(p)|}
\]
Substituting this in~\eqref{aa} yields the desired result.
\end{proof}

We therefore define our upper bound $s'(p)$ as in the lemma. An immediate but extremely important consequence of Lemma~\ref{lemma:sensbound} is that $t' = \sum_{p \in P} s'(p) = \rho \bar{\alpha} + \rho^2(\bar{\alpha}+1)k \le 3 \rho^2 \bar{\alpha} k$. This can be seen by directly summing the formula given in the lemma.
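In more detail, summing the bound over $p\in P$: the numerators of the first term sum to the denominator, and the second term contributes $\rho^2(\bar{\alpha}+1)$ once per cluster, since each cluster $P(p)$ contains exactly $|P(p)|$ points:
\[
\sum_{p\in P}s'(p)
=\frac{\rho\bar{\alpha}\sum_{p\in P}D(p,p')}{\sum_{q\in P}D(q,q')}
+\sum_{p\in P}\frac{\rho^2(\bar{\alpha}+1)}{|P(p)|}
=\rho\bar{\alpha}+\rho^2(\bar{\alpha}+1)\cdot\big|\{p' \mid p\in P\}\big|.
\]
The number of distinct centers $|\{p'\mid p\in P\}|$ is $k$ for a $k$-center approximation, and at most $\lambda k$ in general, so $t'=O(\rho^2\bar{\alpha} k)$ whenever $\lambda=O(1)$.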
\subsection{Streaming Algorithm}
We now state Algorithm~\ref{alg:main}, which we then prove maintains a coreset. To use Lemma~\ref{lemma:sensbound} to determine $s'(p)$, we will compute the cluster sizes $|P(p)|$ and estimate the clustering cost $\sum_{q\in P}D(q,q')$. We must bound the clustering cost from below because we need an upper bound on $s(p)$. On Line~\ref{line:lowerBound}, $L$ is an estimate of the cost of clustering $P$ to the centers $C$. On Line~\ref{line:constant}, $c$ is the absolute constant used in Theorem~\ref{mainthm}.

\begin{algorithm}
\label{alg:main}
\textbf{Initialization:} \\
$R \gets \emptyset$ \\
$t' \gets \rho \bar{\alpha} + \rho^2(\bar{\alpha}+1)k$ \\
$x \gets 2c \epsilon^{-2} (\log n \log t' + \log(1/\delta)) $ \\ \label{line:constant}
Initialize $\ensuremath{\mathcal{A}}(k,n,\delta)$ \\
\textbf{Update Process: after receiving} $p_i$ \\
$(B_i, f_i) \gets \ensuremath{\mathcal{A}}.update(p_i)$ \label{line:BBstep} \\
$C \gets$ a $(\gamma,\lambda)$-approximation of $B_i$ \label{line:bicritStep} \\
$L \gets D(p_i,C) + \bar{\alpha}^{-1}(1+\epsilon)^{-1}\sum_{r\in R} v(r) D(r,C)$ \label{line:lowerBound} \\
\For{$r \in R$}
{
$\pi_i(r) \gets f_i(\pi_{i-1}(r))$ \\
$z(r) \gets \frac{\rho \bar{\alpha} D(r,r')}{L} +\frac{\rho^2(\bar{\alpha}+1)}{|P(r)|}$ \\ \label{line:newsize}
$s'(r) \gets \min(s'(r) , z(r))$ \label{line:sens} \\
}
\For{$r \in R$}
{
\If{$u(r) > x s'(r)$}
{ \label{line:delete}
Delete $r$ from $R$ \\
}
}
$u(p_i) \gets$ uniform random from $[0,1)$ \\
\If{$u(p_i) \le x s'(p_i)$}
{
Add $p_i$ to $R$ \\
$\pi_i(p_i) \gets f_i(p_i)$ \\
}
\textbf{Query Process:} \\
\For{each $r \in R$}
{
$v(r) \gets (|R| s'(r))^{-1}$
}
\caption{Input: stream of $n$ points in a $\rho$-metric space, $\epsilon > 0$, $k \in \mathbb{N}$, maximum failure probability $\delta > 0$}
\end{algorithm}

Algorithm~\ref{alg:main} outputs $(R,v)$ such that point $p$ is sampled with probability $x s'(p)$, where $x$ is defined on Line~\ref{line:constant}. For each $p$ that has arrived, the value of $s'(p)$ is non-increasing (notice that it is defined as the minimum of itself and a new value on Line~\ref{line:sens}), so it is possible to maintain this in the streaming setting: once the deletion condition on Line~\ref{line:delete} becomes satisfied, it remains satisfied forever.

We now proceed with the proof of Theorem~\ref{firstthm}. Since the probability of storing point $p$ is $x s'(p)$, the expected space of Algorithm~\ref{alg:main} is $x t'$. By Lemma~\ref{lemma:sensbound}, which implies $t' \le 3 \rho^2 \bar{\alpha} k$, we then bound the expected space by $2c \epsilon^{-2} (\log n \log t' + \log(1/\delta)) (3 \rho^2 \bar{\alpha} k)$. Simplifying notation by defining an absolute constant $\tilde{c}$ (a function of $c$ and $\bar{\alpha}$), we write this expected space as $\tilde{c} \rho^2 \epsilon^{-2} k \log (\rho k) \log n \log(1/\delta)$. By a Chernoff bound, the high-probability guarantee follows by replacing $\tilde{c}$ with $2\tilde{c}$.

\begin{proof}[Proof of Theorem~\ref{firstthm}]
The proof can be divided into the following pieces:
\begin{enumerate}
\item For correctness (to satisfy the bound given in Lemma~\ref{lemma:sensbound}), we must show that $L \le \sum_{p \in P_i} D(p,p')$. For space, it is important that $L$ is not too small; in particular, the space grows as $1 / L$. We show that $L$ is an $\epsilon$-approximation of the true cost.
\item The value of $|P(p)|$ can be computed exactly for every $p$. This is needed on Line~\ref{line:newsize}.
\item The construction of Algorithm~\ref{alg:main}, which samples $p$ with probability $x s'(p)$, can be processed to be identical to the offline construction of Subsection~\ref{section:newSampling}, which takes an i.i.d.\ sample of size $x t'$ from the distribution where $p$ is given sampling probability $s'(p) / t'$.
\end{enumerate}

\textbf{1}: To lower bound the clustering cost, inductively assume that we have a $(k,\epsilon)$-coreset $S_{i-1}$ of $P_{i-1}$.
Note that $p_i$ is a $(k,\epsilon)$-coreset of itself (in fact a $(k,0)$-coreset of itself), so $S_{i-1} \cup \{p_i\}$ is a $(k,\epsilon)$-coreset of $P_{i}$. Let $L$ be the cost of clustering $S_{i-1} \cup \{p_i\}$ to $C$. Therefore the cost of clustering $P_i$ to $C$ is in the interval $[(1-\epsilon)L, (1+\epsilon)L]$. Recall that the upper bound on $s(p)$ from Lemma~\ref{lemma:sensbound} is:
\[
\frac{\rho \bar{\alpha} D(p,p')}{\sum_{q\in P}D(q,q')} +\frac{\rho^2(\bar{\alpha}+1)}{|P(p)|}
\]
By using $L$ in place of the true cost $\sum_{p \in P_i} D(p,p')$ for defining $s'(p)$, the value of $t' = \sum_{p \in P_i} s'(p)$ increases to at most $\rho \bar{\alpha} \left( \frac{1+\epsilon}{1-\epsilon} \right) + \rho^2(\bar{\alpha} + 1)\lambda k = O(\rho^4 k)$. Here there is no dependence on $\epsilon$, since we assume $\epsilon \le 1/2$ and so $\frac{1+\epsilon}{1-\epsilon}$ is bounded by an absolute constant.

\textbf{2}: Computing $|P(p)|$ is straightforward. Define $w(b) = |\{p \in P : \pi(p) = b\}|$ and let $h : B \rightarrow C$ be the $(\gamma,\lambda)$-approximate clustering. Then $|P(p)| = \sum_{b \in h^{-1}(p')} w(b)$.

\textbf{3}: In Algorithm~\ref{alg:main}, we sample point $p$ with probability $x s'(p)$ to maintain a set $R$ of non-deterministic size. We now argue that this can be converted to the desired coreset, where an i.i.d.\ sample of size $m$ is taken from the distribution $s' / t'$. First, by a Chernoff bound, $|R| \ge E[|R|] /2$ with probability $1 - \exp(-E[|R|]/8)$. Since $E[|R|] = xt' = (2c \epsilon^{-2} \log (\rho k) \log n \log(1/\delta)) \cdot (\rho \bar{\alpha} + \rho^2(\bar{\alpha}+1)k) = \Omega(\log n)$, we have that $|R| \ge E[|R|] /2$ with probability $1 - O(1/n)$. By the union bound, this inequality then holds at each of the $n$ iterations of receiving a point throughout the entire stream. Recall that for the offline coreset construction outlined in Subsection~\ref{section:newSampling} to hold, we need an i.i.d.\ sample of size at least $m = ct'\epsilon^{-2}(\log n \log t' + \log(1/\delta))$. By Lemma~\ref{lemma:sensbound}, $t' = \rho \bar{\alpha} + \rho^2(\bar{\alpha}+1)k$, and so by plugging in values we see that $E[|R|] = x t' \ge 2m$. Having that $|R| \ge m$ with probability $1 - O(1/n)$, it is well-known (see~\cite{Mahoney} for example) that this can be converted to the required i.i.d.\ sample.
\end{proof}

\section{Preliminaries for Offline Coreset Construction}
\label{section:offline}
\subsection{Query Space}
In our framework, as in~\cite{FL11}, we are given a finite input set $P$ of items that are called \emph{points}, a (usually infinite) set $Q$ of items that are called \emph{queries}, and a \emph{cost} function $f$ that maps each pair of a point in $P$ and a query in $Q$ to a non-negative number $f(p,q)$. The cost of the set $P$ for a query $q$ is the sum over all the costs,
$$\bar{f}(P,q):=\sum_{p\in P}f(p,q).$$
More generally, each input point might be given a positive multiplicative weight $w(p)>0$, and the overall cost of each point is then reweighed, so that
\[
\bar{f}(P,w,q)=\sum_{p\in P}w(p)f(p,q).
\]
The tuple $(P,w,Q,f)$ thus defines our input problem, and we call it a \emph{query space}. In the case that the points are unweighted we can simply define $w(p)=1$ for every $p\in P$. However, for the following sections, it might help to scale the weights so that their sum is $1$.
In this case, we can think of the weights as a given distribution over the input points, and the cost $\bar{f}(P,w,q)$ is the expected value of $f(p,q)$ for a point that is sampled at random from $P$. For the unweighted case we thus have $w(p)=1/n$ for each point, and
\[\bar{f}(P,w,q)=\frac{1}{n}\sum_{p\in P}f(p,q)\]
is the average cost per point. The cost function $f$ is usually also scaled, as will be explained later, to have values $f(p,q)$ between $0$ and $1$, or between $-1$ and $1$.

\textbf{Example 1:} Consider the following problem where the input is a set $P$ of $n$ points in $\REAL^d$. Given a ball $B=B(c,r)$ of radius $r$ that is centered at $c\in\REAL^d$, we wish to compute the fraction of input points that are covered by this ball, $\frac{|P\cap B|}{|P|}$. More generally, the query is a set of $k$ balls, and we wish to compute the fraction of points in $P$ that are covered by the union of these balls. In this case, each input point $p\in P$ has a weight $w(p)=1/n$, the set $Q$ of queries consists of all unions of $k$ balls in $\REAL^d$,
\begin{equation}\label{QQ}
Q=\br{B(c_1,r_1)\cup \cdots \cup B(c_k,r_k) \mid \forall i\in[k]: c_i\in\REAL^d, r_i\geq 0 },
\end{equation}
and the cost $f(p,q)$ for a query $q=B_1\cup \cdots\cup B_k\in Q$ is $1$ if $p$ is inside one of the balls of $q$, and $0$ otherwise. The overall cost $\bar{f}(P,w,q)=\sum_{p\in P}w(p)f(p,q)$ is thus the fraction of points of $P$ that are covered by the union of these $k$ balls.
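For illustration only, this cost can be evaluated with a few lines of Python (a minimal numpy sketch; the function name and array conventions are ours):
\begin{verbatim}
import numpy as np

def coverage_cost(P, centers, radii):
    # bar-f(P, w, q) of Example 1 with w(p) = 1/n: the fraction of the
    # rows of P (points in R^d) covered by the union of balls B(c_i, r_i).
    covered = np.zeros(len(P), dtype=bool)
    for c, r in zip(centers, radii):
        covered |= np.linalg.norm(P - c, axis=1) <= r
    return covered.sum() / len(P)
\end{verbatim}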
The motivation for defining query spaces is usually to solve some related optimization problem, such as finding the query $q$ that minimizes the cost $\bar{f}(P,w,q)$. In this case, the requirement to approximate every query in $Q$ is too strong, and we may want to approximate only the optimal query in some sense. To this end, we may wish to replace the set $Q$ by a function that assigns a different set of queries $Q(S)$ to each subset $S$ of $P$. For the correctness of our results, we require that this function $Q$ be monotonic in the following sense: if $T$ is a subset of $S$ then $Q(T)$ must be a subset of $Q(S)$. If we wish to have a single set $Q(P)$ of queries as above, we can simply define $Q(S):=Q(P)$ for every subset $S$ of $P$, so that the desired monotonicity property holds.

\textbf{Example 2:} For the set $Q$ in~\eqref{QQ}, define $Q(S)$ to be the set of balls in $\REAL^d$ such that the center of each ball is a point in $S$. More generally, we can require that the center of each ball be spanned by (i.e., be a linear combination of) at most $10$ points from $S$.

We now conclude with the formal definitions for the above discussion.
\begin{definition}[weighted set]
Let $S$ be a subset of some set $P$ and $w:P\to[0,\infty)$ be a function. The pair $(S,w)$ is called a \emph{weighted set}.
\end{definition}

\begin{definition}[query space]\emph{~\cite{FL11}}.
Let $Q$ be a function that maps every set $S\subseteq P$ to a corresponding set $Q(S)$, such that $Q(T)\subseteq Q(S)$ for every $T\subseteq S$. Let $f:P\times Q(P)\rightarrow \REAL$ be a \emph{cost function}. The tuple $(P,w,Q,f)$ is called a \emph{query space}. We denote the \emph{cost} of a query $q\in Q(P)$ by
\[
\overline{f}(P,w,q):=\sum_{p\in P}w(p)f(p,q).
\]
\end{definition}

\subsection{$(\eps,\nu)$-Approximation}
Consider the query space $(P,w,Q,f)$ in Example $1$ for $k=1$, and suppose that we wish to compute a set $S\subseteq P$ such that, for every given ball $B$, the fraction of points of $P$ that are covered by $B$ is approximately the same as the fraction of points of $S$ that are covered by $B$, up to a given small additive percentage error $\eps>0$. That is, for every ball $B$,
\[
\left|\frac{|P\cap B|}{|P|}-\frac{|S\cap B|}{|S|} \right|\leq \eps.
\]
By defining the weight $u(p)=1/|S|$ for every $p\in S$, this implies that for every query (ball) $q=B$,
\[
\left|\bar{f}(P,w,q)-\bar{f}(S,u,q)\right| =\left|\sum_{p\in P}\frac{1}{|P|}\cdot f(p,q)-\sum_{p\in S}\frac{1}{|S|}\cdot f(p,q)\right| =\left|\frac{|P\cap B|}{|P|}-\frac{|S\cap B|}{|S|} \right|\leq \eps.
\]
A weighted set $(S,u)$ that satisfies the last inequality for every query $q\in Q(S)$ is called an \emph{$\eps$-approximation} for the query space $(P,w,Q,f)$. Note that the above example assumes that the maximum answer to a query $q$ is $f(p,q)\leq 1$. Otherwise, the error guaranteed by an $\eps$-approximation for a query $q$ is $\eps\max_{p\in P}|f(p,q)|$.

The above inequalities imply that if a ball covers a fraction of at least $\eps$ of the points of $P$ (i.e., at least $\eps n$ points), then it must cover at least one point of $S$. If we only ask for this (weaker) property from $S$, then $S$ is called an \emph{$\eps$-net}. To obtain the new results of this paper, we use a tool that generalizes the notions of $\eps$-approximation and $\eps$-net but is less common in the literature, known as an \emph{$(\eps,\nu)$-approximation}~\cite{LiLonSri01a}. By letting $a=\bar{f}(P,w,q)$ and $b=\bar{f}(S,u,q)$, an $\eps$-approximation implies that $|a-b|\leq \eps$ for every query $q$, and an $\eps$-net implies that $b>0$ if $a\geq \eps$. Following~\cite{li00improved}, we define below a distance function that assigns a non-negative real number $|a-b|_\nu$ to each pair of non-negative real numbers $a$ and $b$. A specific value of $\nu$ will imply $|a-b|_{\nu}\leq \eps$, i.e., that $S$ is an $\eps$-approximation, and a different value of $\nu$ will imply that $S$ is an $\eps$-net for $P$. This is formalized as follows.

\begin{definition}[$(\eps,\nu)$-approximation~\cite{li00improved}\label{def:approx}]
Let $\nu>0$. For every $a,b\geq 0$, we define the distance function
\[
|a-b|_\nu=\frac{|a-b|}{a+b+\nu}.
\]
Let $(P,w,Q,f)$ be a query space such that $\sum_{p\in P} w(p)=1$ and $f:P\times Q(P)\to[0,1]$. For $\eps>0$, a weighted set $(S,u)$ is an \emph{$(\eps,\nu)$-approximation} for this query space, if for every $q\in Q(S)$ we have
\begin{equation}\label{epsapp}
\left|\overline{f}(P,w,q)-\overline{f}(S,u,q)\right|_\nu \leq \eps.
\end{equation}
\end{definition}

\begin{corollary}[~\cite{li00improved,har2011geometric}\label{nets}]
Let $a,b\geq 0$ and $\tau,\nu>0$ such that $|a-b|_\nu\leq \tau$. Let $\eps>0$. Then
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item \emph{($\eps$-sample/approximation).} If $\tau=\eps/4$ and $\nu=1/4$ then
\[
|a-b|\leq \eps.
\]
\item \emph{($\eps$-net).} If $\tau=1/4$ and $\nu=\eps$ then
\[
(a\geq \eps \quad\Rightarrow\quad b>0).
\]
\item \emph{(Relative $\eps$-approximation).} Put $\mu>0$.
If $\nu=\mu/2$ and $\tau=\eps/9$ then
\[
\begin{split}
(1)\quad& a\geq \mu \quad\Rightarrow \quad(1-\eps)a\leq b\leq (1+\eps)a.\\
(2)\quad& a<\mu\quad \Rightarrow\quad b\leq (1+\eps)\mu.\\
\end{split}
\]
\end{enumerate}
\end{corollary}

\subsection{Constructing $\eps$-Approximations}
Unlike the notion of coresets in the next section, the idea of $\eps$-approximations has been known for decades~\cite{Vap71a,matouvsek1989construction}. In particular, unlike coresets, $\eps$-approximations, and $(\eps,\nu)$-approximations in general, can be constructed using simple uniform random sampling of the points. The size of the sample depends linearly on the complexity of the queries, in the sense soon to be defined in this section. Intuitively, and in most practical cases, including the examples in this paper, this complexity is roughly the number of parameters that are needed to define a single query. For example, a ball in $\REAL^d$ can be defined by $d+1$ parameters: its center $c\in\REAL^d$, which is defined using $d$ numbers, and its radius $r>0$, which is an additional number. A query of $k$ balls is similarly defined by $k(d+1)$ parameters. However, by set theory we have $|\REAL|=|\REAL^m|$ for every integer $m\geq1$, which means that we can encode every $m$ numbers into a single number. We can thus always reduce the number of parameters that are needed to define a query from $m$ to $1$ by redefining our cost function $f$ and re-encoding the set of queries. There are also natural examples of query spaces whose cost function $f$ is defined by one parameter, but where the size of the sample needed for obtaining an $\eps$-approximation is unbounded, e.g., the query space where $f(p,q)=\mathrm{sign}(\sin(pq))$ and $P=Q=\REAL$, where $\mathrm{sign}(x)=1$ if $x>0$ and $0$ otherwise; see details e.g.\ in~\cite{kecman2001learning}. Hence, a more involved definition of complexity is needed, as follows.

While the number of subsets of a given set of $n$ points is $2^n$, it can easily be verified that the number of subsets that can be covered by a ball in $\REAL^d$ is roughly $n^{O(d)}$. The exponent $O(d)$ is called the \emph{VC-dimension} of the family of balls in $\REAL^d$. The following definition is a simple generalization, due to~\cite{FL11}, to query spaces where the query set is a function and not a single set. The original definition of pseudo-dimension can be found e.g.\ in~\cite{li00improved} and is very similar to the definition of VC-dimension given by~\cite{Vap71a}, as well as to many other similar measures for the complexity of a family of shapes.

\begin{definition}[dimension~\cite{FL11}\label{vdim}]
For a query space $(P,w,Q,f)$ and $r\in[0,\infty)$ we define
$$\mathrm{range}(q,r)=\br{p\in P\mid w(p)\cdot f(p,q)\leq r}.$$
The \emph{dimension} of $(P,w,Q,f)$ is the smallest integer $d$ such that for every $S\subseteq P$ we have
\[
\big|\br{\mathrm{range}(q,r)\mid q\in Q(S), r\in[0,\infty)}\big| \leq |S|^d.
\]
\end{definition}

The main motivation for the above definition is that it tells us how many samples we need to take at random from the input set $P$ to get an $(\eps,\nu)$-approximation $(S,u)$, as follows.

\begin{theorem}[\cite{Vap71a,li00improved, FL11}\label{thm2}]
Let $(P,w,Q,f)$ be a query space of dimension $d$ such that $\sum_{p\in P}w(p)=1$ and $f:P\times Q(P)\to[0,1]$, and let $\eps,\delta,\nu>0$. Let $S$ be a random sample from $P$, where every point $p\in P$ is sampled independently with probability $w(p)$. Assign a weight $u(p)=\frac{1}{|S|}$ for every $p\in S$.
If
\[
|S|\in \Omega(1)\cdot \frac{1}{\eps^2\nu}\left(d\log\left(\frac{1}{\nu}\right)+\log\left(\frac{1}{\delta}\right)\right)
\]
then, with probability at least $1-\delta$, $(S,u)$ is an \emph{$(\eps,\nu)$-approximation} for the query space $(P,w,Q,f)$.
\end{theorem}

\section{Improved Coreset Framework}
\label{section:offlineImprovement}
\subsection{Improved $(\eps,\nu)$-Approximations}
In this section we show the first technical result of this paper: that the additive error $\eps\max_{p\in P}|f(p,q)|$ in~\eqref{epsapp} can be replaced by $\eps$, and that the assumption $\sum_{p\in P}w(p)=1$ in Theorem~\ref{thm2} can be removed. The condition for this to work is that the total importance value $t$, as defined below, is small. More precisely, the required sample size will be near-linear in $t$. Also, the uniform sampling in Theorem~\ref{thm2} will be replaced by non-uniform sampling, with a distribution that is proportional to the importance of each point.

This result is essentially a generalization and significant simplification of the framework in~\cite{FL11} that used $\eps$-approximations for constructing coresets. Maybe more importantly, using the idea of $(\eps,\nu)$-approximations we are able to show that the sample size is near-linear in $t$, while in~\cite{FL11} it is quadratic in $t$. For some applications, such an improvement means turning a theoretical result into a practical result, especially when $t$ is close to $\sqrt{n}$.

We define the \emph{importance} of a point as the maximum absolute weighted cost $w(p)f(p,q)$ of a point $p$, over all the possible queries $q$, i.e.,
\[
s(p):=w(p)\max_{q\in Q(P)} |f(p,q)|,
\]
and hope that the sum of these importances is small (say, constant or $\log n$); in other words, that not \emph{all} the points are very important. More precisely, if the sum $t=\sum_{p\in P}s(p)$ of these costs is $1$, then we prove below that a new query space $(P,w',Q,f')$ can be constructed, such that an $(\eps,\nu)$-approximation $(S,u)$ for the new query space would imply
\begin{equation}\label{epsapp2}
\left||\overline{f}(P,w,q)|-|\overline{f}(S,u,q)|\right|_\nu \leq \eps.
\end{equation}
That is, the additive error in~\eqref{epsapp} is reduced as desired. The new query space $(P,w',Q,f')$ is essentially a re-scaling of the original weights by their importance, $w'(p)=s(p)$. To make sure that the cost of each query will still be the same, we need to define $f'$ such that $w(p)f(p,q)=w'(p)f'(p,q)$. This implies $f'(p,q):=w(p)f(p,q)/s(p)$. While the new cost $\bar{f'}(P,w',q)$ is the same as the old one $\bar{f}(P,w,q)$ for every query $q$, the maximum value of $|f'(p,q)|$ is $1$, by the definition of $s(p)$, even if $|f(p,q)|$ is arbitrarily large. Hence, the additive error $\eps |f'(p,q)|$ in~\eqref{epsapp} is reduced to $\eps$ in~\eqref{epsapp2}. More generally, an $(\eps,\nu/t)$-approximation for $(P,w',Q,f')$ yields~\eqref{epsapp2}. Using the uniform sample construction of Theorem~\ref{thm2}, this implies that to get~\eqref{epsapp2} we need to increase the sample size by a factor that is nearly linear in $t$.

\begin{theorem}\label{thm1} \quad
\begin{itemize}
\item Let $(P,w,Q,f)$ be a query space where $f(p,q)\leq 1$ for every $p\in P$ and $q\in Q(P)$.
\item Let $s:P\to(0,\infty)$ such that $ s(p)\geq w(p)\max_{q\in Q(P)}f(p,q). $
\item Let $t=\sum_{p\in P}s(p)$.
\item Let $w':P\rightarrow [0,1]$ such that $\displaystyle w'(p):= s(p)/t$.
\item Let $f':P\times Q(P)\to [0,1]$ be defined as $\displaystyle f'(p,q):=w(p)\cdot \frac{f(p,q)}{s(p)}$.
\item Let $(S,u)$ be an \emph{$\left(\eps,\nu/t\right)$-approximation} for $(P,w',Q,f')$.
\item Let $u'(p)=u(p)\cdot \frac{w(p)}{w'(p)}$ for every $p\in S$.
\end{itemize}
Then for every $q\in Q(S)$,
\begin{equation}\label{epsapp3}
|\overline{f}(P,w,q)-\overline{f}(S,u',q)|_{\nu} \leq \eps.
\end{equation}
\end{theorem}
\begin{proof}
Put $q\in Q(S)$.
\begin{align}
|\overline{f}(P,w,q)-\overline{f}(S,u',q)|_{\nu} \label{e1}&=\left|\sum_{p\in P}w(p)f(p,q)-\sum_{p\in S}\frac{u(p)w(p)}{\frac{s(p)}{t}}\cdot f(p,q)\right|_{\nu}\\
\nonumber&=\left|t\sum_{p\in P}\frac{s(p)}{t}\cdot w(p)\cdot\frac{f(p,q)}{s(p)} -t\sum_{p\in S}u(p)\cdot w(p)\cdot\frac{f(p,q)}{s(p)}\right|_{\nu}\\
\label{e222}&= \left|t\sum_{p\in P}w'(p)\cdot f'(p,q)-t\sum_{p\in S}u(p)\cdot f'(p,q)\right|_{\nu}\\
\label{e2}&= \left|\sum_{p\in P}w'(p)\cdot f'(p,q)-\sum_{p\in S}u(p)\cdot f'(p,q)\right|_{\nu/t}\\
&\label{eq5}\leq \eps,
\end{align}
where~\eqref{e1} is by the definition of $u'$, \eqref{e222} is by the definition of $f'$, \eqref{e2} follows since
\begin{equation}\label{tatb}
|ta-tb|_{\nu} = \frac{|ta-tb|}{ta+tb+\nu} = \frac{|a-b|}{a+b+\nu/t} =|a-b|_{\nu/t},
\end{equation}
and~\eqref{eq5} is by~\eqref{epsapp} and the definition of $S$ in the sixth bullet of the statement.
\end{proof}

Plugging Theorem~\ref{thm2} into Theorem~\ref{thm1} yields our main technical result, which will imply smaller coresets in the next sections.

\begin{theorem}\label{non}
Let $(P,w,Q,f)$ be a query space of dimension $d$, where $f$ is non-negative, and let $\eps,\delta,\nu>0$. Let $s$, $t$, $w'$ and $f'$ be as defined in Theorem~\ref{thm1}. Let $S$ be a random sample from $P$, where every point $p\in P$ is sampled independently with probability $s(p)/t$. Assign a weight $u'(p)=\frac{tw(p)}{s(p)\cdot |S|}$ for every $p\in S$. If
\[
|S|\in \Omega(1)\cdot \frac{t}{\eps^2\nu}\left(d\log\left(\frac{t}{\nu}\right)+\log\left(\frac{1}{\delta}\right)\right)
\]
then, with probability at least $1-\delta$, for every $q\in Q(S)$,
\[
|\overline{f}(P,w,q)-\overline{f}(S,u',q)|_{\nu} \leq \eps.
\]
\end{theorem}
\begin{proof}
By Theorem~\ref{thm1} it suffices to prove that $S$ is an $(\eps,\nu/t)$-approximation for $(P,w',Q,f')$. Indeed, since $\sum_{p\in P}w'(p)=1$ and $S$ is a random sample where $p\in P$ is sampled with probability $w'(p)=s(p)/t$, the weighted set $(S,u)$ is, with probability at least $1-\delta$, an $(\eps,\nu/t)$-approximation for $(P,w',Q,f')$ by Theorem~\ref{thm2}.
\end{proof}

While Theorem~\ref{thm2} suggests constructing an $\eps$-approximation simply by taking a uniform random sample from $P$, Theorem~\ref{non} requires us to take a \emph{non-uniform} sample, whose distribution is defined by the importances $s(\cdot)$. Bounding these importances so that their sum $t=\sum_{p\in P}s(p)$ is small raises a new optimization problem that, as we will see in the next sections, might not be easy to solve at all. So what did we gain by proving Theorem~\ref{thm2} (or the framework of~\cite{FL11} in general)? Most of the existing papers on constructing coresets essentially had to bound the total importance of the related problem anyway. However, without Theorem~\ref{thm2}, their proofs also had to deal with complicated terms that involve $\eps$ and include sophisticated probability arguments. Essentially, each paper had to re-prove in some sense the very involved proofs of~\cite{Vap71a,li00improved} that were researched for decades, as well as the mathematics behind the usage of Definition~\ref{def:approx}.
Besides the complicated proofs, the final bounds, the dependency on $\eps$ and $\delta$, and the coreset construction algorithm itself were usually sub-optimal compared to Theorem~\ref{thm2}. On the contrary, bounding the total importance allows us to focus on deterministic results (no $\delta$ involved), and the terms $s(p)$ can be approximated up to constant factors (rather than $(1+\eps)$-factors). This is demonstrated in Section~\ref{app}.

\subsection{Improved Coreset Framework}%
While the result in the previous section can be used to improve the quality of $(\eps,\nu)$-approximations in general, its main motivation is to construct the more recent type of data structure that is sometimes called a coreset. As explained in Theorem~\ref{thm2}, $\eps$-approximations, and $(\eps,\nu)$-approximations in general, can easily be computed using uniform random sampling. While $\eps$-approximations are useful for hitting sets, where we wish to know how many points are covered by a set of shapes, they are less relevant for shape fitting, where we wish to approximate the sum of \emph{distances} of the input points to a given query shape or model. The main reason is that in the covering setting the maximum contribution of a point to the overall cost (the fraction of covered points) is bounded by $1$, while in shape fitting its distance to the shape is, in general, unbounded. Using the notation of the previous section: in the covering setting the importance of each point is the same, so Theorem~\ref{non} yields a uniform sample $S$ and the same error as Theorem~\ref{thm2}.

\textbf{Example:} In the Euclidean $k$-means problem the input is a set $P$ of $n$ points in $\REAL^d$. For simplicity, assume that the set is unweighted, that is, $w(p)=1/n$ for every $p\in P$. A query in this context is a set of $k$ points (centers) in $\REAL^d$, so $Q(P)$ is the family (set) of all such sets of $k$ centers in $\REAL^d$. We denote the squared Euclidean distance from a point $p\in P$ to a center $c\in\REAL^d$ by $D(p,c)=\norm{p-c}_2^2$. The distance from $p$ to its closest center in $C=\br{c_1,\cdots,c_k}$ is then
\[
D(p,C):=\min_{c\in C} D(p,c)=\min_{c\in C} \norm{p-c}_2^2.
\]
By defining $g(p,C)=D(p,C)$ we obtain the query space $(P,w,Q,g)$, where $\bar{g}(P,w,C)$ is the average squared distance to a given (query) set $C$ of $k$ centers. Our goal is to compute a weighted subset $(S,u)$ such that $\bar{g}(S,u,C)$ approximates the average squared distance $\bar{g}(P,w,C)$ for every set of centers. Suppose that $(S,u)$ is an $\eps$-approximation of $(P,w,Q,g)$. Then
\begin{equation}\label{eq666}
|\bar{g}(P,w,C)-\bar{g}(S,u,C)|\leq \eps \max_{p\in P}g(p,C).
\end{equation}
That is,
\begin{equation}\label{eq555}
\left|\frac{1}{n}\sum_{p\in P}D(p,C)-\sum_{p\in S}u(p)D(p,C)\right| \leq \eps\max_{p\in P}D(p,C) .
\end{equation}
In other words, the additive error depends on the maximum distance between a point and a given center, which can be arbitrarily large, unless we assume, say, that both the input points and the centers are inside the unit cube. Theorem~\ref{non} would not improve this bound by itself, since the importance of each point, $\max_{C\in Q(P)}D(p,C)$, is unbounded. In this section we wish to compute a weighted subset $(S,u)$ that approximates the average distance $\bar{g}(P,w,C)$ for every query, up to a \emph{multiplicative} factor of $1\pm \eps$, without further assumptions. Such a set is sometimes called a coreset, as follows.
\begin{definition}[$\eps$-coreset]
For an error parameter $\eps\in(0,1)$, the weighted set $(S,u)$ is an $\eps$-coreset for the query space $(P,w,Q,g)$ if $S\subseteq P$ and for every $q\in Q(S)$,
\[
(1-\eps)\bar{g}(P,w,q)\leq \bar{g}(S,u,q) \leq (1+\eps) \bar{g}(P,w,q).
\]
\end{definition}

An equivalent definition of an $\eps$-coreset is that the additive error $\eps \max_{p\in P}g(p,C)$ in~\eqref{eq666} is replaced by $\eps\bar{g}(P,w,q)$. That is, for every $q\in Q(S)$,
\[
|\bar{g}(P,w,q)-\bar{g}(S,u,q)| \leq \eps \bar{g}(P,w,q).
\]
An $\eps$-coreset implies that not only the average but also the sum of squared distances (or sum of costs, in general) is preserved up to a factor of $1\pm\eps$. Note also that simply multiplying the weight of each point in $S$ by $1/(1-\eps)$ would yield a one-sided error,
\[
\bar{g}(P,w,q)\leq \bar{g}(S,u,q) \leq \frac{1+\eps}{1-\eps}\cdot \bar{g}(P,w,q)=\left(1+\frac{2\eps}{1-\eps}\right)\cdot \bar{g}(P,w,q).
\]
If we assume in addition that $\eps\in (0,1-2/c)$ for some $c > 2$, then $1-\eps\geq 2/c$ and thus
\[
\bar{g}(P,w,q)\leq \bar{g}(S,u,q) \leq (1+c\eps)\, \bar{g}(P,w,q).
\]
Hence, an $(\eps/c)$-coreset $(S,u)$ implies
\begin{equation}\label{eq777}
\bar{g}(P,w,q)\leq \bar{g}(S,u,q) \leq (1+\eps) \bar{g}(P,w,q).
\end{equation}
For example, if $\eps\in(0,1/2)$, then an $(\eps/4)$-coreset yields~\eqref{eq777}, by substituting $c=4$.

The main observation for getting the desired multiplicative approximation is that a multiplicative error of $\eps\,\bar{g}(P,w,q)$ can be turned into an additive error of $\eps$ by replacing $g(p,q)$ with its scaled version
\[
f(p,q)=\frac{g(p,q)}{\bar{g}(P,w,q)}.
\]
To get a coreset for $g$ as in~\eqref{eq777} it suffices to have an $\eps$-approximation for $f$. In addition, for many problems, while the importance $s(p)$ of a point $p$ is unbounded with respect to $g$, it is bounded with respect to $f$. We formalize this observation as follows.

\begin{corollary}\label{them17}
Let $(P,w,Q,g)$ be a query space for some non-negative function $g$, and define $f:P\times Q(P)\to\REAL$ such that for every $p\in P$ and $q\in Q(P)$ we have
\[
f(p,q)=\frac{g(p,q)}{\bar{g}(P,w,q)}.
\]
Let $(S,u')$ be a $\displaystyle \left(\eps/4,\frac{1}{4t}\right)$-approximation for $(P,w',Q,f')$ as defined in Theorem~\ref{thm1}. Then $(S,u')$ is an $\eps$-coreset for $(P,w,Q,g)$, i.e., for every $q\in Q(S)$,
\[
|\bar{g}(P,w,q)-\bar{g}(S,u',q)| \leq \eps \bar{g}(P,w,q).
\]
\end{corollary}
\begin{proof}
Put $q\in Q(S)$, $\tau=\eps/4$ and $\nu=1/4$. Applying Theorem~\ref{thm1} with $\nu$ yields
\[
|\bar{f}(P,w,q)-\bar{f}(S,u',q)|_\nu \leq \tau.
\]
Since $\bar{f}(P,w,q)=1$, this implies
\begin{equation*}\label{eqab}
\begin{split}
\left|1-\frac{\bar{g}(S,u',q)}{\bar{g}(P,w,q)}\right|_\nu =|\bar{f}(P,w,q)-\bar{f}(S,u',q)|_\nu \leq \tau.
\end{split}
\end{equation*}
Substituting $a=1$ and $b=\bar{f}(S,u',q)$ in Corollary~\ref{nets}(i) yields
\[
\left|1-\frac{\bar{g}(S,u',q)}{\bar{g}(P,w,q)}\right| \leq \eps.
\]
Multiplying by $\bar{g}(P,w,q)$ yields an $\eps$-coreset, as
\[
\left|\bar{g}(P,w,q)-\bar{g}(S,u',q)\right| \leq \eps\bar{g}(P,w,q).
\]
\end{proof}

Combining Corollary~\ref{them17} and Theorem~\ref{thm2} yields the following theorem.

\begin{theorem}\label{cor1}
Let $(P,w,Q,g)$ be a query space, where $g$ is a non-negative function. Let $s:P\to (0,\infty)$ such that
\[
s(p)\geq \max_{q\in Q(P)}\frac{w(p)g(p,q)}{\sum_{r\in P}w(r) g(r,q)},
\]
and $t=\sum_{p\in P}s(p)$. Let $d$ be the dimension of $(P,w',Q,f')$ as defined in Corollary~\ref{them17}. Let $c\geq1$ be a sufficiently large constant, and let $S$ be a random sample of
\[
|S|\geq \frac{ct}{\eps^2}\left(d\log t+\log\left(\frac{1}{\delta}\right)\right)
\]
points from $P$, such that for every $p\in P$ and $q\in S$ we have $p=q$ with probability $s(p)/t$. Let $u'(p)=\frac{t\cdot w(p)}{s(p)|S|}$ for every $p\in S$. Then, with probability at least $1-\delta$, $(S,u')$ is an $\eps$-coreset for $(P,w,Q,g)$, i.e.,
\[
\forall q\in Q(S): \quad (1-\eps)\overline{g}(P,w,q)\leq \overline{g}(S,u',q) \leq (1+\eps) \overline{g}(P,w,q).
\]
\end{theorem}
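To make the sampling step concrete, here is a minimal numpy sketch of the construction in the theorem above (an illustration under our own naming, not an optimized implementation):
\begin{verbatim}
import numpy as np

def sensitivity_sample(w, s, size, seed=0):
    # Draw |S| = size i.i.d. indices with Prob[p] = s(p)/t, and assign
    # each sampled index the weight u'(p) = t * w(p) / (s(p) * |S|).
    w, s = np.asarray(w, float), np.asarray(s, float)
    t = s.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(s), size=size, p=s / t)
    return idx, t * w[idx] / (s[idx] * size)
\end{verbatim}
The only problem-specific ingredient is the vector of importance bounds $s$, which the next section bounds for $k$-means via a bicriterion approximation.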
\section{Application: Smaller and Generalized Coreset for $k$-Means}\label{app}
\begin{definition}[$\rho$-metric space\label{rho}]
Let $X$ be a set, and $D:X^2\to [0,\infty)$ be a function. Let $\rho>0$. The pair $(X,D)$ is a \emph{$\rho$-metric} space if for every $(x,y,z)\in X^3$ we have
\[
D(x,z)\leq \rho(D(x,y)+ D(y,z)).
\]
For $C\subseteq X$ we denote $D(x,C):=\min_{c\in C}D(x,c)$, assuming that such a minimum exists. Moreover, for $\eps>0$, the pair $(X,D)$ is a \emph{$(\psi,\eps)$-metric} space if for every $(x,y,z)\in X^3$ we have
\[
|D(x,z)-D(y,z)|\leq \frac{D(x,y)}{\psi}+ \eps D(y,z).
\]
\end{definition}

Note that for every $x,y\in X$, and a center $c_y\in C$ that is closest to $y$, we have
\begin{equation}\label{DD}
D(x,C) =\min_{c\in C}D(x,c) \leq D(x,c_y) \leq \rho(D(x,y)+D(y,c_y)) =\rho(D(x,y)+D(y,C)).
\end{equation}
A simple way to decide whether $(X,D)$ is indeed a $\rho$-metric space is to use the following bound, known as the Log-Log Lipschitz condition, which is usually easier to check. The following lemma is very similar to~\cite[Lemma 2.1]{feldman2012data}, where in the proof of $(i)$ the constant $4$ that appeared there is replaced here by $1/\eps$.

\begin{lemma}[Lemma 2.1(ii) in~\cite{feldman2012data}\label{leolem}]
Let $\tilde{D}:[0,\infty)\to [0,\infty)$ be a monotonic non-decreasing function that satisfies the following (Log-Log Lipschitz) condition: there is $r>0$ such that for every $x>0$ and $\Delta>1$ we have
\[
\tilde{D}(\Delta x)\leq \Delta^{r}\tilde{D}(x).
\]
Let $(X,\mathrm{dist})$ be a metric space, and let $D(x,y)=\tilde{D}(\mathrm{dist}(x,y))$ for every $x,y\in X$. Then $(X,D)$ is a $\rho$-metric space for $\rho=\max\br{2^{r-1},1}$, and a $(\psi,\eps)$-metric space for every $\eps\in(0,1)$ and $\psi=(r/\eps)^r$.
\end{lemma}

For example, consider a metric space $(X,\mathrm{dist})$ and the function $\tilde{D}(x)=x^2$ that corresponds to the squared distance $D(p,q)=\tilde{D}(\mathrm{dist}(p,q))=(\mathrm{dist}(p,q))^2$. Note that $(X,D)$ is not a metric space, since the triangle inequality does not hold. However, for every $x>0$ and $\Delta>1$,
\[
\tilde{D}(x\Delta)=(x\Delta)^2=\Delta^2x^2=\Delta^2 \tilde{D}(x).
\]
Hence, by Lemma~\ref{leolem}, $r=2$, $\rho=2^{r-1}=2$, and $(X,D)$ is a $2$-metric space and a $(\psi,\eps)$-metric space for $\psi=(2/\eps)^2$.
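The relaxed triangle inequality for this example can also be checked numerically; a small Python sketch (for the squared Euclidean distance, as above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D = lambda a, b: float(np.sum((a - b) ** 2))  # squared Euclidean distance
for _ in range(10000):
    x, y, z = rng.standard_normal((3, 4))     # random triple in R^4
    assert D(x, z) <= 2.0 * (D(x, y) + D(y, z)) + 1e-9  # rho = 2
\end{verbatim}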
To compute a coreset for a problem we need to decide which points are important or, more formally, to use Theorem~\ref{cor1} we need to bound the importance $s(p)$ of each point $p\in P$. To do this, we usually need to solve another optimization problem, typically related to computing the query with the minimal cost. For example, the importance bound of a point in the $k$-means problem, as will be defined later, is based on the optimal solution for the $k$-means problem. Unfortunately, this optimization problem is usually hard, and is itself the main motivation for constructing the coreset in the first place.

There are two ways out of this chicken-and-egg problem:
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item Use the merge-and-reduce approach, which reduces the problem of computing a coreset for a large set of $n$ items to the problem of computing coresets for $\frac{n}{2|S|}$ small weighted (core)sets. Each input set of size $2|S|$ is reduced to a coreset of size $|S|$ and merged with another such coreset, where $2|S|$ is the minimum size of an input set that can be reduced to half its size using the given coreset construction. In this case, if one coreset construction takes time $f(|S|)$ then, since there are $O(n)$ such constructions, the overall running time will be $O(n)\cdot f(|S|)$.
\item For problems such as $k$-means, it is NP-hard to compute the optimal solution, even for a small set of $n=O(k)$ points. Instead of computing an optimal solution, usually a constant-factor approximation suffices for computing the importance of each point. Since for many problems even such an approximation is unknown or too slow to compute, an $(\alpha,\beta)$-approximation (also called a bi-criteria or bicriterion approximation) can be used instead, as explained below.
\end{enumerate}
Suppose that the optimal solution for the $k$-means problem on a set $P$ is $OPT_k$. That is, there is a set $C$ of $k$ centers whose cost (sum of squared distances from the input points of $P$) is $OPT_k$, and there is no such set of smaller cost. Then a set $\tilde{C}$ is called an $\alpha$-approximation if its cost is at most $\alpha\cdot OPT_k$. However, for many problems even a rougher approximation will do: instead of using $k$ centers to approximate $OPT_k$ by a factor of $\alpha$, we use $\beta k$ centers, where each input point may be assigned to its nearest center. Note that the cost is still compared to $OPT_k$ and not to $OPT_{\beta k}$. We define $(\alpha,\beta)$-approximations formally below. For our purposes later, we generalize this common definition of an $(\alpha,\beta)$-approximation and allow a point to be assigned to a different center than its nearest one, as long as the overall cost is small.
\begin{definition}[$(\alpha,\beta)$-approximation.\label{alphabeta}]
Let $(X,D)$ be a $\rho$-metric space, and $k\geq1$ be an integer. Let $(P,w,Q,g)$ be a query space such that $P\subseteq X$, $Q(P)=\br{C\subseteq X \mid |C|\leq k}$, and let $g:P\times Q(P)\to[0,\infty)$ be the function
\begin{equation}\label{eeg}
g(p,C)=D(p,C):=\min_{c\in C}D(p,c).
\end{equation}
Let $\alpha,\beta\geq 0$, $B\subseteq X$ such that $|B|\leq \beta k$, and $\mathcal{B}:P\to B$ such that
\[
\sum_{p\in P}w(p)g(p,\br{\mathcal{B}(p)}) \leq \alpha \min_{q\in Q(P)}\bar{g}(P,w,q).
\]
Then $\mathcal{B}$ is called \emph{an $(\alpha,\beta)$-approximation} for $(P,w,Q,g)$.
\end{definition}
For every $b\in B$, we denote by $P_b=\br{p\in P\mid \mathcal{B}(p)=b}$ the points that are mapped to the center $b$. We also denote $p'=\mathcal{B}(p)$ for every $p\in P$. One of the main tools in our novel streaming algorithm is also a technique to update an $(\alpha,\beta)$-approximation. However, due to memory limitations, our streaming algorithm cannot attach each point to its nearest center, but the distances to the approximated centers are still bounded. We thus generalize the definition of an $(\alpha,\beta)$-approximation, which is usually a set $B$ of size $\beta k$, to a \emph{function} $\mathcal{B}:P\to B$ that assigns each input point to a (not necessarily closest) center in $B$, while the overall cost is still bounded.
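For intuition, one standard way to obtain such a bicriteria solution in the Euclidean $k$-means setting is $D^2$-sampling (over-seeding in the style of $k$-means++); with a sufficiently large constant $\beta$, this style of over-seeding is known to yield a constant-factor bicriteria solution with constant probability. The sketch below is only an illustration of this idea, not the construction of~\cite{FL11}; the function name and the tuple representation of points are assumptions:
\begin{verbatim}
import random

def d2_overseeding(P, k, beta=4):
    # P: list of points as tuples. Pick beta*k centers by D^2-sampling,
    # then return the assignment map B: p -> chosen center.
    D = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    centers = [random.choice(P)]
    while len(centers) < beta * k:
        dists = [min(D(p, c) for c in centers) for p in P]
        if sum(dists) == 0:
            break
        centers.append(random.choices(P, weights=dists, k=1)[0])
    B_map = {p: min(centers, key=lambda c: D(p, c)) for p in P}
    return B_map, centers
\end{verbatim}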
Since an $(\alpha,\beta)$-approximation yields a weaker result compared to a PTAS or an $\alpha$-approximation, it can usually be computed very quickly. Indeed, a very general framework for constructing an $(\alpha,\beta)$-approximation for any query space with a small VC-dimension is suggested in~\cite{FL11}, where $\alpha=1+\eps$ and $\beta=O(\log n)$.

\textbf{Reducing an $(\alpha,\beta)$-approximation to an $\alpha=O(1)$ approximation. } The size of the coreset usually depends on $\alpha$ and $\beta$. However, if they are reasonably small (e.g., polynomial in $\log n$), we can reduce the approximation factor and the number of centers in a few phases as follows: (i) Compute an $(\alpha,\beta)$-approximation for small (but maybe not constant) $\alpha$ and $\beta$. (ii) Compute an $\eps$-coreset for $\eps=1/2$ using this approximation. (iii) Compute an $O(1)$-factor approximation on the coreset. Since the coreset is small, such an approximation algorithm can run inefficiently, say, in polynomial time if the coreset is of size $(\log n)^{O(1)}$. The resulting $O(1)$ approximation for the coreset is also an $O(1)$ approximation for the original set, by the definition of the $\eps=1/2$ coreset. (iv) Recompute the coreset for the complete (original) data using the $O(1)$ approximation instead of the $(\alpha,\beta)$-approximation, to obtain a coreset of size independent of both $n$ and $d$.
\begin{algorithm}
\caption{$\textsc{Coreset}(P,w,\mathcal{B},m)$\label{one}}
\begin{tabbing}
\textbf{Input:} \quad\quad\= A weighted set $(P,w)$ where $P\subseteq X$ and $(X,D)$ is a $\rho$-metric space,\\
\> an $(\alpha,\beta)$-approximation $\mathcal{B}:P\to B$,\\
\> and a sample size $m\geq 1$.\\
\textbf{Output:} \>A pair $(S,u)$ that satisfies Theorem~\ref{mainthm}.
\end{tabbing}
\vspace{-0.3cm}
\nl \For{each $b\in B$}
{\nl Set $P_b\gets \br{p\in P\mid \mathcal{B}(p)=b}$\\
}
\nl \For{each $b\in B$ and $p\in P_b$}
{ Set $\displaystyle \mathrm{pr}(p)\gets \frac{w(p)D(p,\mathcal{B}(p))}{2\sum_{q\in P}w(q)D(q,\mathcal{B}(q))}+\frac{w(p)}{2|B|\sum_{q\in P_b}w(q)}$.\label{l4}}
\nl Pick a sample $S$ of at least $m$ points from $P$ such that for each $q\in S$ and $p\in P$ we have $q=p$ with probability $\mathrm{pr}(p)$.\\
\nl \For{each $p\in S$}{
\nl Set $u(p)\gets \frac{w(p)}{|S|\cdot \mathrm{pr}(p)}$.}
\nl Set $u(p)\gets 0$ for each $p\in P\setminus S$. \tcc{Used only in the analysis.}
\nl \Return $(S,u)$
\end{algorithm}
\begin{assumption}\label{assum}
In what follows we assume that:
\begin{itemize}
\item $P$ is a set that is contained in $X$, where $(X,D)$ is a $\rho$-metric space and a $(\psi,\eps)$-metric space as defined in Definition~\ref{rho}.
\item $(P,w,Q,g)$ is a query space for the function $g$ defined in~\eqref{eeg}.
\item $dk$ denotes the dimension of $f'$ as defined in Theorem~\ref{cor1}.
\item $c$ is a sufficiently large constant that can be determined from the proofs of the theorems.
\item $\mathcal{B}$ is an $(\alpha,\beta)$-approximation for $(P,w,Q,g)$ as in Definition~\ref{alphabeta}.
\item We are given an error parameter $\eps\in(0,1)$.
\item We are given a maximum probability of failure $\delta \in(0,1)$.
\end{itemize}
\end{assumption}
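The following Python sketch mirrors Algorithm~\ref{one} under Assumption~\ref{assum}. The data-structure choices (dictionaries for $w$ and for the assignment $\mathcal{B}$, a distance callback \texttt{D}, sampling with replacement) are assumptions of this illustration:
\begin{verbatim}
import random

def coreset(P, w, B_map, D, m):
    # P: list of hashable points; w: dict of weights;
    # B_map: the (alpha, beta)-assignment p -> b; D: distance function.
    cluster_w = {}
    for p in P:
        b = B_map[p]
        cluster_w[b] = cluster_w.get(b, 0.0) + w[p]
    total = sum(w[p] * D(p, B_map[p]) for p in P)  # sum_q w(q) D(q, q')
    assert total > 0
    nB = len(cluster_w)
    pr = {p: w[p] * D(p, B_map[p]) / (2.0 * total)
             + w[p] / (2.0 * nB * cluster_w[B_map[p]]) for p in P}
    S = random.choices(P, weights=[pr[p] for p in P], k=m)  # multiset
    u = {p: w[p] / (m * pr[p]) for p in S}                  # Lines 5-6
    return S, u
\end{verbatim}
Note that the two summands of $\mathrm{pr}$ each contribute $1/2$ to the total probability mass, so the weights passed to the sampler already form a distribution.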
We begin with the following claim, which is a simplified and generalized version of a similar claim in~\cite{LS10}.
\begin{lemma}\label{sensbound}
For every $b\in B$ and $p\in P_b$ we have
\[
\frac{w(p)D(p,C)}{\sum_{q\in P}w(q)D(q,C)} \leq \frac{\rho\alpha w(p)D(p,p')}{\sum_{q\in P}w(q)D(q,q')} +\frac{\rho^2(\alpha+1)w(p)}{\sum_{q\in P_b}w(q)}.
\]
\end{lemma}
\begin{proof}
Put $b\in B$ and $p\in P_b$. First,
\begin{equation}\label{aa}
\begin{split}
\frac{w(p)D(p,C)}{\sum_{q\in P}w(q)D(q,C)} &\leq \frac{\rho w(p) D(p,p')}{\sum_{q\in P}w(q)D(q,C)} +\frac{\rho w(p)D(p',C)}{\sum_{q\in P}w(q)D(q,C)}\\
&\leq \frac{\alpha\rho w(p)D(p,p')}{\sum_{q\in P}w(q)D(q,q')} +\frac{\rho w(p)D(p',C)}{\sum_{q\in P}w(q)D(q,C)},
\end{split}
\end{equation}
where the first inequality holds by~\eqref{DD}, and the second inequality holds since $\mathcal{B}$ is an $(\alpha,\beta)$-approximation. To bound the last term, we sum the inequality $D(p',C)\leq \rho(D(p',q)+D(q,C))$ over every $q\in P_b$ to obtain
\[
\begin{split}
D(p',C)\sum_{q\in P_b}w(q) &=\sum_{q\in P_b}w(q)D(p',C) \leq \sum_{q\in P_b}w(q)\cdot \rho( D(p',q)+D(q,C))\\
&= \rho\sum_{q\in P_b}w(q) D(q',q)+\rho\sum_{q\in P_b}w(q) D(q,C)\\
&\leq \rho\alpha\sum_{q\in P}w(q) D(q,C)+\rho\sum_{q\in P}w(q) D(q,C)\\
&= \rho(\alpha+1)\sum_{q\in P}w(q) D(q,C),
\end{split}
\]
where the second inequality again uses the fact that $\mathcal{B}$ is an $(\alpha,\beta)$-approximation. Dividing by $\sum_{q\in P_b}w(q)\cdot \sum_{q\in P}w(q)D(q,C)$ yields
\[
\frac{D(p',C)}{\sum_{q\in P}w(q)D(q,C)} \leq \frac{\rho(\alpha+1)}{\sum_{q\in P_b}w(q)}.
\]
Substituting this in~\eqref{aa} yields the desired result
\[
\frac{w(p)D(p,C)}{\sum_{q\in P}w(q)D(q,C)} \leq \frac{\rho\alpha w(p)D(p,p')}{\sum_{q\in P}w(q)D(q,q')} +\frac{\rho^2(\alpha+1)w(p)}{\sum_{q\in P_b}w(q)}.
\]
\end{proof}
Our first theorem suggests a coreset for $k$-means of size near-quadratic in $k$ and quadratic in $1/\eps$, based on our improved framework and the last lemma. Existing work~\cite{LS10} for obtaining such coresets with only positive weights requires size cubic in $k$.
\begin{theorem}\label{mainthm}
Under Assumption~\ref{assum}, let $t=k\cdot \rho^2(\alpha+1)\beta $. Let $(S,u)$ be the output of a call to algorithm $\textsc{Coreset}(P,w,\mathcal{B},m)$, where
\[
m\geq\frac{ct}{\eps^2}\left(dk\log t+\log\left(\frac{1}{\delta}\right) \right).
\]
Then, with probability at least $1-\delta$, $(S,u)$ is an $\eps$-coreset of size $m$ for $(P,w,Q,g)$.
\end{theorem}
\begin{proof}
Let $f:P\times Q(P)\to [0,\infty)$ such that for every $p\in P$ and $C\in Q(P)$,
\[
f(p,C)=\frac{D(p,C)}{\sum_{q\in P}w(q)D(q,C)},
\]
and define
\[
s(p)=\frac{\rho\alpha w(p)D(p,p')}{\sum_{q\in P}w(q)D(q,q')} +\frac{\rho^2(\alpha+1)w(p)}{\sum_{q\in P_b}w(q)},
\]
where $b=\mathcal{B}(p)$. By Lemma~\ref{sensbound},
\[
\begin{split}
\sum_{p\in P} \max_{C\in Q(P)} w(p)|f(p,C)| &=\sum_{b\in B}\sum_{p\in P_b} \max_{C\in Q(P)} w(p)f(p,C)\leq \sum_{b\in B}\sum_{p\in P_b} s(p)\\
&=\rho\alpha + |B|\cdot \rho^2(\alpha+1) \in O(\rho^2(\alpha+1)\beta k).
\end{split}
\]
Applying Theorem~\ref{cor1} with the query space $(P,w,Q,g)$ then yields the desired bound.
\end{proof}
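To see where the phrase ``near-quadratic in $k$'' comes from, here is a hedged instantiation (the constants are illustrative and absorbed by $c$): for squared Euclidean distances, Lemma~\ref{leolem} gives $\rho=2$, and after the reduction of $\alpha$ and $\beta$ described earlier we may assume $\alpha,\beta=O(1)$. Then
\[
t=k\cdot\rho^2(\alpha+1)\beta=O(k),
\qquad
m=O\!\left(\frac{k}{\eps^2}\left(dk\log k+\log\frac{1}{\delta}\right)\right)
=O\!\left(\frac{dk^2\log k}{\eps^2}+\frac{k}{\eps^2}\log\frac{1}{\delta}\right),
\]
i.e., near-quadratic in $k$ and quadratic in $1/\eps$, as claimed.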
Our second theorem in this section suggests a coreset for $k$-means of size near-linear in $k$, by combining new observations with our improved framework.
\begin{theorem}\label{mainthm2}
Under Assumption~\ref{assum}, let $t=\alpha/\psi$. Let $(S,u)$ be the output of a call to algorithm $\textsc{Coreset}(P,w,\mathcal{B},m)$, where
\[
m\geq\frac{ck(t+\beta )}{\eps^2}\left(d\log t+\log (\beta k)+\log\left(\frac{1}{\delta}\right) \right).
\]
Then, with probability at least $1-\delta$, $(S,u)$ is an $\eps$-coreset of size $m$ for $(P,w,Q,g)$.
\end{theorem}
\begin{proof}
Let
\[
H=\br{p\in P\mid |D(p,C)-D(p',C)| \leq \frac{2D(p,p')}{\psi}}.
\]
We need to bound by $O(\eps)$ the expression
\begin{align}
\label{aaa}&\frac{\left|\sum_{p\in P}(w(p)-u(p))\cdot D(p,C)\right|}{\sum_{q\in P}w(q)D(q,C)}\\
\label{bba}&\quad\leq \frac{\left|\sum_{p\in H}(w(p)-u(p))\cdot (D(p,C)-D(p',C))\right|}{\sum_{q\in P}w(q)D(q,C)} \\
\label{aab}&\quad\quad+\Big|\sum_{b\in B}\frac{{\sum_{q\in P_b}w(q)D(b,C)}}{{\sum_{r\in P}w(r)D(r,C)}} \cdot \left(\sum_{p\in P_b}\frac{(w(p)-u(p))\cdot D(p,C)}{\sum_{q\in P_b}w(q)D(b,C)}- \sum_{p\in P_b\cap H}\frac{(w(p)-u(p))\cdot (D(p,C)-D(p',C))}{\sum_{q\in P_b}w(q)D(b,C)} \right)\Big|.
\end{align}
Put $b\in B$. Then,
\begin{align}
\nonumber&\sum_{p\in P_b}\frac{(w(p)-u(p))\cdot D(p,C)}{\sum_{q\in P_b}w(q)D(b,C)}- \sum_{p\in P_b\cap H}\frac{(w(p)-u(p))\cdot (D(p,C)-D(p',C))}{\sum_{q\in P_b}w(q)D(b,C)}\\
\label{first1}&=\sum_{p\in P_b}\frac{w(p)-u(p)}{\sum_{q\in P_b}w(q)}\\
\label{second1}&\quad+\sum_{p\in P_b\setminus H}\frac{(w(p)-u(p))\cdot (D(p,C)-D(b,C))}{\sum_{q\in P_b}w(q)D(b,C)}.
\end{align}
We now prove that, with probability at least $1-c\delta$, each of the expressions~\eqref{bba}, \eqref{first1} and~\eqref{second1} is bounded by $2\eps$.
\paragraph{Bound on~\eqref{bba}: }
Let $h(p,C)=\frac{D(p,C)-D(p',C)}{\sum_{q\in P}w(q)D(q,C)}$ if $p\in H$ and $h(p,C)=0$ otherwise. For every $p\in P$,
\[
w(p)\cdot |h(p,C)|\leq \frac{2w(p) D(p,p')}{\psi\sum_{q\in P}w(q)D(q,C)} \leq \frac{2\alpha w(p) D(p,p')}{\psi\sum_{q\in P}w(q)D(q,q')}\leq \frac{4\alpha \mathrm{pr}(p)}{\psi},
\]
where the last step uses the first summand in the definition of $\mathrm{pr}(p)$ on Line~\ref{l4} of Algorithm~\ref{one}. Hence, using $t=4\alpha/\psi$ in Theorem~\ref{cor1} (the extra constant factor is absorbed by $c$), with probability at least $1-\delta$,
\begin{equation}\label{aca}
\frac{\left|\sum_{p\in H}(w(p)-u(p))\cdot (D(p,C)-D(p',C))\right|}{\sum_{q\in P}w(q)D(q,C)} =\left|\sum_{p\in H} (w(p)-u(p))h(p,C)\right|=|\bar{h}(P,w,C)-\bar{h}(S,u,C)|\leq \eps.
\end{equation}
\paragraph{Bound on~\eqref{first1}: }
Let $I(p,b)=1/\sum_{q\in P_b}w(q)$ if $p\in P_b$ and $I(p,b)=0$ otherwise. For every $p\in P$ we have
\[
w(p)I(p,b) \leq 2|B|\mathrm{pr}(p),
\]
where $\mathrm{pr}(p)$ is defined in Line~\ref{l4} of Algorithm~\ref{one}. Hence, $\sum_{p\in P}\max_{b\in B} w(p)I(p,b)\leq 2|B|$. Also, each point in $S$ is sampled with probability proportional to $2|B|\mathrm{pr}(p)$, and for $c\geq 2$,
\[
|S|\geq \frac{2|B|}{\eps^2}\left(2\log(2|B|)+\log\left(\frac{1}{\delta}\right)\right) \geq \frac{2|B|}{\eps^2}\left(\log(|B|)+\log\left(\frac{|B|}{\delta}\right)\right).
\]
Using $d=1$ and replacing $\delta$ by $\delta/|B|$ in Theorem~\ref{cor1} yields that $S$ is an $\eps$-coreset for $(P,w,\br{b},I)$, with probability at least $1-\delta/|B|$. That is,
\begin{equation}\label{bb1}
\left|\sum_{p\in P_b}\frac{w(p)-u(p)}{\sum_{q\in P_b}w(q)} \right| = |\bar{I}(P,w,b)-\bar{I}(S,u,b)|\leq \eps.
\end{equation}
By the union bound, with probability at least $1-\delta$, the last inequality holds for every $b\in B$ simultaneously.
\paragraph{Bound on~\eqref{second1}: }
Since $(X,D)$ is a $(\psi,\eps)$-metric space,
\begin{equation*}
|D(p,C)-D(b,C)|\leq \frac{D(p,b)}{\psi}+\eps D(b,C)\leq \max\br{\frac{2D(p,b)}{\psi},2\eps D(b,C)}.
\end{equation*}
In particular, for every $p\in P_b\setminus H$ we have $|D(p,C)-D(b,C)|\leq 2\eps D(b,C)$, and hence
\begin{equation}\label{eab}
\left|\sum_{p\in P_b\setminus H}\frac{(w(p)-u(p))\cdot (D(p,C)-D(b,C))}{\sum_{q\in P_b}w(q)D(b,C)}\right| \leq 2\eps\sum_{p\in P_b\setminus H} \frac{w(p)+u(p)}{\sum_{q\in P_b}w(q)}.
\end{equation}
The last expression is bounded using~\eqref{bb1}, as
\begin{equation}\label{sc2}
\sum_{p\in P_b\setminus H} \frac{w(p)+u(p)}{\sum_{q\in P_b}w(q)} \leq \sum_{p\in P_b} \frac{w(p)+u(p)}{\sum_{q\in P_b}w(q)} \leq 2+\eps.
\end{equation}
\paragraph{Bound on~\eqref{aaa}: }
By combining the above bounds we obtain that, with probability at least $1-10\delta$,
\[
\frac{\left|\sum_{p\in P}(w(p)-u(p))\cdot D(p,C)\right|}{\sum_{q\in P}w(q)D(q,C)} \leq \eps+ \left|(\eps+2\eps(2+\eps))\sum_{b\in B}\frac{{\sum_{q\in P_b}w(q)D(b,C)}}{{\sum_{r\in P}w(r)D(r,C)}}\right| \leq c\eps.
\]
Replacing $\eps$ with $\eps/c$ and $\delta$ with $\delta/c$ then proves the theorem.
\end{proof}
\paragraph{The values of $\rho$ and $\psi$.}
Algorithm~\ref{one} can be applied to compute a coreset for any given variant of the $k$-means/median problem, given a set $P$ and a $\rho$-metric or $(\psi,\eps)$-metric space $(X,D)$. The only difference is the size of the required coreset. The parameters $\rho$ and $\psi$ can usually be computed easily using Lemma~\ref{leolem}. For example, in the case of distances to the power of $r\geq1$, the value of $\rho$ is roughly $2^r$ and the value of $\psi$ is roughly $\eps^r$. For most common M-estimators the values are similar, when $r$ is some constant.
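In code, these two parameters are one-liners (a sketch; the formulas follow Lemma~\ref{leolem} as stated above):
\begin{verbatim}
def rho_psi(r, eps):
    # Parameters for D = dist**r: rho for the relaxed triangle
    # inequality, psi for the (psi, eps)-metric property.
    rho = max(2 ** (r - 1), 1)
    psi = (eps / r) ** r
    return rho, psi

print(rho_psi(2, 0.1))  # k-means (r = 2): rho = 2, psi = 0.05**2 = 0.0025
\end{verbatim}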
\paragraph{The values of $\alpha$ and $\beta$} can be $\alpha=\beta=O(1)$ for $k$-means and all its variants, by using the generic algorithm for computing an $(\alpha,\beta)$-approximation in~\cite{FL11} with $\alpha=\beta=O(\log n)$, and then using the technique above for reducing $\alpha$ and $\beta$. For bounding the approximation quality of the bi-criteria algorithm in~\cite{FL11}, only the pseudo-dimension of the problem is required, which is usually easy to compute, as explained below.
\paragraph{Dimension $d$. }Unlike the total sensitivity $t$, the dimension $d$ has already been computed for numerous problems in many papers in computational geometry and machine learning (in the context of PAC learning). This includes reductions and connections to similar notions such as the shattering dimension or the VC-dimension of a set. General techniques for computing the dimension of a set, based on the number of parameters needed to define a query or the number of operations needed to answer a query, can be found in the book~\cite{anthony-bartlett:1999a}.
\paragraph{Dimension of the $k$-means problem and its variants. } Note that, unlike the sensitivity, the dimension is less dependent on the exact type of distance function. For example, using the Euclidean distance or the Euclidean distance to the power of $3$ as a variant of the $k$-means clustering problem does not change the dimension. This is because the set of ranges for both of these problems is the same: subsets of the input points that can be covered by $k$ balls. It is easy to compute the dimension for the $k$-means problem (the query space $(P,w,Q,g)$ in our paper), as well as for the modified query space of the function $f$. These bounds can be found in~\cite{FL11}. In addition, \cite{FL11} provides a simple reduction that shows that the dimension for $k$ centers is the same as the dimension of $1$ center multiplied by $k$. In short, for the Euclidean space, the dimension of the $k$-means/median problem is $O(dk)$, and for metric spaces (graphs) the dimension is $O(k\log n)$.
\paragraph{Smaller coreset for $k$-means queries. }Consider the $k$-means queries in $\REAL^d$, i.e., the cost is the sum of squared distances $\sum_{p\in P}D(p,C)$ from every point in $P$ to its nearest center in a given set $C$ of $k$ points in $\REAL^d$. It was proven that projecting $P$ onto an $O(k/\eps)$-dimensional subspace that minimizes its sum of squared distances, known as the low-rank approximation of $P$, preserves this sum, for any set of $k$ centers, up to a factor of $1\pm\eps$. This is, in some sense, an $\eps$-coreset for $P$ of size $n$ that is not a subset of the input, but is of low dimensionality. In particular, this result implies that there is a set of centers (known as a centroid set~\cite{sarielb}) contained in an $O(k/\eps)$-dimensional space, such that every set of $k$ centers in $\REAL^d$ can be replaced by a set of $k$ centers in the centroid set that yields the same cost up to a factor of $1\pm\eps$. In particular, this implies that the dimension of the $k$-means problem can be reduced to $O(k/\eps)$ instead of $dk$, i.e., made independent of $d$. Combining this result with our paper yields the first coreset for $k$-means of size independent of $d$ that is a subset of the input (and, in particular, preserves the sparsity of the input points) and that also supports streaming.
\paragraph{Weak coresets of size independent of $d$. } For the non-Euclidean case or non-squared distances, it seems impossible to obtain a coreset of size independent of $d$. However, a coreset as defined above (sometimes called a strong coreset) approximates \emph{every} query in the set of queries, while the main application and motivation for constructing a coreset is to compute the \emph{optimal} query or an approximation of it. A weak coreset is a small set that can be used to compute such an approximation. The exact definition of a weak coreset changes from paper to paper. In particular, a weak coreset for $k$-means was suggested in~\cite{feldman2007ptas}. However, to extract the approximated solution from that coreset we must run an exhaustive search, and cannot use existing algorithms or heuristics as in the case of a strong coreset. In this paper, following~\cite{FL11}, we use a simple and general definition of a weak coreset that is also more practical. Instead of defining a unique (static, global) set $Q$ of queries, we define $Q$ to be a function that maps every subset (potential coreset) $S$ of $P$ to a set of queries. A weak coreset $S$ needs only to approximate the queries in $Q(S)$. It turns out that in many cases the (generalized) dimension of such a query space is much smaller than in the traditional case, where $Q(S)=Q(P)$ is the same for every subset $S\subseteq P$. To be able to use this property in our existing proofs, we require a monotonicity property: the set $Q(T)$ of queries assigned to a subset $T\subseteq S$ must be contained in $Q(S)$. If we can prove that for every $S\subseteq P$ the set $Q(S)$ contains a $(1+\eps)$-approximation to the optimal solutions of both $S$ and $P$, then we can extract such an approximation from $S$. For $k$-means, $k$-median and their variants, it was proven in~\cite{shyamalkumar2007efficient} that the optimal $k$ centers of a set $S$ can be approximated by a set of centers that is spanned by $O(k/\eps)$ points of $S$. By defining $Q(S)$ to be the union of the optimal centers of $P$ with all the centers spanned by $O(k/\eps)$ points of $S$, we get that $S$ is a weak coreset.
Note that the definition of $Q$ need not be constructive (we do not actually know the optimal center of $P$); it is needed only to bound the dimension of the query space as in Definition~\ref{vdim}.
\paragraph{Extracting a $(1+\eps)$-approximation from the coreset. }
It was proven in~\cite{FL11} that for problems such as $k$-median such weak coresets have dimension $O(k/\eps)$, i.e., independent of $d$. Unlike~\cite{FL11}, we suggest here a very simple way to extract the approximated solution from the coreset: compute the weighted coreset $(S,u)$ (of size independent of $d$) as defined in Algorithm~\ref{one}, and then use any given algorithm to compute a $(1+\eps)$-approximation set $C$ of $k$ centers on the coreset (or any other trusted heuristic that we hope computes such a set $C$). Since $C$ is not necessarily spanned by few points of $S$, it may not be a good approximation for the original set $P$, and we should not return it. However, the proof in~\cite{shyamalkumar2007efficient} is constructive and yields a near-linear time algorithm that, given such an approximate solution $C$, computes another $(1+O(\eps))$-approximation $C'$ with the additional property that $C'$ is spanned by $O(k/\eps)$ points of $S$. Hence, $C'$ is both a near-optimal solution for $S$ and a member of $Q(S)$, so it must be a $(1+\eps)$-approximation for the optimal solution of $P$.
\section{Appendix A: Merge and Reduce Tree}\label{introMergeReduce}
We now briefly introduce the previous technique for maintaining coresets in the streaming setting, due to Har-Peled and Mazumdar~\cite{sariela} and Bentley and Saxe~\cite{sax}. In this method, a merge-and-reduce tree is built by using an offline coreset construction as a blackbox. Previously, merge-and-reduce was the only known technique for building a streaming coreset for metric $k$-median, and it relies solely on the following two properties:
\begin{enumerate}
\item Merge: The union of $(k,\epsilon)$-coresets is a $(k,\epsilon)$-coreset.
\item Reduce: A $(k,\epsilon)$-coreset of a $(k, \delta)$-coreset is a $(k, \epsilon + \delta)$-coreset.
\end{enumerate}
The merge-and-reduce tree works as follows. There are buckets $B_i$ for $i \ge 0$. In each step, the bucket $B_0$ takes in a segment of $O(1)$ points from the stream. Then the tree works like counting in binary: whenever buckets $B_0$ to $B_{i-1}$ are full, these $i$ buckets are merged and then reduced by taking a $(k, \frac{\epsilon}{\log n})$-coreset, and the result is stored in $B_{i}$. Let $s$ be the space of the offline construction, which depends on $\epsilon$ as $\epsilon^{-a}$. At the end of the stream, $O(\log n)$ buckets have been used and each bucket uses $O(s \log^a n)$ space; this incurs a multiplicative overhead of $\Theta(\log^{a+1} n)$ in the storage requirement. The second factor comes from using the accuracy parameter $\frac{\epsilon}{\log n}$, which is necessary by Property 2, since the construction is compounded $O(\log n)$ times. Due to this compounding, the runtime is multiplied by a factor of $O(\log n)$.
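The following Python skeleton captures the binary-counter behavior of the tree. The callback name \texttt{offline\_coreset} and the list representation of (weighted) coresets are assumptions of this sketch, not of~\cite{sariela,sax}:
\begin{verbatim}
import math

def merge_and_reduce(stream_of_blocks, offline_coreset, eps, n):
    # offline_coreset(points, eps) is the offline construction used
    # as a black box; it must return a (weighted) list of points.
    acc = eps / max(math.log2(n), 1.0)  # per-level accuracy (Property 2)
    buckets = []                        # buckets[i]: coreset of 2**i blocks
    for block in stream_of_blocks:      # each block: O(1) stream points
        carry = offline_coreset(block, acc)
        i = 0
        while i < len(buckets) and buckets[i] is not None:
            carry = offline_coreset(buckets[i] + carry, acc)  # merge+reduce
            buckets[i] = None           # like carrying in binary counting
            i += 1
        if i == len(buckets):
            buckets.append(None)
        buckets[i] = carry
    return [p for b in buckets if b is not None for p in b]
\end{verbatim}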
\section{Appendix B: General Streaming Reduction} \label{streamingSection}
We present a general technique for converting an offline coreset construction into a streaming coreset construction with $O(\log n)$ overhead. Given a $\rho$-metric space $(X,D)$ (recall Definition~\ref{rho}), we build a query space $(P,w,Q,g)$ in the same way as in Definition~\ref{alphabeta}: $Q(P)=\br{C\subseteq X \mid |C|\leq k}$ and $g(p,C)=D(p,C):=\min_{c\in C}D(p,c)$.
Here $k$ is a positive integer that denotes how many centers may be used for the clustering. Our bicriterion algorithm is an adjustment of the algorithms of~\cite{BMO11} and [CCF], with the following important difference: our bicriterion is online, so we do not delete and reassign centers as in~\cite{BMO11}. This ``online'' property is critical for the algorithm to work and is one of the main technical ideas. Although a fully online bicriterion can require linear space, we maintain a division of the stream $P$ into a prefix $R$ and a suffix $P \setminus R$ such that our bicriterion is online on the suffix $P\setminus R$, while the prefix $R$ can be largely ignored. To maintain this online property for the suffix, we incur only a modest space increase, from $O(k \log n)$ to $O(\log( \frac{1}{\epsilon} ) k \log n)$. Given an online bicriterion (why this property is essential is explained further below), the offline coreset algorithms perform a non-uniform sampling procedure with carefully chosen probabilities that are defined by the bicriterion. Equipped with our new bicriterion algorithm, implementing the sampling procedure is a rather straightforward computation, which is explained in Section~\ref{section:Sampling}. As a result, we can implement any sampling-based coreset algorithm for $k$-median without merge-and-reduce and in one pass. As such, it is applicable to several coreset constructions (such as $k$-means and other $M$-estimators). In addition, we believe that our methods will work with other objective functions as well, such as $(k,j)$-subspace, and we hope that future work will investigate these directions.

Many clustering algorithms~\cite{Charikar, Guha, BMO11} maintain a weighted set $(B,u)$ of points (which are selected using a facility-location algorithm). Upon arrival of an update $(p,w(p))$ from the stream, this update is added to the set by $u(p) \gets u(p) + w(p)$. In these algorithms, only a single operation is performed on $(B,u)$, which we call $\ensuremath{\text{\footnotesize\textsf{MOVE}}}$. For two points $p, p' \in B$ with weights $u(p)$ and $u(p')$, the function $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(p,p')$ does the following: $u(p') \gets u(p') + u(p)$ and $u(p) \gets 0$. This essentially moves the weight at location $p$ to the location of $p'$. The motivation for building $(B,u)$ is compression; $(B,u)$ will be maintained over the stream $P$ in such a way that $|B| = O(\log |P|)$.
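As a minimal illustration of this primitive (a sketch; the dictionary-based weight storage is an assumption of the illustration, not of the cited algorithms):
\begin{verbatim}
class WeightedSet:
    # Weighted multiset over locations with the single MOVE operation
    # used by the clustering algorithms discussed above.
    def __init__(self):
        self.u = {}                                  # location -> weight

    def add(self, p, w):                             # arrival of (p, w(p))
        self.u[p] = self.u.get(p, 0.0) + w

    def move(self, p, p2):                           # MOVE(p, p2)
        self.u[p2] = self.u.get(p2, 0.0) + self.u.pop(p, 0.0)
\end{verbatim}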
Throughout this section, we will assume for ease of exposition that each point in the stream comes from a distinct location. This simplifies the analysis, allowing $\mathcal{B}$ to take only a single value for each input point. The algorithm works for general inputs without modification, requiring only more careful notation in the analysis. Additionally, we state the algorithm for unweighted input (where $w(p) = 1$ for all $p \in P$), and the parameter $n$ is defined as $|P|$. We still include $w(p)$ throughout the algorithm and analysis, as the analysis generalizes to weighted inputs, where the parameter $n$ is replaced by the sum of all weights (after normalizing the minimally weighted point to $1$).
\subsection{An Algorithm for building a coreset} \label{section:Bicriterion}
First, let us describe how the algorithm of~\cite{BMO11} works. We will modify this algorithm as part of our coreset construction. In this summary, we alter the presentation from that of~\cite{BMO11} to transition more fluidly to our modified version, but the algorithm remains the same.
\cite{BMO11} operates in phases $i \ge 1$. This means that the algorithm maintains a phase number $i$ (used internally by the algorithm), beginning in phase $i=1$. As the stream arrives, the algorithm may decide to increment the phase number. Let $(R_i,w_i)$ denote the prefix of the input received before the end of phase $i$, and let $\ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_i)$ denote the minimal value of $\bar{g}(R_i,w_i,C)$ over every $C \in Q$. When phase $i+1$ begins, a value $L_{i+1}$ is declared on Line~\ref{declareL} as a lower bound for the current value of $\ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_{i+1})$. The algorithm has computed $(M_i, u_i)$, which we inductively assume is a bicriterion approximation for $R_i$ (more precisely, a map $\mathcal{B} : R_i \rightarrow M_i$ such that $\cup_{p \in R_i} (\mathcal{B}(p), w(p)) = (M_i,u_i)$). However, to maintain polylogarithmic space, the algorithm pushes $(M_i, u_i)$ to the beginning of the stream and restarts the bicriterion construction. This means that the algorithm, at this point, restarts by viewing the stream as $(P, w - w_i + u_i)$ (i.e., replacing $(R_i,w_i)$ with $(M_i,u_i)$). Continuing in this way, the algorithm maintains a bicriterion $(M_{i+1},u_{i+1})$ for $(R_{i+1},w_{i+1} - w_i + u_i)$ (which is also a bicriterion for $(R_{i+1},w_{i+1})$ by Theorem~\ref{blackbox}) until the next phase change is triggered.

Now we explain our modifications to~\cite{BMO11} (see Algorithm~\ref{alg:ouralg}). The first step is that the bicriterion our algorithm builds must be determined ``online'' in the following sense: upon receiving a point $(x,w(x))$, the value of $\mathcal{B}(x)$ must be determined (and never altered) before receiving the next point from the stream. This generalization is necessary for the following reason. Suppose we connect an element $p$ to a center $b_1$. Later in the stream, we open a new center $b_2$ that becomes the closest center to $p$. However, using polylogarithmic space, we have already deleted $b_1$ and/or $p$ from memory, and the state of our algorithm is identical to the case where $b_1$ remains the closest center to $p$. Therefore the connections must be immutable, and this results in non-optimal connections.
\begin{figure}
\includegraphics[scale=0.25]{bicriterion}
\caption{A bicriterion approximation. Although connections are not optimal, the sum of all connection costs is $O(\ensuremath{\text{\footnotesize\textsf{OPT}}})$.}
\end{figure}
\begin{definition} \label{online}
An online $(\alpha,\beta)$-bicriterion is an algorithm that maintains a bicriterion $(B,t)$ over a stream $X$ and operates online in the following sense. Upon arrival of each point $p$, it is immediately decided whether $p$ is added to $B$, and then $t(p)$ is determined. Both of these decisions are permanent.
\end{definition}
Upon receiving an update $(p,w(p))$ from the stream, the algorithm may call $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(p,p')$ for the point $p' \in B$ nearest to $p$. In the analysis we use the function $\mathcal{B} : P \rightarrow B$ that maps each point $p$ to its immediate location after this initial move (either $p$ itself, or $p'$). If future moves are performed on $p$, this does not change the value $\mathcal{B}(p)$. $\mathcal{B}$ is not stored by the algorithm due to space constraints; only the value of $\mathcal{B}(p)$ for the most recent point $p$ is used by the algorithm, and older values are used only in the analysis.
We will show that $\mathcal{B}$ is an $(O(1),O(\log n))$-approximation that allows us to maintain a coreset over the stream. We now state Algorithm~\ref{alg:ouralg}. $\phi$ and $\gamma$ are constants (dependent on $\rho$) used in the analysis and defined as in~\cite{BMO11}. Each point has a flag that is either raised or lowered (given by the $Flag$ function). All points have their flags initially lowered, as set on Line 1. A lowered flag shows that the point is being read for the first time (being received from the stream), and a raised flag shows that the point is being re-read by the algorithm (having been stored in memory).
\begin{algorithm}
$Flag(x) = 0$ for all $x \in P$ \\
$L_1 \gets $ minimum $D(x,y)$ for any $x,y$ in the first $k$ distinct points \\
$i \gets 1$ \\
$K_1 \gets 0$ \\
$B \gets \emptyset$ \\
$u(x) \gets 0$ for all $x$\label{u1} \\
\For{each point $(x, w(x))$ received} {
$u(x) \gets u(x) + w(x)$\\
$y \gets \mathrm{argmin}_{y' \in B}\, D(x,y')$ (break ties arbitrarily) \\
$I \gets 1$ with probability $\min\br{\frac{w(x) D(x,y)\cdot k(1+\log_2 n)}{L_{i}}, 1}$, otherwise $I \gets 0$\\
\eIf{I}{
\If{$Flag(x) = 0$} {
$\mathcal{B}(x) \gets x$\\
$B \gets B \cup \{x\}$\\
}
}{
$K_i \gets K_i + w(x) D(x,y)$ \\
$u(y) \gets u(y) + u(x)$ \tcc{Step 1 of $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(x,y)$}
$u(x) \gets 0$ \tcc{Step 2 of $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(x,y)$}
\If{$Flag(x) = 0$} {
$\mathcal{B}(x) \gets y$\\
}
}
\If{$K_i > \gamma L_i$ or $|B| > (\gamma-1) (1 + \log_2 n)k$}{
$(M_i,u_i) \gets (B,u)$ \label{Mi} \\
$Flag(b) \gets 1$ for all $b \in B$ \\
Push $(B,u)$ onto the stream $(P,w)$ before the next point to read \label{push}\\
$B \gets \emptyset$ \\
$u(x) \gets 0$ for all $x$ \label{u2} \\
$L_{i+1} \gets \phi L_i$ \label{declareL} \\
$q \gets 0$ \\
$K_{i+1} \gets 0$\\
$i \gets i+1$\\
}
}
\caption{Input: integer $k$, $\rho$-metric space $(X,D)$, stream $(P,w)$ of $n$ weighted points from $(X,D)$. Output: $\mathcal{B}(x)$ after receiving each point $x$, and a weighted set $(M_i,u_i)$ after each phase $i \ge 1$}
\label{alg:ouralg}
\end{algorithm}
On Line~\ref{Mi}, $(M_i,u_i)$ is the weighted set $(B,u)$ as it exists at the end of phase $i$. We define the cost of $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(p,p')$ to be $w(p) D(p,p')$. The value $K_i$ is therefore the total cost of all moves performed in phase $i$. At a phase change, the set $(B,u)$ is pushed onto the beginning of the stream, with all points having a raised flag. This means that during the next $|B|$ iterations of the outer loop where a point $(x, w(x))$ is received, we actually receive a point from memory. We continue to process the stream after these $|B|$ points are read. The following theorem summarizes the guarantees of this algorithm. We note that, as stated, the algorithm's runtime of $O(nk \log n)$ is not correct; in fact, the number of phases may be arbitrarily large. However, using the same technique as detailed in Section 3.3 of~\cite{BMO11}, the number of phases can be bounded by $O(\frac{n}{k \log n})$ while requiring $O(k^2 \log^2 n)$ time per phase.
\begin{theorem}[\cite{BMO11}] \label{blackbox}
Let Algorithm~\ref{alg:ouralg} process a stream $(P,w)$ of at most $n$ points. Let $R_i$ denote the prefix of the stream received before the end of phase $i$, and let $K_i$ and $L_i$ be their final values from the algorithm (which are never modified after phase $i$).
With probability at least $1 - \frac{1}{n}$, the following statements all hold after processing each point:
\begin{enumerate}
\item $\sum_{j \le i} K_j \le \frac{\rho \phi \gamma}{\phi - \rho} \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_i)$ for every phase $i$
\item The total runtime is $O(nk \log n)$
\item For every phase $i$, $L_i \le \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_i) \le \phi L_i \le L_{i+1}$
\item At the execution of Line~\ref{Mi}, $M_i$ consists of $O(k \log n)$ points
\end{enumerate}
\end{theorem}
Part 1 of the preceding theorem, which bounds the total cost of all moves performed by the algorithm, also serves as an upper bound on the Earth-Mover distance between the stream $(P,w)$ and the maintained set $(B,u)$.
\begin{definition}[Earth-Mover Distance]
Let $(A,w_A)$ and $(B,w_B)$ be weighted sets in $(X,D)$. Moreover, let them be of equal weight in the sense that $\Sigma_{a \in A} w_A(a) = \Sigma_{b \in B} w_B(b)$. Define a ``movement'' from $(A,w_A)$ to $(B,w_B)$ to be a weighted set $(Y, v)$ in $(X \times X, D)$ such that $\cup_{(a,b) \in Y} (a,v(a,b)) = (A,w_A)$ and $\cup_{(a,b) \in Y} (b,v(a,b)) = (B,w_B)$. Then $d_{EM}(A,w_A,B,w_B)$ is the minimum of $\Sigma_{(a,b) \in Y} v(a,b) D(a,b)$ over all movements $Y$ from $(A,w_A)$ to $(B,w_B)$.
\end{definition}
Another way to view the preceding definition is in terms of probability distributions: over all joint distributions on $A \times B$ with marginal distributions $(A, w_A)$ and $(B,w_B)$, we seek the minimum possible cost (as defined) of any such joint distribution. From now on we will write a weighted set $A$ instead of $(A,w_A)$ when the meaning is clear. If we start with a set $A_0$ and apply $n$ operations of $\ensuremath{\text{\footnotesize\textsf{MOVE}}}$ until it becomes the set $A_n$, we can provide an upper bound on $d_{EM}(A_0,A_n)$ by summing $d_{EM}(A_i, A_{i+1})$ for $0 \le i < n$. This is a direct application of the triangle inequality. And if $A_{i+1}$ is obtained from $A_i$ by applying $\ensuremath{\text{\footnotesize\textsf{MOVE}}}(p,p')$, then $d_{EM}(A_i, A_{i+1}) = D(p,p') w(p)$, the cost of this move.

The Earth-Mover distance is important for clustering problems for the following reason. For any weighted sets $(P,w)$ and $(B,u)$ and query $C$, $|\bar{g}(P,w,C) - \bar{g}(B,u,C)| \le d_{EM}(P,w,B,u)$. This is immediate from a repeated application of the triangle inequality (proofs are found in Theorem 2.3 of~\cite{Guha} as well as in~\cite{Charikar, BMO11, Guha2}).
\begin{theorem} \label{LayerOne}
There exists an algorithm that stores $O(\log \left( \frac{1}{\epsilon} \right) k \log n)$ points and maintains, for every prefix $(R,w')$ of the stream $(P,w)$: (1) a weighted set $(M,u)$ such that $d_{EM}(M,u,R,w') \le \epsilon \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R)$, and (2) an $(O(1),O(\log (\frac{1}{\epsilon}) \log n))$-bicriterion $\mathcal{B}$ for $(R,w')$. Moreover, this bicriterion $\mathcal{B}$ is computed online, in the sense that $\mathcal{B}(x)$ is determined upon receiving $x$ from the stream.
\end{theorem}
\begin{proof}
The algorithm will consist of running Algorithm~\ref{alg:ouralg} and storing certain information for the $\lambda$ most recent phases (where $\lambda$ depends on $\epsilon$). Let $i$ be the current phase number, and let $(R,w')$ be the points received so far. We remind the reader that when the meaning is clear we suppress notation and write $R$ for the weighted set $(R,w')$, and likewise for $(M,u)$.
Define $\lambda = 2+\lceil \log_{\phi} ( \frac{\rho \gamma}{(\phi - \rho)\epsilon} ) \rceil$. The prefix $R$ and the set $M$ in the statement of the theorem will be $R_{i-\lambda}$ and $M_{i-\lambda}$. To upper bound $d_{EM}(R,M)$, which by definition is the minimum cost of any movement from $R$ to $M$, we note that one such movement is the set of moves carried out by the algorithm through phase $i-\lambda$, whose cost is $\sum_{j=1}^{i-\lambda} K_j$. Part 1 of Theorem~\ref{blackbox} then shows that $d_{EM}(R,M) \le \Sigma_{j = 1}^{i-\lambda} K_j \le \frac{\rho \phi \gamma}{\phi - \rho} \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_{i-\lambda})$. Iterating Statement 3 of the theorem (together with the update rule $L_{j+1}=\phi L_j$ of Line~\ref{declareL}) shows that $\ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R_{i-\lambda}) \le \phi^{1-\lambda} \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R)$. Therefore $d_{EM}(R,M) \le \frac{\rho \phi \gamma}{\phi - \rho} \phi^{1-\lambda} \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R) \le \epsilon \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(R)$, as desired.

As for the second statement of the theorem, the algorithm defines the map $\mathcal{B}$, and we are currently interested in the restriction of $\mathcal{B}$ to $R \setminus R_{i-\lambda}$. $\mathcal{B}$ maps to at most $O(k \log n)$ points per phase; this is guaranteed by Statement 4 (a direct result of the termination condition on Line 17). Over the last $\lambda$ phases, this then maps to $O(\lambda k \log n) = O(\log(\frac{1}{\epsilon}) k \log n)$ points. Therefore we have that $\beta = O(\log(\frac{1}{\epsilon}) \log n)$. And the value of $\alpha$ is immediate from Statement 1 of Theorem~\ref{blackbox}, since $\mathcal{B}$ incurs only a subset of the costs $K_j$ (the subset that comes from the new portion of the stream $R_j \setminus R_{j-1}$).

As a final note, we do not store $\mathcal{B}(P)$ in memory (it may require linear space). For the algorithms in the following sections it is only required to know its value for the most recent point received; previous values are used only in the analysis.
\end{proof}
\subsection{Maintaining a coreset over the stream} \label{section:Sampling}
In the previous section, we presented an algorithm that maintains an $(\alpha,\beta)$-approximation of the stream (given by the function $\mathcal{B} : P \rightarrow B$). In this section we show how we can use this approximation to carry out the coreset construction of Algorithm~\ref{one} on an insertion-only stream. In the offline construction, a sample $S$ of $m$ points is taken from $P$ according to a distribution where point $p$ is sampled with probability depending on $D(p,\mathcal{B}(p))$, $|B|$, and $n_{\mathcal{B}(p)}$ (the total weight of the points connected to the center $\mathcal{B}(p)$, written as $\Sigma_{q \in P_b} w(q)$ where $b = \mathcal{B}(p)$); the specific formula for the probability is written below (it comes from Line~\ref{l4} of Algorithm~\ref{one}). All three of these quantities can easily be maintained over the stream using Algorithm~\ref{alg:sampling}, since $\mathcal{B}$ is online (i.e., $\mathcal{B}(p)$ never changes).
$$\mathrm{pr}(p) = \frac{w(p)D(p,\mathcal{B}(p))}{2\sum_{q\in P}w(q)D(q,\mathcal{B}(q))}+\frac{w(p)}{2|B|\sum_{q\in P_b}w(q)}$$
Upon receiving a point $p$, we assign $r(p)$ a uniform random number in the interval $(0,1)$. This is the threshold for keeping $p$ in our sample $S$: we keep the point $p$ if and only if $r(p) < \mathrm{pr}(p)$. For each point $p$, $\mathrm{pr}(p)$ is non-increasing as the stream progresses (this is immediate from the formula and from the fact that clusters never decrease in size, since $\mathcal{B}$ is online). Therefore, after receiving a point, we update $\mathrm{pr}(s)$ for each $s \in S$ and delete any $s$ that drops below its threshold, i.e., with $r(s) \ge \mathrm{pr}(s)$. Once a point crosses the threshold it may be deleted, since $\mathrm{pr}(s)$ is non-increasing and so it will remain below the threshold at all future times. In this way, the construction exactly matches the output as if the offline Algorithm~\ref{one} had been used.
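In code, the heart of Algorithm~\ref{alg:sampling} (stated next) is just a few lines. The callback \texttt{current\_pr}, which recomputes $\mathrm{pr}(s)$ from the maintained quantities, is an assumption of this sketch:
\begin{verbatim}
import random

def threshold_sample(stream, current_pr):
    # Keep p while r(p) < pr(p). Since pr(p) only decreases as the
    # stream grows, a point that crosses its threshold can be pruned
    # forever; the surviving set matches the offline construction.
    S, r = set(), {}
    for p in stream:
        r[p] = random.random()
        S.add(p)
        for s in list(S):
            if current_pr(s) <= r[s]:   # threshold crossed: prune
                S.discard(s)
    return S
\end{verbatim}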
\begin{algorithm}
$S \gets \emptyset$ \\
\For{each point $p$ that arrives} {
Assign $r(p)$ a uniform random number in $(0,1)$ \\
$S \gets S \cup \{p\}$ \\
Compute $\mathrm{pr}(p)$ according to Line~\ref{l4} of Algorithm~\ref{one} \\
\For{each point $s \in S$} {
Update $\mathrm{pr}(s)$ \\
\If{$\mathrm{pr}(s) \le r(s)$} {
Delete $s$ from $S$ \\
}
Update $u(s)$ according to Line 6 of Algorithm~\ref{one}
}
}
\caption{Input: integer $k$, $\rho$-metric space $(X,D)$, stream $P$ of $n$ points from $(X,D)$, online bicriterion $\mathcal{B}$, sample size $m \ge 1$. Output: a coreset $S$}
\label{alg:sampling}
\end{algorithm}
Algorithm~\ref{alg:ouralg} provides the function $\mathcal{B}$ and a weighted set $M_{i-\lambda}$. Beginning in phase $i-\lambda+1$, we run Algorithm~\ref{alg:sampling}; this outputs the sample $S_i$ for phases $i-\lambda+1$ through the current phase $i$. The following theorem shows that $M_{i-\lambda} \cup S_i$ is a coreset for the stream. Of course, we need to carry out this construction for each of the $\lambda = O(\log (\frac{1}{\epsilon}))$ most recent phases, so the space is multiplied by this factor.

We return to Lemma~\ref{leolem} and $r$-Log-Log Lipschitz functions $\tilde{D}$. Given a metric space $(X,\mathrm{dist})$, the space $(X,D)$ where $D = \tilde{D}(\mathrm{dist})$ is a $\rho$-metric space for $\rho = \max \{2^{r-1},1\}$. It is well known that most $M$-estimators can be recast as a Log-Log Lipschitz function with a low constant value of $r$. For example, $k$-means has $r=2$.
\begin{theorem}
There exists a single-pass streaming algorithm requiring $O(\epsilon^{-O(1)} k \log n (\log n + \log k + \log \frac{1}{\delta}))$ space that maintains a $(k,\epsilon)$-coreset for the $k$-median clustering of an $r$-Log-Log-Lipschitz function $\tilde{D}$ on a stream of at most $n$ elements, with probability at least $1-\delta$.
\end{theorem}
\begin{proof}
Running Algorithm~\ref{alg:ouralg} (producing the $M_i$) and Algorithm~\ref{alg:sampling} (producing the $S_i$) in parallel, we will show that $M_{i-\lambda} \cup S_i$ is the desired coreset of the stream. We already have by Theorem~\ref{LayerOne} that $d_{EM}(M_{i-\lambda},R_{i-\lambda}) \le \epsilon \ensuremath{\text{\footnotesize\textsf{OPT}}}_k(P) \le \epsilon \bar{g}(P,w,C)$ for any set $C$ of size $k$. Also, by Theorem~\ref{mainthm2} we have that $S_i$ is a $(k,\epsilon)$-coreset for $P \setminus R_{i-\lambda}$. This is because (in the statement of Theorem~\ref{mainthm2}) we are required to carry out the construction of Algorithm~\ref{one} using $m \ge \frac{ck(t+\beta )}{\eps^2}\left(d\log t+\log (\beta k)+\log\left(\frac{1}{\delta}\right) \right)$ where $t = \alpha / \psi$. It was shown in Lemma~\ref{leolem} that $\psi = (\epsilon/r)^r$. We are using the $(O(1),O(\log (\frac{1}{\epsilon}) \log n))$-approximation $\mathcal{B}$, and in~\cite{FL11} it is shown that $d = O( \log n)$ for $k$-median in a $\rho$-metric space.
Therefore, the minimal value of $m$ that satisfies the hypotheses of the theorem is $O(\frac{k(\epsilon^{-r}+\log n )}{\eps^2}\left(\log n\log \epsilon^{-r} +\log ( k \log n)+\log\left(\frac{1}{\delta}\right) \right) )$. Simplifying notation, this is $O(\epsilon^{-O(1)} k \log n (\log n + \log k + \log \frac{1}{\delta}))$. We write $\ensuremath{\text{\footnotesize\textsf{COST}}}(A,C)$ to denote $\bar{g}(A,w_A,C)$ to simplify notation; by $A \cup B$ we mean $(A\cup B, w_{A} + w_{B})$. Taking the union, we get that $|\ensuremath{\text{\footnotesize\textsf{COST}}}(P,C) - \ensuremath{\text{\footnotesize\textsf{COST}}}(M_{i-\lambda} \cup S_i,C)| \le |\ensuremath{\text{\footnotesize\textsf{COST}}}(R_{i-\lambda},C) - \ensuremath{\text{\footnotesize\textsf{COST}}}(M_{i-\lambda},C)| + |\ensuremath{\text{\footnotesize\textsf{COST}}}(P\setminus R_{i-\lambda},C) - \ensuremath{\text{\footnotesize\textsf{COST}}}(S_i,C)|$. The first term is upper-bounded by the Earth-Mover distance $d_{EM}(M_{i-\lambda},R_{i-\lambda}) \le \epsilon\, \ensuremath{\text{\footnotesize\textsf{COST}}}(P,C)$, and the second term is upper-bounded by $\epsilon\, \ensuremath{\text{\footnotesize\textsf{COST}}}(P \setminus R_{i-\lambda},C) \le \epsilon\, \ensuremath{\text{\footnotesize\textsf{COST}}}(P,C)$, since $S_i$ is a $(k,\epsilon)$-coreset for $P\setminus R_{i-\lambda}$. Therefore $M_{i-\lambda} \cup S_i$ is a $2\epsilon$-coreset for the stream $P$, and the proof is complete after rescaling $\epsilon$.
\end{proof}
The previous theorem has, as special cases, streaming coresets for $k$-median, $k$-means, $L_p$, Cauchy estimators, Tukey estimators, etc. This is the first algorithm that does not use merge-and-reduce for building coresets over streams for any of these problems. Moreover, the constant in the exponent of $\epsilon$ is small; for the example of $k$-median, the $\epsilon$-dependence is $\epsilon^{-3} \log(1/\epsilon)$.
\bibliographystyle{alpha}
\section{Introduction}
As the deployment of 5G technology continues in different parts of the world, the research community has started focusing on 6G systems and how this generation can revamp wireless networks. Avant-garde applications have already been defined for life in 2030 and beyond, such as the super-smart city, holographic communication, wireless brain-computer interactions, tactile/haptic-based communication, connected autonomous vehicles, and telesurgery. One of the most important promises of 6G is to actualize the concept of the Internet of Everything (IoE), where uninterrupted, ubiquitous connectivity is provided for anyone and anything through hybrid terrestrial/non-terrestrial networks. Examples include connectivity for users in rural and disaster-struck areas as well as those traveling on ships or airplanes, vehicle-to-everything (V2X) connectivity where vehicles can communicate with everything (i.e., other vehicles, pedestrians, cloud centers, and infrastructure units), the Internet of Underwater Things, and the Internet of Space Things (see Figure \ref{6G}). This all-around connectivity must be accompanied by very high spectral and energy efficiency, ultra-low latency, large capacity, high security, etc., so as to offer an excellent level of quality of experience (QoE) for heterogeneous 6G users with different service requirements. The path to this grandiose goal is not free of challenges, and major breakthroughs are urgently called for. Very recently, the groundbreaking technology of the reconfigurable intelligent surface (RIS) has emerged which, with its unique features, is expected to be a major actor in the evolution of next-generation systems. RIS is a software-controlled metasurface, composed of semi-passive elements which can be modified to shape the radio waves according to specific needs. Thanks to recent advancements in micro/nano electromechanical systems (MEMS/NEMS), RIS elements can be reconfigured dynamically and in real time, allowing for instantaneous performance optimization of communication systems and catering to immediate changes in wireless networks. Reshaping signals midway through the communication path, RIS makes it possible to optimize channel properties in addition to the optimization of transmission/reception schemes, and fulfills the long-awaited dream of customizing the radio environment. This revolutionary technology can bring substantial performance gains to different types of wireless systems, ranging from terrestrial networks to aerial, satellite, maritime, and even underwater communication systems \cite{Steven,Xu}. The far-reaching effects of RIS on the behavior of wireless networks can make this technology an imperative constituent of the upcoming 6G-enabled \textit{integrated terrestrial/non-terrestrial} (abbreviated as INTENT throughout this article) IoE environments, where seamless everywhere connectivity is demanded with high performance and at a reasonable cost.

With the emergence of the RIS technology, a question has been raised in the research community regarding the merits of this technology and the advantages of using RIS instead of conventional relaying techniques. Though a broad range of performance improvements is foreseen for future RIS-aided wireless systems, there have been doubts about whether it is worth resorting to RIS when traditional relays, already researched and tested on large scales, have proved useful for offering performance advancements.
A key point which has to be considered for addressing this concern is the expense that must be incurred to achieve high performance targets in future 6G-enabled INTENT networks. Note that by expense we do not mean only financial spending, but any kind of resource that has to be utilized to fulfill the promised performance objectives. Traditional relaying technologies, although endowing wireless communication networks with some degree of performance improvement, possess several handicaps which impede their seamless integration into future INTENT networks. High energy consumption, spectrum efficiency losses in half-duplex (HD) relaying systems, the complex self-interference cancellation (SIC) procedure in full-duplex (FD) relays, high hardware cost, increased complexity and processing delay in decode-and-forward (DF) relays, and noise amplification in amplify-and-forward (AF) relays are some of the drawbacks of relaying techniques which can be relieved to a large extent if RIS is substituted for traditional relays. Having this in mind, we emphasize that RIS is much more than a replacement for conventional relays, and its capabilities go far beyond relaying.
\begin{figure}[t!]
\centering
\includegraphics[width=3.5in]{6G.pdf}
\caption{ 6G's ubiquitous connectivity examples.}
\label{6G}
\end{figure}
Given the promise of 6G for ubiquitous connectivity in all dimensions and the high potential of RIS to efficiently connect different segments of the INTENT environment, it is imperative to take a closer look into the role of RIS in the imminent 6G-enabled INTENT ecosystem and inspect how RIS can contribute to the evolution of the promised omnipresent connectivity. Motivated by this, the aim of this article is to comprehensively investigate the prospective contributions of RIS to the development and success of 6G-enabled INTENT networks and to delineate the challenges that must be overcome for the smooth functioning of RIS in all dimensions of this integrated ecosystem. We start by presenting the architecture of RIS-enhanced INTENT networks. Then, we discuss the direct and indirect roles of RIS in facilitating the move towards this 6G-enabled integrated framework. Specifically, we first elucidate the performance gains that can be directly achieved by the inclusion of RIS into different segments of the INTENT environment. Subsequently, we introduce RIS as a technology enabler which can indirectly contribute to the progress of 6G-enabled INTENT networks by transforming immature technologies from visionary concepts into real game-changers. Related research challenges for RIS-enhanced environments are finally presented to light the way for researchers in their future endeavors to develop proficient solutions for RIS-aided 6G systems. It is worth mentioning that there exist a few research works which review the applications of RIS for improving the performance of 6G systems \cite{Pan,Bash}. However, to the best of our knowledge, no prior work has scrutinized the role of RIS in future 6G systems from the perspective of INTENT networks, where all dimensions including ground, air, space, ocean, and underwater are considered.
\section{Architecture of RIS-Enhanced INTENT Networks }
Figure \ref{Arch} illustrates an example architecture for the impending RIS-enhanced INTENT networks, where RIS is deployed and utilized in different parts of the integrated environment in order to enhance the network performance in all dimensions.
Though we will provide a comprehensive discussion on the role of RIS in improving the performance of INTENT networks in subsequent sections, it is useful to take a brief look at some future deployment scenarios of RIS here.
\begin{figure*}[t!]
\centering
\includegraphics[width=7.4in]{Arch.pdf}
\caption{ Architecture of RIS-enhanced INTENT networks.}
\label{Arch}
\end{figure*}
According to Figure \ref{Arch}, building facades and windows can be coated with RIS to serve outdoor and indoor users, respectively. The RISs mounted on outdoor structures such as billboards and building walls assist in vehicle-to-vehicle (V2V) communications by establishing reliable links between the vehicles. Unmanned aerial vehicles (UAVs) and high altitude platforms (HAPs) can also be equipped with RIS, allowing them to passively reflect signals to the intended receivers without consuming excessive energy. Further, installing RIS on the walls and windows of ships will potentially enhance the performance of maritime communications and offer an improved QoE for onboard users. Additionally, underwater communications can benefit from the deployment of RIS, where the extreme path-loss and severe frequency selectivity due to scattering from uneven surfaces can be alleviated with the help of RIS \cite{Steven}. Moreover, setting up RIS on satellites will endow them with more control over the challenging space conditions, thus boosting the performance of satellite communications (e.g., inter-satellite communications, satellite-airplane communications, etc.) without burdening the satellites with exorbitant costs in terms of complexity and energy usage \cite{Naveed}. All in all, RIS is envisioned to play a pivotal role in the future 6G ecosystem, where seamless connectivity is needed not only on the ground but also in air, space, and water.
\section{The Direct Role of RIS in 6G-Enabled INTENT Networks}
The functionalities of RIS can be classified into three main categories: signal focusing, signal suppressing, and rank improvement. By taking advantage of these functionalities, striking performance gains can be achieved in future 6G-enabled INTENT networks. In the following, we expound on how RIS can directly improve the performance of 6G-enabled INTENT networks with its unparalleled features and functionalities. For readers' convenience, a schematic overview of RIS functionalities and their implications for enhancing the performance of INTENT networks is provided in Figure \ref{Dir}.
\subsection{Coverage}
Each RIS element individually modifies the signals incident on the surface. This modification can be done in a unified manner, such that the signals reflected by the RIS elements are constructively combined with those of the direct path, resulting in a strengthened signal at the receiver. This way, RIS creates strong virtual links between the transmitter and the receiver and makes the communication resilient against blockage and fading. This RIS functionality can be utilized for extending the coverage of communication systems in which efficient direct communication is hard to achieve. While the coverage extension property of RIS has been widely explored for terrestrial systems, this feature has useful implications for non-terrestrial communications as well. Maritime and underwater communications are two apt examples. In maritime communications, when a ship gets farther from the shore, its connection with the onshore base station (BS) gradually degrades.
The BS can increase its transmit power to maintain the ship's connectivity over longer distances; however, there is always a maximum allowable transmit power for the BS so as to prevent harmful interference to other users. In such a scenario, UAVs can be used as active relays to facilitate the communication between ships and land-based BSs; however, a more efficient strategy would be to use RIS-equipped UAVs, which can smartly reflect the signals in a passive manner. Maintaining connectivity in underwater communications is also very challenging, because acoustic signal propagation in the underwater medium suffers from substantial signal scattering and absorption loss. The inclusion of RIS has recently been suggested for tackling the impaired connectivity in underwater communications, where the RIS can either be attached to the ground and autonomous underwater vehicles (AUVs) or float beneath the ocean surface \cite{Steven}.
\subsection{Capacity}
A line-of-sight (LoS) channel without any multipath propagation is rank-deficient, so spatial multiplexing cannot be used. RIS is able to create additional paths with distinct spatial angles, thereby providing more degrees of freedom for the transmission of multiple data streams and improving the system capacity \cite{Oz}. In this respect, RIS allows numerous users to be supported concurrently, which is beneficial for future Internet of Everything (IoE) applications in INTENT networks. For instance, satellite communications usually suffer from rank deficiency because rich scattering and non-LoS propagation are absent in this type of communication, making multiplexing gains hard to achieve. In this context, a satellite-mounted RIS can modify the properties (e.g., polarization) of the illuminating electromagnetic wave such that multiple data streams can be transmitted concurrently by the satellite. The same is true for UAV communications, where the rank of the LoS channel between the UAV and terrestrial/maritime users can be increased with the help of RIS.
\begin{figure*}[t!] \centering \includegraphics[width=7.4in]{Dir.pdf} \caption{ The direct role of RIS for improving the performance of 6G-enabled INTENT networks.} \label{Dir} \end{figure*}
\subsection{Security}
One of the most important functionalities of RIS is its ability to suppress signals at unintended receivers, whereby the RIS elements collaboratively modify the incident signals such that the reflected signals cancel out the direct ones. This capability of RIS leads to improved security in different segments of INTENT environments. In satellite communications, for example, the downlink transmission from the satellite to ground users is prone to eavesdropping threats. By suppressing signal leakage to eavesdroppers, RIS can help protect downlink satellite communications. Multi-access edge computing (MEC) systems are another example which, considering the oncoming computation-intensive applications of 6G, are expected to be widely used in the near future. In these systems, either fixed (e.g., BSs) or mobile (e.g., vehicles, UAVs, satellites, etc.) entities with sufficient computational capacities are used to assist constrained nodes in performing their resource-intensive tasks. RIS can play a crucial role in MEC systems by providing a secure offloading framework, where the offloading process is protected against unauthorized interception of confidential information.
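To make the focusing and suppression functionalities discussed above more concrete, the following minimal Python sketch models a single-antenna transmitter, an $N$-element RIS, a legitimate user, and an eavesdropper. All channel values are synthetic random draws (assumptions, not measurements), and the phase-alignment rule shown is the textbook single-user one; this is a sketch of the principle rather than an implementation of any specific scheme from the literature.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of RIS elements (illustrative assumption)

# Synthetic unit-variance Rayleigh channel gains:
#   h: transmitter -> RIS,  g_u: RIS -> user,  g_e: RIS -> eavesdropper,
#   d_u, d_e: direct transmitter -> user / eavesdropper links.
h   = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g_u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g_e = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
d_u = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
d_e = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Signal focusing: pick each reflection phase so that the n-th cascaded
# path h[n] * g_u[n] adds coherently with the direct path d_u.
phases = np.angle(d_u) - np.angle(h * g_u)
phi = np.exp(1j * phases)  # unit-modulus reflection coefficients

power_user = np.abs(d_u + np.sum(phi * h * g_u)) ** 2  # coherent, ~ N^2
power_eve  = np.abs(d_e + np.sum(phi * h * g_e)) ** 2  # incoherent, ~ N

print(f"Received power at user:         {power_user:8.1f}")
print(f"Received power at eavesdropper: {power_eve:8.1f}")
\end{verbatim}
With the phases matched to the user's cascaded channel, the received power at the user grows on the order of $N^2$, while the eavesdropper only sees an incoherent combination; a suppression design would instead choose the phases so as to (approximately) null the eavesdropper's aggregate channel.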
\subsection{Reliability}
By adding strong propagation paths between the transmitter and the receiver, RIS increases the diversity gain and raises the chance of successful information transmission, thereby enhancing the reliability of communication. This is of paramount importance for 6G-enabled IoE services in all dimensions of ground, air, space, and water. For instance, ultra-high reliability is required for autonomous driving and remote piloting applications in order to ensure the safe operation of the vehicles. Likewise, industrial automation systems have stringent reliability requirements.
\begin{table*}[t!] \centering \includegraphics[width=6in]{Table1.pdf} \caption{ Performance metrics and related RIS functionalities. }\label{Table1} \end{table*}
\subsection{Latency}
RIS can be of remarkable importance for reducing the end-to-end delay of communication, as it effectively acts as a self-interference-free FD relay. Taking intelligent transportation systems (ITS) as an example, real-time information must be delivered on time in order to ensure the efficient operation of vehicular applications such as autonomous driving. However, communication in vehicular environments is usually subject to unstable channels and frequent link interruptions, which make direct vehicle-to-everything (V2X) communications challenging or even impossible. As a result, multi-hop delivery of information may be needed, which inevitably incurs excessive delays and increases the end-to-end transmission latency. The proper deployment of RIS can noticeably reduce the communication delay and help in the timely delivery of delay-sensitive information. As another example, maritime communications, including ship-to-shore and ship-to-ship communications, are envisioned to be facilitated by UAVs in future 6G systems. In such scenarios, equipping UAVs with RIS will considerably reduce the transmission delay compared to the case where UAVs apply traditional HD relaying. What is more, as RIS improves the reliability of communication, the success rate of information transmission is increased, which in turn reduces the required packet retransmissions and results in lower end-to-end transmission latencies. This is favorable for delay-intolerant applications such as online gaming, which are promised to be supported for all users, including those aboard airplanes and ships.
\subsection{Energy Efficiency}
RIS can bring significant energy efficiency gains to future INTENT networks by letting communicating devices achieve a desirable performance without consuming large amounts of energy. For example, RIS-equipped UAVs that passively reflect the signals from the transmitter to the receiver save considerable energy compared to traditional active UAV relays, as the former do not need sophisticated, energy-hungry signal processing procedures. Additionally, installing RIS on satellites can greatly reduce the energy consumption and boost the energy efficiency of satellite communications, because RIS consumes less power than traditional reflectarrays. In maritime communications, onshore BSs and marine vehicles can use less power for communicating with a designated ship if it is equipped with RIS. The role of RIS in reducing energy expenditure is also substantial for low-power devices, e.g., sensors, which will be pervasively deployed in all parts of the INTENT networks of the future. For instance, the sensors deployed in harsh environments such as underground and underwater usually face unfavorable communication conditions due to severe path loss.
As a result, these sensors need to use excessive energy to transmit the gathered data to other sensors or the sink node, which may lead to their premature energy depletion. The utilization of RIS has recently been proposed for tackling the problems that sensors encounter in such challenging environments \cite{Steven}. The improved communication reliability that RIS brings to INTENT networks further enhances energy efficiency, since less energy needs to be expended on packet retransmissions.
\subsection{Spectral Efficiency}
Owing to its ability to simultaneously boost the power of the desired signals and suppress unwanted interference, RIS has been extensively investigated in the literature for spectral efficiency enhancement and has proved substantially useful in various communication paradigms, e.g., aerial-terrestrial communication \cite{Cao}, inter-satellite communication \cite{Tek}, etc. Hence, there is no doubt that RIS will play a major role in the realization of data-intensive applications in all dimensions of the 6G-enabled IoE ecosystem.
\subsection{Summary}
RIS can remarkably enhance the performance of 6G-enabled INTENT networks through its signal focusing, signal suppression, and rank improvement functionalities. Table \ref{Table1} lists the reviewed performance metrics and specifies the most relevant RIS functionalities for improving the network performance from the viewpoint of each metric.
\begin{figure*}[t!] \centering \includegraphics[width=7.4in]{Ind.pdf} \caption{ Application of RIS to WPC, NOMA, BackCom, and THz communication systems.} \label{Ind} \end{figure*}
\section{The Indirect Role of RIS in 6G-Enabled INTENT Networks}
So far, we have discussed how RIS can improve the performance of 6G-enabled INTENT networks using its unique features and functionalities. Nevertheless, the contributions of RIS to the realization of next-generation systems and applications go beyond what has hitherto been explored. Specifically, RIS can take on the role of a technology enabler by helping other technologies grow and prosper. There are many promising technologies that have the potential to accelerate the realization of the 6G-enabled IoE in future INTENT networks; however, their inherent shortcomings may prevent or delay their wide-scale actualization. The integration of RIS with these technologies provides them with a singular opportunity to fulfill their promise and find their place in the upcoming 6G-enabled IoE ecosystem. Some of these technologies are overviewed in the following.
\subsection{Wireless Powered Communication}
Wireless powered communication (WPC) is a technique that allows devices to collect energy from ambient and/or dedicated radio frequency (RF) signals and use the harvested energy to power their communication. WPC was once expected to become an indispensable component of different terrestrial and non-terrestrial wireless networks; however, the low efficiency of wireless power transfer (WPT) limits the practicality of this important technology. In this respect, RIS-empowered WPC is an effective integration of the RIS and WPC technologies that can enable green and self-sustainable communication in all segments of the 6G-enabled INTENT networks.
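As a rough illustration of why WPT efficiency is the bottleneck, and of how the passive beamforming gain of RIS can ease it, consider the following minimal Python sketch of the classical harvest-then-transmit protocol. All numbers (one-second slot, harvesting efficiency, path gains, noise power, and the assumed 20 dB RIS gain) are illustrative assumptions, not results from the literature.
\begin{verbatim}
import numpy as np

def bits_per_slot(gain, p_tx=1.0, eta=0.5, tau=0.5,
                  bandwidth=1e6, noise_power=1e-12):
    """Harvest-then-transmit over a one-second slot: the device harvests
    for a fraction tau of the slot, then uses the stored energy to
    transmit during the remaining 1 - tau. All values are illustrative."""
    harvested_energy = eta * p_tx * gain * tau   # joules
    p_uplink = harvested_energy / (1.0 - tau)    # watts
    snr = p_uplink * gain / noise_power
    return (1.0 - tau) * bandwidth * np.log2(1.0 + snr)

g_direct = 1e-10          # assumed end-to-end gain of the direct link
g_ris = 100.0 * g_direct  # assumed gain with RIS passive beamforming

print(f"Without RIS: {bits_per_slot(g_direct):.2e} bits per slot")
print(f"With RIS:    {bits_per_slot(g_ris):.2e} bits per slot")
\end{verbatim}
Note that the channel gain enters twice, once in the downlink energy harvesting phase and once in the uplink data phase, which is why even a modest RIS beamforming gain can disproportionately improve the end-to-end throughput of WPC devices.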
\subsection{Non-Orthogonal Multiple Access}
Non-orthogonal multiple access (NOMA) is a cutting-edge concept that is foreseen to considerably enhance the spectral efficiency by increasing the number of users served within the available time, frequency, and code resources. Although the integration of NOMA into INTENT wireless communication systems attracted a great deal of research in the past decade \cite{Zhu}, this technique was excluded from the 5G new radio (NR) project because of its insignificant practical performance gain over its orthogonal multiple access (OMA) counterpart. Since the success of NOMA hinges on the channel conditions of the different users, the rank improvement property of RIS can be exploited to boost the performance of NOMA-based systems.
\subsection{Backscatter Communication}
In backscatter communication (BackCom), devices communicate with their receivers by modulating and reflecting incident waves. Although this innovative technology has been widely used in RF identification (RFID) applications, its utilization in other communication systems has not gone far beyond theory due to its short transmission range. The performance of BackCom can be greatly elevated with the assistance of RIS, resulting in broader coverage and higher transmission rates for backscatter-enabled devices. RIS can also be leveraged to support underwater BackCom, where deep-sea sensors communicate with ships by backscattering acoustic signals \cite{Jang}.
\subsection{Terahertz Communication}
With the dream of massive ubiquitous connectivity on the verge of becoming a reality, researchers have recently started to look into the possibility of using the terahertz (THz) band for communication. The THz frequency band offers abundant bandwidth for the ever-growing number of wireless devices and can provide orders-of-magnitude higher rates compared to sub-6 GHz frequency bands. Migrating to the THz band, however, is a non-trivial task. In particular, the high propagation loss, penetration loss, and molecular absorption of signals in these frequency bands limit the transmission range and reduce the number of propagation paths between the transmitter and the receiver. RIS, if properly deployed, can provide a flexible and scalable solution to cope with the limited-coverage problem of THz transmissions and make THz a competent technology for serving the massive number of users in the future 6G-enabled INTENT networks.
\subsection{Summary}
Many of the disruptive technologies proposed in the literature possess intrinsic limitations that hamper their wide adoption in practical INTENT environments. RIS can help unleash the potential of these technologies and make them active participants in the 6G era. Examples include WPC, NOMA, BackCom, and THz communication, which, as illustrated in Figure \ref{Ind}, can all be assisted by RIS.
\section{Relevant Research Challenges}
Although RIS is envisaged to be a decisive actor in enhancing the communication performance in all dimensions of the upcoming 6G-enabled INTENT environments, there exist some research challenges that need to be competently addressed. These challenges include, but are not limited to, the following.
\subsection{Dynamic Environmental Conditions}
Dynamic environmental conditions give rise to uncertain channel properties and complicate network optimization.
For instance, bad weather conditions and strong winds result in random UAV fluctuations, known as jittering, which alter the air-to-ground channel \cite{Banagar} and make the optimal design of UAV-mounted RIS phase shifts challenging or even impossible. Moreover, cold temperatures adversely affect the lifetime of the RIS- and UAV-embedded batteries, which further complicates the optimal design of the system, since a compromise must be made between performance maximization and energy consumption. Similarly, variations in atmospheric conditions, e.g., rain impairments, have major impacts on the signal quality in satellite-based communications \cite{Ali} and must be carefully accounted for when designing RIS-assisted satellite communications. In maritime communications, channel conditions are highly dependent on wave fluctuations, wind speed, humidity, etc., and accurate information on these environmental factors is essential for achieving the performance gains promised by RIS \cite{Wei}. The effect of environmental factors such as temperature variations, rain, and wind on the operation of RIS elements must also be thoroughly assessed, because improper functioning of some of the RIS elements due to the aforementioned factors can bring about serious reliability concerns. In such a dynamic ecosystem, traditional optimization techniques, e.g., alternating optimization, may be inefficient, and employing artificial intelligence (AI)-based methods can better cater to the system's needs.
\begin{table*}[t!] \centering \includegraphics[width=6.5in]{Table2.pdf} \caption{ Research challenges and open problems. } \label{Table2} \end{table*}
\subsection{Network Cooperation and Coordination}
In order to achieve a seamless integration of terrestrial and non-terrestrial networks, cooperation among the various network entities on the ground and in the air, space, and water is vital. How to ensure a satisfactory level of cooperation and coordination in such a large-scale system is a challenge that calls for robust and reliable solutions. The inclusion of RIS in different parts of the system makes coordination even more complicated and challenging, because the passive beamforming at the RIS must be optimized jointly with other variables such as resource allocation, UAV trajectories, etc. What is more, this optimization needs to be performed autonomously and in real time so as to fully reap the benefits of RIS. Consequently, resorting to conventional centralized coordination mechanisms is impractical, and novel distributed algorithms have to be devised.
\subsection{Heterogeneous Requirements in 6G-Enabled IoE}
Given the diversified requirements of heterogeneous devices in the forthcoming 6G-enabled INTENT IoE networks, versatile services with distinct quality of service (QoS) requirements have to be delivered to IoE users. Studies on RIS-aided wireless systems usually consider using the RIS for only one purpose, e.g., maximizing the spectral efficiency. Nevertheless, RIS is expected to have more capabilities than what has been investigated so far. Specifically, an RIS can be divided into sub-surfaces, with each sub-surface working towards a distinct objective. For instance, the elements of one sub-surface may work together to minimize the communication energy consumption of energy-constrained devices, while another sub-surface is dedicated to maximizing the download rate of video streaming services; at the same time, the elements of both sub-surfaces can also be involved in environmental sensing tasks.
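The element-allocation trade-off underlying such a multi-purpose design can be illustrated with a minimal Python sketch. Here the surface is split between two users with different objectives; the channels are synthetic random draws, and elements matched to the other user are (optimistically) assumed to contribute nothing rather than interference, so this is only a sketch of the trend, not a faithful system model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 128  # total number of RIS elements (illustrative assumption)

# Magnitudes of synthetic cascaded channels (BS -> RIS -> user)
# for users A and B, drawn as unit-variance Rayleigh gains.
a = np.abs(rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
b = np.abs(rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def subsurface_snrs(split):
    """First `split` elements phase-matched to user A, the rest to user B.
    Coherent combining makes each SNR scale as (sum of gains)^2; elements
    matched to the other user are simply ignored here."""
    snr_a = np.sum(a[:split]) ** 2
    snr_b = np.sum(b[split:]) ** 2
    return snr_a, snr_b

for split in (0, 32, 64, 96, 128):
    snr_a, snr_b = subsurface_snrs(split)
    print(f"elements for A: {split:3d} | "
          f"SNR_A ~ {snr_a:9.1f} | SNR_B ~ {snr_b:9.1f}")
\end{verbatim}
Even in this idealized form, the quadratic scaling of each user's SNR with its share of the elements shows why the partitioning decision should be driven by the users' heterogeneous QoS targets rather than made as an even split.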
The potential of RIS for simultaneously responding to different demands can be pivotal for the success of 6G and beyond systems; however, designing effective methods for supporting the multi-purpose operation of RIS is far from trivial.
\subsection{Power Consumption at the RIS}
The circuit power consumption of RIS is a very important design consideration in RIS-assisted systems. Though zero power consumption is usually assumed for the RIS, the elements expend non-negligible power for adjusting their reflection parameters. Consequently, the number of active RIS elements assisting network operations must be meticulously chosen to achieve a satisfactory trade-off between RIS power consumption and network performance. The decision on the optimum number of reflecting elements is highly correlated with the installation site of the RIS. For instance, if deployed in remote areas with few fixed BSs, the RIS must be frugal with its energy usage and minimize the number of active elements. On the other hand, if the RIS is installed on the walls of a building close to a BS, the elements can be powered by WPT from the BS, and the RIS can use more elements to aid the network. Likewise, a satellite-mounted RIS can rely on sustainable solar power for its operation. Optimizing the number of active RIS elements while considering the network requirements, the available energy at the RIS, the deployment location of the RIS, and the dynamics of energy harvesting is a challenging problem.
\subsection{Impairments and Practical Limitations}
Hardware impairments (HIs), modeling imperfections, and other practical limitations pose great challenges to the design and management of RIS-enhanced INTENT networks. While it is common in the research community to use oversimplified models and ignore imperfections when characterizing performance bounds, useful insights into the system performance cannot be attained without taking practical limitations into consideration. These limitations include HIs in the RF components of the transceivers, frequent link interruptions in maritime communications \cite{Wei}, mutual coupling between RIS elements \cite{Gradoni}, random shadowing and interference from terrestrial stations in satellite communications \cite{Guo}, etc.
\subsection{Real-Life Experimentation}
To date, RIS-aided systems have mainly been evaluated using numerical simulations, where researchers develop theoretical frameworks for the assessment of their proposed algorithms. It is particularly prevalent to compare the performance of RIS-aided systems with optimized phase shifts to the scenarios ``without RIS'' and ``with random RIS phase shifts''. Unsurprisingly, the proposed frameworks always outperform these benchmarks. The performance of RIS-enhanced systems in reality, however, may differ considerably from what researchers observe in their MATLAB simulations. Therefore, it is safe to say that the numerical results provided in the extensive body of research on RIS cannot be fully trusted unless they are verified through real-life experiments. Having said that, it is worth noting that carrying out testbed experiments on RIS-enhanced systems is far from straightforward, especially when it comes to non-terrestrial networks.
\section{Conclusion and Future Outlook}
RIS is an ingenious technology that will break new ground for the accomplishment of 6G-enabled INTENT IoE networks and take wireless communications to levels unthinkable today.
This article presented the architecture of the forthcoming RIS-empowered INTENT networks and explicated the direct and indirect roles of RIS in furthering the performance of these networks. We also presented the key challenges that can hinder the successful realization of the envisioned 6G-enabled RIS-empowered systems in terrestrial and non-terrestrial environments. For future investigations in this promising area, we encourage researchers to take a closer look at the challenges outlined in this article and develop robust solutions to tackle them. Table \ref{Table2} lists the research challenges along with some open problems that urgently call for effective solutions catering to the requirements of the future 6G-enabled INTENT networks.